#general

salmon-musician-36333

04/25/2023, 8:20 PM
I'm getting a failure converting to a multi-AZ RDS cluster:
+-  β”œβ”€ aws:rds:Cluster                postgres   replace     [diff: ~availabilityZones,dbClusterInstanceClass]
Error:
error: 1 error occurred:
    	* creating RDS Cluster (...): DBClusterAlreadyExistsFault: DB Cluster already exists
    	status code: 400, request id: ...
I'm still testing the deployment, so I'm going to bring the whole thing down and go from there, just wondering if this is expected.
`deletionProtection` is disabled.

billowy-army-68599

04/25/2023, 8:20 PM
did you set an explicit name for the db?

salmon-musician-36333

04/25/2023, 8:21 PM
`clusterIdentifier`? If so, yes.

billowy-army-68599

04/25/2023, 8:22 PM
set `deleteBeforeReplace` before changing the properties
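A minimal sketch of that suggestion (resource names and properties here are illustrative, not from the thread): with an explicit `clusterIdentifier`, Pulumi's default create-before-delete replacement collides with the existing cluster, so the `deleteBeforeReplace` resource option tells it to tear down the old cluster first.

```typescript
import * as aws from "@pulumi/aws";

const cluster = new aws.rds.Cluster(
  "postgres",
  {
    // Explicit physical name: a replacement would try to create a second
    // cluster with this same identifier and fail with DBClusterAlreadyExistsFault...
    clusterIdentifier: "my-postgres",
    engine: "aurora-postgresql",
  },
  {
    // ...so delete the old cluster before creating its replacement.
    deleteBeforeReplace: true,
  },
);
```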

salmon-musician-36333

04/25/2023, 8:22 PM
Right on, thanks a bunch πŸ™‚
In general, is the recommended approach to not set explicit names except where absolutely necessary, in order to let Pulumi bring stuff up while the previous incarnation still exists?

billowy-army-68599

04/25/2023, 8:39 PM
yes, ideally
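A sketch of the auto-naming approach being recommended (property values here are illustrative): when `clusterIdentifier` is omitted, Pulumi derives a unique physical name from the logical name, so a replacement can create the new cluster alongside the old one before deleting it.

```typescript
import * as aws from "@pulumi/aws";

// No clusterIdentifier: Pulumi auto-names the cluster something like
// "postgres-1a2b3c4", so a replacement can stand up the new cluster
// next to the old one instead of colliding on a fixed name.
const cluster = new aws.rds.Cluster("postgres", {
  engine: "aurora-postgresql",
});
```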

salmon-musician-36333

04/25/2023, 8:39 PM
I've had a few cases where it was to resolve a circular dependency, e.g. having to specify an ARN for something else before the resource was up, although that's usually policies and I could attach the policy after the dependencies are created. Otherwise, it's for interacting with things outside of Pulumi which expect the resources to have certain names, which is a harder problem to fix, but I could potentially audit what those are and dump the rest.
Tbh for the RDS situation, I'll eventually look into whether I can have Pulumi migrate data from one to another while they are both up, if there are hooks to do such a thing.
On a related note, I'm running into this now with `finalSnapshotIdentifier`. I would generate a tag based on the date, but then it's going to end up modifying/replacing the cluster every run. Is there a way to ask Pulumi for a unique (hopefully prefixed) identifier that will be persisted in state?
This is what I'm referring to:
error: deleting urn:pulumi:...::...::aws:rds/cluster:Cluster::postgres: 1 error occurred:
    	* deleting RDS Cluster (...): DBClusterSnapshotAlreadyExistsFault: Cannot create the cluster snapshot because one with the identifier ... already exists.

billowy-army-68599

04/25/2023, 9:45 PM
yeah, use `pulumi-random` to generate a random id πŸ™‚

salmon-musician-36333

04/25/2023, 11:16 PM
@billowy-army-68599 Nice! I've set that up with `finalSnapshotIdentifier` like so:
finalSnapshotIdentifier: postgresRetainFinalSnapshot ? postgresFinalSnapshotNameRandom.hex : undefined,
Hopefully that looks reasonable. I feel like `keepers` might be relevant, but it says to look at the `random.Provider` docs, which don't explain too much πŸ˜‰
Now I'm getting a dependency issue again bringing down a security group, but this time the explicit `dependsOn` is there, which is troubling:
error: deleting urn:pulumi:...::...::aws:rds/subnetGroup:SubnetGroup::...: 1 error occurred:
    	* deleting RDS Subnet Group (...): InvalidDBSubnetGroupStateFault: Cannot delete the subnet group '...' because at least one database cluster: ... is still using it.
But the dependency is there for the cluster:
{ dependsOn: [postgresSg], deleteBeforeReplace: true },
Know what might be up?

gentle-daybreak-46874

04/26/2023, 10:41 PM
Think that’s the opposite? The subnet group depends on the cluster too, at least you cannot delete it until the cluster is gone.

salmon-musician-36333

04/27/2023, 1:53 AM
@gentle-daybreak-46874 If you can't delete the SG until the cluster is gone, then the cluster depends on the SG.
`dependsOn: [x, y]` means the resource needs to come up after `x` and `y`, and be brought down before `x` and `y`.
@billowy-army-68599 Does the usage of `pulumi-random` above seem legit?
I'm thinking it will be treated like any other dependent resource and will only come down when the cluster goes down, which is basically what a final snapshot should have.
But maybe it needs to be cross-linked with `keepers: {x: somethingFromCluster.apply(x => ...)}`.
Since it's higher in the dependency hierarchy, that may work fine when I'm bringing the whole deployment down, but I'm not convinced it's enough without `keepers` for a recreate.
On the other hand, this looks a little bit fishy to me πŸ™‚
const postgresSgName = `${deployTag}-postgres-sg`;
const postgresClusterIdentifier = `${deployTag}-postgres`;
let postgresFinalSnapshotNameRandomKeepers: Record<string, string> = {};
const postgresFinalSnapshotNameRandom = new random.RandomId('postgres-final-snapshot-random', {
  prefix: `${deployTag}-postgres-final-snapshot-`,
  byteLength: 4,
  keepers: postgresFinalSnapshotNameRandomKeepers,
});
const postgresCluster = new aws.rds.Cluster(...);
postgresCluster.id.apply((id) => (postgresFinalSnapshotNameRandomKeepers['clusterId'] = id));
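One hedged alternative to the snippet above (assuming the goal is to rotate the snapshot name whenever the cluster is replaced): `keepers` are captured when the `RandomId` is constructed, so mutating the keepers object later inside `apply` has no effect, and wiring the cluster's own output back in would be circular, since the cluster consumes the random id. Keying the keepers off the statically known `clusterIdentifier` string avoids both problems:

```typescript
import * as aws from "@pulumi/aws";
import * as random from "@pulumi/random";

const deployTag = "dev"; // illustrative
const postgresClusterIdentifier = `${deployTag}-postgres`;

// Keepers use a plain value known before the cluster exists; changing the
// cluster identifier regenerates the random suffix, giving the replacement
// cluster a fresh final-snapshot name.
const postgresFinalSnapshotNameRandom = new random.RandomId("postgres-final-snapshot-random", {
  prefix: `${deployTag}-postgres-final-snapshot-`,
  byteLength: 4,
  keepers: { clusterIdentifier: postgresClusterIdentifier },
});

const postgresCluster = new aws.rds.Cluster("postgres", {
  clusterIdentifier: postgresClusterIdentifier,
  engine: "aurora-postgresql",
  finalSnapshotIdentifier: postgresFinalSnapshotNameRandom.hex,
});
```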