sparse-intern-71089
01/30/2024, 2:00 AM

clever-sunset-76585
01/30/2024, 6:49 AM

gray-airplane-38353
01/30/2024, 5:25 PM
I ran pulumi up --target ... (specifying a DIFFERENT ALB), but I had neglected to properly quote the string. That's when I got errors related to that up command.
Now I get the error on almost every run.
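(A note on the quoting issue: URNs contain a literal $, so an unquoted or double-quoted --target argument lets the shell try to expand $aws as a variable. Single quotes avoid that; a sketch using the ALB URN that appears later in this thread:)
pulumi up --target 'urn:pulumi:stack::infra::aws:lb:ApplicationLoadBalancer$aws:lb/loadBalancer:LoadBalancer::stack-internal'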
clever-sunset-76585
01/31/2024, 3:34 AM
Could you share the output from that failed up --target run? You can get it from Pulumi Cloud by looking at the Updates tab and finding that exact failed update. (Of course, please feel free to remove anything you think might be sensitive, such as internal codenames/project names, etc., if any.) You may DM it to me as well if you'd like.
Also, did you by any chance set the deleteBeforeReplace resource option on the load balancer resource?
gray-airplane-38353
02/06/2024, 5:03 PM
deleteBeforeReplace.

elegant-arm-61306
02/07/2024, 4:26 PM

elegant-arm-61306
02/07/2024, 4:44 PM
* Ran pulumi stack export --show-secrets --file stack.json in order to study the stack file, and noticed that searching for "urn": "urn:pulumi:stack::infra::aws:lb:ApplicationLoadBalancer$aws:lb/loadBalancer:LoadBalancer::stack-internal" was returning nothing, even though the resource existed in AWS.
* Added the following snippet as indicated below to the stack.json (snippet generated from a similar resource):
...
{
    "urn": "urn:pulumi:stack::infra::aws:lb:ApplicationLoadBalancer$aws:lb/loadBalancer:LoadBalancer::stack-internal",
    "custom": true,
    "id": "arn:aws:elasticloadbalancing:us-west-2:************:loadbalancer/app/stack-internal/****************",
    "type": "aws:lb/loadBalancer:LoadBalancer",
    "parent": "urn:pulumi:stack::infra::aws:lb:ApplicationLoadBalancer::stack-internal",
    "dependencies": [
        "urn:pulumi:stack::infra::aws:s3/bucketPolicy:BucketPolicy::stack-internal-accessLogsPolicy",
        "urn:pulumi:stack::infra::awsx:x:ec2:SecurityGroup$aws:ec2/securityGroup:SecurityGroup::stack-internal"
    ],
    "provider": "urn:pulumi:stack::infra::pulumi:providers:aws::default_5_43_0::9009540a-405a-44d9-b0c1-c61d3996fcd2",
    "propertyDependencies": {
        "accessLogs": [
            "urn:pulumi:stack::infra::aws:s3/bucketPolicy:BucketPolicy::stack-internal-accessLogsPolicy"
        ],
        "securityGroups": [
            "urn:pulumi:stack::infra::awsx:x:ec2:SecurityGroup$aws:ec2/securityGroup:SecurityGroup::stack-internal"
        ]
    }
},
...
* Tested the edited file on a local state backend first; this involves importing the exported file there. More info: https://www.pulumi.com/docs/concepts/state/#migrating-between-state-backends.
* Deleted the old stack on the Pulumi Cloud backend after the successful test on the local backend:
pulumi stack rm -f stack
* Initialized a new stack and added its configuration:
pulumi stack init stack
pulumi config set --secret pagerduty:token ************
pulumi config set aws:region us-west-2
pulumi config
* Imported the edited export file.
pulumi stack import --file stage.stack.json
* Refreshed the state to get all details of the missing resource.
pulumi refresh
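(A rough sketch of how the search and the local dry run above might look, assuming jq is available and the usual layout of a pulumi stack export file; the stack and file names mirror the ones used in this thread:)
jq '.deployment.resources[] | select(.urn == "urn:pulumi:stack::infra::aws:lb:ApplicationLoadBalancer$aws:lb/loadBalancer:LoadBalancer::stack-internal")' stack.json
pulumi login --local
pulumi stack init stack
pulumi stack import --file stack.json
pulumi preview
pulumi logout && pulumi login    # return to the Pulumi Cloud backend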
gray-airplane-38353
02/08/2024, 1:11 AM
# pulumi stack export --show-secrets --file stage.stack.json
I manually added the missing ALB resource into the JSON file, using one of the other ALBs as a template and substituting the correct values.
Then (and this was the scary bit), removed the stack:
# pulumi stack rm -f stage1
Then re-initialized a new one:
# pulumi stack init stage1
# pulumi config set --secret pagerduty:token ************
# pulumi config set aws:region us-west-2
# pulumi config
And, finally, imported the JSON:
# pulumi stack import --file stage.stack.json
# pulumi refresh
This at least got past the error.
But now, when I re-run my code (pulumi up), there are several resources that it wants to delete and/or otherwise modify.
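(A generic way, not from the thread, to see exactly which resources and URNs an up would touch before confirming anything:)
pulumi preview --diff
pulumi stack --show-urns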
clever-sunset-76585
02/15/2024, 1:47 AM
> But now, when I re-run my code (pulumi up) there are several resources that it wants to delete and/or otherwise modify.
That's probably because, and I am assuming based on the snippet of commands you pasted above, you initialized a stack with a different name than the previous one. So the resource URNs are completely different. I am actually not sure how you were even able to import the checkpoint from a stack named stage into a stack named stage1. I would have thought that the CLI would complain that the stack names don't match. In any case, if you are still having a problem, you might try renaming your stack with pulumi stack rename ..., or force remove the stack and recreate a new stack yet again with the original name and re-import the state that you saved.
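(A sketch of the rename suggestion, assuming the stage/stage1 names discussed above; renaming a stack should also rewrite the stack portion of the URNs stored in its state, which is what would line them back up with the program:)
pulumi stack select stage1
pulumi stack rename stage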