# general
c
Anyone have the Pulumi CLI command freeze on a `pulumi destroy` 😬?
w
No, but can you share more details? Especially things like your OS, memory, and where the command was running. CC @echoing-dinner-19531 for any other things that would be helpful to know.
e
`pulumi about` is generally enough info for things like that
c
macOS Sequoia 15.6.1. I have a multi-region project: regions are configured in the project config file. Yeah, there's the prerequisite `aws:region`, but I don't use it. Instead, I do something like this:
import pulumi
import pulumi_aws as aws

cfg = pulumi.Config()
regions = cfg.require_object("regions")

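# One explicitly configured aws.Provider per region; resources select one via ResourceOptions(provider=...)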
providers = {r: aws.Provider(f"aws-{r}-provider", region=r) for r in regions}
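For reference, the stack config that feeds `require_object("regions")` is shaped something like this (the region values here are placeholders, not my real ones):

# Pulumi.<stack>.yaml
config:
  buildamesh:regions:
    - us-east-1
    - us-east-2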
And then most of my resources are doing stuff like this:
import pulumi
import pulumi_aws as aws
from providers import providers

cfg = pulumi.Config()
prefix = cfg.require("resourcePrefix")
regions = cfg.require_object("regions")
tags = cfg.get_object("tags") or {}

# Create one bucket per region
my_bucket = {}
for r in regions:
    bucket = aws.s3.Bucket(
        f"{prefix}my-bucket-{r}",
        bucket=f"{prefix}my-bucket-{r}",
        force_destroy=True,
        tags=tags,
        opts=pulumi.ResourceOptions(provider=providers[r]),
    )
    my_bucket[r] = bucket

# Human-friendly map keyed by region (name + arn), exported once
pulumi.export("my_bucket", {
    r: {"name": my_bucket[r].bucket, "arn": my_bucket[r].arn} for r in regions
})
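Those per-region buckets then get stitched together by an S3 Multi-Region Access Point. The stitching file is roughly this shape (a sketch: the resource name, the module imports, and which provider it uses are assumed here):

import pulumi
import pulumi_aws as aws
from providers import providers
from buckets import my_bucket  # assumed module path for the region -> Bucket dict above

cfg = pulumi.Config()
regions = cfg.require_object("regions")

# One MRAP whose regions list points at each per-region bucket
mrap = aws.s3control.MultiRegionAccessPoint(
    "sourcecode-mrap",
    details=aws.s3control.MultiRegionAccessPointDetailsArgs(
        name="sourcecode-mrap",
        regions=[
            aws.s3control.MultiRegionAccessPointDetailsRegionArgs(bucket=my_bucket[r].bucket)
            for r in regions
        ],
    ),
    opts=pulumi.ResourceOptions(provider=providers[regions[0]]),
)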
What seemed to hang was either the MRAP or the global DynamoDB tables that stitch together the regional tables. So I needed to edit the regions to go from regions A and B to regions A and C. The `preview` looked good: half of the resources being torn down, and a handful of new resources created in the new region. But `pulumi up` froze. The last chunk of the message that appeared after I control-C'd out of it was like this:
++  ├─ aws:s3control:MultiRegionAccessPoint  voapps-sourcecode-mrap                     **creating failed**     [diff: ~details
 ~   ├─ aws:dynamodb:GlobalTable              voapps-stack-state-global                  updated (2s)            [diff: ~replica
 ~   ├─ aws:dynamodb:GlobalTable              voapps-release-plans-global                updated (1s)            [diff: ~replica
 ~   ├─ aws:dynamodb:GlobalTable              voapps-builds-global                       **updating failed**     [diff: ~replica
 +   ├─ aws:iam:RolePolicy                    voapps-codebuild-access-us-east-2          created (0.72s)
 +   └─ aws:lambda:Function                   voapps-release-ingestor-us-east-2          created (9s)

Diagnostics:
  aws:dynamodb:GlobalTable (my-table-global):
    error:   sdk-v2/provider2.go:572: sdk.helper_schema: updating DynamoDB Global Table (my-table): operation error DynamoDB: UpdateGlobalTable, https response error StatusCode: 400, RequestID: 7CJ4FTK8KBPGK9FQBFC65BG13NVV4KQNSO5AEMVJF66Q9ASUAAJG, GlobalTableNotFoundException: Global table not found: Global table with name: 'my-table' does not exist.: provider=aws@7.7.0
    error: 1 error occurred:
    	* updating urn:pulumi:global::buildamesh::aws:dynamodb/globalTable:GlobalTable::my-table-global: 1 error occurred:
    	* updating DynamoDB Global Table (my-table): operation error DynamoDB: UpdateGlobalTable, https response error StatusCode: 400, RequestID: 7CJ4FTK8KBPGK9FQBFC65BG13NVV4KQNSO5AEMVJF66Q9ASUAAJG, GlobalTableNotFoundException: Global table not found: Global table with name: 'my-table' does not exist.
Note that I have regional DDB tables like `my-table`, and then a ddb_table_global file that stitches those together, roughly like the sketch below.
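This is just the shape of it; the module the regional tables come from and the exact options are assumed:

import pulumi
import pulumi_aws as aws
from providers import providers
from ddb_tables import regional_tables  # assumed: dict of region -> aws.dynamodb.Table

cfg = pulumi.Config()
regions = cfg.require_object("regions")

# One global table whose replicas point at the per-region tables (which all share the table name)
global_table = aws.dynamodb.GlobalTable(
    "my-table-global",
    name="my-table",
    replicas=[aws.dynamodb.GlobalTableReplicaArgs(region_name=r) for r in regions],
    opts=pulumi.ResourceOptions(
        provider=providers[regions[0]],
        depends_on=[regional_tables[r] for r in regions],
    ),
)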
I was able to run `pulumi destroy` and then `pulumi up`, and everything seems back on track, but of course stuff in the buckets and ECR repos was lost (that's fine, as I'm testing). Hope that helps!
e
That looks reasonable. My guess would be either an issue in the engine around failed updates, or maybe the AWS provider itself getting stuck on a failure and not replying to the engine. If it happens again, can you see if it's reproducible when you re-run `pulumi up`, and if so, run with debug logs for us (`--logtostderr --logflow -v=10`)?
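Something like this, redirecting stderr to a file since the debug logs get very large (the filename is arbitrary):

pulumi up --logtostderr --logflow -v=10 2> pulumi-debug.log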