chilly-hairdresser-5625905/08/2020, 9:12 PM
hundreds-lizard-1418205/08/2020, 9:18 PM
chilly-hairdresser-5625905/08/2020, 9:22 PM
hundreds-lizard-1418205/08/2020, 9:28 PM
, you should set the region to
to cleanly tear down what you had (partially) deployed there.
Then you can re-deploy into another region by creating a new stack targeting the new region.
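A rough sketch of that tear-down-then-redeploy flow with the Pulumi CLI (the stack name and both region values below are made-up placeholders, not from this thread):

```shell
# Point the existing stack back at the old region and destroy what was deployed there
pulumi config set aws:region us-east-1   # placeholder: the old region
pulumi destroy

# Create a fresh stack targeting the new region and deploy into it
pulumi stack init my-app-west            # hypothetical stack name
pulumi config set aws:region us-west-2   # placeholder: the new region
pulumi up
```

These commands need the Pulumi CLI, a backend, and cloud credentials, so treat this as an outline rather than something to paste verbatim.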
Note that you can also create an instance of
targeting a specific region, so you can have a stack with resources deployed to multiple regions. You could alternatively use that to create the new resources in the new region while allowing the old resources to still access the old region (and thus be destroyed cleanly).
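A minimal sketch of the explicit-provider approach, assuming a TypeScript Pulumi program with `@pulumi/aws` installed (the resource and region names are illustrative, not from this thread):

```typescript
import * as aws from "@pulumi/aws";

// Explicit provider pinned to a specific region (region value is a placeholder)
const westProvider = new aws.Provider("west", { region: "us-west-2" });

// This bucket uses the default provider, i.e. the stack's configured region
const oldBucket = new aws.s3.Bucket("old-bucket");

// This bucket is created in us-west-2 by passing the explicit provider option
const newBucket = new aws.s3.Bucket("new-bucket", {}, { provider: westProvider });
```

Because each resource can be bound to its own provider, a single stack can manage resources in both the old and new regions during a migration.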
As for failing faster when you get into this situation, we have https://github.com/pulumi/pulumi-aws/issues/879 tracking some improvements we hope to make here.
is region specific. You could also remove the old bucket from your Pulumi state using `pulumi state delete <urn>` and rerun, which would then create a new bucket for you.
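The state surgery described above can be sketched as follows (`<urn>` stays a placeholder; you substitute the URN printed for your bucket):

```shell
# List the URNs of the resources in the current stack
pulumi stack --show-urns

# Remove the old bucket from Pulumi's state without touching the cloud resource
pulumi state delete <urn>

# Re-run the deployment; with no bucket in state, Pulumi creates a new one
pulumi up
```

Note that `pulumi state delete` only forgets the resource; the old bucket still exists in the old region and would need to be cleaned up separately.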
hundreds-lizard-1418205/11/2020, 3:26 PM
breezy-hamburger-6961905/11/2020, 7:02 PM
— I’ve personally hit this in the AWS console a handful of times as well in that region, whereas
have better success
hundreds-lizard-1418205/12/2020, 1:55 PM