# kubernetes
l
is there, uh, any way to undo this?
b
are you sure you're using the right kubeconfig?
if you ran refresh while connected to a different cluster i'd expect it to try to recreate things
s
you can restore state from a backup in case you really did mess it up
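(A minimal sketch of the manual snapshot/restore flow — `pulumi stack export` and `pulumi stack import` are stock CLI commands; the filename is just a placeholder:)
```
# take a point-in-time snapshot of the current stack state
pulumi stack export --file dev-backup.json

# ...and later restore that snapshot if an update goes wrong
pulumi stack import --file dev-backup.json
```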
l
Well I’ve set the cluster name in my config, but it is possible it was using a local kubeconfig that doesn’t exist any more
will that be tracked as part of the provider?
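(For reference: the default Kubernetes provider can be pinned via stack config instead of relying on whatever ambient kubeconfig is on the machine — `kubernetes:context` and `kubernetes:kubeconfig` are the provider's config keys; the context name and path below are placeholders:)
```
# pin the default provider to a named kubecontext
pulumi config set kubernetes:context my-cluster-context

# ...or point it at an explicit kubeconfig file
pulumi config set kubernetes:kubeconfig ~/.kube/my-cluster.yaml
```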
@steep-toddler-94095 is that something I can do directly via the `pulumi` CLI?
The state bucket has no snapshots atm
One of my concerns about moving some of our more critical apps to Pulumi is the state tracking and getting into a scenario where we have to re-import state and such. We’ve always used Helm, which just did a 3-way merge with state stored inside a K8s secret, and held the history.
This is only a small app and I could happily nuke the entire thing and reprovision it, but I couldn’t do that with our stateful production apps
This might highlight the need for state snapshots earlier than I’d expected, and I’d even be open to trying the Pulumi service (I’ve just been holding off while we get to grips with Pulumi as a tool), but I wonder if there’s a story for recovering this state. I’m checking out `pulumi import` now, but it looks like it could be a bit of an involved process
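(For context, `pulumi import` adopts live resources into state one at a time, so recovering a whole app this way means one command per resource. A sketch for a single Deployment — the type token follows the provider's `kubernetes:<group/version>:<Kind>` scheme, and the resource name and `namespace/name` ID below are placeholders:)
```
# adopt an existing Deployment into the stack's state
pulumi import kubernetes:apps/v1:Deployment app-deploy default/app-deploy
```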
s
are you using S3 for state? if so, there should be a backup statefile with the most recent previous state next to your statefile called `<stackname>.json.bak`. to restore from that backup you would just do an `aws s3 cp` command, removing the `.bak` extension
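(Roughly like this, assuming the default backend layout under `.pulumi/stacks/` — the bucket and stack names are placeholders, and newer Pulumi versions nest the statefile under a project directory:)
```
# copy the automatic .bak over the live statefile to roll back
aws s3 cp \
  s3://my-pulumi-state/.pulumi/stacks/dev.json.bak \
  s3://my-pulumi-state/.pulumi/stacks/dev.json
```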
agree it would be nice if pulumi did something like the 3-way merge
👍 1
l
It’s in a GCS bucket and I totally forgot about the history/backups
Is this a totally manual process or something that can be done via the pulumi CLI?
@steep-toddler-94095 Thanks for that reminder. I've restored an older version from the `backups` directory (since the `.bak` was also b0rked) and it looks like this should work. Going to deploy shortly 👍
👍 1
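(The equivalent GCS restore might look like this — the bucket name and timestamp are placeholders; the bucket-style backends keep timestamped copies under `.pulumi/backups/<stack>/`:)
```
# list the timestamped snapshots kept for the stack
gsutil ls gs://my-pulumi-state/.pulumi/backups/dev/

# copy a known-good snapshot back over the live statefile
gsutil cp \
  gs://my-pulumi-state/.pulumi/backups/dev/dev.1650000000000000000.json \
  gs://my-pulumi-state/.pulumi/stacks/dev.json
```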