# general
w
EKS, GKE, AKS? What diff did it report?
h
GKE
no diff
Now everything is pretty hosed because nothing in Pulumi believes it needs to be reinstalled
so I think I need to clear the state and just rebuild from 0
what's the right way to convince a Pulumi stack to forget about itself completely so I can go clean it up on the GCP side
also, this is kinda nuts to have to do in prod w/ a system that is supposed to prevent this! 😉
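For reference, one common way to make a stack forget its resources without deleting them is sketched below. This is an assumption-laden recipe, not something from the conversation: it assumes a recent-enough Pulumi CLI and takes the stack name `infrastructure` from the preview output later in the thread.

```shell
# Sketch: make a Pulumi stack forget its resources so they can be
# cleaned up manually on the GCP side. Assumes you are logged in to
# the right backend; the stack name is an assumption from this thread.

# 1. Back up the current state first, in case anything needs recovering.
pulumi stack export --stack infrastructure > stack-backup.json

# 2. Remove the stack WITHOUT destroying the cloud resources it tracks.
#    --force skips the "stack still has resources" check, so the GCP
#    resources are left in place for manual cleanup in the console.
pulumi stack rm infrastructure --force

# 3. Recreate an empty stack and rebuild from zero.
pulumi stack init infrastructure
pulumi update
```

(`pulumi destroy` would instead delete the cloud resources themselves; `stack rm --force` only discards Pulumi's record of them.)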
w
Several parts of what you are describing should not have happened:
1. You should not have seen a replace proposed if you did not make changes.
2. If a replace was proposed, you should have seen a diff describing what had changed.
3. If a replace was proposed, you should have had the chance to review that diff before accepting.
4. If you accepted, the replace should have succeeded.
h
it wasn't a replace proposed
it said 'update'
and then it went ahead and replaced
```
Previewing update (getmargin/infrastructure):

     Type                             Name                            Plan    Info
     pulumi:pulumi:Stack              infrastructure-infrastructure
 ~   ├─ gcp:container:Cluster         kube                            update
 ~   ├─ pulumi:providers:kubernetes   kube                            update  [diff: ~kubeconfig]
 -   ├─ gcp:dns:RecordSet             compoundco-cv-cname             delete
 -   └─ gcp:dns:RecordSet             getmargin-cv-cname              delete
```
w
> it said 'update'
Oh - wow - are you sure? That would be extremely bad and my understanding is we have several deep safeguards in place to ensure that cannot happen. If you have any logs of what happened to get there - we would love to see them so we can investigate.
h
I pasted you the output
not sure what logs you need
w
Do you have the logs from the replace when it happened?
Logs = output from `pulumi update`.
h
Logs look normal
it was just slow
got concerned
site went down
checked cloud console and it said the cluster was being deleted
it created a new cluster
but now that new cluster is pretty f'd
w
> Logs look normal
Specifically - did your `pulumi update` ever say the word `replace` on the cluster, or did it also say `update` but then behind the scenes your cluster was getting deleted?
h
yup
w
Also - what versions of `@pulumi/gcp` and the `pulumi` CLI do you have?
h
old
0.16
one sec, lemme fix prod real quick the long way
w
That was fixed 3 months ago.
Feel free to DM me if you need any help recovering from this and getting the stack back into a working state.
h
What's the easiest way to downgrade the Pulumi CLI to a specific version?
w
CLI or packages?
h
CLI
w
```
curl -fsSL https://get.pulumi.com/ | bash -s -- --version 0.17.26
```