# general
c
Having an issue here. Pulumi reports it deleted a deployment, but I’m looking at it in GKE and it is not deleted. If I delete the pod, it is recreated.
It actually reports a lot of stuff deleted that isn’t.
It only appears to be k8s resources that it thinks it deleted but didn’t.
I’ve not experienced this previously, but whatever scenario caused this to happen, it’s certainly an issue. Because I’ve not seen it before, I don’t think I can reproduce it.
g
Does GKE provide a history of operations like other GCP resources? I wonder if you'll see the delete deployment call listed.
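If the cluster has audit logging on, something like this might surface API-server delete calls in Cloud Logging (the filter is a rough sketch from memory; the project name is a placeholder):
```
# Look for Kubernetes API delete calls recorded in the cluster's admin activity audit logs.
gcloud logging read \
  'resource.type="k8s_cluster" AND protoPayload.methodName:"delete"' \
  --project=my-project --limit=20
```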
c
It would show up in the events just like any other k8s cluster, but I don’t see a delete event
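e.g. nothing delete-related in (my-namespace being a placeholder here):
```
# Recent events in the namespace, newest last; a deployment delete would normally show up here.
kubectl get events -n my-namespace --sort-by=.lastTimestamp
```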
b
I’ve had replicasets remain after I’ve deleted deployments before
c
This is more than that. It’s the namespace that was supposedly deleted, plus the PVC, the deployment, the pods, etc.
b
do you see it in both kubectl get rs and kubectl get deploy?
c
Nothing should be remaining given that the namespace was supposed to be deleted too
To answer your question though, yes.
b
that’s weird
so you deleted it with pulumi destroy and the resources are still there
only time I’ve run into that is when I accidentally ran operations against a different cluster due to a kubeconfig misconfig
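a quick way to rule that out:
```
# Confirm which cluster kubectl (and anything else reading the same kubeconfig) is pointed at.
kubectl config current-context
kubectl cluster-info
```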
g
That's a good question... did you do a pulumi destroy to delete everything in the stack? Or did you make a code change that only deleted (or was supposed to) a subset of the resources?
c
I did a destroy
Also I confirmed it was the right cluster.
It deleted the non-Kubernetes resources.
g
How did you confirm it was the correct cluster? That was going to be my next thought, that somehow it was pointed at the wrong cluster and the deletes were "successful" because they 404'd.
Do you have an explicit k8s provider set on your resources? I wonder if Pulumi picked up a different kubeconfig from your provider perhaps.
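For reference, pinning resources to an explicit provider looks roughly like this in TypeScript (the context name is a placeholder):
```typescript
import * as k8s from "@pulumi/kubernetes";

// An explicit provider pinned to a specific kubeconfig context, so the program
// can't silently pick up whatever context the ambient kubeconfig has active.
const provider = new k8s.Provider("gke", {
    context: "gke_my-project_us-central1_my-cluster", // placeholder context name
});

// Pass the provider explicitly on each resource.
const ns = new k8s.core.v1.Namespace("app-ns", {}, { provider });
```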
c
I’ll have to double check when I’m home. I thought I looked at the config, but now I’m not sure.
Yep, I forgot the kubernetes:context. What I checked earlier was that it was the right gcp:project, but I forgot about the context.
Sorry to have wasted all your time.
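For the record, the fix was just setting the stack config, something like this (context name is a placeholder):
```
# Point the stack's default Kubernetes provider at the right kubeconfig context.
pulumi config set kubernetes:context gke_my-project_us-central1_my-cluster

# Double-check both values that mattered here.
pulumi config get gcp:project
pulumi config get kubernetes:context
```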
c
in fairness, this is super confusing.
the good news is that once you get bitten by this issue, you don’t really forget it.
c
Unfortunately not, given I’m already used to the issue. The problem is that we have a ton of projects.
So when copying from one project to another (because generation was a PITA, at least previously), we don’t always have the same config values.
Not sure if generation has gotten better.