limited-rainbow-51650

03/02/2020, 4:34 PM
NO! A warning about an unreachable K8s cluster should not remove the resources from the Pulumi state!
Diagnostics:
  kubernetes:apps:Deployment (ingress-gloo-qq119b7f/discovery):
     warning: configured Kubernetes cluster is unreachable: unable to load schema information from the API server: Get https://<mycluster>.azmk8s.io:443/openapi/v2?timeout=32s: net/http: TLS handshake timeout
... 15 more times
...
Resources:
    - 15 deleted
    69 unchanged
Nothing from that Helm chart deployment was actually deleted from the cluster!
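For example, a check like this (using the namespace and Deployment name from the diagnostics above; illustrative) would show the resources still running:
kubectl -n ingress-gloo-qq119b7f get deployment discovery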

gorgeous-egg-16927

03/02/2020, 4:49 PM
What would you suggest instead? It’s like this because users would frequently delete the k8s cluster in a separate stack, and then there was no way to clean up the resources that had been deployed to the cluster without manually editing state. Since the state is recoverable, we thought this was a reasonable compromise (warn prior to deleting from state).

limited-rainbow-51650

03/02/2020, 4:50 PM
I look at this as an intermittent network issue.

gorgeous-egg-16927

03/02/2020, 4:52 PM
The problem is we can’t tell the difference between a transient network partition and the cluster being deleted.
Did you run a preview prior to applying the update?

limited-rainbow-51650

03/02/2020, 4:54 PM
Yes, it was OK then. Even the preview phase of
pulumi up
was still OK.
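(pulumi up runs its own preview and asks for confirmation before applying, so the sequence was roughly:)
pulumi preview   # standalone preview: succeeded
pulumi up        # built-in preview: succeeded; the TLS timeout hit during the apply phase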

gorgeous-egg-16927

03/02/2020, 4:56 PM
Ah, I see. I didn’t consider the case where the partition occurred between preview and update.

limited-rainbow-51650

03/02/2020, 4:57 PM
> It’s like this because users would frequently delete the k8s cluster in a separate stack, and then there was no way to clean up the resources that had been deployed to the cluster without manually editing state.
This should be the exceptional case where users need to fiddle with state, not the normal behavior. I feel punished for other users’ bad practice of not removing their deployments before deleting the cluster; now I have to fiddle with state and my cluster manually.

gorgeous-egg-16927

03/02/2020, 4:57 PM
We should be able to recover your state, and I’ll open an issue to reconsider this behavior.
If you are using the Pulumi Service, you can export older versions using
pulumi stack export --version <n>
or download them from the UI (we added this capability last week).
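For example (version number and file name are illustrative; assuming --version and --file can be combined):
pulumi stack export --version 12 --file state-v12.json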

limited-rainbow-51650

03/02/2020, 5:03 PM
Can I list the versions via the CLI?
Or is the version the last part of the permalink posted after
pulumi up
?

gorgeous-egg-16927

03/02/2020, 5:08 PM
AFAIK, there’s no programmatic way to get the version currently (https://github.com/pulumi/pulumi/issues/2412#issuecomment-590125246). But yes, the version number is part of the permalink.
So you’d want N-1.
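(For reference, a Pulumi Service permalink looks roughly like this, with the update version as the last path segment; org, project, and stack are placeholders:)
https://app.pulumi.com/<org>/<project>/<stack>/updates/<version>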

limited-rainbow-51650

03/02/2020, 5:10 PM
The faulty run was 68, so I exported 67. Do I now just import that version of the state again?

gorgeous-egg-16927

03/02/2020, 5:10 PM
Yes, that’s right.
pulumi stack import --file <file>
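Putting it together for this case (the file name is illustrative):
pulumi stack export --version 67 --file state-v67.json
pulumi stack import --file state-v67.json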

limited-rainbow-51650

03/02/2020, 5:12 PM
OK, I’m going to do that at home. I’m on the train at the moment, on my mobile. That feels like doing surgery with a shaking hand. 😄
Thanks for the help!

gorgeous-egg-16927

03/02/2020, 5:13 PM
Sure thing. Let me know if you’re still having problems, and sorry for the inconvenience.

limited-rainbow-51650

03/02/2020, 5:14 PM
> and I’ll open an issue to reconsider this behavior.
Just one last thing: can you @-mention me on the GH issue you create so I can follow up?
👍 1