# kubernetes
p
@gorgeous-egg-16927, I’m having an issue with `pulumi destroy` hanging forever due to a PVC not being finalized because of a pod in another stack relying on it. While there are many ways for us to fix this issue, the simplest would be to stop Pulumi from waiting for the PVC finalizers to finish. Based on this pull request (https://github.com/pulumi/pulumi-kubernetes/pull/417) and this blog post (https://www.pulumi.com/blog/improving-kubernetes-management-with-pulumis-await-logic/), it seems that the `pulumi.com/skipAwait` annotation is what we want. Since those are a year old and things have changed fast in the Pulumi world, and I couldn’t find `skipAwait` when searching the docs, I wanted to ask if this is still the recommended way to do things?
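For reference, a minimal sketch (names are invented; only the `pulumi.com/skipAwait` key comes from the linked PR and blog post) of what attaching the annotation to a PVC’s metadata looks like, expressed as a plain dict of the kind you’d pass to a resource definition:

```python
# Sketch: metadata for a PersistentVolumeClaim carrying the skipAwait
# annotation. The resource name and namespace are illustrative; the
# "pulumi.com/skipAwait" annotation key is the one discussed above.
def pvc_metadata(name: str, namespace: str = "default") -> dict:
    return {
        "name": name,
        "namespace": namespace,
        "annotations": {
            # Tells Pulumi's await logic not to wait on this resource.
            "pulumi.com/skipAwait": "true",
        },
    }

meta = pvc_metadata("data-volume")
```

Note the annotation value must be the string `"true"`, not a boolean, since Kubernetes annotations are string-valued.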
g
Yes, the `skipAwait` annotation is still the recommended solution in this case. Do you know if there’s an issue open tracking the PVC bug?
(The reason that annotation is not prominently documented is so we can find out about bugs in await logic)
p
I didn’t think of it as a bug since Pulumi is properly waiting; we just happen to have cross-stack dependencies, so there’s a reliance on stack creation/deletion order. Should I open an issue? Also, is there a recommended way to guarantee a certain order of deployment of inter-related stacks? The logical thing would be to group those resources in the same Pulumi program, where inputs/outputs would expose the dependency graph to Pulumi and it would take care of the ordering. But we’re following the project/stack-per-microservice approach, so we have a different Pulumi project per service
For example, is there some way to build a DAG from Pulumi programs based on the `StackReference`s they use?
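To make the idea concrete, here’s a hedged sketch of deriving a deployment order from `StackReference`-style edges, using Python’s standard-library `graphlib`. The stack names and the dependency map are entirely invented; in practice you’d populate the map by scanning each program for the stacks it references:

```python
from graphlib import TopologicalSorter

# Hypothetical map: each stack -> the stacks it reads via StackReference.
# These project/stack names are made up for illustration.
stack_refs = {
    "networking": set(),
    "database": {"networking"},
    "api": {"database", "networking"},
    "frontend": {"api"},
}

# TopologicalSorter takes a predecessor map, so each stack's
# dependencies appear before it in the resulting deploy order.
order = list(TopologicalSorter(stack_refs).static_order())
print(order)  # networking first, frontend last
```

`TopologicalSorter` also raises `CycleError` if two stacks reference each other, which doubles as a sanity check on the stack graph.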
g
Oh, understood. Thanks for clarifying.
There’s work in progress to make that possible, but I don’t know of a way to handle multi-stack dependencies like that currently.
p
Ok, cool. For now we just have some hard-coded logic that runs `pulumi up -s $STACK` in the correct order
Is there an issue I can watch on GitHub to be kept up to date when something like that is available?
Would be great to have the deployment order generated automatically instead of hard-coding it in a bash for-loop haha
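The hard-coded approach described above amounts to something like the following sketch (stack names invented; it formats the commands rather than shelling out, so the ordering is easy to inspect):

```python
# Sketch of the hard-coded ordering: run `pulumi up -s $STACK`
# for each stack in a fixed, manually maintained order.
# Stack names are invented for illustration.
STACK_ORDER = ["networking", "database", "api", "frontend"]

def build_commands(stacks):
    # --yes skips the interactive confirmation prompt.
    return [f"pulumi up -s {stack} --yes" for stack in stacks]

for cmd in build_commands(STACK_ORDER):
    print(cmd)  # in a real script: subprocess.run(cmd.split(), check=True)
```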
g
p
Very interesting. So would the idea be that one could create a top-level script that uses the runtime API to find all Pulumi files in our monorepo (we use a monorepo) and then figure out the topological order by looking at the `StackReference`s used?
g
Yeah, it should open up a lot of interesting possibilities like that. I don’t think that exact scenario is in progress right now, but it would be enabled by this work.
p
Awesome. Thanks for sharing that Levi!
@gorgeous-egg-16927, it seems that the `pulumi.com/skipAwait` annotation didn’t work for our deployment. Are there any other escape hatches that could be used?
Even though the PersistentVolumeClaim has a `deletionTimestamp` set to when `pulumi destroy` was run and the `pulumi.com/skipAwait` annotation set to `true`, the destroy command is still running (and will soon time out)
g
Hmm, it sounds like it might be Kubernetes waiting then, rather than Pulumi.
Well, no. You said that Pulumi is still waiting
Just to check, the annotation looks like `pulumi.com/skipAwait: "true"`?
p
Yes, here’s the resource in question:
As you can see it has a `deletionTimestamp`, but there’s a finalizer of `kubernetes.io/pvc-protection`, which is what prevents the deletion of the resource until the PVC isn’t active anymore (i.e., no pods use it as a claim)
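For readers following along, here is a hedged miniature of what such a stuck PVC looks like. The name and timestamp are made up; the annotation, `deletionTimestamp`, and `kubernetes.io/pvc-protection` finalizer are the fields discussed above:

```python
# Illustrative shape of a PVC stuck in Terminating: it carries a
# deletionTimestamp and the skipAwait annotation, but the
# pvc-protection finalizer blocks actual deletion while a pod
# still claims the volume. Name and timestamp are invented.
stuck_pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {
        "name": "data-volume",
        "annotations": {"pulumi.com/skipAwait": "true"},
        "deletionTimestamp": "2020-07-01T12:00:00Z",
        "finalizers": ["kubernetes.io/pvc-protection"],
    },
}

# Kubernetes only removes the object once the finalizer list is empty.
deletion_blocked = bool(stuck_pvc["metadata"]["finalizers"])
```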
g
Just took a look at the provider code, and it looks like the PVC delete code path isn’t handling the `skipAwait` annotation. I’ll open an issue to track that fix
p
Ok, thanks!
So I guess no other escape hatches for this then?
g
Only thing I can think of is to manually delete the PVC entry from your statefile using export/import
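A hedged sketch of that export/edit/import round-trip: you’d run `pulumi stack export > state.json`, filter the PVC out of the resource list, then `pulumi stack import --file state.json`. The miniature state below is a simplified illustration of an exported state file’s shape, not a real one; `kubernetes:core/v1:PersistentVolumeClaim` is the provider’s type token for PVCs:

```python
import json

PVC_TYPE = "kubernetes:core/v1:PersistentVolumeClaim"

def drop_pvcs(state: dict) -> dict:
    """Remove PVC resources from an exported Pulumi state dict."""
    resources = state["deployment"]["resources"]
    state["deployment"]["resources"] = [
        r for r in resources if r.get("type") != PVC_TYPE
    ]
    return state

# Simplified miniature of an exported state file (invented URNs).
sample = {
    "deployment": {
        "resources": [
            {"type": PVC_TYPE,
             "urn": "urn:pulumi:dev::app::" + PVC_TYPE + "::my-pvc"},
            {"type": "kubernetes:apps/v1:Deployment",
             "urn": "urn:pulumi:dev::app::kubernetes:apps/v1:Deployment::web"},
        ]
    }
}
cleaned = drop_pvcs(json.loads(json.dumps(sample)))  # work on a copy
```

The caveat, of course, is that Pulumi then forgets the PVC entirely, so the actual Kubernetes object still has to be cleaned up out of band.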
p
Hmm, that won’t work out for us. But I’ll figure out what I can do until support for `skipAwait` lands. Thanks for all the help Levi, I really appreciate it 🙂
l
Just wanted to follow up and mention that we merged the first PR for the Automation API Go SDK! https://github.com/pulumi/pulumi/pull/4977 It's in alpha and there will be (mostly additive) breaking changes in the coming weeks. There are complete godocs here that you can check out: https://godoc.org/github.com/pulumi/pulumi/sdk/go/x/auto

In addition, there are still a bunch of holes that we'll be plugging over the next few weeks. Here's a list of known issues that we're tracking: https://github.com/pulumi/pulumi/issues?q=is%3Aissue+is%3Aopen+label%3Aarea%2Fautomation-api

If you'd like to try it out, you'll need to build pulumi/pulumi as there are CLI changes. We'll cut a CLI release early next week that will make it easier to try all of this out. I'll be updating https://github.com/pulumi/pulumi/issues/3901 with instructions and a call for feedback early next week, but I thought I'd let y'all know in the meantime in case you're eager to kick the tires. If the feedback on the design is positive, we'll follow up with all of the supported Pulumi languages.
p
Awesome!