# general
s
This message was deleted.
v
Hi John. I’ve always used `Release`; as you say, Pulumi will try to delete the resources before provisioning the new ones (or simultaneously). If you can’t afford to have any downtime whilst provisioning, I’d probably recommend provisioning a stack side by side and destroying the old one. Is that a possibility?
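(If it helps, that side-by-side cutover can even be scripted with Pulumi's Automation API rather than done by hand. A rough, untested sketch with made-up stack names:)

```typescript
import { LocalWorkspace } from "@pulumi/pulumi/automation";

// Sketch: stand up a replacement stack of the same program, then retire the old one.
async function cutOver() {
    // Bring up the new copy alongside the existing stack.
    const next = await LocalWorkspace.createOrSelectStack({
        stackName: "prod-v2", // made-up name
        workDir: ".",         // the existing Pulumi program
    });
    await next.up({ onOutput: console.log });

    // Once prod-v2 is verified (and traffic is switched), tear down the old stack.
    const old = await LocalWorkspace.selectStack({
        stackName: "prod",    // made-up name
        workDir: ".",
    });
    await old.destroy({ onOutput: console.log });
}

cutOver().catch(err => { console.error(err); process.exit(1); });
```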
w
I'm willing to have downtime 🤔 The issue, when I've played around, is that it will delete the chart and install the Helm resource simultaneously. Say I have a StatefulSet `x` in the chart `simple`. The `pulumi up` will spit out `resource simple/x was not successfully created by the Kubernetes API server : statefulsets.apps "x" already exists`. I need the delete to happen first, and that isn't implemented. I'm iffy about running `pulumi destroy --target`, though that would solve the simultaneity problem. (I'm lucky that the resource isn't in a `dependsOn` block of another resource. I had a similar issue once, and when the resource is in a `dependsOn` block of another component, selectively deleting it is difficult.)
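The least scary version of that I can think of is gating the chart behind a config flag and doing two `pulumi up`s: one with the flag off so the old release gets deleted, one with it back on to install the new one. Rough sketch; the `deployChart` flag is my own invention:

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

const config = new pulumi.Config();
// Made-up flag for a two-phase release:
//   phase 1: pulumi config set deployChart false && pulumi up   (deletes the old chart)
//   phase 2: pulumi config set deployChart true  && pulumi up   (installs the new one)
const deployChart = config.getBoolean("deployChart") ?? true;

export const chart = deployChart
    ? new k8s.helm.v3.Chart("simple", {
          path: "./simple", // assuming a local chart; use `chart` + `fetchOpts` for a remote one
      })
    : undefined;
```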
v
I’d try destroying the stack as it is, then running a fresh `up`.
c
Yeah, you need to do a two-phase release; Pulumi has no way to express "delete before upsert" dependencies between resources.
w
I was playing with the code. It's pretty easy to add a manual `isUpgrade` flag. (I have that coded up.) I don't see a way to find out whether something actually is an upgrade, i.e. whether the `k8s.helm.v3.Chart` resource is new or being updated. Any clues?
I have a way to manually set the flag on a branch: https://github.com/pulumi/pulumi-kubernetes/compare/master...jdmaguire:pulumi-kubernetes:add-is-upgrade-override?expand=1. Something like that would fix my issue. The 'best' fix would be to infer that flag in the code 😞 but I'm not sure when/where I could get the information to tell whether the `v3::Chart` Pulumi resource is being created or updated. That will be digging for another day.
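The closest I've gotten to inferring it is from outside the program: export the stack state before running `up` and check whether the chart's URN is already in it. A rough Automation API sketch; the helper name and the URN fragment are my own guesses:

```typescript
import { LocalWorkspace } from "@pulumi/pulumi/automation";

// Guess at an "is this an upgrade?" check: if the chart's URN is already in the
// stack's exported state, the next `up` will update it rather than create it.
async function chartAlreadyDeployed(stackName: string, urnFragment: string): Promise<boolean> {
    const stack = await LocalWorkspace.selectStack({ stackName, workDir: "." });
    const state = await stack.exportStack();
    const resources: any[] = state.deployment?.resources ?? [];
    return resources.some(r => typeof r.urn === "string" && r.urn.includes(urnFragment));
}

// e.g. chartAlreadyDeployed("prod", "helm.sh/v3:Chart::simple").then(console.log);
```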