# general
Hey Pulumi folks, I'm running into an interesting problem. I've got a k8s cluster that I'm deploying some things to with Pulumi, and the pod has a volume claim for some block storage, provided by Rook/Ceph. The storage is ReadWriteOnce, which is a Rook limitation. Right now, when I update the stack, I'm seeing a new ReplicaSet created, which I believe is Pulumi-specific behavior--simply editing the deployment would not do this, IIRC. The end result is that the new pod can't start, because it can't get its volume mount--the existing pod isn't torn down first, so it keeps holding the volume. What's the right solution to this? Is there a way for me to tell Pulumi it's OK to tear down the existing pod/ReplicaSet first?
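For reference, the relevant shape of my program is roughly this (a minimal TypeScript sketch; the storage class name, image, and resource names here are placeholders, not my actual config):

```typescript
import * as k8s from "@pulumi/kubernetes";

// PersistentVolumeClaim backed by a Rook/Ceph block StorageClass.
// ReadWriteOnce means only one pod/node can hold the volume at a time.
const claim = new k8s.core.v1.PersistentVolumeClaim("data-claim", {
    spec: {
        accessModes: ["ReadWriteOnce"],
        storageClassName: "rook-ceph-block",
        resources: { requests: { storage: "10Gi" } },
    },
});

// Deployment whose single pod mounts the claim. When the pod template
// changes, a new pod is brought up before the old one goes away, but the
// new pod can't mount the RWO volume while the old pod still holds it.
const app = new k8s.apps.v1.Deployment("app", {
    spec: {
        replicas: 1,
        selector: { matchLabels: { app: "app" } },
        template: {
            metadata: { labels: { app: "app" } },
            spec: {
                containers: [{
                    name: "app",
                    image: "nginx:1.15",
                    volumeMounts: [{ name: "data", mountPath: "/data" }],
                }],
                volumes: [{
                    name: "data",
                    persistentVolumeClaim: { claimName: claim.metadata.name },
                }],
            },
        },
    },
});
```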
By default, when a resource needs to be replaced, Pulumi creates the replacement before deleting the previous version to try to ensure no downtime during the deployment. In some cases (like what I understand of your case), though, there is a scarce resource and the delete must happen before the recreate. Just yesterday we merged support for opting in to these “delete before replace” semantics for a specific resource. See https://github.com/pulumi/pulumi/pull/2415. We have not yet rolled this out to all providers though - that will likely happen early next week. In the meantime, you may need to go behind the scenes and delete the resource directly from Kubernetes, then use
pulumi state delete
to manually remove it from the Pulumi state file: https://pulumi.io/reference/cli/pulumi_state_delete.html
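Once that opt-in reaches the Kubernetes provider, using it should look roughly like this (a hedged sketch: `deleteBeforeReplace` is the resource option added by that PR, and the deployment spec shown is just a placeholder):

```typescript
import * as k8s from "@pulumi/kubernetes";

// deleteBeforeReplace asks the engine to delete the old resource before
// creating its replacement, so the outgoing pod releases the
// ReadWriteOnce volume first. Expect a brief outage during the swap.
const app = new k8s.apps.v1.Deployment("app", {
    spec: {
        replicas: 1,
        selector: { matchLabels: { app: "app" } },
        template: {
            metadata: { labels: { app: "app" } },
            spec: { containers: [{ name: "app", image: "nginx:1.15" }] },
        },
    },
}, { deleteBeforeReplace: true }); // opt this resource into delete-before-replace
```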
Cool, thanks! My state file is probably all sorts of jacked up right now because it took me too long to clear the jam, so the
pulumi up
failed. Is there a way to "refresh" the state file to match the cluster?
Ah, I found
pulumi refresh
.