# general
g
I need to protect a cluster from possible deletion, but the first pass might change the vmSize. Anyone know how to order things so it can be protected even on first pass?
e
What do you mean by first pass? Are these resources you're importing into Pulumi?
g
These are resources already existing in the stack. The config has drifted. Doing a migration to a versioned system, but can't destroy the cluster if, say, the vmSize is not what it was built with.
I'm looking at getting the existing stack and doing a manual compare at the moment. protect doesn't seem to protect anything until the next round.
e
Yeah, protect won't apply the first time round. I think it might be possible to add protect flags via a "stack export", edit the JSON, and "stack import", but I'm not sure exactly how the flag gets added to the JSON structure.
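For reference, a rough sketch of that export/edit/import round trip (the URNs and layout here are illustrative — check against your own `pulumi stack export` output before importing anything back). In Pulumi's v3 checkpoint format, resources live under `deployment.resources`, and each entry can carry a top-level `"protect": true` flag:

```python
import json

def protect_resources(state: dict, match: str) -> dict:
    """Set "protect": true on every resource whose URN contains `match`.

    `state` is the JSON produced by `pulumi stack export`; resources sit
    under deployment.resources in the v3 state layout (verify against a
    real export before relying on this).
    """
    for res in state.get("deployment", {}).get("resources", []):
        if match in res.get("urn", ""):
            res["protect"] = True
    return state

# Tiny hand-written sample standing in for a real export:
sample = {
    "version": 3,
    "deployment": {
        "resources": [
            {"urn": "urn:pulumi:dev::proj::azure:containerservice/kubernetesCluster:KubernetesCluster::aks"},
            {"urn": "urn:pulumi:dev::proj::azure:core/resourceGroup:ResourceGroup::rg"},
        ],
    },
}

protected = protect_resources(sample, "KubernetesCluster")
print(json.dumps(protected["deployment"]["resources"][0], indent=2))
```

The surrounding CLI steps would be `pulumi stack export --file state.json`, run the edit, then `pulumi stack import --file state.json`.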
g
Yeah, one can, but I don't want to do manual editing.
l
You might have to. You can either ignore the changing property or edit your code so the property has its old value.
You must do this even if you have the resource protected. The protect option is a Pulumi feature: it stops Pulumi from trying to delete something. When the code has a change that forces Pulumi to destroy and recreate a resource, and that resource is protected, Pulumi will honour the protect and abandon the up.
So you're going to need to fix the drift.
e
Yeah, this sounds like you want to keep running "pulumi preview" until it says nothing needs to change, so that your Pulumi program correctly represents what your resources are. Then you can check that program into the versioned system.
g
This isn't something I can do manually. I need to make it safe for pipeline.
I'm going to do a manual comparison in the code and only update destructive values on a flag
```csharp
// Reuse values from the existing node pool (when present) so a drifted
// property like VmSize doesn't force a destroy/recreate of the cluster.
DefaultNodePool = new KubernetesClusterDefaultNodePoolArgs
{
    Name = existingNodePool != null ? existingNodePool.Apply(value => value.Name) : Output.Create(resourcePrefix),
    NodeCount = nodeCount,
    VmSize = existingNodePool != null ? existingNodePool.Apply(value => value.VmSize) : Output.Create(vmSize),
    OsDiskSizeGb = existingNodePool != null ? existingNodePool.Apply(value => value.OsDiskSizeGb) : Output.Create(30),
    VnetSubnetId = existingNodePool != null ? existingNodePool.Apply(value => value.VnetSubnetId) : kubeSubnet.Id,
    OrchestratorVersion = nodeVersion
},
```
Ending up with something like this.
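The flag-gating idea boils down to a small decision rule, sketched here with a hypothetical helper (a sketch of the pattern, not Pulumi API): keep the existing value for replacement-forcing properties unless destructive updates are explicitly allowed.

```python
def resolve(existing, desired, allow_destructive: bool):
    """Pick the value for a property whose change would force a replace.

    If the resource already exists and destructive updates are not
    explicitly allowed, keep the existing value; otherwise use the
    desired one.
    """
    if existing is not None and not allow_destructive:
        return existing
    return desired

# e.g. vmSize has drifted: by default keep what the cluster was built with
print(resolve("Standard_D4s_v3", "Standard_D2s_v3", allow_destructive=False))  # → Standard_D4s_v3
print(resolve("Standard_D4s_v3", "Standard_D2s_v3", allow_destructive=True))   # → Standard_D2s_v3
```

In a pipeline, `allow_destructive` would come from an explicit flag (pipeline variable or stack config), so a routine run can never replace the cluster by accident.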
e
This all seems really odd... maybe I'm just totally not getting your problem, but surely if you're able to get existingNodePool you should just make DefaultNodePool equal to the values from that.
g
I just want to protect it from destruction, not any kind of update