# getting-started
Let's say I can't run Pulumi against all target clusters at the same time. Instead of building a multi-provider plan, I need to change the kubeconfig in a new run on a new agent (not using the operator). Will Pulumi be "additive" if the logical URNs are unique, simply adding the new resources to its state tracking, or is it like Terraform, where anything in state that isn't in the current plan is considered a destroy/remove? I'm dealing with some credentials I don't control that might force me to stop looping over clusters and instead run each one independently.
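To make the "one kubeconfig per run" scenario concrete, here's a minimal sketch of what such a run might look like, assuming the agent supplies a single cluster's credentials via stack config (the config key names and namespace resource are illustrative, not my actual program):

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

// Hypothetical single-cluster run: this agent only sees one kubeconfig.
const cfg = new pulumi.Config();
const clusterName = cfg.require("clusterName");
const kubeconfig = cfg.requireSecret("kubeconfig");

// Explicit provider built from the kubeconfig available on this agent.
const provider = new k8s.Provider(`${clusterName}-provider`, { kubeconfig });

// Resource names include the cluster name, so logical URNs stay unique
// per cluster within the same stack. The open question above is whether
// resources registered for the *other* clusters in previous runs, which
// are absent from this program, would be planned as deletes.
new k8s.core.v1.Namespace(`${clusterName}-app-ns`, {
    metadata: { name: "app" },
}, { provider });
```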
Right now I embed the kubeconfig and cluster list as secret config values and loop through them. Now I'm facing the issue of possibly having only one available at a time, provided by a service connection in Azure DevOps (which may generate the kubeconfig). I was hoping to avoid a new stack for each new cluster that's supposed to be part of the same deployment. I might just have to push for the credentials to be accessible outside the service connection, e.g. as a library variable group, but I wanted to rule out other options first since that will take more effort to get done.
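For context, a rough sketch of the current setup, assuming the cluster list and embedded kubeconfigs live in stack config as a single secret object (the config key and structure are illustrative):

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

// Hypothetical config shape: a list of clusters, each with an embedded kubeconfig.
interface ClusterEntry {
    name: string;
    kubeconfig: string;
}

const cfg = new pulumi.Config();
const clusters = cfg.requireObject<ClusterEntry[]>("clusters");

for (const cluster of clusters) {
    // One explicit provider per cluster, keyed off the embedded kubeconfig.
    const provider = new k8s.Provider(`${cluster.name}-provider`, {
        kubeconfig: pulumi.secret(cluster.kubeconfig),
    });

    // Per-cluster resources carry the cluster name so URNs don't collide.
    new k8s.core.v1.Namespace(`${cluster.name}-app-ns`, {
        metadata: { name: "app" },
    }, { provider });
}
```

This is the loop I'd have to break apart if only one cluster's credentials are available per pipeline run.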