# kubernetes
d
It sounds like it's a false positive, and has cropped up a few times: https://github.com/pulumi/pulumi/issues/4326 I've made changes to the kubeconfig a number of times without recreation. It's worth making the kubeconfig change, using `--target` for the provider URN during the update. The preview should be correct when you do an untargeted update after that.
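For reference, a minimal sketch of the setup being discussed, assuming a TypeScript program with an explicit `kubernetes.Provider` whose kubeconfig comes from stack config (the resource and config names are illustrative):

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

const config = new pulumi.Config();

// Explicit provider fed by a kubeconfig from stack config; changing the
// config value (or whatever source feeds it) is the "change in code".
const provider = new k8s.Provider("cluster", {
    kubeconfig: config.requireSecret("kubeconfig"),
});

// Resources parented to the provider keep their state; only the provider's
// inputs change when the kubeconfig is rotated.
const ns = new k8s.core.v1.Namespace("apps", {}, { provider });

// After rotating the kubeconfig, a targeted update of just the provider,
// followed by a normal untargeted update:
//   pulumi stack --show-urns             # find the provider URN
//   pulumi up --target '<provider urn>'
//   pulumi up
```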
w
So you're suggesting manually generating the new kubeconfig and updating the provider in state manually?
I was expecting the refresh to pick up the new provider config.
d
Nope, you can make the change as you normally would in code.
Refreshes use the config from state as opposed to from code, so they don't pick up changes.
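One way to see exactly what a refresh will use is to look at the provider's inputs recorded in the state snapshot. A sketch using the Node Automation API (`@pulumi/pulumi/automation`); the stack name and working directory are assumptions:

```typescript
import { LocalWorkspace } from "@pulumi/pulumi/automation";

// Print the inputs each Kubernetes provider has recorded in state -- this is
// the config a `pulumi refresh` will use, regardless of what the code now says.
async function showProviderState(): Promise<void> {
    const stack = await LocalWorkspace.selectStack({
        stackName: "dev", // assumed stack name
        workDir: ".",     // assumed project directory
    });
    const snapshot = await stack.exportStack();
    for (const res of snapshot.deployment.resources ?? []) {
        if (res.type === "pulumi:providers:kubernetes") {
            console.log(res.urn, Object.keys(res.inputs ?? {}));
        }
    }
}

showProviderState().catch(err => {
    console.error(err);
    process.exit(1);
});
```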
w
so that's going to fail...
since it won't be able to contact the k8s cluster...
I'm thinking about the scenario where the key has been compromised and cycled.
d
`pulumi up` will use the new kubeconfig if you make the change in code, so should work fine
w
So the resource won't be tainted in any way by the fact that the kubeconfig has changed? Most of the examples I've seen are based on using the underlying kubecontext, so I'm curious whether this will actually work.
d
It is, but it's more of a preview issue, with Pulumi being conservative about changes. If you're still concerned, use `pulumi up --target 'provider_urn'`, then `pulumi refresh`. Subsequent `pulumi up` should be fine.
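The same sequence, scripted with the Node Automation API for anyone driving this from code rather than the CLI; the stack name, working directory, and the way the provider URN is obtained are assumptions:

```typescript
import { LocalWorkspace } from "@pulumi/pulumi/automation";

// Targeted update of the provider, then a refresh, then a normal update --
// mirroring: pulumi up --target '<urn>'; pulumi refresh; pulumi up
async function cycleProviderConfig(providerUrn: string): Promise<void> {
    const stack = await LocalWorkspace.selectStack({
        stackName: "dev", // assumed stack name
        workDir: ".",     // assumed project directory
    });

    await stack.up({ target: [providerUrn], onOutput: console.log });
    await stack.refresh({ onOutput: console.log });
    await stack.up({ onOutput: console.log });
}

// The URN placeholder matches the one above; find the real value with
// `pulumi stack --show-urns`.
cycleProviderConfig("provider_urn").catch(err => {
    console.error(err);
    process.exit(1);
});
```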
w
I'm currently trying to replicate an AKS credentials cycle as that's the thing I'm concerned about.
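For that AKS case, one common pattern is to derive the kubeconfig from the cluster's current user credentials on every run, so a credential rotation flows into the provider at the next `pulumi up`. A sketch assuming `@pulumi/azure-native` and illustrative resource group/cluster names:

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as azure from "@pulumi/azure-native";
import * as k8s from "@pulumi/kubernetes";

// Fetch the cluster's user credentials at deploy time; after the AKS
// credentials are cycled, the next update picks up the new kubeconfig.
const creds = azure.containerservice.listManagedClusterUserCredentialsOutput({
    resourceGroupName: "my-rg",       // assumed resource group
    resourceName: "my-aks-cluster",   // assumed cluster name
});

// The credential value is base64-encoded kubeconfig YAML.
const kubeconfig = pulumi.secret(
    creds.apply(c => Buffer.from(c.kubeconfigs[0].value, "base64").toString()),
);

const provider = new k8s.Provider("aks", { kubeconfig });

// Example workload bound to the rotated provider.
const ns = new k8s.core.v1.Namespace("apps", {}, { provider });
```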
d
@worried-knife-31967 I opened a PR that would make Pulumi less conservative about replacement due to a change to the server's properties. Would this help you? https://github.com/pulumi/pulumi-kubernetes/pull/2598