# kubernetes

worried-knife-31967

09/13/2023, 5:50 PM
I'm having a bit of an issue in my head with the k8s provider. If I create the k8s cluster AND the Helm resources in the same Pulumi stack, it works fine. However, if I somehow need to change the kubeconfig that I've built from the cluster output (not sure if this is even possible), all the Helm resources result in a replace. As I understand it, the kubeconfig includes the certificate data. So if that has to change, the provider is new, and therefore all the Helm releases redeploy. That feels... wrong?
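Roughly the setup being described, as a minimal sketch (the resource names are illustrative, and the kubeconfig is stood in for with a declaration since how it's assembled from cluster outputs varies):

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

// Illustrative: a kubeconfig assembled from the cluster's outputs.
// It embeds the cluster endpoint and certificate data, so rotating
// the cluster's credentials changes this value.
declare const kubeconfig: pulumi.Output<string>;

// Explicit provider built from that kubeconfig.
const provider = new k8s.Provider("k8s", { kubeconfig });

// Helm release bound to the provider; if the provider is replaced,
// the release is replaced along with it.
const release = new k8s.helm.v3.Release("app", {
    chart: "nginx",
    repositoryOpts: { repo: "https://charts.bitnami.com/bitnami" },
}, { provider });
```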

dry-keyboard-94795

09/13/2023, 7:12 PM
It sounds like it's a false positive, and it has cropped up a few times: https://github.com/pulumi/pulumi/issues/4326. I've made changes to the kubeconfig a number of times without recreation. It's worth making the kubeconfig change and using `--target` for the provider URN during the update. The preview should be correct when you do an untargeted update after that.

worried-knife-31967

09/13/2023, 7:14 PM
So you're suggesting manually generating the new kubeconfig and updating the provider in state by hand?
I was expecting the refresh to pick up the new provider config.

dry-keyboard-94795

09/13/2023, 7:14 PM
Nope, you can make the change as you normally would in code.
Refreshes use the config from state as opposed to from code, so they don't pick up changes.

worried-knife-31967

09/13/2023, 7:15 PM
So that's going to fail...
since it won't be able to contact the k8s cluster...
I'm thinking about the scenario where a key has been compromised and cycled.

dry-keyboard-94795

09/13/2023, 7:16 PM
`pulumi up` will use the new kubeconfig if you make the change in code, so it should work fine.

worried-knife-31967

09/14/2023, 9:04 AM
So the resource won't be tainted in any way by the fact that the kubeconfig has changed? Most of the examples I've seen are based on using the underlying kube context, so I'm curious whether this will actually work.

dry-keyboard-94795

09/14/2023, 9:20 AM
It is, but it's more of a preview issue, with Pulumi being conservative about changes. If you're still concerned, use `pulumi up --target 'provider_urn'`, then `pulumi refresh`. A subsequent `pulumi up` should be fine.
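Spelled out as a sequence of commands, that recovery flow looks like the sketch below. The URN is made up for illustration; real URNs for a stack can be listed with `pulumi stack --show-urns`.

```shell
# 1. Update only the provider so it picks up the new kubeconfig
#    from code (URN below is illustrative).
pulumi up --target 'urn:pulumi:prod::infra::pulumi:providers:kubernetes::k8s'

# 2. Refresh state now that the provider can reach the cluster again.
pulumi refresh

# 3. Subsequent untargeted updates should preview cleanly.
pulumi up
```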

worried-knife-31967

09/14/2023, 9:25 AM
I'm currently trying to replicate an AKS credential rotation, as that's the thing I'm concerned about.

damp-airline-38442

10/18/2023, 6:58 PM
@worried-knife-31967 I opened a PR that would make Pulumi less conservative about replacement due to a change to the server's properties. Would this help you? https://github.com/pulumi/pulumi-kubernetes/pull/2598