# kubernetes
w
I'm having a bit of an issue in my head with the k8s provider. If I create the k8s cluster AND the Helm resources in the same Pulumi stack, it works fine. However, if I need to somehow (not sure if this is possible) change the kubeconfig that I've built from the output, all the Helm resources result in a replace. As I understand it, the kubeconfig includes the certificate data, so if that has to change, the provider is new, and therefore all the Helm releases will redeploy. That feels... wrong?
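For context, the setup looks roughly like this. This is a minimal sketch assuming an AKS cluster via `@pulumi/azure-native`; the resource names and chart are illustrative, not my actual code. The kubeconfig is assembled from cluster outputs and fed to an explicit k8s provider, and the Helm release hangs off that provider.
```typescript
import * as pulumi from "@pulumi/pulumi";
import * as azureNative from "@pulumi/azure-native";
import * as k8s from "@pulumi/kubernetes";

const resourceGroup = new azureNative.resources.ResourceGroup("rg");

// Cluster arguments trimmed down to the essentials for the sketch.
const cluster = new azureNative.containerservice.ManagedCluster("aks", {
    resourceGroupName: resourceGroup.name,
    dnsPrefix: "demo",
    identity: { type: "SystemAssigned" },
    agentPoolProfiles: [{ name: "default", mode: "System", count: 1, vmSize: "Standard_B2s" }],
});

// Build the kubeconfig from cluster outputs; the embedded certificate/key data is
// exactly what changes when the credentials are rotated.
const creds = azureNative.containerservice.listManagedClusterUserCredentialsOutput({
    resourceGroupName: resourceGroup.name,
    resourceName: cluster.name,
});
const kubeconfig = creds.kubeconfigs[0].value.apply(v =>
    Buffer.from(v, "base64").toString());

// Explicit provider built from that kubeconfig.
const provider = new k8s.Provider("aks", { kubeconfig });

// Helm release deployed through the explicit provider; a change to the provider's
// inputs is what shows up as a replace on this resource in the preview.
new k8s.helm.v3.Release("ingress-nginx", {
    chart: "ingress-nginx",
    repositoryOpts: { repo: "https://kubernetes.github.io/ingress-nginx" },
    namespace: "ingress-nginx",
    createNamespace: true,
}, { provider });
```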
d
It sounds like it's a false positive, and it's cropped up a few times: https://github.com/pulumi/pulumi/issues/4326 I've made changes to the kubeconfig a number of times without recreation. It's worth making the kubeconfig change using `--target` for the provider URN during the update. The preview should be correct when you do an untargeted update after that.
w
So you're suggesting manually generating the new kubeconfig and updating the provider in state?
I was expecting the refresh to pick up the new provider config.
d
Nope, you can make the change as you normally would in code.
Refreshes use the config from state as opposed to from code, so they don't pick up changes.
w
so that's going to fail...
since it won't be able to contact the k8s cluster...
I'm thinking about the scenario where a key has been compromised and cycled.
d
`pulumi up` will use the new kubeconfig if you make the change in code, so it should work fine
w
So the resource won't be tainted in any way by the fact that the kubeconfig has changed? Most of the examples I've seen are based on using the underlying kubecontext, so I'm curious whether this will actually work.
d
it is, but it's more of a preview issue, with Pulumi being conservative about changes. If you're still concerned, use `pulumi up --target 'provider_urn'`, then `pulumi refresh`. Subsequent `pulumi up`s should be fine
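Concretely, the sequence would look something like this (the URN below is only an illustration; the stack and provider names will differ):
```sh
# List the stack's resources with their URNs to find the explicit kubernetes provider.
pulumi stack --show-urns

# Push only the provider change (the new kubeconfig) into state.
pulumi up --target 'urn:pulumi:dev::my-project::pulumi:providers:kubernetes::aks'

# Reconcile the rest of the state against the cluster using the updated provider.
pulumi refresh

# From here on, a plain update should preview cleanly.
pulumi up
```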
w
I'm currently trying to replicate an AKS credential cycle, as that's the thing I'm concerned about.
d
@worried-knife-31967 I opened a PR that would make Pulumi less conservative about replacement due to a change to the server's properties. Would this help you? https://github.com/pulumi/pulumi-kubernetes/pull/2598