# kubernetes
c
Hi. We are using managed Kubernetes on AKS. The `azure.containerservice.KubernetesCluster` resource provides the property `kubeAdminConfigRaw`, which we feed directly into the k8s provider like so:
```typescript
let cluster = new k8s.Provider(name, { kubeconfig: aks.kubeAdminConfigRaw });
```
This works perfectly fine, except that due to this conservative diffing https://github.com/pulumi/pulumi-kubernetes/blob/master/provider/pkg/provider/provider.go#L270 every change on the AKS resource triggers a complete recreation of all resources that use this k8s provider instance. Note that the kubeconfig is mostly a means of authentication, not actually something stateful itself ... Does anyone have the same problem? Any solutions? 🙂
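(For context, a rough, self-contained sketch of the setup being described - the resource group, location, node pool values, and the example namespace are illustrative, not taken from the thread. The point is that anything created through the provider gets replaced along with it.)

```typescript
import * as azure from "@pulumi/azure";
import * as k8s from "@pulumi/kubernetes";

const rg = new azure.core.ResourceGroup("rg", { location: "WestEurope" });

// AKS cluster (configuration trimmed to a minimal, illustrative set of fields).
const aks = new azure.containerservice.KubernetesCluster("aks", {
    resourceGroupName: rg.name,
    location: rg.location,
    dnsPrefix: "example",
    defaultNodePool: { name: "default", nodeCount: 1, vmSize: "Standard_DS2_v2" },
    identity: { type: "SystemAssigned" },
});

// The k8s provider is fed the admin kubeconfig output directly.
const provider = new k8s.Provider("k8s", { kubeconfig: aks.kubeAdminConfigRaw });

// Any resource created with this provider is replaced whenever the provider
// itself is replaced, e.g. after a diff on its `kubeconfig` input.
const ns = new k8s.core.v1.Namespace("apps", {}, { provider });
```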
b
Also had the same issue lots of times - I think we ended up just putting `kubeConfigRaw` in the `ignoreChanges: []` block for the `CustomResourceOptions`.
Not sure if that's a good long-term strategy, e.g. if the credentials were rotated and then the provider failed to work,
but it's what one of the Pulumi staff suggested.
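For reference, a minimal sketch of that workaround, replacing the provider definition from the sketch above. The message above says `kubeConfigRaw`, but on the `k8s.Provider` itself the input to ignore would be `kubeconfig`; treat the exact property name as an assumption.

```typescript
// Ignore diffs on the provider's kubeconfig input so changes to the AKS
// resource don't replace the provider (and everything created through it).
// Caveat from the thread: if the credentials are actually rotated, the
// kubeconfig kept in state goes stale and the provider may stop working.
const provider = new k8s.Provider(
    "k8s",
    { kubeconfig: aks.kubeAdminConfigRaw },
    { ignoreChanges: ["kubeconfig"] },
);
```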
c
@better-rainbow-14549 okay, thanks for your response 🙂 We tried that. It has the downside that if you actually recreate your cluster, you must somehow manually trigger recreation of the k8s provider.
b
yeah
c
Would you solve that with a code change then? Or can you somehow "force" recreation of the provider from the command line for a single run?
b
Yeah, take off the `ignoreChanges` and run it manually outside of CI, I guess.
c
Guess you could `pulumi state delete` it 🤔
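(For reference, roughly what that looks like on the CLI; `<provider-urn>` is a placeholder you'd look up first, and removing a provider from state while dependent resources still reference it may need extra care.)

```
# list the stack's resources with their URNs to find the provider
pulumi stack --show-urns

# remove the provider from state so the next `pulumi up` recreates it
pulumi state delete '<provider-urn>'
```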
b
ah probably yeah not messed with it too much tbh
since adding ignoreChanges it's worked ok for our needs
c
Fair enough 😊 thank you
b
but I expect that sometime soon our service principal credentials will rotate and it'll be something we have to fix
no worries, let me know if you find a better solution
c
cc @gorgeous-egg-16927
g
I’ve encountered this issue on this exact use-case, as well as other use-cases involving creating a provider with the `kubeconfig` input. If the update to `kubeconfig` would result in a different namespace/cluster being targeted, then the provider update and dependent resource updates are actually desirable behaviour; but when the `kubeconfig` update doesn’t affect the namespace/cluster target, it’s quite problematic and actually causes pulumi to crash when replacing dependent resources - pulumi will attempt to replace existing resources, and create-then-delete semantics result in “resource already exists” problems… ugh. I don’t know how pulumi can better solve this issue, but it seems like the provider resource needs to do a smarter diff of the credentials to determine whether dependent resources actually need to be marked for recreation.
sorry to resurrect a dead thread :P
but I couldn’t find a GitHub issue for this