# kubernetes

cuddly-smartphone-89735

05/19/2020, 10:23 AM
Hi. We are using managed Kubernetes on AKS. The `azure.containerservice.KubernetesCluster` resource provides the property `kubeAdminConfigRaw`, which we feed directly into the k8s provider like so:
`let cluster = new k8s.Provider(name, {kubeconfig: aks.kubeAdminConfigRaw})`
This works perfectly fine, except that due to this conservative diffing https://github.com/pulumi/pulumi-kubernetes/blob/master/provider/pkg/provider/provider.go#L270 every change on the AKS resource triggers a complete recreation of all resources that use this k8s provider instance. Note that the kubeconfig is mostly a means of authentication, not actually something stateful itself… Does anyone have the same problem? Any solutions? 🙂
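As a minimal self-contained sketch of the setup described above (the `aks` declaration and all resource names here are illustrative, not from the original program):

```typescript
import * as azure from "@pulumi/azure";
import * as k8s from "@pulumi/kubernetes";

// Stand-in for the AKS cluster defined elsewhere in the program.
declare const aks: azure.containerservice.KubernetesCluster;

// The raw admin kubeconfig flows straight into the Kubernetes provider.
const provider = new k8s.Provider("aks-k8s", {
    kubeconfig: aks.kubeAdminConfigRaw,
});

// Any resource created with this provider is replaced whenever the
// provider's kubeconfig input diffs, even for credential-only rotations.
const ns = new k8s.core.v1.Namespace("apps", {}, { provider });
```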

better-rainbow-14549

05/19/2020, 10:44 AM
also had the same issue lots of times - i think we ended up just putting `kubeConfigRaw` in the `ignoreChanges: []` block for the `CustomResourceOptions`
not sure if that's a good long term strategy, e.g. if the credentials were rotated and then the provider failed to work
but it's what one of the pulumi staff suggested
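One plausible reading of that workaround, sketched below: ignore diffs on the provider's `kubeconfig` input via resource options (the message above says `CustomResourceOptions`, but the k8s provider itself takes plain resource options, and its input is named `kubeconfig`; `aks` and all names are illustrative):

```typescript
import * as azure from "@pulumi/azure";
import * as k8s from "@pulumi/kubernetes";

// Stand-in for the AKS cluster defined elsewhere in the program.
declare const aks: azure.containerservice.KubernetesCluster;

const provider = new k8s.Provider("aks-k8s", {
    kubeconfig: aks.kubeAdminConfigRaw,
}, {
    // Credential-only changes to the kubeconfig no longer diff the provider,
    // so dependent resources are not cascaded into replacement. Trade-off:
    // a genuine cluster recreation is silently ignored too.
    ignoreChanges: ["kubeconfig"],
});
```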

cuddly-smartphone-89735

05/19/2020, 1:23 PM
@better-rainbow-14549 okay, thanks for your response 🙂 we tried that. It has the downside that if you actually recreate your cluster, you must somehow manually trigger recreation of the k8s provider

better-rainbow-14549

05/19/2020, 1:24 PM
yeah

cuddly-smartphone-89735

05/19/2020, 1:25 PM
Would you solve that with a code change then? Or can you somehow "force" recreation of the provider from the command line for a single run?

better-rainbow-14549

05/19/2020, 1:26 PM
yeah take off the ignore changes and run it manually outside of CI i guess

cuddly-smartphone-89735

05/19/2020, 1:26 PM
Guess you could `pulumi state delete` it 🤔

better-rainbow-14549

05/19/2020, 1:26 PM
ah probably yeah not messed with it too much tbh
since adding `ignoreChanges` it's worked ok for our needs

cuddly-smartphone-89735

05/19/2020, 1:27 PM
Fair enough 😊 thank you

better-rainbow-14549

05/19/2020, 1:27 PM
but i expect that sometime soon our service principal credentials will rotate and it'll be something we have to fix
no worries let me know if you find a better solution

creamy-potato-29402

05/21/2020, 2:37 AM
cc @gorgeous-egg-16927

great-byte-67992

07/21/2020, 6:47 AM
I’ve encountered this issue on this exact use-case, as well as other use-cases involving creating a provider with the `kubeconfig` input. If the update to `kubeconfig` would result in a different namespace/cluster being targeted, then the provider update and dependent resource updates are actually desirable behaviour; but when the `kubeconfig` update doesn’t affect the namespace/cluster target, it’s quite problematic and actually causes pulumi to crash when replacing dependent resources - pulumi will attempt to replace existing resources, and create-then-delete semantics result in “resource already exists” problems… ugh. I don’t know how pulumi can better solve this issue, but it seems like the provider resource needs to do a smarter diff of the credentials to determine whether dependent resources actually need to be marked for recreation.
sorry to resurrect a dead thread :P
but i couldn’t find a github issue for this
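The "smarter diff" idea could, in principle, compare only the fields of a kubeconfig that identify the target cluster and ignore rotating credentials. Purely as an illustration, and not pulumi-kubernetes code (the `ParsedKubeconfig` shape and the `sameClusterTarget` helper are invented for this sketch):

```typescript
// Trimmed-down shape of a parsed kubeconfig: only target-identifying fields.
interface ParsedKubeconfig {
    clusters: { name: string; cluster: { server: string } }[];
    contexts: { name: string; context: { cluster: string; namespace?: string } }[];
    "current-context": string;
}

// True when both kubeconfigs point at the same server and namespace,
// regardless of any user credentials (tokens, client certs) they carry.
function sameClusterTarget(a: ParsedKubeconfig, b: ParsedKubeconfig): boolean {
    const target = (cfg: ParsedKubeconfig): string => {
        const ctx = cfg.contexts.find(c => c.name === cfg["current-context"])?.context;
        const cluster = cfg.clusters.find(c => c.name === ctx?.cluster)?.cluster;
        return `${cluster?.server}|${ctx?.namespace ?? "default"}`;
    };
    return target(a) === target(b);
}
```

A provider diff built on something like this would mark dependent resources for replacement only when the target actually changes.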