cuddly-smartphone-89735
05/19/2020, 10:23 AM
The azure.containerservice.KubernetesCluster resource provides the property kubeAdminConfigRaw, which we feed directly into the k8s provider like so: let cluster = new k8s.Provider(name, { kubeconfig: aks.kubeAdminConfigRaw })
This works perfectly fine, except that due to this conservative diffing https://github.com/pulumi/pulumi-kubernetes/blob/master/provider/pkg/provider/provider.go#L270 every change on the AKS resource triggers a complete recreation of all resources that use this k8s provider instance. Note that the kubeconfig is mostly a means of authentication, not actually something stateful itself ...
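For context, a minimal sketch of the setup being described (resource names and cluster settings below are illustrative placeholders, not from the thread):

```typescript
import * as azure from "@pulumi/azure";
import * as k8s from "@pulumi/kubernetes";

// Illustrative sketch of the pattern under discussion; real AKS configuration
// (node pools, networking, AAD/RBAC, etc.) is elided.
const aks = new azure.containerservice.KubernetesCluster("aks", {
    resourceGroupName: "my-rg",
    dnsPrefix: "aks",
    defaultNodePool: { name: "default", nodeCount: 1, vmSize: "Standard_D2_v2" },
    identity: { type: "SystemAssigned" },
    // Note: kubeAdminConfigRaw is only populated when AAD-integrated RBAC is
    // enabled; that configuration is omitted here for brevity.
});

// The raw admin kubeconfig is passed straight into the Kubernetes provider.
const k8sProvider = new k8s.Provider("aks-k8s", {
    kubeconfig: aks.kubeAdminConfigRaw,
});

// Every resource created with this provider is marked for replacement whenever
// the provider's kubeconfig input changes, e.g. after an otherwise harmless
// update to the AKS resource.
const ns = new k8s.core.v1.Namespace("apps", {}, { provider: k8sProvider });
```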
Does anyone have the same problem? Any solutions?
better-rainbow-14549
05/19/2020, 10:44 AM
cuddly-smartphone-89735
05/19/2020, 1:23 PM
better-rainbow-14549
05/19/2020, 1:24 PM
cuddly-smartphone-89735
05/19/2020, 1:25 PM
better-rainbow-14549
05/19/2020, 1:26 PM
cuddly-smartphone-89735
05/19/2020, 1:26 PM
pulumi state delete it
better-rainbow-14549
05/19/2020, 1:26 PM
cuddly-smartphone-89735
05/19/2020, 1:27 PM
better-rainbow-14549
05/19/2020, 1:27 PM
creamy-potato-29402
05/21/2020, 2:37 AM
great-byte-67992
07/21/2020, 6:47 AM
kubeconfig input. If the update to kubeconfig would result in a different namespace/cluster being targeted, then the provider update and dependent resource updates are actually desirable behaviour; but when the kubeconfig update doesn't affect the namespace/cluster target, it's quite problematic and actually causes pulumi to crash when replacing dependent resources - pulumi will attempt to replace existing resources, and create-then-delete semantics result in "resource already exists" problems… ugh.
I don't know how pulumi can better solve this issue, but it seems like the provider resource needs to do a smarter diff of the credentials to determine if dependent resources actually need to be marked for recreation.
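One possible stop-gap, offered here as an editorial suggestion rather than anything confirmed in this thread: since the kubeconfig is effectively just credentials, you can ask the engine to ignore diffs on that one provider input so routine AKS updates don't cascade into replacing every dependent resource. The names below are illustrative, and whether ignoreChanges fully suppresses provider replacement may depend on your Pulumi version.

```typescript
import * as azure from "@pulumi/azure";
import * as k8s from "@pulumi/kubernetes";

// Assumes `aks` is the azure.containerservice.KubernetesCluster defined elsewhere.
declare const aks: azure.containerservice.KubernetesCluster;

// Sketch of a possible mitigation (an assumption, not an official fix):
// ignore diffs on the kubeconfig input so that incidental AKS changes do not
// mark every resource created with this provider for replacement.
// Trade-off: genuinely rotated credentials or a retargeted cluster will not
// be picked up until the ignoreChanges option is removed again.
const k8sProvider = new k8s.Provider("aks-k8s", {
    kubeconfig: aks.kubeAdminConfigRaw,
}, { ignoreChanges: ["kubeconfig"] });
```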