I've seen this behavior in a couple of environments now and have had to resort to manually deleting the new AKS cluster, exporting the stack, deleting all of the Kubernetes resources and the old provider from the state, and regenerating everything. Obviously not ideal. @broad-dog-22463
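for reference, the manual workaround looks roughly like this — a sketch only; the resource group, cluster name, and state entries are placeholders, and the hand-editing step depends on what's actually in your stack:

```shell
# delete the replacement AKS cluster the failed update created
# (names here are placeholders for your own resources)
az aks delete --resource-group my-rg --name my-aks-cluster --yes

# export the stack state so it can be edited
pulumi stack export --file state.json

# ... hand-edit state.json: remove the stale k8s resource entries and the
#     old kubernetes provider from the "resources" array ...

# re-import the cleaned-up state, then recreate everything
pulumi stack import --file state.json
pulumi up
```

individual entries can also be removed with `pulumi state delete <urn>` instead of hand-editing the exported JSON, which is a bit safer when only a few resources are stale.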
nice-guitar-97142
05/05/2020, 6:05 PM
attempting to run `up` again after the failed update just results in the same error
gentle-diamond-70147
05/05/2020, 6:41 PM
Can you open an issue at https://github.com/pulumi/pulumi-azure with the Pulumi CLI and package versions you're using and the steps that got you into this state?