We're facing an issue with dynamic kubernetes provider instances when upgrading pulumi versions. I can't say precisely in which version this started, but I believe it first occurred when upgrading the CLI from 0.17.16 to 0.17.24 and "@pulumi/kubernetes" from "0.22.0" to "0.25.2". Basically what happens is that after the upgrade, pulumi attempts to replace all our dynamic kubernetes provider instances, including all resources that were created using them, even though we made no changes to the stack resources. After a bit of debugging I found the issue: previously our stack state only contained this:
{
    "urn": "urn:pulumi:my-stack::my-stack::pulumi:providers:kubernetes::my-cluster",
    ...
    "inputs": {
        "kubeconfig": "...",
        "namespace": "my-namespace"
    }
    ...
}
and after the upgrade pulumi wants to converge to this state:
{
    "urn": "urn:pulumi:my-stack::my-stack::pulumi:providers:kubernetes::my-cluster",
    ...
    "inputs": {
        "kubeconfig": "...",
        "namespace": "my-namespace"
    },
    "outputs": {
        "kubeconfig": "...",
        "namespace": "my-namespace"
    }
    ...
}
As you can see, after the upgrade an "outputs" section that wasn't there before is suddenly added. Because it didn't exist before, pulumi attempts a replacement. I can work around the issue by copying the "inputs", pasting them as "outputs", and importing the fixed state.json, which I did for a few stacks, but since we have almost 100 stacks this is not a scalable solution. Is this a known issue, and is there a better solution to the problem? By the way, I can reproduce this problem quite consistently (my machine, other machines, CI).
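For reference, here is a rough sketch of how the manual fix could be scripted across stacks instead of patching each state.json by hand. The stack names are placeholders and the state shape is an assumption based on the snippets above; it just wraps the standard `pulumi stack export` / `pulumi stack import` commands:

// patch-provider-state.ts
// Sketch: for each stack, export its state, copy provider "inputs" to
// "outputs" where "outputs" is missing, and import the patched state back.
// Assumes the exported shape deployment.resources[].{type,inputs,outputs}
// and that the `pulumi` CLI is on the PATH and logged in to the backend.
import { execFileSync } from "child_process";

const stacks = ["my-stack", "my-other-stack"]; // placeholder list of ~100 stacks

for (const stack of stacks) {
  // Export the current stack state as JSON.
  const raw = execFileSync("pulumi", ["stack", "export", "--stack", stack], {
    encoding: "utf8",
    maxBuffer: 64 * 1024 * 1024,
  });
  const state = JSON.parse(raw);

  let patched = 0;
  for (const res of state.deployment?.resources ?? []) {
    // Only touch kubernetes provider resources that are missing outputs.
    if (res.type === "pulumi:providers:kubernetes" && res.inputs && !res.outputs) {
      res.outputs = { ...res.inputs };
      patched++;
    }
  }

  if (patched > 0) {
    // Import the patched state back into the stack (reads JSON from stdin).
    execFileSync("pulumi", ["stack", "import", "--stack", stack], {
      input: JSON.stringify(state),
      encoding: "utf8",
    });
    console.log(`${stack}: patched ${patched} provider resource(s)`);
  } else {
    console.log(`${stack}: nothing to patch`);
  }
}

This still feels like papering over the state diff rather than fixing the root cause, so I'd prefer an upstream fix or an officially recommended migration path if one exists.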