# general
f
Updating from @pulumi/eks 0.18.18 to 0.18.21 results in Pulumi thinking all (I think?) of my k8s resources need to be replaced. This seems like a regression, given nothing in the eks changelog (https://github.com/pulumi/pulumi-eks/blob/master/CHANGELOG.md) indicates that this should occur.
w
Can you share details of the diff you see? In particular, the first few "replace" resources you see listed? I'm guessing your `kubernetes:Provider` resource in particular is being replaced? What diff do you see on that resource?
f
Just going from 0.18.18 -> 0.18.19 results in the same diff
Looks like you're correct.
w
What is the diff on the `kubeconfig`?
That will be what is leading to all the replacements - Pulumi thinks you are targeting a different cluster now.
f
It just says `=> output<string>`
Nothing is changing in the code - I just bump the dep and run pulumi up.
I can go back to 0.18.18, npm install, pulumi up, and get no changes.
Not sure if it's relevant but I did also update all my other pulumi deps:
"@pulumi/pulumi": "1.11.0",
"@pulumi/kubernetes": "1.5.4",
"@pulumi/eks": "0.18.19",
"@pulumi/awsx": "0.19.2",
"@pulumi/aws": "1.23.0",
I suppose I could try downgrading all the deps and just bumping eks.
w
I see. There's a good chance that this is just the preview being conservative - and that nothing will actually get replaced. But it's certainly dangerous. Let me take a quick look. If you could open an issue, it's definitely something we'll want to look into further.
I suppose I could try downgrading all the deps and just bumping eks.
I'd be interested in whether that changes anything - I expect you will see the same thing though.
f
Issue in eks repo?
w
Yeah - thanks.
👍 1
f
Downgrading deps results in the same, as you said.
w
I don't see anything that changed in EKS between these versions that should cause this. But the AWS version being used bumped from `^1.14.0` to `^1.18.0`. I do wonder if there is an issue between those versions.
f
w
Just to make sure - are you also seeing an "update" on your `aws:eks:cluster` resource? If so, what is it?
Ahh - your bug report includes it: `[diff: ~tags]`
f
Yep. Weirdly it thinks it needs to add a bunch of tags. Maybe it wasn't actually setting the tags before.
Ah, current cluster in the console does not have the tags.
w
Okay - so that is what is leading to this, unfortunately. Because the cluster is changing (adding tags), the outputs from the cluster are no longer guaranteed to be the same (the update could lead to changes of some outputs). In practice the outputs we depend on here won't change (they are immutable until the resource is replaced). But Pulumi doesn't know that, so it conservatively notes that these could change, and if they did, that would cause `kubeconfig` to change and the `k8s.Provider` to be replaced.
So I'm 99.9% sure this would not actually be a replacement - but it is a very bad (and dangerous) position to be in.
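For context, the dependency chain looks roughly like this (a simplified sketch, not the actual @pulumi/eks source; `eksCluster` here stands in for the aws:eks/cluster:Cluster from the diff):

import * as pulumi from "@pulumi/pulumi";
import * as aws from "@pulumi/aws";
import * as k8s from "@pulumi/kubernetes";

declare const eksCluster: aws.eks.Cluster; // the cluster being updated above

// The kubeconfig is derived from cluster outputs...
const kubeconfig = pulumi
    .all([eksCluster.name, eksCluster.endpoint, eksCluster.certificateAuthority])
    .apply(([name, endpoint, ca]) =>
        JSON.stringify({
            apiVersion: "v1",
            kind: "Config",
            clusters: [{ name, cluster: { server: endpoint, "certificate-authority-data": ca.data } }],
            // ...users/contexts elided...
        }));

// ...and the provider is derived from the kubeconfig. A possible change to any
// cluster output therefore becomes a possible change to `kubeconfig`, which is a
// replace-triggering property on kubernetes:Provider.
const provider = new k8s.Provider("k8s", { kubeconfig });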
Let me see if it's possible to suppress the `tags` diff - that's the most immediate workaround.
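One possible way to do that from the program side (a rough, untested sketch - the resource name and elided args are placeholders, not code from this thread) is a resource transformation that applies `ignoreChanges` to the underlying aws:eks/cluster:Cluster child:

import * as eks from "@pulumi/eks";

const cluster = new eks.Cluster("k8s-cluster-dev", {
    // ...your existing cluster args...
}, {
    transformations: [args => {
        // Only touch the EKS control-plane resource created inside the component.
        if (args.type === "aws:eks/cluster:Cluster") {
            return {
                props: args.props,
                // Ask Pulumi to ignore any diff on `tags` for this resource.
                opts: { ...args.opts, ignoreChanges: ["tags"] },
            };
        }
        return undefined;
    }],
});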
f
I guess the 0.18.18 version of eks didn't actually set the tags I passed in even though it had the property.
w
Can you try passing `clusterTags: { Name: <any>undefined }` and see if that fixes it?
And if you had any `tags:` prior to this - remove it. As you note, it wasn't being used previously.
f
On 0.18.19?
w
Yes.
f
Same
w
What is the full detailed diff on the `aws:eks:Cluster` resource?
f
~ aws:eks/cluster:Cluster: (update)
            [id=k8s-cluster-dev-eksCluster-599d517]
            [urn=urn:pulumi:dev::cs-platform::eks:index:Cluster$aws:eks/cluster:Cluster::k8s-cluster-dev-eksCluster]
            [provider=urn:pulumi:dev::cs-platform::pulumi:providers:aws::default_1_18_0::5143f9be-c35e-485f-974f-b16cb678db78]
          ~ tags: {
              + Name   : "k8s-cluster-dev-eksCluster"
              + app    : "cs-platform"
              + env    : "dev"
              + service: "eks"
            }
w
Are you sure you added `clusterTags: { Name: <any>undefined }`?
f
That's not a tag I created
w
I'm suggesting adding that argument to your cluster. Did you do that?
If you did - I would not expect to see this line in the diff you shared above: `+ Name   : "k8s-cluster-dev-eksCluster"`
f
Oh geez, misread your other message. Trying now.
Ok I now have:
tags: getTags({service: "eks"}),
    clusterTags: { Name: <any>undefined }
and the diff looks the same. Gotta run for lunch. Be back in ~30 mins.
Setting Name does not fix this issue.
I have also tried removing the tags on 0.18.18 and adding them back in on 0.18.19, and I get the same issue.
f
Another option would be to manually apply the missing tags to the cluster and then perform a targeted refresh: `pulumi refresh --target <cluster-urn>`. Then, when you `pulumi up`, your statefile should know about those tags and it shouldn't generate the diff discussed above.