full-dress-10026

02/22/2020, 8:28 PM
Updating from @pulumi/eks 0.18.18 to 0.18.21 results in Pulumi thinking all (I think?) of my k8s resources need to be replaced. This seems like a regression, given nothing in the eks changelog (https://github.com/pulumi/pulumi-eks/blob/master/CHANGELOG.md) indicates that this should occur.

white-balloon-205

02/22/2020, 8:29 PM
Can you share details of the diff you see? In particular, the first few "replace" resources you see listed? I'm guessing your
kubernetes:Provider
resource in particular is being replaced? What diff do you see on that resource?

full-dress-10026

02/22/2020, 8:29 PM
Just going from 0.18.18 -> 0.18.19 results in the same diff
Looks like you're correct.

white-balloon-205

02/22/2020, 8:31 PM
What is the diff on the
kubeconfig
?
That will be what is leading to all the replacements - Pulumi thinks you are targeting a different cluster now.
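For context, the kubernetes provider in a program like this is typically built from the cluster's kubeconfig output (roughly what the eks component does internally), so a kubeconfig diff cascades into replacing the provider and everything created with it. A minimal sketch with illustrative names, not code from this thread:

import * as eks from "@pulumi/eks";
import * as k8s from "@pulumi/kubernetes";

// Illustrative cluster; a real program passes VPC config, tags, etc.
const cluster = new eks.Cluster("k8s-cluster-dev");

// The provider is wired to the cluster's kubeconfig output. If Pulumi cannot
// prove kubeconfig is unchanged, the provider shows up as a replace...
const provider = new k8s.Provider("k8s", {
    kubeconfig: cluster.kubeconfig.apply(JSON.stringify),
});

// ...and so does every resource created with that provider.
const ns = new k8s.core.v1.Namespace("apps", {}, { provider });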

full-dress-10026

02/22/2020, 8:32 PM
It just says
=> output<string>
Nothing is changing in the code - I just bump the dep and run pulumi up.
I can go back to 0.18.18, npm install, pulumi up, and get no changes.
Not sure if it's relevant but I did also update all my other pulumi deps:
"@pulumi/pulumi": "1.11.0",
"@pulumi/kubernetes": "1.5.4",
"@pulumi/eks": "0.18.19",
"@pulumi/awsx": "0.19.2",
"@pulumi/aws": "1.23.0",
I suppose I could try downgrading all the deps and just bumping eks.

white-balloon-205

02/22/2020, 8:35 PM
I see. There's a good chance that this is just the preview being conservative - and that nothing will actually get replaced. But it's certainly dangerous. Let me take a quick look. If you could open an issue, it's definitely something we'll want to look into further.
> I suppose I could try downgrading all the deps and just bumping eks.
I'd be interested in whether that changes anything - I expect you will see the same thing though.

full-dress-10026

02/22/2020, 8:36 PM
Issue in eks repo?

white-balloon-205

02/22/2020, 8:36 PM
Yeah - thanks.
👍 1

full-dress-10026

02/22/2020, 8:41 PM
Downgrading deps results in the same, as you said.

white-balloon-205

02/22/2020, 8:43 PM
I don't see anything that changed in EKS between these versions that should cause this. But the AWS version being used bumped from
^1.14.0
to
^1.18.0
. I do wonder if there is an issue between those versions.

full-dress-10026

02/22/2020, 8:46 PM

white-balloon-205

02/22/2020, 8:46 PM
Just to make sure - are you also seeing an "update" on your
aws:eks:cluster
resource? If so, what is it?
Ahh - your bug report includes it:
[diff: ~tags]

full-dress-10026

02/22/2020, 8:47 PM
Yep. Weirdly it thinks it needs to add a bunch of tags. Maybe it wasn't actually setting the tags before.
Ah, current cluster in the console does not have the tags.

white-balloon-205

02/22/2020, 8:49 PM
Okay - so that is what is leading to this unfortunately. Because the cluster is changing (adding tags), the outputs from the cluster are no longer guaranteed to be the same (the update could lead to changes of some outputs). But in practice the outputs we depend on here won't change (they are immutable until the resource is replaced). But Pulumi doesn't know that, so it conservatively notes that these could change, and if they do, it would cause
kubeconfig
to change and the
k8s.Provider
to be replaced.
So I'm 99.9% sure this would not actually be a replacement - but it is a very bad (and dangerous) position to be in.
Let me see if it's possible to suppress the
tags
diff - that's the most immediate workaround.
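Until then, a possible user-side workaround is a resource transformation that ignores tag diffs on the underlying aws:eks/cluster:Cluster. This is a sketch only, and it also means the new tags would not be applied by Pulumi:

import * as eks from "@pulumi/eks";

// Sketch: suppress the tags diff on the child aws:eks/cluster:Cluster so the
// cluster (and the kubeconfig derived from it) is not flagged as changing.
// Cluster options elided.
const cluster = new eks.Cluster("k8s-cluster-dev", {
    // ...existing cluster options...
}, {
    transformations: [args => {
        if (args.type === "aws:eks/cluster:Cluster") {
            return { props: args.props, opts: { ...args.opts, ignoreChanges: ["tags"] } };
        }
        return undefined;
    }],
});

The manual-tagging-plus-targeted-refresh approach suggested later in the thread gets the tags into the state instead of hiding the diff.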

full-dress-10026

02/22/2020, 8:50 PM
I guess the 0.18.18 version of eks didn't actually set the tags I passed in even though it had the property.

white-balloon-205

02/22/2020, 8:52 PM
Can you try passing
clusterTags: { Name: <any>undefined }
and see if that fixes it?
And if you had any
tags:
prior to this - remove it. As you note, it wasn't being used previously.
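In code, the suggested change would look roughly like this (other options elided; as the rest of the thread shows, it did not resolve the diff in this case):

import * as eks from "@pulumi/eks";

const cluster = new eks.Cluster("k8s-cluster-dev", {
    // ...other options unchanged...
    // tags: removed - per the note above, 0.18.18 never applied it
    clusterTags: { Name: <any>undefined },
});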

full-dress-10026

02/22/2020, 8:56 PM
On 0.18.19?

white-balloon-205

02/22/2020, 8:56 PM
Yes.

full-dress-10026

02/22/2020, 8:56 PM
Same

white-balloon-205

02/22/2020, 8:57 PM
What is the full detailed diff on the aws:eks:Cluster resource?

full-dress-10026

02/22/2020, 8:58 PM
~ aws:eks/cluster:Cluster: (update)
            [id=k8s-cluster-dev-eksCluster-599d517]
            [urn=urn:pulumi:dev::cs-platform::eks:index:Cluster$aws:eks/cluster:Cluster::k8s-cluster-dev-eksCluster]
            [provider=urn:pulumi:dev::cs-platform::pulumi:providers:aws::default_1_18_0::5143f9be-c35e-485f-974f-b16cb678db78]
          ~ tags: {
              + Name   : "k8s-cluster-dev-eksCluster"
              + app    : "cs-platform"
              + env    : "dev"
              + service: "eks"
            }

white-balloon-205

02/22/2020, 8:59 PM
Are you sure you added
clusterTags: { Name: <any>undefined }
?

full-dress-10026

02/22/2020, 9:00 PM
That's not a tag I created

white-balloon-205

02/22/2020, 9:00 PM
I'm suggesting adding that argument to your cluster. Did you do that?
If you did - I would not expect to see this line in the diff you shared above:
+ Name   : "k8s-cluster-dev-eksCluster"

full-dress-10026

02/22/2020, 9:01 PM
Oh geez, misread your other message. Trying now.
Ok I now have:
tags: getTags({service: "eks"}),
    clusterTags: { Name: <any>undefined }
and the diff looks the same. Gotta run for lunch. Be back in ~30 mins.
Setting Name does not fix this issue.
I have also tried removing the tags on 0.18.18 and adding them back in in 0.18.19 and get the same issue.

faint-table-42725

02/23/2020, 3:51 AM
Another option would be to manually apply the missing tags to the cluster
and then perform a targeted refresh
pulumi refresh --target <cluster-urn>
then, when you
pulumi up
your state file should know about those tags and it shouldn't generate the diff discussed above.
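A sketch of that sequence, using a placeholder ARN plus the tag values and cluster URN from the diff above:

# Add the missing tags directly in AWS (account and region are placeholders).
aws eks tag-resource \
  --resource-arn arn:aws:eks:us-west-2:123456789012:cluster/k8s-cluster-dev-eksCluster-599d517 \
  --tags Name=k8s-cluster-dev-eksCluster,app=cs-platform,env=dev,service=eks

# Pull the real tags into the Pulumi state for just the cluster resource.
pulumi refresh --target 'urn:pulumi:dev::cs-platform::eks:index:Cluster$aws:eks/cluster:Cluster::k8s-cluster-dev-eksCluster'

# The next preview/update should no longer show the tags diff or the cascading replacements.
pulumi up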