Did @pulumi/eks `0.18.19` release inadvertently?
# kubernetes
m
Did @pulumi/eks `0.18.19` release inadvertently?
b
Yes, v0.18.19 was released yesterday. The changelog doesn't reflect that yet - it has a typo that we're fixing. Is there anything else you're experiencing with the release?
m
It seemed to introduce a fair amount of flux that I thought maybe shouldn't be a patch version, but I didn't look too carefully at what changed.
A lot of changes in my `plan`, I mean
b
Sorry we introduced more changes than you were expecting. Our CI has been experiencing issues, and we've been working to get it into a state to cut a release, since it had been a while. What changes did you see? What version were you updating from? You could DM me the state files from before and after, or email them to me if you'd prefer (mike@pulumi.com), so that we can evaluate further. Also, please feel free to open any issues you're encountering [1]. If you're seeing flux, it's likely that others are too.
1 - https://github.com/pulumi/pulumi-eks/issues/new
m
I'll have to look into it in more detail when I have time; for now I just had to revert versions. I'll make sure to open an issue if anything seems wrong, thanks!
👍 1
I'm just now having a chance to look into this. I'm seeing that the `kubeconfig` is updating unexpectedly, causing a diff to internal providers and subsequently to other resources:
```
├─ infrastructure:k8s-cluster-v2         cluster-us2
     │  └─ eks:index:Cluster                  cluster-us2
 ~   │     ├─ aws:ec2:SecurityGroup           cluster-us2-eksClusterSecurityGroup  update      [diff: ~tags]
 ~   │     ├─ aws:eks:Cluster                 cluster-us2-eksCluster               update      [diff: ~tags]
 ~   │     ├─ aws:ec2:SecurityGroup           cluster-us2-nodeSecurityGroup        update      [diff: ~tags]
 ~   │     ├─ pulumi-nodejs:dynamic:Resource  cluster-us2-vpc-cni                  update      [diff: ~__provider,kubeconfig]
 +-  │     ├─ pulumi:providers:kubernetes     cluster-us2-eks-k8s                  replace     [diff: ~kubeconfig]
 +-  │     ├─ kubernetes:core:ConfigMap       cluster-us2-nodeAccess               replace     [diff: ~metadata,provider]
 +-  │     ├─ aws:ec2:LaunchConfiguration     cluster-us2-nodeLaunchConfiguration  replace     [diff: ~userData]
 ~   │     ├─ aws:cloudformation:Stack        cluster-us2-nodes                    update      [diff: ~tags,templateBody]
 +-  │     └─ pulumi:providers:kubernetes     cluster-us2-provider                 replace     [diff: ~kubeconfig]
```
I’m still trying to understand more so I can open a useful issue
I did a targeted apply of just the things with `~tags`, and that somehow seemed to make the other diffs disappear
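For reference, a targeted apply like the one described above might look roughly like this. This is a sketch only: the stack and project names (`prod`, `infra`) are hypothetical, and the real URNs would come from `pulumi stack --show-urns`.

```shell
# Hedged sketch of applying only the resources showing a ~tags diff.
# URNs below assume a stack named "prod" and a project named "infra";
# substitute the actual URNs from `pulumi stack --show-urns`.
pulumi up \
  --target 'urn:pulumi:prod::infra::aws:ec2/securityGroup:SecurityGroup::cluster-us2-eksClusterSecurityGroup' \
  --target 'urn:pulumi:prod::infra::aws:eks/cluster:Cluster::cluster-us2-eksCluster' \
  --target 'urn:pulumi:prod::infra::aws:ec2/securityGroup:SecurityGroup::cluster-us2-nodeSecurityGroup'
```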
b
What version were you upgrading to 0.18.19 from?
Did you happen to do a `pulumi refresh` before the update?
Support for cluster tags was added, so that diff would apply. Is the cluster still functional?
m
From `0.18.18`. I tried a `refresh` halfway through doing the targeted applies, thinking that might fix the rest of the clusters faster. It seemed to have no effect.
Yeah, it's all fine now.
b
I believe the changes you experienced are related to refreshing the state of the live cluster.
Please let us know if you experience any further issues.
m
Sure, thanks!