# kubernetes
Hi all 👋, not sure if this is the right channel for this, but it is k8s related so I might ask here. I am trying to troubleshoot an issue I am having with running preview and apply against our eks cluster. An upgrade is in the works, but I need to be able to apply some changes that will be used in downstream projects that appear to be working. The issue is similar to, if not the same as, this (I am not the author of the issue). I have noticed that pulumi wants a much later version of the kubernetes provider than the versions of the npm packages we have installed, and I think the issue may stem from installing a later package that upgrades `client-go` to `>1.23.x`. I guess the question is: how does pulumi decide which resource plugin version it requires?
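My working theory, in case it helps anyone (happy to be corrected): the SDK passes its own package version along when it registers each resource, so the engine wants whichever plugin version matches the `@pulumi/kubernetes` that actually resolves from node_modules. A quick sketch for checking what resolves in a given workspace:

```typescript
// Sketch: print the @pulumi/kubernetes version this workspace actually
// resolves. As far as I can tell the npm package version and the resource
// plugin version line up, so this is the plugin the engine will ask for.
console.log(require("@pulumi/kubernetes/package.json").version);
```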
The frustrating thing about this error is that there is no trace of `v1alpha1` in the kubeconfig (the aws cli is at latest, before anyone mentions it) or in the code. It is in the state (generated by the eks plugin); patching the state does not seem to help, as it still somehow wants to use an internally generated kubeconfig with `v1alpha1`.
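For anyone hitting the same wall: one workaround I have been playing with (a sketch, not sure it is the blessed approach) is rewriting the exec apiVersion in the generated kubeconfig before building a provider from it, since newer `client-go` only accepts `v1beta1`:

```typescript
import * as eks from "@pulumi/eks";
import * as k8s from "@pulumi/kubernetes";

// Assumed: an existing eks.Cluster defined elsewhere in the program.
declare const cluster: eks.Cluster;

// cluster.kubeconfig is an Output; newer client-go dropped the
// client.authentication.k8s.io/v1alpha1 exec credential API, so rewrite
// it to v1beta1 before handing the config to a kubernetes provider.
const patchedKubeconfig = cluster.kubeconfig.apply(cfg =>
    JSON.stringify(cfg).replace(
        /client\.authentication\.k8s\.io\/v1alpha1/g,
        "client.authentication.k8s.io/v1beta1"));

const provider = new k8s.Provider("patched", { kubeconfig: patchedKubeconfig });
```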
Pulumi wants to update the provider plugin version in use (`"3.7.3" => "3.20.3"`) according to the preview, and `3.20.3` has `client-go` 1.24.x IIRC, which is why it is throwing the error.
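In the meantime, one way to stop the forced roll-forward might be to pin the plugin explicitly via the `version` resource option (a sketch; `3.7.3` is just the version from this thread, adjust to your lockfile):

```typescript
import * as k8s from "@pulumi/kubernetes";

// Assumed: kubeconfig comes from wherever your program already gets it.
declare const kubeconfig: string;

// The `version` resource option forces the engine to use this exact
// resource plugin instead of whatever the resolved SDK package asks for.
const pinned = new k8s.Provider("pinned", { kubeconfig }, { version: "3.7.3" });
```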
In any case, I think I have solved the issue: I have inherited a monorepo, we are using yarn workspaces, and package resolution was the problem. Coming from Python I was not aware of some of the semantics. The question stands though: how does pulumi choose the plugin?
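For the record, the fix was forcing a single version across the workspaces, something along these lines in the root package.json (the version here is illustrative, use whatever your lockfile should converge on):

```json
{
  "resolutions": {
    "@pulumi/kubernetes": "3.7.3"
  }
}
```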
One last observation: Kubernetes upstream only supports a version skew of ±1 minor version for kubectl. I had to update to the eks node package 0.40.0+ to get everything working, and it updated our VPC CNI deployment but required kubectl 1.24. We are using a 1.21 cluster; that is a skew of three minor versions! I could be wrong, but could this be caused by the pulumi cli's seeming insistence on always rolling resource plugin versions forward?
The issue I was running into is that we were using yarn workspaces, and the other workspaces were pulling in the eks module as a dependency of a shared library. That version of eks was generating a kubeconfig that was being used to build a bunch of internal resources, and the newer version of the kubernetes resource plugin was using a later version of `client-go` that was not happy with that kubeconfig. I have been all over the place with aws cli versions, kubectl versions, etc. Now that I am running a later version of the eks node module I am hitting a new issue: it is failing to patch the VPC CNI. Still looking into it.
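While I dig into that, here is roughly the shape I am aiming for in the shared library (names are placeholders, not our actual code): build one k8s provider from the cluster's own kubeconfig and pass it to every internal resource explicitly, so nothing falls back to an ambient or stale config:

```typescript
import * as eks from "@pulumi/eks";
import * as k8s from "@pulumi/kubernetes";

// Assumed: an existing eks.Cluster defined elsewhere in the program.
declare const cluster: eks.Cluster;

// One provider for all internal resources, built from the cluster's own
// kubeconfig, so everything talks to EKS with the same credentials.
const provider = new k8s.Provider("eks-k8s", {
    kubeconfig: cluster.kubeconfig.apply(JSON.stringify),
});

// Example internal resource: pass the provider explicitly rather than
// relying on whatever default the resolved plugin would construct.
const ns = new k8s.core.v1.Namespace("internal", {}, { provider });
```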