# kubernetes
Hello! I’m trying to update an EKS cluster configured by Pulumi. I’m getting a seemingly common error, but the suggested solutions all seem to be about repairing or locating the kubeconfig. My kubeconfig is healthy, though, and works fine for `kubectl` calls. Does anyone know of other things that could cause this problem?
```
Diagnostics:
  aws:eks:Cluster (bluewhale-staging-eksCluster):
    error: Exception calling application: The connection to the server localhost:8080 was refused - did you specify the right host or port?
```
What's your Kubernetes Provider config look like?
```python
self.k8_provider = k8s.Provider('eks-k8s', kubeconfig=self.cluster.kubeconfig.apply(lambda k: json.dumps(k)))
```
where `cluster` is a `pulumi_eks.eks.Cluster` object. Incidentally, I was able to stand up a stack in my dev environment after resolving a new issue around a circular dependency that hadn’t come up previously, but I still can’t update an existing stack in this staging environment; I get the same error as above.
And I can’t update a different stack in yet a different environment with the following error:
```
Diagnostics:
  aws:eks:Cluster (bluewhale-production-eksCluster):
    error: Exception calling application: expected str, bytes or os.PathLike object, not Unknown
```
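(For anyone hitting this later: the `not Unknown` wording is the standard CPython message raised when a non-path object is passed where a file path is expected, and `Unknown` is the type name of the object that was passed. A minimal stand-alone illustration, using a hypothetical `Unknown` class as a stand-in for an unresolved Pulumi output value:)

```python
import os


class Unknown:
    """Stand-in for an unresolved value whose concrete string isn't available yet."""


try:
    # os.fspath() expects str, bytes, or os.PathLike; anything else raises
    # TypeError with exactly this "expected str, bytes or os.PathLike object,
    # not <TypeName>" message seen in the diagnostics above.
    os.fspath(Unknown())
except TypeError as exc:
    message = str(exc)

print(message)  # expected str, bytes or os.PathLike object, not Unknown
```

So the production error suggests some unresolved value (rather than a concrete string, such as a kubeconfig path) is being handed to path-handling code.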
hmm are you able to share more of your code?
Quite possibly but I have lots of classes handling different aspects of these deployments so it’s a bit complex. What are we looking for?
check whether `self.k8_provider` might be missing from any resources (have you also tried printing the JSON contents to make sure it's exactly what you expect?). and for the type error you got, it looks like you might just need to comb through the code
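(For context, what "missing the provider" looks like in code. This is a hedged sketch, not the poster's actual code: the resource names and the `Namespace` resource are hypothetical, and it assumes the usual `pulumi_kubernetes` setup. A k8s resource created without `opts=ResourceOptions(provider=...)` falls back to the ambient default provider, i.e. whatever local kubeconfig is lying around, which is how `localhost:8080` can sneak in.)

```python
import json

import pulumi
import pulumi_eks as eks
import pulumi_kubernetes as k8s

cluster = eks.Cluster("bluewhale-staging")  # hypothetical cluster name

k8_provider = k8s.Provider(
    "eks-k8s",
    kubeconfig=cluster.kubeconfig.apply(json.dumps),
)

# Correct: resource is bound to the EKS cluster's provider.
ns_ok = k8s.core.v1.Namespace(
    "app-ns",
    opts=pulumi.ResourceOptions(provider=k8_provider),
)

# Bug to look for: no provider option, so Pulumi uses the default
# Kubernetes provider (local kubeconfig / localhost), not the EKS cluster.
ns_bad = k8s.core.v1.Namespace("app-ns-unbound")
```

Worth grepping every `k8s.` resource constructor for a `ResourceOptions(provider=...)`.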
Ok, I’ll poke around a bit on both of those points. Thank you. Strange, though, that I could stand up a fresh stack with essentially the same configuration (except for differences in the manifests/Helm values applied for microservices). I wonder if misuse of the `k8_provider` handle might lead to issues with later iterations even if an initial deploy works fine? Does that check out at all?
it's a pretty "out there" scenario, but if you missed adding the `provider` setting to a k8s resource, had a local k8s cluster running and accessible at localhost, ran `pulumi up` successfully, stopped the local cluster, and then ran `pulumi up` again, i think it could result in the scenario you're seeing. curious if you can successfully run `pulumi refresh`