# kubernetes
Hi! I have k8s deployed on DigitalOcean via Pulumi, and I also deploy my Helm releases onto said cluster using Pulumi. If I try to update an existing Helm release, or create a new one, I get the following when I run `pulumi up`:
```
error: can't create Helm Release with unreachable cluster: unable to load schema information from the API server: the server has asked for the client to provide credentials
```
I have my digital ocean access token set up, and I initially created the Helm release with Pulumi, so I'm not clear as to why it suddenly isn't working. Any guidance or help would be massively appreciated!
Are you using the default Kubernetes provider or did you give it specific credential information?
I give it specific information, at least I mean to:
```python
import pulumi_digitalocean as do
from pulumi import ResourceOptions
from pulumi_kubernetes import Provider, ProviderArgs
from pulumi_kubernetes.helm.v3 import Release, ReleaseArgs

k8s = do.KubernetesCluster("ptb-k8s", <etc>)

kube_provider = Provider("ptb-k8s-provider", args=ProviderArgs(kubeconfig=k8s.kube_configs[0].raw_config))

opts = ResourceOptions(provider=kube_provider, depends_on=[kube_provider, k8s])

nginx_release_args = ReleaseArgs(name="nginx-ingress", <etc>)

nginx_release = Release("nginx-ingress-controller", args=nginx_release_args, opts=opts)
```
I'm not sure if the `ResourceOptions` are defined correctly, given that it isn't working
That looks reasonable.
Is the kubeconfig there defined with a proper token? I'm not familiar with DO in this regard
But it'll use the kubeconfig that is in that provider
Okay yeah I tried creating something that isn't a Helm Release and that didn't work for similar reasons. Guess the provider is somehow misconfigured
I pulled the kubeconfig that the code above uses and tried using it as the kubeconfig locally; it didn't work, which makes sense. I updated the token in the kubeconfig, ran a `kubectl get pods` while pointing to it as the kubeconfig, and it worked locally. So this means that somehow, the token in the kubeconfig in my code is incorrect... not quite sure how that happens.
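In case it helps anyone reproduce that check: here's a rough sketch for digging the kubeconfig(s) out of a `pulumi stack export --show-secrets` JSON dump so you can point kubectl at them locally. The key names it matches (`rawConfig` / `kubeconfig`) are assumptions and may vary between provider versions:

```python
import json

def find_kubeconfigs(stack: dict) -> list[str]:
    """Walk an exported Pulumi stack (the JSON from `pulumi stack export
    --show-secrets`) and collect any string values stored under
    kubeconfig-like keys. Key names are assumptions, so match loosely."""
    hits: list[str] = []

    def walk(node):
        if isinstance(node, dict):
            for key, value in node.items():
                normalized = key.lower().replace("_", "")
                if normalized in ("rawconfig", "kubeconfig") and isinstance(value, str):
                    hits.append(value)
                else:
                    walk(value)
        elif isinstance(node, list):
            for item in node:
                walk(item)

    walk(stack.get("deployment", {}))
    return hits

# e.g. after exporting the stack to a file:
#   for cfg in find_kubeconfigs(json.load(open("mystack"))): print(cfg)
```

Then write each result to a file and run `KUBECONFIG=that-file kubectl get pods` to see whether the token Pulumi has in state still works.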
Alright so in case anyone finds this thread because they've run into something similar, here is how I resolved it:
1. `pulumi stack export --show-secrets --file mystack`
2. Open up `mystack` and replace all instances of the previous, incorrect token with a new one that you know works (I just used vim fwiw), then save and close the file.
3. `pulumi stack import --file mystack`
And now my resources that touch the kubernetes server (secrets, helm charts, etc) do not run into the "unreachable cluster" error.
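Step 2 can also be scripted instead of hand-editing in vim. A minimal sketch, assuming the stale token appears verbatim as plain text in the exported file (the token strings below are placeholders):

```python
def patch_token(path: str, old_token: str, new_token: str) -> int:
    """Replace every occurrence of a stale token in an exported stack file,
    in place. Returns how many occurrences were swapped so you can sanity-check
    that it actually found the old token before running `pulumi stack import`."""
    with open(path) as f:
        text = f.read()
    count = text.count(old_token)
    with open(path, "w") as f:
        f.write(text.replace(old_token, new_token))
    return count

# e.g. patch_token("mystack", "stale-token-from-state", "fresh-working-token")
```

If it returns 0, the old token probably isn't stored as a plain string (or you exported without `--show-secrets`), so check the file before importing.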