# kubernetes

freezing-vase-18205

01/26/2024, 6:58 PM
Hi all, I am a bit at a loss. My `pulumi up` seems not to pick the right cluster or context. When doing `k get pods` I successfully authenticate with the cluster and see the pods just fine, but when doing `pulumi up` I get:
```
error: can't create Helm Release with unreachable cluster: unable to load schema information from the API server: the server has asked for the client to provide credentials
```
Would love to know how to see which cluster/context Pulumi selects… I raised https://github.com/pulumi/pulumi-kubernetes/issues/2773 to at least understand how to troubleshoot this further.

cuddly-computer-18851

01/27/2024, 12:03 AM
Are you creating an explicit provider, or using the default provider and config file settings?

freezing-vase-18205

01/27/2024, 7:20 PM
@cuddly-computer-18851 I am using the default config (no explicit provider config), hence I am curious what exactly the provider picks up.
I also tried configuring the provider explicitly, but to no avail: same error.
Set your Pulumi config vars (`kubernetes:<var>`). The possible options are listed here: https://www.pulumi.com/registry/packages/kubernetes/api-docs/provider/
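For example, a stack settings file could pin the provider to a specific kubeconfig and context (a minimal sketch; the context name and path here are hypothetical, and `kubernetes:context`/`kubernetes:kubeconfig` are among the provider options listed on that page):

```yaml
# Pulumi.<stack>.yaml
config:
  kubernetes:kubeconfig: /home/me/.kube/config
  kubernetes:context: my-cluster-context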

freezing-vase-18205

01/28/2024, 9:21 AM
Yes, I did all that. Without seeing which cluster/context gets used, it is impossible to say what is going on.

cuddly-computer-18851

01/28/2024, 9:33 AM
You specify which context to use.

freezing-vase-18205

01/28/2024, 9:39 AM
Thank you, but I did try that. As I said, for whatever reason the wrong context gets chosen. The question is how to check which context is picked up, and why.

cuddly-computer-18851

01/28/2024, 10:07 AM
What does including `--show-config` and `--debug` reveal when doing a preview?
Also, are you running pulumi via a container ala docker-compose or locally?

freezing-vase-18205

01/28/2024, 2:41 PM
The pulumi binary is installed natively on the Linux OS. Both flags reveal nothing that discloses which cluster connection details are used; I also used `-v 11` and the tracing options, but still nothing.
Found the issue. Apparently the clusters I deployed all used the same default kubeadm name for the k8s admin user, `kubernetes-admin`. When my automation merged several kubeconfigs into one, that had an interesting side effect on the authentication workflow. Once I renamed the users, it started to work well.
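The failure mode described above can be sketched in plain Python (hypothetical data, not the poster's actual automation). kubectl merges the files listed in `KUBECONFIG` with a "first file sets the key wins" rule, so when two clusters both name their admin user `kubernetes-admin`, the merged file resolves every context to the first file's credentials, and the second cluster rejects them:

```python
def merge_kubeconfigs(*configs):
    """First-wins merge of the named maps in kubeconfig-like dicts,
    mimicking how kubectl merges files listed in KUBECONFIG."""
    merged = {"clusters": {}, "users": {}, "contexts": {}}
    for cfg in configs:
        for section in merged:
            for name, value in cfg[section].items():
                merged[section].setdefault(name, value)  # first file wins
    return merged

# Two kubeadm-provisioned clusters, both using the default admin user name.
cluster_a = {
    "clusters": {"prod": {"server": "https://prod:6443"}},
    "users": {"kubernetes-admin": {"client-certificate-data": "CERT-FOR-PROD"}},
    "contexts": {"prod": {"cluster": "prod", "user": "kubernetes-admin"}},
}
cluster_b = {
    "clusters": {"staging": {"server": "https://staging:6443"}},
    "users": {"kubernetes-admin": {"client-certificate-data": "CERT-FOR-STAGING"}},
    "contexts": {"staging": {"cluster": "staging", "user": "kubernetes-admin"}},
}

merged = merge_kubeconfigs(cluster_a, cluster_b)
# The staging context silently resolves to prod's certificate, so the
# staging API server asks the client to provide (valid) credentials.
creds = merged["users"][merged["contexts"]["staging"]["user"]]
print(creds["client-certificate-data"])  # CERT-FOR-PROD
```

Renaming the users (e.g. `prod-admin`, `staging-admin`) before merging keeps each context bound to its own credentials.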