# general
Hello here. I’m having something really weird going on that I can’t understand. I recently created a new stack, then switched back to the first one to make some changes, and now it seems there’s something wrong with my user:
```
error: Plan apply failed: ingresses.extensions is forbidden: User "REDACTED" cannot create ingresses.extensions in the namespace "default": Required "container.ingresses.create" permission.
```
(the same problem occurs for a `pulumi refresh`, also permission related), given that:
- the user shown here is the right one, with more permissions than needed assigned to it (amongst them `container.ingresses.create`, which are more than enough)
- `gcloud auth` and `gcloud config` show that the right account is selected
- the exported key is also the right one for this user (I even compared the id with the one in the web console to be sure)
- the pulumi stack selected is the right one too

I definitely have something wrong here, but I can’t spot it, and I didn’t change anything related to the service account or auth.
Ok, I’ve finally found the problem. I didn’t switch the kubeconfig context.
It’s really error prone to have to change so many things in order to switch from one stack/cluster/project to another.
I think pulumi should be in charge of that at some point.
In the meantime I’ll write a tool to automate all of this, else I’ll go crazy soon enough. I’ll keep you posted 😉
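A tool like the one described above could start from something as simple as a function that spells out every ambient setting that has to change together. This is only a sketch of the idea, not Pulumi functionality: `Env` and `switchCommands` are hypothetical names, and the commands are the ones discussed in this thread.

```typescript
// Everything that must be switched together when moving between environments.
interface Env {
    stack: string;       // Pulumi stack name
    gcpAccount: string;  // gcloud account email
    gcpProject: string;  // gcloud project id
    kubeContext: string; // kubectl context name
}

// Produce the shell commands needed to fully switch environments, so that no
// ambient setting (gcloud account/project, kubectl context) is forgotten.
function switchCommands(env: Env): string[] {
    return [
        `pulumi stack select ${env.stack}`,
        `gcloud config set account ${env.gcpAccount}`,
        `gcloud config set project ${env.gcpProject}`,
        `kubectl config use-context ${env.kubeContext}`,
    ];
}
```

A wrapper could then run these via `child_process.execSync`, aborting on the first failure so the environment is never left half-switched.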
We have a work item (https://github.com/pulumi/pulumi/issues/1181) to make it clearer in the display output when we're picking up ambient config for your deployment target, to help avoid pitfalls like this. cc @lemon-spoon-91807 on that. It is possible to specify your kubeconfig in code, although unless I'm missing something this isn't as smooth as it could be. For example, you can do this:
```typescript
import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

const config = new pulumi.Config();
const kubeconfig = config.require("kubeconfig");
const provider = new k8s.Provider("k8s", { kubeconfig });
const service = new k8s.core.v1.Service(..., { provider });
```
This configures an explicit Kubernetes provider that uses the kubeconfig supplied via the Pulumi configuration system. All resources that use it (note the `..., { provider }` bit) will then target your kubeconfig, not the ambient one. You can similarly specify other provider settings. I'm not sure if there's a way to set the stack-wide config in the same manner, however, e.g.:
```shell
$ cat kubeconfig.json | \
    pulumi config set kubernetes:kubeconfig --
```
@creamy-potato-29402 @gorgeous-egg-16927 Is this currently possible?
Yeah, that's an annoying issue that frequently bites people using the k8s provider. I think you can set the parent object to automatically use the same provider, but it's not super convenient. We plan to address that more cleanly in the near future.
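To make the lookup order concrete, here is a toy model of how a resource ends up with a provider: an explicit `{ provider }` wins, otherwise the parent's provider is inherited, otherwise the ambient kubeconfig is used. This is an illustration of the behavior described above, not Pulumi's actual implementation, and all names in it are hypothetical.

```typescript
// Stand-in for a configured k8s.Provider.
type Provider = string;

interface ResourceOpts {
    provider?: Provider;                 // explicit provider on the resource
    parent?: { provider?: Provider };    // component parent, if any
}

// Resolve which provider a resource would use, in precedence order:
// explicit > inherited from parent > ambient kubeconfig.
function resolveProvider(opts: ResourceOpts, ambient: Provider): Provider {
    if (opts.provider) {
        return opts.provider;
    }
    if (opts.parent && opts.parent.provider) {
        return opts.parent.provider;
    }
    return ambient;
}
```

The annoyance in the thread is exactly the last branch: forget to pass a provider (directly or via a parent) and the resource silently targets whatever the ambient kubeconfig points at.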
Thing is, to configure k8s, I generate a kubeconfig after having provisioned my cluster, like the example you provide for GCP, which actually relies on the account currently selected by gcloud.
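The kubeconfig generation step can be sketched as a plain string template over the cluster's outputs. `makeKubeconfig` is a hypothetical helper (in a real program the inputs would be `Output`s from a `gcp.container.Cluster`, combined with `pulumi.all(...).apply(...)`); the point is the `auth-provider: gcp` entry at the end, which is what ties the kubeconfig back to the ambient gcloud account.

```typescript
// Hypothetical helper: render a kubeconfig for a GKE cluster from its
// name, endpoint, and CA certificate. The gcp auth-provider entry means
// credentials are still resolved from the ambient gcloud account at
// runtime, which is exactly the coupling discussed in this thread.
function makeKubeconfig(name: string, endpoint: string, caCert: string): string {
    const context = `${name}-ctx`;
    return `apiVersion: v1
kind: Config
clusters:
- name: ${name}
  cluster:
    certificate-authority-data: ${caCert}
    server: https://${endpoint}
contexts:
- name: ${context}
  context:
    cluster: ${name}
    user: ${name}-user
current-context: ${context}
users:
- name: ${name}-user
  user:
    auth-provider:
      name: gcp
`;
}
```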
I’ll come up with something 🙂