# general
s
Hi. I want to set the `kubectl` context for a Kubernetes resource to make sure it’s applied to the correct cluster (configured in the uniquely named context). How can I avoid Pulumi *recreating*/replacing the resource when I add `new kubernetes.Provider(..., {context: ...})` to it?
Is there an idiomatic way? The only solution I’ve found is to change the `provider` on the resource in the Pulumi stack JSON file.
That’s the MWE:
```typescript
import kubernetes = require("@pulumi/kubernetes")

let k8sProvider = new kubernetes.Provider("test-provider", {})

new kubernetes.core.v1.Namespace("test-namespace", {}, {
    // provider: k8sProvider,  // add after first `pulumi update`
})
```
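A rough sketch of the change in question, assuming the kubeconfig context is called `my-cluster-context` (a placeholder); adding the explicit provider after the first `pulumi update` is what currently triggers the replacement:
```typescript
import kubernetes = require("@pulumi/kubernetes")

// Explicit provider pinned to one kubeconfig context
// ("my-cluster-context" is a placeholder for the uniquely named context).
let k8sProvider = new kubernetes.Provider("test-provider", {
    context: "my-cluster-context",
})

// Passing the explicit provider here, after the namespace was first created
// with the default provider, is what makes Pulumi plan a replacement.
new kubernetes.core.v1.Namespace("test-namespace", {}, {
    provider: k8sProvider,
})
```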
w
Is this just making the provider that’s being used more explicit, while still targeting the same cluster? I know we have a goal to be able to do that kind of change without a replace, but I can’t recall whether there’s a way to do that currently. Cc @microscopic-florist-22719 who might know more.
s
Using the Kubernetes provider serves several purposes:
1. To not accidentally update another cluster. I sometimes forget to run `kubectl config use-context ...` beforehand.
2. To use different RBAC roles (“least privilege”) within one Pulumi project.
3. We normally use `aws-vault` in front of the `exec` plugin (`aws-iam-authenticator`). Since it prompts for an MFA code and Pulumi doesn’t seem to support stdin, I’m using a context with only `aws-iam-authenticator` & `aws-vault` before running `pulumi`.
c
FYI, we set the context in the `Pulumi.<stack>.yaml` file per the docs. Have you tried that?
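A minimal sketch of what that could look like, assuming a stack named `dev` and a kubeconfig context named `my-cluster-context` (both placeholders); the `kubernetes:context` config key is what the default Kubernetes provider reads:
```yaml
# Pulumi.dev.yaml (per-stack configuration; "dev" is a placeholder stack name)
config:
  # Point the default Kubernetes provider at a specific kubeconfig context,
  # e.g. written by: pulumi config set kubernetes:context my-cluster-context
  kubernetes:context: my-cluster-context
```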
s
@cool-egg-852 Didn’t know that. Thanks. That will solve 3. & mostly 1., but not 2.
c
In general we have no way of knowing that two kube contexts point at the same cluster. IP addresses can be allocated to multiple clusters, and so can certs.
There’s an upstream issue to give clusters unique IDs, but it’s been open for years.
c
@stocky-island-3676 What do you mean by using different RBAC roles in one pulumi project?
s
E.g. setting up the namespaces themselves with a K8s user that has the `cluster-admin` ClusterRole, then applying monitoring with a K8s user with lowered rights, e.g. an `admin` RoleBinding (fixed to a `monitoring` namespace).
To be fair, that’s my current intention. Don’t know yet if it is feasible, though.
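A rough sketch of that split, assuming two kubeconfig contexts named `admin-context` and `monitoring-context` (both hypothetical names) that map to differently privileged K8s users:
```typescript
import kubernetes = require("@pulumi/kubernetes")

// Provider backed by a context whose user holds the cluster-admin ClusterRole
// ("admin-context" is a hypothetical kubeconfig context name).
let adminProvider = new kubernetes.Provider("admin-provider", {
    context: "admin-context",
})

// Provider backed by a context whose user only has an admin RoleBinding
// scoped to the monitoring namespace ("monitoring-context" is hypothetical).
let monitoringProvider = new kubernetes.Provider("monitoring-provider", {
    context: "monitoring-context",
})

// Cluster-scoped setup goes through the privileged provider...
let monitoringNs = new kubernetes.core.v1.Namespace("monitoring", {
    metadata: { name: "monitoring" },
}, { provider: adminProvider })

// ...while namespaced workloads only ever use the restricted one.
new kubernetes.core.v1.ConfigMap("prometheus-config", {
    metadata: { namespace: monitoringNs.metadata.name },
    data: { "prometheus.yml": "scrape_configs: []" },
}, { provider: monitoringProvider })
```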
c
Any reason why though? If someone is running the project, and part of it requires `cluster-admin`, then they are already trusted enough to have `cluster-admin`.
s
For decoupling the error vectors. After I’ve set up the namespaces, I will mostly work only inside the namespace objects. With that, I can more safely apply a new K8s project with many resources that I may not have fully reviewed yet, so I’d rather restrict its abilities inside the K8s cluster. It’s a habit I started with Terraform providers and their aliases.
c
Maybe do your namespaces in a separate project? Sucks, but then it also ensures only a specific role can execute what is inside of that project.
s
That’s an option. Another one would be to use `namespace` in the K8s provider (when the issue is fixed).
c
This is a really, really common pattern.
You want to grant ~most people enough rights to deal with their specific part of the stack, and no more.
We do support this, and the support for this is going to get better over time, even in the next sprint.
But, @stocky-island-3676 correct me if I’m wrong — the issue you have right now is that you want to make the provider explicit, and you can’t without replacing the resource.
This is a fundamental problem with Kubernetes, right now. We can’t tell which contexts uniquely identify which clusters.
So there is no way to know that we should not replace.
c
Not sure I follow on that one. A context is just a combination of a user and a cluster, IIRC, isn’t it?
Therefore reading the context to see which cluster it’s associated with would be possible.
c
That might be a local view of it, but an IP address/URL and cert are not sufficient to determine a cluster uniquely.
In many cloud providers, the address of your API server is actually a load balancer that routes to multiple clusters based on, say, the cert. And that cert can be rotated at any time, and is by many cloud providers.
So, even if you have both a cert and an address, that is only enough information to connect to *a* cluster, not to identify *the* cluster.
c
Ah. My company (not related to Dominik’s) has only experienced k8s through kops.
c
So there are a lot of issues like this: https://github.com/kubernetes/kubernetes/issues/2292
I don’t agree with Clayton here. Without some unique cluster ID you have effectively no way of knowing which cluster you’re talking to, and DNS/IP/certs are not enough to determine that.
IMO this is one of the reasons federation was so attractive, but that (also IMO) is a high-tech solution to a problem that should be very low-tech: just give every cluster a unique ID.
Anyway: net/net though it pains me to say it, I don’t know that we can be smarter here in general.
s
@creamy-potato-29402 You were right: I wanted to switch from the default, implicit K8s provider to an explicit one. Thanks for the detailed insight. So, without a unique K8s cluster ID, the only safe option for Pulumi is to offer to rename/move resources manually. Or am I missing something?
c
That’s correct.
@stocky-island-3676 soon we will have the ability to “adopt” existing resources, which might be what you want. cc @microscopic-florist-22719
but until that lands (a couple weeks) it’s better just to re-create them.
the good news is that if you’re not specifying `.metadata.name` manually, we will create new versions before deleting the old ones.
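A small sketch of that difference, with hypothetical resource names; when Pulumi auto-generates the Kubernetes object name, a replacement can create the new object before the old one is deleted, whereas a fixed `.metadata.name` forces delete-before-create:
```typescript
import kubernetes = require("@pulumi/kubernetes")

// Auto-named: Pulumi appends a random suffix to the Kubernetes name, so the
// replacement namespace can be created before the old one is deleted.
new kubernetes.core.v1.Namespace("auto-named-namespace", {})

// Explicitly named: two namespaces called "monitoring" cannot coexist, so a
// replacement has to delete the old resource before creating the new one.
new kubernetes.core.v1.Namespace("fixed-name-namespace", {
    metadata: { name: "monitoring" },
})
```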