# general
I switched computers and ran `pulumi up` in a stack that has an existing EKS cluster running. I received a bunch of these errors:
```
error: Preview failed: unable to read kubectl config: invalid configuration: no configuration has been provided
```
I thought Pulumi automatically creates the config file for connecting to the cluster. Do I need to explicitly provide it? Reading through https://pulumi.io/quickstart/aws/tutorial-eks/, it does not appear that you need to set a k8s config file.
@full-dress-10026 that’s saying that you’re not pointed at a kubeconfig file.
You either need to use `stack export` to write the kubeconfig file to a place where we can pick it up automatically, or you need to use a first-class provider, as we do in the examples.
you should not need to “set” the config.
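For the ambient-pickup route, a minimal sketch (assumptions: your program creates the cluster as `cluster` and exports its kubeconfig as a stack output named `kubeconfig`):

```typescript
import * as eks from "@pulumi/eks";

const cluster = new eks.Cluster("my-cluster");

// Exporting the kubeconfig as a stack output lets you write it to disk
// after deployment, so kubectl and ambient lookups can find it:
//   pulumi stack output kubeconfig > kubeconfig.yml
//   export KUBECONFIG=$PWD/kubeconfig.yml
export const kubeconfig = cluster.kubeconfig;
```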
What is a "first-class" provider? My code looks just like the example.
```typescript
const k8sProvider = new k8s.Provider("k8s", {kubeconfig: cluster.kubeconfig.apply(JSON.stringify)});

const kafkaChart = new k8s.helm.v2.Chart("kafka", {
    path: "../cp-helm-charts",
    namespace: "kafka",
}, {providers: {kubernetes: k8sProvider}});
```
It’s the `k8s.Provider`; you can make one or use the one exported by the EKS library.
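For example, a sketch of using the provider exported by the EKS library instead of constructing one by hand (the cluster and namespace names here are placeholders):

```typescript
import * as eks from "@pulumi/eks";
import * as k8s from "@pulumi/kubernetes";

const cluster = new eks.Cluster("my-cluster");

// The EKS library exposes a ready-made first-class provider on the
// cluster object, so you don't need to build one from cluster.kubeconfig:
const ns = new k8s.core.v1.Namespace("kafka", {
    metadata: { name: "kafka" },
}, { provider: cluster.provider });
```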
Ok, I don't have a kubeconfig in my code. I pass `k8sProvider` to all my k8s components.
that should work.
that error says you’re not passing the kubeconfig in.
so something is wrong.
Why would it think I should pass in a kubeconfig?
You can’t talk to a kube cluster without that information.
a provider is a thin wrapper around kubeconfig files.
You are passing one in with the provider, or you’re picking it up ambiently. One of those two things is true.
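A sketch contrasting the two cases (the resource names and the `myKubeconfig` value are hypothetical):

```typescript
import * as k8s from "@pulumi/kubernetes";

// Assumed: a kubeconfig string obtained elsewhere, e.g. from cluster.kubeconfig.
declare const myKubeconfig: string;
const k8sProvider = new k8s.Provider("k8s", { kubeconfig: myKubeconfig });

// Explicit: this resource talks to whatever cluster the provider's
// kubeconfig points at, regardless of the local environment.
const explicitNs = new k8s.core.v1.Namespace("explicit-ns", {},
    { provider: k8sProvider });

// Ambient: no provider given, so Pulumi falls back to the local
// kubeconfig ($KUBECONFIG or ~/.kube/config). If neither is present,
// you get "no configuration has been provided".
const ambientNs = new k8s.core.v1.Namespace("ambient-ns", {});
```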
Oh I see. There was one service that did not have it set. After setting it, however, I receive these:
```
kubernetes:core:Service (app-ions-service):
    error: Plan apply failed: services "app-ions-service" already exists
```
Guessing I need to recreate the service for some reason?
that is saying it already exists
so you already created it somehow.
Yes - the service has already been `pulumi up`'ed. I switched computers today. My previous computer must've had a kubeconfig set up in `~/.kube`.
that is quite likely, yes.
So, I added `{provider: k8sProvider}` to an existing service, resulting in the above.
yes, but in general we can’t tell whether you’re pointed at the same cluster or not.
like, in the kubeconfig you’re given a URI or IP address that’s the location and auth info — neither of those need be unique to a cluster.
So if you created a resource on the cluster from another computer somehow, and you now point it at the same cluster, there’s no way in general to know they’re actually the same resource on the same cluster.
I see. Is there a way to tell Pulumi that this is the same cluster as before? Or does it make more sense to just assign a new name to the Deployment and let it recreate it?
If it’s a service, I’d just delete it and then run `pulumi up`.
in the ether there is an issue about this in the kubernetes/kubernetes repository, but somehow having a unique ID for a cluster is a controversial idea. 🙂
That works for this case because it's a dev environment. If this was in production, that wouldn't work. How would you ensure uptime in the case that you forgot to pass an explicit provider?
you’d probably use importers, once they are implemented.
you could also rename the service, create that, and then delete the old one out of band.
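A sketch of that rename approach (the namespace, selector, and port here are placeholders): give the resource a new Kubernetes name so `pulumi up` creates a fresh object instead of colliding with the existing one, then remove the orphaned original out of band.

```typescript
import * as k8s from "@pulumi/kubernetes";

// Assumed: the first-class provider defined earlier in the program.
declare const k8sProvider: k8s.Provider;

// Renamed from "app-ions-service": Pulumi now creates a brand-new Service
// rather than failing with "already exists" on the old one.
const svc = new k8s.core.v1.Service("app-ions-service-v2", {
    metadata: { name: "app-ions-service-v2", namespace: "kafka" },
    spec: {
        selector: { app: "app-ions" },
        ports: [{ port: 9092 }],
    },
}, { provider: k8sProvider });

// Afterwards, delete the old object directly against the cluster:
//   kubectl delete service app-ions-service -n kafka
```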