# general
b
DigitalOcean’s managed Kubernetes service rotates its certificates every week. I authenticate with my DigitalOcean access token and can still reach my pods, but this isn’t working with Pulumi: I’m unable to make updates to the cluster or to my applications. When I attempt a refresh I get this error:
```
unable to load schema information from the API server: the server has asked for the client to provide credentials
```
From what I’ve read, Pulumi authenticates to Kubernetes using either ~/.kube/config or $KUBECONFIG. Given that I have a recently updated kubeconfig and can connect to my cluster, I don’t understand why Pulumi is failing. I suspect the issue is the way I export the provider in the project that creates the base cluster:
```typescript
export const kubeconfig = cluster.kubeConfigs[0].rawConfig
const provider = new kubernetes.Provider(project_name, { kubeconfig })
```
I removed the cluster bootstrapping (container registry login, etc.) that depended on the provider and ran `pulumi update -v 5 --debug`. I hoped this would update the kubeconfig used by the other projects that deploy my apps. All I got from the attempted update was a vague error and a kubeconfig file written to output:
```
error: an unhandled error occurred: Program exited with non-zero exit code: -1
```
What is the recommended way to handle DigitalOcean’s Kubernetes certificate rotation, both for managing the base cluster and for the applications that deploy to it? I took a look at the Terraform documentation here but am not sure how to translate it to Pulumi: https://www.terraform.io/docs/providers/do/r/kubernetes_cluster.html#kubernetes-terraform-provider-example I’ve been stuck on this for a while; any assistance would be greatly appreciated :-)
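For what it’s worth, the Terraform example linked above avoids the rotated client certificate by building a kubeconfig that authenticates with the long-lived DigitalOcean API token instead. A rough, untested translation of that idea to Pulumi TypeScript might look like this — `buildKubeconfig` and `doToken` are illustrative names I made up, not part of any Pulumi API:

```typescript
// Hedged sketch: build a kubeconfig that authenticates with the DO API
// token rather than the weekly-rotated client certificate. The helper is
// a plain string template; the Pulumi wiring below it is an assumption.
function buildKubeconfig(
  clusterName: string,
  endpoint: string,
  caCert: string,
  doToken: string
): string {
  return `apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: ${caCert}
    server: ${endpoint}
  name: ${clusterName}
contexts:
- context:
    cluster: ${clusterName}
    user: ${clusterName}-admin
  name: ${clusterName}
current-context: ${clusterName}
users:
- name: ${clusterName}-admin
  user:
    token: ${doToken}
`;
}

// Hypothetical wiring (requires @pulumi/pulumi, @pulumi/digitalocean and
// @pulumi/kubernetes; untested, and `doToken` would come from config):
//
// const kubeconfig = pulumi
//   .all([cluster.name, cluster.endpoint, cluster.kubeConfigs[0].clusterCaCertificate])
//   .apply(([name, endpoint, ca]) => buildKubeconfig(name, endpoint, ca, doToken));
// const provider = new kubernetes.Provider(project_name, { kubeconfig });
```

Since the token doesn’t rotate with the certificates, the exported kubeconfig wouldn’t go stale between deploys.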
g
@broad-dog-22463 thoughts here?
b
I'm thinking that I'll need to use doctl to generate a fresh kubeconfig each time my app deploy pipelines run, pass the path to it into my pulumi update command as an env var, and load the file as the provider rather than depending upon the one exported in the main cluster stack.
b
Would it be better if we provided a way to get the kubeconfig from code, so that it refreshed each time the certificate changes during a refresh?
b
That sounds good to me.