able-coat-23390
01/10/2025, 6:11 PM
Diagnostics:
pulumi:pulumi:Stack (aws-eks-python-dev):
error: kubernetes:yaml/v2:ConfigGroup resource 'proxy-config' has a problem: configured Kubernetes cluster is unreachable: unable to load Kubernetes client configuration from kubeconfig file. Make sure you have:
• set up the provider as per https://www.pulumi.com/registry/packages/kubernetes/installation-configuration/
invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
I am attempting to use the "apply" method so that it grabs the information after creation.
# Wait for cluster creation and then supply the outputs for nodegroup
eks_cluster_ca = eks_cluster.certificate_authority.apply(lambda ca: ca)
eks_cluster_endpoint = eks_cluster.endpoint.apply(lambda endpoint: endpoint)
eks_cluster_name = eks_cluster.name.apply(lambda name: name)
node_group = my_nodegroups.create_nodegroup(cluster_name, aws_region, vpc, eks_cluster, eks_cluster_ca, eks_cluster_endpoint, eks_cluster_name)
# Generate and export the kubeconfig
kubeconfig = pulumi.Output.all(eks_cluster.name, eks_cluster.endpoint, eks_cluster.certificate_authority).apply(
lambda args: json.dumps({
"apiVersion": "v1",
"clusters": [{
"cluster": {
"server": args[1],
"certificate-authority-data": args[2]["data"],
},
"name": "kubernetes",
}],
"contexts": [{
"context": {
"cluster": "kubernetes",
"user": "aws",
},
"name": "aws",
}],
"current-context": "aws",
"kind": "Config",
"users": [{
"name": "aws",
"user": {
"exec": {
"apiVersion": "<http://client.authentication.k8s.io/v1alpha1|client.authentication.k8s.io/v1alpha1>",
"command": "aws",
"args": ["eks", "get-token", "--cluster-name", args[0]],
},
},
}],
})
)
kubernetes.apply_configmap_patch(kubeconfig, eks_cluster)
And then I set the Kubernetes resources to depend on the cluster using ResourceOptions:
def apply_configmap_patch(mykubeconfig, eks_cluster):
    # Set configurations
    proxy_hostname = config.require("proxy_host")
    proxy_port = config.require("proxy_port")
    no_proxy = config.require("no_proxy")
    k8s_provider = k8s.Provider(
        "kubernetes_auth",
        kubeconfig=mykubeconfig
    )
    # Create a ConfigMap; format the YAML string itself, and pass opts
    # inside the constructor call
    config_map = yaml.ConfigGroup(
        "proxy-config",
        yaml="""
apiVersion: v1
kind: ConfigMap
metadata:
  name: proxy-environment-variables
  namespace: kube-system
data:
  HTTP_PROXY: {proxy_hostname}:{proxy_port}
  HTTPS_PROXY: {proxy_hostname}:{proxy_port}
  NO_PROXY: {no_proxy}
""".format(proxy_hostname=proxy_hostname, proxy_port=proxy_port, no_proxy=no_proxy),
        opts=pulumi.ResourceOptions(
            provider=k8s_provider,
            depends_on=[eks_cluster]
        )
    )
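One subtle point in the snippet above: str.format must be applied to the YAML string itself before it is handed to ConfigGroup, not to the resource object. A minimal sketch of just that templating step, with placeholder proxy values standing in for the real config:

```python
# Template for the ConfigMap manifest; {proxy_hostname} etc. are filled in
# before the string is passed to the Kubernetes provider.
manifest_template = """
apiVersion: v1
kind: ConfigMap
metadata:
  name: proxy-environment-variables
  namespace: kube-system
data:
  HTTP_PROXY: {proxy_hostname}:{proxy_port}
  HTTPS_PROXY: {proxy_hostname}:{proxy_port}
  NO_PROXY: {no_proxy}
"""

# Placeholder values; in the real program these come from config.require(...)
manifest = manifest_template.format(
    proxy_hostname="http://proxy.example.com",
    proxy_port="3128",
    no_proxy="localhost,127.0.0.1",
)
```

Calling .format on the ConfigGroup return value instead (as in the original snippet) fails, because the resource object has no format method and the braces in the YAML are never substituted.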
Thank you for any assistance
able-coat-23390
01/10/2025, 7:15 PM
quick-house-41860
01/13/2025, 11:49 AM
• pulumi-eks has a kubeconfig output on the Cluster component.
• pulumi-aws actually has a function to retrieve a short-lived token you can directly use for interacting with the EKS cluster (aws.eks.getClusterAuth). The only caveat here is that it has a 15-minute expiration set on the AWS side.
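To illustrate both suggestions, here is a minimal sketch assuming the pulumi_eks and pulumi_aws Python packages; "my-cluster" is a placeholder name, and this is infrastructure code that only runs inside a Pulumi program:

```python
import pulumi
import pulumi_eks as eks
import pulumi_aws as aws

# pulumi-eks assembles the kubeconfig for you, so there is no need to
# build the JSON by hand as in the original question.
cluster = eks.Cluster("my-cluster")
pulumi.export("kubeconfig", cluster.kubeconfig)

# Alternatively, for an existing cluster, aws.eks.get_cluster_auth returns
# a short-lived token (15-minute expiration on the AWS side) that can be
# used directly against the Kubernetes API.
auth = aws.eks.get_cluster_auth(name="my-cluster")
token = auth.token
```

The kubeconfig route is usually simpler for provisioning, since the generated config uses the aws CLI's exec plugin and refreshes tokens automatically; the getClusterAuth route avoids the CLI dependency at the cost of the fixed expiry.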
I'm mostly interested in whether there's a way we could simplify EKS authentication here 🤔
able-coat-23390
01/15/2025, 4:00 PM
able-coat-23390
01/15/2025, 4:02 PM
quick-house-41860
01/16/2025, 7:38 AM