# kubernetes
a
Hello, I am using Pulumi for the first time, so sorry if this is a simple thing, but I am hoping someone can assist. I am deploying an EKS cluster with the AWS provider and need to apply a ConfigMap patch to the cluster after it gets created. However, upon `pulumi up` I receive:
```
Diagnostics:
  pulumi:pulumi:Stack (aws-eks-python-dev):
    error: kubernetes:yaml/v2:ConfigGroup resource 'proxy-config' has a problem: configured Kubernetes cluster is unreachable: unable to load Kubernetes client configuration from kubeconfig file. Make sure you have:

         • set up the provider as per https://www.pulumi.com/registry/packages/kubernetes/installation-configuration/

     invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
```
I am attempting to use the `apply` method so that it grabs the information after the cluster is created.
```python
    # Wait for cluster creation and then supply the outputs for the node group
    eks_cluster_ca = eks_cluster.certificate_authority.apply(lambda ca: ca)
    eks_cluster_endpoint = eks_cluster.endpoint.apply(lambda endpoint: endpoint)
    eks_cluster_name = eks_cluster.name.apply(lambda name: name)
    node_group = my_nodegroups.create_nodegroup(cluster_name, aws_region, vpc, eks_cluster, eks_cluster_ca, eks_cluster_endpoint, eks_cluster_name)

    # Generate and export the kubeconfig
    kubeconfig = pulumi.Output.all(eks_cluster.name, eks_cluster.endpoint, eks_cluster.certificate_authority).apply(
        lambda args: json.dumps({
            "apiVersion": "v1",
            "clusters": [{
                "cluster": {
                    "server": args[1],
                    "certificate-authority-data": args[2]["data"],
                },
                "name": "kubernetes",
            }],
            "contexts": [{
                "context": {
                    "cluster": "kubernetes",
                    "user": "aws",
                },
                "name": "aws",
            }],
            "current-context": "aws",
            "kind": "Config",
            "users": [{
                "name": "aws",
                "user": {
                    "exec": {
                        "apiVersion": "client.authentication.k8s.io/v1alpha1",
                        "command": "aws",
                        "args": ["eks", "get-token", "--cluster-name", args[0]],
                    },
                },
            }],
        })
    )

    kubernetes.apply_configmap_patch(kubeconfig, eks_cluster)
```
And then I set the Kubernetes resources to be dependent on the cluster using resource opts:
```python
def apply_configmap_patch(mykubeconfig, eks_cluster):
    # Set configurations
    proxy_hostname = config.require("proxy_host")
    proxy_port = config.require("proxy_port")
    no_proxy = config.require("no_proxy")

    k8s_provider = k8s.Provider(
        "kubernetes_auth",
        kubeconfig=mykubeconfig
        )

    # Create a ConfigMap
    config_map = yaml.ConfigGroup(
        "proxy-config",
        yaml="""
        apiVersion: v1
        kind: ConfigMap
        metadata:
            name: proxy-environment-variables
            namespace: kube-system
        data:
            HTTP_PROXY: {proxy_hostname}:{proxy_port}
            HTTPS_PROXY: {proxy_hostname}:{proxy_port}
            NO_PROXY: {no_proxy}
        """
    ).format(proxy_hostname=proxy_hostname, proxy_port=proxy_port, no_proxy=no_proxy),
    opts=pulumi.ResourceOptions(
        provider=k8s_provider,
        depends_on=eks_cluster
        )
```
Thank you for any assistance
Never mind, I believe I figured out my issue: I had put the `opts` outside of the ConfigGroup resource :)
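[Editor's note] For anyone hitting the same error: the fix is to pass `opts=pulumi.ResourceOptions(...)` as an argument *inside* the `yaml.ConfigGroup(...)` call (and to call `.format` on the YAML string itself, not on the resource). A minimal, Pulumi-free sketch of the templating step — the function name `render_proxy_configmap` and the sample values are illustrative, not from the thread:

```python
import textwrap

def render_proxy_configmap(proxy_hostname, proxy_port, no_proxy):
    # Build the templated ConfigMap YAML. In the real program this string
    # is passed as the `yaml=` argument of yaml.ConfigGroup, together with
    # opts=pulumi.ResourceOptions(provider=..., depends_on=[...]) inside
    # the same call.
    return textwrap.dedent("""\
        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: proxy-environment-variables
          namespace: kube-system
        data:
          HTTP_PROXY: {proxy_hostname}:{proxy_port}
          HTTPS_PROXY: {proxy_hostname}:{proxy_port}
          NO_PROXY: {no_proxy}
        """).format(proxy_hostname=proxy_hostname,
                    proxy_port=proxy_port,
                    no_proxy=no_proxy)

# Hypothetical proxy settings, for illustration only
rendered = render_proxy_configmap("http://proxy.internal", "3128",
                                  "localhost,10.0.0.0/8")
```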
q
Just out of curiosity, I see you're manually assembling the kubeconfig. Are you using the pulumi-aws or the pulumi-eks provider to create the cluster?
• pulumi-eks has a `kubeconfig` output on the `Cluster` component.
• pulumi-aws actually has a function to retrieve a short-lived token you can use directly for interacting with the EKS cluster (`aws.eks.getClusterAuth`). The only caveat here is that it has a 15-minute expiration set on the AWS side.
I'm mostly interested in whether there's a way we could simplify EKS authentication here 🤔
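[Editor's note] To illustrate the trade-off between the two approaches (a sketch, not pulumi-aws API code — the cluster details and token value are hypothetical): a kubeconfig built from a `getClusterAuth`-style token embeds a static bearer token that stops working when the token expires, whereas the exec-based kubeconfig earlier in the thread fetches a fresh token on every kubectl/client call.

```python
import json

def static_token_kubeconfig(cluster_name, endpoint, ca_data, token):
    # Kubeconfig that embeds a short-lived bearer token directly (the shape
    # you'd use with a token retrieved via aws.eks.getClusterAuth). Unlike
    # the exec-based config, this stops authenticating once the token
    # expires (~15 minutes on the AWS side).
    return json.dumps({
        "apiVersion": "v1",
        "kind": "Config",
        "clusters": [{"name": cluster_name,
                      "cluster": {"server": endpoint,
                                  "certificate-authority-data": ca_data}}],
        "users": [{"name": "aws", "user": {"token": token}}],
        "contexts": [{"name": "aws",
                      "context": {"cluster": cluster_name, "user": "aws"}}],
        "current-context": "aws",
    })

# Hypothetical values, for illustration only
cfg = static_token_kubeconfig("my-cluster",
                              "https://example.eks.amazonaws.com",
                              "BASE64CA==",
                              "k8s-aws-v1.EXAMPLETOKEN")
```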
a
I am using pulumi-aws.
Will look into using those; the thought was just to save the kubeconfig for later use.
q
Feel free to cut a feature request on GitHub if there's anything we could do to make your life easier!