# general
b
Hi, I'm creating an eks.Cluster and a Kubernetes provider from the AWS Lambda base Docker image public.ecr.aws/lambda/python:3.8, but I got an error:
Copy code
+  pulumi:providers:kubernetes cluster-1-eksvpc2-nonprod-244-eks-k8s created (0.22s)
+  eks:index:VpcCni cluster-1-eksvpc2-nonprod-244-vpc-cni creating (3s) error: Command failed: kubectl apply -f /tmp/tmp-10713Igkzn0px28.tmp
+  eks:index:VpcCni cluster-1-eksvpc2-nonprod-244-vpc-cni **creating failed** error: Command failed: kubectl apply -f /tmp/tmp-10713Igkzn0px28.tmp
+  pulumi:pulumi:Stack Pulumi-Archie-78bf35e5-390e-4042-b4d6-6b0cc615da8c creating (550s) error: You must be logged in to the server (the server has asked for the client to provide credentials)
+  pulumi:pulumi:Stack Pulumi-Archie-78bf35e5-390e-4042-b4d6-6b0cc615da8c creating (551s) error: update failed
p
This might be because you did not apply the new cluster credentials to your local kubectl. I have no idea if that can be done with Pulumi though 😕 The way we managed this (for Azure, actually) was to split it into stacks; for us the cluster was also behind a VPN, so the split was more or less mandatory:
1. a base stack that creates the infra, with the VPN and the Kubernetes cluster at the end
2. then you apply the credentials and also connect to the VPN
3. then you run a 2nd stack that deals with the "cluster content"
It would be good to know a better way, maybe 🙂
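A minimal sketch of that split in Python (the stack name and output names here are just illustrative, and this assumes the base stack exports the cluster's kubeconfig):
Copy code
import json
import pulumi
import pulumi_kubernetes as k8s

# In the base stack, after the cluster is created, export its kubeconfig:
#   pulumi.export("kubeconfig", cluster.kubeconfig)

# In the second ("cluster content") stack, read it back via a StackReference.
base = pulumi.StackReference("my-org/base-infra/nonprod")  # hypothetical stack name

kubeconfig = base.get_output("kubeconfig").apply(
    lambda kc: kc if isinstance(kc, str) else json.dumps(kc)
)

k8s_provider = k8s.Provider("k8s", kubeconfig=kubeconfig)

# Cluster-content resources are then created with opts=pulumi.ResourceOptions(provider=k8s_provider).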
o
Hey there @big-angle-30002 when you created the Kubernetes provider, did you provide it and the VpcCni with the Kubeconfig from the EKS cluster? Could you share the program you wrote - or at least the cluster, provider, & VpcCni resources?
s
I have also seen this problem with EKS when the user's kubectl instance can't authenticate against AWS for some reason (EKS leverages AWS authentication for cluster auth).
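For comparison, a minimal sketch of handing the kubeconfig from an eks.Cluster straight to a Kubernetes provider (assuming pulumi_eks and pulumi_kubernetes; the resource names are illustrative, and in some pulumi_eks versions the kubeconfig output is already a JSON string):
Copy code
import json
import pulumi_eks as eks
import pulumi_kubernetes as k8s

cluster = eks.Cluster("example-cluster")

# Serialize the cluster's kubeconfig output and pass it to the provider explicitly,
# so the provider never falls back to whatever kubeconfig the local environment has.
k8s_provider = k8s.Provider(
    "example-k8s",
    kubeconfig=cluster.kubeconfig.apply(
        lambda kc: kc if isinstance(kc, str) else json.dumps(kc)
    ),
)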
b
Hello, @orange-policeman-59119. I used the eks.Cluster resource from the AWS EKS package, and the kubeconfig is retrieved from that created resource:
Copy code
### CREATE CLUSTER ###
        for i in range(len(data_conf["eks_cluster"])):
            data_conf["eks_cluster"][i].update(
                {
                    "vpc_id": vpc.id,
                    "vpc_cidr_tag": vpc_cidr_block_splited[1],
                    "public_subnet_ids": [subnet.id for subnet in public_subnets],
                    "cluster_security_group": public_sg[i], # NOTE: expected 1 SG * Cluster
                    "instance_roles": cluster_node_role,
                }
            )
            data_conf["eks_cluster"][i]["node_group_options"].update(
                {
                    "cluster_ingress_rule": public_sg[i],
                    "instance_profile": instance_profiles[i],
                    "node_security_group":  public_sg[i],
                    "node_subnet_ids": [subnet.id for subnet in public_subnets]
                }
            )

        clusters = CreateAwsEksClusterBuilder.pulumi_builder(data_conf=data_conf)
        kubeconfig = clusters[0].kubeconfig

        ### CREATE K8S PROVIDER ###
        if data_conf["provider"]:
            provider_conf = data_conf["provider"]
            for i in range(len(provider_conf)):
                provider = ProviderBuild(
                    f"PROVIDER_{data_conf['project_name'].upper()}",
                    ProviderArgs(
                        project_name=data_conf["project_name"],
                        environment=data_conf["environment"],
                        index=i,
                        cluster=clusters[0],
                        context=provider_conf[i]["context"],
                        delete_unreachable=provider_conf[i]["delete_unreachable"],
                        enable_config_map_mutable=provider_conf[i]["enable_config_map_mutable"],
                        enable_server_side_apply=provider_conf[i]["enable_server_side_apply"],
                        helm_release_settings=provider_conf[i]["helm_release_settings"],
                        kube_client_settings=provider_conf[i]["kube_client_settings"],
                        kubeconfig=clusters[i].kubeconfig,
                        namespace=provider_conf[i]["namespace"],
                        render_yaml_to_directory=provider_conf[i]["render_yaml_to_directory"],
                        suppress_deprecation_warnings=provider_conf[i]["suppress_deprecation_warnings"],
                        suppress_helm_hook_warnings=provider_conf[i]["suppress_helm_hook_warnings"],
                    )).provider
                
                providers.append(provider)
                k8s_provider.append(Output.format("{0} | {1}\n", provider.id, provider._name))
o
Are you getting this error on the 2nd or Nth cluster you're creating and not the 1st?
b
I'm creating just one cluster, but the template supports a for loop to create more clusters.
o
Hm - I've seen this "You must be logged in to the server" error before when credentials are invalid. If you export the kubeconfig and try to use it locally against the cluster, does that work? Did you customize any security groups that might block access to the cluster's API server?
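For example, a minimal way to pull the kubeconfig out of the stack for a local test (assuming the clusters list from your snippet, and that the exported kubeconfig is JSON/YAML that kubectl can read):
Copy code
import pulumi

# Export the kubeconfig so it can be retrieved and tested outside the Lambda environment:
#   pulumi stack output kubeconfig --show-secrets > kubeconfig.json
#   KUBECONFIG=./kubeconfig.json kubectl get nodes
pulumi.export("kubeconfig", clusters[0].kubeconfig)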
p
@big-angle-30002 I found that someone on my team pushed the solution a bit further than I explained in my first comment. The example is for AKS in TypeScript, but maybe it will give you the idea.
Copy code
// the cluster name is taken from config, from a previously created cluster
let clusterName = config.require("clusterName")
const kubeConfigs = pulumi.all([clusterName]).apply(([clusterName]) => {
    return azure.containerservice.listManagedClusterUserCredentials({
        resourceGroupName: group,
        resourceName: clusterName,
    });
  });

const kubeConfigEncoded = kubeConfigs.kubeconfigs[0].value;

const kubeconfig = kubeConfigEncoded.apply(enc => Buffer.from(enc, "base64").toString());

const aksProvider = new k8s.Provider("aks", {
    kubeconfig: kubeconfig
});

// and then all cluster actions are run like:

const nginxIngress = new k8s.helm.v3.Release("ingress-nginx-release", {
    chart: "ingress-nginx",
    namespace: infraNamespace.metadata.name,        
    repositoryOpts:{
        repo: "<https://kubernetes.github.io/ingress-nginx>",
    },
    values: {
        controller: {
            replicaCount: 2,
            service: {
                externalTrafficPolicy: "Local",
                loadBalancerIP: ingressPublicIp
            }
        }
    }
}, { provider: aksProvider, dependsOn: [infraNamespace]});
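For the EKS + Python case in this thread, a rough equivalent sketch (this assumes pulumi_aws and pulumi_kubernetes, a clusterName config value pointing at the previously created cluster, and that the data-source attribute names match your pulumi_aws version):
Copy code
import json
import pulumi
import pulumi_aws as aws
import pulumi_kubernetes as k8s

config = pulumi.Config()
cluster_name = config.require("clusterName")  # name of the previously created EKS cluster

# Look up the existing cluster and build a kubeconfig that authenticates via `aws eks get-token`.
cluster = aws.eks.get_cluster(name=cluster_name)

kubeconfig = json.dumps({
    "apiVersion": "v1",
    "kind": "Config",
    "clusters": [{
        "name": cluster_name,
        "cluster": {
            "server": cluster.endpoint,
            "certificate-authority-data": cluster.certificate_authority.data,
        },
    }],
    "contexts": [{
        "name": "aws",
        "context": {"cluster": cluster_name, "user": "aws"},
    }],
    "current-context": "aws",
    "users": [{
        "name": "aws",
        "user": {
            "exec": {
                "apiVersion": "client.authentication.k8s.io/v1beta1",
                "command": "aws",
                "args": ["eks", "get-token", "--cluster-name", cluster_name],
            },
        },
    }],
})

eks_provider = k8s.Provider("eks", kubeconfig=kubeconfig)

# Cluster-content resources (namespaces, Helm releases, ...) then take
# opts=pulumi.ResourceOptions(provider=eks_provider).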