g
Still having problems with kubeconfig.
Diagnostics:
  pulumi:pulumi:Stack (infra-eks-utils-dev):
    warning: unable to determine cluster's API version: could not get server version from Kubernetes: Get "https://2A1746CBAF4ADCE5D31BDD78F5115A64.gr7.eu-central-1.eks.amazonaws.com/version?timeout=32s": getting credentials: exec: executable aws-iam-authenticator not found

    It looks like you are trying to use a client-go credential plugin that is not installed.

    To learn more about this feature, consult the documentation available at:
          https://kubernetes.io/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins
import pulumi
import pulumi_aws as aws
import pulumi_kubernetes as k8s

cluster = aws.eks.Cluster.get('eks_cluster', cluster_id)

# Create the Kubernetes provider with the kubeconfig
kubeconfig = pulumi.Output.all(cluster.endpoint, cluster.certificate_authority, cluster.name).apply(
    lambda args: f"""
apiVersion: v1
clusters:
- cluster:
    server: {args[0]}
    certificate-authority-data: {args[1]['data']}
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: aws
  name: aws
current-context: aws
kind: Config
preferences: {{}}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - {args[2]}
"""
)

k8s_provider = k8s.Provider("k8s-provider", kubeconfig=kubeconfig)
Pulumi AI earlier suggested an apiVersion that doesn't work.
I've changed it to the one I found in ~/.kube/config.
oh, so because of auth mode API I need to install this:
brew install aws-iam-authenticator
OK
q
IIRC you shouldn't need aws-iam-authenticator. Can you try running
aws eks update-kubeconfig --region REGION [--profile PROFILE_IF_YOU_USE_ONE] --name CLUSTER_NAME
That should create the kubeconfig for you.
g
I don’t want Pulumi to use my kubeconfig, it should generate it itself. This is also going to run in a pipeline.
but anyway now it works
I’m using auth mode = API, maybe that’s why.
q
Ah, now I got it! It should still work with auth mode API. The authentication flow on the SDK/CLI side is still the same (it's using aws eks get-token under the hood). You can have a look here at how we're creating the kubeconfig in the pulumi-eks provider: https://github.com/pulumi/pulumi-eks/blob/986c528a058381be989c382d38a2e74238c9502b/nodejs/eks/cluster.ts#L207 Alternatively, you could start using the pulumi-eks provider directly, because it generates the kubeconfig for you 🙂
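A kubeconfig that authenticates through the plain AWS CLI (via aws eks get-token) instead of the separate aws-iam-authenticator binary could look like this sketch. It's a minimal Python helper mirroring the structure above; the endpoint, CA data, and cluster name are placeholders you'd feed from the stack's outputs:

```python
def make_kubeconfig(endpoint: str, ca_data: str, cluster_name: str) -> str:
    """Build a kubeconfig whose exec plugin shells out to the AWS CLI.

    Sketch only: assumes the AWS CLI is on PATH wherever the provider runs
    (e.g. in the pipeline image); all three arguments are placeholders.
    """
    return f"""
apiVersion: v1
clusters:
- cluster:
    server: {endpoint}
    certificate-authority-data: {ca_data}
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: aws
  name: aws
current-context: aws
kind: Config
preferences: {{}}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws            # the plain AWS CLI, no extra binary needed
      args:
        - "eks"
        - "get-token"
        - "--cluster-name"
        - {cluster_name}
"""
```

The string this returns can be passed to k8s.Provider(..., kubeconfig=...) exactly like the aws-iam-authenticator variant above.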
g
I’ve tried my best to start with pulumi-eks and I’ve given up. I want to create the roles myself, and I couldn’t find how to provide an ARN to it, the way it works with aws.eks.Cluster() where you just pass the role_arn param and that’s it. Also the node groups, I’m not sure how to add them either; with aws.eks it’s easier. I’ll probably need to ask Pulumi AI to convert the current code to use the eks provider. For now I’ve seen that adding a role is not so easy.. you need additional steps for that.
Also, I’ve split the stacks into one that creates the resource and one that provisions it. I really wanted to get cluster_id from the eks stack and handle everything I need in eks_utils, but it seems the eks provider doesn’t even have getCluster()…
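For passing cluster_id between the two stacks, a pulumi.StackReference is the usual pattern; a sketch with hypothetical org/project/stack names (the commented lines are the Pulumi-side wiring, which needs a running Pulumi program):

```python
# Sketch: the eks stack would export the id with
#     pulumi.export("cluster_id", cluster.id)
# and the eks_utils stack would read it back with
#     eks_stack = pulumi.StackReference("my-org/eks/dev")
#     cluster_id = eks_stack.get_output("cluster_id")
#     cluster = aws.eks.Cluster.get("eks_cluster", cluster_id)

def stack_ref_name(org: str, project: str, stack: str) -> str:
    # Helper that builds the fully-qualified stack reference name
    # ("org/project/stack") so the two stacks agree on the format.
    return f"{org}/{project}/{stack}"
```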
q
For providing the cluster role you can use this parameter: https://www.pulumi.com/registry/packages/eks/api-docs/cluster/#servicerole_nodejs. Or you can omit it and it's auto-created. But you're right, it's not an ARN, it's the role object. So if you only have the ARN and not the object, you'd have to look up the role with this: https://www.pulumi.com/registry/packages/aws/api-docs/iam/getrole/
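Since that lookup takes a role name rather than an ARN, a small helper to parse the name out of the ARN may be needed first. A sketch (the commented Pulumi part is an assumption of how the pieces would connect, not tested code):

```python
def role_name_from_arn(arn: str) -> str:
    """Extract the role name from an IAM role ARN.

    An IAM role ARN looks like arn:aws:iam::123456789012:role/my-role-name,
    possibly with a path: arn:aws:iam::123456789012:role/some/path/my-role-name.
    """
    resource = arn.split(":", 5)[5]    # e.g. "role/some/path/my-role-name"
    return resource.rsplit("/", 1)[1]  # last path segment is the role name

# Hypothetical wiring inside a Pulumi program:
# import pulumi_aws as aws
# import pulumi_eks as eks
# role = aws.iam.get_role(name=role_name_from_arn(cluster_role_arn))
# cluster = eks.Cluster("my-cluster", service_role=role)
```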
g
OK, now it’s a bit clearer.
My setup is even more sophisticated: I’m using packages/eks and packages/eks_utils, plus the stacks eks and eks_utils, which use those packages. I just don’t want to duplicate code around, it’s too much to maintain later on.
The downside.. well, destroying a bunch of clusters after one change in the package. But that’s pretty much how it works with Terraform modules too.