# general
I am trying to attach an IAM policy to the node Role that was created with my EKS cluster:
```typescript
new aws.iam.RolePolicyAttachment("eks-node-role-assignment", {
    policyArn: nodePolicy.arn,
    role: eksCluster.instanceRole,
});
```
This is throwing an exception:
```
Running program '/home/kenny/compute_software/infrastructure/pulumi-k8s-src' failed with an unhandled exception:
    Error: Missing required property 'role'
```
Any idea why? Do I need to explicitly pass the role when creating the EKS cluster?
It appears that `eksCluster.instanceRole` is undefined.
If I try setting … on my …, I receive:
```
error: Running program '/home/kenny/compute_software/infrastructure/pulumi-k8s-src' failed with an unhandled exception:
    Error: an instanceProfile is required
        at Object.createNodeGroup (/home/kenny/compute_software/infrastructure/pulumi-k8s-src/node_modules/@pulumi/nodegroup.ts:238:15)
        at new Cluster (/home/kenny/compute_software/infrastructure/pulumi-k8s-src/node_modules/@pulumi/cluster.ts:597:37)
        at Object.<anonymous> (/home/kenny/compute_software/infrastructure/pulumi-k8s-src/src/index.ts:123:18)
        at Module._compile (module.js:653:30)
        at Module.m._compile (/home/kenny/compute_software/infrastructure/pulumi-k8s-src/node_modules/ts-node/src/index.ts:439:23)
        at Module._extensions..js (module.js:664:10)
        at Object.require.extensions.(anonymous function) [as .ts] (/home/kenny/compute_software/infrastructure/pulumi-k8s-src/node_modules/ts-node/src/index.ts:442:12)
        at Module.load (module.js:566:32)
        at tryModuleLoad (module.js:506:12)
        at Function.Module._load (module.js:498:3)
```
Super confused. Setting … on the … works but breaks lots of other things (e.g. coredns cannot be pulled from ECR). Looking at the code for …, I see there is a … and a …. Not sure what the difference is.
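For reference, one way around the undefined `eksCluster.instanceRole` is to create the node role explicitly and hand it to the cluster, since the `@pulumi/eks` `Cluster` accepts an `instanceRole` option. This is a sketch, not the thread's exact code: the resource names are made up, `nodePolicy` here is an illustrative stand-in for the custom policy above, and the list of AWS-managed worker policies is the usual EKS node set (the ECR read policy is what lets nodes pull images such as coredns).

```typescript
import * as aws from "@pulumi/aws";
import * as eks from "@pulumi/eks";

// Illustrative stand-in for the custom policy referenced in the thread.
const nodePolicy = new aws.iam.Policy("node-policy", {
    policy: JSON.stringify({
        Version: "2012-10-17",
        Statement: [{ Effect: "Allow", Action: "s3:GetObject", Resource: "*" }],
    }),
});

// Create the node role ourselves instead of reading it back off the cluster.
const nodeRole = new aws.iam.Role("eks-node-role", {
    assumeRolePolicy: JSON.stringify({
        Version: "2012-10-17",
        Statement: [{
            Effect: "Allow",
            Action: "sts:AssumeRole",
            Principal: { Service: "ec2.amazonaws.com" },
        }],
    }),
});

// AWS-managed policies EKS worker nodes generally need; the ECR read-only
// policy is what allows image pulls (e.g. coredns) from ECR.
["AmazonEKSWorkerNodePolicy", "AmazonEKS_CNI_Policy", "AmazonEC2ContainerRegistryReadOnly"]
    .forEach(name =>
        new aws.iam.RolePolicyAttachment(`eks-node-${name}`, {
            policyArn: `arn:aws:iam::aws:policy/${name}`,
            role: nodeRole,
        }));

// The custom attachment now targets a role we control directly.
new aws.iam.RolePolicyAttachment("eks-node-role-assignment", {
    policyArn: nodePolicy.arn,
    role: nodeRole,
});

// Pass the role in, so nothing depends on an output that may be undefined.
const eksCluster = new eks.Cluster("cluster", {
    instanceRole: nodeRole,
});
```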
How are you supposed to give your k8s Pods access via IAM to AWS services?
Usually the best way is not to use the instance role for your pods but to utilize something like https://github.com/uswitch/kiam
That's interesting, but I'd like to get up and running without integrating a whole other service. There must be a way to do this without adding one.
Yeah, I've seen that. Going to try the …. I'm surprised how fragile this is.
Any idea if you can have `deployDashboard: true` when using …?
I'm guessing deploying the dashboard is going to be problematic if you use … because there are no nodes in the cluster. Unfortunately the dashboard code is not exposed as an API in the eks package either 😞
Isn't `deployDashboard` part of the cluster definition and not the node group?
Yes but from what I can tell, it will try to deploy the dashboard immediately.
Oh, haven't tried that since we don't deploy the dashboard at all. It would be a good issue to file.
It's done here: https://github.com/pulumi/pulumi-eks/blob/087f65501092b79e5b5fac7f138de4d0e215830f/nodejs/eks/cluster.ts#L634-L636. Given that no nodes are in the group, that will probably fail.
It'd be great if the dashboard was exposed as an API.
You can also try to spin up the cluster with `deployDashboard: false` and then change it to `deployDashboard: true` in a second `pulumi up`.
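If that two-step flow is acceptable, one way to avoid editing source between the two runs is to drive the flag from Pulumi config. A sketch, where the config key `deployDashboard` is made up for illustration:

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as eks from "@pulumi/eks";

const config = new pulumi.Config();

// `pulumi config set deployDashboard false` for the first `pulumi up`,
// then set it to true and run `pulumi up` again once nodes exist.
const cluster = new eks.Cluster("cluster", {
    deployDashboard: config.getBoolean("deployDashboard") || false,
});
```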
True. A bit annoying for spinning up new environments though.
Is there a TypeScript hack to be able to get access to that createDashboard function?
True. You can also circumvent the whole thing by deploying the dashboard through a Helm chart.
Not a typescript expert 😞
Hmm, worth a shot. Guessing there's no examples for that?
Haven't come across one.
I'm trying
```typescript
new k8s.helm.v2.Chart("k8s-dashboard", {
    repo: "stable",
    chart: "kubernetes-dashboard",
    version: "1.10.1",
}, { providers: { kubernetes: k8sProvider } });
```
but I get
```
Error: Command failed: helm fetch stable/kubernetes-dashboard --untar --version 1.10.1 --destination /tmp/tmp-13188EbwU1rymTPCU
    Error: chart "kubernetes-dashboard" matching 1.10.1 not found in stable index. (try 'helm repo update'). No chart version found for kubernetes-dashboard-1.10.1
```
According to https://hub.helm.sh/charts/stable/kubernetes-dashboard the latest version is either 1.10.1 or 1.5.2. I tried both and they both result in the above error.
If I run `helm fetch stable/kubernetes-dashboard --version 1.5.2` locally, it works.
Woah, ran it after running that fetch command locally and it worked!
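A likely explanation, going only by the hint in the error message itself, is a stale locally cached index for the `stable` repo; refreshing it should let the Chart resource resolve the version without the manual fetch (Helm v2 syntax, untested here):

```shell
# Refresh the cached chart indexes so stable/kubernetes-dashboard
# resolves at the requested version.
helm repo update
# Confirm the version is visible before re-running `pulumi up`.
helm search stable/kubernetes-dashboard --versions
```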