# aws
i
Trying to deploy EKS has not been a terribly pleasant experience so far.
• ran `pulumi new` and selected `kubernetes-aws-typescript`
• local AWS access key doesn't have the required permissions, so I need to keep updating the user policy as I hit errors during `pulumi up`... sometimes with a very insecure policy
• the iterative nature of (1) `pulumi up` + (2) fix IAM rules eventually gets the stack into a bad state, where it starts throwing errors all over the place (e.g. `nodeLaunchConfiguration` doesn't support update)
• only solution is to bring the stack down using `PULUMI_K8S_DELETE_UNREACHABLE=true pulumi down`
• retry a fresh deployment and this time it works, now that the IAM policy is right
• notice that the template deployed a VPC with 3 NAT Gateways, which are expensive to run
• find a Pulumi guide showing you can deploy EKS without a VPC (it just uses the default VPC), update the template to remove the VPC
• try to deploy again... a bunch of errors
• try to bring it down, and it hangs for ~10 minutes deleting `eks-cluster-eksClusterSecurityGroup`
• run down the hallway while screaming and pulling my hair out
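(For reference, the no-VPC variant described above, where the cluster just lands in the account's default VPC, can be sketched roughly like this with `@pulumi/eks`. Node sizes and resource names here are illustrative, not from the thread.)

```typescript
import * as eks from "@pulumi/eks";

// With no vpcId/subnetIds supplied, @pulumi/eks falls back to the
// account's default VPC and its subnets.
const cluster = new eks.Cluster("eks-cluster", {
    instanceType: "t3.medium", // node size is an assumption
    desiredCapacity: 2,
    minSize: 1,
    maxSize: 3,
});

// Export the kubeconfig so `pulumi stack output kubeconfig` can feed kubectl.
export const kubeconfig = cluster.kubeconfig;
```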
r
Yeah, it's not easy to set up a production-ready EKS cluster. I suggest you use the awsx package: configure the VPC first the way you want it, then pass the VPC id and subnets to the Cluster resource.
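A minimal sketch of that suggestion, assuming `@pulumi/awsx` (v2) and `@pulumi/eks`: build the VPC first with a single NAT gateway instead of the template's three, then hand its id and subnets to the cluster. Names and AZ counts are illustrative.

```typescript
import * as awsx from "@pulumi/awsx";
import * as eks from "@pulumi/eks";

// VPC first: one shared NAT gateway instead of one per AZ keeps costs down.
const vpc = new awsx.ec2.Vpc("eks-vpc", {
    numberOfAvailabilityZones: 2,
    natGateways: { strategy: awsx.ec2.NatGatewayStrategy.Single },
});

// Then pass the VPC id and subnets to the Cluster resource.
const cluster = new eks.Cluster("eks-cluster", {
    vpcId: vpc.vpcId,
    publicSubnetIds: vpc.publicSubnetIds,
    privateSubnetIds: vpc.privateSubnetIds,
    nodeAssociatePublicIpAddress: false, // keep nodes in the private subnets
});

export const kubeconfig = cluster.kubeconfig;
```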
a
I feel your pain. EKS itself isn't great and bringing it up and down is very slow.
r
I can tell you I've already configured some production-ready EKS clusters; if you want, I can share some code with you.