wonderful-wolf-97567
02/23/2025, 9:11 AM
modern-zebra-45309
02/23/2025, 12:21 PM
wonderful-wolf-97567
02/23/2025, 3:06 PM
import pulumi
import pulumi_aws as aws
import pulumi_kubernetes as k8s

# Step 1: Create an IAM role for EKS
eks_role = aws.iam.Role("eks-role",
    assume_role_policy="""{
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {
                    "Service": "eks.amazonaws.com"
                },
                "Action": "sts:AssumeRole"
            }
        ]
    }"""
)

# Attach the necessary policies to the role
eks_role_policy_attachment = aws.iam.RolePolicyAttachment("eks-role-policy-attachment",
    role=eks_role.name,
    policy_arn="arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
)
# Step 2: Create a VPC and subnets
vpc = aws.ec2.Vpc("vpc",
    cidr_block="10.0.0.0/16",
    enable_dns_support=True,
    enable_dns_hostnames=True
)
subnet1 = aws.ec2.Subnet("subnet-1",
    vpc_id=vpc.id,
    cidr_block="10.0.1.0/24",
    availability_zone="eu-north-1a"
)
subnet2 = aws.ec2.Subnet("subnet-2",
    vpc_id=vpc.id,
    cidr_block="10.0.2.0/24",
    availability_zone="eu-north-1b"
)
# Step 3: Create an EKS cluster
eks_cluster = aws.eks.Cluster("eks-cluster",
    role_arn=eks_role.arn,
    vpc_config=aws.eks.ClusterVpcConfigArgs(
        subnet_ids=[subnet1.id, subnet2.id],
    ),
    opts=pulumi.ResourceOptions(depends_on=[eks_role_policy_attachment])
)
# Step 4: Generate a kubeconfig using `apply`
kubeconfig = pulumi.Output.all(eks_cluster.name, eks_cluster.endpoint, eks_cluster.certificate_authority.data).apply(
    lambda args: f"""
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: {args[2]}
    server: {args[1]}
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: aws
  name: aws
current-context: aws
kind: Config
preferences: {{}}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws
      args:
        - "eks"
        - "get-token"
        - "--cluster-name"
        - {args[0]}
"""
)
# Step 5: Create a Kubernetes provider using `apply`
k8s_provider = kubeconfig.apply(
    lambda config: k8s.Provider("k8s-provider",
        kubeconfig=config,
        opts=pulumi.ResourceOptions(depends_on=[eks_cluster])  # Ensure the provider waits for the cluster
    )
)
# Below are several namespace creation options attempted:
namespace = k8s.core.v1.Namespace("test", opts=pulumi.ResourceOptions(provider=k8s_provider))

# # Step 6: Create a namespace using `apply`
# namespace = k8s.core.v1.Namespace("MyNamespace",
#     lambda provider: k8s.core.v1.Namespace("argocd-namespace",
#         metadata={"name": "argocd"},
#         opts=pulumi.ResourceOptions(provider=provider)  # Ensure the namespace waits for the provider
#     )
# )

# namespace = k8s_provider.apply(
#     lambda provider: k8s.core.v1.Namespace("argocd-namespace",
#         metadata={"name": "argocd"},
#         opts=pulumi.ResourceOptions(provider=provider)  # Ensure the namespace waits for the provider
#     )
# )
When removing the namespace section, the cluster is created.
I manually verified the generated kubeconfig and also created node groups using the provider, which worked.
A reference to a working flow for creating namespaces (preferably in a single run) would be very helpful.
modern-zebra-45309
02/23/2025, 3:20 PM
You generally don't need apply() or depends_on in cases where the relationship is clear from connections between outputs and inputs. This will make your code a lot simpler.
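For example, a minimal sketch reusing the resource names from your snippet (the node_role IAM role here is a hypothetical placeholder): because the cluster's output is used as an input, Pulumi orders the node group after the cluster without any explicit depends_on.
# Implicit dependency: eks_cluster.name is an Output used as an input,
# so Pulumi waits for the cluster before creating the node group.
node_group = aws.eks.NodeGroup("node-group",
    cluster_name=eks_cluster.name,
    node_role_arn=node_role.arn,  # hypothetical IAM role for the worker nodes
    subnet_ids=[subnet1.id, subnet2.id],
    scaling_config=aws.eks.NodeGroupScalingConfigArgs(
        desired_size=2,
        min_size=1,
        max_size=2,
    ),
)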
I suggest you look into using pulumi-eks, which wraps the more basic aws.eks.Cluster and associated resources in a higher-level component resource. Among other things, it takes care of creating the kubeconfig and even instantiating a Kubernetes provider for you: https://www.pulumi.com/blog/easily-create-and-manage-aws-eks-kubernetes-clusters-with-pulumi/ This will make your life a lot easier and let you focus on your application, rather than dealing with lower-level details of EKS and Pulumi.
wonderful-wolf-97567
02/23/2025, 5:16 PM
modern-zebra-45309
02/23/2025, 6:55 PM
This is not pulumi-eks but pulumi-aws, though:
eks_cluster = aws.eks.Cluster("eks-cluster", ...
Overall, your code seems rather complicated, and it might very well be that your attempts at managing the relationships between resources manually are causing a deadlock. You can leave this to Pulumi; it's smart enough to know, e.g., that it cannot instantiate the Kubernetes provider prior to the cluster being ready 🙂
With pulumi-eks, it looks like this (copying directly from the tutorial I linked and adding the namespace creation):
import pulumi
import pulumi_eks as eks
import pulumi_kubernetes as k8s

# Create an EKS cluster.
cluster = eks.Cluster(
    "cluster",
    instance_type="t2.medium",
    desired_capacity=2,
    min_size=1,
    max_size=2,
)

k8s_provider = k8s.Provider("k8s-provider", kubeconfig=cluster.kubeconfig)
# alternatively, this is what the tutorial uses: k8s_provider = cluster.provider

my_namespace = k8s.core.v1.Namespace("my-namespace",
    metadata=k8s.meta.v1.ObjectMetaArgs(name="my-namespace"),
    opts=pulumi.ResourceOptions(provider=k8s_provider))
little-cartoon-10569
02/23/2025, 8:50 PM
# Step 5: Create a Kubernetes provider using `apply`
k8s_provider = kubeconfig.apply(
    lambda config: k8s.Provider("k8s-provider",
        kubeconfig=config,
        opts=pulumi.ResourceOptions(depends_on=[eks_cluster])  # Ensure the provider waits for the cluster
    )
)
This code wouldn't create a provider until provision time, but it'd be too late for use by then. As far as I know, you cannot create providers inside an apply.
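A minimal sketch of what should work instead, assuming the same kubeconfig Output as above: pass the Output straight to the provider (its kubeconfig argument accepts an Output) rather than creating the provider inside apply.
# kubeconfig is an Output[str]; k8s.Provider accepts it directly,
# so no apply() and no explicit depends_on are needed.
k8s_provider = k8s.Provider("k8s-provider", kubeconfig=kubeconfig)
namespace = k8s.core.v1.Namespace("test",
    opts=pulumi.ResourceOptions(provider=k8s_provider))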
wonderful-wolf-97567
02/25/2025, 9:53 AM
import pulumi
import pulumi_eks as eks
import pulumi_kubernetes as k8s

# Create an EKS cluster.
cluster = eks.Cluster(
    "cluster",
    instance_type="t2.medium",
    desired_capacity=2,
    min_size=1,
    max_size=2,
)

k8s_provider = k8s.Provider("k8s-provider", kubeconfig=cluster.kubeconfig)
# alternatively, this is what the tutorial uses: k8s_provider = cluster.provider

my_namespace = k8s.core.v1.Namespace("my-namespace",
    metadata=k8s.meta.v1.ObjectMetaArgs(name="my-namespace"),
    opts=pulumi.ResourceOptions(provider=k8s_provider))
when running preview on an empty stack:
pulumi preview
Previewing update (python-eks-testing):
Type Name Plan
+ pulumi:pulumi:Stack aws-py-eks-python-eks-testing create
+ ├─ eks:index:Cluster cluster create
+ │ ├─ eks:index:ServiceRole cluster-instanceRole create
+ │ │ ├─ aws:iam:Role cluster-instanceRole-role create
+ │ │ ├─ aws:iam:RolePolicyAttachment cluster-instanceRole-03516f97 create
+ │ │ ├─ aws:iam:RolePolicyAttachment cluster-instanceRole-3eb088f2 create
+ │ │ └─ aws:iam:RolePolicyAttachment cluster-instanceRole-e1b295bd create
+ │ ├─ eks:index:ServiceRole cluster-eksRole create
+ │ │ ├─ aws:iam:Role cluster-eksRole-role create
+ │ │ └─ aws:iam:RolePolicyAttachment cluster-eksRole-4b490823 create
+ │ ├─ aws:iam:InstanceProfile cluster-instanceProfile create
+ │ ├─ aws:ec2:SecurityGroup cluster-eksClusterSecurityGroup create
+ │ ├─ aws:ec2:SecurityGroupRule cluster-eksClusterInternetEgressRule create
+ │ ├─ aws:eks:Cluster cluster-eksCluster create
+ │ ├─ pulumi:providers:kubernetes cluster-eks-k8s create
+ │ ├─ aws:eks:Addon cluster-kube-proxy create
+ │ ├─ aws:ec2:SecurityGroup cluster-nodeSecurityGroup create
+ │ ├─ aws:eks:Addon cluster-coredns create
+ │ ├─ kubernetes:core/v1:ConfigMap cluster-nodeAccess create
+ │ ├─ aws:ec2:SecurityGroupRule cluster-eksExtApiServerClusterIngressRule create
+ │ ├─ aws:ec2:SecurityGroupRule cluster-eksClusterIngressRule create
+ │ ├─ aws:ec2:SecurityGroupRule cluster-eksNodeClusterIngressRule create
+ │ ├─ aws:ec2:SecurityGroupRule cluster-eksNodeInternetEgressRule create
+ │ ├─ eks:index:VpcCniAddon cluster-vpc-cni create
+ │ │ └─ aws:eks:Addon cluster-vpc-cni create
+ │ ├─ aws:ec2:SecurityGroupRule cluster-eksNodeIngressRule create
+ │ ├─ aws:autoscaling:Group cluster create
+ │ └─ aws:ec2:LaunchTemplate cluster-launchTemplate create
+ └─ pulumi:providers:kubernetes k8s-provider create
and this stalls for over an hour.
Removing the namespace creation section allows the preview to finish.
I am using a venv on macOS with:
pulumi 3.150.0
pulumi_aws 6.68.0
pulumi_eks 3.8.1
pulumi_kubernetes 4.21.1
The debug log is getting stuck after:
I0225 11:32:19.152297 39124 rpc.go:77] Marshaling property for RPC[ResourceMonitor.RegisterResource(pulumi:providers:kubernetes,k8s-provider)]: command={aws}
I0225 11:32:19.152302 39124 rpc.go:77] Marshaling property for RPC[ResourceMonitor.RegisterResource(pulumi:providers:kubernetes,k8s-provider)]: env={[{map[name:{KUBERNETES_EXEC_INFO} value:{{"apiVersion": "client.authentication.k8s.io/v1beta1"}}]}]}
I0225 11:32:19.152307 39124 rpc.go:77] Marshaling property for RPC[ResourceMonitor.RegisterResource(pulumi:providers:kubernetes,k8s-provider)]: name={KUBERNETES_EXEC_INFO}
I0225 11:32:19.152311 39124 rpc.go:77] Marshaling property for RPC[ResourceMonitor.RegisterResource(pulumi:providers:kubernetes,k8s-provider)]: value={{"apiVersion": "client.authentication.k8s.io/v1beta1"}}
I0225 11:32:19.152317 39124 rpc.go:77] Marshaling property for RPC[ResourceMonitor.RegisterResource(pulumi:providers:kubernetes,k8s-provider)]: version={4.21.1}
I0225 11:32:19.153010 39124 eventsink.go:59] resource registration successful: ty=pulumi:providers:kubernetes, urn=urn:pulumi:python-eks-testing::aws-py-eks::pulumi:providers:kubernetes::k8s-provider
I0225 11:32:19.153020 39124 eventsink.go:62] eventSink::Debug(<{%reset%}>resource registration successful: ty=pulumi:providers:kubernetes, urn=urn:pulumi:python-eks-testing::aws-py-eks::pulumi:providers:kubernetes::k8s-provider<{%reset%}>)
which seems to be getting stuck in the gRPC call? Any ideas?
I think the problem is actually with the provider, since any use of it stalls and it seems unable to connect to the cluster, despite the kubeconfig having been manually verified.
modern-zebra-45309
02/25/2025, 11:58 AM