# getting-started
w
Hi all, I'm new to Pulumi and could use some help. While creating an EKS cluster, I am unable to create a namespace (it stalls at the preview stage) unless I explicitly use the apply function (on the kubeconfig, provider, and namespace). I have tried using Output and depends_on with no success. Is anyone familiar with this issue, or can you suggest an example approach? I would like to be able to create the namespace without breaking the EKS creation out into a different stack (though I tried that too). I also manually verified the kubeconfig works on the command line. Using Pulumi 3.150 / Python. Thank you. P.S. From everyone's experience, is there a preferred language for Pulumi, or are they all the same? (I saw most examples are in TS.)
m
There's no preferred language but there are some quirks and restrictions when it comes to specific features (like dynamic resource providers). Regarding your problem, you'll have to show your current implementation for anyone to be able to help you figure out what the problem is.
w
I have tried several methods; the base problem is that Pulumi gets stuck at the namespace preview stage (I tried increasing the timeout).
Copy code
# Step 1: Create an IAM role for EKS
eks_role = aws.iam.Role("eks-role",
    assume_role_policy="""{
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {
                    "Service": "<http://eks.amazonaws.com|eks.amazonaws.com>"
                },
                "Action": "sts:AssumeRole"
            }
        ]
    }"""
)

# Attach the necessary policies to the role
eks_role_policy_attachment = aws.iam.RolePolicyAttachment("eks-role-policy-attachment",
    role=eks_role.name,
    policy_arn="arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
)

# Step 2: Create a VPC and subnets
vpc = aws.ec2.Vpc("vpc",
    cidr_block="10.0.0.0/16",
    enable_dns_support=True,
    enable_dns_hostnames=True
)

subnet1 = aws.ec2.Subnet("subnet-1",
    vpc_id=vpc.id,
    cidr_block="10.0.1.0/24",
    availability_zone="eu-north-1a"
)

subnet2 = aws.ec2.Subnet("subnet-2",
    vpc_id=vpc.id,
    cidr_block="10.0.2.0/24",
    availability_zone="eu-north-1b"
)

# Step 3: Create an EKS cluster
eks_cluster = aws.eks.Cluster("eks-cluster",
    role_arn=eks_role.arn,
    vpc_config=aws.eks.ClusterVpcConfigArgs(
        subnet_ids=[subnet1.id, subnet2.id],
    ),
    opts=pulumi.ResourceOptions(depends_on=[eks_role_policy_attachment])
)

# Step 4: Generate a kubeconfig using `apply`
kubeconfig = pulumi.Output.all(eks_cluster.name, eks_cluster.endpoint, eks_cluster.certificate_authority.data).apply(
    lambda args: f"""
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: {args[2]}
    server: {args[1]}
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: aws
  name: aws
current-context: aws
kind: Config
preferences: {{}}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws
      args:
        - "eks"
        - "get-token"
        - "--cluster-name"
        - {args[0]}
"""
)

# Step 5: Create a Kubernetes provider using `apply`
k8s_provider = kubeconfig.apply(
    lambda config: k8s.Provider("k8s-provider",
        kubeconfig=config,
        opts=pulumi.ResourceOptions(depends_on=[eks_cluster])  # Ensure the provider waits for the cluster
    )
)

# Below are several namespace creation options attempted:
namespace = k8s.core.v1.Namespace("test", opts=pulumi.ResourceOptions(provider=k8s_provider))

# # Step 6: Create a namespace using `apply`
# namespace = k8s.core.v1.Namespace("MyNamespace",
#     lambda provider: k8s.core.v1.Namespace("argocd-namespace",
#         metadata={"name": "argocd"},
#         opts=pulumi.ResourceOptions(provider=provider)  # Ensure the namespace waits for the provider
#     )
# )
# namespace = k8s_provider.apply(
#     lambda provider: k8s.core.v1.Namespace("argocd-namespace",
#         metadata={"name": "argocd"},
#         opts=pulumi.ResourceOptions(provider=provider)  # Ensure the namespace waits for the provider
#     )
# )
When I remove the namespace section, the cluster is created. I manually verified the generated kubeconfig, and I also created node groups using the provider, which worked. A reference to a working flow for creating namespaces (preferably in a single run) would be very helpful.
m
As a general note, you don't need `apply()` or `depends_on` in cases where the relationship is clear from connections between outputs and inputs. This will make your code a lot simpler. I suggest you look into using `pulumi-eks`, which wraps the more basic `aws.eks.Cluster` and associated resources in a higher-level component resource. Among other things, it takes care of creating the kubeconfig and even instantiating a Kubernetes provider for you: https://www.pulumi.com/blog/easily-create-and-manage-aws-eks-kubernetes-clusters-with-pulumi/ This will make your life a lot easier and let you focus on your application, rather than dealing with lower-level details of EKS and Pulumi.
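For illustration, here's a minimal sketch of that approach, assuming a pulumi-eks version that exposes the component's built-in `cluster.provider` (as the tutorial uses); the namespace name is just an example:
Copy code
import pulumi
import pulumi_eks as eks
import pulumi_kubernetes as k8s

# The eks.Cluster component stands up the control plane, default node group,
# and kubeconfig, and wires up a Kubernetes provider for the new cluster.
cluster = eks.Cluster("cluster")

# The component's built-in provider can be handed straight to Kubernetes resources.
namespace = k8s.core.v1.Namespace(
    "argocd",
    opts=pulumi.ResourceOptions(provider=cluster.provider),
)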
w
@modern-zebra-45309, thanks for the follow-up. In my example, I am using pulumi-eks to create the cluster. That works perfectly; however, I do want to create a namespace, which I didn't see is managed in that package. The examples I have seen refer to using pulumi_kubernetes and a provider to connect to the cluster. Though the kubeconfig is correct, something at the preview stage is not working. Is there a different way to create namespaces and deploy services onto them? This should be relatively simple: 1. create the EKS cluster (using pulumi-eks + dependencies), 2. create a provider based on the kubeconfig, 3. create a new namespace on the cluster using the provider. However, this doesn't work and seems to get stuck deep in Pulumi's dependency-graph parsing...
m
In your example above, you're not using `pulumi-eks` but `pulumi-aws`, though:
Copy code
eks_cluster = aws.eks.Cluster("eks-cluster", ...
Overall, your code seems rather complicated, and it might very well be that your attempts at managing the relationships between resources manually are causing a deadlock. You can leave this to Pulumi; it's smart enough to know, e.g., that it cannot instantiate the Kubernetes provider prior to the cluster being ready 🙂 With `pulumi-eks`, it looks like this (copying directly from the tutorial I linked and adding the namespace creation):
Copy code
import pulumi
import pulumi_eks as eks
import pulumi_kubernetes as k8s

# Create an EKS cluster.
cluster = eks.Cluster(
    "cluster",
    instance_type="t2.medium",
    desired_capacity=2,
    min_size=1,
    max_size=2,
)

k8s_provider = k8s.Provider("k8s-provider", kubeconfig=cluster.kubeconfig)
# alternatively, this is what the tutorial uses: k8s_provider = cluster.provider

my_namespace = k8s.core.v1.Namespace(
    "my-namespace",
    metadata=k8s.meta.v1.ObjectMetaArgs(name="my-namespace"),
    opts=pulumi.ResourceOptions(provider=k8s_provider),
)
l
I think this bit may have been the issue, and Killian's code is the way to go:
Copy code
# Step 5: Create a Kubernetes provider using `apply`
k8s_provider = kubeconfig.apply(
    lambda config: k8s.Provider("k8s-provider",
        kubeconfig=config,
        opts=pulumi.ResourceOptions(depends_on=[eks_cluster])  # Ensure the provider waits for the cluster
    )
)
This code wouldn't create the provider until provision time, and by then it would be too late for other resources to use it. As far as I know, you cannot create providers inside an `apply`.
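For comparison, a minimal sketch of creating the provider at the top level (against the `kubeconfig` output and imports from the earlier snippet), so the provider resource exists at registration time and no `apply` is needed:
Copy code
# The kubeconfig is an Output[str]; k8s.Provider accepts it as an input directly,
# so the provider is registered immediately and its value resolves at deploy time.
k8s_provider = k8s.Provider("k8s-provider", kubeconfig=kubeconfig)

# Downstream resources can then reference the provider as a plain resource.
namespace = k8s.core.v1.Namespace(
    "argocd-namespace",
    metadata={"name": "argocd"},
    opts=pulumi.ResourceOptions(provider=k8s_provider),
)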
w
I might be doing something wrong here, but even using the suggested code doesn't work... the only thing my code has is:
Copy code
import pulumi
import pulumi_eks as eks
import pulumi_kubernetes as k8s

# Create an EKS cluster.
cluster = eks.Cluster(
    "cluster",
    instance_type="t2.medium",
    desired_capacity=2,
    min_size=1,
    max_size=2,
)

k8s_provider = k8s.Provider("k8s-provider", kubeconfig=cluster.kubeconfig)
# alternatively, this is what the tutorial uses: k8s_provider = cluster.provider

my_namespace = k8s.core.v1.Namespace("my-namespace",
                                     metadata=k8s.meta.v1.ObjectMetaArgs(
                                         name="my-namespace"),
                                     opts=pulumi.ResourceOptions(provider=k8s_provider))
when running preview on an empty stack:
Copy code
pulumi preview
Previewing update (python-eks-testing):
     Type                                   Name                                       Plan
 +   pulumi:pulumi:Stack                    aws-py-eks-python-eks-testing              create
 +   ├─ eks:index:Cluster                   cluster                                    create
 +   │  ├─ eks:index:ServiceRole            cluster-instanceRole                       create
 +   │  │  ├─ aws:iam:Role                  cluster-instanceRole-role                  create
 +   │  │  ├─ aws:iam:RolePolicyAttachment  cluster-instanceRole-03516f97              create
 +   │  │  ├─ aws:iam:RolePolicyAttachment  cluster-instanceRole-3eb088f2              create
 +   │  │  └─ aws:iam:RolePolicyAttachment  cluster-instanceRole-e1b295bd              create
 +   │  ├─ eks:index:ServiceRole            cluster-eksRole                            create
 +   │  │  ├─ aws:iam:Role                  cluster-eksRole-role                       create
 +   │  │  └─ aws:iam:RolePolicyAttachment  cluster-eksRole-4b490823                   create
 +   │  ├─ aws:iam:InstanceProfile          cluster-instanceProfile                    create
 +   │  ├─ aws:ec2:SecurityGroup            cluster-eksClusterSecurityGroup            create
 +   │  ├─ aws:ec2:SecurityGroupRule        cluster-eksClusterInternetEgressRule       create
 +   │  ├─ aws:eks:Cluster                  cluster-eksCluster                         create
 +   │  ├─ pulumi:providers:kubernetes      cluster-eks-k8s                            create
 +   │  ├─ aws:eks:Addon                    cluster-kube-proxy                         create
 +   │  ├─ aws:ec2:SecurityGroup            cluster-nodeSecurityGroup                  create
 +   │  ├─ aws:eks:Addon                    cluster-coredns                            create
 +   │  ├─ kubernetes:core/v1:ConfigMap     cluster-nodeAccess                         create
 +   │  ├─ aws:ec2:SecurityGroupRule        cluster-eksExtApiServerClusterIngressRule  create
 +   │  ├─ aws:ec2:SecurityGroupRule        cluster-eksClusterIngressRule              create
 +   │  ├─ aws:ec2:SecurityGroupRule        cluster-eksNodeClusterIngressRule          create
 +   │  ├─ aws:ec2:SecurityGroupRule        cluster-eksNodeInternetEgressRule          create
 +   │  ├─ eks:index:VpcCniAddon            cluster-vpc-cni                            create
 +   │  │  └─ aws:eks:Addon                 cluster-vpc-cni                            create
 +   │  ├─ aws:ec2:SecurityGroupRule        cluster-eksNodeIngressRule                 create
 +   │  ├─ aws:autoscaling:Group            cluster                                    create
 +   │  └─ aws:ec2:LaunchTemplate           cluster-launchTemplate                     create
 +   └─ pulumi:providers:kubernetes         k8s-provider                               create
and this stalls for over an hour. Removing the namespace creation section allows the preview to finish. I am using a venv on macOS with pulumi 3.150.0, pulumi_aws 6.68.0, pulumi_eks 3.8.1, and pulumi_kubernetes 4.21.1. The debug log gets stuck after:
Copy code
I0225 11:32:19.152297   39124 rpc.go:77] Marshaling property for RPC[ResourceMonitor.RegisterResource(pulumi:providers:kubernetes,k8s-provider)]: command={aws}
I0225 11:32:19.152302   39124 rpc.go:77] Marshaling property for RPC[ResourceMonitor.RegisterResource(pulumi:providers:kubernetes,k8s-provider)]: env={[{map[name:{KUBERNETES_EXEC_INFO} value:{{"apiVersion": "client.authentication.k8s.io/v1beta1"}}]}]}
I0225 11:32:19.152307   39124 rpc.go:77] Marshaling property for RPC[ResourceMonitor.RegisterResource(pulumi:providers:kubernetes,k8s-provider)]: name={KUBERNETES_EXEC_INFO}
I0225 11:32:19.152311   39124 rpc.go:77] Marshaling property for RPC[ResourceMonitor.RegisterResource(pulumi:providers:kubernetes,k8s-provider)]: value={{"apiVersion": "client.authentication.k8s.io/v1beta1"}}
I0225 11:32:19.152317   39124 rpc.go:77] Marshaling property for RPC[ResourceMonitor.RegisterResource(pulumi:providers:kubernetes,k8s-provider)]: version={4.21.1}
I0225 11:32:19.153010   39124 eventsink.go:59] resource registration successful: ty=pulumi:providers:kubernetes, urn=urn:pulumi:python-eks-testing::aws-py-eks::pulumi:providers:kubernetes::k8s-provider
I0225 11:32:19.153020   39124 eventsink.go:62] eventSink::Debug(<{%reset%}>resource registration successful: ty=pulumi:providers:kubernetes, urn=urn:pulumi:python-eks-testing::aws-py-eks::pulumi:providers:kubernetes::k8s-provider<{%reset%}>)
which seems to be getting stuck in the gRPC call? Any ideas? I think the problem is actually with the provider, since any use of it stalls; it seems unable to connect to the cluster, even though I manually verified the kubeconfig.
m
The logs look fine to me. I think you might have a problem with the connection to the EKS endpoint, or with the AWS CLI that's used to fetch the credentials. My suggestion would be to take your machine and local setup out of the equation entirely, e.g., by running the code from a Docker container that you spin up on an EC2 instance or in ECS. There's a Pulumi Docker image at https://hub.docker.com/r/pulumi/pulumi-python; all you'd have to do is install the AWS CLI and copy over your code.