# kubernetes
a
hello, is it possible to use the
coreDns
aws.eks.Addon
with Fargate in a k8s cluster? https://www.pulumi.com/registry/packages/aws/api-docs/eks/addon/#sts=Create%20a%20Addon%20Resource — the documentation provides some level of information, but when applying it I run into the following error. The full kubectl describe output is in the thread. I suspect the label should be updated in the ResourceOptions somehow to
eks.amazonaws.com/fargate-profile=<cluster name>
Events:
  Type     Reason            Age                       From               Message
  ----     ------            ----                      ----               -------
  Warning  FailedScheduling  2m29s (x5494 over 3d21h)  default-scheduler  0/3 nodes are available: 3 node(s) had taint {eks.amazonaws.com/compute-type: fargate}, that the pod didn't tolerate.
Name:                 coredns-66689d8bc4-gth2b
Namespace:            kube-system
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Node:                 <none>
Labels:               eks.amazonaws.com/component=coredns
                      k8s-app=kube-dns
                      pod-template-hash=66689d8bc4
Annotations:          eks.amazonaws.com/compute-type: ec2
                      kubernetes.io/psp: eks.privileged
Status:               Pending
IP:                   
IPs:                  <none>
Controlled By:        ReplicaSet/coredns-66689d8bc4
Containers:
  coredns:
    Image:       602401143452.dkr.ecr.eu-west-1.amazonaws.com/eks/coredns:v1.8.7-eksbuild.2
    Ports:       53/UDP, 53/TCP, 9153/TCP
    Host Ports:  0/UDP, 0/TCP, 0/TCP
    Args:
      -conf
      /etc/coredns/Corefile
    Limits:
      memory:  170Mi
    Requests:
      cpu:        100m
      memory:     70Mi
    Liveness:     http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
    Readiness:    http-get http://:8080/health delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /etc/coredns from config-volume (ro)
      /tmp from tmp (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-86vzs (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  tmp:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      coredns
    Optional:  false
  kube-api-access-86vzs:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 CriticalAddonsOnly op=Exists
                             node-role.kubernetes.io/master:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                       From               Message
  ----     ------            ----                      ----               -------
  Warning  FailedScheduling  2m29s (x5494 over 3d21h)  default-scheduler  0/3 nodes are available: 3 node(s) had taint {eks.amazonaws.com/compute-type: fargate}, that the pod didn't tolerate.
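The FailedScheduling event above comes down to taint/toleration matching: every Fargate node carries the taint eks.amazonaws.com/compute-type=fargate, and none of CoreDNS's tolerations (listed in the describe output) match it, so no node is eligible. A rough sketch of that matching — simplified, ignoring taint effects and tolerationSeconds, so not the scheduler's actual code:

```python
def tolerates(toleration, taint):
    """True if a single toleration matches a taint (simplified rules)."""
    if toleration.get("operator", "Equal") == "Exists":
        # An Exists toleration with an empty key tolerates every taint.
        if not toleration.get("key"):
            return True
        return toleration["key"] == taint["key"]
    # Default operator is Equal: key and value must both match.
    return (toleration.get("key") == taint["key"]
            and toleration.get("value") == taint["value"])

def schedulable(pod_tolerations, node_taints):
    """A pod fits a node only if every taint on the node is tolerated."""
    return all(any(tolerates(t, taint) for t in pod_tolerations)
               for taint in node_taints)

# Taint on the Fargate nodes, from the FailedScheduling event:
fargate_taints = [{"key": "eks.amazonaws.com/compute-type", "value": "fargate"}]

# CoreDNS tolerations, from the `kubectl describe` output:
coredns_tolerations = [
    {"key": "CriticalAddonsOnly", "operator": "Exists"},
    {"key": "node-role.kubernetes.io/master"},
    {"key": "node.kubernetes.io/not-ready", "operator": "Exists"},
    {"key": "node.kubernetes.io/unreachable", "operator": "Exists"},
]

print(schedulable(coredns_tolerations, fargate_taints))  # -> False: pod stays Pending
```

Adding a toleration for eks.amazonaws.com/compute-type (or removing the annotation that keeps EKS from mutating the pods for Fargate, as discussed below in the thread) is what unblocks scheduling.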
b
what taints do the fargate nodes have?
a
i believe it's namespace *, let me check
# Create default Fargate roles and profiles
self._fargate_role = aws.iam.Role(
    f"{self._id}-fargate",
    assume_role_policy=aws_account.assume_role_policy("eks-fargate-pods.amazonaws.com"),
    opts=pulumi.ResourceOptions(provider=aws_account.provider()),
)
aws.iam.RolePolicyAttachment(
    f"{self._id}-fargate",
    policy_arn="arn:aws:iam::aws:policy/AmazonEKSFargatePodExecutionRolePolicy",
    role=self._fargate_role,
    opts=pulumi.ResourceOptions(provider=aws_account.provider()),
)
self.fargate = aws.eks.FargateProfile(
    self._id,
    cluster_name=self.cluster.name,
    pod_execution_role_arn=self._fargate_role.arn,
    selectors=[
        {"namespace": "*"},
    ],
    subnet_ids=aws_account.private_subnets,
    opts=pulumi.ResourceOptions(provider=aws_account.provider()),
)

# Add CoreDNS add-on to the EKS cluster
aws.eks.Addon(
    f"{self._id}-coredns",
    addon_name="coredns",
    addon_version=coredns_version,
    cluster_name=self.cluster.name,
    opts=pulumi.ResourceOptions(provider=aws_account.provider(), depends_on=[self.fargate]),
)
step 1: a ConfigMap patch to update the labels and add the fargate-profile; step 2: a Deployment patch to remove the ec2 compute-type annotation — https://github.com/jaxxstorm/pulumi-examples/blob/main/python/aws/eks_patch_coredns/__main__.py#L96-L123
and then run
kubectl
to patch CoreDNS
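The kubectl step can be sketched like this (an assumption based on the common fix and the linked example, not verified against this cluster): remove the eks.amazonaws.com/compute-type: ec2 annotation from the CoreDNS pod template so the Fargate scheduler picks the pods up. The one subtle part is that JSON Patch paths (RFC 6901) require the / inside the annotation key to be escaped as ~1:

```python
import json

# Build the JSON-Patch document that removes the compute-type annotation from
# the CoreDNS pod template. Per RFC 6901, "~" in a key is escaped as "~0" and
# "/" as "~1" before the key is embedded in the patch path.
annotation_key = "eks.amazonaws.com/compute-type"
escaped = annotation_key.replace("~", "~0").replace("/", "~1")
patch = [{"op": "remove",
          "path": f"/spec/template/metadata/annotations/{escaped}"}]

# The resulting kubectl invocation to run against the cluster:
cmd = ("kubectl patch deployment coredns -n kube-system "
       f"--type json -p '{json.dumps(patch)}'")
print(cmd)
```

Since the patch changes the pod template, the Deployment controller should roll out a new ReplicaSet on its own, replacing the Pending pods with ones the Fargate scheduler can place.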