# getting-started
v
Hey folks, another Pulumi noob here! I'm trying to set up an AWS EKS cluster with pods running on Fargate using this flag (https://www.pulumi.com/registry/packages/eks/api-docs/cluster/#fargate_go). The cluster is created properly, the VPC configuration, gateway and friends look OK, the nodes come up on Fargate, my pod is deployed successfully and listening on port 8080, my service maps 80 to 8080, and a load balancer is created with an automatic binding to port 80. But I get a connection timeout when calling the URL of my load balancer (**.eu-central-2.elb.amazonaws.com). I'm probably missing something; where can I find any working samples or articles?
I'm a bit confused about what this fargate=true flag is doing under the hood, and whether a Fargate profile is needed or not?
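The wiring described above, as a minimal sketch (metadata names and labels are assumed, not from the thread):

```yaml
# Sketch of the Service described above; names/labels are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: my-api
spec:
  type: LoadBalancer   # provisions the ELB whose URL is timing out above
  selector:
    app: my-api        # must match the pod's labels
  ports:
    - port: 80         # service / load balancer port
      targetPort: 8080 # container port the pod listens on
```

One thing worth checking: on Fargate there are no EC2 instances to register as load balancer targets, so instance-mode target registration cannot work; IP targets (e.g. via the AWS Load Balancer Controller) are needed, which is where this thread ends up. That is a likely reason for the connection timeout.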
v
Are you running your cluster in a VPC on the private subnets? If so: we make use of the AWS ALB controller, which automatically creates the security groups that allow the LB to talk to the cluster from the internet when the LB is set to public
We make use of pulumi-eks and the normal aws classic library to create our own EKS cluster library; when I'm on my work machine tomorrow I can share more code, but maybe I can help by asking and answering questions 😃
v
it would be awesome 😄
yes, I'm creating the cluster with private/public subnets
devEksCluster, err := eks.NewCluster(ctx, "rh-eks-dev", &eks.ClusterArgs{
			ClusterSecurityGroup: eksSecurityGroup,
			VpcId:                devEksVpc.VpcId,
			Fargate:              pulumi.Bool(true),
			//	CreateOidcProvider:   pulumi.Bool(true),
			// Public subnets will be used for load balancers
			PublicSubnetIds: devEksVpc.PublicSubnetIds,
			// Private subnets will be used for cluster nodes
			PrivateSubnetIds: devEksVpc.PrivateSubnetIds,
		})
v
How are you managing the LB creation?
v
then I found this https://devpress.csdn.net/cicd/62ec87d089d9027116a112c0.html but it's a bit outdated
basically I'm following this
but in Go
v
So the way we do it is: we run our EKS cluster in the private subnets of our VPC, then using the AWS ALB controller we automatically provision ingresses with DNS (via the external-dns operator) that allow access to the applications
All using annotations etc
v
I think I'm not far off
v
We use pulumi in the TypeScript implementation, so tomorrow I can send you some code to help but unfortunately I’m not too familiar with Go
v
ok, it's not a big deal, sometimes it's just a bit tricky to find the correct typing
v
Ok cool, tomorrow at some point I’ll send you some code then and hope it helps, but we just followed the pulumi eks docs and then used the k8s provider and aws classic provider to add the bits we were missing
We use fargate to provision karpenter which automatically scales our nodes so hopefully it can help you
v
currently i'm stuck creating the ingress
image.png
i probably missed something
v
How are you trying to create these? Are you using the aws-lb-controller?
Can be installed using a helm chart
v
yes
i'm doing this :
v
This automatically creates the Ingress for you and routes traffic from the public to the private subnets
v
_, err = helmv3.NewRelease(ctx, "aws-load-balancer-controller", &helmv3.ReleaseArgs{
			Chart:   pulumi.String("aws-load-balancer-controller"),
			Version: pulumi.String("1.6.1"),
			RepositoryOpts: helmv3.RepositoryOptsArgs{
				Repo: pulumi.String("https://aws.github.io/eks-charts"),
			},
			Namespace: pulumi.String("kube-system"),
			Values: pulumi.Map{
				"clusterName": devEksCluster.EksCluster.Name(),
				"serviceAccount": pulumi.Map{
					"create": pulumi.Bool(false),
					"name":   serviceAccount.Metadata.Name(),
				},
				"region": pulumi.String("eu-central-2"),
				"vpcId":  devEksVpc.VpcId,
			},
		})
v
Looks like you’re using quite an old version of it? Also, where are the annotations on the k8s ingress you’re trying to create?
v
ok i'm creating the aws-load-balancer-controller using helm and then i'm trying to create the ingress
_, err = v1.NewIngress(ctx, "api-ingress", &v1.IngressArgs{
			Metadata: &metav1.ObjectMetaArgs{
				Annotations: pulumi.StringMap{
					"kubernetes.io/ingress.class":                pulumi.String("alb"),
					"alb.ingress.kubernetes.io/backend-protocol": pulumi.String("HTTP"),
					"alb.ingress.kubernetes.io/scheme":           pulumi.String("internet-facing"),
					"alb.ingress.kubernetes.io/target-type":      pulumi.String("ip"),
				},
				Labels: pulumi.StringMap{
					"app": pulumi.String("api"),
				},
				Name: pulumi.String("api-ingress"),
			},
			Spec: &v1.IngressSpecArgs{
				Rules: v1.IngressRuleArray{
					&v1.IngressRuleArgs{
						Http: &v1.HTTPIngressRuleValueArgs{
							Paths: v1.HTTPIngressPathArray{
								&v1.HTTPIngressPathArgs{
									Backend: &v1.IngressBackendArgs{
										Service: &v1.IngressServiceBackendArgs{
											Name: pulumi.String("my-service"), // Your service name goes here
											Port: &v1.ServiceBackendPortArgs{
												Number: pulumi.Int(80), // Your service target port number
											},
										},
									},
									Path:     pulumi.String("/*"), // Request path to match
									PathType: pulumi.String("Prefix"),
								},
							},
						},
					},
				},
			},
		}, pulumi.Provider(eksProvider))
v
Are you using the external dns service to automatically create your route 53 entries?
v
not yet, I just wanted to call the load balancer URL directly before playing with DNS
and the private subnet
v
The alb controller should handle the SG creation routing from the public to the private subnets
Without looking at our code I can’t remember if there’s anything extra we did
But I’ll be able to share with you tomorrow
v
awesome
thanks
v
No probs man, if you manage to fix it before tomorrow let me know, otherwise as soon as I can I'll send you everything we have
v
yeah sure
I think the extra steps are documented here https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html, but the tags and the version to use confused me a lot
v
Hope you fix it soon dude, like I said, tomorrow I’ll be around to help again
v
hello, so if I follow all the steps here https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html and here https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html (manually creating the OIDC provider with the correct IAM policies, the aws-load-balancer-controller, and the k8s ingress), I'm able to connect to a service in my cluster using Fargate with pods in the private subnets
now, regarding provisioning everything with pulumi, I feel a bit lost, I was doing this:
devEksCluster, err := eks.NewCluster(ctx, "rh-eks-dev", &eks.ClusterArgs{
			ClusterSecurityGroup: eksSecurityGroup,
			VpcId:                devEksVpc.VpcId,
			Fargate:              pulumi.Bool(true),
			CreateOidcProvider:   pulumi.Bool(true),
			// Public subnets will be used for load balancers
			PublicSubnetIds: devEksVpc.PublicSubnetIds,
			// Private subnets will be used for cluster nodes
			PrivateSubnetIds: devEksVpc.PrivateSubnetIds,
		})
1. ClusterSecurityGroup: eksSecurityGroup
I guess it's not needed, since the alb-controller is already creating things automatically
v
I think that security group relates to the security group that's applied to the nodes
For inter-node communication etc. Work's really busy today so I've not had a chance to grab the code yet
v
yes
CreateOidcProvider:   pulumi.Bool(true),
I don't get how this one works: https://www.pulumi.com/registry/packages/eks/api-docs/cluster/#createoidcprovider_go, I think I have to create it manually first
and then I have to create the alb-controller and the ingress, I will try with some helm charts
v
The OIDC provider is an output of the cluster; then you need to create an IAM service account using it
v
I think the tricky part is really the IAM / service account / OIDC configuration, if you have some details to share that would be great
v
hey mate, sorry been a crazy few days. here's some code to create an iam service account:
import { iam } from '@pulumi/aws';
import * as k8s from '@pulumi/kubernetes';
import {
  all,
  ComponentResource,
  ComponentResourceOptions,
  Input,
  Output,
} from '@pulumi/pulumi';

// `Tags` is an internal type in the original snippet; assumed shape:
type Tags = Record<string, string>;

export type IamServiceAccountArgs = {
  /**
   * Name of the service account to associate with the role
   */
  serviceAccountName: Input<string>;
  /**
   * Namespace to associate with the service account/role
   */
  serviceAccountNamespace: Input<string>;
  /**
   * Whether to create the service account or not
   * (i.e if you are using helm to create the service account, set this to false,
   * then extract the roleArn from this to annotate the serviceaccount with in the helm chart)
   */
  createServiceAccount?: Input<boolean>;
  /**
   * ARNs of the IAM policies to apply to the service account
   */
  policies: Input<string>[];
  /**
   * The issuer of the cluster OIDC provider
   */
  clusterOidcProviderIssuer: Input<string>;
  /**
   * The ARN of the cluster OIDC provider
   */
  clusterOidcProviderArn: Input<string>;
  /**
   * The provider with authentication to the cluster
   */
  provider: k8s.Provider;
  /**
   * The resource tags
   */
  tags: Tags;
};

export class IamServiceAccount extends ComponentResource {
  serviceAccount: k8s.core.v1.ServiceAccount;
  role: iam.Role;
  // TODO: remove roleArn here and just use role.arn in downstream constructors
  roleArn: Output<string>;

  constructor(
    name: string,
    {
      clusterOidcProviderArn,
      clusterOidcProviderIssuer,
      policies,
      provider,
      serviceAccountName,
      serviceAccountNamespace,
      createServiceAccount,
      tags,
    }: IamServiceAccountArgs,
    opts?: ComponentResourceOptions
  ) {
    super('pkg:io:jugo:eks:IamServiceAccount', name, {}, opts);

    const { assumeRolePolicy } = all([
      clusterOidcProviderArn,
      clusterOidcProviderIssuer,
      serviceAccountNamespace,
      serviceAccountName,
    ]).apply(
      ([
        clusterOidcProviderArn,
        clusterOidcProviderIssuer,
        serviceAccountNamespace,
        serviceAccountName,
      ]) => {
        const assumeRolePolicy: iam.PolicyDocument = {
          Version: '2012-10-17',
          Statement: [
            {
              Effect: 'Allow',
              Principal: {
                Federated: clusterOidcProviderArn,
              },
              Action: 'sts:AssumeRoleWithWebIdentity',
              Condition: {
                StringEquals: {
                  [clusterOidcProviderIssuer.replace('https://', '') + ':aud']:
                    'sts.amazonaws.com',
                  [clusterOidcProviderIssuer.replace('https://', '') +
                  ':sub']: `system:serviceaccount:${serviceAccountNamespace}:${serviceAccountName}`,
                },
              },
            },
          ],
        };
        return { assumeRolePolicy };
      }
    );

    createServiceAccount = createServiceAccount ?? true;

    this.role = new iam.Role(
      `${name}-k8s-sa-iam-role`,
      {
        name: `${name}-k8s-sa-iam-role`,
        assumeRolePolicy,
        tags: tags,
      },
      { parent: opts?.parent }
    );

    policies.forEach((policy, i) => {
      new iam.RolePolicyAttachment(
        `${name}-${i}-attachment`,
        {
          role: this.role.name,
          policyArn: policy,
        },
        { parent: opts?.parent }
      );
    });

    if (createServiceAccount) {
      this.serviceAccount = new k8s.core.v1.ServiceAccount(
        name,
        {
          metadata: {
            name: serviceAccountName,
            namespace: serviceAccountNamespace,
            annotations: {
              'eks.amazonaws.com/role-arn': this.role.arn,
            },
          },
        },
        {
          provider,
          parent: opts?.parent,
        }
      );
    }

    this.roleArn = this.role.arn;
  }
}
Hope that helps with the IAM service account creation. You don't need to do anything to configure the OIDC provider, just get the details from the cluster and pass them in to that component resource
v
yes it was helpful
thanks
now I have an issue with my service deployment using helm install/upgrade, but that's another story :D
v
No problem. What issue are you having using helm? Are you using the pulumi provider for it? Or native helm commands
v
i'm doing this
_, err = helmv3.NewRelease(ctx, "some-release", &helmv3.ReleaseArgs{
    Chart: pulumi.String("./infra/charts/some-chart"),
    Name:  pulumi.String("some-release"),
    RecreatePods:  pulumi.Bool(true),
    CleanupOnFail: pulumi.Bool(true),
    ForceUpdate:   pulumi.Bool(true),
}, pulumi.Provider(eksProvider))
the first pulumi up works well
but after I changed stuff in my chart, running pulumi up again gives me this error:
error: cannot re-use a name that is still in use
but I don't want to run a helm install again, I just want a helm upgrade
v
Looks like it’s not adding the managed-by label
v
v
Oh that’s weird, whenever I’ve faced those issues it’s because the managed-by label was missing. Are you adding it yourself or letting pulumi manage it?
v
I dropped everything and ran pulumi refresh, and now everything is working well; my state was probably broken for some reason
BTW i have a question about the ALB ingress config
I'm adding a second service
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ .Chart.Name }}
  annotations:
    alb.ingress.kubernetes.io/group.name: dev-load-balancer
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: instance
    ## SSL Settings
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:eu-central-2:949119048772:certificate/2ec19ed9-cb27-4c1d-9fe5-b8b56aca36f4
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    #alb.ingress.kubernetes.io/ssl-redirect: '443'
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /rh-backend
            pathType: Prefix
            backend:
              service:
                name: rh-backend
                port:
                  number: 80
          - path: /rh-public-api
            pathType: Prefix
            backend:
              service:
                name: rh-public-api
                port:
                  number: 80
here's my config, but it's not working well
these two services are on different DNS routes
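One likely culprit in the config above: with Fargate there are no node instances, so alb.ingress.kubernetes.io/target-type has to be ip rather than instance. And since the two services live on different DNS names, host-based rules may fit better than path prefixes; group.name keeps both behind the same ALB. A sketch (hostnames are placeholders, other annotations as in the config above):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ .Chart.Name }}
  annotations:
    alb.ingress.kubernetes.io/group.name: dev-load-balancer
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip   # required on Fargate (no EC2 instances)
spec:
  ingressClassName: alb
  rules:
    - host: backend.dev.example.com   # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: rh-backend
                port:
                  number: 80
    - host: api.dev.example.com       # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: rh-public-api
                port:
                  number: 80
```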
v
What issues are you having? I’ve got an example of doing path based routing on aws alb but I’m out at the moment, can send you the code again later on if that’s ok?
v
oh yes it would be great
v
We needed to separate the config and route it to the same service as we had some issues only wanting to put specific paths behind SSO
Are you using the external dns operator?
v
no, should i use it ?
v
That’s what we use, it automatically updates the route 53 zones for you
Using annotations
v
v
The external dns plugin will do that for you
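As a reference for how external-dns picks up the record name, a minimal sketch (hostname is a placeholder; external-dns also infers names from Ingress host rules):

```yaml
# external-dns watches Ingresses/Services and creates the Route 53 record.
metadata:
  annotations:
    external-dns.alpha.kubernetes.io/hostname: api.dev.example.com  # placeholder
```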
v
oh
that would be great 😄
v
thx 🙏
v
Again, when I’m home later, I can send you the code for this as well
v
is there a way to do this with pulumi? or should it be done manually?
v
Yeah we manage with pulumi, I’ll send examples later on. Will probably be a few hours
v
At some point I will have to send you some beer
v
Haha, I prefer rum 🤣
But don’t worry about it man! It’s a pleasure to help :)
import { iam } from '@pulumi/aws';
import * as k8s from '@pulumi/kubernetes';
import {
  ComponentResource,
  ComponentResourceOptions,
  Input,
} from '@pulumi/pulumi';
import { serviceAccountNamespace } from './clusterConfig';
import { region } from '@pulumi/aws/config';
import { getAssumeRolePolicyDocument } from './policies';

// `Tags` is an internal type in the original snippet; assumed shape:
type Tags = Record<string, string>;

export type ExternalDNSArgs = {
  /**
   * Name of the environment the cluster will be running in e.g. sandbox, prod, qa
   */
  environment: Input<string>;
  /**
   * Name of the cluster e.g. jugo-monitoring
   */
  clusterName: string;
  /**
   * The DNS update policy: upsert-only or sync. Defaults to sync
   */
  dnsUpdatePolicy?: string;
  /**
   * The issuer of the cluster OIDC provider
   */
  clusterOidcProviderUrl: Input<string>;
  /**
   * The ARN of the cluster OIDC provider
   */
  clusterOidcProviderArn: Input<string>;
  /**
   * The provider with authentication to the cluster
   */
  provider: k8s.Provider;
  /**
   * The resource tags
   */
  tags: Tags;
};

export class ExternalDNS extends ComponentResource {
  constructor(
    name: string,
    {
      environment,
      clusterName,
      dnsUpdatePolicy,
      clusterOidcProviderArn,
      clusterOidcProviderUrl,
      provider,
      tags,
    }: ExternalDNSArgs,
    opts?: ComponentResourceOptions
  ) {
    super('pkg:io:jugo:eks:ExternalDNS', name, {}, opts);

    const externalDNSPolicyDocument: iam.PolicyDocument = {
      Version: '2012-10-17',
      Statement: [
        {
          Effect: 'Allow',
          Action: ['route53:ChangeResourceRecordSets'],
          Resource: ['arn:aws:route53:::hostedzone/*'],
        },
        {
          Effect: 'Allow',
          Action: ['route53:ListHostedZones', 'route53:ListResourceRecordSets'],
          Resource: ['*'],
        },
      ],
    };

    const saName = `${clusterName}-external-dns`;

    const assumeRolePolicy = getAssumeRolePolicyDocument({
      clusterOidcProviderUrl,
      clusterOidcProviderArn,
      provider,
      saName,
    });

    const role = new iam.Role(
      `${clusterName}-external-dns-role`,
      {
        name: `${clusterName}-external-dns-role`,
        assumeRolePolicy,
        tags,
      },
      { parent: this }
    );

    const externalDnsPolicy = new iam.Policy(
      `${clusterName}-external-dns-policy`,
      {
        name: `${clusterName}-external-dns-policy`,
        policy: externalDNSPolicyDocument,
      },
      { parent: this }
    );

    new iam.RolePolicyAttachment(
      `${clusterName}-external-dns-policy-attachment`,
      {
        role: role.name,
        policyArn: externalDnsPolicy.arn,
      },
      { parent: this }
    );

    const serviceAccount = new k8s.core.v1.ServiceAccount(
      saName,
      {
        metadata: {
          name: saName,
          namespace: serviceAccountNamespace,
          annotations: {
            'eks.amazonaws.com/role-arn': role.arn,
          },
          labels: {
            'app.kubernetes.io/component': 'controller',
            'app.kubernetes.io/name': saName,
          },
        },
      },
      {
        provider,
        parent: this,
      }
    );


    let domainFilter = environment;
    // See: https://github.com/bitnami/charts/tree/main/bitnami/external-dns
    new k8s.helm.v3.Release(
      `${clusterName}-external-dns`,
      {
        name: 'external-dns',
        chart: 'external-dns',
        version: '6.18.0',
        namespace: 'kube-system',
        values: {
          provider: 'aws',
          serviceAccount: {
            create: false,
            name: serviceAccount.metadata.name,
          },
          aws: {
            region: region,
            roleArn: role.arn,
          },
          zoneType: 'public',
          txtOwnerId: clusterName,
          domainFilters: [`${domainFilter}`],
          policy: dnsUpdatePolicy || 'sync',
          tolerations: [
            {
              key: 'jugo.io/node-role',
              operator: 'Exists',
              effect: 'NoSchedule',
            },
          ],
          nodeSelector: {
            'jugo.io/node-role': 'system',
          },
        },
        repositoryOpts: {
          repo: 'https://charts.bitnami.com/bitnami',
        },
      },
      {
        provider,
        customTimeouts: { create: '2m' },
        parent: this,
      }
    );
  }
}
that's the external-dns deployment
what examples do you need now? how to use the annotations?
v
Hey, I was busy with many things, but today I implemented the external-dns with pulumi and it's working like a charm 🙂
my pulumi stack is almost good now
did you set up AWS Container Insights? how do you monitor your containers?
v
Hey man, hoped my code snippets helped. We use metrics server and fluent bit for container logging and monitoring. What problem are you trying to solve?
v
yep, the snippets helped, thanks
I did the logging part, so I'm using Fluent Bit to ship the container logs to CloudWatch
but I also wanted the metrics and dashboards
in Container Insights
but I'm not sure how to do that with Fargate
I guess I have to follow this: https://aws-otel.github.io/docs/introduction, but if you have any hints 🙂
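For Container Insights on Fargate, the ADOT approach in that link boils down to a collector that scrapes kubelet/cAdvisor with the Prometheus receiver and exports EMF logs to CloudWatch. A heavily trimmed sketch of the collector config (assumption: the `prometheus` receiver and `awsemf` exporter shipped with ADOT; the full config in the aws-otel docs has more relabeling and processors):

```yaml
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: kubelets-cadvisor
          scheme: https
          metrics_path: /metrics/cadvisor
          kubernetes_sd_configs:
            - role: node
          authorization:
            credentials_file: /var/run/secrets/kubernetes.io/serviceaccount/token
          tls_config:
            insecure_skip_verify: true
exporters:
  awsemf:   # writes CloudWatch EMF logs that back the Container Insights dashboards
    namespace: ContainerInsights
    log_group_name: '/aws/containerinsights/{ClusterName}/performance'
    region: eu-central-2
service:
  pipelines:
    metrics:
      receivers: [prometheus]
      exporters: [awsemf]
```

The collector itself runs as a StatefulSet in the cluster with an IRSA role allowing CloudWatch writes, same service-account pattern as the ALB controller and external-dns above.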