# general
w
Does anyone know if pulumi supports alb-ingress-controller creation at the moment?
g
We don't have specific library support for it, but from https://kubernetes-sigs.github.io/aws-alb-ingress-controller/guide/controller/setup/ it looks like you could install it with the relevant Helm chart. Note that you'd need to set up the k8s cluster with the prerequisites yourself, either out of band, or with Pulumi.
w
We're tracking adding this as a first-class thing in https://github.com/pulumi/pulumi-eks/issues/29. @most-pager-38056 may have a report on how he ended up standing it up?
w
Okay, thanks guys. Was hoping to avoid manually setting these up.
m
Hello! It was simpler than I thought, actually. We ended up moving back to GKE, but I can share how I did this. Basically, I used the `aws-alb-ingress-controller` chart (thanks to this amazing Pulumi feature!). It looks like this:
```typescript
import * as k8s from '@pulumi/kubernetes';

export const albChart = new k8s.helm.v2.Chart(
  'alb-ingress-controller',
  {
    chart: 'aws-alb-ingress-controller',
    values: {
      clusterName: cluster.eksCluster.name,
      autoDiscoverAwsRegion: true,
      autoDiscoverAwsVpcID: true,
    },
    fetchOpts: {
      repo: 'https://storage.googleapis.com/kubernetes-charts-incubator',
    },
  },
  {
    providers: { kubernetes: cluster.provider },
  },
);
```
And a policy attachment, as suggested by @white-balloon-205:
```typescript
import { RolePolicyAttachment } from '@pulumi/aws/iam';

export const albPolicyAttachment = new RolePolicyAttachment(
  'alb-ingress-controller-policy-attachment',
  {
    policyArn:
      'arn:aws:iam::ACCOUNT_ID:policy/AmazonEKSALBIngressControllerPolicy',
    role: cluster.instanceRole,
  },
);
```
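(An aside, a sketch rather than what we actually did: if you'd prefer to keep the policy inside the stack instead of creating it by hand, Pulumi can create it from the upstream JSON document. The local file name here is an assumption.)

```typescript
import * as aws from '@pulumi/aws';
import * as fs from 'fs';

// Hypothetical alternative: create the controller policy in-stack from the
// iam-policy.json document downloaded from the aws-alb-ingress-controller repo.
const albPolicy = new aws.iam.Policy('alb-ingress-controller-policy', {
  policy: fs.readFileSync('iam-policy.json', 'utf8'),
});

// The attachment above could then use `policyArn: albPolicy.arn` instead of a
// hard-coded ARN, letting Pulumi manage the policy's lifecycle per stack.
```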
Our policy is not created inside our stack: we were using a single AWS account for both the stage and prod environments, so we decided to create the policy manually in the IAM dashboard. Here is the policy document we used: https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/master/docs/examples/iam-policy.json
Now we can create an Ingress inside our cluster; here is what it looks like:
```typescript
import * as k8s from '@pulumi/kubernetes';

export const ingress = new k8s.extensions.v1beta1.Ingress(
  `${appName}-ingress`,
  {
    metadata: {
      annotations: {
        'kubernetes.io/ingress.class': 'alb',
        'alb.ingress.kubernetes.io/scheme': 'internet-facing',
        'alb.ingress.kubernetes.io/subnets': coreStackReference.outputs.apply(
          outputs => outputs.subnetIds.join(', '),
        ),
        'alb.ingress.kubernetes.io/certificate-arn': coreStackReference.outputs.apply(
          outputs => outputs.originCertificateArn,
        ),
      },
    },
    spec: {
      // ingress spec...
    },
  },
  {
    provider: cluster.provider,
  },
);
```
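(For completeness: the `coreStackReference` used above is a `pulumi.StackReference` to our core stack. A minimal sketch, where the `my-org/core-stack/prod` stack name is an assumption:)

```typescript
import * as pulumi from '@pulumi/pulumi';

// Hypothetical <org>/<project>/<stack> name of the "core" stack that exports
// subnetIds and originCertificateArn.
const coreStackReference = new pulumi.StackReference('my-org/core-stack/prod');

// `coreStackReference.outputs` is an Output wrapping all of the core stack's
// exports, which is why the annotations above call `.apply(...)` on it.
```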
Notice that both the origin certificate ARN and the subnet IDs need to be specified, so the controller knows which subnets to provision the ALB in and which certificate to attach. We decided to use the concept of a "core stack" (with the VPC, SSL certificates, etc.) to avoid recreating these resources in every stack where we wanted to use an ALB. That's it. 🙂
👍 3