# aws
Hello, I am trying to create a CNAME record in Route 53 using Pulumi, with the "Value/Route traffic to" field populated with the DNS name of an ALB that is auto-created by the AWS Load Balancer Controller upon detecting the ingress of my application. This breaks down into 2 questions: 1. Is there a way to get the DNS name of the ALB when it's created? I know of `aws.lb.getLoadBalancer`, but that requires the name and ARN of the ALB, so I would need a way to get those. 2. How do I put the DNS name of the ALB into the CNAME record? Looking at the Pulumi docs here https://www.pulumi.com/registry/packages/aws/api-docs/route53/record/ it doesn't seem as though there is a "Value/Route traffic to" field to populate (though I could be wrong), and the `records` field used in the example doesn't actually have any explanation. Thank you!
`dnsName` is a Pulumi Output of the load balancer resource. Set the Input of the `route53.Record` to the Output of the load balancer (using caps because these are proper Pulumi terms). This example uses an ELB, but it's basically the same: https://www.pulumi.com/registry/packages/aws/api-docs/route53/record/#alias-record
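A minimal sketch of that Output-to-Input wiring, assuming a manually created ALB; the resource names, subnet IDs, and hosted zone ID here are placeholders, not values from this thread:

```typescript
import * as aws from "@pulumi/aws";

const subnetIds = ["subnet-aaaa1111", "subnet-bbbb2222"]; // placeholder subnet IDs
const zoneId = "Z123456ABCDEFG";                          // placeholder hosted zone ID

// Hypothetical ALB; in practice this is whatever load balancer the program creates.
const lb = new aws.lb.LoadBalancer("app-lb", {
    loadBalancerType: "application",
    subnets: subnetIds,
});

// Alias record: the record's Inputs are fed directly from the LB's Outputs,
// so Pulumi waits for the ALB before creating the record.
const record = new aws.route53.Record("app-alias", {
    zoneId: zoneId,
    name: "app.example.com",
    type: "A",
    aliases: [{
        name: lb.dnsName,            // Output<string> flowing into an Input
        zoneId: lb.zoneId,           // the ALB's own hosted zone ID
        evaluateTargetHealth: true,
    }],
});
```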
Okay, I see that and it makes sense, but I do not manually create a load balancer resource; my ALB Ingress Controller that is on my EKS cluster does after it detects an ingress from my application. So if I don't create that load balancer manually, is there a way to capture the DNS name?
This is how I create the ALB Ingress Controller, via helm Release.
// Declare ALB Ingress Controller with Helm
const albController = new k8s.helm.v3.Release("alb-controller", {
    chart: "aws-load-balancer-controller",
    repositoryOpts: {
        repo: "https://aws.github.io/eks-charts",
    },
    namespace: "kube-system",
    values: {
        autoDiscoverAwsRegion: "true",
        serviceAccount: {
            name: lbSaName,
            create: false,
        },
        vpcId: vpcId,
        clusterName: clusterName,
        podLabels: {
            app: "kube-system",
        },
    },
    transformations: [remove_status],
}, { provider: k8sProvider });
Got it - it's a K8s-managed LB. I would recommend that you have K8s create the Route53 record, since it's what knows about the lifecycle of the ALB.
And therefore K8s will know to tear down the record if you remove the chart.
My knowledge may be out of date here, but this is what I used to solve this problem when I last set up K8s on AWS: https://github.com/kubernetes-sigs/external-dns
Thank you for your help thus far. I don't think I'm going to be able to swing K8s creating the Route53 record, as a lot of my deployment automation depends on using Pulumi. That being said, I have found that the ALB DNS name can be exported from the ingress status. Which allows me to easily create a Route 53 CNAME record manually, but doing it through Pulumi I'm still not sure on my second question. Do I just put the DNS name in the `records` field with the `type` field specified as CNAME?
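For reference, a sketch of that shape: the `Ingress.get` lookup, resource names, and zone ID below are assumptions for illustration, not values from this thread.

```typescript
import * as aws from "@pulumi/aws";
import * as k8s from "@pulumi/kubernetes";

// Assumed: read back the ingress object that the ALB controller reconciled.
// The second argument is "<namespace>/<name>".
const ingress = k8s.networking.v1.Ingress.get("app-ingress", "default/app-ingress");

// Once the ALB is provisioned, its DNS name appears on the ingress status.
// Pulumi's Output lifting lets us drill into it directly.
const albDnsName = ingress.status.loadBalancer.ingress[0].hostname;

// CNAME record pointing at the ALB: the DNS name goes in `records`,
// and `type` is set to "CNAME".
const cname = new aws.route53.Record("app-cname", {
    zoneId: "Z123456ABCDEFG",        // placeholder hosted zone ID
    name: "app.test.example.com",
    type: "CNAME",
    ttl: 300,
    records: [albDnsName],           // Output<string> flows straight into the Input
});
```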
Have you tried using externalDNS at all?
You'd still be using Pulumi in an idiomatic way by having K8s create the DNS entry, since it's the one creating the ALB. Make sure you're creating an Alias record regardless: https://www.pulumi.com/registry/packages/aws/api-docs/route53/record/#alias-record
No, mainly because after reading through the documentation I don't think it works well with my use case? Which is to create a Route53 CNAME record temporarily for ephemeral test environments, and then have it deleted with the rest of the test environment resources. None of this is managed manually, and thus far I've had success with using Pulumi to manage those resources on a GitHub Actions runner. Sorry if it seems like I'm ignoring that advice (and I'm almost certainly not fully understanding how ExternalDNS works and is set up), I'm not, I mainly just want to see if I can do this in keeping with the pattern of the rest of my system the way I'm envisioning it before looking at other solutions.
No worries - no offense taken! ExternalDNS does what the ALB controller does, except it manages Route53 entries for services deployed on the cluster. Once a public-facing K8s service is torn down, ExternalDNS will remove its Route53 entry. ExternalDNS itself is also deployed as a K8s service (or it was when I last checked a few years ago). My worry is that your resources are gonna get out of sync because you have 2 separate things managing the lifecycle of the infra. I would suggest that once you cross over to resources that are managed by K8s (that is, after the Helm chart is deployed), you don't go back to having dependent resources (the Route 53 entry depends on the ALB created by K8s) managed by Pulumi, because they aren't visible to Pulumi (because Pulumi didn't create them). This might not be as big of a deal if you can get the attribute back from the chart as you describe.