# kubernetes
a
This fragment of Pulumi code that uses the nginx Ingress Controller Helm chart works, but it does not apply my k8sDnsLabel to the PublicIP that the IngressController creates in Azure. I'm wondering if any of you have any insights.
const env = pulumi.getStack(); // reference to this stack
const stackId = `dave/aks/${env}`;
const aksStack = new pulumi.StackReference(stackId);
const k8sDnsName = aksStack.getOutput("k8sDnsName"); // <-- This is "identity-auth-dev"

// Deploy ingress-controller using helm to AKS Cluster
const options = {
    chart: "nginx-ingress-controller",
    namespace: "kube-system",
    repo: "bitnami",
    values: {
        annotations: {
            "<http://service.beta.kubernetes.io/azure-dns-label-name|service.beta.kubernetes.io/azure-dns-label-name>": "identity-auth-dev"
        },
        resources: { requests : {memory: "150Mi", cpu: "100m"}},
        serviceType: "LoadBalancer",
        nodeCount: 1,
    }
};
const nginxIngress = new k8s.helm.v3.Chart("nginx", options, {provider: k8sProvider });
b
hey there! can you elaborate a little more on the k8sDnsLabel? I don't see it in this code, although I could be missing it!
a
Yes, I've updated the example. I spoke of k8sDnsLabel but removed it and just put the string in the annotation.
Sorry
I'm thinking the annotation doesn't work, or I'm using it wrong.
I did have a similar annotation in there that made the LB internal and that did work.
I tried this direct helm chart install and it also did not work.
helm install nginx-ingress bitnami/nginx-ingress-controller \
--namespace ingress \
--set controller.replicaCount=1 \
--set controller.service.annotations."service\.beta\.kubernetes\.io\/azure-dns-label-name"=identity-auth-dev
b
i'll try repro this when I get a sec, i believe I've got this working somewhere
a
That would be great @billowy-army-68599!
I've been standing up and tearing down LBs for an hour, using different combinations of stuff. I've found no way to make it work yet.
r
If I remember correctly, you also need to specify the resource group where your public IP is located. Remember that the cluster nodes are put into a separate resource group from the one where you create the "parent" Kubernetes cluster resource. By default, the VMs, NICs, and so on are created in a resource group named MC_... and this is also where Azure tries to find the public IP...
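Roughly like this (just a sketch; `cluster` is whatever your azure.containerservice.KubernetesCluster is called in the AKS stack):
// In the AKS stack: export the auto-generated MC_... group (sketch, assuming @pulumi/azure)
export const nodeResourceGroup = cluster.nodeResourceGroup;

// In the ingress stack: pull it in via the existing StackReference
const clusterResourceGroup = aksStack.getOutput("nodeResourceGroup");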
a
Thanks @rhythmic-finland-36256. I'll try that. I had the resourceGroup annotation in there at some point, but removed it when trying to simplify. I'll put it back into this simple example and see what happens.
k
@ancient-megabyte-79588, did adding the resource group annotation fix this?
I have to do the same thing, tomorrow
a
I'll
pulumi destroy
and
pulumi up
today and I'll let you know.
@kind-mechanic-53546
k
Thanks 🙂
a
@kind-mechanic-53546 This does not work
// from the start of the app
const k8sDnsName = aksStack.getOutput("k8sDnsName");
const clusterResourceGroup = aksStack.getOutput("nodeResourceGroup");

// Deploy ingress-controller using helm to AKS Cluster
const nginxIngress = new k8s.helm.v3.Chart("nginx", {
    chart: "nginx-ingress-controller",
    namespace: "kube-system",
    repo: "bitnami",
    values: {
        annotations: {
            "<http://service.beta.kubernetes.io/azure-dns-label-name|service.beta.kubernetes.io/azure-dns-label-name>": k8sDnsName,
            "<http://service.beta.kubernetes.io/azure-load-balancer-resource-group|service.beta.kubernetes.io/azure-load-balancer-resource-group>": clusterResourceGroup,

        },
        resources: { requests : {memory: "150Mi", cpu: "100m"}},
        serviceType: "LoadBalancer",
        nodeCount: 1,
    }
}, {provider: k8sProvider });
I use this script to do the work manually
# get the PublicIP object for our load balancer
$pip = az network public-ip list --query "[?tags.service=='kube-system/nginx-nginx-ingress-controller']" | ConvertFrom-Json
# update the --dns-name and refresh our object in PowerShell
$pip = az network public-ip update -n $pip.name -g $pip.resourceGroup  --dns-name "identity-auth-dev" | ConvertFrom-Json
# set the clusterFQDN in Pulumi
pulumi config set clusterFQDN $pip.dnsSettings.fqdn
# verify that we can resolve our DNS entry
nslookup $pip.dnsSettings.fqdn
r
You also need to hand the public IP address that should be used in to the ingress-controller service of type LoadBalancer. That's how we do it. It seems a bit redundant, but in the end you also need to do some DNS setup for this specific IP to match the domain names used for the ingresses, so with Pulumi it's pretty okay to pass in the
azure-load-balancer-resource-group
and the
loadBalancerIP
as both are probably created from the same pulumi stack.
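Roughly like this (just a sketch, not our exact code; assumes @pulumi/azure, the stable nginx-ingress chart's controller.service layout, and the clusterResourceGroup / k8sProvider values from this thread — the other names are made up):
import * as azure from "@pulumi/azure";
import * as k8s from "@pulumi/kubernetes";

// Pre-create the public IP in the MC_... node resource group so Azure can find it
const lbPublicIp = new azure.network.PublicIp("ingress-pip", {
    resourceGroupName: clusterResourceGroup,
    location: "WestUS",
    allocationMethod: "Static",
    sku: "Standard",
});

// Hand both the IP and its resource group into the ingress-controller service
const nginxIngress = new k8s.helm.v3.Chart("nginx", {
    chart: "nginx-ingress",
    namespace: "kube-system",
    fetchOpts: { repo: "https://kubernetes-charts.storage.googleapis.com/" },
    values: {
        controller: {
            service: {
                loadBalancerIP: lbPublicIp.ipAddress,
                annotations: {
                    "service.beta.kubernetes.io/azure-load-balancer-resource-group": clusterResourceGroup,
                },
            },
        },
    },
}, { provider: k8sProvider });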
a
I've found that you only need to pass in the PublicIp address if you are using an existing IP address, which I am not. When I started this, I was using a pre-created static IP address, but I found a limitation that I can't remember at the moment. So, I let the IngressController create the LB and the PublicIp address and use the DNS Name Label to create a consistent Azure-provided DNS entry, which I have as a CNAME entry at my DNS provider.
a
So in this case, my DNS name label of
identity-auth-dev
would turn into identity-auth-dev.westus.cloudapp.azure.com which I put in my DNS provider (GoDaddy) as a CNAME entry of
auth.codingwithdave.xyz
pointing to identity-auth-dev.westus.cloudapp.azure.com
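(sketch) That FQDN could even be exported from the stack to set up the CNAME — hypothetical, just string-building from the label and region:
import * as pulumi from "@pulumi/pulumi";

// expected Azure-provided FQDN: <dns-label>.<region>.cloudapp.azure.com
export const clusterFQDN = pulumi.interpolate`${k8sDnsName}.westus.cloudapp.azure.com`;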
r
All right, cool. So you can know that beforehand as you define the dns name when creating the ingress controller.
a
You don't need to do that if you are not providing the PublicIp address.
I don't have a PublicIp resource when creating the helm chart
the IngressController, in Azure, does all of that for me.
I was using that exact
controller.service.LoadBalancerIP
at the start.
r
I understand. It’s just a matter of which action causes which result. The CNAME way is also a good idea.
But then you also hand in the responsibility to create a unique name for the dns label to the ingress controller deployment, right?
a
I think something is broken or changed in the nginx IngressController or in Azure, such that the
annotations: {
            "<http://service.beta.kubernetes.io/azure-dns-label-name|service.beta.kubernetes.io/azure-dns-label-name>": k8sDnsName,
            "<http://service.beta.kubernetes.io/azure-load-balancer-resource-group|service.beta.kubernetes.io/azure-load-balancer-resource-group>": clusterResourceGroup,
},
annotation doesn't work anymore.
r
So deploying the same stack twice requires you to provide different dns labels so that everything works.
For us that still works…
a
What is the scenario that you'd deploy the same stack twice to two different places?
r
OKAY, I didn’t redeploy the ingress controller in the last week 🙂
Not the same pulumi stack, but the same set of resources once for test and once for prod
in the same region
So that’s one more place where you need to make sure to create unique (and also still available) names in azure.
a
My stacks are dev/test/stage/prod and I would use those names to help with the uniqueness constraint of the DNS Name label
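e.g. something like this (sketch, hypothetical prefix), so dev/test/stage/prod each get their own label:
const env = pulumi.getStack();               // "dev", "test", "stage", "prod"
const k8sDnsName = `identity-auth-${env}`;   // -> identity-auth-dev.westus.cloudapp.azure.com, etc.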
r
But yeah, that’s the same like with public facing storage accounts…
a
It isn't that way in my example..
I could have the same stack in different regions and they'd get different URLs since the region is baked into the *.cloudapp.azure.com part of the URL
r
Hopefully nobody created `${yourprefix}-prod` before you first deploy to prod.
a
My prefix could be a GUID.. I haven't done that for simplicity's sake..
r
I try to avoid such cases if possible.
All right. So: problem solved.
a
Well, not exactly. The DNS name label doesn't work during the
pulumi up
so I have to do it with a PS script against the azure-cli after the deployment.
r
Oh, sorry. I thought that worked…
a
Something is wrong with the way Azure or nginx IngressController are handling that annotation
I have to do this after the
pulumi up
# get the PublicIP object for our load balancer
$pip = az network public-ip list --query "[?tags.service=='kube-system/nginx-nginx-ingress-controller']" | ConvertFrom-Json
# update the --dns-name and refresh our object in PowerShell
$pip = az network public-ip update -n $pip.name -g $pip.resourceGroup  --dns-name "identity-auth-dev" | ConvertFrom-Json
# set the clusterFQDN in Pulumi
pulumi config set clusterFQDN $pip.dnsSettings.fqdn
# verify that we can resolve our DNS entry
nslookup $pip.dnsSettings.fqdn
r
Ah, so the dns-name-label isn’t set automatically when Azure creates the public IP. That was the empty text box…
a
Yeah
I've got a question in this Github issue
Here is a blog post detailing my current approach https://westerndevs.com/kubernetes/kubernetes-my-journey-part-8/
r
Nice setup. We do the same with traefik.
a
I had
traefik
in there for a bit, but found examples harder to find. I'm really interested in continuing that exploration.
r
If you want to solve it and your Azure service principal has the permissions to create public IPs, you still have the option to create it (even in the cluster resource group), add the dns-name label there and hand it into the helm deployment.
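Sketch again (assumes @pulumi/azure; the label value is just an example):
// Create the IP yourself with the dns-name label already attached,
// then hand pip.ipAddress into the chart as loadBalancerIP as above
const pip = new azure.network.PublicIp("ingress-pip", {
    resourceGroupName: clusterResourceGroup,   // the MC_... node resource group
    location: "WestUS",
    allocationMethod: "Static",
    sku: "Standard",
    domainNameLabel: "identity-auth-dev",      // -> identity-auth-dev.westus.cloudapp.azure.com
});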
a
That is certainly a path that I'll explore. I'd like the current annotations to do the right thing, or find documentation about what I'm doing wrong, because some people seem to be able to get it to work.
r
That sounds like a reason not to rely on this magic still working after the next upgrade of AKS. I'll stick with pre-created IPs and hand them into the service.
Still, setting the dns-name label might be a nice way to decouple the real dns entries from the concrete ips used…
a
There was a reason I switched from a pre-created public IP .. I can't recall now. And the CNAME entry is really easy to manage.
I'm trying to upgrade the version of k8s in AKS to 1.18.2 (preview) ... see if that makes a difference, after reading that bug/issue in the AKS GitHub repo
r
All right. Creating the IP manually (from pulumi) and assigning a dns-name would still work for the CNAME way of dealing with the real dns names. So it’s basically just about if you want to create a public IP resource from the stack where you have your helm charts.
I was in that situation too, using too many stacks for base infra, base Kubernetes deployments, and apps, so eventually I merged them back into one because there were too many interconnections.
E.g. we deployed one application that doesn’t work over http, so it needs its own loadbalancer (and the public IP). That was in my
kubernetes apps
stack, but creating a public IP from there didn’t feel right.
And creating that IP in some base stack and exporting it would have worked technically but then the base stack would make assumptions about the apps to be deployed.
k
Thanks for the update @ancient-megabyte-79588 🙂
@ancient-megabyte-79588, @rhythmic-finland-36256, I've confirmed this works, either with pre-supplying an IP or letting it auto-create. Took me a while destroying and recreating to get the annotations right. AKS K8s 1.16.9
// Deploy NGINX ingress controller using the Helm chart.
const nginx = new k8s.helm.v2.Chart(
  "nginx-ingress-helm-chart",
  {
    namespace: conf.k8sClusterConfig.ingressNsName,
    chart: "nginx-ingress",
    version: nginx_helm_chart_version,
    fetchOpts: { repo: "https://kubernetes-charts.storage.googleapis.com/" },
    values: {
      controller: {
        publishService: { enabled: true },
        service: {
          //loadBalancerIP: lbPublicIp.ipAddress.apply((v) => v),
          annotations: {
            "<http://service.beta.kubernetes.io/azure-dns-label-name|service.beta.kubernetes.io/azure-dns-label-name>":
              "asdfluahsdfasdf",
          },
        },
      },
    },
    transformations: [
      (obj: any) => {
        // Do transformations on the YAML to set the namespace
        if (obj.metadata) {
          obj.metadata.namespace = conf.k8sClusterConfig.ingressNsName;
        }
      },
    ],
  },
  { provider: provider }
);
Oh, and I'm using a different repo from you @ancient-megabyte-79588 (default in the crosswalk guides IIRC)
a
Do you have a link to the repo?
k
this one?
a
Oh.. for the helm chart.. ok .. gotcha
What did you determine that
publishService: { enabled: true },
does?
I think I"m using Helm v3 as well
const nginxIngress = new k8s.helm.v3.Chart("nginx", {
k
umm
ah, yes, there is that too
a
I don't think the v2 vs. v3 is important
Just a point of interest
k
Re: publishService
Only the LoadBalancer Service knows the IP address of the automatically created Load Balancer. Some apps (such as ExternalDNS) need to know its IP address, but can only read the configuration of an Ingress. The Controller can be configured to publish the IP address on each Ingress by setting the `controller.publishService.enabled` parameter to `true` during `helm install`. It is recommended to enable this setting to support applications that may depend on the IP address of the Load Balancer.
Probably not relevant, think it was just copied over
I forked the crosswalk stacks originally so most code is Pulumi's great work
a
awesome! Thanks a ton for finding this. I'm tearing down and standing up my cluster again right now.
I don't think we need the publishService parameter for Azure, but I've put it in for the moment.
k
NP, thanks for doing the initial work 🙂
a
Well, this is a bit frustrating
No DNS Label
k
gah
a
// Deploy ingress-controller using helm to AKS Cluster
const nginxIngress = new k8s.helm.v3.Chart("nginx", {
    chart: "nginx-ingress-controller",
    namespace: "kube-system",
    repo: "bitnami",
    values: {
        controller: {
            publishService: { enabled: true },
            service: {
              annotations: {
                "<http://service.beta.kubernetes.io/azure-dns-label-name|service.beta.kubernetes.io/azure-dns-label-name>": "k8sDnsName",
                "<http://service.beta.kubernetes.io/azure-load-balancer-resource-group|service.beta.kubernetes.io/azure-load-balancer-resource-group>": clusterResourceGroup,
                }
            },
        },
        resources: { requests : {memory: "150Mi", cpu: "100m"}},
        serviceType: "LoadBalancer",
        nodeCount: 1,
    }
}, {provider: k8sProvider });
I'll try removing the cluster resource group annotation
I also tried upgrading my whole cluster to 1.18.2
prior to this.. and that didn't help...
k
just pulling down and adding your exact code
a
I was going to do that with your code next too! 😄
k
Odd, your SKU is Basic, mine is Standard...
a
Interesting..
I don't know what controls that
Your chart is different
chart: "nginx-ingress",
is yours
chart: "nginx-ingress-controller",
is mine
k
yep
do you know the diff off the top of your head? I don't
there's so much I just don't know at this stage 😞
a
I don't know the difference either.. I imagine you can find the chart definitions in github
I'll go look eventually
Ok.. hold on...
This may be a PEBKAC error
I am editing the wrong index.ts
(╯°□°)╯︵ ┻━┻
k
haha
I've done that before...
a
ok.. I'm trying on what should be the real index.ts now!
@kind-mechanic-53546 in case you didn't see this in another thread, I have blogged about a ton of the work I've been doing. You can check it out here. https://westerndevs.com/kubernetes/kubernetes-my-journey/
All of the pulumi stuff starts at part 7
k
yep, legendary 🙂
out of curiosity, why did you go with IDSRV4 vs AAD B2C or AAD?
a
That has turned out to be a great question... The decision to do ID4 was made in December. A bunch of work had gone into evaluating ID4, we didn't really have exposure to AAD B2C, and the AAD External Identities features announced at Build really throw a loop into things
k
no idea which is the right one, I've used idsrv4 in the past for an on-prem system and it was great
a
We have apps and devices that will need to be managed.. I don't have an answer for that with AAD B2C yet.. for us .. it isn't just people.
ID4 looks to be a good fit, and while I want to investigate B2C and External Identities, I don't know that I need to put the project on hold while we re-do it to use the Azure services.
k
AAD B2C is a bit of a struggle to get working, and no good option for MFA
SMS only, 🤮
yep, ID4 is fine
a
ID4 has MFA built in .. not as nice as the app with push notifications, but authenticator with rolling codes is a good start.
k
my app is B2C only ATM
a
So... my IngressController didn't even get provisioned! 😄
k
well
a
No LB either in Azure
I'm going to change the chart
Interesting.. bitnami doesn't know about the
nginx-ingress
chart
k
I cannot get it to work with the bitnami controller chart
a
If this has been a bitnami chart problem, I'll be mad
I'm standing up the stable repo nginx-ingress now
k
pretty sure it is a bitnami problem
why bitnami?
a
I don't know.. it was one of the first repos I saw with many examples ..
And... IT WORKED!
I'm going to write a strongly worded letter.
😛
JohnB, thank you so much for all of your help on this!
🙇
🙏
I can go fix the article with an addendum!
FYI, here is the Bitnami link to this chart https://bitnami.com/stack/nginx-ingress-controller/helm
This chart may have a different values.yml ...
k
🎉
is your LB now Standard, instead of Basic?
(and I guess the IP's)
r
Nice that you figured that out. Now that you've written it, I remember that I also stumbled upon strange differences on the Azure side because of Basic instead of Standard SKU. Especially with Standard not being the default…
I once had to tear down a whole cluster because I wanted load balancer monitoring which requires a public IP with Standard SKU. But changing the type of the public IP SKU cannot be done without fully recreating the whole cluster…
I guess if you want your infra to be future proof and enable a feature later that you didn’t think of today, booking a basic sku for anything is a bad idea…
a
@kind-mechanic-53546 no.. my PublicIP address is basic.. which I don't think was impacting this working.
I'm tearing this down this morning, and trying to stand up the bitnami chart again, with a tweak on the
values: {}
object
@kind-mechanic-53546 This also works!
// Deploy ingress-controller using helm to AKS Cluster
const nginxIngress = new k8s.helm.v3.Chart("nginx", {
    chart: "nginx-ingress-controller",
    namespace: "kube-system",
    repo: "bitnami",
    values: {
        service: {
            annotations: {
                "service.beta.kubernetes.io/azure-dns-label-name": k8sDnsName,
                "service.beta.kubernetes.io/azure-load-balancer-resource-group": clusterResourceGroup,
            },
        },
        resources: { requests : {memory: "150Mi", cpu: "100m"}},
        serviceType: "LoadBalancer",
        nodeCount: 1,
    }
}, {provider: k8sProvider });
so in the stable chart, you need to do
controller: { service: { annotations: {} } }
and in the bitnami chart, you leave out the
controller: { }
parent object
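In other words (just sketching the two values shapes from the examples above):
// stable/nginx-ingress: annotations live under controller.service
const stableValues = {
    controller: { service: { annotations: { "service.beta.kubernetes.io/azure-dns-label-name": k8sDnsName } } },
};

// bitnami/nginx-ingress-controller: no controller wrapper, service sits at the top level of values
const bitnamiValues = {
    service: { annotations: { "service.beta.kubernetes.io/azure-dns-label-name": k8sDnsName } },
};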
This is certainly blog-learning worthy, because until now, I sort of assumed all charts were the same if they were "sort of" the same name.
And I saw lots of examples with the stable/nginx-ingress but I was always assuming that the only difference with the bitnami one was the name, so when I tweaked my
values: {}
object, I wouldn't pay enough attention to which repo/chart I was using and look at that specific chart's GitHub values.yml example.
k
Absolutely, re blog worthy, a rewarding learning exercise 🙂
Do either of you gents know what controls the Basic vs Standard SKU?
I'd prefer to use Basic in dev/test but haven't figured out as yet how to switch
a
I haven't figured that out either. I don't think it is the AKS cluster.. I don't recall picking a SKU for that. I don't indicate what SKU of LB I want in the Helm chart, and the Helm chart/LB provision the publicIp as well.
k
Yep, odd
a
One thing I thought is that perhaps in your Azure region (Australia East) or subscription a basic IP or basic LB isn't available.
k
basic IP def is
not sure about LB
having said that, I'd probably prefer it to be a standard sku for prod
a
Other than availability zones (which we don't use), I don't know enough of what the difference would be to make a choice.
k
Both are available for my subscription
I don't know either, but at the cost of a full teardown / setup in prod...
a bit of future proofing
a
You only have to tear out the Ingress-Controller to get rid of the LB and public IP
k
ah, yes
a
Obviously, you'd take an outage doing that though
k
derp
a
In theory, it could be no-outage...
You could stand up a new LB without the DNS Name label, get it configured properly, then use the azure-cli to flip the DNS name labels, then tear out the old LB.
If you are using the CNAME technique
k
true
re the CNAME technique, so you have a CNAME pointing to your dns name label?
is that right?
then I guess you just have to ensure you never lose control of xxx.cloudapp.azure.com
Regarding Basic vs Standard LB SKUs, Basic only supports 1 node pool
which I'm eventually planning on having multiple
1 best effort with low priority VMSS (or whatever they renamed it), and then 1 SLA
a
That's some good investigation @kind-mechanic-53546. I don't know at the moment if I'll have multiple node pools or multiple clusters, each with a single node pool. But I can investigate how to get a Standard IP. I didn't have a preference. I just took the default.