# general
c
@white-balloon-205 any idea why creating a GKE K8S Ingress would be stuck like this?
```
++  ├─ kubernetes:extensions:Ingress  hello-kubernetes-us-east1         creating replacement..   [diff: ~metadata]; [2/3] Waiting for update of .status.loadBalancer with hostname/IP
```
r
How long is it stuck like this? I've seen this resolve after 5-10 minutes or so when an external IP was finally allocated and assigned. According to https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer?hl=en
It may take a few minutes for GKE to allocate an external IP address and set up forwarding rules until the load balancer is ready to serve your application.
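For anyone hitting this, a quick way to watch the allocation from the CLI (assuming the Ingress ends up named `hello-kubernetes`, per the code later in the thread):
```
# The ADDRESS column stays empty until GKE finishes allocating the
# external IP; --watch re-prints the row on every status change.
kubectl get ingress hello-kubernetes --watch
```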
w
Yeah - I’ve seen this be quite variable as well on GCP - sometimes nearly instant, other times 5 minutes.
c
@rapid-eye-32575 @white-balloon-205 seems it timed out. When I was provisioning a straight-up `LoadBalancer` for the past few days, it was usually ready within 10 seconds
There's no resources in the console
w
Can you share the code you used to add the Ingress?
c
Yes sir!
```
const helloWorldDeployment = new k8s.apps.v1.Deployment(
    `${name}-${region}`,
    {
      metadata: {
        name: "hello-kubernetes"
      },
      spec: {
        replicas: 3,
        selector: {
          matchLabels: appLabels
        },
        template: {
          metadata: {
            labels: appLabels
          },
          spec: {
            containers: [
              {
                name: name,
                image: "paulbouwer/hello-kubernetes:1.5"
                ],
                ports: [
                  {
                    containerPort: 8080
                  }
                ]
              }
            ]
          }
        }
      }
    },
    { provider: k8sProvider }
  );

  const helloWorldService = new k8s.core.v1.Service(
    `${name}-${region}`,
    {
      metadata: {
        name: name
      },
      spec: {
        selector: appLabels,
        ports: [
          {
            port: 8080
          }
        ]
      }
    },
    { provider: k8sProvider }
  );

  const helloWorldFrontend = new k8s.extensions.v1beta1.Ingress(
    `${name}-${region}`,
    {
      metadata: {
        name: name
      },
      spec: {
        rules: [
          {
            http: {
              paths: [
                {
                  path: "/",
                  backend: {
                    serviceName: name,
                    servicePort: 8080
                  }
                }
              ]
            }
          }
        ]
        // ports: [{ port: 80, targetPort: 8080, protocol: "TCP" }],
        // selector: appLabels,
      }
    },
    { provider: k8sProvider }
  );
```
I don't have a domain yet, just want to get it working with IP first
r
@creamy-jelly-91590 is it a completely new cluster? I.e. is the HTTP load balancing addon enabled?
c
It's new yeah, using Pulumi to provision it
r
Can you share that snippet as well?
c
```
// Default values for constructing the `Cluster` object.
  const defaultClusterOptions = {
    nodeCount: 1,
    nodeMachineType: "n1-standard-1",
    minMasterVersion: kubeVersion,
    masterUsername: "",
    masterPassword: ""
  };

  const cluster = new gcp.container.Cluster(`${name}-${region}`, {
    project,
    region: region,
    initialNodeCount: config.nodeCount || defaultClusterOptions.nodeCount,
    minMasterVersion: defaultClusterOptions.minMasterVersion,
    removeDefaultNodePool: true,
    network: network.name,
    subnetwork: subnet.name,
    masterAuth: {
      username: defaultClusterOptions.masterUsername,
      password: defaultClusterOptions.masterPassword
    }
  });

  const poolName = `${name}-${region}-default-pool`;
  const defaultNodePool = new gcp.container.NodePool(poolName, {
    name: poolName,
    cluster: cluster.name,
    version: kubeVersion,
    initialNodeCount: 1,
    location: region,
    nodeConfig: {
      machineType:
        config.nodeMachineType || defaultClusterOptions.nodeMachineType,
      oauthScopes: [
        "<https://www.googleapis.com/auth/compute>",
        "<https://www.googleapis.com/auth/devstorage.read_only>",
        "<https://www.googleapis.com/auth/logging.write>",
        "<https://www.googleapis.com/auth/monitoring>"
      ]
    },
    management: {
      autoUpgrade: false,
      autoRepair: true
    }
  });
```
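As a side note, the `k8sProvider` used by the Kubernetes resources above never appears in the thread; here is a minimal sketch of the usual Pulumi pattern for deriving it from the cluster outputs (the names and the kubeconfig shape here are assumptions, not the poster's actual code):
```
import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

// Sketch: build a kubeconfig from the cluster's outputs and hand it to a
// Kubernetes provider, so the child resources target this cluster.
const kubeconfig = pulumi
  .all([cluster.name, cluster.endpoint, cluster.masterAuth])
  .apply(([clusterName, endpoint, masterAuth]) => `apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: ${masterAuth.clusterCaCertificate}
    server: https://${endpoint}
  name: ${clusterName}
contexts:
- context: { cluster: ${clusterName}, user: ${clusterName} }
  name: ${clusterName}
current-context: ${clusterName}
kind: Config
users:
- name: ${clusterName}
  user:
    auth-provider:
      name: gcp
      config:
        cmd-path: gcloud
        cmd-args: config config-helper --format=json
        token-key: '{.credential.access_token}'
        expiry-key: '{.credential.token_expiry}'
`);

const k8sProvider = new k8s.Provider(`${name}-${region}`, { kubeconfig });
```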
r
Also you can check the addons in the cluster detail / config page under "addons"
c
Yes, HTTP load balancing is set to enabled in the console
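For completeness, that addon can also be pinned explicitly in the Pulumi cluster args; a sketch layered onto the cluster snippet above (the addon is enabled on GKE by default, so this only makes the dependency visible):
```
const cluster = new gcp.container.Cluster(`${name}-${region}`, {
  // ...other args as in the snippet above...
  addonsConfig: {
    // The HTTP load balancing addon is what turns Ingress resources
    // into Google Cloud HTTP(S) load balancers.
    httpLoadBalancing: { disabled: false },
  },
});
```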
c
does it actually allocate the IP address
c
Nothing seems to be happening at all
c
if it doesn't allocate the IP address there isn't a whole lot we can do
c
No IP addresses allocated. I see 6 ephemeral ones, but I'm pretty sure that's one per node in my 2 node pools
r
@creamy-jelly-91590 You said that "There's no resources in the console" ... does that mean that neither a service nor an ingress resource was created?
c
Oh, sorry I meant no load balancers in the Networking -> Load balancing section
Will check workloads!
Oh man, here we go:
```
error while evaluating the ingress spec: service "default/hello-kubernetes" is type "ClusterIP", expected "NodePort" or "LoadBalancer"
```
I specified no type so it used ClusterIP, but it seems like this error should be caught by Pulumi and reported back?
Bam, NodePort fixed the issue. @creamy-potato-29402 was Pulumi not supposed to pick up the spec error though?
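For readers landing here: the one-line fix is on the Service from the snippet earlier in the thread (a sketch; GKE's ingress controller requires the backing Service to be `NodePort` or `LoadBalancer`, not the default `ClusterIP`):
```
const helloWorldService = new k8s.core.v1.Service(
  `${name}-${region}`,
  {
    metadata: { name: name },
    spec: {
      // GKE's HTTP load balancer can only target NodePort or
      // LoadBalancer Services; left unset, this defaults to ClusterIP.
      type: "NodePort",
      selector: appLabels,
      ports: [{ port: 8080 }]
    }
  },
  { provider: k8sProvider }
);
```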
r
Hmm I think you have to explicitly set it to `LoadBalancer` in your case of relying on the ingress controller of GKE itself (and when the annotation `kubernetes.io/ingress.class` is absent from your ingress). But it's interesting why this error isn't reported in Pulumi...
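For reference, that annotation lives on the Ingress metadata; a sketch of selecting GKE's built-in GCE controller explicitly (omitting it has the same effect on GKE):
```
const helloWorldFrontend = new k8s.extensions.v1beta1.Ingress(
  `${name}-${region}`,
  {
    metadata: {
      name: name,
      // "gce" selects GKE's built-in controller; a value like "nginx"
      // would hand the Ingress to a different controller instead.
      annotations: { "kubernetes.io/ingress.class": "gce" }
    },
    spec: { /* rules as in the snippet above */ }
  },
  { provider: k8sProvider }
);
```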
c
@rapid-eye-32575 just setting it to NodePort made it go through
Hm, but it's not actually serving the app on the IP
r
@creamy-jelly-91590 Cool, but in case you don't want to use a random high port, you have to use `LoadBalancer`
c
Yeah, I'll try that and see if that fixes the issue
r
You should be able to see the port that was allocated when inspecting the service resource
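Concretely (illustrative output; the node port is allocated at random from the 30000-32767 range by default):
```
kubectl get service hello-kubernetes
# NAME               TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
# hello-kubernetes   NodePort   10.3.245.1   <none>        8080:31478/TCP   2m
```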
c
Should I open an issue for the error not being picked up?
Hm, `LoadBalancer` does not seem to make it respond to requests either
```
Error: Server Error
The server encountered a temporary error and could not complete your request.
Please try again in 30 seconds.
```
This is from the GC HTTP LB
I know this isn't a Pulumi problem but I'd appreciate any tips anyone might have 😄
r
Hmm, I might be mistaken, but this is the Google Cloud LB message for a Bad Gateway (502), is it not? Maybe you can check via `kubectl port-forward` that the service and the pod are actually answering on port 8080
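Something like this would confirm the backend independently of the Google Cloud LB (assuming the Service name from the snippet above):
```
# Tunnel localhost:8080 to the Service, bypassing the GCLB entirely,
# then poke the app directly.
kubectl port-forward service/hello-kubernetes 8080:8080
curl http://localhost:8080
```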
c
I didn't configure `targetPort` because it's supposed to default to the same value as `port` when not specified. I am following this and applying it with Pulumi: https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer
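That default is correct (an omitted `targetPort` falls back to the value of `port`), but spelling it out removes one variable; a sketch of the explicit form:
```
ports: [
  {
    port: 8080,        // port the Service exposes
    targetPort: 8080   // container port; defaults to `port` when omitted
  }
]
```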
Hm, that wasn't it either...
I mean the linked guide doesn't seem that difficult, so I wonder why it isn't working...
Health checks as seen in the console are passing
Oh there we go!
Time to reverse engineer this so I can use a single global HTTP LB for multiple clusters 😄
r
Best of luck. I feel like ingress/egress in clusters should be added to the really hard stuff of computer science right after cache invalidation and naming things 😉
Not because it is so hard in theory but because every cloud does it differently ...
c
Agreed, this stuff is difficult
c
`NodePort` would essentially bypass the await logic
that’s probably not what you want.
pulumi does not attempt to statically analyze `Ingress` with implementation in mind
I’m not entirely sure what we’d do to improve this situation?
Do you have an idea of what you’d want to see?
c
@creamy-potato-29402 well, wouldn't the K8S API return an error when applying the ingress resource?
c
hmm, isn’t that what it did?
c
No, it timed out
I found the error in GC Console
c
I see what you mean
There is a lot of stuff like that that we are planning on doing eventually
c
It's weird because it seems to report K8S-level errors back just fine in general
c
Traversing the graph of things and finding errors proactively is hard, but worth doing.
Well, what’s going on here is, the errors are not reported back through the `Ingress` status.
So we would have to go get it.
If you file a bug we will eventually get to this.
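For what it's worth, the GCE ingress controller does record this failure as a Kubernetes Event attached to the Ingress object, which is presumably where the Cloud Console surfaces it from; manually it can be fished out with:
```
# The Events section at the bottom of the output includes the
# "error while evaluating the ingress spec ..." message.
kubectl describe ingress hello-kubernetes
```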
c
Oh I think I understand, so the ingress failure is asynchronous and not reported back in the API response?
I'm not familiar with the K8S API at all, but there's gotta be some way to check the general status of any resource?
Like "does this resource have an error"
c
lol I wish
man oh man would that be nice
c
Wow, okay didn't know it was that bad 😂
I mean the GC UI is getting the error from somewhere
Maybe an annotation?
c
k8s as a project does not seem to take these sorts of UX decisions seriously.
Sure, gcloud is able to get it because someone wrote the code to go find it.
We do the same
c
Everything is open until it isn't
c
but none of this is free. you have to actually go get it.
so what I’m saying is, we will eventually write code to “go get” this error, but it won’t be today. 🙂
c
Btw we chatted a while ago on the K8S Slack, happy to see Pulumi going so well
c
I remember the avatar
c
Haha 😂
c
our k8s support is what I always wanted ksonnet to be.
c
I'll file an issue for this particular situation
c
thank you
c
we’ll triage soon!
c
:partyk8s: