#general

creamy-jelly-91590

05/23/2019, 3:01 PM
@white-balloon-205 any idea why creating a GKE K8S Ingress would be stuck like this?
++  ├─ kubernetes:extensions:Ingress  hello-kubernetes-us-east1         creating replacement..   [diff: ~metadata]; [2/3] Waiting for update of .status.loadBalancer with hostname/IP

rapid-eye-32575

05/23/2019, 3:05 PM
How long is it stuck like this? I've seen this resolve after 5-10 minutes or so when an external IP was finally allocated and assigned. According to https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer?hl=en
It may take a few minutes for GKE to allocate an external IP address and set up forwarding rules until the load balancer is ready to serve your application.

white-balloon-205

05/23/2019, 3:07 PM
Yeah - I’ve seen this be quite variable as well on GCP - sometimes nearly instant, other times 5 minutes.

creamy-jelly-91590

05/23/2019, 3:08 PM
@rapid-eye-32575 @white-balloon-205 seems it timed out. When I was provisioning a straight-up LoadBalancer over the past few days it was usually ready within 10 seconds
There's no resources in the console

white-balloon-205

05/23/2019, 3:10 PM
Can you share the code you used to add the Ingress?

creamy-jelly-91590

05/23/2019, 3:12 PM
Yes sir!
const helloWorldDeployment = new k8s.apps.v1.Deployment(
    `${name}-${region}`,
    {
      metadata: {
        name: "hello-kubernetes"
      },
      spec: {
        replicas: 3,
        selector: {
          matchLabels: appLabels
        },
        template: {
          metadata: {
            labels: appLabels
          },
          spec: {
            containers: [
              {
                name: name,
                image: "paulbouwer/hello-kubernetes:1.5",
                ports: [
                  {
                    containerPort: 8080
                  }
                ]
              }
            ]
          }
        }
      }
    },
    { provider: k8sProvider }
  );

  const helloWorldService = new k8s.core.v1.Service(
    `${name}-${region}`,
    {
      metadata: {
        name: name
      },
      spec: {
        selector: appLabels,
        ports: [
          {
            port: 8080
          }
        ]
      }
    },
    { provider: k8sProvider }
  );

  const helloWorldFrontend = new k8s.extensions.v1beta1.Ingress(
    `${name}-${region}`,
    {
      metadata: {
        name: name
      },
      spec: {
        rules: [
          {
            http: {
              paths: [
                {
                  path: "/",
                  backend: {
                    serviceName: name,
                    servicePort: 8080
                  }
                }
              ]
            }
          }
        ]
        // ports: [{ port: 80, targetPort: 8080, protocol: "TCP" }],
        // selector: appLabels,
      }
    },
    { provider: k8sProvider }
  );
I don't have a domain yet, just want to get it working with an IP first

rapid-eye-32575

05/23/2019, 3:17 PM
@creamy-jelly-91590 is it a completely new cluster? I.e. is the HTTP load balancing addon enabled?

creamy-jelly-91590

05/23/2019, 3:17 PM
It's new yeah, using Pulumi to provision it

rapid-eye-32575

05/23/2019, 3:17 PM
Can you share that snippet as well?

creamy-jelly-91590

05/23/2019, 3:18 PM
// Default values for constructing the `Cluster` object.
  const defaultClusterOptions = {
    nodeCount: 1,
    nodeMachineType: "n1-standard-1",
    minMasterVersion: kubeVersion,
    masterUsername: "",
    masterPassword: ""
  };

  const cluster = new gcp.container.Cluster(`${name}-${region}`, {
    project,
    region: region,
    initialNodeCount: config.nodeCount || defaultClusterOptions.nodeCount,
    minMasterVersion: defaultClusterOptions.minMasterVersion,
    removeDefaultNodePool: true,
    network: network.name,
    subnetwork: subnet.name,
    masterAuth: {
      username: defaultClusterOptions.masterUsername,
      password: defaultClusterOptions.masterPassword
    }
  });

  const poolName = `${name}-${region}-default-pool`;
  const defaultNodePool = new gcp.container.NodePool(poolName, {
    name: poolName,
    cluster: cluster.name,
    version: kubeVersion,
    initialNodeCount: 1,
    location: region,
    nodeConfig: {
      machineType:
        config.nodeMachineType || defaultClusterOptions.nodeMachineType,
      oauthScopes: [
        "https://www.googleapis.com/auth/compute",
        "https://www.googleapis.com/auth/devstorage.read_only",
        "https://www.googleapis.com/auth/logging.write",
        "https://www.googleapis.com/auth/monitoring"
      ]
    },
    management: {
      autoUpgrade: false,
      autoRepair: true
    }
  });

rapid-eye-32575

05/23/2019, 3:18 PM
Also you can check the addons in the cluster detail / config page under "addons"

creamy-jelly-91590

05/23/2019, 3:19 PM
Yes, HTTP load balancing is set to enabled in the console

creamy-potato-29402

05/23/2019, 3:19 PM
does it actually allocate the IP address?

creamy-jelly-91590

05/23/2019, 3:20 PM
Nothing seems to be happening at all

creamy-potato-29402

05/23/2019, 3:20 PM
if it doesn't allocate the IP address there isn't a whole lot we can do

creamy-jelly-91590

05/23/2019, 3:21 PM
No IP addresses allocated. I see 6 ephemeral ones but I am pretty sure that's one per node in my 2 node pools

rapid-eye-32575

05/23/2019, 3:21 PM
@creamy-jelly-91590 You said that "There's no resources in the console" ... does that mean that neither a service nor an ingress resource was created?

creamy-jelly-91590

05/23/2019, 3:22 PM
Oh, sorry I meant no load balancers in the Networking -> Load balancing section
Will check workloads!
Oh man, here we go:
error while evaluating the ingress spec: service "default/hello-kubernetes" is type "ClusterIP", expected "NodePort" or "LoadBalancer"
I specified no type so it used ClusterIP, but it seems like this error should be caught by Pulumi and reported back?
Bam, NodePort fixed the issue. @creamy-potato-29402 was Pulumi not supposed to pick up the spec error though?
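[Editor's note] The fix described here amounts to one added field on the Service from the snippet above. A minimal sketch of the args object as it would be passed to `new k8s.core.v1.Service(...)`; the `appName` and `appLabels` bindings below are stand-ins for the `name` and `appLabels` defined earlier in the program:

```typescript
// Stand-ins for bindings defined elsewhere in the original program.
const appName = "hello-kubernetes";
const appLabels = { app: appName };

// spec.type defaults to "ClusterIP" when omitted; the GKE ingress
// controller requires "NodePort" (or "LoadBalancer") Service backends,
// which is exactly what the error above complained about.
const serviceArgs = {
  metadata: { name: appName },
  spec: {
    type: "NodePort",
    selector: appLabels,
    ports: [{ port: 8080 }],
  },
};
```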

rapid-eye-32575

05/23/2019, 3:26 PM
Hmm I think you have to explicitly set it to LoadBalancer in your case of relying on the ingress controller of GKE itself (and when the annotation kubernetes.io/ingress.class is absent from your ingress). But interesting why this error isn't reported in Pulumi...
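[Editor's note] For reference, that annotation lives in the Ingress metadata. A sketch (the values are illustrative; on GKE, omitting the annotation means the built-in controller, class "gce", claims the Ingress, while e.g. "nginx" would hand it to a different controller):

```typescript
// Ingress metadata with an explicit ingress.class annotation.
// Absent this annotation, GKE's built-in controller handles the
// Ingress as if class "gce" were set.
const ingressMetadata = {
  name: "hello-kubernetes",
  annotations: {
    "kubernetes.io/ingress.class": "gce",
  },
};
```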

creamy-jelly-91590

05/23/2019, 3:27 PM
@rapid-eye-32575 just setting it to NodePort made it go through
Hm, but it's not actually serving the app on the IP

rapid-eye-32575

05/23/2019, 3:28 PM
@creamy-jelly-91590 Cool, but in case you don't want to use a random high port you have to use LoadBalancer

creamy-jelly-91590

05/23/2019, 3:28 PM
Yeah, I'll try that and see if that fixes the issue

rapid-eye-32575

05/23/2019, 3:29 PM
You should be able to see the port that was allocated when inspecting the service resource

creamy-jelly-91590

05/23/2019, 3:31 PM
Should I open an issue for the error not being picked up?
Hm, LoadBalancer does not seem to make it respond to requests either
Error: Server Error
The server encountered a temporary error and could not complete your request.
Please try again in 30 seconds.
This is from the GC HTTP LB
I know this isn't a Pulumi problem but I'd appreciate any tips anyone might have 😄

rapid-eye-32575

05/23/2019, 3:34 PM
Hmm I might be mistaken, but this is the Google Cloud LB message for a Bad Gateway (502), is it not? Maybe you can check via kubectl port-forward that the service and the pod are actually answering on port 8080

creamy-jelly-91590

05/23/2019, 3:39 PM
I didn't configure targetPort because it's supposed to default to the same value as port when not specified. I am following this and applying it with Pulumi: https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer
Hm, that wasn't it either..
I mean the linked guide doesn't seem that difficult, so I wonder why it isn't working..
Health checks as seen in the console are passing
Oh there we go!
Time to reverse engineer this so I can use a single global HTTP LB for multiple clusters 😄
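[Editor's note] On the `targetPort` point: per the Kubernetes Service spec, `targetPort` defaults to the value of `port` when omitted, so the earlier `ports: [{ port: 8080 }]` already targets container port 8080. A sketch of that defaulting rule, with a hypothetical helper to make it concrete:

```typescript
// These two Service port declarations are equivalent:
// when targetPort is omitted, it defaults to the value of port.
const implicitPorts = [{ port: 8080 }];
const explicitPorts = [{ port: 8080, targetPort: 8080 }];

// Hypothetical helper illustrating the defaulting rule.
function effectiveTargetPort(p: { port: number; targetPort?: number }): number {
  return p.targetPort ?? p.port;
}
```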

rapid-eye-32575

05/23/2019, 3:51 PM
Best of luck. I feel like ingress/egress in clusters should be added to the really hard stuff of computer science right after cache invalidation and naming things 😉
Not because it is so hard in theory but because every cloud does it differently ...

creamy-jelly-91590

05/23/2019, 3:54 PM
Agreed, this stuff is difficult

creamy-potato-29402

05/23/2019, 4:06 PM
nodeport would essentially bypass the await logic
that’s probably not what you want.
pulumi does not attempt to statically analyze Ingress with implementation in mind
I’m not entirely sure what we’d do to improve this situation?
Do you have an idea of what you’d want to see?
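[Editor's note] The "await logic" referenced here, i.e. the `[2/3] Waiting for update of .status.loadBalancer` line at the top of the thread, is in essence a readiness predicate over the Ingress status. A simplified sketch of the idea (not the provider's actual code): the Ingress counts as live once the controller writes an IP or hostname into the status, and errors like the ClusterIP-backend one above never appear there, which is why the update simply times out.

```typescript
// Shape of the relevant slice of a Kubernetes Ingress status.
interface IngressStatus {
  loadBalancer?: { ingress?: Array<{ ip?: string; hostname?: string }> };
}

// Sketch of the readiness check: ready once at least one IP or
// hostname has been written to .status.loadBalancer.ingress.
function ingressIsReady(status: IngressStatus): boolean {
  const endpoints = status.loadBalancer?.ingress ?? [];
  return endpoints.some((e) => Boolean(e.ip) || Boolean(e.hostname));
}
```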

creamy-jelly-91590

05/23/2019, 4:13 PM
@creamy-potato-29402 well, wouldn't the K8S API return an error when applying the ingress resource?

creamy-potato-29402

05/23/2019, 4:13 PM
hmm, isn’t that what it did?

creamy-jelly-91590

05/23/2019, 4:13 PM
No, it timed out
I found the error in GC Console

creamy-potato-29402

05/23/2019, 4:14 PM
I see what you mean
There is a lot of stuff like that that we are planning on doing eventually

creamy-jelly-91590

05/23/2019, 4:14 PM
It's weird because it seems to report K8S-level errors back just fine in general

creamy-potato-29402

05/23/2019, 4:14 PM
Traversing the graph of things and finding errors proactively is hard, but worth doing.
Well what’s going on here is, the errors are not reported back through the Ingress status.
So we would have to go get it.
If you file a bug we will eventually get to this.

creamy-jelly-91590

05/23/2019, 4:16 PM
Oh I think I understand, so the ingress failure is asynchronous and not reported back in the API response?
I am not familiar with the K8S API at all, but there's gotta be some way to check the general status of any resource?
Like "does this resource have an error"

creamy-potato-29402

05/23/2019, 4:17 PM
lol I wish
man oh man would that be nice

creamy-jelly-91590

05/23/2019, 4:17 PM
Wow, okay didn't know it was that bad 😂
I mean the GC UI is getting the error from somewhere
Maybe an annotation?

creamy-potato-29402

05/23/2019, 4:18 PM
k8s as a project does not seem to take these sorts of UX decisions seriously.
Sure, gcloud is able to get it because someone wrote the code to go find it.
We do the same

creamy-jelly-91590

05/23/2019, 4:19 PM
Everything is open until it isn't

creamy-potato-29402

05/23/2019, 4:19 PM
but none of this is free. you have to actually go get it.
so what I’m saying is, we will eventually write code to “go get” this error, but it won’t be today. 🙂

creamy-jelly-91590

05/23/2019, 4:20 PM
Btw we chatted a while ago on the K8S Slack, happy to see Pulumi going so well

creamy-potato-29402

05/23/2019, 4:20 PM
I remember the avatar

creamy-jelly-91590

05/23/2019, 4:21 PM
Haha 😂

creamy-potato-29402

05/23/2019, 4:21 PM
our k8s support is what I always wanted ksonnet to be.

creamy-jelly-91590

05/23/2019, 4:21 PM
I'll file an issue for this particular situation

creamy-potato-29402

05/23/2019, 4:21 PM
thank you

creamy-jelly-91590

05/23/2019, 4:24 PM

creamy-potato-29402

05/23/2019, 4:27 PM
we’ll triage soon!

creamy-jelly-91590

05/23/2019, 4:55 PM
:partyk8s: