# kubernetes
l
Running in circles on this: we have our own `Service` component resource. In our abstraction, we create a NodePort-based k8s `Service` resource, retrieve the actual nodePort value from the service using `apply`, and pass it on to a GKE-specific `BackendConfig` CRD. This is the code snippet:
```typescript
const healthPortNumber: pulumi.Input<number> = this.service.spec.apply((spec) => {
  const healthPort = spec.ports.find((port) => {
    return port.name === 'health';
  });
  if (healthPort) {
    return healthPort.nodePort;
  } else {
    return 4001;
  }
});
```
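As an aside, the lookup logic in that callback can be extracted into a plain function and exercised without Pulumi (a sketch for illustration only; `findHealthNodePort` is a hypothetical helper, not part of the original code). It makes the failure mode visible: when a port named `health` exists but its `nodePort` field is not set, the function returns `undefined` rather than the 4001 fallback.

```typescript
// Minimal shape of the fields this lookup reads from a Service port entry.
interface ServicePortLike {
  name?: string;
  nodePort?: number;
}

// Same logic as the apply() callback above: the 4001 fallback only
// triggers when no port named 'health' exists at all. If the port
// exists but nodePort has not resolved yet, undefined comes back.
function findHealthNodePort(ports: ServicePortLike[]): number | undefined {
  const healthPort = ports.find((port) => port.name === 'health');
  if (healthPort) {
    return healthPort.nodePort;
  }
  return 4001;
}
```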
We have a stack where the `ports` section of the `Service` spec contains this (taken from the stack state):
```json
{
  "name": "health",
  "nodePort": 30449,
  "port": 4001,
  "protocol": "TCP",
  "targetPort": "health"
}
```
But we get `undefined` as the value for `healthPort.nodePort`. How is that possible??
What makes it even stranger: if we remove the service (commenting it) and redeploy again (uncommenting it), this code works on the new deployment.
And again, when updating the service with e.g. an additional annotation, the problem comes up again: no nodePort value. Is Pulumi not fetching the full state on update, @gorgeous-egg-16927?
p
We had something similar: we deployed a k8s secret in a Helm chart with value X, and then the chart had an init container to update it. Pulumi never saw the updated value, like it was “cached” or something. We fixed it by manually getting it:
```typescript
const secret = fabricCa.getResource('v1/Secret', 'default', 'ca-tls-cert');

// https://www.pulumi.com/docs/intro/concepts/resources/#resource-get
const updatedSecret = k8s.core.v1.Secret.get('tlssecret', secret.id, {
  provider,
  dependsOn: [fabricCa],
});

export const data = updatedSecret.data.apply((data) => data);
```
`apply` on `secret` = old version; `apply` on `updatedSecret` = new version.
Not sure if it helps your issue, but the ability to force a fetch from the cluster might help as a workaround.
l
This is well worth a try. Thanks for the suggestion, @proud-pizza-80589.
BTW @proud-pizza-80589, did you ever file a GitHub issue regarding this?
p
No, I figured it works as designed: because we are updating the secret outside of Pulumi’s process, it cannot really know about the changed value.
l
OK. That is clearly not the case for me, though: the service itself is deployed by Pulumi.
f
One option might be to do a `get` on the resource after it is created. If the nodePort is automatically allocated by k8s and not present in your spec, I’m not sure Pulumi looks up the full resource, since the value is set outside of its creation.
l
@fast-dinner-32080 the nodePort value is visible in the Pulumi state with the correct value after the k8s `Service` is created. Why don’t I get this value (which remains unchanged) if I just add e.g. an annotation?
g
Hmm, that is odd. It sounds to me like the `healthPortNumber` code is running prior to the `nodePort` value resolving (it’s usually set by the cluster). This seems like a possible oversight in the await logic for Service resources. If you look at https://www.pulumi.com/docs/reference/pkg/kubernetes/core/v1/service/#service you’ll notice that the `nodePort` isn’t considered as part of the readiness check.
@billowy-army-68599 Any ideas for workarounds here?
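Given that `nodePort` isn’t part of the readiness check, one defensive option is to treat an unresolved `nodePort` the same as a missing port and fall back explicitly inside the `apply` callback. A sketch, not an official fix; `healthNodePortOrDefault` is a made-up helper name:

```typescript
// Minimal shape of the fields this lookup reads from a Service port entry.
interface ServicePortLike {
  name?: string;
  nodePort?: number;
}

// Unlike the original callback, the fallback here also covers the case
// where the 'health' port exists but its nodePort has not resolved yet.
function healthNodePortOrDefault(
  ports: ServicePortLike[],
  fallback = 4001,
): number {
  return ports.find((port) => port.name === 'health')?.nodePort ?? fallback;
}
```

This only papers over the race, of course; it doesn’t make Pulumi wait for the cluster to allocate the port.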
l
@gorgeous-egg-16927 @billowy-army-68599 should I file a GH issue for this?
b
Sorry, lots of plates spinning at the moment. Yes, a GitHub issue would be great, thank you.