# kubernetes

limited-rainbow-51650

04/20/2021, 9:51 AM
Running in circles on this: we have our own `Service` component resource. In our abstraction, we create a NodePort-based k8s `Service` resource, retrieve the actual nodePort value from the service using `apply`, and pass it on to a GKE-specific `BackendConfig` CRD. This is the code snippet:
```typescript
const healthPortNumber: pulumi.Input<number> = this.service.spec.apply((spec) => {
  const healthPort = spec.ports.find((port) => port.name === 'health');
  // nodePort may be undefined until the cluster has assigned it.
  return healthPort?.nodePort ?? 4001;
});
```
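The lookup logic itself can be factored into a plain, testable helper. A minimal sketch, assuming only the fields the snippet above reads (`ServicePortLike` and `resolveHealthNodePort` are illustrative names, not part of the original code):

```typescript
// Minimal shape of the fields we read from a k8s ServicePort.
interface ServicePortLike {
  name?: string;
  port: number;
  nodePort?: number;
}

// Return the nodePort of the port named "health", falling back to 4001
// when the port is missing or the nodePort has not been assigned yet.
function resolveHealthNodePort(ports: ServicePortLike[], fallback = 4001): number {
  const health = ports.find((p) => p.name === "health");
  return health?.nodePort ?? fallback;
}
```

Inside the component this would be called as `this.service.spec.apply((spec) => resolveHealthNodePort(spec.ports))`, which keeps the `apply` callback trivial and lets the fallback behavior be unit-tested without a cluster.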
We have a stack where the ports section of the `Service` spec contains this (taken from the stack state):
```json
{
  "name": "health",
  "nodePort": 30449,
  "port": 4001,
  "protocol": "TCP",
  "targetPort": "health"
}
```
But we get `undefined` as the value for `healthPort.nodePort`. How is that possible?
What makes it even stranger: if we remove the service (commenting it out) and redeploy (uncommenting it), this code works on the new deployment.
And again, when updating the service with e.g. an additional annotation, the problem reappears: no nodePort value. Is Pulumi not fetching the full state on update @gorgeous-egg-16927?

proud-pizza-80589

04/20/2021, 1:15 PM
We had something similar: we deployed a k8s secret via a Helm chart with value X, and then the chart had an init container to update it. Pulumi never saw the updated value, as if it was “cached” or something. We fixed it by manually getting it:
```typescript
const secret = fabricCa.getResource('v1/Secret', 'default', 'ca-tls-cert');

// https://www.pulumi.com/docs/intro/concepts/resources/#resource-get
const updatedSecret = k8s.core.v1.Secret.get('tlssecret', secret.id, {
  provider,
  dependsOn: [fabricCa],
});

// Exporting the output directly is equivalent to the no-op apply.
export const data = updatedSecret.data;
```
`apply` on `secret` returns the old version; `apply` on `updatedSecret` returns the new version.
Not sure if it helps your issue, but the ability to force a fetch from the cluster might help as a workaround.
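Applied to the `Service` case, that `.get()` pattern would look roughly like this. A sketch only, not taken from the thread: the `service` variable stands in for the component's NodePort `Service`, and `provider` is assumed to be in scope; `.get()` on namespaced Kubernetes resources takes a `"namespace/name"` id.

```typescript
import * as k8s from "@pulumi/kubernetes";
import * as pulumi from "@pulumi/pulumi";

// Re-read the live Service from the cluster instead of relying on the
// state Pulumi captured at creation time.
const liveService = k8s.core.v1.Service.get(
  "live-service",
  pulumi.interpolate`${service.metadata.namespace}/${service.metadata.name}`,
  { provider, dependsOn: [service] },
);

// Resolve the nodePort from the freshly fetched spec.
const healthPortNumber = liveService.spec.apply((spec) => {
  const healthPort = spec.ports.find((port) => port.name === "health");
  return healthPort?.nodePort ?? 4001;
});
```

This requires a running Pulumi program against a real cluster, so it is shown here only as the shape of the workaround.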

limited-rainbow-51650

04/20/2021, 1:28 PM
This is well worth a try. Tnx for the suggestion @proud-pizza-80589
BTW @proud-pizza-80589 did you ever file a GitHub issue regarding this?

proud-pizza-80589

04/20/2021, 1:46 PM
No, I figured it works as designed: because we are updating the secret outside of Pulumi’s process, it cannot really know about the changed value.

limited-rainbow-51650

04/20/2021, 2:05 PM
OK. That is clearly not the case for me, though: here, the service itself is deployed by Pulumi.

fast-dinner-32080

04/20/2021, 4:55 PM
One option might be to do a `get` on the service after it is created. If the nodePort is automatically allocated by k8s and not in your spec, then I don’t know whether Pulumi looks up the full resource after creation, since the value is set outside of its creation.

limited-rainbow-51650

04/20/2021, 6:56 PM
@fast-dinner-32080 the nodePort value is visible in the Pulumi state, with the correct value, after the k8s `Service` is created. Why don’t I get this value (which remains unchanged) if I just add e.g. an annotation?

gorgeous-egg-16927

04/20/2021, 9:15 PM
Hmm, that is odd. It sounds to me like the `healthPortNumber` code is running prior to the `nodePort` value resolving (it’s usually set by the cluster). This seems like a possible oversight in the await logic for Service resources. If you look at https://www.pulumi.com/docs/reference/pkg/kubernetes/core/v1/service/#service you’ll notice that `nodePort` isn’t considered as part of the readiness check.
@billowy-army-68599 Any ideas for workarounds here?

limited-rainbow-51650

04/22/2021, 11:18 AM
@gorgeous-egg-16927 @billowy-army-68599 should I file a GH issue for this?

billowy-army-68599

04/22/2021, 3:17 PM
Sorry, lots of plates spinning at the moment. Yes, a GitHub issue would be great, thank you.

limited-rainbow-51650

04/23/2021, 12:21 PM