# kubernetes
b
I’m trying to use the Kubernetes provider for Pulumi to deploy several resources, and I’m using patterns like `myService.metadata.name` to pass information around. I’ve found that this ends up creating dependencies, and in some cases, deadlock situations. In this example, a Service will not become “healthy” until there are pods registered to it. Yet when I’m using something like Argo Rollouts, there’s a pointer back to my service name:
```yaml
strategy:
  # Blue-green update strategy
  blueGreen:
    # Reference to service that the rollout modifies as the active service.
    # Required.
    activeService: active-service
```
So, because of this, Pulumi wants to deploy the Service first and the Rollout second. But the Service never becomes healthy, as it’s waiting on the Rollout. The workaround is to define a variable and pass its value to all the places that need the service name. I was also thinking of a way to tell Pulumi not to block other resources from being created while this one becomes healthy; the data is available regardless of health state. For k8s resources, a lot of the data is static, and yet it’s treated as dynamic, so I can’t easily make these references. (I’m coming from Jsonnet, which makes this quite easy.) Has anyone else run into these types of problems? Do you have any patterns or practices that make the code feel “refactor proof”? What about the case where you don’t know the service name (maybe it’s being returned as part of a higher-level function)?
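To illustrate, the variable workaround might look like this with Pulumi’s Go SDK: a minimal sketch, not the poster’s actual code, where the `active-service` / `my-app` names and the `CustomResource` shape used for the Rollout are assumptions.

```go
package main

import (
	"github.com/pulumi/pulumi-kubernetes/sdk/v4/go/kubernetes"
	"github.com/pulumi/pulumi-kubernetes/sdk/v4/go/kubernetes/apiextensions"
	corev1 "github.com/pulumi/pulumi-kubernetes/sdk/v4/go/kubernetes/core/v1"
	metav1 "github.com/pulumi/pulumi-kubernetes/sdk/v4/go/kubernetes/meta/v1"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)

func main() {
	pulumi.Run(func(ctx *pulumi.Context) error {
		// A plain Go string, not an output read back from the resource:
		// referencing it creates no dependency edge in Pulumi's graph.
		serviceName := "active-service" // assumed name

		_, err := corev1.NewService(ctx, serviceName, &corev1.ServiceArgs{
			Metadata: &metav1.ObjectMetaArgs{
				Name: pulumi.String(serviceName),
			},
			Spec: &corev1.ServiceSpecArgs{
				Selector: pulumi.StringMap{"app": pulumi.String("my-app")},
				Ports: corev1.ServicePortArray{
					&corev1.ServicePortArgs{Port: pulumi.Int(80)},
				},
			},
		})
		if err != nil {
			return err
		}

		// The Rollout embeds the same plain string, so Pulumi sees no
		// Service -> Rollout dependency and can create both together.
		_, err = apiextensions.NewCustomResource(ctx, "my-rollout", &apiextensions.CustomResourceArgs{
			ApiVersion: pulumi.String("argoproj.io/v1alpha1"),
			Kind:       pulumi.String("Rollout"),
			OtherFields: kubernetes.UntypedArgs{
				"spec": map[string]interface{}{
					"strategy": map[string]interface{}{
						"blueGreen": map[string]interface{}{
							"activeService": serviceName,
						},
					},
				},
			},
		})
		return err
	})
}
```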
b
You can tell the Service to not wait on the resources
```go
"pulumi.com/skipAwait": pulumi.String("true"),
```
In the Service annotations
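In context, that might look something like this. A sketch assuming the Go SDK, a Service named `active-service`, and code running inside the `pulumi.Run` callback:

```go
// The annotation tells the provider to mark the Service ready
// immediately instead of awaiting registered endpoints.
_, err := corev1.NewService(ctx, "active-service", &corev1.ServiceArgs{
	Metadata: &metav1.ObjectMetaArgs{
		Name: pulumi.String("active-service"),
		Annotations: pulumi.StringMap{
			"pulumi.com/skipAwait": pulumi.String("true"),
		},
	},
	Spec: &corev1.ServiceSpecArgs{
		Selector: pulumi.StringMap{"app": pulumi.String("my-app")},
		Ports: corev1.ServicePortArray{
			&corev1.ServicePortArgs{Port: pulumi.Int(80)},
		},
	},
})
if err != nil {
	return err
}
```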
b
OH!
where is the documentation for this?
b
Beyond this, I don’t know; I asked a similar question to yours last year. 🙂
b
does this actually skip the await? I’m honestly not sure if I want to skip the await
because I think I want to know if something is broken
but I do need to deploy multiple things at once, in order to make it work
b
What it does is basically mark the service as ready even though the underlying pods are not there yet.
In my system, I have the Service deployed first, and then a Deployment that references it (as a dependency) which is also going to create the pods that match the service
So if I waited on the Service being ready, then I’d have a deadlock (as you described). You just need to break the chain somewhere.
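As a sketch of that ordering (assuming the Go SDK with `appsv1` imported from `github.com/pulumi/pulumi-kubernetes/sdk/v4/go/kubernetes/apps/v1`, and `svc` being the Service from the snippet above; the label wiring is illustrative):

```go
// Consuming svc's output makes Pulumi create the Deployment after the
// Service; skipAwait on the Service keeps that edge from deadlocking
// on pod readiness.
_, err = appsv1.NewDeployment(ctx, "my-app", &appsv1.DeploymentArgs{
	Spec: &appsv1.DeploymentSpecArgs{
		Selector: &metav1.LabelSelectorArgs{
			MatchLabels: pulumi.StringMap{"app": pulumi.String("my-app")},
		},
		Template: &corev1.PodTemplateSpecArgs{
			Metadata: &metav1.ObjectMetaArgs{
				Labels: pulumi.StringMap{
					"app":     pulumi.String("my-app"),
					"service": svc.Metadata.Name().Elem(), // output reference -> dependency
				},
			},
			Spec: &corev1.PodSpecArgs{
				Containers: corev1.ContainerArray{
					&corev1.ContainerArgs{
						Name:  pulumi.String("web"),
						Image: pulumi.String("nginx"),
					},
				},
			},
		},
	},
})
```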
b
I still want to know whether or not the service is actually healthy
I’m comparing to ArgoCD right now, as you can define “waves”: all resources within a wave are deployed at once, and you move on to the next wave when all resources are healthy
I need to deploy multiple things simultaneously, while still having the “await” health checking. I just don’t want that health checking to block the applying of other resources within the same “wave” (concept)
the health checking should still take place, and prevent the next “wave”
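One way to approximate waves with stock Pulumi is to avoid output references between resources in the same wave (so they deploy concurrently) and gate the next wave with `dependsOn`. A hedged sketch, reusing `svc` and a hypothetical `rollout` resource from above:

```go
// Wave 1: created concurrently; Pulumi still awaits each one's health.
wave1 := []pulumi.Resource{svc, rollout}

// Wave 2: dependsOn makes Pulumi wait for every wave-1 resource to be
// healthy before creating this one, approximating the "wave" gate.
_, err = corev1.NewConfigMap(ctx, "wave2-config", &corev1.ConfigMapArgs{
	Data: pulumi.StringMap{"ready": pulumi.String("true")},
}, pulumi.DependsOn(wave1))
```

This still doesn’t resolve a cycle *within* a wave, which is the crux of the original question; it only gives the inter-wave gating.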
I filed this issue
I think there maybe needs to be a bigger conversation about this?
b
Yeah, I’m not 100% sure this is a use case that’s really solvable: you have a cyclic dependency but want the cycle to be ignored, and that doesn’t really work if you also want to maintain the dependency chain.