melodic-knife-7189
06/01/2023, 10:58 PM
pulumi up failed when some or all pods from our DaemonSet could not become ready within some duration.
I’ve been crawling through other issues to try and find solutions and found a few that are relevant:
• https://github.com/pulumi/pulumi-kubernetes/issues/1260 (Support user-specified readiness/await logic)
• https://github.com/pulumi/pulumi-kubernetes/issues/1056 (CustomResource lifecycle causes issues with importing created resources)
• https://github.com/pulumi/pulumi/issues/1691 (Support custom logic in resource lifecycle)
And through those I’ve found a couple of promising pointers:
• https://gist.github.com/timmyers/4d2fed53a358d4c98557a5886ae2afbb
• https://github.com/pulumi/pulumi/issues/1691#issuecomment-573823609
But I can’t quite piece together how I’d actually block pulumi up on the DaemonSet becoming ready and throw an error. So far all I’ve been able to do is get the DaemonSet status showing as a stack output, but that’s not too helpful on its own 😅

steep-toddler-94095
06/02/2023, 1:56 AM
Use a Command resource from @pulumi/command to run the kubectl CLI in a loop until the daemonset pods are ready, and set customTimeouts on the Command resource.

melodic-knife-7189
06/02/2023, 3:40 PM
kubectl rollout status to wait for the DaemonSet deploy to complete, right? I’ll have to check, but I don’t think I have a kubectl binary (or a corresponding config) to use in my deployment pipeline. Sounds like I’d have to at least get the kubectl binary. But maybe I could effectively get a .kube/config out of pulumi since it already has the necessary creds?

steep-toddler-94095
06/02/2023, 3:51 PM

melodic-knife-7189
06/02/2023, 3:52 PM
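To make the suggestion in this thread concrete, the loop the Command resource would run — poll kubectl until the DaemonSet reports ready, fail after a timeout — can be sketched outside Pulumi. This is a minimal illustration, not code from the thread: the resource name, namespace, and polling parameters are hypothetical, and the readiness predicate only approximates the DaemonSetStatus condition (desiredNumberScheduled vs. numberReady vs. updatedNumberScheduled) that kubectl rollout status waits for.

```python
import json
import subprocess
import time


def daemonset_ready(status: dict) -> bool:
    """Readiness check over DaemonSetStatus fields.

    Treats an unscheduled DaemonSet (desired == 0) as not ready,
    and requires every scheduled pod to be both ready and up to date.
    """
    desired = status.get("desiredNumberScheduled", 0)
    return (
        desired > 0
        and status.get("numberReady", 0) == desired
        and status.get("updatedNumberScheduled", 0) == desired
    )


def wait_for_daemonset(name: str, namespace: str = "default",
                       timeout: float = 300.0, interval: float = 5.0) -> None:
    """Poll `kubectl get daemonset` until ready, or raise after `timeout`.

    A non-zero exit here is what would make `pulumi up` fail when this
    runs as the create script of a Command resource.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        out = subprocess.run(
            ["kubectl", "get", "daemonset", name, "-n", namespace, "-o", "json"],
            capture_output=True, text=True, check=True,
        ).stdout
        if daemonset_ready(json.loads(out).get("status", {})):
            return
        time.sleep(interval)
    raise TimeoutError(f"DaemonSet {namespace}/{name} not ready after {timeout}s")
```

The same effect could come from shelling out to kubectl rollout status daemonset/<name> with its own --timeout, as discussed above; the explicit predicate here just shows which status fields that wait is based on.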