# general
So a question about the possibilities (not sure we want to do it, but I just want to check whether Pulumi has the capability): is it possible, within a Pulumi stack, to perform multiple steps? E.g.:
• Step A
◦ deploy various cloud resources
◦ deploy a Helm chart to k8s
• Step B — wait for Step A to finish, then continue:
◦ perform operational kubectl calls against the deployed containers
• Step C — wait for Step B to finish:
◦ configure the deployment, since this specific deployment has a Pulumi provider
• Step D — wait for Step C to finish:
◦ perform additional steps, e.g. Azure AD SSO integration
It would still be declarative within all the configurations, but would this be a Pulumi anti-pattern? I.e., would it be recommended to simply have independent stacks, feed in outputs from the previous stacks, and run them from a bash script or some other orchestrator? Or can we actually use Pulumi as this multi-step / wait orchestrator?
The case is that I cannot rely on the Helm chart / Kubernetes deployment readiness, since I need to perform operations on the deployment and then, with whatever output I get from that, continue the stack deployment. I don't mind using Kubernetes client libraries for the operational steps, but in that case I need Pulumi to be executed in waves, i.e. pulumi up:
• applies all initial resources (in parallel)
• Kubernetes client library / execute stuff
• applies additional Pulumi resources (in parallel)
A pulumi destroy should just ignore the client library calls in this situation.
There are a couple of tools in the Pulumi toolbox that are worth exploring. The Automation API would enable any orchestration across stacks, and lets you run other clients in between. The Command provider allows you to run "external" code as part of the Pulumi project code, and enables running scripts during destroy as well as update.
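For the Automation API route, the waves could be sketched roughly like this in Python — a minimal sketch, not a definitive implementation: `step_a_program`, `step_c_program`, `run_kubectl_operations()`, and the stack/project names are all illustrative placeholders, and the Pulumi SDK import sits inside `main()` only so the sketch can be read without the SDK installed.

```python
# Sketch: running Pulumi in "waves" with the Automation API.
# Wave 1 (declarative) -> step B (imperative client calls) -> wave 3 (declarative).

def step_a_program():
    """Wave 1: declare the cloud resources and the Helm chart here."""
    pass  # pulumi resource declarations go here

def step_c_program():
    """Wave 3: declare the resources that depend on step B's result."""
    pass  # pulumi resource declarations go here

def run_kubectl_operations(outputs):
    """Imperative step B: call the Kubernetes client library against the
    deployment created in wave 1 and return whatever wave 3 needs.
    (Hypothetical helper -- the real calls depend on your workload.)"""
    return {}

def main():
    # Normally this import sits at the top of the file.
    from pulumi import automation as auto

    stack_a = auto.create_or_select_stack(
        stack_name="dev-a", project_name="multi-wave", program=step_a_program)
    result_a = stack_a.up(on_output=print)             # wave 1: parallel apply

    step_b = run_kubectl_operations(result_a.outputs)  # wave 2: imperative

    stack_c = auto.create_or_select_stack(
        stack_name="dev-c", project_name="multi-wave", program=step_c_program)
    # Feed step B's result into the next wave via stack config.
    stack_c.set_config("app:stepB", auto.ConfigValue(value=str(step_b)))
    stack_c.up(on_output=print)                        # wave 3: parallel apply

if __name__ == "__main__":
    main()
```

Since the step-B client-library calls create no Pulumi state, a destroy of the two stacks simply ignores them, which matches the requirement above.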
this is precisely what i was looking for - was unable to find any mention of it in my googling
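For reference, the Command provider piece mentioned above could look something like this — a sketch assuming the `pulumi-command` Python package, with illustrative resource names and commands (`./post-deploy.sh` is hypothetical); the imports are placed inside a function only so the sketch loads without the Pulumi runtime, whereas in a real project they sit at module top level.

```python
# Sketch: the Pulumi Command provider runs shell commands as resources:
# `create` runs during `pulumi up`, `delete` during `pulumi destroy`.

def program():
    import pulumi
    from pulumi_command import local

    patch = local.Command(
        "patch-deployment",
        # Runs during `pulumi up`: wait for the rollout, then perform the
        # operational step (hypothetical post-deploy script).
        create="kubectl rollout status deployment/my-app && ./post-deploy.sh",
        # Runs during `pulumi destroy`; omit `delete` entirely to make destroy
        # a no-op, matching "destroy should just ignore the client calls".
        delete="echo 'no cleanup needed'",
        # opts=pulumi.ResourceOptions(depends_on=[...]),  # sequence the waves
    )
    pulumi.export("patch_stdout", patch.stdout)
```

Combining `depends_on` with Command resources gives the wait-for-previous-step ordering inside a single stack, while the Automation API covers orchestration across stacks.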