Is there a way to order Pulumi resources differently between create and update?
# general
r
Is there a way to order pulumi resources differently between create and update, or otherwise constrain the update process? In my case, I would like to create Linode Instances in parallel, but update serially and stop if anything goes wrong on update. Currently, updates happen in parallel, shutting down all Instances at once.
l
Not within Pulumi. To do this in pure Pulumi, you'd need to separate your resources into multiple projects.
The normal way to achieve this is to use a machine cluster or container cluster, and let your cloud service manage it.
E.g. in AWS, you'd use EKS, ECS or an ASG.
r
thanks, I can't use a cloud abstraction in this scenario
I was curious if a Dynamic Resource could wrap real resource(s) so I could inject locks around `update()`, but I haven't seen any examples of that
l
No, I think the code that runs in dynamic resources runs in the deploy phase. Pulumi resources are created before that, when you're setting up state. You can use the cloud service's normal APIs in a dynamic resource and essentially re-implement the Pulumi resource.
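As a rough sketch of what that would look like, assuming a hypothetical `linodeApi` wrapper in place of real Linode bindings: a dynamic provider that re-implements the instance by calling the cloud API directly from `create()` and `update()`.

```typescript
import * as pulumi from "@pulumi/pulumi";

// Hypothetical Linode API wrapper -- a stand-in for real HTTP calls.
const linodeApi = {
    async createInstance(args: any): Promise<string> { /* ... */ return "instance-id"; },
    async updateInstance(id: string, args: any): Promise<void> { /* ... */ },
    async deleteInstance(id: string): Promise<void> { /* ... */ },
};

const instanceProvider: pulumi.dynamic.ResourceProvider = {
    async create(inputs) {
        // Runs during the deploy phase, so it can call the cloud API directly.
        const id = await linodeApi.createInstance(inputs);
        return { id, outs: inputs };
    },
    async update(id, olds, news) {
        // This is the hook where custom ordering/locking logic could be injected.
        await linodeApi.updateInstance(id, news);
        return { outs: news };
    },
    async delete(id, props) {
        await linodeApi.deleteInstance(id);
    },
};

class ManagedInstance extends pulumi.dynamic.Resource {
    constructor(name: string, props: any, opts?: pulumi.CustomResourceOptions) {
        super(instanceProvider, name, props, opts);
    }
}
```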
But it's likely that the best "pure" implementation would be to move the instances into their own project, separate from all the stuff they depend on. Put each instance in its own stack, and schedule the stack updates yourself or from an Automation API program.
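For illustration, a minimal Automation API sketch of that arrangement, assuming one stack per instance in a project under `./instances` (the stack names and path are placeholders): update each stack in turn and stop at the first failure.

```typescript
import { LocalWorkspace } from "@pulumi/pulumi/automation";

// Placeholder stack names -- one stack per Linode Instance.
const stackNames = ["instance-1", "instance-2", "instance-3"];

async function updateSerially() {
    for (const stackName of stackNames) {
        // Select an existing stack in the (assumed) ./instances project.
        const stack = await LocalWorkspace.selectStack({
            stackName,
            workDir: "./instances",
        });

        console.log(`Updating ${stackName}...`);
        try {
            await stack.up({ onOutput: console.info });
        } catch (err) {
            // Stop the whole rollout on the first failed update.
            console.error(`Update of ${stackName} failed, aborting:`, err);
            throw err;
        }
    }
}

updateSerially().catch(() => process.exit(1));
```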
r
yeah, though at that point pulumi isn't providing much value to the arrangement
l
It isn't? That seems valuable to me. Scheduling logic is never provided by Pulumi; that's always provided by the cloud provider: AWS, Kubernetes, etc. If there's a need for scheduling here, it's Linode that's not meeting it. You can work around that via Pulumi's Automation API, but Pulumi isn't letting you down in this instance.
r
Linode does have a Kubernetes abstraction; however, this usage can't depend on an abstraction like that
l
But it could depend on a Linode feature equivalent to AWS's ASG. It's still not a Pulumi problem to solve.
r
Someone pointed out the `--parallel` flag, which might be sufficient, if not enforced strictly.
l
Ah yes, I haven't ever used that flag. Adding `--parallel 1` looks good.
But I don't know if the API called by the Linode provider will block until the machine comes back. And once the API call returns, Pulumi will move on to the next bit 😞
r
ah, yes, it certainly won't block, good point... it really would require custom code. I'll have to experiment with wiring up a lock system with Dynamic Resources
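One possible shape for that experiment, with `acquireUpdateLock`/`releaseUpdateLock` as hypothetical helpers: since dynamic provider code is serialized per resource, an in-memory mutex probably wouldn't be shared between instances, so these helpers would likely need to coordinate through something external (a lock file, a small lock service, etc.).

```typescript
import * as pulumi from "@pulumi/pulumi";

// Hypothetical lock helpers -- stubs standing in for an external locking
// mechanism shared by all instance updates in a deployment.
async function acquireUpdateLock(): Promise<void> { /* ... */ }
async function releaseUpdateLock(): Promise<void> { /* ... */ }

const lockedProvider: pulumi.dynamic.ResourceProvider = {
    async create(inputs) {
        // Placeholder create -- real code would call the Linode API here.
        return { id: "placeholder-id", outs: inputs };
    },
    async update(id, olds, news) {
        // Hold the lock for the whole update so only one instance is
        // shut down and brought back at a time, even if Pulumi schedules
        // several updates in parallel.
        await acquireUpdateLock();
        try {
            // Placeholder for the real Linode update + wait-until-healthy logic.
            return { outs: news };
        } finally {
            await releaseUpdateLock();
        }
    },
};

class LockedInstance extends pulumi.dynamic.Resource {
    constructor(name: string, props: any, opts?: pulumi.CustomResourceOptions) {
        super(lockedProvider, name, props, opts);
    }
}
```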
l
Good luck!
r
thanks for your help