# automation-api
f
Hi! When using the automation api, where do plugins get installed to? e.g., running this snippet
```typescript
stack.workspace.installPlugin("aws", "v4.0.0");
```
b
do you mean where on the filesystem?
f
exactly
m
The automation api is a thin wrapper around the Pulumi CLI, so plugins end up wherever the CLI stores them: `~/.pulumi/plugins`
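For reference, that default location is just the CLI's home directory plus `plugins` (a sketch; the `PULUMI_HOME` environment variable can override the `~/.pulumi` root):

```typescript
import * as os from "os";
import * as path from "path";

// Default plugin directory used by the Pulumi CLI.
// PULUMI_HOME, if set, replaces the ~/.pulumi root.
const pulumiHome = process.env.PULUMI_HOME ?? path.join(os.homedir(), ".pulumi");
const pluginDir = path.join(pulumiHome, "plugins");
console.log(pluginDir);
```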
f
Ah, ok. Does it literally shell out to `pulumi`?
f
Very interesting to see it all glued together like that, e.g., stack files created in a temp dir via shelling out
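The shell-out pattern being described can be sketched in a few lines. This is a hypothetical illustration, not Pulumi's actual code, and `echo` stands in for the `pulumi` binary so it runs anywhere:

```typescript
import { execFileSync } from "child_process";

// Minimal sketch of a thin CLI wrapper: run the binary, capture text output.
// Note that success or failure, all we ever get back is text -- this is the
// "no structured errors" limitation that comes with shelling out.
function runCli(binary: string, ...args: string[]): string {
  // execFileSync throws on a non-zero exit code
  return execFileSync(binary, args, { encoding: "utf8" });
}

console.log(runCli("echo", "stack", "ls")); // stand-in for e.g. `pulumi stack ls`
```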
m
There is a downside, in that the Pulumi stack configuration files are tightly coupled with the Pulumi CLI, and thus, the automation API.
f
As in, you need a stack file to exist to use the automation api b/c the cli needs it?
m
The stack file will be implicitly created/read/updated/deleted with stack configuration related operations. This might not be expected behavior with the automation library. In fact, I find it a little jarring just using the CLI. Ex:
```shell
pulumi stack rm my-stack
```
-- goodbye, `Pulumi.my-stack.yaml`
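For anyone who hasn't seen one, the stack file the CLI creates/reads/deletes is a small YAML document; a minimal illustrative example (the config values here are made up):

```yaml
# Pulumi.my-stack.yaml -- written by `pulumi config set`,
# removed by `pulumi stack rm my-stack`
config:
  aws:region: us-west-2
```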
All that being said, the Automation API is great, and I've used it successfully
f
It is a bit odd, I do agree. I assume this whole approach was taken because it’s a lot more straightforward than embedding the pulumi engine directly in a program?
m
I can only speculate on the architectural decisions. I'm an avid Pulumi user/contributor, but I don't work for Pulumi.
l
We chose this approach initially because it was simple, worked remarkably well, and had a pretty straightforward path to supporting multiple pulumi languages. Biggest downsides have been:
• facilities for error handling are limited since we're shelling out to the CLI. We don't get structured errors from programs, just text output
• the local CLI requirement
• maintenance burden of having to add every new feature by hand to four SDKs
I did a spike a while back on a different approach: linking the pulumi engine directly and then adding another gRPC interface to support multiple languages calling into it. https://github.com/pulumi/pulumi/issues/7219 Biggest problem with that approach is that it will involve a ton of refactoring, and we just haven't had the appetite to do it yet. Hopefully one day! We have been able to build some pretty incredible stuff on top of automation api, like Pulumi Deployments https://www.pulumi.com/docs/pulumi-cloud/deployments/
f
Oof, the unstructured errors hurts 😭 Quite amazing how far you can push a CLI wrapper though! So Deployments actually uses the automation api, not direct API calls to the engine itself?
l
Yup, deployments uses automation API to run pulumi operations. Takes care of managing async long running updates, providing logs, cancellations, orchestration, etc.
f
Why was that path chosen? Dogfood? Actually easier? I like that the automation api is being used because it’ll incentivize improvements to it, perhaps even pushing for the linking idea.
l
All of the above. Actually easier, and taking a dependency forces us to make it better. It's a virtuous cycle. I agree it will likely push us in that direction at some point.
m
I definitely appreciate Pulumi's approach to this. Immediate value was delivered, and it meant we didn't have to write our own wrapper.
j
Is it possible right now to run the automation API inside an AWS lambda? I've not really looked into it, so not sure if EFS is enough. I was considering a serverless solution using pulumi in the context of a multi-tenant platform, and this seemed to be a blocker (or at least something that would require a properly containerized service where the infra operations are executed).
m
I'm pretty sure as long as you provide the Pulumi binary it will work. We were running the automation API in AWS Batch because it has a much longer timeout for big infrastructure plans.
l
Yup, it is definitely possible! You can also use container support within AWS lambda to deliver the CLI.
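A rough sketch of what delivering the CLI via a Lambda container image could look like (base image tag and paths are illustrative; the official `get.pulumi.com` install script drops the binary in `~/.pulumi/bin`):

```dockerfile
# Sketch: Lambda container image that bundles the Pulumi CLI
FROM public.ecr.aws/lambda/nodejs:18

# Install the Pulumi CLI and put it on PATH
RUN curl -fsSL https://get.pulumi.com | sh && \
    mv /root/.pulumi/bin/pulumi /usr/local/bin/pulumi

# Your automation-api program
COPY index.js ${LAMBDA_TASK_ROOT}/
CMD ["index.handler"]
```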
j
> container support
thanks. makes sense. it won't work on a vanilla js lambda, for example, right?
b
no, you need the Pulumi CLI for this to work
I will also point out that while you can run Pulumi in a lambda, the 15 minute max timeout means lambda is generally not a good fit as a runner
l
If you're trying to run pulumi in a lambda, i.e. in response to an API call, then you should really check out Pulumi Deployments. There is a REST API that makes it easy to run an update on demand, even from within a lambda: https://www.pulumi.com/docs/pulumi-cloud/deployments/api/
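To make the REST option concrete, here is a hypothetical sketch of building such a request. The endpoint shape and `operation` field follow the linked Deployments API docs, but the org/project/stack names and token are placeholders; the request is only constructed here, not sent:

```typescript
// Placeholder identifiers -- substitute your own.
const org = "my-org";
const project = "my-project";
const stack = "my-stack";

// Deployments REST endpoint (shape per the docs linked above).
const url = `https://api.pulumi.com/api/stacks/${org}/${project}/${stack}/deployments`;

// Request payload/headers you would pass to fetch(url, request).
const request = {
  method: "POST",
  headers: {
    Authorization: `token ${process.env.PULUMI_ACCESS_TOKEN ?? "<access-token>"}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({ operation: "update" }),
};
console.log(url, request.method);
```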