# automation-api
d
I'm trying out the Automation API for the first time and I'm a little confused about workspace implementations. I see how `LocalWorkspace` works, but it's not quite clear how one would go about implementing a custom workspace. In particular, it looks like there is some machinery around a "language runtime service" and other things that is necessary to make refresh/up implementations. Is there a guide or example for how to do this?
especially considering the `LanguageServer` is an 'internal' class...
seems like it might be possible to use the `Stack` class directly... i'm sure i can figure it out, but would be nice to have an example somewhere. please and thank you 🙂
OK, i got this to work, but I had to jump through some hoops to create a temp directory, spit out a `Pulumi.yaml` file, violate the privacy of `LocalWorkspace.prototype.runPulumiCmd`, etc
if there is no example anywhere, let me know and i can share a sample project & potentially contribute some fixes to make this easier
well, mostly got this to work. may have spoken too soon
b
Hey Brandon, sorry this is so painful. Can you open an issue in pulumi/pulumi for this?
d
I'm trying to create a custom deployment tool that mostly hides the fact that Pulumi is used behind the scenes.
r
👋🏽 Hey Brandon. You’re definitely trying a new thing that we don’t yet have examples for, I’d be curious to hear your use case and how the current SDK is falling short. I’ll take a closer look at your code tomorrow.
d
Thanks @red-match-15116! This is mostly an experiment, but I've got two goals:

1) To abstract away some of the stack file management. I'm trying to do a production stack + one or more dev stacks; basically, I want to let devs create stacks to work with, probably with names formatted like `${org}/${username}-${stackName}` so that they are scoped by dev. Generally, I'll probably have something like `mycompany/brandon-dev` or similar while exploring. It's OK for the production stack's config to live in the repository as a `Pulumi.production.yaml` file, but it might be nice to store that config elsewhere too. I was thinking it would be better, however, for all the dev stacks to not have their config live in files in the repo. Tho there are some questions about version control if the stack config is in an external db or similar.

2) I'd like to automate stack create/refresh/up etc from TypeScript, so that i'm not writing bash or shelling out in order to manipulate these dev stacks in CI and in a custom management UI.

What I've discovered is that the `Stack` class in the automation package depends on `runPulumiCmd`, which calls the `pulumi` binary. I had to do some hacky stuff with `serializeArgsForOp` in order to add a `--cwd` flag set to `workDir`, and then I had to spit out a `Pulumi.yaml` file into that working directory to avoid the pulumi command complaining that there was no project file. I also had to symlink `node_modules` into that directory, tho I'm not sure that was strictly necessary. Additionally, I've had to spit out a `Pulumi.*.yaml` file, which I've done by calling the private `runPulumiCmd` method with the `config` subcommand (though I could have done that with a direct file write) in order to implement `setAllConfig`, because the pulumi command reads that config file & it doesn't seem obvious how one might pass the config otherwise. I also used `runPulumiCmd` to run `stack init` in order to create a stack in the state management service. All in all, I did a pile of hacks. Not sure if I'm missing something obvious, or if the above is a checklist of issues to fix in order to make it possible to create a dynamic workspace 🙂
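[Editor's note: for illustration, a minimal sketch of the manual setup described above, using public CLI flags rather than private automation internals. The project name, stack name, and region are invented for the example.]

```ts
import { execFileSync } from "child_process";
import * as fs from "fs";
import * as os from "os";
import * as path from "path";

// Throwaway working directory so nothing lands in the repo.
const workDir = fs.mkdtempSync(path.join(os.tmpdir(), "pulumi-ws-"));

// The CLI refuses to run without a project file, so write a minimal Pulumi.yaml.
fs.writeFileSync(
  path.join(workDir, "Pulumi.yaml"),
  "name: my-project\nruntime: nodejs\n",
);

// Stack config file the CLI reads (could also be written via `pulumi config set`).
fs.writeFileSync(
  path.join(workDir, "Pulumi.brandon-dev.yaml"),
  "config:\n  aws:region: us-west-2\n",
);

// Run CLI commands against that directory via the global --cwd flag.
function runPulumi(args: string[]): string {
  return execFileSync(
    "pulumi",
    [...args, "--cwd", workDir, "--non-interactive"],
    { encoding: "utf-8" },
  );
}

runPulumi(["stack", "init", "mycompany/brandon-dev"]);
runPulumi(["up", "--stack", "mycompany/brandon-dev", "--yes"]);
```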
I'm getting lots of `unhandled rejection` errors trying to do `stack.up` though
p
I wonder if it would be easier to create your own CustomWorkspace with the operations that you want to support, using `LocalWorkspace` underneath. You could still abstract away the configuration to another source, only configuring `LocalWorkspace` when necessary using the configuration methods.
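[Editor's note: a sketch of the wrapping idea, not a full `Workspace` implementation. `loadConfigFromMyStore` is a made-up stand-in for an external config source, and on SDKs from the era of this thread the import path is `@pulumi/pulumi/x/automation`.]

```ts
import { LocalWorkspace, ConfigMap } from "@pulumi/pulumi/automation";

// Hypothetical external config source (db, secrets manager, ...).
declare function loadConfigFromMyStore(stackName: string): Promise<ConfigMap>;

class CustomWorkspace {
  private constructor(private readonly inner: LocalWorkspace) {}

  static async create(workDir: string): Promise<CustomWorkspace> {
    // Let LocalWorkspace handle all the CLI plumbing.
    return new CustomWorkspace(await LocalWorkspace.create({ workDir }));
  }

  // Only push config into LocalWorkspace when an operation actually needs it.
  async prepareStack(stackName: string): Promise<void> {
    await this.inner.selectStack(stackName);
    const config = await loadConfigFromMyStore(stackName);
    await this.inner.setAllConfig(stackName, config);
  }
}
```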
r
> I’m getting lots of `unhandled rejection` errors trying to do `stack.up` though

We’ve been tracking a bug with unhandled rejections, but they can also be caused by errors in your code. Stack traces would be helpful.
d
@prehistoric-coat-10166 I am indeed delegating to LocalWorkspace in a few places, but it's not really intended to be used that way as best I can tell.
@red-match-15116 the stack traces aren't super useful b/c it points to a "pre-allocated" error:

```
STACK_TRACE:
Error
    at Object.debuggablePromise (/Users/brandonbloom/Projects/deref/node-json-api/node_modules/@pulumi/pulumi/runtime/debuggable.js:69:75)
    at Object.registerResource (/Users/brandonbloom/Projects/deref/node-json-api/node_modules/@pulumi/pulumi/runtime/resource.js:219:18)
    at new Resource (/Users/brandonbloom/Projects/deref/node-json-api/node_modules/@pulumi/pulumi/resource.js:215:24)
    at new CustomResource (/Users/brandonbloom/Projects/deref/node-json-api/node_modules/@pulumi/pulumi/resource.js:307:9)
    at new Permission (/Users/brandonbloom/Projects/deref/node-json-api/node_modules/@pulumi/lambda/permission.ts:270:9)
    at createLambdaPermissions (/Users/brandonbloom/Projects/deref/node-json-api/node_modules/@pulumi/apigateway/api.ts:645:30)
    at new API (/Users/brandonbloom/Projects/deref/node-json-api/node_modules/@pulumi/apigateway/api.ts:565:29)
    at Object.program [as init] (/Users/brandonbloom/Projects/deref/node-json-api/cloud.ts:147:23)
    at Stack.<anonymous> (/Users/brandonbloom/Projects/deref/node-json-api/node_modules/@pulumi/pulumi/runtime/stack.js:86:43)
    at Generator.next (<anonymous>)
```
tho they do also print out a "CONTEXT" which seems to always end in `[aws:lambda/permission:Permission]` for the errors i'm seeing
looking at that thread, glad to see you removed the global `unhandledRejection` handler - that was definitely bothering me
or i guess plan to remove it, the PR isn't merged yet
r
Yeah should be merged soon.
FWIW I like @prehistoric-coat-10166’s idea. Why not `extend LocalWorkspace` and override the functionality that you want to be different rather than re-implementing the entire surface area?
d
that's essentially what i'm doing - but instead of doing it implicitly via `extends`, i'm doing it explicitly. my experience is that it's totally a bogus thing to do in practice to override some subset of methods and expect the other set of methods to not break. there is a gist in the issue i opened, you can see that i am delegating to `LocalWorkspace` in a bunch of places
essentially, i made every method/property throw not-implemented and then either overrode or delegated them one-by-one as i progressed through the flow
👍🏽 1
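[Editor's note: a rough sketch of that throw-by-default delegation pattern, with the method list heavily abbreviated; it assumes the shape of the automation package's workspace surface.]

```ts
import { LocalWorkspace, ConfigMap } from "@pulumi/pulumi/automation";

function notImplemented(name: string): never {
  throw new Error(`not implemented: ${name}`);
}

class ExplicitWorkspace {
  constructor(private readonly local: LocalWorkspace) {}

  // Explicitly delegated as each step of the flow needed it...
  createStack(stackName: string): Promise<void> {
    return this.local.createStack(stackName);
  }
  getAllConfig(stackName: string): Promise<ConfigMap> {
    return this.local.getAllConfig(stackName);
  }

  // ...everything else fails loudly until it's deliberately opted in.
  removeStack(stackName: string): Promise<void> {
    return notImplemented("removeStack");
  }
}
```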
r
Yeah, I mean you’re right though I think it comes down to the fact that automation api in its current incarnation depends on the Pulumi CLI, and has to make it so that the conditions the CLI needs to operate are met. You can still also store your config/settings wherever you like, but the setup for running certain stack commands will depend on you providing the prerequisites that the binary needs. That being said, from the gist it looks like most of what you’re hitting could be fixed by making `runPulumiCmd` public?
d
making that function/method public would alleviate some of the pain, yes
👍🏽 1
another issue i'm discovering now: `sdk/nodejs/config/vars.ts` does `new pulumi.Config("aws")` at the top level of the module. this means that when automation runs later with a different config file, it's already too late to override this config - as a result, some code that relies on `aws.config.region` will return `undefined`
🤔 1
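[Editor's note: one possible workaround, not proposed in the thread: in an inline program, read the region from stack config inside the program body (evaluated per run) and pass an explicit provider, instead of relying on the value `@pulumi/aws` snapshots at module load. A sketch:]

```ts
import * as pulumi from "@pulumi/pulumi";
import * as aws from "@pulumi/aws";

const program = async () => {
  // Evaluated after selectStack, so it sees the right stack's config.
  const region = new pulumi.Config("aws").require("region") as aws.Region;
  const provider = new aws.Provider("explicit", { region });

  // Create resources against the explicit provider rather than the default one.
  new aws.s3.Bucket("example", {}, { provider });
};
```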
r
I’m curious though, what is it that’s missing from LocalWorkspace that you’re trying to customize? I see you’ve implemented `createStack` but it’s the same as in `LocalWorkspace`. Your `selectStack` implementation is different though, I see that.
d
well, i think the config issue i just mentioned is a bug with LocalWorkspace too
r
> this means that when automation runs later with a different config file

I’m not quite following what the scenario is here.
d
if you do `import * as aws from '@pulumi/aws'`, as you normally do, that will read your current selected stack's config; in particular, you'll notice that `aws.config.region` will be set
if you then do `selectStack` to some new stack with a config in a different region, `aws.config.region` is now wrong
r
Curious if you’ve looked through any of the automation api examples https://github.com/pulumi/automation-api-examples
`aws.config.region` is read from the config of the current stack when the pulumi program runs. The config comes from the `Pulumi.[stack].yaml` file. So… it should match whichever stack it is being run on.
d
```ts
console.log(
  '!!!!!!!!!!?!?!??!?!!',
  new pulumi.Config('aws').require('region'),
  aws.config.region,
)
```
outputs:
```
!!!!!!!!!!?!?!??!?!! us-west-2 undefined
```
indeed i have looked through most of the automation api examples
i'm wondering if maybe `aws.config.region` in this code is being captured at build time b/c of ts-node or esbuild or whatever
i did:
```ts
console.log('CAPTURING REGION', process.pid);
exports.region = __config.get("region") || utilities.getEnv("AWS_REGION", "AWS_DEFAULT_REGION");
```
and that is only logged once... before selectStack has been called
so yeah, no way that will be right 😛
r
Yeah I guess this is an inline program?
d
yup
i can file a bug on this, but it's late, so won't have time tonight to put together a minimal reproduction
r
I dunno if I’d consider that a bug in LocalWorkspace TBH. Seems more like a load-time particularity of the imported module (pulumi-aws) in this case and I’m not really sure what we could do to change that. But yeah, it is late. Feel free to file the bug and fill in more details when you have time.
d
i guess it's not a bug in LocalWorkspace, but definitely affects LocalWorkspace as well
almost done writing up the bug
thanks for the help/discussion, good night!
p
@dazzling-scientist-80826 i’m wondering if you are not making it too complicated. We do something similar that seems to work just fine without going into a CustomWorkspace level. I extracted some code from our app to give you an idea on how we do it: https://gist.github.com/roderik/0ce3768f36421be827150adc99fcb7f6 FYI, tips to make it better are appreciated @red-match-15116 ;)
b
I am also abstracting away pulumi in our in-house Automation API usage. I don't think you need to go so low level so as to create a custom implementation of `Workspace`. This is mainly a pulumi-level abstraction where `LocalWorkspace` is the one that still depends on the CLI, and a theoretical future `Workspace` implementation may no longer depend on the CLI - hence the issues you are encountering with certain types being non-public; it wasn't really written with the intention of consumers implementing anything at that level. I am also curious what you are trying to customize in `LocalWorkspace` that you couldn't do by placing your abstraction one level higher and wrapping it.

I would call the issue you are mentioning with AWS region config a bug, in the sense that you are right that there should be no module-level state corruption across pulumi runs via Automation API. Ideally there shouldn't be any global state between runs.
d
@proud-pizza-80589 @bored-oyster-3147 it's entirely possible that a custom workspace is overkill / not required. Being able to override the workDir seems to be sufficient to keep the extra config files out of the git repository. I started building the custom workspace because the docs suggested it was possible & i didn't expect it to be so difficult. in the process, i now much better understand the workspace mechanism & am likely to delete my workspace implementation.
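[Editor's note: for reference, a sketch of the workDir-only approach with an inline program; project/stack names are invented, and on SDKs from this era the import path is `@pulumi/pulumi/x/automation`.]

```ts
import { LocalWorkspace } from "@pulumi/pulumi/automation";
import * as fs from "fs";
import * as os from "os";
import * as path from "path";

async function devStack(name: string) {
  // Generated Pulumi.yaml / Pulumi.<stack>.yaml land here, not in the repo.
  const workDir = fs.mkdtempSync(path.join(os.tmpdir(), "pulumi-"));
  const stack = await LocalWorkspace.createOrSelectStack(
    {
      stackName: `mycompany/brandon-${name}`,
      projectName: "my-project",
      program: async () => { /* pulumi program */ },
    },
    { workDir },
  );
  await stack.setConfig("aws:region", { value: "us-west-2" });
  return stack;
}
```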
b
Gotcha. Can't hurt to understand it better! Mind pointing toward where docs said it was doable as a consumer? Wonder if that should be re-worded.
Never mind found it on your issue. Thank you!
l
Just catching up. The config bug you mention is a known issue, more detail here: https://github.com/pulumi/pulumi/issues/5582
👍 1
For context on the design of Workspace, the original thought was that as the architecture of Pulumi evolved we could build things like remote workspaces where the program and providers run on different networked hosts, or virtual workspaces that don't rely on static files on disk at all. Reimplementing workspace will be a lot of work, and requires quite a bit of understanding of the pulumi architecture and assumptions about the language runtime you're targeting (if considering inline programs).
Brandon, for your use case it sounds like building a small CLI or a script that is referenced from npm scripts might be a good fit. Both would allow you to control the surface area of pulumi you expose to your end users, set conventions, default config, etc.
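[Editor's note: to make the suggestion concrete, a sketch of such a script, wired up as something like `npm run stack -- up <name>`; the naming convention and config defaults are invented.]

```ts
import { LocalWorkspace } from "@pulumi/pulumi/automation";

async function main() {
  const [op, name] = process.argv.slice(2);
  const stack = await LocalWorkspace.createOrSelectStack({
    stackName: `mycompany/${process.env.USER}-${name}`, // dev-scoped stacks
    projectName: "my-project",
    program: async () => { /* pulumi program */ },
  });
  await stack.setConfig("aws:region", { value: "us-west-2" }); // default config
  if (op === "up") {
    await stack.up({ onOutput: (s) => process.stdout.write(s) });
  } else if (op === "refresh") {
    await stack.refresh({ onOutput: (s) => process.stdout.write(s) });
  } else if (op === "destroy") {
    await stack.destroy({ onOutput: (s) => process.stdout.write(s) });
  }
}

main().catch((err) => { console.error(err); process.exit(1); });
```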