dazzling-scientist-80826
04/26/2021, 12:05 AM
LocalWorkspace works, but it's not quite clear how one would go about implementing a custom workspace. In particular, it looks like there is some machinery around a "language runtime service" and other things that is necessary to make refresh/up implementations. Is there a guide or example for how to do this? LanguageServer is an 'internal' class... Stack class directly... i'm sure i can figure it out, but would be nice to have an example somewhere. please and thank you 🙂

billowy-army-68599
04/26/2021, 2:19 AM

dazzling-scientist-80826
04/26/2021, 3:08 AM

red-match-15116
04/26/2021, 4:11 AM

dazzling-scientist-80826
04/26/2021, 4:35 AM
1) Stacks will be named ${org}/${username}-${stackName} so that they are scoped by dev. Generally, I'll probably have something like mycompany/brandon-dev or similar while exploring.
It's OK for the production stack's config to live in the repository as a Pulumi.production.yaml file, but it might be nice to store that config elsewhere too. I was thinking it would be better, however, for all the dev stacks not to have their config live in files in the repo. Though there are some questions about version control if the stack config is in an external db or similar.
2) I'd like to automate stack create/refresh/up etc. from TypeScript, so that I'm not writing bash or shelling out in order to manipulate these dev stacks in CI and in a custom management UI.
What I've discovered is that the Stack class in the automation package depends on runPulumiCmd, which calls the pulumi binary. I had to do some hacky stuff with serializeArgsForOp in order to add a --cwd flag set to the workDir, and then I had to spit out a Pulumi.yaml file into that working directory to avoid the pulumi command complaining that there was no project file. I also had to symlink node_modules into that directory, though I'm not sure that was strictly necessary. Additionally, I've had to spit out a Pulumi.*.yaml file, which I've done by calling the private runPulumiCmd method with the config subcommand (though I could have done that with a direct file write), in order to implement setAllConfig, because the pulumi command reads that config file and it doesn't seem obvious how one might pass the config otherwise. I also used runPulumiCmd to run stack init in order to create a stack in the state management service.
All in all, I did a pile of hacks. Not sure if I'm missing something obvious, or the above is a checklist of issues to fix in order to make it possible to create a dynamic workspace 🙂
I'm getting lots of unhandled rejection errors trying to do stack.up though
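For reference, a minimal sketch of what the create/config/refresh/up flow in 2) can look like with the stock Automation API, with no hand-written Pulumi.yaml or Pulumi.<stack>.yaml; the stack name, project name, and inline program are illustrative, and the import path is @pulumi/pulumi/automation in current SDKs (@pulumi/pulumi/x/automation in the 2.x preview that was current at the time):

import { LocalWorkspace } from "@pulumi/pulumi/automation";
import * as aws from "@pulumi/aws";

async function deployDevStack(username: string) {
  // Per-dev stack name, e.g. "mycompany/brandon-dev" (illustrative).
  const stack = await LocalWorkspace.createOrSelectStack({
    stackName: `mycompany/${username}-dev`,
    projectName: "my-project",
    // Inline program: the workspace writes the project file into a temp
    // working directory for you, so nothing needs to live in the repo.
    program: async () => {
      const bucket = new aws.s3.Bucket("dev-bucket");
      return { bucketName: bucket.id };
    },
  });

  // Config is applied through the workspace instead of hand-writing
  // Pulumi.<stack>.yaml; the values could be loaded from anywhere.
  await stack.setAllConfig({ "aws:region": { value: "us-west-2" } });

  await stack.refresh({ onOutput: console.log });
  const result = await stack.up({ onOutput: console.log });
  console.log("outputs:", result.outputs);
}

deployDevStack("brandon").catch((err) => {
  console.error(err);
  process.exit(1);
});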
prehistoric-coat-10166
04/26/2021, 4:57 AM
... LocalWorkspace underneath. You could still abstract away the configuration to another source, only configuring LocalWorkspace when necessary using the configuration methods.
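A sketch of that suggestion for a local (on-disk) program, assuming a hypothetical loadStackConfig helper that reads dev-stack config from wherever it actually lives (a database, SSM, etc.):

import { ConfigMap, LocalWorkspace } from "@pulumi/pulumi/automation";

// Hypothetical helper: fetch config for a dev stack from an external store
// instead of committing a Pulumi.<stack>.yaml for every dev.
declare function loadStackConfig(stackName: string): Promise<ConfigMap>;

async function configureAndUp(stackName: string, workDir: string) {
  const stack = await LocalWorkspace.selectStack({ stackName, workDir });
  await stack.setAllConfig(await loadStackConfig(stackName));
  await stack.up({ onOutput: console.log });
}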
04/26/2021, 4:58 AMI’m getting lots ofWe’ve been tracking a bug with unhandled rejections, but they can also be caused by errors in your code. Stack traces would be helpful.errors trying to dounhandled rejection
thoughstack.up
dazzling-scientist-80826
04/26/2021, 4:59 AM
STACK_TRACE:
Error
at Object.debuggablePromise (/Users/brandonbloom/Projects/deref/node-json-api/node_modules/@pulumi/pulumi/runtime/debuggable.js:69:75)
at Object.registerResource (/Users/brandonbloom/Projects/deref/node-json-api/node_modules/@pulumi/pulumi/runtime/resource.js:219:18)
at new Resource (/Users/brandonbloom/Projects/deref/node-json-api/node_modules/@pulumi/pulumi/resource.js:215:24)
at new CustomResource (/Users/brandonbloom/Projects/deref/node-json-api/node_modules/@pulumi/pulumi/resource.js:307:9)
at new Permission (/Users/brandonbloom/Projects/deref/node-json-api/node_modules/@pulumi/lambda/permission.ts:270:9)
at createLambdaPermissions (/Users/brandonbloom/Projects/deref/node-json-api/node_modules/@pulumi/apigateway/api.ts:645:30)
at new API (/Users/brandonbloom/Projects/deref/node-json-api/node_modules/@pulumi/apigateway/api.ts:565:29)
at Object.program [as init] (/Users/brandonbloom/Projects/deref/node-json-api/cloud.ts:147:23)
at Stack.<anonymous> (/Users/brandonbloom/Projects/deref/node-json-api/node_modules/@pulumi/pulumi/runtime/stack.js:86:43)
at Generator.next (<anonymous>)
[aws:lambda/permission:Permission]
... for the errors i'm seeing
... unhandledRejection handler - that was definitely bothering me
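For anyone following along: a plain Node-level hook like the one below is one way to surface these otherwise-opaque rejections with a reason and stack while debugging; it is not Pulumi-specific and not a fix for the underlying errors:

// Debugging aid: print the reason/stack for rejections that would
// otherwise only appear as "unhandled rejection" noise during stack.up.
process.on("unhandledRejection", (reason) => {
  console.error("UNHANDLED REJECTION:", reason);
});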
red-match-15116
04/26/2021, 5:10 AM
Could you just extend LocalWorkspace and override the functionality that you want to be different rather than re-implementing the entire surface area?

dazzling-scientist-80826
04/26/2021, 5:19 AM
rather than using extends, i'm doing it explicitly. my experience is that it's totally a bogus thing to do in practice to override some subset of methods and expect the other set of methods to not break. there is a gist in the issue i opened, you can see that i am delegating to LocalWorkspace in a bunch of places
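A compressed sketch of that delegation style, with only a couple of methods shown; the real Workspace surface is much larger, and the class name here is illustrative:

import { ConfigMap, LocalWorkspace } from "@pulumi/pulumi/automation";

// Explicit delegation: forward to an inner LocalWorkspace rather than
// subclassing it, so every supported method is spelled out.
class DelegatingWorkspace {
  constructor(private readonly inner: LocalWorkspace) {}

  async createStack(stackName: string): Promise<void> {
    // Custom bookkeeping (naming rules, registration, ...) could go here.
    await this.inner.createStack(stackName);
  }

  async setAllConfig(stackName: string, config: ConfigMap): Promise<void> {
    // Config could be loaded from or persisted to an external store first.
    await this.inner.setAllConfig(stackName, config);
  }
}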
red-match-15116
04/26/2021, 6:05 AM
Would it help to make runPulumiCmd public?

dazzling-scientist-80826
04/26/2021, 6:06 AM
sdk/nodejs/config/vars.ts does new pulumi.Config("aws") at the top level of the module. this means that when automation runs later with a different config file, it's already too late to override this config - as a result, some code that relies on aws.config.region will return undefined
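One way to sidestep that module-level capture inside the program itself is to read the value through pulumi.Config at program-execution time, rather than relying on the value @pulumi/aws grabbed at import time; this is the same difference the debug output further down in the thread shows. A minimal sketch:

import * as pulumi from "@pulumi/pulumi";
import * as aws from "@pulumi/aws";

const program = async () => {
  // Read at program-execution time: reflects the currently selected
  // stack's config, unlike aws.config.region, which was captured when
  // @pulumi/aws was first imported.
  const region = new pulumi.Config("aws").require("region");

  const bucket = new aws.s3.Bucket("example");
  return { region, bucketName: bucket.id };
};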
red-match-15116
04/26/2021, 6:19 AM
... createStack, but it's the same as in LocalWorkspace. Your selectStack implementation is different though, I see that.

dazzling-scientist-80826
04/26/2021, 6:20 AM

red-match-15116
04/26/2021, 6:20 AM
> this means that when automation runs later with a different config file
I'm not quite following what the scenario is here.
dazzling-scientist-80826
04/26/2021, 6:22 AM
import * as aws from '@pulumi/aws', as you normally do. that will read your current selected stack's config - in particular, you'll notice that aws.config.region will be set. then if you selectStack to some new stack with a config in a different region, aws.config.region is now wrong

red-match-15116
04/26/2021, 6:22 AM
aws.config.region is read from the config of the current stack when the pulumi program runs, i.e. from the Pulumi.[stack].yaml file. So… it should match whichever stack it is being run on.
dazzling-scientist-80826
04/26/2021, 6:24 AM
console.log(
  '!!!!!!!!!!?!?!??!?!!',
  new pulumi.Config('aws').require('region'),
  aws.config.region,
)
!!!!!!!!!!?!?!??!?!! us-west-2 undefined
aws.config.region in this code is being captured at build time b/c of ts-node or esbuild or whatever
console.log('CAPTURING REGION', process.pid);
exports.region = __config.get("region") || utilities.getEnv("AWS_REGION", "AWS_DEFAULT_REGION");
and that is only logged once... before selectStack has been called
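A related workaround, sketched here rather than taken from the thread, is to build an explicit aws.Provider from stack config at run time and attach it to resources, so nothing depends on what the default provider captured at import:

import * as pulumi from "@pulumi/pulumi";
import * as aws from "@pulumi/aws";

const program = async () => {
  // Construct the provider from the selected stack's config at run time.
  const region = new pulumi.Config("aws").require("region") as aws.Region;
  const provider = new aws.Provider("explicit", { region });

  // Resources pinned to the explicit provider ignore the default one.
  const bucket = new aws.s3.Bucket("example", {}, { provider });
  return { bucketName: bucket.id };
};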
red-match-15116
04/26/2021, 6:33 AM

dazzling-scientist-80826
04/26/2021, 6:33 AM

red-match-15116
04/26/2021, 6:37 AM

dazzling-scientist-80826
04/26/2021, 6:38 AM

proud-pizza-80589
04/26/2021, 7:29 AM

bored-oyster-3147
04/26/2021, 2:05 PM
... Workspace. This is mainly a pulumi-level abstraction where LocalWorkspace is the one that still depends on the CLI, and a theoretical future Workspace implementation may no longer depend on the CLI - hence the issues you are encountering with certain types being non-public; it wasn't really written with the intention of consumers implementing anything at that level.
I am also curious what you are trying to customize in LocalWorkspace that you couldn't do by placing your abstraction one level higher and wrapping it.
I would call the issue you are mentioning with AWS region config a bug, in the sense that you are right that there should be no module-level state corruption across pulumi runs via the Automation API. Ideally there shouldn't be any global state between runs.
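"One level higher" here would look roughly like the sketch below: callers deal with a small domain-specific class that composes LocalWorkspace and Stack, and never touch the Workspace abstraction at all. All names are made up:

import { ConfigMap, LocalWorkspace, PulumiFn, UpResult } from "@pulumi/pulumi/automation";

// Domain-level wrapper around the Automation API: per-dev stacks get a
// deploy/destroy surface without implementing Workspace itself.
class DevStacks {
  constructor(
    private readonly org: string,
    private readonly projectName: string,
    private readonly program: PulumiFn,
  ) {}

  async deploy(username: string, config: ConfigMap): Promise<UpResult> {
    const stack = await LocalWorkspace.createOrSelectStack({
      stackName: `${this.org}/${username}-dev`,
      projectName: this.projectName,
      program: this.program,
    });
    await stack.setAllConfig(config);
    return stack.up({ onOutput: console.log });
  }

  async destroy(username: string): Promise<void> {
    const stack = await LocalWorkspace.selectStack({
      stackName: `${this.org}/${username}-dev`,
      projectName: this.projectName,
      program: this.program,
    });
    await stack.destroy({ onOutput: console.log });
  }
}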
dazzling-scientist-80826
04/26/2021, 2:08 PM

bored-oyster-3147
04/26/2021, 2:10 PM

lemon-agent-27707
04/26/2021, 3:31 PM