sparse-intern-71089 (04/26/2021, 12:05 AM): […]

dazzling-scientist-80826 (04/26/2021, 12:06 AM): …LanguageServer is an 'internal' class…

dazzling-scientist-80826 (04/26/2021, 12:09 AM): …the Stack class directly… i'm sure i can figure it out, but it would be nice to have an example somewhere. please and thank you 🙂

dazzling-scientist-80826 (04/26/2021, 1:40 AM): […]

dazzling-scientist-80826 (04/26/2021, 1:41 AM): […]

dazzling-scientist-80826 (04/26/2021, 1:45 AM): […]

billowy-army-68599: […]

dazzling-scientist-80826 (04/26/2021, 3:08 AM): […]

dazzling-scientist-80826 (04/26/2021, 3:26 AM): […]

red-match-15116 (04/26/2021, 4:11 AM): […]
dazzling-scientist-80826 (04/26/2021, 4:35 AM):
1) …${org}/${username}-${stackName} so that they are scoped by dev. Generally, I'll probably have something like mycompany/brandon-dev or similar while exploring.
It's OK for the production stack's config to live in the repository as a Pulumi.production.yaml file, but it might be nice to store that config elsewhere too. For the dev stacks, though, I'd rather their config not live in files in the repo, although keeping stack config in an external db or similar raises some version-control questions.
2) I'd like to automate stack create/refresh/up etc. from TypeScript, so that I'm not writing bash or shelling out in order to manipulate these dev stacks in CI and in a custom management UI.
What I've discovered is that the Stack class in the automation package depends on runPulumiCmd, which calls the pulumi binary. I had to do some hacky stuff with serializeArgsForOp to add a --cwd flag pointing at workDir, and then I had to write a Pulumi.yaml file into that working directory so the pulumi command would stop complaining that there was no project file. I also had to symlink node_modules into that directory, though I'm not sure that was strictly necessary. On top of that, I had to write out a Pulumi.*.yaml file in order to implement setAllConfig, which I did by calling the private runPulumiCmd method with the config subcommand (though I could have done it with a direct file write) — the pulumi command reads that config file, and it isn't obvious how to pass the config otherwise. Finally, I used runPulumiCmd to run stack init in order to create the stack in the state management service.
All in all, a pile of hacks. Not sure if I'm missing something obvious, or if the above is a checklist of issues to fix in order to make dynamic workspaces possible 🙂
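The workflow described above can be sketched with the Automation API's inline-program entry point, which creates the working directory and Pulumi.yaml itself, so none of the --cwd / symlink hacks are needed. This is a sketch assuming @pulumi/pulumi v3's automation module; `mycompany`, the per-dev naming scheme, and the `node-json-api` project name are taken from the thread, the rest is illustrative:

```typescript
// Sketch of the flow above using the Automation API's inline program support
// (assumes @pulumi/pulumi >= 3.x). LocalWorkspace generates the project file
// and working directory for an inline program on its own.
import { LocalWorkspace } from "@pulumi/pulumi/automation";

async function devStackUp(username: string): Promise<void> {
  const stack = await LocalWorkspace.createOrSelectStack({
    // Per-developer stack naming, e.g. mycompany/brandon-dev.
    stackName: `mycompany/${username}-dev`,
    projectName: "node-json-api",
    program: async () => {
      // ...declare resources here, as in a normal Pulumi program...
      return {};
    },
  });

  // Config can come from any external source (db, secrets manager, ...)
  // instead of a Pulumi.<stack>.yaml checked into the repo.
  await stack.setAllConfig({
    "aws:region": { value: "us-west-2" },
  });

  const result = await stack.up({ onOutput: console.log });
  console.log(result.summary.result);
}
```

The same `LocalWorkspace` class also exposes `createStack`/`selectStack` and the config methods directly, so stack init no longer needs a hand-rolled `runPulumiCmd` call.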
dazzling-scientist-80826 (04/26/2021, 4:37 AM): i'm getting lots of unhandled rejection errors trying to do stack.up though

prehistoric-coat-10166 (04/26/2021, 4:57 AM): …LocalWorkspace underneath. You could still abstract away the configuration to another source, only configuring LocalWorkspace when necessary using the configuration methods.

red-match-15116 (04/26/2021, 4:58 AM):
> i'm getting lots of unhandled rejection errors trying to do stack.up though
We've been tracking a bug with unhandled rejections, but they can also be caused by errors in your code. Stack traces would be helpful.
dazzling-scientist-80826 (04/26/2021, 4:59 AM): […]

dazzling-scientist-80826 (04/26/2021, 5:00 AM):
    STACK_TRACE:
    Error
        at Object.debuggablePromise (/Users/brandonbloom/Projects/deref/node-json-api/node_modules/@pulumi/pulumi/runtime/debuggable.js:69:75)
        at Object.registerResource (/Users/brandonbloom/Projects/deref/node-json-api/node_modules/@pulumi/pulumi/runtime/resource.js:219:18)
        at new Resource (/Users/brandonbloom/Projects/deref/node-json-api/node_modules/@pulumi/pulumi/resource.js:215:24)
        at new CustomResource (/Users/brandonbloom/Projects/deref/node-json-api/node_modules/@pulumi/pulumi/resource.js:307:9)
        at new Permission (/Users/brandonbloom/Projects/deref/node-json-api/node_modules/@pulumi/lambda/permission.ts:270:9)
        at createLambdaPermissions (/Users/brandonbloom/Projects/deref/node-json-api/node_modules/@pulumi/apigateway/api.ts:645:30)
        at new API (/Users/brandonbloom/Projects/deref/node-json-api/node_modules/@pulumi/apigateway/api.ts:565:29)
        at Object.program [as init] (/Users/brandonbloom/Projects/deref/node-json-api/cloud.ts:147:23)
        at Stack.<anonymous> (/Users/brandonbloom/Projects/deref/node-json-api/node_modules/@pulumi/pulumi/runtime/stack.js:86:43)
        at Generator.next (<anonymous>)

dazzling-scientist-80826 (04/26/2021, 5:01 AM): …[aws:lambda/permission:Permission] for the errors i'm seeing

dazzling-scientist-80826 (04/26/2021, 5:03 AM): …unhandledRejection handler - that was definitely bothering me
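A minimal handler along the lines being described here — logging the full stack trace of any unhandled rejection instead of Node's terse default warning — might look like this (an illustrative sketch, not the thread's actual code):

```typescript
// Log unhandled promise rejections with a full stack trace. Illustrative
// sketch; wire the logging/exit policy up however suits your program.
process.on("unhandledRejection", (reason: unknown) => {
  const err = reason instanceof Error ? reason : new Error(String(reason));
  console.error("UNHANDLED REJECTION:", err.stack);
});

// Example: a rejected promise with no .catch() attached triggers the handler.
Promise.reject(new Error("boom"));
```

With a listener registered, Node also stops terminating the process on unhandled rejections, which helps when debugging a long automation run.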
dazzling-scientist-80826 (04/26/2021, 5:04 AM): […]

red-match-15116 (04/26/2021, 5:10 AM): […]

red-match-15116 (04/26/2021, 5:12 AM): …extend LocalWorkspace and override the functionality that you want to be different, rather than re-implementing the entire surface area?

dazzling-scientist-80826 (04/26/2021, 5:19 AM): …extends - i'm doing it explicitly. my experience is that overriding some subset of methods and expecting the rest not to break is totally bogus in practice. there is a gist in the issue i opened; you can see that i'm delegating to LocalWorkspace in a bunch of places
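The delegation pattern being described — forwarding explicitly to a wrapped workspace rather than subclassing — can be sketched without any Pulumi dependency. The `Workspace` interface below is a hypothetical stand-in for the slice of the automation API actually used, not Pulumi's real type:

```typescript
// Delegation instead of inheritance: wrap the workspace behind an interface
// and forward each supported method explicitly, so nothing is inherited and
// an internal change to unrelated methods can't silently break the wrapper.
// `Workspace` here is a hypothetical stand-in, NOT Pulumi's Workspace type.
type ConfigMap = Record<string, { value: string; secret?: boolean }>;

interface Workspace {
  setAllConfig(stackName: string, config: ConfigMap): Promise<void>;
}

class DevWorkspace implements Workspace {
  constructor(private readonly inner: Workspace) {}

  // Explicit forwarding: every supported method is listed, one by one.
  setAllConfig(stackName: string, config: ConfigMap): Promise<void> {
    return this.inner.setAllConfig(stackName, config);
  }
}
```

The trade-off versus `extends` is verbosity in exchange for a closed, predictable surface: a method the wrapper doesn't forward simply doesn't exist on it.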
dazzling-scientist-80826 (04/26/2021, 5:22 AM): […]

red-match-15116 (04/26/2021, 6:05 AM): …runPulumiCmd public?

dazzling-scientist-80826 (04/26/2021, 6:06 AM): […]

dazzling-scientist-80826 (04/26/2021, 6:17 AM): sdk/nodejs/config/vars.ts does new pulumi.Config("aws") at the top level of the module. this means that when automation runs later with a different config file, it's already too late to override this config - as a result, some code that relies on aws.config.region will return undefined

red-match-15116 (04/26/2021, 6:19 AM): …createStack, but it's the same as in LocalWorkspace. Your selectStack implementation is different though, I see that.

dazzling-scientist-80826 (04/26/2021, 6:20 AM): […]

red-match-15116 (04/26/2021, 6:20 AM):
> this means that when automation runs later with a different config file
I'm not quite following what the scenario is here.

dazzling-scientist-80826 (04/26/2021, 6:22 AM): import * as aws from '@pulumi/aws', as you normally do. that will read your currently selected stack's config - in particular, you'll notice that aws.config.region will be set

dazzling-scientist-80826 (04/26/2021, 6:22 AM): …then selectStack to some new stack with a config in a different region, and aws.config.region is now wrong

red-match-15116 (04/26/2021, 6:22 AM): […]

red-match-15116 (04/26/2021, 6:23 AM): aws.config.region is read from the config of the current stack when the pulumi program runs.

red-match-15116 (04/26/2021, 6:24 AM): …Pulumi.[stack].yaml file. So… it should match whichever stack it is being run on.
04/26/2021, 6:24 AMconsole.log(
'!!!!!!!!!!?!?!??!?!!',
new pulumi.Config('aws').require('region'),
aws.config.region,
)dazzling-scientist-80826
04/26/2021, 6:24 AMdazzling-scientist-80826
04/26/2021, 6:24 AM!!!!!!!!!!?!?!??!?!! us-west-2 undefined
dazzling-scientist-80826
dazzling-scientist-80826 (04/26/2021, 6:25 AM): […]

dazzling-scientist-80826 (04/26/2021, 6:27 AM): …aws.config.region in this code is being captured at build time b/c of ts-node or esbuild or whatever

dazzling-scientist-80826 (04/26/2021, 6:31 AM):
    console.log('CAPTURING REGION', process.pid);
    exports.region = __config.get("region") || utilities.getEnv("AWS_REGION", "AWS_DEFAULT_REGION");
and that is only logged once... before selectStack has been called
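The capture problem can be reproduced without Pulumi at all: a module that copies config into an export at load time keeps the stale value forever, while a lazy getter re-reads the live source on each access. An illustrative sketch, where `configSource` stands in for the currently selected stack's config:

```typescript
// Minimal repro of the stale-config capture (no Pulumi involved).
const configSource = new Map<string, string>([["region", "us-west-2"]]);

// Eager: evaluated once, at "module load" - like exports.region above.
const eager = { region: configSource.get("region") };

// Lazy: re-reads the live config on every access.
const lazy = {
  get region() {
    return configSource.get("region");
  },
};

// Later, the selected stack changes (e.g. selectStack picks a new region).
configSource.set("region", "eu-central-1");

console.log(eager.region); // "us-west-2" - the stale capture
console.log(lazy.region);  // "eu-central-1" - tracks the live config
```

This is why `aws.config.region` comes back `undefined`/stale after `selectStack`: the provider module's exports were already computed under the old (or missing) config when the module was first imported.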
dazzling-scientist-80826 (04/26/2021, 6:31 AM): […]

red-match-15116 (04/26/2021, 6:33 AM): […]

dazzling-scientist-80826 (04/26/2021, 6:33 AM): […]

dazzling-scientist-80826 (04/26/2021, 6:34 AM): […]

red-match-15116 (04/26/2021, 6:37 AM): […]

dazzling-scientist-80826 (04/26/2021, 6:38 AM): […]

dazzling-scientist-80826 (04/26/2021, 6:38 AM): […]

dazzling-scientist-80826 (04/26/2021, 6:42 AM): […]

dazzling-scientist-80826 (04/26/2021, 6:43 AM): […]

proud-pizza-80589 (04/26/2021, 7:29 AM): […]

bored-oyster-3147 (04/26/2021, 2:05 PM): …Workspace. This is mainly a pulumi-level abstraction: LocalWorkspace is the implementation that still depends on the CLI, and a theoretical future Workspace implementation may no longer depend on the CLI - hence the issues you're encountering with certain types being non-public. It wasn't really written with the intention of consumers implementing anything at that level.
I'm also curious what you're trying to customize in LocalWorkspace that you couldn't do by placing your abstraction one level higher and wrapping it.
I would call the AWS region config issue you mention a bug, in the sense that you're right there should be no module-level state corruption across pulumi runs via the Automation API. Ideally there shouldn't be any global state between runs.

dazzling-scientist-80826 (04/26/2021, 2:08 PM): […]

bored-oyster-3147 (04/26/2021, 2:10 PM): […]

bored-oyster-3147 (04/26/2021, 2:11 PM): […]

lemon-agent-27707 (04/26/2021, 3:31 PM): […]

lemon-agent-27707 (04/26/2021, 3:34 PM): […]

lemon-agent-27707 (04/26/2021, 3:36 PM): […]