# getting-started

magnificent-soccer-44287

01/20/2024, 5:35 AM
RE: https://www.pulumi.com/blog/environments-secrets-configurations-management/#:~:text=For%20example%2C%20the%20Pulumi%20ESC,staging%20%2D%2D%20aws%20s3%20ls%20. Pulumi ESC programmatic access from within stacks (attached screenshot): are we able to pull a specific, named ESC environment from within TypeScript without adding it to the stack's config file? Or was that explicitly restricted for security reasons?
I looked around and couldn't find any good examples of how to use ESC; if there's a resource I missed, please just point me to it 😄 ty in advance
Or is this basically still the status, and there is no TS support available? https://github.com/pulumi/esc/issues/60
I could work around this by creating specific "esc-to-stack-output" stacks to use with a stack reference, but I really want to avoid that.

miniature-musician-31262

01/20/2024, 5:58 AM
Have you tried using the ESC REST API with an HTTP client? https://www.pulumi.com/docs/pulumi-cloud/cloud-rest-api/#environments
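A minimal sketch of what that could look like from TypeScript, for anyone following along. The `/api/preview/environments/.../open` paths and the two-step open-then-read flow below are my reading of that docs page, not something I've verified end to end, so double-check them (and the `Authorization: token ...` header) against the current REST API docs:

```typescript
// Sketch: open a named ESC environment via the Pulumi Cloud REST API.
// Endpoint paths are assumptions taken from the REST API docs page linked
// above; verify them before relying on this.

const API_BASE = "https://api.pulumi.com/api/preview/environments";

interface EscRequest {
  url: string;
  method: "GET" | "POST";
  headers: Record<string, string>;
}

// Step 1: POST .../{org}/{env}/open starts an "open session" and returns { id }.
function buildOpenRequest(org: string, env: string, token: string): EscRequest {
  return {
    url: `${API_BASE}/${org}/${env}/open`,
    method: "POST",
    headers: { Authorization: `token ${token}`, "Content-Type": "application/json" },
  };
}

// Step 2: GET .../{org}/{env}/open/{id} reads the resolved environment values.
function buildReadRequest(org: string, env: string, sessionId: string, token: string): EscRequest {
  return {
    url: `${API_BASE}/${org}/${env}/open/${sessionId}`,
    method: "GET",
    headers: { Authorization: `token ${token}` },
  };
}

// Untested end-to-end flow (needs Node 18+ for global fetch):
async function openEnvironment(org: string, env: string, token: string): Promise<unknown> {
  const f = (globalThis as any).fetch;
  const open = buildOpenRequest(org, env, token);
  const { id } = await (await f(open.url, { method: open.method, headers: open.headers })).json();
  const read = buildReadRequest(org, env, id, token);
  return (await f(read.url, { method: read.method, headers: read.headers })).json();
}
```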

magnificent-soccer-44287

01/20/2024, 6:05 AM
I have looked at it. How are you ingesting it? Just using `pulumi.Command` to curl it inside the stack??
I'd really like to use environments without binding them to a stack directly, but I also don't want to do anything too hacky / tech-debty.
And it would make sense if only the stack-attached env were available, for security reasons.
Attaching an env to a stack directly and using it via the built-in config with hybrid import seems OK, but
I'm just curious what solutions / implementation patterns others have gone with so far.

enough-architect-32336

01/20/2024, 6:43 PM
Generally people add an env to a stack directly. What reservations do you have about doing that?

magnificent-soccer-44287

01/20/2024, 9:56 PM
I'm trying to matrix a GHA CI/CD build to deploy to all active tenants, as configured in Pulumi ESC. As of right now, using this action: https://github.com/pulumi/actions with:
```
pulumi env open
```
is my best alternative to basically running a "fake stack" in each CI/CD build to be able to use its outputs, which are forwarded from ESC config. So to your point, my primary reservation is that I need to use the ESC variables outside of a Pulumi stack.
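Concretely, the setup job I have in mind would open the environment and turn its output into a GHA matrix, roughly like this. The `tenants` shape here is a made-up example (not anything Pulumi defines), and I'm assuming `pulumi env open` prints the resolved values as JSON (check the `--format` flag on your CLI version):

```typescript
// Sketch: derive a GitHub Actions matrix from `pulumi env open <org>/<env>`.
// The { tenants: { <name>: { stacks: {...} } } } shape is a hypothetical
// example environment layout, not a Pulumi schema.
import { execFileSync } from "node:child_process";

interface MatrixEntry {
  tenant: string;
  stack: string;
}

// Pure and easy to test: flatten tenants/stacks into matrix entries.
function buildMatrix(values: { tenants: Record<string, { stacks: Record<string, unknown> }> }): MatrixEntry[] {
  return Object.entries(values.tenants).flatMap(([tenant, t]) =>
    Object.keys(t.stacks).map((stack) => ({ tenant, stack }))
  );
}

// In CI: open the environment and print a matrix for a later `fromJSON(...)`.
function printMatrixFromEnv(envRef: string): void {
  const json = execFileSync("pulumi", ["env", "open", envRef], { encoding: "utf8" });
  console.log(JSON.stringify({ include: buildMatrix(JSON.parse(json)) }));
}
```

The printed JSON can be appended to `$GITHUB_OUTPUT` and fed into `strategy.matrix` via `fromJSON` in the downstream job.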

red-match-15116

01/21/2024, 11:03 PM
@magnificent-soccer-44287 yeah https://github.com/pulumi/esc/issues/60 is the right issue for this, and this is something we definitely want to support, but don't just yet. You have a couple of options. One is to use Automation API to do basically what you describe here:
> is my best alternative to basically running a "fake stack" in each CICD build to be able to use its outputs which are forwarded from ESC config
Using Automation API, you could create a new ephemeral stack, add the environment to it, and then use `getConfig` to access the environment config values. There is a caveat, though: this will currently only work for plain key/value configuration defined in the environment, but not for secrets or ESC providers. That's because `getConfig` doesn't currently `open` the environment, it just runs a `check`. There's an open issue to also open the environment during a `getConfig`. The other alternative is to use my very-WIP branch code where I've started playing around with the beginnings of a Node SDK. I'm not sure when we'll be in a position to get this to a shippable state, but it may help you get to what you need.
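For anyone landing here later, the ephemeral-stack route might look roughly like this. Assumptions: a recent `@pulumi/pulumi` where the workspace exposes `addEnvironments` (on older SDKs you'd write the `environment:` key into the stack's config file yourself), and, per the caveat above, only plain key/values come through:

```typescript
// Sketch: read ESC config through an ephemeral Automation API stack.
// `addEnvironments` is assumed to exist on LocalWorkspace in recent
// @pulumi/pulumi releases; treat this as a starting point, not gospel.

// Pure helper reflecting the caveat above: keep only plain key/values,
// since secrets and ESC providers won't resolve through a check.
type ConfigMap = Record<string, { value: string; secret?: boolean }>;

function plainValues(cfg: ConfigMap): Record<string, string> {
  return Object.fromEntries(
    Object.entries(cfg)
      .filter(([, v]) => !v.secret)
      .map(([k, v]) => [k, v.value])
  );
}

// Untested flow sketch (requires the pulumi CLI and a logged-in session):
async function readEnvConfig(envName: string): Promise<Record<string, string>> {
  // Dynamic specifier so this compiles without @pulumi/pulumi installed.
  const mod = "@pulumi/pulumi/automation";
  const { LocalWorkspace } = await import(mod);
  const stack = await LocalWorkspace.createOrSelectStack({
    stackName: "esc-reader",
    projectName: "esc-reader",
    program: async () => ({}), // empty inline program; the stack is never deployed
  });
  await stack.workspace.addEnvironments(stack.name, envName);
  return plainValues((await stack.getAllConfig()) as ConfigMap);
}
```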

magnificent-soccer-44287

01/22/2024, 12:03 AM
@red-match-15116 I actually ended up coming up with something okay-ish for a pure ESC driven matrix deployment. a) create, for example, a central config as such:
```yaml
values:
  pulumiConfig:
    aws:region: us-east-1 # this is the global stack's region.
    glob-infra:tenants:
      jabc: # Note: do NOT place tenants into us-east-1
        stacks:
          core-infra:
            mode: production
          pmb-main:
            mode: production
          domain-user:
            mode: staging
      dev:
        stacks: # if stack is not included, it's disabled
          config-based-matrix-test:
            mode: development
            esc-env: test-gha-matrix-env
            overrides:
              testKey1: testVal1
              testKey2: testVal2
          core-infra:
            esc-env: core-infra-dev
            mode: development # development, test, staging, production
          pmb-main:
            esc-env: pmb-main-dev
            mode: production
      stag:
        stacks: # We could auto-manage stacks based on features.
          config-based-matrix-test:
            mode: development
            overrides:
              s3BucketName: cfg-stag-us-west-test
          domain-user:
            esc-env: domain-user-stag
            mode: staging # TODO: control API integration endpoints.
          core-infra:
            esc-env: core-infra-stag
            mode: development # TODO: control CDN timeouts.
            overrides:
              testOverrideKey: testOverrideValue
          pmb-main:
            esc-env: pmb-main-stag
            mode: production # TODO: control resource allocation.
            overrides: # overrides env-based stack config (not app config)
              mainCpu: 2048
              sidekiqCpu: 512
```
The core stack hydrates the ESC config and writes it as an output. Then, each orchestrated stack:
• has a TS script which auths to Pulumi and uses `pulumi stack output` to grab the output of the above
• initiates / refreshes all stacks in scope of the current matrix deployment
• writes the ESC config and env links into the `Pulumi.[StackName].yaml` file
• then we matrix-execute the regular `pulumi up`, and they're fully linked to our dynamic ESC config!
So basically: add a tenant to the central ESC config => new stacks for each project get created as necessary and linked to the new ESC configs for them, etc., etc. I've opted to gitignore `Pulumi.*.yaml` so these files are treated as ephemeral and generated from our central ESC config prior to every up/preview.
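In case it helps anyone copying this pattern, the step that generates the ephemeral `Pulumi.[StackName].yaml` can be sketched like this. The key names (`esc-env`, `mode`, `overrides`) mirror my example config above, and the hand-rolled YAML emitter is deliberately minimal; a real script would use a YAML library:

```typescript
// Sketch: render a Pulumi.<StackName>.yaml from one stack entry of the
// central ESC config shown above. Key names follow that example config.

interface StackEntry {
  mode: string;
  "esc-env"?: string;
  overrides?: Record<string, string | number>;
}

function renderStackYaml(project: string, entry: StackEntry): string {
  const lines: string[] = [];
  // Link the stack to its ESC environment via the stack file's `environment:` key.
  if (entry["esc-env"]) {
    lines.push("environment:", `  - ${entry["esc-env"]}`);
  }
  // Plain stack config: the mode plus any per-stack overrides.
  lines.push("config:", `  ${project}:mode: ${entry.mode}`);
  for (const [k, v] of Object.entries(entry.overrides ?? {})) {
    lines.push(`  ${project}:${k}: ${v}`);
  }
  return lines.join("\n") + "\n";
}
```

We then write the result with `fs.writeFileSync` right before `pulumi up`, which fits the gitignore-and-regenerate approach above.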
The goal here was to not have to change anything in our GHA CI/CD pipeline to enable new tenants, which for us means creating a new AWS account, roles, and permissions, deploying 5+ stacks to it, linking it to the central account ALB, blah blah.