# aws
b
Update: All fixed, details at the end of the thread. Thank you Mitch!

---

Hi all, I think I'm having a similar issue with SSO and Pulumi as the one here: https://pulumi-community.slack.com/archives/CRH5ENVDX/p1701465244614569

This popped up after a few hours of working successfully in Pulumi and, oddly, only occurs in one AWS stack. Another AWS stack folder with the same credentials works fine. I'm getting:
```
Diagnostics:
  pulumi:providers:aws (default_6_22_1):
    error: pulumi:providers:aws resource 'default_6_22_1' has a problem: Invalid credentials configured.
    Please see https://www.pulumi.com/registry/packages/aws/installation-configuration/ for more information about providing credentials.
    NEW: You can use Pulumi ESC to set up dynamic credentials with AWS OIDC to ensure the correct and valid credentials are used.
    Learn more: https://www.pulumi.com/registry/packages/aws/installation-configuration/#dynamically-generate-credentials
```
I've tried:
• `rm -rf ~/.pulumi ~/.aws` and starting over
• upgrading awscli
• upgrading pulumi
• removing `#` from my SSO URL (I had one)
My aws config:
```
[sso-session dev-sso]
sso_start_url = https://openphone-sso.awsapps.com/start
sso_region = us-west-2

[default]
region = us-west-2

[profile dev-sso]
sso_session = dev-sso
sso_account_id = ...
sso_role_name = AdministratorAccess
region = us-west-2
```
aws commands work fine:
```
sean@seans-MacBook-Pro eks % aws sts get-caller-identity
{
    "UserId": ...
    "Account": ...
```
w
Do the aws namespace settings in the `Pulumi.stack.yaml` file for the good stack look the same as the stack yaml file for the problematic stack?
b
Ah:
1. In the problem stack, I have `environment` defined.
2. The good stack was missing that. Adding `environment` breaks it.

Maybe this is a Pulumi ESC issue? What I still can't figure out is that I was working happily in my branch, and then, without any changes to the stack configuration, this ESC+OIDC error started.
w
In the config file that had `environment` set, was it pointing at an environment that uses the aws-login provider (as described here: https://www.pulumi.com/docs/esc/providers/aws-login/)? Or was it just a blank environment section?
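For reference, a minimal sketch of what an ESC environment using the aws-login provider's OIDC flow typically looks like; the role ARN and session name below are placeholders, not values from this thread:

```yaml
# Hypothetical ESC environment definition; roleArn and sessionName are placeholders.
values:
  aws:
    login:
      fn::open::aws-login:
        oidc:
          roleArn: arn:aws:iam::123456789012:role/pulumi-esc-oidc
          sessionName: pulumi-environments
  environmentVariables:
    AWS_ACCESS_KEY_ID: ${aws.login.accessKeyId}
    AWS_SECRET_ACCESS_KEY: ${aws.login.secretAccessKey}
    AWS_SESSION_TOKEN: ${aws.login.sessionToken}
```

An environment like this exports `AWS_*` values to whatever opens it, which becomes relevant later in the thread.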
b
The "good" stack's config was blank. Before:
```
config:
  aws:region: us-west-2
```
when I updated and added the environment to match the other "bad" stack, I started getting the same OIDC error:
```
config:
  aws:region: us-west-2
environment:
  - nextgen
```
That environment is defined in the Pulumi Console.
w
What is defined in `nextgen`? Is it an aws-login environment that might be overwriting your local `aws sso login`?
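One way to check what `nextgen` resolves to, assuming the ESC CLI is installed and `my-org` stands in for the actual organization name:

```sh
# Print the resolved environment, including any AWS_* variables it would inject.
esc env open my-org/nextgen

# Equivalent via the Pulumi CLI, if your version includes the env subcommand.
pulumi env open my-org/nextgen
```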
b
I think that might be true. I see different values for `AWS_ACCESS_KEY_ID` and the like, but I assumed those are the values Pulumi Cloud is using for deployments? My teammate doesn't have this problem. I'm wondering if there's a difference between my aws config and his, especially with the SSO session name.
w
The `environment` settings are not specific to Pulumi Deployments. If there is an `environment` section in the stack config, it is used by `pulumi up` regardless of who or what is running `pulumi up` (i.e. you, someone else, a CI/CD pipeline, or Deployments).
b
Is this a clue? Here is my aws config vs. my teammate's, which is working. Mine:
```
[profile sean-dev]
sso_session = admin-dev
sso_account_id = ...
sso_role_name = AdministratorAccess
region = us-west-2

[sso-session admin-dev]
sso_start_url = https://.../start
sso_region = us-west-2
sso_registration_scopes = sso:account:access
```
Theirs (working):
```
[profile admin-dev]
sso_start_url = https://.../start
sso_region = us-west-2
sso_account_id = ...
sso_role_name = AdministratorAccess
region = us-west-2
output = json
```
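A structural difference worth noting: the first config uses the newer `sso-session` block, while the teammate's puts `sso_start_url` directly in the profile (the legacy SSO format). Assuming the profile names shown above, one way to make sure the AWS CLI and Pulumi resolve the same profile is:

```sh
# Refresh the SSO token for the profile and point everything at it explicitly.
aws sso login --profile sean-dev
export AWS_PROFILE=sean-dev

# Sanity-check that the CLI and the Pulumi AWS provider see the same identity.
aws sts get-caller-identity
pulumi preview
```

Setting `aws:profile` in the stack's `config:` block is another way to pin the provider to a specific profile.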
w
Are you saying your teammate has the `environment` configured in the stack config and it works for their stack(s), which I'm assuming is a different stack than yours?
b
We're working in the same shared git branch/SHA, same stack.
BTW, happy to screen share if that's easier; no pressure either way.
Mitch helped me untangle this. The TL;DR is I had some `AWS_` shell values conflicting with values in our Pulumi ESC environment. Thank you Mitch!
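For anyone hitting the same thing, a minimal way to spot and clear stray credentials in the shell before running Pulumi; the exact variables listed are an assumption about what was set, not something confirmed in the thread:

```sh
# List any AWS-related variables currently exported in this shell.
env | grep '^AWS_'

# Unset static credentials so the SSO profile / ESC environment wins.
unset AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN

# Re-run Pulumi with only one source of credentials in play.
pulumi preview
```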