# aws
w
This morning I am unable to update. I seem to be encountering https://github.com/pulumi/pulumi-aws/issues/814. I don't get why it would be using an expired token: I am manually calling `aws sts get-session-token` (because I use MFA) and setting `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, and `AWS_SESSION_TOKEN` myself. They are definitely not expired. Any idea why this would be happening?
FWIW, `pulumi refresh` works fine.
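For reference, those three environment variables map directly onto the `Credentials` object that `aws sts get-session-token` returns. A minimal sketch of that mapping (the values below are placeholders, not real credentials):

```python
import json

# Sample response in the JSON shape `aws sts get-session-token` returns
# (placeholder values, not real credentials).
sample = json.dumps({
    "Credentials": {
        "AccessKeyId": "ASIAEXAMPLE",
        "SecretAccessKey": "secret",
        "SessionToken": "token",
        "Expiration": "2030-01-01T00:00:00+00:00",
    }
})

def to_env_exports(response_json: str) -> str:
    """Turn an STS GetSessionToken response into shell `export` lines."""
    creds = json.loads(response_json)["Credentials"]
    mapping = {
        "AWS_ACCESS_KEY_ID": creds["AccessKeyId"],
        "AWS_SECRET_ACCESS_KEY": creds["SecretAccessKey"],
        "AWS_SESSION_TOKEN": creds["SessionToken"],
    }
    return "\n".join(f'export {k}="{v}"' for k, v in mapping.items())

print(to_env_exports(sample))
```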
l
Is your provider set up to use a particular profile? If it is, it won't be using the env vars...
Those vars are used for the AWS backend and secrets provider always, but they're only used for the provider if the provider doesn't have other creds configured.
w
It definitely does use them, at least to assume the role.
And yes I do have a profile.
l
Oh, I thought if you use `role_arn` to configure the role you assume, then you have to have a `source_profile`?
If you have a `source_profile` and it's not default, then those env vars won't be used.
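For context, the `source_profile` arrangement being described usually looks like this in `~/.aws/config` (the profile and role names here are made up):

```ini
# ~/.aws/config
[profile assume-example]
role_arn = arn:aws:iam::123456789012:role/ExampleRole
source_profile = default
```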
w
The only way I got it to work in the past was with a custom Provider that uses `assume_role`.
Anyway, I see in the Terraform logs that `sts/GetCallerIdentity` returns 200, so I don't know why Pulumi says it is timing out.
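A custom provider using `assume_role` like the one mentioned might look roughly like this in Pulumi's Python SDK (the role ARN, region, and bucket name are placeholders; this is a sketch, not a drop-in config):

```python
import pulumi
import pulumi_aws as aws

# Custom provider that assumes a role; the ARN below is hypothetical.
assumed = aws.Provider(
    "assumed",
    region="us-east-1",
    assume_role=aws.ProviderAssumeRoleArgs(
        role_arn="arn:aws:iam::123456789012:role/ExampleDeployRole",
        session_name="pulumi-session",
    ),
)

# Resources opt in to the custom provider explicitly via ResourceOptions.
bucket = aws.s3.Bucket(
    "example-bucket",
    opts=pulumi.ResourceOptions(provider=assumed),
)
```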
l
Ah. I've been using `profile`, not `assumeRole`. I don't have experience that can help, sorry.
w
I think `GetCallerIdentity` is a red herring; that seems to be fine. It's not clear exactly what is going wrong. I ran it again and the logs end with `debug: waiting for quiescence; 19 RPCs outstanding` and then it eventually times out.
Ugh ok it is something completely different. The logs are just really unhelpful again.
b
What's in the output of `env`? Any other AWS env var in there?
@worried-queen-62794 can you also clear out `~/.aws/sso/cache/`?
w
It's ok it was something totally unrelated.
b
mind me asking what it was ?
w
It was a custom provider that was hanging. The logs just made it look like the last thing it was doing was calling `GetCallerIdentity`, which it was, but that is not what was hanging.
It was a bit strange though. I'm sure the first few times it didn't say that resource was the one that it failed on. Hard to know now.