# automation-api
p
Anyone seen this before? Using the Automation API in Node:
    +  kubernetes:core/v1:Namespace aaa-7f creating error: configured Kubernetes cluster is unreachable: unable to load schema information from the API server: Get "...": getting credentials: exec: fork/exec /usr/local/bin/aws: no such file or directory
Pulumi is running in a Docker container. At first I thought it was because I was using Alpine, but a Debian image does the same thing. Manually running the AWS CLI works fine:
    root@alpha-launchpad-5d89b45bdc-nz5pp:/usr/src/app# aws --version
    aws-cli/2.1.36 Python/3.8.8 Linux/5.4.89+ exe/x86_64.debian.9 prompt/off
    root@alpha-launchpad-5d89b45bdc-nz5pp:/usr/src/app# /usr/local/bin/aws --version
    aws-cli/2.1.36 Python/3.8.8 Linux/5.4.89+ exe/x86_64.debian.9 prompt/off
I’ve seen similar things happen before when shelling out from Node for other apps, if I didn’t force the shelled-out command to run under bash.
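For context, a minimal sketch of the kind of Automation API program that can hit an error like this, assuming @pulumi/pulumi and @pulumi/kubernetes are installed; the stack, project, and namespace names are illustrative:

    import * as k8s from "@pulumi/kubernetes";
    import { LocalWorkspace } from "@pulumi/pulumi/automation";

    async function main() {
        // Inline program: on `up`, the Kubernetes provider loads the kubeconfig.
        // EKS-style kubeconfigs fetch a token through an exec credential plugin
        // that forks the aws CLI -- which is where a fork/exec error surfaces.
        const stack = await LocalWorkspace.createOrSelectStack({
            stackName: "dev",
            projectName: "example",
            program: async () => {
                new k8s.core.v1.Namespace("aaa");
            },
        });
        await stack.up({ onOutput: console.log });
    }

    main().catch(err => { console.error(err); process.exit(1); });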
b
Is this when it is running in AWS, or when it is running locally?
You said it is in a container - I'm wondering if it's related to the options you need to set on the AWS provider to tell it where to look for AWS creds.
p
The error happens in a container, running on GKE
It cannot “launch” the aws binary inside the container
but manually, inside the container, I can
Running locally this all works fine
b
I'm thinking it is related to this then: https://github.com/pulumi/pulumi-aws/issues/1359
You need some combination of SkipMetadataApiCheck and SkipGetEc2Platforms set to false. The defaults were recently changed to true because performing those checks on every run was a performance hit.
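If it were that, a sketch of flipping those flags on an explicit provider in Node could look like the following; the option names mirror the pulumi-aws Node SDK settings mentioned above, but treat the specifics as assumptions:

    import * as aws from "@pulumi/aws";

    // Re-enable the credential-discovery checks described above
    // (assumed option names from the pulumi-aws Node SDK).
    const awsProvider = new aws.Provider("aws", {
        region: "us-east-1",
        skipMetadataApiCheck: false,
        skipGetEc2Platforms: false,
    });

Through the Automation API, the equivalent would be something like `await stack.setConfig("aws:skipMetadataApiCheck", { value: "false" })` before the up.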
p
Fairly sure it is not related to credentials; this happens before it needs credentials. It cannot launch the AWS binary. Nothing in that issue is close to the error I’m seeing.
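For what it's worth, the “exec” in that error is the kubeconfig's exec credential plugin: the Kubernetes client forks the command listed under users[].user.exec (for EKS, typically the aws CLI). A diagnostic sketch that prints that command from the same Node process, assuming js-yaml is installed:

    import * as fs from "fs";
    import * as os from "os";
    import * as path from "path";
    import * as yaml from "js-yaml"; // assumed dependency

    // Print each kubeconfig user's exec credential plugin and whether the
    // binary exists from this process's point of view. Note: the existence
    // check is only meaningful for absolute paths; bare command names are
    // resolved via PATH at fork time.
    const kubeconfig = process.env.KUBECONFIG ?? path.join(os.homedir(), ".kube", "config");
    const kc = yaml.load(fs.readFileSync(kubeconfig, "utf8")) as any;
    for (const u of kc.users ?? []) {
        const exec = u.user?.exec;
        if (exec) {
            console.log(`${u.name}: ${exec.command} ${(exec.args ?? []).join(" ")}`);
            console.log(`  command exists: ${fs.existsSync(exec.command)}`);
        }
    }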
b
I'm seeing “getting credentials:” in the original error you mentioned, as well as “unable to load schema information from the API server”, so maybe I'm not seeing the same thing you are.
p
Check the end of that line:
    exec: fork/exec /usr/local/bin/aws: no such file or directory
But /usr/local/bin/aws is there, and I can run the aws binary when shelled into the container.
b
Yes, I roger that. I was thinking that maybe it was looking locally for creds because the API check for creds and the inferred EC2 check for creds were disabled by default. Anyway, sorry if I led you astray. Good luck!
p
OK, /usr/local/bin/aws is a symlink; I modified PATH so it picks the real binary first:
    # which aws
    /usr/local/aws-cli/v2/current/bin/aws
But from the Automation API I still get:
    fork/exec /usr/local/bin/aws: no such file or directory
OH MY GOD… Since yesterday I’ve been deploying the changed containers to one env and testing in another env. 🤦🏻‍♂️
❤️ 1
💙 1
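Postscript: a quick existence check from inside the very process that runs the Automation API rules out exactly this kind of environment mismatch. A sketch:

    import * as fs from "fs";

    // If this prints false in the process emitting the error, the container
    // you are shelling into is not the one actually running the program.
    const bin = "/usr/local/bin/aws";
    console.log(`${bin} exists: ${fs.existsSync(bin)}`);
    if (fs.existsSync(bin)) {
        console.log(`resolves to: ${fs.realpathSync(bin)}`);
    }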