# general
In our CI that runs on our `develop` branch, we execute `pulumi preview` as a sanity check before the branch is merged. Generally speaking, our gcloud credentials are provided via an env var. Is there a recent Pulumi update that could be causing:
```
error: Get <>: error executing access token command "gcloud config config-helper --format=json": err=exit status 1 output= stderr=ERROR: (gcloud.config.config-helper) You do not currently have an active account selected.
Please run:

  $ gcloud auth login
```
I have no idea why a gcloud command would be called here in our CI.
There is no stack trace, so I have no idea where this is originating.
I did a `gcloud auth revoke` locally, have no env vars set, and I can preview fine… even more confused.
@happy-yak-61289 and I were talking about something vaguely similar, and she asked if you had done `gcloud auth application-default login` (since different layers of SDKs etc. fetch different creds)…
@chilly-crayon-57653 I shouldn't be using the CLI at all; I am creating service account credentials in another stack, then supplying them system-wide with the env var. This has been working fine, but a Pulumi update within the past three weeks has perhaps changed something.
the latter command covers SDK use, rather than CLI
The env variable covers SDK use. Otherwise my setup would not have worked for the past 6 months.
I am not an expert here - but is it possible that this command is invoked as part of GKE authentication? I believe the kubeconfig files for accessing GKE end up doing something like this to get credentials.
FWIW - I am reasonably confident that the Pulumi GCP provider itself would not be invoking this.
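[Editor's note] For context on the GKE-authentication theory above: legacy GKE kubeconfig files embed a `gcp` auth-provider stanza that shells out to exactly the command in the error. A typical example looks like the sketch below (the `cmd-path` and user name will differ per machine; this is illustrative, not taken from the thread):

```yaml
# Sketch of the user entry GKE writes into ~/.kube/config (legacy auth-provider).
# When the Kubernetes client needs a token, it runs the command built from
# cmd-path + cmd-args — i.e. "gcloud config config-helper --format=json" —
# which fails if gcloud has no active account.
users:
- name: gke-cluster-user        # hypothetical name
  user:
    auth-provider:
      name: gcp
      config:
        cmd-path: /usr/lib/google-cloud-sdk/bin/gcloud   # path varies per install
        cmd-args: config config-helper --format=json
        token-key: '{.credential.access_token}'
        expiry-key: '{.credential.token_expiry}'
```

If the Kubernetes provider diffs a resource against such a cluster, this stanza is what triggers the gcloud call, independent of any credentials Pulumi itself holds.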
@white-balloon-205 this has been working fine - it is an automated process inside a CI container. No gcloud setup has ever been used. This error is definitely coming from `pulumi preview`. Is there a way to have the CLI report a stack trace to the console?
Does your stack manage a GKE cluster or deploy GKE resources? If so, could you share the configuration you are using? You can get very verbose logs with

```
pulumi preview --logtostderr -v=9 --debug 2> out.txt
```

which might help here.
the infrastructure stack that is breaking is managing the cluster. It also deploys some basic shared resources like cert-manager. I'll try this in the morning and get back to you; it only occurs in CI, so I'll SSH in and poke around.
It looks like `provider_plugin.go:514] Provider[kubernetes, 0xc00308ab40].Diff(` now thinks I have no gcloud account. Which repo has that code?
Ok, not sure what changed or why I now need to auth gcloud, but I can only assume it has to do with more intensive checking of something on the actual server. Previously I ran `gcloud auth activate-service-account` only for my `pulumi update`; it appears now that I need it for preview as well. So, not a problem - I just assume it has some better checking or validation.
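[Editor's note] The resolution above amounts to activating the service account before any Pulumi command in CI, so that the kubeconfig's `gcloud config config-helper` call finds an active account. A minimal sketch of such a CI step, assuming a shell-based CI job and a hypothetical `$GOOGLE_CREDENTIALS_FILE` variable holding the service-account key path:

```yaml
# Hypothetical CI job config (names and variables are illustrative).
# Activating the service account up front means both preview and update
# can satisfy kubeconfig-driven gcloud token requests.
preview:
  script:
    - gcloud auth activate-service-account --key-file="$GOOGLE_CREDENTIALS_FILE"
    - pulumi preview
```

`gcloud auth activate-service-account --key-file=…` is the documented way to give a headless environment an active gcloud account from a JSON key.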