# google-cloud
e
I'm having an issue with the `impersonateServiceAccount` option on the GCP provider. I'm trying to create a GKE cluster and build a kubeconfig to use later with the kubernetes provider, and apparently `masterAuth` has the original user creds, not the service account set in `impersonateServiceAccount`. What's the right way to deal with this in pulumi?
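For reference, a minimal sketch of the setup being described; the project, region, and service account names are placeholders, and the kubeconfig template is the common GKE exec-plugin pattern rather than anything Pulumi prescribes:

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as gcp from "@pulumi/gcp";

// Provider that impersonates a service account (names are placeholders).
const impersonated = new gcp.Provider("impersonated", {
    project: "my-project",
    impersonateServiceAccount: "deployer@my-project.iam.gserviceaccount.com",
});

const cluster = new gcp.container.Cluster("gke", {
    location: "europe-west1",
    initialNodeCount: 1,
}, { provider: impersonated });

// Kubeconfig assembled from the cluster outputs. The exec plugin below
// authenticates as whatever identity gcloud is logged in with locally,
// not as the impersonated service account -- which is where the
// permission complaints show up.
export const kubeconfig = pulumi
    .all([cluster.name, cluster.endpoint, cluster.masterAuth])
    .apply(([name, endpoint, auth]) => `apiVersion: v1
kind: Config
clusters:
- name: ${name}
  cluster:
    certificate-authority-data: ${auth.clusterCaCertificate}
    server: https://${endpoint}
contexts:
- name: ${name}
  context:
    cluster: ${name}
    user: ${name}
current-context: ${name}
users:
- name: ${name}
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: gke-gcloud-auth-plugin
      provideClusterInfo: true
`);
```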
t
Why not use the `credentials` option in the provider instead of impersonation?
e
Only the ergonomics. Instead of storing and distributing creds, I would like to run it with the gcloud logged-in account that has the right to impersonate, so that other collaborators could do the same by just checking out the code without any extra configuration (downloading/generating creds, setting env vars, etc.). Also, the stacks are located in different projects, yet the state is stored in a bucket in one project. When running from the perspective of my personal account, both stacks have access to the state; when running with creds, the respective service accounts would have to have access to the bucket (and KMS).
Impersonating seems to work fine for managing GCP resources, it just breaks at the moment when I generate the kubeconfig. Well... it doesn't break, it's just that kubernetes starts complaining about the lack of permissions on my personal account.
t
ah, i try to set things up so i don't have to run anything locally and just use environment vars in my ci/cd platform to handle everything.
e
Yes, that's the final goal, but I wouldn't want to lose the ability to run it locally, which is convenient for development. And how do you apply? On PR merge? What is the strategy on failure? A new PR?
t
i configure it so that it does a preview automatically, if there are any issues then we can address it there in the dev branch. if the preview looks fine then manually trigger the 'up'.
we use feature-branch -> dev -> main
issues are caught in dev. service accounts in gcp, etc. are all tied to the branches. the stacks are tied to the branches.
e
Ah, so you manually do the up. Is it a dedicated role in your org, so only you do that, or do the others have the same set of keys?
t
there are a few people with privs to manually trigger a job. the credentials necessary for pulumi to do everything are just saved in ci/cd environment variables.
e
Ah, it's still run in CI/CD, just triggered manually. Gotcha. So you do that before merge and adjust in the branch if needed.
t
no, it's done after merge to dev.
e
I have an idea for my situation, but it feels a bit hacky... I could generate a key in pulumi, create a provider for the service account without impersonation, and do `new gcp.container.Cluster(..., {provider: non_impersonated_provider})`. Not sure it's reasonable...
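A sketch of that workaround, under the same placeholder names as above — `gcp.serviceaccount.Key` returns the key material base64-encoded, so it gets decoded before being handed to the `credentials` option:

```typescript
import * as gcp from "@pulumi/gcp";

// Mint a key for the service account inside the Pulumi program.
const key = new gcp.serviceaccount.Key("deployer-key", {
    serviceAccountId: "deployer@my-project.iam.gserviceaccount.com", // placeholder
});

// Second provider that authenticates with the key directly, no impersonation.
const nonImpersonatedProvider = new gcp.Provider("direct", {
    project: "my-project", // placeholder
    credentials: key.privateKey.apply(pk =>
        Buffer.from(pk, "base64").toString("utf8")),
});

const cluster = new gcp.container.Cluster("gke", {
    location: "europe-west1", // placeholder
    initialNodeCount: 1,
}, { provider: nonImpersonatedProvider });
```

The trade-off is the one raised earlier in the thread: this mints a long-lived key (stored, encrypted, in the state), which is exactly the cred-distribution problem impersonation was meant to avoid.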
t
the ci/cd env vars are protected. only dev and main branches are protected. this prevents anyone from exposing the secrets.
e
So if apply fails for some reason, you create a new PR?
t
yes. if the apply fails in the dev branch, a new pr is needed to fix it
if required, we can revert the commit from the failed pr to get back to a working state so other prs aren't blocked
e
Gotcha... yeah, that's pretty much what I was planning to do when we start running it in CI/CD.
t
plan for ci/cd from the start. plan for scalability from the start.
e
I already run preview on trigger and it posts the plan into the PR. And I do plan for that for sure, but ideally I would want to keep the ability to run it from my machine conveniently.
t
i'm just the opposite and having a hard time getting pulumi to not require anything to be pre-configured locally. i just started using pulumi this week. used terraform before.
i want to automate the creation of pulumi projects from a pulumi project. lol i don't want anything running from a local system.
the idea that there are stateful files for each stack that are not stored in the backend storage blows my mind.
e
We're migrating from terraform as well, and it was one of the pain points (besides the absolutely insane syntax for conditions and enumerations :)) that we had to carry some private vars and keys around.
I'm not sure I follow, though. In my setup everything is stored in the backend storage, which is a GCS bucket, so what do you mean by stateful files?
t
`Pulumi.<stack>.yaml`
I'm using the passphrase secret provider and i noticed that pulumi keeps creating that stack yaml file and adding an `encryptionsalt` value. if you delete the file and run preview again (while still using the same passphrase) you get a new `encryptionsalt` value. this makes me think that if the pulumi config were to store any secrets in the backend, they would be saved with a different hash/salt
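For illustration, the file in question looks roughly like this (the salt value below is a made-up placeholder; a fresh one is generated whenever the file is recreated):

```yaml
# Pulumi.dev.yaml
encryptionsalt: v1:AAAAAAAAAAA=:v1:AAAAAAAAAAA=
config:
  gcp:project: my-project
```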
e
Ah, I don't use that. And it's for sure annoying that there's no way to just turn it off completely 🙂
t
from what i've read and chatted with others, it's apparently expected that you init the stacks locally and commit the stack yaml files.
e
Yes, they're definitely to be committed.
t
i find it bizarre that an automation system cannot be automated. lol
e
Well, it can to some degree. For example, instead of a passphrase I use KMS, so it stops asking for a password every time you run it.
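For reference, the secrets provider is chosen at stack creation (or switched later with `pulumi stack change-secrets-provider`); the key path here is a placeholder:

```sh
pulumi stack init dev \
  --secrets-provider="gcpkms://projects/my-project/locations/global/keyRings/pulumi/cryptoKeys/secrets"
```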
t
i just set the passphrase as an environment variable
same for the gcp creds used by pulumi and the backend storage url
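Roughly, that setup amounts to the following, with all values as placeholders:

```sh
export PULUMI_CONFIG_PASSPHRASE="correct horse battery staple"  # passphrase secret provider
export GOOGLE_CREDENTIALS="$(cat ci-sa-key.json)"               # creds used by pulumi-gcp
export PULUMI_BACKEND_URL="gs://my-pulumi-state"                # backend storage url
```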
e
Makes sense.