# general
f
Guys, I could use some help, I have kind of a chicken-and-egg problem here. Right now I’m creating and managing the cluster inside the pulumi stack, then somewhere later in the deployment I’m creating roles and bindings for some services, but that can’t work unless I’ve created a RoleBinding beforehand that grants my current google identity the `cluster-admin` role for this cluster (as stated here: https://cloud.google.com/kubernetes-engine/docs/how-to/role-based-access-control#setting_up_role-based_access_control). See where I’m going with that? I need the cluster to be created to be able to make this binding, before being able to create any other Role or ClusterRole, but since the cluster creation and the roles are part of the automation, the only way everything works as expected (in one shot) would be to create this RoleBinding right after the cluster is ready, inside the pulumi stack deployment. Doing so would require getting my current active google identity to make the right binding on the right account. How/can I do that within pulumi with the `@pulumi/kubernetes` package (I’m using the TypeScript flavor)?
m
I’m not sure about GKE, but in EKS you can simply create a provider based on the kubeconfig and add the provider to all the kubernetes resources.
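The same pattern works on GKE. Here is a sketch (illustrative names, not from this thread) of assembling such a kubeconfig; in a real stack `name`, `endpoint`, and `caCert` would be pulumi Outputs of a `gcp.container.Cluster` combined with `.apply()`:

```typescript
// Sketch: render a kubeconfig that authenticates as whatever identity is
// currently active in gcloud, from a cluster's outputs. In a real stack these
// three values would come from a gcp.container.Cluster (name, endpoint,
// masterAuth.clusterCaCertificate).
function makeKubeconfig(name: string, endpoint: string, caCert: string): string {
    const context = `gke_${name}`;
    return `apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: ${caCert}
    server: https://${endpoint}
  name: ${context}
contexts:
- context:
    cluster: ${context}
    user: ${context}
  name: ${context}
current-context: ${context}
kind: Config
preferences: {}
users:
- name: ${context}
  user:
    auth-provider:
      config:
        cmd-args: config config-helper --format=json
        cmd-path: gcloud
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp
`;
}
```

That string is what you would hand to `new k8s.Provider("gke-k8s", { kubeconfig })`, and then pass that provider to every kubernetes resource.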
Here is an example using GKE: https://github.com/pulumi/examples/blob/master/gcp-ts-gke/cluster.ts
To create a cluster role, you can do something like this:
Copy code
const clusterRole = new k8s.rbac.v1.ClusterRole(
  'cluster-role-name',
  {
    // options here...
  },
  {
    provider: k8sProvider,
  },
);
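The `// options here...` part would be the role’s metadata and rules. A hypothetical read-only example of the `rules` shape (plain data, the same value the resource args accept):

```typescript
// Hypothetical rules for a ClusterRole that can only read pods.
// This is the value you would pass as the `rules` field of the ClusterRole args.
const podReaderRules = [
    {
        apiGroups: [""],              // "" means the core API group
        resources: ["pods"],
        verbs: ["get", "list", "watch"],
    },
];
```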
f
I’m doing so, I have the provider, but the `ClusterRole` is made by a helm Chart and it seems that passing the provider to a `k8s.helm.v2.Chart` does not change the problem
m
Helm charts have a different provider configuration, you need to pass the kubernetes provider into the `kubernetes` key.
Copy code
const chart = new k8s.helm.v2.Chart(
  'chart-name',
  {
    // options...
  },
  {
    providers: { kubernetes: k8sProvider },
  },
);
Is that how you’re doing it?
f
yep, that’s how I pass the provider to all my resources
c
I think your question is not about how to pass the provider, though, right? It’s about how you have some account that you need to bind a role to, but it can’t, because the account doesn’t have the role that gives it permission to bind roles, right?
@faint-motherboard-95438?
f
yes that’s the point, the provider is ok, but passing it is useless if the account associated with it does not have the role to create other roles in the cluster (as Google explains it in the docs)
c
Uh, so, if I understand correctly, this is “the bootstrap problem”. Typically your cluster admin (whoever is “owner” in the GCP IAM pane) will provision a bunch of service accounts and IAM bindings, which allows certain users and service accounts to add more cluster role bindings.
So that might look something like this:
Copy code
// The ServiceAccount that will manage Kubernetes resources as part of CI.
export const gcpServiceAccount = new gcp.serviceAccount.Account(`${config.appName}`, {
    accountId: config.appName,
    displayName: "Test CI"
});

// The key that we will place in the Travis CI using `travis encrypt`.
export const gcpServiceAccountKey = new gcp.serviceAccount.Key(config.appName, {
    serviceAccountId: gcpServiceAccount.name
});

export const testCiRole = new gcp.projects.IAMCustomRole(config.appName, {
    roleId: "testci",
    title: "Test CI role",
    project: config.project,
    permissions: [...]
});

// Grants the ServiceAccount the ability to use the gcloud container API.
export const gcpCiRole = new gcp.projects.IAMBinding(config.appName, {
    // role: "projects/pulumi-development/roles/KubernetesTestCIRole",
    role: testCiRole.id,
    members: [gcpServiceAccount.email.apply(email => `serviceAccount:${email}`)]
});

// Grants the ServiceAccount `roles/iam.serviceAccountUser`, so it can act as a service account.
const saUser = new gcp.projects.IAMBinding(`${config.appName}-sa`, {
    role: "roles/iam.serviceAccountUser",
    members: [gcpServiceAccount.email.apply(email => `serviceAccount:${email}`)]
});

// Grant the ServiceAccount admin permissions inside the Kubernetes cluster.
export const k8sRoleBinding = new k8s.rbac.v1.ClusterRoleBinding(
    "cluster-admin-binding",
    {
        metadata: { name: "cluster-admin-binding" },
        roleRef: {
            apiGroup: "rbac.authorization.k8s.io",
            kind: "ClusterRole",
            name: "cluster-admin"
        },
        subjects: [
            { apiGroup: "rbac.authorization.k8s.io", kind: "User", name: gcpServiceAccount.email }
        ]
    },
    { provider: k8sProvider }
);

export const clientSecret = gcpServiceAccountKey.privateKey.apply(key =>
    JSON.parse(Buffer.from(key, "base64").toString("ascii"))
);
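(The `clientSecret` export at the end is just undoing the encoding of the key: `gcp.serviceAccount.Key` returns the JSON key file base64-encoded. As a standalone function, purely for illustration:)

```typescript
// Decode a service account key as exported above: base64 -> parsed JSON key file.
function decodeServiceAccountKey(privateKey: string): any {
    return JSON.parse(Buffer.from(privateKey, "base64").toString("ascii"));
}
```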
f
well, that definitely looks like the kind of thing I needed without knowing where to start, thanks a lot! I’ll give it a shot and let you know
c
@faint-motherboard-95438 I’m going to be adding this to an identity/CI/CD tutorial soon.
f
@creamy-potato-29402 that would be terrific and definitely a must have, thanks
c
So, just remember, to do this bootstrapping, you’ll need to run as a user with permissions to actually do all that stuff…
f
@creamy-potato-29402 thanks again for your snippet, I was able to resolve all of my roles and permissions issues thanks to it, and my code definitely looks way more production-ready now.
@creamy-potato-29402 actually I have one last problem that was hidden, which I just discovered after having cleaned everything up to be sure I was ok. I need to provide a `kubeconfig` to the `k8sProvider`, and I’m kind of lost on what I should put in it to authenticate the serviceAccount I use, to which I have granted the permissions I needed. By default it uses the `kubectl` active config, and if I put
Copy code
auth-provider:
  config:
    cmd-args: config config-helper --format=json
    cmd-path: gcloud
    expiry-key: '{.credential.token_expiry}'
    token-key: '{.credential.access_token}'
  name: gcp
it uses the `gcloud` config, which is not what I want either. I can’t find what I should put here and from where.. could you help me on this one too?
c
@faint-motherboard-95438 so this is confusing, but basically that kubeconfig file will say “take whatever user I am according to `gcloud` and use that as my k8s identity”
If you take something like this:
Copy code
import * as k8s from "@pulumi/kubernetes";
import * as gcp from "@pulumi/gcp";

import { k8sProvider } from "./cluster";
import * as config from "./config";

// The ServiceAccount that will manage Kubernetes resources as part of CI.
export const gcpServiceAccount = new gcp.serviceAccount.Account(`${config.appName}`, {
    accountId: config.appName,
    displayName: "Test CI"
});

// The key that we will place in the Travis CI using `travis encrypt`.
export const gcpServiceAccountKey = new gcp.serviceAccount.Key(config.appName, {
    serviceAccountId: gcpServiceAccount.name
});

// Grants the ServiceAccount the ability to use the gcloud container API.
export const gcpCiRole = new gcp.projects.IAMBinding(config.appName, {
    role: "roles/container.developer",
    members: [gcpServiceAccount.email.apply(email => `serviceAccount:${email}`)]
});

// Grant the ServiceAccount admin permissions inside the Kubernetes cluster.
export const k8sRoleBinding = new k8s.rbac.v1.ClusterRoleBinding(
    "cluster-admin-binding",
    {
        metadata: { name: "cluster-admin-binding" },
        roleRef: {
            apiGroup: "rbac.authorization.k8s.io",
            kind: "ClusterRole",
            name: "cluster-admin"
        },
        subjects: [
            { apiGroup: "rbac.authorization.k8s.io", kind: "User", name: gcpServiceAccount.email }
        ]
    },
    { provider: k8sProvider }
);

export const clientSecret = gcpServiceAccountKey.privateKey.apply(key =>
    JSON.parse(Buffer.from(key, "base64").toString("ascii"))
);
then you can do something like:
Copy code
pulumi stack output clientSecret > client-secret.json
gcloud auth activate-service-account --key-file client-secret.json
then you are auth’d as the service account in question.
f
@creamy-potato-29402 hum, so that means I need to run `pulumi up`, let it fail, get the `clientSecret`, auth `gcloud` manually with the newly created serviceAccount, then re-run `pulumi up` to finish whatever failed previously? I would expect not to have to do the manual step
I was trying to find an `auth-provider` method inside the `kubeconfig` that I could configure inside pulumi, with whatever is at my disposal, to provide the right authentication in the `k8sProvider` so everything would run in one shot.
So, what you’re telling me is that it’s kind of impossible to do so, right ?
c
@faint-motherboard-95438 no
I’m saying that you have to bootstrap the roles from the CLI of someone who is an administrator.
So imagine you’re starting with nothing. Someone is an `owner` in the GCP account. How do you register service accounts and roles for the rest of your org?
The `owner` has to write a pulumi program with all that stuff, which bootstraps those roles.
Then, everyone else can deploy apps, infrastructure, etc.
OTOH, if that `owner` is deploying everything, from the infrastructure up, then you should have no problems, because they are already the owner.
There is no escape from this generally: You have to create a GCP account, and then use that to grant access to everyone else. Then the stack outputs are consumed by other applications (e.g., CI) which allows you to deploy other things.
make sense?
f
@creamy-potato-29402 ok, yes I get it. I have to split things up by concern and not try to do everything within one single pulumi stack. Thanks for the heads up, that was really helpful
c
I mean, you could do it the other way, but that’s risky.
Especially as you get more people using it.
You usually want kube app devs to write kube app dev code, vs. having global access to your data storage facilities, etc.
Some companies go so far as to have data storage facilities in a completely different AWS/GCP account
f
yes definitely. Well, I wanted to make something smooth and simple to start with, before making things more modular and thus more complicated
c
in that case, just run as cluster admin and deploy it all in one go.
If you are `owner`, you should have permission to do everything.
f
Yes, but I wanted to use a less permissive account, and that’s when the problems started, of course
anyway, I think I can make something from all of that now, thanks for your time and help !
c
got it.
sure, we’ll have more stuff that provides guidance on this sort of thing soon.