# general
s
Is it possible to somehow create a resource that will be shared across all stacks? I want to create an EKS cluster that will be used by the dev, staging, and prod stacks instead of creating a cluster per stack. I can create a separate Pulumi folder for the cluster, but I'm wondering if there's any other option I missed.
p
I don’t know how others deal with that, but in such cases I usually create a separate project (with one stack) for it.
You can use stack references in case you need to get some outputs from the related project.
s
Yeah, that's what I used, but I was wondering if there's any better option 🙈
p
To sum up, your structure might look like this:
```
(project) eks-cluster
  - (stack) main

(project) other-aws-things
  - (stack) prod
  - (stack) staging
  - (stack) dev
```
s
Thanks Jakub! 🙌
p
another solution would be creating a separate stack in the same project but I think it’d be more hacky that way
```
(project) all-aws-things
  - (stack) eks-cluster
  - (stack) prod
  - (stack) staging
  - (stack) dev
```
and create config values in such a way that the `eks-cluster` stack creates the EKS cluster and nothing else, and the other stacks don't create the cluster
s
I set it up exactly as you posted: I have a main stack for the cluster, and prod, staging, and dev for other resources
What do you mean by "create config values in such a way that `eks-cluster` creates the EKS cluster and nothing else and the others don't create the cluster"?
I will need that for some resources, like the horizontal pod autoscaler, but I thought of using an `if` statement for it
p
I do have one project called `gcp-project-bootstrap` that does something like that. One stack defines the shared GCR project (GCP's equivalent of ECR) and the rest rely on it.
s
for example:
```
if (stack === 'prod') {
  const autoscaler = ...
}
```
p
you can do something like that (the stack name is available programmatically)
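For illustration, a minimal TypeScript sketch of that pattern (the stack name `prod` and the placeholder comment are just examples, not from the original conversation):
```
import * as pulumi from "@pulumi/pulumi";

// pulumi.getStack() returns the name of the currently selected stack.
const stack = pulumi.getStack();

// Gate stack-specific resources on the stack name.
if (stack === "prod") {
    // ... create prod-only resources (e.g. the autoscaler) here ...
}
```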
s
or did you find some better way for this case?
p
or you can have some more explicit conditions in config values
note that the examples below rely heavily on structured configuration (https://www.pulumi.com/docs/intro/concepts/config/#structured-configuration)
Pulumi.gcr.yaml:
```
config:
  gcp-project-bootstrap:config:
    create_vpc: false
    project_folder: <REDACTED>
    project_id: <REDACTED>
    project_name: <REDACTED>
    service_accounts:
    - account_id: gh-actions
      create_key: true
      display_name: Service account for Github Actions to push images to shared GCR
      project_roles:
      - roles/storage.admin
```
s
ohh I see, yeah that is also an option 👍
p
Pulumi.dev.yaml:
```
config:
  gcp-project-bootstrap:config:
    default_service_account:
      gcr_project_id: <PROJECT_ID_FROM_ABOVE>
    project_id: <REDACTED>
    project_name: <REDACTED>
    activate_apis:
      - cloudresourcemanager.googleapis.com
      - container.googleapis.com
      - iam.googleapis.com
      - servicenetworking.googleapis.com
      - redis.googleapis.com
```
so you have a couple of “decision points” here:
• `create_vpc`: if it's false, it doesn't create a VPC at all (it's not needed for a shared project that's only going to contain GCR)
• if `default_service_account.gcr_project_id` is present, it's going to grant access to the shared project mentioned there
you could have something like:
```
eks_cluster:
  create: boolean
  use_existing_one: string
```
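As an illustration, reading that hypothetical `eks_cluster` shape with structured configuration could look like this in TypeScript (the interface mirrors the sketch above and is an assumption, not code from the conversation):
```
import * as pulumi from "@pulumi/pulumi";

// Hypothetical shape mirroring the config sketch above.
interface EksClusterConfig {
    create: boolean;
    use_existing_one?: string;
}

const config = new pulumi.Config();
// requireObject deserializes a structured config value into a typed object.
const eksCluster = config.requireObject<EksClusterConfig>("eks_cluster");

if (eksCluster.create) {
    // ... create the EKS cluster in this stack ...
} else if (eksCluster.use_existing_one) {
    // ... reference the existing cluster instead ...
}
```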
still, in your case I’d stick with the first version (separate projects) unless it’s gonna complicate things for you 🙂
s
hmm, that's true, but then the stacks won't be independent of each other 🤔 I think the better option is to go with a separate project for the EKS cluster and another project for the other resources, like you mentioned before
because otherwise the staging stack will depend on the dev stack if, for example, the cluster is created in the dev stack
p
if you’d like to use the same project, I wouldn't tie the EKS creation to some randomly chosen stack - I'd still create a separate, “dummy” stack for that
but once again, in this case I guess having 2 projects is the best
so far I see all the benefits of it without any downsides
(considering you are aware that pulumi has support for stack references 😄 without that it would be tricky 😄)
s
Yeah, I am aware of that 😄 Thank you for your help and nice conversation 🙂
p
You’re welcome. Have a nice day! 🙂
s
@prehistoric-activity-61023 I have one more question 😅 Can you please share a code snippet where you imported a cluster from another project? I created an EKS cluster in project A and exported the provider:
```
module.exports = { provider: cluster.provider };
```
but when I try to import that provider and use it in project B, I get an `unknown provider` error:
```
const stack = pulumi.getStack();
const mainStack = new pulumi.StackReference('ikovac/st-stack-independent/main');
const provider = mainStack.getOutput('provider');

const ns = new k8s.core.v1.Namespace('namespace', {
  metadata: { name: `st-${stack}` }
}, { provider });
const namespace = ns.metadata.name;
```
p
I’m pretty sure providers are bound to the project.
I export the kubeconfig and recreate the provider in other projects.
Plus, to be honest, I'm not sure how Pulumi deals with exporting whole objects 🤔 So far I've been exporting the `id` property and using `<resource>.get(…)` methods to get the full object in the project where the stack reference is used.
s
How did you recreate the provider in the other project? Like this?
```
new Provider(name: string, args?: ProviderArgs, opts?: CustomResourceOptions);
```
p
the k8s provider expects `kubeconfig` as a parameter
you can export the `kubeconfig` from the “parent” project as an output, import it using a StackReference, and create the provider again
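A minimal sketch of that flow across two programs (the project, stack, and output names are placeholders, not from the original setup):
```
// --- in project A (the program that creates the cluster) ---
import * as eks from "@pulumi/eks";

const cluster = new eks.Cluster("my-cluster");
// Serialize the kubeconfig object so consumers receive a plain string.
export const kubeconfig = cluster.kubeconfig.apply(JSON.stringify);

// --- in project B (a separate program that consumes the cluster) ---
import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

const mainStack = new pulumi.StackReference("org/project-a/main");
const provider = new k8s.Provider("provider", {
    kubeconfig: mainStack.getOutput("kubeconfig"),
});
```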
how did you create the `cluster.provider` mentioned above?
s
```
const cluster = new eks.Cluster('my-cluster', {
  vpcId: vpc.id,
  subnetIds: vpc.publicSubnetIds,
  desiredCapacity: 1
});
```
p
(it’s possible there are some differences due to the fact I’m used to GKE and you’re talking about EKS)
I see, so the k8s provider is already created for you?
s
yeah, I will try to recreate the cluster with the kubeconfig option you mentioned
p
I think `cluster.provider` should have a property with the kubeconfig
s
yes, you are right
it seems that I can't recreate the cluster using the kubeconfig param, as it's not accepted as an argument
it's a read-only prop
```
const provider = new k8s.Provider('provider', {
  kubeconfig: clusterProvider.kubeconfig
});
```
I needed `k8s.Provider` 😅 I think this will do the job
p
yep, that’s what I'm doing right now (assuming that `clusterProvider.kubeconfig` is fetched from the other stack)
s
yes, thanks so much once more! 🙌
p
BTW, in case I need to access some resources created by another stack (and a simple name/email is not good enough), I export the `id` property and use the get method. You might need it some day as well, so I'm sharing:
```
current_stack = pulumi.get_stack()
project_bootstrap_stack = pulumi.StackReference(
    f"<REDACTED>/gcp-project-bootstrap/{current_stack}"
)

vpc_network = gcp.compute.Network.get(
    "vpc-network", id=project_bootstrap_stack.require_output("vpc_network_id")
)
```
something like that (the example is in Python, but it should look pretty much the same in JS)
thanks to that, I can have a fully functional `gcp.compute.Network` object instead of just a network name, for example
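For reference, the JS/TS equivalent would look roughly like this (same names and redactions carried over from the Python version above):
```
import * as pulumi from "@pulumi/pulumi";
import * as gcp from "@pulumi/gcp";

const currentStack = pulumi.getStack();
const projectBootstrapStack = new pulumi.StackReference(
    `<REDACTED>/gcp-project-bootstrap/${currentStack}`
);

// The static .get() looks up an existing resource by ID instead of creating it.
const vpcNetwork = gcp.compute.Network.get(
    "vpc-network",
    projectBootstrapStack.requireOutput("vpc_network_id")
);
```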
s
I can't find the get method on AWS resources; for example, `eks.Cluster.get` doesn't exist
p
I guess you’re using https://www.pulumi.com/registry/packages/eks/api-docs/cluster/. It looks like a higher-level wrapper for creating EKS clusters.
hah, I think I even managed to find a GH issue for that: https://github.com/pulumi/pulumi-eks/issues/11
hope this will help you to find a solution
I guess the `get` method is available for all resources created by the “low-level” libraries. This one wraps the cluster creation and doesn't expose any get method (yet). If you already managed to share a kubeconfig between the stacks, you can simply ignore this fact (as you can already recreate a provider based on that).
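For completeness, a sketch of what the static `get` looks like on the low-level `@pulumi/aws` resource (the `clusterName` output is an assumed example; for `aws.eks.Cluster` the resource ID is the cluster name). As noted above, sharing the kubeconfig already covers this use case:
```
import * as pulumi from "@pulumi/pulumi";
import * as aws from "@pulumi/aws";

const mainStack = new pulumi.StackReference("ikovac/st-stack-independent/main");

// Unlike the eks.Cluster wrapper, the low-level aws.eks.Cluster exposes a
// static get(); for this resource, the ID is the EKS cluster name.
const cluster = aws.eks.Cluster.get(
    "my-cluster",
    mainStack.requireOutput("clusterName") // assumed output exported by the cluster project
);
```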
s
“I guess the `get` method is available for all resources created by the “low-level” libraries”
You are totally right! I used wrapper libraries such as eks that don't have a get method, thanks! 🚀
f
I do the same as @prehistoric-activity-61023 suggested. One project contains the EKS cluster and its configuration. Then all the k8s instances are in a separate project, one stack for each. Use a Pulumi stack reference to get the kubeconfig and work against the created k8s cluster. The good thing about this is that the stacks and the EKS cluster can live in the same git project under different folders, or in different repos altogether.