# general
famous-leather-72830
Hey everyone. Is it currently possible to properly rehydrate resources via `StackReference`? I am following the micro-service methodology and keeping my EKS cluster (created with `@pulumi/eks`) in a separate project from the apps that will run on it. My goal is for each project to create its own AWS Managed Node Group, but that requires passing a full `eks.Cluster` object, and from reading the creation logic, `aws.eks.Cluster.get` doesn't provide sufficient properties. Any help would be appreciated, as this is a massive blocker for me.
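A minimal sketch of the rehydration attempt being described, assuming illustrative stack and output names (not taken from the thread): the consumer project can look up the raw AWS resource, but that gives back an `aws.eks.Cluster`, not the `eks.Cluster` component that `@pulumi/eks` expects.

```typescript
import * as aws from "@pulumi/aws";
import * as pulumi from "@pulumi/pulumi";

// Reference the project that owns the cluster (stack name is illustrative).
const clusterStack = new pulumi.StackReference("my-org/eks-cluster/dev");

// The cluster project would need to export its cluster's name/ID.
const clusterName = clusterStack.getOutput("clusterName");

// This "rehydrates" only the raw AWS resource (aws.eks.Cluster)...
const rawCluster = aws.eks.Cluster.get("parent-cluster", clusterName);

// ...but the @pulumi/eks node group helpers want the eks.Cluster component,
// which carries extra data (subnets, instance roles, a provider, etc.)
// that the raw resource does not expose.
```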
damp-honey-93158
I hit the same problem and ended up passing the cluster in as a kubeconfig value instead, so that it was possible to instantiate a provider. In similar situations where I originally felt I needed the Pulumi object as an instance in code, it turned out I could use the name/resource-group or object ID of the thing instead, just a string. Are the downstream users (dev projects) really going to need access to the object? May I ask why?
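A rough sketch of this kubeconfig-based workaround, assuming illustrative stack and output names: export the kubeconfig from the cluster project, then build a Kubernetes provider from it in the consuming project.

```typescript
import * as k8s from "@pulumi/kubernetes";
import * as pulumi from "@pulumi/pulumi";

// The cluster project would export something like:
//   export const kubeconfig = cluster.kubeconfig;
const clusterStack = new pulumi.StackReference("my-org/eks-cluster/dev");
const kubeconfig = clusterStack.getOutput("kubeconfig");

// A provider built from the exported kubeconfig is usually enough when all
// the downstream project needs is to deploy resources onto the cluster.
const k8sProvider = new k8s.Provider("parent-cluster", {
    kubeconfig: kubeconfig.apply(JSON.stringify),
});

// Example consumer: a namespace owned by this service project.
const ns = new k8s.core.v1.Namespace("my-service", {}, { provider: k8sProvider });
```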
famous-leather-72830
@damp-honey-93158 According to this: https://github.com/pulumi/pulumi-eks/blob/master/nodejs/eks/nodegroup.ts#L842, an arbitrary resource identifier is not nearly enough. A kubeconfig doesn't carry enough info (for example, subnet IDs), and even if I could fake every single field, either by exporting the cluster's properties individually or by serializing the whole thing, it still tries to use the cluster from the options as a parent. Passing a "fake" cluster JSON doesn't work either. Passing a clusterProvider derived from the kubeconfig as the parent doesn't work either (it fails with `Error: 'dependsOn' was passed a value that was not a Resource.`). It's not about the downstream users, it's about parts of Pulumi being too reliant on the full resources.
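For context, the "exporting the cluster's properties individually" option mentioned above might look like the following on the cluster-project side (export names are illustrative). As noted, plain values like these can be re-imported downstream, but they cannot reconstruct the `eks.Cluster` component that the node group helper uses as a parent/`dependsOn`.

```typescript
// In the cluster project (illustrative export names):
import * as eks from "@pulumi/eks";

const cluster = new eks.Cluster("parent-cluster", { /* ... */ });

// Plain, serializable outputs that a downstream project can consume.
export const kubeconfig = cluster.kubeconfig;
export const clusterName = cluster.eksCluster.name;
export const nodeRoleArn = cluster.instanceRoles.apply(roles => roles[0].arn);
export const subnetIds = cluster.core.subnetIds;
```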
damp-honey-93158
May I ask more about the context in which you need a rehydrated resource?
famous-leather-72830
I'd like to create an AWS Managed Node Group in a project where the k8s cluster is not originally defined. I have two projects. One is dedicated to the k8s cluster: it defines all the common resources, monitoring, daemonsets, etc., and the cluster is created with `new eks.Cluster`. Then I have separate projects for the pieces of our stack (things like web servers, API servers, databases, etc.). These have their infrastructure defined in separate repositories, each using its own Pulumi project (the stack/environment is shared, e.g. dev, qa, staging). These projects will have their own deployments that run on the parent cluster, inside their own namespace. I would like to create a unique AWS Managed Node Group with its own configuration for each such namespace. Managed node groups are created via `@pulumi/eks`, and according to the file linked above and this post, https://www.pulumi.com/blog/aws-eks-managed-nodes-fargate/#automatically-managed-node-groups, the cluster needs to be passed as a property during managed node group creation.
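A sketch of the constraint in question, loosely following the pattern from the linked blog post (argument details here are illustrative): this only works in the project where the `eks.Cluster` component itself lives.

```typescript
import * as eks from "@pulumi/eks";

// Works only where the eks.Cluster component exists, i.e. the cluster project.
const cluster = new eks.Cluster("parent-cluster", { skipDefaultNodeGroup: true });

// createManagedNodeGroup expects the full component, both as an option and
// (per the linked nodegroup.ts) as the parent of the resources it creates.
const managedNodeGroup = eks.createManagedNodeGroup("service-ng", {
    cluster: cluster,
    nodeRoleArn: cluster.instanceRoles.apply(roles => roles[0].arn),
    scalingConfig: { desiredSize: 2, minSize: 1, maxSize: 3 },
}, cluster);
```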
rhythmic-whale-48997
@famous-leather-72830 Did you manage to sort this out? I'm facing a similar issue. If you have solved it, could you share the code, please?
famous-leather-72830
@rhythmic-whale-48997 Hey there! I have. Unfortunately I cannot share the code as it's proprietary, but ultimately what I ended up doing is, instead of using `cluster.createNodeGroup`, I exported the VPC info (subnets, OIDC, etc.) and cluster info from the cluster project, imported them in one of my service projects, and then replicated what `createNodeGroup` does internally using all this data. (You can go to their GitHub page and see what code they use; it's just another wrapper around the base EKS tools.)
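A rough, generic sketch of that approach, assuming illustrative stack, output, and resource names: read the exported values in the service project and create the node group with the plain `aws.eks.NodeGroup` resource instead of the `@pulumi/eks` wrapper (details such as launch templates and taints are omitted).

```typescript
import * as aws from "@pulumi/aws";
import * as pulumi from "@pulumi/pulumi";

// Outputs exported by the cluster project (illustrative names).
const clusterStack = new pulumi.StackReference("my-org/eks-cluster/dev");
const clusterName = clusterStack.getOutput("clusterName");
const nodeRoleArn = clusterStack.getOutput("nodeRoleArn");
const subnetIds = clusterStack.getOutput("subnetIds");

// The base AWS resource only needs plain values, so no eks.Cluster
// component is required in this project.
const nodeGroup = new aws.eks.NodeGroup("my-service-ng", {
    clusterName: clusterName,
    nodeRoleArn: nodeRoleArn,
    subnetIds: subnetIds,
    instanceTypes: ["t3.large"],
    scalingConfig: { desiredSize: 2, minSize: 1, maxSize: 4 },
    labels: { "app.kubernetes.io/part-of": "my-service" },
});

export const nodeGroupName = nodeGroup.nodeGroupName;
```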
rhythmic-whale-48997
Not even your own `createNodeGroup` function? 😄 You can leave the sensitive parts out. I was also considering doing this, but didn't have the time, and it looks a little complicated, but I will try that next.