# aws
g
Greetings. Here’s the snippet for creating a cluster using aws.eks.Cluster():
name = "my-cluster"
cluster = aws.eks.Cluster(
    name,
    name=name,
    role_arn=cluster_role_arn,
    vpc_config={
        "subnetIds": subnet_ids,
        "securityGroupIds": [args.security_group_id],
    },
    tags=args.tags,
    opts=opts,
)
My goal is to override the cluster name because I want it to be unique, and that’s exactly what I’m doing with
name=name
(and it works). That’s the whole point of any resource in the cloud: DevOps must ensure that resources are unique. For example, I’m using a unique EKS name without any random numbers at the end, so I can comfortably use kubectl and the AWS CLI when referring to the cluster. Problem: I can’t get the name of the cluster by calling this:
cluster.name
and
cluster.cluster.name
doesn’t work either. The reason I need it is for addons. I’m provisioning my cluster in a separate state and even using a package for that. Passing a single object is convenient; currently I’m passing two args (the cluster object and cluster_name), which is not very nice. Another question: how do I form the kubeconfig out of this object? eks.Cluster() (from pulumi_eks) has that luxury; aws.eks.Cluster doesn’t.
l
cluster.name is an output. It becomes available once the Pulumi engine is running. You can use it as an arg parameter for other resources, and Pulumi will work it out for you. You can't use it during state creation (that is, while your program is running but before the Pulumi engine deploys anything), because it's not available (as an output) yet. However, you can use the variable that you assigned to the cluster's name input. So if you need that value for "right-now" use (e.g. as the first (logical name) parameter to another resource), pass that variable around.
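A minimal sketch of that timing split, assuming pulumi_aws is installed and that cluster_role_arn and subnet_ids are defined elsewhere (the Addon resource here is illustrative):

```python
import pulumi
import pulumi_aws as aws

name = "my-cluster"  # a plain string: usable "right now"

cluster = aws.eks.Cluster(
    name,        # logical name: must be a plain value, Pulumi uses it immediately
    name=name,   # physical name: also known right now, so reuse the variable
    role_arn=cluster_role_arn,             # assumed defined elsewhere
    vpc_config={"subnetIds": subnet_ids},  # assumed defined elsewhere
)

# cluster.name is an Output: fine as an *input* to another resource,
# because the engine resolves it during deployment.
addon = aws.eks.Addon(
    "vpc-cni",                  # logical name: plain string again
    cluster_name=cluster.name,  # Output passed as an input: OK
    addon_name="vpc-cni",
)

# But you cannot read the value while your program is running:
# print(cluster.name)  # prints an Output object, not "my-cluster"
```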
g
what about cluster.cluster_id ?
l
All those things that come back from AWS (once the Pulumi engine has deployed them) are available only as outputs. You will get used to the difference soon; outputs can be used in some places very easily, and in other places they're very hard to use. You need to structure your code or the values you use appropriately, so that you avoid a right-now dependency on not-yet-available values.
g
Hmm. I see in the outputs cluster_id , is that would be it’s name..?
l
A very important distinction is between the arg values that you pass into resource constructors, and the name parameter that is the first value you pass into a resource constructor. That first parameter is used right-now, as the bookmark into the state, so that Pulumi can find it later, so it cannot be an output. All the other arg values aren't used by Pulumi until the engine is running, after your code finishes. So they can all be outputs and it'll work fine.
g
Also, endpoint shows as unavailable to me, though it’s in the outputs
so I’m just registering the output at the end, and cluster.name is absent
so I believe it would wait for the resource to finish provisioning
l
Your code finishes running before provisioning starts. Pulumi is declarative, not imperative.
g
hm.. so how do I get the name?
l
Since you're passing the special name into the name arg, you must already have it. Use that value, not the value from inside the cluster object.
g
currently the cluster is created in infra/eks, and in infra/eks_utils I’m provisioning it. I can’t get the name from infra/eks; I’m passing the object to the outputs.
yeah, that sounds more like a workaround
how can I get the name of a created resource then, cluster.cluster_id?
l
You can get the name of a resource as an output any time. In this case, it is cluster.name.
g
it doesn’t work
l
It does work; you just can't access what's inside cluster.name until the engine is running.
You're just trying to use it at the wrong time.
g
engine is not running
engine is already deployed
l
I think you and I are using the same words to mean different things.
g
after aws.eks.Cluster() I register the output and that’s it, the stack ends
is it better to show code in here or in a pastie?
l
When I say "engine", I mean the bit of Pulumi logic that actually deploys the resources to the cloud.
I'm afraid I don't have time to help in more detail. You can read up on arguments, outputs, and when they're available in Pulumi documentation. https://www.pulumi.com/docs/concepts/inputs-outputs/ https://www.pulumi.com/docs/using-pulumi/define-and-provision-resources/
g
I already have a deployed state
it’s deployed, created
I pass the cluster object to the outputs.
l
Yes, but every time you run
pulumi up
or
pulumi preview
, the engine is run again.
g
I manually go to another stack and get that object, and .name doesn’t work; it says there’s no such attribute
l
Ah. That's different to the problem as I understood it.
How are you getting that object?
g
in
infra/eks
I’m registering the output, the cluster object:

# Export the EKS cluster object
pulumi.export('cluster', eks_cluster.cluster)
l
Ah. Don't do that. Export just the ID.
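A minimal sketch of that fix in infra/eks, assuming eks_cluster.cluster is the underlying aws.eks.Cluster from your package:

```python
import pulumi

# Export plain values rather than the resource itself; a resource object
# doesn't survive the round trip through stack outputs.
pulumi.export("cluster_name", eks_cluster.cluster.name)
pulumi.export("cluster_arn", eks_cluster.cluster.arn)
pulumi.export("cluster_endpoint", eks_cluster.cluster.endpoint)
```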
g
in
infra/eks_utils
i’m getting the object:
eks_stack = pulumi.StackReference("organization/infra-eks/dev")

# You can optionally use any output from infra-eks here
cluster = eks_stack.get_output("cluster")
l
You cannot load a resource like that.
g
I need plenty of stuff from there
it kinda works, except for the name
most of the outputs work
l
You export the resource ID, and you use the get function in the other project.
g
oh ok
l
It may work, but it's not creating a proper Pulumi resource, it's just a plain object with the same values.
g
so that would be cluster_id, I suppose
so the get function would have the name output?
or is cluster_id the same as the name?
l
Yes.
g
because name is not present in the outputs in the docs
l
Note that the cluster_id is special, see the documentation:
The ID of your local Amazon EKS cluster on the AWS Outpost. This attribute isn't available for an AWS EKS cluster on AWS cloud.
You may want the ARN instead of the ID. I'm not sure about that.
g
does aws.eks.Addon() accept an ARN for the name input?
that’s the end goal actually
l
Don't know. You're beyond my knowledge 🙂 I'm good with outputs!
g
this is a required input for Addon()
that’s why I’m asking where I get that name from 😄
l
Name is in the outputs of Cluster according to the docs. It says:
All input properties are implicitly available as output properties.
And name is an input.
g
hmm
l
The problem may be that you're exporting the object. That's not really supported, and is only working "accidentally".
g
I see
I’ll try the get method then
l
You really should export the ID, and use GetCluster / get (golang / python) to get the real resource.
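A sketch of the consuming side in infra/eks_utils, assuming infra/eks exports a cluster_name output (for aws.eks.Cluster, the importable resource ID is the cluster name):

```python
import pulumi
import pulumi_aws as aws

eks_stack = pulumi.StackReference("organization/infra-eks/dev")
cluster_name = eks_stack.get_output("cluster_name")

# Rehydrate a read-only managed resource from its ID; for EKS the
# cluster name is the ID.
cluster = aws.eks.Cluster.get("cluster", id=cluster_name)

# Now cluster.name, cluster.endpoint, etc. are proper Outputs again.
pulumi.export("endpoint", cluster.endpoint)
```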
g
Thank you, found it
is there any way, using that ID, to get a kubeconfig via the eks.Cluster provider?
cause it seems it has that out of the box
a GetKubeConfig method, I believe
l
Not sure, I don't use k8s. I have a design question: is the _utils code that you're trying to access this from really utils? Or is it another project? Because if it's just re-usable util functions, you could just pass the real cluster object in. It does work.
If it's a different Pulumi project, then you need to use the get function to get a read-only managed resource.
Note that you really, really cannot modify a resource from two projects. That is absolutely not going to work.
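On the kubeconfig question left open here: one common pattern (an assumption on my part, not something confirmed in this thread) is to assemble the kubeconfig yourself from the cluster's name, endpoint, and CA data, delegating auth to aws eks get-token. A hedged sketch:

```python
import json

def make_kubeconfig(name: str, endpoint: str, ca_data: str) -> str:
    """Build a kubeconfig that authenticates via `aws eks get-token`."""
    return json.dumps({
        "apiVersion": "v1",
        "kind": "Config",
        "clusters": [{
            "name": name,
            "cluster": {
                "server": endpoint,
                "certificate-authority-data": ca_data,
            },
        }],
        "contexts": [{
            "name": name,
            "context": {"cluster": name, "user": name},
        }],
        "current-context": name,
        "users": [{
            "name": name,
            "user": {
                "exec": {
                    "apiVersion": "client.authentication.k8s.io/v1beta1",
                    "command": "aws",
                    "args": ["eks", "get-token", "--cluster-name", name],
                },
            },
        }],
    })

# With a rehydrated aws.eks.Cluster, combine its Outputs, e.g.:
# kubeconfig = pulumi.Output.all(
#     cluster.name, cluster.endpoint, cluster.certificate_authority.data,
# ).apply(lambda args: make_kubeconfig(*args))
```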
g
What do you mean by two projects?
All I need is to get a formed kubeconfig, just for further provisioning 🙂
I won’t be modifying cluster itself with different provider 🙂
l
So you're in the same Pulumi project? Then you don't need to use the get function. Just pass the object around.
g
Hmm
l
You don't need to export it.
g
That would be a bit difficult to explain.. so we have contexts in folders like mycontext/, and inside we have different stacks like mycontext/eks and mycontext/vpc. We’ll have different contexts; currently they’re more case-oriented: a context for infra stuff (monitoring, logging, and all the DevOps stuff), then contexts for dev where the managed databases will live. So say we have dev/vpc and infra/vpc states.. they are independent and can be deployed separately.
is it stacks or states.. I forgot already..
anyway, we have separate backends for dev/vpc and infra/vpc
but they do share the same bucket
l
You're describing projects, not stacks.
g
using stack references for getting their outputs
l
Stacks within projects contain different versions of the same resources: e.g. dev, europe, prod, etc.
g
yeap, and we have Pulumi.dev.yaml and Pulumi.stag.yaml
for specifying different configs for different envs
l
Projects contain different resources, like VPC, cluster, databases, etc.
g
those would live inside infra/eks, for example: one cluster config for dev, another for staging
l
Apologies, meeting now, I'll swing back later.
g
Grateful for your time 🙂
myproject/
├── dev # dev got own vpc
│   └── vpc
│       └── Pulumi.yaml
├── infra 
│   ├── eks
│   │   ├── Pulumi.dev.yaml # eks params for dev
│   │   ├── Pulumi.stag.yaml # eks params for stag
│   │   └── Pulumi.yaml
│   ├── eks_utils      # splitting provisioning
│   │   ├── Pulumi.dev.yaml
│   │   ├── Pulumi.stag.yaml
│   │   └── Pulumi.yaml
│   └── vpc # infra got own vpc, peering will be used
│       ├── Pulumi.dev.yaml
│       └── Pulumi.yaml
└── packages
    └── eks_utils # package for reusing default stuff
l
Since you're setting up peering between the two VPCs, having two projects is appropriate. Otherwise I'd have suggested having two stacks. It is feasible to have two stacks in one VPC project, and a separate project just for setting up peering between them, but then you have to handle twice as many exports.

If the eks_utils project is intended to be deployed (pretty much) every time the eks project is, I'd recommend merging them. It's less to manage. You can use your language's normal features for calling out to functions / classes defined in other files. If one project is deployed much more frequently than the other, your current arrangement is good. The functions / methods in the eks_utils package can have resources from the Pulumi projects passed into them, so that's a good separation.

I generally recommend that the only "context" that requires different Pulumi projects is deployment cycle. If you have contexts like "VPC" and "EKS", and you expect them to be deployed more or less at the same time (which is quite likely for those sorts of resources), then it is easiest to manage them as a single project. You can use separate files for the different bits of code. But the ability to get at (for example) VPC subnet IDs directly from your EKS cluster constructor is great: it allows you to avoid exporting and importing a pile of IDs.
g
The main idea behind separating eks and eks_utils is that I’ve run into this a million times with Terraform: something changes in the EKS cluster settings and tons of deployment scripts fail because there’s no access to the cluster anymore, and I have to move a bunch of provisioning scripts out just to get at the problem with the cluster itself. It happened many times in Terraform, especially with the bastion server. With a MySQL cluster, all my user and DB creation kept failing.. so I had to move a bunch of scripts away, even remove them from the state to fix the problem with the cluster itself, and then import everything back. Separating states for provisioning is always a good idea.
Regarding contexts.. that’s a requirement from my head of DevOps. I’d just use stacks separated by service, but working without contexts brings trouble: we’ll have a bunch of stuff for various contexts when all we care to find is, like, 5 services of dev. Yeah, prefixes might fix that.. but still, things grow a lot here. In case we need a service that belongs to all contexts (like IAM policies), it will be placed in the root, near the packages. Ideally, I could just go to the infra folder and run
pulumi up
and be done.