# general
l
Hi folks, how can I get the console log after creating an EC2 instance? I know there is an AWS API call for this, but there's no wrapper in Pulumi.
l
Pulumi is a declarative configuration system. It's not meant for pulling data from a resource it has just created or updated. You would use the AWS SDK for that.
l
Thank you. What I want is that after an EC2 instance is created, I can get some config files from it. Can I achieve this using pulumi?
d
You specify the configuration of an instance with Pulumi. I imagine you would need to use an AWS SDK to fetch the config files. (But I would question why you want to do that in the first place.)
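For the original console-log question, here is a minimal sketch of the SDK route, assuming the AWS SDK v3 for JavaScript (`@aws-sdk/client-ec2`). EC2's `GetConsoleOutput` API returns the log base64-encoded, so the decoding helper below is the runnable part; the SDK call itself is shown in comments because it needs real credentials and an instance ID (the ID below is a placeholder):

```typescript
// Decode the base64-encoded console output returned by EC2's GetConsoleOutput.
function decodeConsoleOutput(output?: string): string {
  return Buffer.from(output ?? "", "base64").toString("utf8");
}

// With @aws-sdk/client-ec2 installed, fetching would look roughly like:
//   const client = new EC2Client({});
//   const resp = await client.send(
//     new GetConsoleOutputCommand({ InstanceId: "i-0123456789abcdef0" }));
//   console.log(decodeConsoleOutput(resp.Output));
```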
l
Alright, I think this is my fault; I should have described the original problem. I want to bootstrap a k8s cluster, and after that I want to get the kubeconfig from the master.
It is very possible I went in the wrong direction, but I don't want EKS.
g
If you want to get the cluster config after already bootstrapping, try this:
```shell
pulumi stack output kubeconfig --show-secrets > kubeconfig
```
Then you can export it to your local config so kubectl works:
```shell
export KUBECONFIG=$PWD/kubeconfig
```
l
I don't understand; the kubeconfig is on some EC2 instance, so how can I get it into a Pulumi stack variable? Say I am using an Ansible script to initialize the instance with userdata.
g
Did you stand up the instance with Pulumi (meaning you wrote the code with Pulumi and used `pulumi up` to start the instance)? If you didn't use Pulumi to configure the instance in the first place, Pulumi doesn't know anything about the instance. In that case, I think you would get better results by getting the kubeconfig directly using the SDK, as mentioned earlier in the thread. Otherwise, you would have to import the infrastructure into Pulumi so the Pulumi system knows the current state, then export the kubeconfig from the stack as I noted.
l
@great-queen-39697 Thanks~ Yes, I wrote code in TS to create the EC2 instance, but I didn't use EKS or Crosswalk, so I don't have the kubeconfig in the stack. I need to find another way to do this, maybe dynamic providers.
g
Ah, I think I understand what you're asking for. There's a bit of terminology confusion. You stood up a Kubernetes cluster in EC2 using Pulumi, and didn't use a managed Kubernetes service, correct? A stack, in Pulumi terms, is an instance of a Pulumi program. Whatever you put into your code is your Pulumi program, and when you used `pulumi up` to deploy to EC2, you created a stack. That stack's configuration and state is stored in Pulumi. Any time you create a Kubernetes cluster, there's a kubeconfig. The commands I noted above are terminal commands, not code. `pulumi stack` is a command-line interface to manage the stack (see https://www.pulumi.com/docs/reference/cli/pulumi_stack/). If you create a Kubernetes cluster with Pulumi anywhere, not just on a managed service, you can get the kubeconfig from the stack output at the command line.
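For `pulumi stack output kubeconfig` to work on a self-managed cluster, the Pulumi program itself has to export the kubeconfig as a stack output. A minimal sketch of that mechanism (the helper function and the IP are hypothetical placeholders; in a real program the value would come from the instance resource, e.g. `instance.publicIp`):

```typescript
// Anything exported from a Pulumi program's entrypoint becomes a stack output,
// retrievable later with `pulumi stack output <name>`.

// Hypothetical helper standing in for "obtain the kubeconfig for the master".
function buildKubeconfig(masterIp: string): string {
  return [
    "apiVersion: v1",
    "kind: Config",
    `# server: https://${masterIp}:6443`,
  ].join("\n");
}

// 203.0.113.10 is a documentation placeholder address.
export const kubeconfig = buildKubeconfig("203.0.113.10");
```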
l
Hi Laura, you are right that I tried to stand up a k8s cluster in EC2 using Pulumi, but with a userdata script (using Ansible, actually). But I don't get why the kubeconfig would be in the stack if I don't use a managed k8s service. To my understanding, after the instance stands up, the config will be in the filesystem of that instance, and I need a way to export it (or import it from Pulumi); this is what I wanted to ask. It would be appreciated if you could provide an example where a non-managed k8s cluster is created using Pulumi and `pulumi stack output kubeconfig` works.
g
Pulumi would not be able to authenticate against the Kubernetes cluster to do anything to it without the kubeconfig file:
> If the kubeconfig file is not in either of these locations, Pulumi will not find it, and it will fail to authenticate against the cluster. (from https://www.pulumi.com/docs/intro/cloud-providers/kubernetes/setup/#steps)
If you've successfully run `pulumi up` against the cluster and updated it, then it's already available. If you're saying that you used Ansible to set up EC2 with a Kubernetes cluster and now want to use Pulumi to manage it from here, but have not actually successfully connected, that's a different story, and you do need to use a different tool to get that kubeconfig file before Pulumi can authenticate and access the cluster. If you can run `kubectl` against the cluster on EC2, you can view the config with `kubectl config view`. If you can't run `kubectl` against that cluster, you need to connect to the instance and locate the configuration file(s) manually (e.g., if you want to do it through ssh, go through this guide from AWS).
l
@great-queen-39697 Much appreciated. After digging into the Pulumi docs, I decided to write a dynamic provider to handle all the dirty work (provision a k8s cluster, upload its kubeconfig somewhere, then fetch it back into a property on my own custom resource); then I can instantiate a Kubernetes provider with it.
🙌 1
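A sketch of the dynamic-provider shape mentioned above. All names here are hypothetical: a real provider would `import * as pulumi from "@pulumi/pulumi"`, type the object as `pulumi.dynamic.ResourceProvider`, and wrap it in a class extending `pulumi.dynamic.Resource`; and the fetch would actually reach the master (e.g. over SSH) rather than return a stub:

```typescript
// Hypothetical stand-in for "SSH to the master and read the kubeconfig",
// e.g. `ssh <host> cat /etc/kubernetes/admin.conf`.
async function fetchKubeconfig(host: string): Promise<string> {
  return `# kubeconfig fetched from ${host}\napiVersion: v1\nkind: Config`;
}

// The core of a Pulumi dynamic provider: a create() method that provisions
// the resource and returns an id plus output properties. Pulumi stores the
// outs in the stack's state, so they become readable from the stack later.
const kubeconfigProvider = {
  async create(inputs: { host: string }) {
    const kubeconfig = await fetchKubeconfig(inputs.host);
    return { id: inputs.host, outs: { kubeconfig } };
  },
};
```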