# kubernetes
f
Hey all. I need help with something I've been stuck on for days now. My scenario: I create the VPC first, and then I'm supposed to use the VPC ID while creating the EKS cluster. They must be created separately. I have one function that runs an upsert:
Copy code
func PulumiOps(program pulumi.RunFunc, project_name, stack_name, region string, ctx context.Context) (auto.OutputMap, error) {
	stack, err := auto.UpsertStackInlineSource(ctx, stack_name, project_name, program)
	if err != nil {
		logrus.Error("Could not create stack: ", err)
		return auto.OutputMap{}, err
	}

	err = stack.SetConfig(ctx, "aws:region", auto.ConfigValue{Value: region})
	if err != nil {
		logrus.Error("Could not set config: ", err)
		return auto.OutputMap{}, err
	}

// Create a new VPC with a public and private subnet.
func A(ctx *pulumi.Context) error {
	vpc, err := awsx.NewVpc(ctx, project_name, &awsx.VpcArgs{
		CidrBlock: pulumi.StringRef("10.0.0.0/16"),
		Tags: pulumi.StringMap{
			"Name": pulumi.String(project_name),
		},
		NumberOfAvailabilityZones: pulumi.IntRef(4),
		SubnetSpecs: []awsx.SubnetSpecArgs{
			{
				<other args>
	ctx.Export("vpcId", vpc.VpcId)
Copy code
func B(ctx *pulumi.Context) error {
	// Create an EKS cluster.
	cluster, err := eks.NewCluster(ctx, project_name, &eks.ClusterArgs{
		InstanceType:                 pulumi.String(eksNodeInstanceType),
		VpcId:                        pulumi.String(vpcid),
		PublicSubnetIds:              vpc.PublicSubnetIds,
		PrivateSubnetIds:             vpc.PrivateSubnetIds,
		MinSize:                      pulumi.Int(minClusterSize),
		MaxSize:                      pulumi.Int(maxClusterSize),
		DesiredCapacity:              pulumi.Int(desiredClusterSize),
		NodeAssociatePublicIpAddress: pulumi.BoolRef(false),
		<other args>
	ctx.Export("kubeconfig", cluster.Kubeconfig)

So whenever I create the VPC with
PulumiOps(... A ...) // snippet ---> This creates the VPC correctly and I get the VPC ID

Then the next time I want to create EKS or any other resource using
PulumiOps(... B ...) ----->>> This subsequent run deletes everything that was created for the VPC.

The requirement is to call func A first and wait for it to finish; then, somewhere else, func B is called and provided with the vpcid from func A.

func A works perfectly: I get the VPC ID and store it somewhere. func B then deletes all the resources. Why is this happening and how can I fix it? Thank you very much.
m
You'll have to show your complete code and explain how you run it. You might be replacing the stack that contains the VPC with a stack that just contains the cluster, but unless you share a complete example of your problem one can only guess.
f
I've updated the question
h
this is still missing a lot of information. how are you calling A and B? that’s very relevant. are these in different stacks? the same stack?
f
I've added the correct scope including stack and all the relationships now
Thanks @hallowed-photographer-31251, I've updated it to explain the whole situation.
What I have tried, before creating the EKS cluster, is:
Copy code
_, err := awsec2.NewVpc(ctx, project_name, &awsec2.VpcArgs{}, pulumi.Import(pulumi.ID(vpc_id)))
	if err != nil {
		logrus.Error("Could not retrieve VPC: ", err)
		return nil, err
	}
but always getting
aws:ec2:Vpc newproject-9478 **importing failed** error: inputs to import do not match the existing resource
h
it would probably be worth your time to put together an MVRE. this is still missing a lot of important information like the stack names you’re invoking this method with. i don’t think there’s anything specific to kubernetes here, and you could probably re-produce the issue using some random numbers instead of full-blown VPC and k8s resources. my best guess is that you’re doing both of these operations in the same stack instead of different stacks. if you run an update on a stack with only a vpc resource, it will create that vpc. if your next update on the same stack only includes someOtherResource, it will create someOtherResource and delete the vpc because it’s no longer defined. if you want both of these things to co-exist together, you either need to define both resources at the same time inside the same stack, or you need to define them in separate stacks.
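To make that concrete with the "random numbers instead of VPCs" idea, here is a toy, cloud-free model of what an update on a single stack does (the names and the reconciliation function are mine, not Pulumi's actual engine, but the behavior matches the description above):

```go
package main

import "fmt"

// update reconciles a stack's recorded state with the set of resources
// the program declares on this run: resources not yet in state are
// created, and anything in state that the program no longer declares
// is deleted.
func update(state map[string]bool, declared []string) (created, deleted []string) {
	seen := map[string]bool{}
	for _, name := range declared {
		seen[name] = true
		if !state[name] {
			state[name] = true
			created = append(created, name)
		}
	}
	for name := range state {
		if !seen[name] {
			delete(state, name)
			deleted = append(deleted, name)
		}
	}
	return created, deleted
}

func main() {
	stack := map[string]bool{} // one stack, reused for both updates

	// First update: the program only declares the "vpc".
	c, d := update(stack, []string{"vpc"})
	fmt.Println("update 1: created", c, "deleted", d)

	// Second update on the SAME stack: the program only declares
	// "cluster". The vpc is no longer in the program, so it gets deleted.
	c, d = update(stack, []string{"cluster"})
	fmt.Println("update 2: created", c, "deleted", d)
}
```

Running the second update with both "vpc" and "cluster" declared, or running "cluster" against a different stack, leaves the vpc alone — which is exactly the "same stack vs. separate stacks" choice described above.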
f
Thank you Bryce, I really appreciate your time. Follow-up question: Pulumi has pulumi import, right? I tried this
Copy code
package main

import (
	"context"
	"fmt"

	"github.com/pulumi/pulumi-aws/sdk/v5/go/aws/ec2"
	"github.com/pulumi/pulumi-aws/sdk/v5/go/aws/eks"
	"github.com/pulumi/pulumi/sdk/v3/go/auto"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
	"github.com/sirupsen/logrus"
)

// PulumiOps function that runs Upsert
func PulumiOps(program func(ctx *pulumi.Context) error, projectName, stackName, region string, ctx context.Context) (auto.OutputMap, error) {
	stack, err := auto.UpsertStackInlineSource(ctx, stackName, projectName, program)
	if err != nil {
		logrus.Error("Could not create stack: ", err)
		return auto.OutputMap{}, err
	}

	err = stack.SetConfig(ctx, "aws:region", auto.ConfigValue{Value: region})
	if err != nil {
		logrus.Error("Could not set config: ", err)
		return auto.OutputMap{}, err
	}

	// Run Pulumi program
	upResult, err := stack.Up(ctx)
	if err != nil {
		logrus.Error("Failed to run stack update: ", err)
		return auto.OutputMap{}, err
	}

	return upResult.Outputs, nil
}

// VPC Creation Program (A)
func CreateVPC(ctx *pulumi.Context) error {
	vpc, err := ec2.NewVpc(ctx, "my-vpc", &ec2.VpcArgs{
		CidrBlock: pulumi.String("10.0.0.0/16"),
		Tags: pulumi.StringMap{
			"Name": pulumi.String("my-vpc"),
		},
	})
	if err != nil {
		return err
	}

	ctx.Export("vpcId", vpc.ID())
	return nil
}

// EKS Creation Program (B)
func CreateEKS(ctx *pulumi.Context, vpcID pulumi.ID) error {
	// Import the existing VPC resource
	vpc, err := ec2.NewVpc(ctx, "imported-vpc", &ec2.VpcArgs{}, pulumi.Import(vpcID))
	if err != nil {
		return err
	}

	// Lookup the subnets associated with the VPC
	subnets, err := ec2.GetSubnets(ctx, &ec2.GetSubnetsArgs{
		Filters: []ec2.GetSubnetsFilter{
			{
				Name:   "vpc-id",
				Values: []string{string(vpcID)},
			},
		},
	})
	if err != nil {
		return err
	}

	// Classify subnets into public and private
	var publicSubnetIDs pulumi.StringArray
	var privateSubnetIDs pulumi.StringArray
	for _, subnetID := range subnets.Ids {
		subnet, err := ec2.LookupSubnet(ctx, &ec2.LookupSubnetArgs{
			Id: &subnetID,
		})
		if err != nil {
			return err
		}
		if subnet.MapPublicIpOnLaunch {
			publicSubnetIDs = append(publicSubnetIDs, pulumi.String(subnetID))
		} else {
			privateSubnetIDs = append(privateSubnetIDs, pulumi.String(subnetID))
		}
	}

	// Create the EKS cluster using the imported VPC and discovered subnets
	cluster, err := eks.NewCluster(ctx, "my-eks-cluster", &eks.ClusterArgs{
		VpcConfig: &eks.ClusterVpcConfigArgs{
			VpcId:            vpc.ID(),
			SubnetIds:        publicSubnetIDs,
			PrivateSubnetIds: privateSubnetIDs,
		},
	})
	if err != nil {
		return err
	}

	ctx.Export("kubeconfig", cluster.Kubeconfig)
	return nil
}

func main() {
	ctx := context.Background()

	// First, run the VPC creation program
	outputs, err := PulumiOps(CreateVPC, "my-project", "my-vpc-stack", "us-east-1", ctx)
	if err != nil {
		logrus.Fatal("Failed to create VPC: ", err)
	}

	// Get the VPC ID from the outputs
	vpcID := outputs["vpcId"].Value.(string)

	// Now, run the EKS creation program using the VPC ID
	_, err = PulumiOps(func(ctx *pulumi.Context) error {
		return CreateEKS(ctx, pulumi.ID(vpcID))
	}, "my-project", "my-eks-stack", "us-east-1", ctx)
	if err != nil {
		logrus.Fatal("Failed to create EKS cluster: ", err)
	}

	fmt.Println("EKS cluster created successfully")
}
h
Follow up question, pulumi has pulumi import right?
it does, but i don’t think you need it in this case. the vpc’s ID should be enough to get subnets etc.
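For what it's worth, B along those lines could look roughly like this: the VPC ID comes in as a plain string, no pulumi.Import anywhere (an untested sketch against the pulumi-aws v5 SDK used above; clusterRoleArn is a hypothetical parameter, and other required ClusterArgs are omitted for brevity):

```go
// CreateEKS consumes the VPC ID exported by the first stack as a plain
// string; nothing is imported into this stack, the ID is only used to
// discover subnets via a data-source lookup.
func CreateEKS(ctx *pulumi.Context, vpcID, clusterRoleArn string) error {
	// Find the subnets that belong to the already-existing VPC.
	subnets, err := ec2.GetSubnets(ctx, &ec2.GetSubnetsArgs{
		Filters: []ec2.GetSubnetsFilter{
			{Name: "vpc-id", Values: []string{vpcID}},
		},
	})
	if err != nil {
		return err
	}

	var subnetIDs pulumi.StringArray
	for _, id := range subnets.Ids {
		subnetIDs = append(subnetIDs, pulumi.String(id))
	}

	// The VPC is implied by the subnets, so the cluster args never need
	// the raw VPC ID, and nothing from the VPC stack is re-declared here.
	cluster, err := eks.NewCluster(ctx, "my-eks-cluster", &eks.ClusterArgs{
		RoleArn: pulumi.String(clusterRoleArn),
		VpcConfig: &eks.ClusterVpcConfigArgs{
			SubnetIds: subnetIDs,
		},
	})
	if err != nil {
		return err
	}

	ctx.Export("clusterEndpoint", cluster.Endpoint)
	return nil
}
```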
🙏 1
m
@flaky-pizza-91785 I second the idea to create a minimal example. Based on the code and questions I think you don't yet fully understand how Pulumi operates when it comes to resources and stacks, so a small toy example to play around with (rather than having to create/remove VPCs and EKS clusters) will be very educational and helpful. (No worries, we've all been there at one point 🙂)
f
Here's one, @modern-zebra-45309 — the same code I posted above.
I'm in deep waters here and must learn how to swim. This gives me:
aws:ec2:Vpc newproject-9478 **importing failed** error: inputs to import do not match the existing resource
There should be a way to import resources before they are used, no? Imagine you are developing an API where endpoints get triggered along the way.
m
I understand what you're trying to do, but I think you're "holding the tool wrong"
f
Like an endpoint to create the VPC and another endpoint to create EKS. Or how would you approach such a situation?
m
Let me try to unblock you
You have two options: You can make one stack that has both the VPC and the EKS cluster, or you can have two stacks (one for the VPC, one for the EKS cluster)
f
yes. An endpoint to create the VPC, and another endpoint to create EKS using the VPC ID created earlier — but where the EKS endpoint doesn't delete the VPC resources 🙂
I was going for the 1st option, but one endpoint deletes the resources created by the other
m
Note that a physical resource can be part of at most one stack. So the VPC and the cluster can each only belong to one stack. This is why importing a resource that you created through Pulumi is usually not done; you would only import resources that had previously been created through other means. (Or when you're restructuring stacks, but that's a different topic.)
f
Okay. That makes sense now
So now we remain with option 2
m
You can do both options, and having everything in one stack is typically easier unless you have lots of resources that change independently a lot
May I ask why you're using the Automation API? Are you set on this or would you be fine with using Pulumi directly?
https://github.com/pulumi/examples/tree/master/aws-go-eks is a full example of creating an EKS cluster in Go that you can use as a reference
f
I had gone through that
m
OK, nice. Then my suggestion is that you either add the VPC in there (instead of using the default VPC), giving you a "one stack" solution
Or you create a second main.go that creates the VPC and exports the VPC ID as an output. Then you can modify the EKS Pulumi program to refer to this output via a stack reference
f
Awesome. Will look into this
Thank you Kilian
m
I think this will help you understand better what a stack is and how a stack is an instance of a Pulumi program. Unless you absolutely have to use the Automation API, stay with the "standard" use of Pulumi through the Pulumi CLI. You can always switch later, but you save yourself a lot of work and will benefit from keeping things simple.