# getting-started
b
can you show the output from your `pulumi up`?
l
Copy code
Do you want to perform this update? yes
Updating (test):

     Type                                    Name                                                                    Status       Info
     pulumi:pulumi:Stack                     ix-aws-infrastructure-test                                                           1 message
 +   └─ eks:index:Cluster                    ix-test-eks-cluster-public-gateway                                      created
 +      ├─ eks:index:ServiceRole             ix-test-eks-cluster-public-gateway-eksRole                              created
 +      │  ├─ aws:iam:Role                   ix-test-eks-cluster-public-gateway-eksRole-role                         created
 +      │  ├─ aws:iam:RolePolicyAttachment   ix-test-eks-cluster-public-gateway-eksRole-4b490823                     created
 +      │  └─ aws:iam:RolePolicyAttachment   ix-test-eks-cluster-public-gateway-eksRole-90eb1c99                     created
 +      ├─ eks:index:ServiceRole             ix-test-eks-cluster-public-gateway-instanceRole                         created
 +      │  ├─ aws:iam:Role                   ix-test-eks-cluster-public-gateway-instanceRole-role                    created
 +      │  ├─ aws:iam:RolePolicyAttachment   ix-test-eks-cluster-public-gateway-instanceRole-3eb088f2                created
 +      │  ├─ aws:iam:RolePolicyAttachment   ix-test-eks-cluster-public-gateway-instanceRole-03516f97                created
 +      │  └─ aws:iam:RolePolicyAttachment   ix-test-eks-cluster-public-gateway-instanceRole-e1b295bd                created
 +      ├─ aws:ec2:SecurityGroup             ix-test-eks-cluster-public-gateway-eksClusterSecurityGroup              created
 +      ├─ eks:index:RandomSuffix            ix-test-eks-cluster-public-gateway-cfnStackName                         created
 +      ├─ aws:ec2:SecurityGroupRule         ix-test-eks-cluster-public-gateway-eksClusterInternetEgressRule         created
 +      ├─ aws:eks:Cluster                   ix-test-eks-cluster-public-gateway-eksCluster                           created
 +      ├─ aws:iam:InstanceProfile           ix-test-eks-cluster-public-gateway-instanceProfile                      created
 +      ├─ aws:iam:OpenIdConnectProvider     ix-test-eks-cluster-public-gateway-oidcProvider                         created
 +      ├─ aws:ec2:SecurityGroup             ix-test-eks-cluster-public-gateway-nodeSecurityGroup                    created
 +      ├─ pulumi:providers:kubernetes       ix-test-eks-cluster-public-gateway-eks-k8s                              created
 +      ├─ eks:index:VpcCni                  ix-test-eks-cluster-public-gateway-vpc-cni                              created
 +      ├─ kubernetes:core/v1:ConfigMap      ix-test-eks-cluster-public-gateway-nodeAccess                           created
 +      ├─ aws:ec2:SecurityGroupRule         ix-test-eks-cluster-public-gateway-eksClusterIngressRule                created
 +      ├─ aws:ec2:SecurityGroupRule         ix-test-eks-cluster-public-gateway-eksNodeInternetEgressRule            created
 +      ├─ aws:ec2:SecurityGroupRule         ix-test-eks-cluster-public-gateway-eksExtApiServerClusterIngressRule    created
 +      ├─ aws:ec2:SecurityGroupRule         ix-test-eks-cluster-public-gateway-eksNodeIngressRule                   created
 +      ├─ aws:ec2:SecurityGroupRule         ix-test-eks-cluster-public-gateway-eksNodeClusterIngressRule            created
 +      ├─ aws:ec2:LaunchConfiguration       ix-test-eks-cluster-public-gateway-nodeLaunchConfiguration              created
 +      ├─ aws:cloudformation:Stack          ix-test-eks-cluster-public-gateway-nodes                                created
 +      └─ pulumi:providers:kubernetes       ix-test-eks-cluster-public-gateway-provider                             created

Diagnostics:
  pulumi:pulumi:Stack (ix-aws-infrastructure-test):
    Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition

Resources:
    + 28 created
    20 unchanged

Duration: 15m15s
thanks for the prompt response @billowy-army-68599!
b
your nodes are in there!
Copy code
aws:cloudformation:Stack         ix-test-eks-cluster-public-gateway-nodes                              created
it creates a CloudFormation stack with an autoscaling group in it, and the worker nodes are attached to that group
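For reference, a minimal sketch of how that default node group is sized (assuming the classic @pulumi/eks behaviour; the names and instance type below are illustrative, not taken from this thread):
Copy code
import * as eks from "@pulumi/eks";

// Illustrative names/values. The default node group behind that CloudFormation
// stack is an Auto Scaling group of plain EC2 instances, sized by these options
// (it defaults to 2 nodes when they are omitted).
const cluster = new eks.Cluster("example-cluster", {
    instanceType: "t3.medium",
    desiredCapacity: 2, // how many worker instances the ASG keeps running
    minSize: 1,
    maxSize: 3,
});

export const kubeconfig = cluster.kubeconfig;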
l
I know it says that, but I don’t see them in the AWS console.
@billowy-army-68599, am I missing something?
b
those are managed nodes, ie the aws nodes you don't have access to
l
sorry, @billowy-army-68599, I don’t understand. I know that when following the awsx example for creating a VPC & EKS cluster, a couple of nodes are created and appear in the EKS console. In my case, I have an existing VPC that I imported, and then I try to create the cluster. This led me to the issue I first posted. What am I doing wrong?
b
which example did you follow?
l
Copy code
// Create a VPC for our cluster.
const vpc = new awsx.ec2.Vpc("jh-test-vpc", {
    tags: { Name: "jh-test-vpc" },
    numberOfAvailabilityZones: 3,
    subnets: [
        { type: "public", name: "public", tags: { Name: "public", "kubernetes.io/role/elb": "1" } },
        { type: "private", name: "gateway-k8s", tags: { Name: "gateway-k8s", "kubernetes.io/cluster/jh-gateway-cluster-eksCluster-a94d2a1": "owned", "kubernetes.io/role/internal-elb": "1" } },
    ],
});

const gatewayCluster = new eks.Cluster("jh-gateway-cluster", {
    vpcId: vpc.id,
    publicSubnetIds: vpc.publicSubnetIds,
    privateSubnetIds: vpc.privateSubnetIds,
    nodeAssociatePublicIpAddress: false,
    createOidcProvider: true,
    providerCredentialOpts: {
        profileName: aws.config.profile,
    },
});
b
okay, I'm provisioning that now to check, but I think that won't provision any managed nodes either. did you see managed nodes in the console when you provisioned that?
l
in the EKS console, in the Overview tab, 2 nodes show up when I follow the awsx example. When I try to use my existing VPC, those nodes don’t appear
w
Hey @billowy-army-68599 just confirmed that a super minimal setup:
Copy code
import * as awsx from "@pulumi/awsx";
import * as eks from "@pulumi/eks";
import * as aws from "@pulumi/aws";

const vpc = new awsx.ec2.Vpc("test-vpc", {
    tags: {Name: "test-vpc"},
    numberOfAvailabilityZones: 3,
    subnets: [
        { type: "public", name: "public", tags: {Name: "public", "kubernetes.io/role/elb": ""} },
        { type: "private", name: "private", tags: {Name: "private", "kubernetes.io/role/internal-elb": ""} },
    ],
});

const testCluster = new eks.Cluster("test-cluster", {
    vpcId: vpc.id,
    publicSubnetIds: vpc.publicSubnetIds,
    privateSubnetIds: vpc.privateSubnetIds,
    nodeAssociatePublicIpAddress: false,
    createOidcProvider: true,
    providerCredentialOpts: {
        profileName: aws.config.profile,
    },
});

export const testKubeconfig = testCluster.kubeconfig;
does result in 2 nodes showing in the EKS console. I'm working with @lively-student-98057 on this little project and I suspect it's our subnet variables that are not being pulled through / resolved correctly. He's offline right now but we'll try to debug those variables tomorrow to make sure they're what we think they are.
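A quick way to check what those subnet variables resolve to (a sketch, assuming the same `vpc` object as the example above; the export names are arbitrary):
Copy code
import * as pulumi from "@pulumi/pulumi";

// Export the IDs so `pulumi stack output` shows exactly what is handed to eks.Cluster.
export const debugPublicSubnetIds = pulumi.output(vpc.publicSubnetIds);
export const debugPrivateSubnetIds = pulumi.output(vpc.privateSubnetIds);

// Or log them during `pulumi up` once the outputs resolve.
pulumi.output(vpc.privateSubnetIds).apply(ids =>
    pulumi.log.info(`private subnets passed to eks.Cluster: ${ids.join(", ")}`));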
b
huh, I just confirmed the same thing. checking now what's going on, this isn't what I expected
w
i figure (but haven't checked) that the ASG has a min of 2?
b
yeah that's correct, I wonder why I thought we provisioned managed nodes. Can you double check your nodes can route to the API control plane? they may never have joined the cluster
w
they do, I see the aws-node daemonset and I can run workloads fine
cni up, coredns, etc...
b
I am completely baffled as to why this might be
sorry, to be clear: are the nodes joined correctly in the "bring your own vpc" configuration mentioned earlier?
w
Oh, no, the bring your own vpc/subnet is completely broken
aws-node is 0/2, there are no nodes provisioned or connected
Sorry I should clarify
We have not tried placing EKS in the default VPC at all. When creating a vpc ourselves (using awsx, example above), everything works great. The non-working configuration is where we have imported an existing VPC and subnets, and are attempting to use that.
We have the import-generated boilerplate in another file, and I think maybe our issue is that the `vpc.{private,public}SubnetIds` variables are not (resolving to) what we think they are, leading to the cluster being created but no nodes being provisioned. We'll debug that tomorrow with some logging and figure it out.
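One pattern that removes the guesswork about those variables (a sketch assuming the classic awsx API, not something suggested in this thread; the IDs below are placeholders): wrap the existing VPC with `awsx.ec2.Vpc.fromExistingIds` and hand `eks.Cluster` explicit subnet IDs.
Copy code
import * as awsx from "@pulumi/awsx";
import * as eks from "@pulumi/eks";

// Placeholder IDs -- substitute the real ones from the imported VPC.
const existingVpc = awsx.ec2.Vpc.fromExistingIds("existing-vpc", {
    vpcId: "vpc-0123456789abcdef0",
    publicSubnetIds: ["subnet-0aaa0aaa0aaa0aaa0", "subnet-0bbb0bbb0bbb0bbb0"],
    privateSubnetIds: ["subnet-0ccc0ccc0ccc0ccc0", "subnet-0ddd0ddd0ddd0ddd0"],
});

const cluster = new eks.Cluster("existing-vpc-cluster", {
    vpcId: existingVpc.id,
    publicSubnetIds: existingVpc.publicSubnetIds,
    privateSubnetIds: existingVpc.privateSubnetIds,
    nodeAssociatePublicIpAddress: false,
});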
b
okay, got it. I'm fairly sure in that case it must be a network/routing issue on your already created vpc. If you check the ec2 console, you should see an autoscaling group, with ec2 instances in it. they probably can't route to the AWS control plane to join the cluster correctly
the nodes are definitely being provisioned, because the cloudformation stack is being created, you can drill down from there to find your nodes
w
👍 Ye we 'inherited' the vpc/subnets/route tables so likely something screwy there. Thanks for replying / investigating!
Awesome thanks, we'll start there.
Oh actually, on this 'pre-existing' topic: when imported, those resources are protected. We should still be able to run destroy on a stack and have everything else killed, right?
b
the AWS control plane is actually on the internet by default, so your private subnets will need an internet gateway to route to it and connect
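A sanity check for that routing (a sketch, assuming the classic @pulumi/aws provider; the subnet ID is a placeholder): look up the route table associated with one of the private subnets and confirm it has a default route out, typically via a NAT gateway.
Copy code
import * as aws from "@pulumi/aws";

// Placeholder subnet ID; worker nodes in this subnet need a 0.0.0.0/0 route
// (via a NAT gateway) to reach the public EKS API endpoint and join the cluster.
const routeTable = aws.ec2.getRouteTable({ subnetId: "subnet-0aaa0aaa0aaa0aaa0" });

export const defaultRoute = routeTable.then(rt =>
    rt.routes.find(r => r.cidrBlock === "0.0.0.0/0"));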
w
Atm our preview on `destroy` even fails because the vpc is protected (but almost all the other resources are not).
b
you can't destroy if anything in your vpc is protected, no. You'll need to remove it from state using `pulumi state delete <urn>`
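As an aside (a sketch, not what was suggested above): if the protected resource is still declared in your program, another option is to set `protect: false` in its resource options and run `pulumi up` once, which clears the protection flag in state so a later `pulumi destroy` can remove it.
Copy code
import * as aws from "@pulumi/aws";

// Illustrative only; the name and CIDR are placeholders and must match the
// real (imported) VPC's configuration.
const importedVpc = new aws.ec2.Vpc("imported-vpc", {
    cidrBlock: "10.0.0.0/16",
}, { protect: false }); // clears the protect flag on the next `pulumi up`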
w
Ye, we have an IGW and a NATG. Also planning to go completely private control plane soon anyway.
Hmm interesting, OK thanks, guess we can write a 'mostly delete' script then by interrogating the state 😛
Any reason it wouldn't / couldn't just reverse the dag and delete what it could (before it got to protected resources)?
b
a `destroy` operation is saying "remove everything in this stack" - the protect just stops that from happening
if it exists in the stack, it'll be deleted
w
Ah OK, so it's an all-or-nothing, 👍