Robert-Jan Huijsman

over 1 year ago
Hey folks. Anyone else suddenly unable to even `preview` their EKS resources?
[...]

    kubernetes:core/v1:Service istio-loadbalancer-hostname  warning: configured Kubernetes cluster is unreachable: unable to load schema information from the API server: Get "https://D01A49DEECB6E23C1A3454E1676679ED.gr7.us-east-2.eks.amazonaws.com/openapi/v2?timeout=32s": getting credentials: exec: executable aws failed with exit code 255
    kubernetes:core/v1:Service istio-loadbalancer-hostname  error: Preview failed: failed to read resource state due to unreachable cluster. If the cluster was deleted, you can remove this resource from Pulumi state by rerunning the operation with the PULUMI_K8S_DELETE_UNREACHABLE environment variable set to "true"
    pulumi:pulumi:Stack resemble-clusters-aws-staging1 running error: preview failed
    kubernetes:core/v1:Service istio-loadbalancer-hostname  1 error; 1 warning
    pulumi:pulumi:Stack resemble-clusters-aws-staging1  1 error; 6 messages
Diagnostics:
  kubernetes:core/v1:Service (istio-loadbalancer-hostname):
    warning: configured Kubernetes cluster is unreachable: unable to load schema information from the API server: Get "https://D01A49DEECB6E23C1A3454E1676679ED.gr7.us-east-2.eks.amazonaws.com/openapi/v2?timeout=32s": getting credentials: exec: executable aws failed with exit code 255
    error: Preview failed: failed to read resource state due to unreachable cluster. If the cluster was deleted, you can remove this resource from Pulumi state by rerunning the operation with the PULUMI_K8S_DELETE_UNREACHABLE environment variable set to "true"

  [...]

  pulumi:pulumi:Stack ([...]):
    error: preview failed

    <botocore.awsrequest.AWSRequest object at 0x78fc7c333190>
    <botocore.awsrequest.AWSRequest object at 0x77d185d577d0>
It seems the `kubeconfig` produced by the EKS provider is incomplete!? That `kubeconfig` has the following (redacted):
[...]
        "exec": {
          "apiVersion": "<http://client.authentication.k8s.io/v1beta1|client.authentication.k8s.io/v1beta1>",
          "args": [
            "eks",
            "get-token",
            "--cluster-name",
            "staging1",
            "--role",
            "arn:aws:iam::123456789012:role/Administrator"
          ],
          "command": "aws",
          "env": [
            {
              "name": "KUBERNETES_EXEC_INFO",
              "value": "{\"apiVersion\": \"<http://client.authentication.k8s.io/v1beta1\|client.authentication.k8s.io/v1beta1\>"}"
            }
          ]
        }
[...]
When I run that myself, I indeed get the reported error:
$ aws eks get-token --cluster-name staging1 --role arn:aws:iam::123456789012:role/Administrator

<botocore.awsrequest.AWSRequest object at 0x72e04d3b0a10>

$ echo $?
255
But if I add the missing `--region` flag to the command, then I can run it just fine:
aws eks get-token --cluster-name staging1 --role arn:aws:iam::123456789012:role/Administrator --region=us-east-2
{
    "kind": "ExecCredential",
    "apiVersion": "<http://client.authentication.k8s.io/v1beta1|client.authentication.k8s.io/v1beta1>",
    "spec": {},
    "status": {
        "expirationTimestamp": "2024-05-16T15:44:50Z",
        "token": "k8s-aws-v1.[...]"
    }
}
So why does Pulumi's EKS provider suddenly produce a `kubeconfig` that doesn't work!? And/or what happened to cause the `--region` flag to have gone missing?
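A possible stopgap while this gets sorted out, sketched under the assumption that the cluster comes from `@pulumi/eks` and that its `kubeconfig` output is JSON (the resource names and region here are placeholders, not from the thread): patch the generated kubeconfig to append the missing `--region` before handing it to an explicit Kubernetes provider.

import * as eks from "@pulumi/eks";
import * as k8s from "@pulumi/kubernetes";

// Placeholder cluster; substitute the existing eks.Cluster resource.
const cluster = new eks.Cluster("staging1");

// Append --region to the `aws eks get-token` exec args so the token command
// succeeds even when no default region is configured in the environment.
const patchedKubeconfig = cluster.kubeconfig.apply(cfg => {
    const config = typeof cfg === "string" ? JSON.parse(cfg) : cfg;
    for (const user of config.users ?? []) {
        const exec = user.user?.exec;
        if (exec?.command === "aws" && !exec.args.includes("--region")) {
            exec.args.push("--region", "us-east-2");
        }
    }
    return JSON.stringify(config);
});

// Route Kubernetes resources through a provider that uses the patched config.
const k8sProvider = new k8s.Provider("eks", { kubeconfig: patchedKubeconfig });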

Daniel Cooke

over 1 year ago
Hey all, I'm struggling to come up with a good way of doing ECS blue/green deployments via Pulumi. I see there are a few open issues on this: https://github.com/pulumi/pulumi-aws/issues/1096 Up until now I had been using a standard ECS deployment with the following Fargate service:
const service = new awsx.ecs.FargateService('templi-api', {
  cluster: baseInfra.requireOutput('clusterArn'),
  enableExecuteCommand: true,

  networkConfiguration: {
    securityGroups: [apiSecurityGroup.id],
    assignPublicIp: true,
    subnets: baseInfra.requireOutput('vpcPublicSubnetIds'),
  },

  taskDefinitionArgs: {
    taskRole: {
      roleArn: taskRole.arn,
    },

    container: {
      name: 'templi-api',
      image: apiImage.imageUri,
      cpu: 128,
      memory: 512,

      essential: true,

      portMappings: [
        {
          containerPort: 80,
          targetGroup: blueTargetGroup,
        },
      ],
    },
  },
});
Every time I ran `pulumi up`, Pulumi would build a new Docker image and update my service task definition accordingly. This worked great. But I have been a little bit disappointed when trying to use the same workflow with CodeDeploy as my deployment controller:
deploymentController: {
  type: 'CODE_DEPLOY',
},
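For reference, a sketch of what the "accompanying CodeDeploy infra" mentioned below could look like in the same stack, assuming `@pulumi/aws`; the names, role ARN, listener ARN, and green target group are all assumptions, not from the original setup:

import * as aws from "@pulumi/aws";

// CodeDeploy application targeting ECS blue/green deployments.
const cdApp = new aws.codedeploy.Application("templi-api-cd", {
    computePlatform: "ECS",
});

// Deployment group that shifts traffic between the blue and green
// target groups behind the production listener.
const cdGroup = new aws.codedeploy.DeploymentGroup("templi-api-dg", {
    appName: cdApp.name,
    deploymentGroupName: "templi-api-dg",
    // Hypothetical IAM role that CodeDeploy can assume for ECS deployments.
    serviceRoleArn: "arn:aws:iam::123456789012:role/CodeDeployECSRole",
    deploymentConfigName: "CodeDeployDefault.ECSAllAtOnce",
    deploymentStyle: {
        deploymentType: "BLUE_GREEN",
        deploymentOption: "WITH_TRAFFIC_CONTROL",
    },
    ecsService: {
        clusterName: "templi-cluster", // assumption
        serviceName: "templi-api",     // assumption
    },
    blueGreenDeploymentConfig: {
        deploymentReadyOption: { actionOnTimeout: "CONTINUE_DEPLOYMENT" },
        terminateBlueInstancesOnDeploymentSuccess: {
            action: "TERMINATE",
            terminationWaitTimeInMinutes: 5,
        },
    },
    loadBalancerInfo: {
        targetGroupPairInfo: {
            // Hypothetical production listener and blue/green target groups.
            prodTrafficRoute: {
                listenerArns: ["arn:aws:elasticloadbalancing:us-east-2:123456789012:listener/app/templi/abc123/def456"],
            },
            targetGroups: [{ name: "templi-blue" }, { name: "templi-green" }],
        },
    },
});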
With the accompanying CodeDeploy infra provisioned, when running `pulumi up` I get the following error:
InvalidParameterException: Unable to update task definition on services with a CODE_DEPLOY deployment controller. Use AWS CodeDeploy to trigger a new deployment.
Which seems to be a limitation from AWS: https://github.com/aws/aws-cdk/issues/1559 (which has since been resolved by a new L2 construct in the CDK). My question is, are there any plans/workarounds to support this in Pulumi? Or if anyone has any guidance on how I can proceed: do I provision just my task definition etc. via Pulumi, and then use `@pulumi/command` to run an `aws deploy`?
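On that last idea, a minimal sketch of driving CodeDeploy from Pulumi with `@pulumi/command`, assuming the application/deployment group names from the sketch above and a Pulumi-managed task definition (the ARN below is a hardcoded stand-in for something like `taskDefinition.arn`):

import * as command from "@pulumi/command";
import * as pulumi from "@pulumi/pulumi";

// Stand-in for the ARN of the Pulumi-managed task definition revision.
const taskDefinitionArn = pulumi.output(
    "arn:aws:ecs:us-east-2:123456789012:task-definition/templi-api:42");

// ECS blue/green deployments take their revision as inline AppSpec content;
// the content field must itself be a JSON string, hence the double encode.
const revision = taskDefinitionArn.apply(arn =>
    JSON.stringify({
        revisionType: "AppSpecContent",
        appSpecContent: {
            content: JSON.stringify({
                version: 0.0,
                Resources: [{
                    TargetService: {
                        Type: "AWS::ECS::Service",
                        Properties: {
                            TaskDefinition: arn,
                            LoadBalancerInfo: {
                                ContainerName: "templi-api",
                                ContainerPort: 80,
                            },
                        },
                    },
                }],
            }),
        },
    }));

// Kick off a CodeDeploy deployment whenever the task definition changes.
const deploy = new command.local.Command("templi-api-deploy", {
    create: pulumi.interpolate`aws deploy create-deployment --application-name templi-api-cd --deployment-group-name templi-api-dg --revision '${revision}'`,
    triggers: [taskDefinitionArn],
});

With this split, Pulumi only registers new task definition revisions and CodeDeploy performs the actual blue/green traffic shift, so Pulumi never updates the service's task definition directly and the InvalidParameterException is avoided.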