sparse-intern-71089
06/21/2019, 9:04 PM

glamorous-thailand-23651
06/21/2019, 9:05 PM
const zone = pulumi.output(aws.route53.getZone({name: config.require("apex-domain")}));
const vpc = new awsx.ec2.Vpc("vpc", {
    cidrBlock: "10.1.0.0/16",
    numberOfNatGateways: 2,
    numberOfAvailabilityZones: "all",
    tags: {"Name": "vpc"}
});
const allSubnetIds = vpc.privateSubnetIds.concat(vpc.publicSubnetIds);
// Create an EKS cluster with the default configuration.
const cluster = new eks.Cluster("k8s", {
    vpcId: vpc.id,
    subnetIds: allSubnetIds,
    nodeAssociatePublicIpAddress: false,
});

glamorous-thailand-23651
06/21/2019, 9:12 PM
@pulumi/aws: 0.18.13
@pulumi/awsx: 0.18.6
@pulumi/docker: 0.17.0
@pulumi/eks: 0.18.8
@pulumi/kubernetes: 0.24.0
@pulumi/pulumi: 0.17.18
@pulumi/query: 0.3.0

lemon-spoon-91807
06/24/2019, 8:55 PM

glamorous-thailand-23651
06/24/2019, 8:56 PM

glamorous-thailand-23651
06/24/2019, 8:57 PM
npm update now after the awsx update announcement, and going to try another pulumi up to see if I get different behavior.

glamorous-thailand-23651
06/24/2019, 8:59 PM
1 Pods failed to schedule because: [Unschedulable] 0/2 nodes are available: 2 node(s) had taints that the pod didn't tolerate. and the kubectl describe nodes output is similar to above.

breezy-hamburger-69619
06/24/2019, 9:00 PM

breezy-hamburger-69619
06/24/2019, 9:01 PM

glamorous-thailand-23651
06/24/2019, 9:02 PM

glamorous-thailand-23651
06/24/2019, 9:02 PM
pulumi up is about to fail after sufficient retries, I’ll grab the output ASAP.

glamorous-thailand-23651
06/24/2019, 9:03 PM

breezy-hamburger-69619
06/24/2019, 9:03 PM

glamorous-thailand-23651
06/24/2019, 9:04 PM

breezy-hamburger-69619
06/24/2019, 9:05 PM

glamorous-thailand-23651
06/24/2019, 9:06 PM

glamorous-thailand-23651
06/24/2019, 9:06 PM

glamorous-thailand-23651
06/24/2019, 9:08 PM
Diagnostic section from the pulumi up that just failed:

Diagnostics:
  kubernetes:apps:Deployment (kube-system/kubernetes-dashboard):
    error: Plan apply failed: 3 errors occurred:
    	* Timeout occurred for 'kubernetes-dashboard'
    	* Minimum number of live Pods was not attained
    	* 1 Pods failed to schedule because: [Unschedulable] 0/2 nodes are available: 2 node(s) had taints that the pod didn't tolerate.
  kubernetes:extensions:Deployment (kube-system/monitoring-influxdb):
    error: Plan apply failed: 3 errors occurred:
    	* Timeout occurred for 'monitoring-influxdb'
    	* Minimum number of Pods to consider the application live was not attained
    	* 1 Pods failed to schedule because: [Unschedulable] 0/2 nodes are available: 2 node(s) had taints that the pod didn't tolerate.
  kubernetes:core:Service (kube-system/heapster):
    error: Plan apply failed: 2 errors occurred:
    	* Timeout occurred for 'heapster'
    	* Service does not target any Pods. Selected Pods may not be ready, or field '.spec.selector' may not match labels on any Pods
  kubernetes:core:Service (kube-system/monitoring-influxdb):
    error: Plan apply failed: 2 errors occurred:
    	* Timeout occurred for 'monitoring-influxdb'
    	* Service does not target any Pods. Selected Pods may not be ready, or field '.spec.selector' may not match labels on any Pods
  kubernetes:core:Service (kube-system/kubernetes-dashboard):
    error: Plan apply failed: 2 errors occurred:
    	* Timeout occurred for 'kubernetes-dashboard'
    	* Service does not target any Pods. Selected Pods may not be ready, or field '.spec.selector' may not match labels on any Pods
  kubernetes:extensions:Deployment (kube-system/heapster):
    error: Plan apply failed: 3 errors occurred:
    	* Timeout occurred for 'heapster'
    	* Minimum number of Pods to consider the application live was not attained
    	* 1 Pods failed to schedule because: [Unschedulable] 0/2 nodes are available: 2 node(s) had taints that the pod didn't tolerate.
  pulumi:pulumi:Stack (megadoomer-io-infra):
    error: update failed
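The Unschedulable errors above are a symptom of the NotReady workers rather than of the workloads themselves: a NotReady node typically carries a taint such as node.kubernetes.io/not-ready with effect NoSchedule, which the kube-system pods (dashboard, heapster, influxdb) don't tolerate by default. A quick sketch, not from the thread, of surfacing those taints from `kubectl get nodes -o json` output; the `nodeTaints` helper and the sample data are hypothetical:

```typescript
// Sketch: extract the taints behind "node(s) had taints that the pod
// didn't tolerate" from the JSON shape returned by `kubectl get nodes -o json`.
interface Taint { key: string; value?: string; effect: string; }
interface Node {
  metadata: { name: string };
  spec: { taints?: Taint[] };
}

// Map each node name to its taints, rendered as key=value:effect strings.
function nodeTaints(nodes: Node[]): Record<string, string[]> {
  const result: Record<string, string[]> = {};
  for (const n of nodes) {
    result[n.metadata.name] = (n.spec.taints ?? []).map(
      t => `${t.key}=${t.value ?? ""}:${t.effect}`
    );
  }
  return result;
}

// Hypothetical sample mirroring the NotReady nodes seen later in the thread.
const sample: Node[] = [{
  metadata: { name: "ip-10-1-129-147.us-west-2.compute.internal" },
  spec: { taints: [{ key: "node.kubernetes.io/not-ready", effect: "NoSchedule" }] },
}];
console.log(nodeTaints(sample));
```

Reading the Taints: line of `kubectl describe nodes` gives the same information directly.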
glamorous-thailand-23651
06/24/2019, 9:08 PM

breezy-hamburger-69619
06/24/2019, 9:09 PM

glamorous-thailand-23651
06/24/2019, 9:10 PM
up attempts

breezy-hamburger-69619
06/24/2019, 9:10 PM

breezy-hamburger-69619
06/24/2019, 9:11 PM
kubectl get nodes return?

glamorous-thailand-23651
06/24/2019, 9:11 PM
$ kubectl get nodes
NAME                                         STATUS     ROLES    AGE   VERSION
ip-10-1-129-147.us-west-2.compute.internal   NotReady   <none>   3d    v1.13.7-eks-c57ff8
ip-10-1-192-106.us-west-2.compute.internal   NotReady   <none>   3d    v1.13.7-eks-c57ff8

breezy-hamburger-69619
06/24/2019, 9:12 PM
kubectl version

glamorous-thailand-23651
06/24/2019, 9:13 PM
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-20T04:49:16Z", GoVersion:"go1.12.6", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"12+", GitVersion:"v1.12.6-eks-d69f1b", GitCommit:"d69f1bf3669bf00b7f4a758e978e0e7a1e3a68f7", GitTreeState:"clean", BuildDate:"2019-02-28T20:26:10Z", GoVersion:"go1.10.8", Compiler:"gc", Platform:"linux/amd64"}
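One thing worth flagging in the kubectl version output above: the client is v1.15 while the server is v1.12, which is outside kubectl's supported skew of one minor version in either direction. A small sketch of that check; the parsing helpers are mine, not from the thread:

```typescript
// Sketch: kubectl supports +/-1 minor version of client/server skew.
// Parse the minor version out of GitVersion strings such as
// "v1.15.0" or "v1.12.6-eks-d69f1b" and compare.
function minorOf(gitVersion: string): number {
  const m = gitVersion.match(/^v\d+\.(\d+)\./);
  if (!m) throw new Error(`unparseable version: ${gitVersion}`);
  return parseInt(m[1], 10);
}

function skewSupported(client: string, server: string): boolean {
  return Math.abs(minorOf(client) - minorOf(server)) <= 1;
}

console.log(skewSupported("v1.15.0", "v1.12.6-eks-d69f1b")); // three minors apart
```

A version mismatch like this doesn't by itself make nodes NotReady, but it can produce confusing client-side behavior, so it's worth ruling out while debugging.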
breezy-hamburger-69619
06/24/2019, 9:13 PM

breezy-hamburger-69619
06/24/2019, 9:13 PM

breezy-hamburger-69619
06/24/2019, 9:13 PM

breezy-hamburger-69619
06/24/2019, 9:13 PM

glamorous-thailand-23651
06/24/2019, 9:14 PM

glamorous-thailand-23651
06/24/2019, 9:14 PM
pulumi destroy and then another pulumi up to see if it was some kind of transient issue

breezy-hamburger-69619
06/24/2019, 9:14 PM

breezy-hamburger-69619
06/24/2019, 9:15 PM

breezy-hamburger-69619
06/24/2019, 9:15 PM

glamorous-thailand-23651
06/24/2019, 9:16 PM
destroy it again, sync + double check version numbers, then up again if that’s an expected path forward

breezy-hamburger-69619
06/24/2019, 9:17 PM
pulumi/eks package per npm install or did you update this from an older version?

glamorous-thailand-23651
06/24/2019, 9:18 PM
npm update at one point (and all my pulumi packages are set to latest) and got new versions. I didn’t pay much attention to what the exact version changes were

breezy-hamburger-69619
06/24/2019, 9:19 PM
pulumi/eks than v0.18.8. I’ll try re-creating your repro on v0.18.8 to see what that does

glamorous-thailand-23651
06/24/2019, 9:20 PM

breezy-hamburger-69619
06/24/2019, 9:20 PM

breezy-hamburger-69619
06/24/2019, 9:20 PM

glamorous-thailand-23651
06/24/2019, 9:21 PM

breezy-hamburger-69619
06/24/2019, 9:23 PM
pulumi/eks v0.18.8 and will report back

breezy-hamburger-69619
06/24/2019, 9:23 PM

glamorous-thailand-23651
06/24/2019, 9:24 PM

glamorous-thailand-23651
06/24/2019, 9:25 PM

breezy-hamburger-69619
06/24/2019, 9:25 PM

breezy-hamburger-69619
06/24/2019, 9:28 PM
deployDashboard: false
1 - https://github.com/pulumi/pulumi-kubernetes/issues/600
2 - https://github.com/pulumi/pulumi-eks/issues/155#issuecomment-504096740
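For reference, the deployDashboard: false suggestion above slots into the cluster definition from the top of the thread like this (a sketch assuming the eks.Cluster options in @pulumi/eks v0.18.x; vpc and allSubnetIds come from the earlier program):

```typescript
import * as eks from "@pulumi/eks";

// Sketch: opt out of the dashboard/heapster/influxdb add-ons so that
// pulumi up no longer blocks waiting on their rollout (see the linked issues).
const cluster = new eks.Cluster("k8s", {
    vpcId: vpc.id,
    subnetIds: allSubnetIds,
    nodeAssociatePublicIpAddress: false,
    deployDashboard: false,
});
```

With the dashboard disabled, nothing is deployed into the cluster by the program itself; confirming health comes down to kubectl get nodes and kubectl get pods --all-namespaces, as discussed below.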
glamorous-thailand-23651
06/24/2019, 9:31 PM

breezy-hamburger-69619
06/24/2019, 9:46 PM
metral@argon:~/megadoomer-io/stacks/infra$ kubectl get nodes
NAME                                         STATUS   ROLES    AGE   VERSION
ip-10-1-132-209.us-east-2.compute.internal   Ready    <none>   11m   v1.12.7
ip-10-1-169-248.us-east-2.compute.internal   Ready    <none>   11m   v1.12.7
metral@argon:~/megadoomer-io/stacks/infra$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                    READY   STATUS    RESTARTS   AGE
kube-system   aws-node-2f2z8                          1/1     Running   0          11m
kube-system   aws-node-ztlld                          1/1     Running   0          11m
kube-system   coredns-65f768bbc8-5d4fx                1/1     Running   0          14m
kube-system   coredns-65f768bbc8-7jqq6                1/1     Running   0          14m
kube-system   heapster-684777c4cb-wvskb               1/1     Running   0          10m
kube-system   kube-proxy-kggzd                        1/1     Running   0          11m
kube-system   kube-proxy-nxnhr                        1/1     Running   0          11m
kube-system   kubernetes-dashboard-67d4c89764-rzd6m   1/1     Running   0          10m
kube-system   monitoring-influxdb-5c5bf4949d-sqxsd    1/1     Running   0          10m
metral@argon:~/megadoomer-io/stacks/infra$ kubectl version
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.6", GitCommit:"ab91afd7062d4240e95e51ac00a18bd58fddd365", GitTreeState:"clean", BuildDate:"2019-02-26T12:59:46Z", GoVersion:"go1.10.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"12+", GitVersion:"v1.12.6-eks-d69f1b", GitCommit:"d69f1bf3669bf00b7f4a758e978e0e7a1e3a68f7", GitTreeState:"clean", BuildDate:"2019-02-28T20:26:10Z", GoVersion:"go1.10.8", Compiler:"gc", Platform:"linux/amd64"}

breezy-hamburger-69619
06/24/2019, 9:47 PM

glamorous-thailand-23651
06/24/2019, 9:48 PM

breezy-hamburger-69619
06/24/2019, 9:48 PM

glamorous-thailand-23651
06/24/2019, 10:12 PM
deployDashboard: false, no pods are deployed by default? in which case I should confirm success with kubectl version ; kubectl get nodes, and if everything looks good I should have no problem deploying other pods (via helm charts, etc) right?

glamorous-thailand-23651
06/24/2019, 10:12 PM

breezy-hamburger-69619
06/24/2019, 10:14 PM

breezy-hamburger-69619
06/24/2019, 10:14 PM

breezy-hamburger-69619
06/24/2019, 10:15 PM

glamorous-thailand-23651
06/24/2019, 10:16 PM
aws:eks:Cluster   k8s-eksCluster   creating... is done (I don’t currently see core-dns, aws-node, kube-proxy listed in the output yet), and then give the dash+heapster a try in another iteration.

glamorous-thailand-23651
06/24/2019, 10:16 PM

breezy-hamburger-69619
06/24/2019, 10:16 PM
kubectl get pods --all-namespaces, not in the Pulumi output

breezy-hamburger-69619
06/24/2019, 10:17 PM

glamorous-thailand-23651
06/24/2019, 10:17 PM

glamorous-thailand-23651
06/24/2019, 10:17 PM

breezy-hamburger-69619
06/24/2019, 10:17 PM

glamorous-thailand-23651
06/24/2019, 10:18 PM

glamorous-thailand-23651
06/24/2019, 10:18 PM