# general

early-musician-41645

02/04/2019, 10:00 PM
Anyone seen this yet?
```
E0204 13:58:59.709463  351850 memcache.go:126] couldn't get current server API group list; will keep using cached value. (Unauthorized)
```

gorgeous-egg-16927

02/04/2019, 10:06 PM
Assuming that's from the k8s provider, it could be a problem with your kubeconfig.

early-musician-41645

02/04/2019, 10:10 PM
This is all failing as part of `pulumi up`, not some later action.
The kubeconfig is pulled directly from the cluster object, as I did previously.
This error only started today.

gorgeous-egg-16927

02/04/2019, 10:11 PM
What version of the kubernetes-provider are you using?

early-musician-41645

02/04/2019, 10:13 PM
```
$ cat package.json
{
  "name": "eks-cluster",
  "devDependencies": {
    "@types/node": "10.12.19"
  },
  "dependencies": {
    "@pulumi/aws": "0.16.7",
    "@pulumi/eks": "0.16.4",
    "@pulumi/kubernetes": "0.20.0",
    "@pulumi/pulumi": "0.16.12"
  }
}
```
This is now blocking both `up` and `destroy`.

gorgeous-egg-16927

02/04/2019, 10:14 PM
Ok, I'm pretty sure that's related to “Upgrade to client-go 0.10.0” in https://github.com/pulumi/pulumi-kubernetes/releases/tag/v0.20.0
So it's not just a warning? It's actually failing the update?

early-musician-41645

02/04/2019, 10:17 PM
It's failing, yes

gorgeous-egg-16927

02/04/2019, 10:18 PM
Hmm, can you give me some more details on your k8s setup? What cloud provider, and is this with a limited-access user account?

early-musician-41645

02/04/2019, 10:20 PM
AWS, EKS, using a full access admin account

gorgeous-egg-16927

02/04/2019, 10:25 PM
Just to confirm, you're able to access the cluster with `kubectl`?

early-musician-41645

02/04/2019, 10:31 PM
no, because the cluster resources are not getting created due to the error above

gorgeous-egg-16927

02/04/2019, 10:37 PM
Alright, I did test that scenario pretty extensively on GKE, so I'm confused why it would be failing like that. Can you open an issue with repro steps so I can investigate further? https://github.com/pulumi/pulumi-kubernetes/issues/new

early-musician-41645

02/04/2019, 10:45 PM
this is EKS, not GKE

gorgeous-egg-16927

02/04/2019, 10:47 PM
Understood. I mean that I tested creating a cluster as part of the workflow. I'm checking with EKS now.

early-musician-41645

02/04/2019, 10:56 PM
I've tried deleting the EKS cluster through the AWS console and then `pulumi refresh`, but it continues failing, specifically on the gp2 storageclass:
```
kubernetes:storage.k8s.io:StorageClass (eshamay-test-eks-cluster-gp2):
    error: Preview failed: the cache has not been filled yet
```

creamy-potato-29402

02/04/2019, 11:13 PM
@early-musician-41645 can you post the entire output?

early-musician-41645

02/04/2019, 11:15 PM
I've since deleted the EKS cluster directly to try and have it recreated via `pulumi up`, but refreshing is failing with this:
```
Previewing refresh (tableau/eshamay-test):

     Type                                          Name                                              Plan        Info
     pulumi:pulumi:Stack                           eks-cluster-eshamay-test                                      2 messages
     ├─ eks:index:Cluster                          eshamay-test-eks-cluster
     │  ├─ eks:index:ServiceRole                   eshamay-test-eks-cluster-instanceRole
     │  │  ├─ aws:iam:Role                         eshamay-test-eks-cluster-instanceRole-role
     │  │  ├─ aws:iam:RolePolicyAttachment         eshamay-test-eks-cluster-instanceRole-03516f97
     │  │  ├─ aws:iam:RolePolicyAttachment         eshamay-test-eks-cluster-instanceRole-e1b295bd
     │  │  └─ aws:iam:RolePolicyAttachment         eshamay-test-eks-cluster-instanceRole-3eb088f2
     │  ├─ eks:index:ServiceRole                   eshamay-test-eks-cluster-eksRole
     │  │  ├─ aws:iam:Role                         eshamay-test-eks-cluster-eksRole-role
     │  │  ├─ aws:iam:RolePolicyAttachment         eshamay-test-eks-cluster-eksRole-4b490823
     │  │  └─ aws:iam:RolePolicyAttachment         eshamay-test-eks-cluster-eksRole-90eb1c99
     │  ├─ pulumi-nodejs:dynamic:Resource          eshamay-test-eks-cluster-cfnStackName
     │  ├─ aws:ec2:KeyPair                         eshamay-test-eks-cluster-keyPair
 ~   │  ├─ aws:ec2:SecurityGroup                   eshamay-test-eks-cluster-eksClusterSecurityGroup  update      [diff: ~ingress]
 ~   │  ├─ aws:iam:InstanceProfile                 eshamay-test-eks-cluster-instanceProfile          update      [diff: ~roles]
     │  ├─ pulumi:providers:kubernetes             eshamay-test-eks-cluster-eks-k8s
 ~   │  ├─ aws:ec2:SecurityGroupRule               eshamay-test-eks-cluster-eksClusterIngressRule    update      [diff: +cidrBlocks,ipv6CidrBlocks,prefixListIds]
     │  ├─ aws:eks:Cluster                         eshamay-test-eks-cluster-eksCluster
 ~   │  ├─ kubernetes:storage.k8s.io:StorageClass  eshamay-test-eks-cluster-gp2                      refresh     1 error
     │  ├─ aws:ec2:LaunchConfiguration             eshamay-test-eks-cluster-nodeLaunchConfiguration
     │  ├─ pulumi-nodejs:dynamic:Resource          eshamay-test-eks-cluster-vpc-cni
 ~   │  └─ aws:ec2:SecurityGroup                   eshamay-test-eks-cluster-nodeSecurityGroup        update      [diff: ~ingress]
     ├─ kubernetes:helm.sh:Chart                   splunk-forwarder
     ├─ pulumi:pulumi:StackReference               tableau/mustang-aws-iam/mustang-aws-iam-sandbox
 ~   └─ aws:ec2:SecurityGroupRule                  ssh-ingress                                       update      [diff: +description,ipv6CidrBlocks,prefixListIds]

Diagnostics:
  pulumi:pulumi:Stack (eks-cluster-eshamay-test):
    E0204 22:50:42.172348    6595 memcache.go:126] couldn't get current server API group list; will keep using cached value. (Get https://88DA2FE0B8F4AF81187BB175B3D7D2F2.yl4.us-west-2.eks.amazonaws.com/api?timeout=32s: dial tcp: lookup 88DA2FE0B8F4AF81187BB175B3D7D2F2.yl4.us-west-2.eks.amazonaws.com on 10.43.42.34:53: no such host)
    E0204 22:50:42.179106    6595 memcache.go:126] couldn't get current server API group list; will keep using cached value. (Get https://88DA2FE0B8F4AF81187BB175B3D7D2F2.yl4.us-west-2.eks.amazonaws.com/api?timeout=32s: dial tcp: lookup 88DA2FE0B8F4AF81187BB175B3D7D2F2.yl4.us-west-2.eks.amazonaws.com on 10.43.42.34:53: no such host)

  kubernetes:storage.k8s.io:StorageClass (eshamay-test-eks-cluster-gp2):
    error: Preview failed: the cache has not been filled yet

error: an error occurred while advancing the preview
```
The `no such host` error is expected, but the cache not being filled is the same error as before.

creamy-potato-29402

02/04/2019, 11:16 PM
can it be filled if the host is missing?
I think no, right?
It seems to me that you’re trying to refresh resources when there’s no cluster, and we’re not smart enough to figure that out.
@early-musician-41645 Can you reproduce the original error?
This `refresh` output looks at least sensible to me.

early-musician-41645

02/04/2019, 11:20 PM
Shouldn't the refresh at least succeed?
I can't do `refresh`, `up`, or `destroy`, because they all fail with an error.
So there are no repros of the original error anymore.

creamy-potato-29402

02/04/2019, 11:21 PM
I'd say that yes, it should, but I also understand why it fails.
The first error I don’t understand yet.

early-musician-41645

02/04/2019, 11:21 PM
I'd expect Pulumi to see that the EKS cluster is gone and thus delete it from the stack
rather than error

creamy-potato-29402

02/04/2019, 11:21 PM
Yes, me too.
Yes, that’s not what it does, but that’s what it should do.
We can fix that.
The first error, though, I need to understand so we can make sure there’s not something else we need to fix.

gorgeous-egg-16927

02/04/2019, 11:23 PM

early-musician-41645

02/04/2019, 11:25 PM
I'm currently creating a brand new stack in the same project

creamy-potato-29402

02/04/2019, 11:25 PM
@early-musician-41645 you can use `pulumi state delete` to have pulumi “forget” those EKS resources; we will fix that issue soon.
well

early-musician-41645

02/04/2019, 11:26 PM
trying

creamy-potato-29402

02/04/2019, 11:27 PM
actually, now that I think about it, we probably can do this for EKS and GKE, but detecting a destroyed cluster vs an unreachable cluster in general seems really hard.
e

early-musician-41645

02/04/2019, 11:32 PM
```
[1262][/home/tsi/eshamay/git/mustang/sdp-mustang-terraform/pulumi/eks-cluster]$ pulumi stack --show-urns | grep gp2
    kubernetes:storage.k8s.io/v1:StorageClass          eshamay-test-eks-cluster-gp2
        URN: urn:pulumi:eshamay-test::eks-cluster::eks:index:Cluster$kubernetes:storage.k8s.io/v1:StorageClass::eshamay-test-eks-cluster-gp2
[1263][/home/tsi/eshamay/git/mustang/sdp-mustang-terraform/pulumi/eks-cluster]$ pulumi state delete urn:pulumi:eshamay-test::eks-cluster::eks:index:Cluster$kubernetes:storage.k8s.io/v1:StorageClass::eshamay-test-eks-cluster-gp2
 warning: This command will edit your stack's state directly. Confirm? Yes
error: No such resource "urn:pulumi:eshamay-test::eks-cluster::eks:index:Cluster:storage.k8s.io/v1:StorageClass::eshamay-test-eks-cluster-gp2" exists in the current state
```
Am I not supposed to use the URN?

creamy-potato-29402

02/04/2019, 11:33 PM
I confess I’ve never actually used it, one second.
@early-musician-41645 hmm, deleting the URN works for me.
@early-musician-41645 try removing the `urn:pulumi:eshamay-test::eks-cluster::eks:index:Cluster$` prefix from that URN

early-musician-41645

02/04/2019, 11:46 PM
```
$ pulumi state delete kubernetes:storage.k8s.io/v1:StorageClass::eshamay-test-eks-cluster-gp2
 warning: This command will edit your stack's state directly. Confirm? Yes
error: No such resource "kubernetes:storage.k8s.io/v1:StorageClass::eshamay-test-eks-cluster-gp2" exists in the current state
```

creamy-potato-29402

02/04/2019, 11:47 PM
cc @incalculable-sundown-82514
I think you owned this, right?

incalculable-sundown-82514

02/04/2019, 11:50 PM
Do you see the URN of the resource when you do `pulumi stack -u`? That’s the argument to pass to `pulumi state delete`

creamy-potato-29402

02/04/2019, 11:51 PM
@incalculable-sundown-82514 yes, see the last couple messages.
He uses `--show-urns`
@early-musician-41645 you need to escape the `$` due to bash?
that’s what we think

early-musician-41645

02/04/2019, 11:57 PM
ah...
that worked
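
For anyone who lands here later: the failing delete above was a shell quoting issue, and it can be reproduced in bash alone, with no Pulumi involved. This is a sketch using the URN from this thread; inside double quotes (or unquoted), bash parses `$kubernetes` as a variable and, since it's unset, expands it to nothing — producing exactly the `Cluster:storage.k8s.io...` URN seen in the error message.

```shell
# Single quotes keep the literal $; double quotes let bash expand $kubernetes
# (an unset variable) to the empty string.
single='urn:pulumi:eshamay-test::eks-cluster::eks:index:Cluster$kubernetes:storage.k8s.io/v1:StorageClass::eshamay-test-eks-cluster-gp2'
double="urn:pulumi:eshamay-test::eks-cluster::eks:index:Cluster$kubernetes:storage.k8s.io/v1:StorageClass::eshamay-test-eks-cluster-gp2"

echo "$single"   # ...Cluster$kubernetes:storage...  (intact URN)
echo "$double"   # ...Cluster:storage...             (the $-segment vanished)

# So single-quote the URN (or backslash-escape the $) when passing it:
#   pulumi state delete 'urn:pulumi:eshamay-test::eks-cluster::eks:index:Cluster$kubernetes:storage.k8s.io/v1:StorageClass::eshamay-test-eks-cluster-gp2'
```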

creamy-potato-29402

02/04/2019, 11:59 PM
what a journey

early-musician-41645

02/04/2019, 11:59 PM
Deleted the gp2 resource, and now destroyed the rest of them

creamy-potato-29402

02/04/2019, 11:59 PM
great.
alright, so, let’s talk more about the original issue.
If you can reproduce it, please paste the entire output here.

early-musician-41645

02/05/2019, 12:02 AM
Yes, the next repro I get, I'll just leave it be until we can take a closer look 😕
We've got lots of churn in the codebase at the moment and many stacks sprouting up for testing