# general
g
error: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1"
error: Running program [PID: 14944](unknown) failed with an unhandled exception:
io.grpc.StatusRuntimeException: UNAVAILABLE: error reading from server: read tcp 127.0.0.1:58449 -> 127.0.0.1:58448: use of closed network connection
	at io.grpc.Status.asRuntimeException(Status.java:535)
	at io.grpc.stub.ClientCalls$UnaryStreamToFuture.onClose(ClientCalls.java:533)
	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:553)
	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:68)
	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:739)
	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:718)
	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
	at java.base/java.lang.Thread.run(Thread.java:833)
I see this error with this Java IaC code:
Copy code
private static void stack(final Context aContext) {
    final var cluster = new Cluster("eks-cluster");
    aContext.export("kubeconfig", cluster.kubeconfig());
}
Any clues why this error occurs? The EKS cluster did start, though, when viewed from the AWS console.
s
Is `Cluster` `aws.eks.Cluster`? (Asking because I need to know which provider you're seeing this with.)
g
Copy code
dependencies {
    implementation 'com.pulumi:pulumi:(,1.0]'
    implementation 'com.pulumi:aws:(,6.0.0]'
    implementation 'com.pulumi:eks:0.37.1'
}
import com.pulumi.Context;
import com.pulumi.Pulumi;
import com.pulumi.eks.Cluster;
It is `com.pulumi.eks.Cluster`. When changed to `com.pulumi.aws.eks.Cluster`, this method is not found: `cluster.kubeconfig()`.
s
What version of the AWS CLI are you using?
g
aws-cli/2.7.24 Python/3.9.11 Windows/10 exe/AMD64 prompt/off
s
What version of kubectl are you using?
g
G:\DWork\osource\cloudc\devops\pulumi\java\gcp\gke>kubectl version
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short.  Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.2", GitCommit:"f66044f4361b9f1f96f0053dd46cb7dce5e990a8", GitTreeState:"clean", BuildDate:"2022-06-15T14:22:29Z", GoVersion:"go1.18.3", Compiler:"gc", Platform:"windows/amd64"}
Kustomize Version: v4.5.4
Unable to connect to the server: dial tcp 127.0.0.1:62506: connectex: No connection could be made because the target machine actively refused it.
G:\DWork\osource\cloudc\devops\pulumi\java\gcp\gke>where kubectl
C:\Program Files\Docker\Docker\resources\bin\kubectl.exe
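(The "dial tcp 127.0.0.1:62506 ... refused" line above suggests kubectl's current context still points at a local cluster, not EKS. A minimal way to check is to look at the server address in the active kubeconfig; since the real file isn't in the thread, a hypothetical docker-desktop entry is simulated here:)

```shell
# Simulate a kubeconfig like the one Docker Desktop's bundled kubectl would use.
# (Hypothetical contents; the real file lives at %USERPROFILE%\.kube\config.)
cat > /tmp/sample-kubeconfig.yaml <<'EOF'
current-context: docker-desktop
clusters:
- name: docker-desktop
  cluster:
    server: https://127.0.0.1:62506
EOF
# A 127.0.0.1 server address means kubectl is talking to a local cluster, not EKS.
grep 'server:' /tmp/sample-kubeconfig.yaml
```

With a live kubectl, `kubectl config current-context` answers the same question directly.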
s
These are the links I'm looking at to debug your issue BTW:
• https://stackoverflow.com/questions/72473455/is-there-a-way-to-solve-kubeconfig-user-entry-is-using-deprecated-api-version
• https://github.com/pulumi/pulumi-eks/issues/599
I was able to find the closed GH issue by doing an org-wide search in GH. (But I also remember seeing a similar problem before, so I had extra context.) I'm asking for the details here to see if we can improve the docs so others don't hit the same issue.
šŸ™ 1
And what K8s version is your cluster?
g
By default this Java code launches 1.22, and even the AWS console is complaining that I should upgrade. Which API should we use so that
Copy code
final var cluster = new Cluster("eks-cluster");
aContext.export("kubeconfig", cluster.kubeconfig());
launches the latest k8s? Currently the latest version in AWS is 1.23.
s
Try ensuring that your kubectl version matches your cluster version.
g
How do I launch the latest k8s cluster? That way I can take the latest of both.
s
I'd suggest making them match first and seeing if that resolves the issue. It looks like you're launching 1.22 and you have kubectl version 1.24. Try specifying an explicit EKS-supported version when you launch the cluster, and make sure that version matches whatever kubectl you have installed: https://www.pulumi.com/registry/packages/eks/api-docs/cluster/#version_java
🙌 1
g
Diagnostics:
  eks:index:VpcCni (eks-cluster-vpc-cni):
    error: Command failed: kubectl apply -f C:\Users\RAJANA~1\AppData\Local\Temp\tmp-22212N6AzGomhBZEB.tmp
    Kubeconfig user entry is using deprecated API version client.authentication.k8s.io/v1alpha1. Run 'aws eks update-kubeconfig' to update.
    Warning: spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[0].matchExpressions[0].key: beta.kubernetes.io/os is deprecated since v1.14; use "kubernetes.io/os" instead
    Warning: spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[0].matchExpressions[1].key: beta.kubernetes.io/arch is deprecated since v1.14; use "kubernetes.io/arch" instead
    error: unable to recognize "C:\\Users\\RAJANA~1\\AppData\\Local\\Temp\\tmp-22212N6AzGomhBZEB.tmp": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
  kubernetes:core/v1:ConfigMap (eks-cluster-nodeAccess):
    error: failed to initialize discovery client: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1"
  pulumi:pulumi:Stack (eks-dev):
    Kubeconfig user entry is using deprecated API version client.authentication.k8s.io/v1alpha1. Run 'aws eks update-kubeconfig' to update.
    Warning: spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[0].matchExpressions[0].key: beta.kubernetes.io/os is deprecated since v1.14; use "kubernetes.io/os" instead
    Warning: spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[0].matchExpressions[1].key: beta.kubernetes.io/arch is deprecated since v1.14; use "kubernetes.io/arch" instead
    error: unable to recognize "C:\\Users\\RAJANA~1\\AppData\\Local\\Temp\\tmp-22212N6AzGomhBZEB.tmp": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
    error: Running program [PID: 18808](unknown) failed with an unhandled exception:
    io.grpc.StatusRuntimeException: UNAVAILABLE: error reading from server: read tcp 127.0.0.1:53436 -> 127.0.0.1:53435: use of closed network connection
	at io.grpc.Status.asRuntimeException(Status.java:535)
	at io.grpc.stub.ClientCalls$UnaryStreamToFuture.onClose(ClientCalls.java:533)
	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:553)
	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:68)
	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:739)
	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:718)
	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
	at java.base/java.lang.Thread.run(Thread.java:833)
Resources:
  + 18 created
Duration: 10m24s
kubectl version --short
Client Version: v1.23.9
With this code, I am able to create a 1.23 EKS cluster:
Copy code
private static void stack(final Context aContext) {
    // requires: import com.pulumi.eks.ClusterArgs;
    final var cluster = new Cluster("eks-cluster", ClusterArgs.builder().version("1.23").build());
    aContext.export("kubeconfig", cluster.kubeconfig());
}
But I still get the above exception:
G:\DWork\osource\cloudc\devops\pulumi\java\aws\eks>kubectl version --short
Client Version: v1.23.7
Unable to connect to the server: dial tcp: lookup 53C969FC89D77BFAD5FDE8508118F6C0.gr7.ap-south-1.eks.amazonaws.com: no such host
s
That sounds like it could be a network issue?
g
s
Are you running Pulumi as an IAM principal with FullAdmin?
g
I am the root user in AWS. Beyond that, I have not configured any IAM principals other than the defaults that come with the root user.
I think the issue is the default role that gets created when the code does not set one explicitly. It looks like there are several versions of the role, and the API may be using the wrong version of the role.
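(A sketch of what that suggestion amounts to: creating an explicit service role and passing it to the cluster instead of relying on the auto-created default. This assumes the pulumi-eks Java SDK exposes a `serviceRole` input on `ClusterArgs`; all names here are hypothetical and untested, not a verified fix.)
Copy code
// Hypothetical sketch: explicit EKS service role instead of the default one.
import com.pulumi.Context;
import com.pulumi.aws.iam.Role;
import com.pulumi.aws.iam.RoleArgs;
import com.pulumi.aws.iam.RolePolicyAttachment;
import com.pulumi.aws.iam.RolePolicyAttachmentArgs;
import com.pulumi.eks.Cluster;
import com.pulumi.eks.ClusterArgs;

private static void stack(final Context aContext) {
    // Trust policy letting the EKS control plane assume this role.
    final var serviceRole = new Role("eks-service-role", RoleArgs.builder()
        .assumeRolePolicy("""
            {
              "Version": "2012-10-17",
              "Statement": [{
                "Effect": "Allow",
                "Principal": { "Service": "eks.amazonaws.com" },
                "Action": "sts:AssumeRole"
              }]
            }
            """)
        .build());

    // AWS-managed policy required by the EKS control plane.
    new RolePolicyAttachment("eks-cluster-policy", RolePolicyAttachmentArgs.builder()
        .role(serviceRole.name())
        .policyArn("arn:aws:iam::aws:policy/AmazonEKSClusterPolicy")
        .build());

    final var cluster = new Cluster("eks-cluster", ClusterArgs.builder()
        .version("1.23")
        .serviceRole(serviceRole) // assumption: pulumi-eks Java exposes this input
        .build());

    aContext.export("kubeconfig", cluster.kubeconfig());
}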