Thread
#general

    brief-helicopter-28120

    1 month ago
    Hello, I'm trying to use k8s.yaml.ConfigFile to deploy CRDs into my EKS cluster (TypeScript) and I'm running into issues. Btw, I'm new to both TypeScript and Pulumi, so please kindly bear with me 🙏 Code:
    import * as k8s from "@pulumi/kubernetes";
    import * as eks from "@pulumi/eks";
    
    
    export default {
        install_crds(cluster: eks.Cluster){
            new k8s.yaml.ConfigFile("argocd_namespace", {
                file: "kubernetes_cluster_components/namespaces/argocd-namespace.yaml",
            }, {providers: { "kubernetes": cluster.provider }});
        }
    };
    Error:
    pulumi:pulumi:Stack  k8s-moralis-aws-dev-argo-test  running.    error: an unhandled error occurred: Program exited with non-zero exit code: -1
    I0804 09:23:53.878138   22054 deployment_executor.go:162] deploymentExecutor.Execute(...): exiting provider canceller
         Type                 Name                           Plan     Info
         pulumi:pulumi:Stack  k8s-moralis-aws-dev-argo-test           1 error; 39 messages
    
    Diagnostics:
      pulumi:pulumi:Stack (k8s-moralis-aws-dev-argo-test):
        Cloud Provider: aws Stack: aws-dev-argo-test
    
        error: an unhandled error occurred: Program exited with non-zero exit code: -1
    The error message is not very descriptive, which makes it difficult to troubleshoot. Can someone please help me here 🙏
    I found the issue. I needed to update the provider option to:
    new k8s.yaml.ConfigFile("argocd_namespace", {
        file: "kubernetes_cluster_components/namespaces/argocd-namespace.yaml",
    }, {provider: cluster.provider })
    and then I kept getting this error:
    error: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1"
    Digging deeper into the issue, I found I had installed the pip version of awscli, and for some reason it wasn't compatible with the current Pulumi operation (the Kubernetes provider). So I had to install awscli through brew and regenerate the kubeconfig, and that finally fixed my issues.
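    For reference, here's a minimal sketch of the whole module with the corrected option (the file path and resource names are just from my setup, so treat them as placeholders):
    import * as k8s from "@pulumi/kubernetes";
    import * as eks from "@pulumi/eks";
    
    export default {
        // Apply the Argo CD namespace manifest with the EKS cluster's own provider,
        // so it lands in this cluster instead of whatever the default kubeconfig points at.
        install_crds(cluster: eks.Cluster) {
            return new k8s.yaml.ConfigFile("argocd_namespace", {
                file: "kubernetes_cluster_components/namespaces/argocd-namespace.yaml",
            }, { provider: cluster.provider });
        },
    };
    As far as I understand, the provider still authenticates through the exec plugin in the generated kubeconfig, which shells out to the AWS CLI, so the awscli version on your PATH has to support the apiVersion written into that kubeconfig.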