Thread
#kubernetes

    chilly-plastic-75584

    7 months ago

    bored-table-20691

    7 months ago
    What you're describing should be possible - e.g. I export the kubeconfig from an EKS cluster in one stack, import it into another, and create the provider from it.
    What is not working?
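    For reference, a minimal TypeScript sketch of that pattern (stack, output, and resource names here are illustrative - it assumes the first stack exports an output named "kubeconfig"):

    import * as pulumi from "@pulumi/pulumi";
    import * as k8s from "@pulumi/kubernetes";

    // Reference the stack that created the cluster (name is hypothetical).
    const infra = new pulumi.StackReference("myorg/infra/dev");

    // Pull its exported kubeconfig and keep it secret when passing it across stacks.
    const kubeconfig = pulumi.secret(infra.requireOutput("kubeconfig"));

    // Explicit provider built from that kubeconfig.
    const provider = new k8s.Provider("eks", { kubeconfig });

    // Resources that should target that cluster reference the provider explicitly.
    const ns = new k8s.core.v1.Namespace("example", {}, { provider });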

    chilly-plastic-75584

    7 months ago
    cat /workspaces/myproject/.cached/.kube/tmp.kube.config.json | gojq -c | pulumi config --stack=myorgd/dev set kubernetes:kubeconfig --secret
    After doing this, pulumi up/refresh and other commands aren't able to connect to the Kubernetes cluster. However, doing this
    KUBECONFIG=/workspaces/myproject/.cached/.kube/tmp.kube.config pulumi up
    does work. I want to avoid having to pass the kubeconfig via the environment and instead embed it in this stack's config (or at least try that out to see if it makes things easier).

    orange-policeman-59119

    7 months ago
    strange, I exclusively use kubernetes:kubeconfig (declared as a secret) in a test project. I see "tmp" in the path here, and that you're piping it through gojq -c - kubeconfigs are usually YAML, though YAML is a super-set of JSON so that alone shouldn't break anything. You're also setting the config from the ".json" version of the file, while the working KUBECONFIG run uses the one without ".json" - is it possible the ".json" file just isn't a valid config? Can you try consuming it as YAML and/or skipping the gojq -c step, e.g.:
    cat /workspaces/myproject/.cached/.kube/tmp.kube.config | pulumi config --stack=myorgd/dev set kubernetes:kubeconfig --secret
    FWIW, you can set multiline strings as secrets no problem.
    If the file at tmp.kube.config.json is valid, it should also work as a KUBECONFIG env var.
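    Once kubernetes:kubeconfig is set as a stack secret, the default Kubernetes provider should pick it up on its own; if you'd rather wire it up explicitly, a rough TypeScript sketch (resource names are just illustrative):

    import * as pulumi from "@pulumi/pulumi";
    import * as k8s from "@pulumi/kubernetes";

    // Read the kubeconfig back out of this stack's config as a secret.
    const kubeconfig = new pulumi.Config("kubernetes").requireSecret("kubeconfig");

    // Build a provider from it instead of relying on an ambient KUBECONFIG env var.
    const provider = new k8s.Provider("from-stack-config", { kubeconfig });

    // Cheap connectivity check: creating a namespace forces a real API call.
    const check = new k8s.core.v1.Namespace("connectivity-check", {}, { provider });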