most-toothbrush-43532
09/16/2022, 3:30 AM
I have an eks.Cluster, and I get an error when I run pulumi up from another stack.
I exported the eks.Cluster kubeconfig as a stack output, but the consuming stack seems to read the previous (stale) kubeconfig.
I tried this, but it didn't work:
PULUMI_K8S_DELETE_UNREACHABLE=true pulumi refresh
This works:
pulumi stack output kubeconfig
but this does not work (it gets the previous kubeconfig):
const eksStack = new pulumi.StackReference('organization/eks/{stack}');
const kubeconfig = eksStack.outputs.kubeconfig;
My Pulumi CLI version is v3.40.0, and my Kubernetes module version is "@pulumi/kubernetes": "^3.21.2".
The error message is below:
pulumi preview
Previewing update (dev)
View Live: https://app.pulumi.com/{masked}/argocd/dev/previews/ce0cceec-7ee2-4d75-aa7b-a7f9bfc1d2f0
Type Name Plan Info
pulumi:pulumi:Stack argocd-dev 1 error
+ └─ kubernetes:core/v1:Namespace argocd-ns create
Diagnostics:
pulumi:pulumi:Stack (argocd-dev):
error: Error: invocation of kubernetes:helm:template returned an error: failed to generate YAML for specified Helm chart: could not get server version from Kubernetes: Get "https://8D443680F22BAD26FCEA969006E150DC.yl4.ap-northeast-2.eks.amazonaws.com/version?timeout=32s": dial tcp: lookup 8D443680F22BAD26FCEA969006E150DC.yl4.ap-northeast-2.eks.amazonaws.com on 127.0.0.53:53: no such host
Anyone have this issue?

export const kubeconfig = cluster.kubeconfig;
const provider = new k8s.Provider('k8sProvider', { kubeconfig: kubeconfig });
I changed it like this and it works.
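For anyone hitting the same thing, here is a fuller sketch of that working pattern. This is only an assumed layout, not my exact project: the stack name `organization/eks/dev` and the namespace resource are placeholders, and I use `getOutput` (the documented StackReference accessor) rather than lifted property access.

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

// Reference the stack that exports the kubeconfig.
// NOTE: "organization/eks/dev" is a placeholder stack name.
const eksStack = new pulumi.StackReference("organization/eks/dev");

// getOutput returns a pulumi.Output resolved at deployment time,
// so the consuming stack picks up the current exported value.
const kubeconfig = eksStack.getOutput("kubeconfig");

// Build an explicit provider from that kubeconfig instead of relying on
// the default provider, which can fall back to a local/stale kubeconfig.
const provider = new k8s.Provider("k8sProvider", { kubeconfig });

// Pass the provider explicitly to every resource that should target
// the referenced cluster.
const ns = new k8s.core.v1.Namespace("argocd-ns", {}, { provider });

export const namespaceName = ns.metadata.name;
```

The key point is passing `{ provider }` in the resource options; without it, resources use the default Kubernetes provider and its ambient kubeconfig.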