# general
h
Hi, I’m doing some exploratory work with pulumi to see if I want to manage EKS/K8S with it in Typescript. I’ve got an example stack where I create a cluster
```typescript
const cluster = new eks.Cluster('dev-exp-pulumi', {...cluster settings...});
```
and then later on in the file after adding a few other resources I deploy a manifest.
```typescript
const provider = new k8s.Provider("provider", {
  kubeconfig: cluster.kubeconfig,
});

const flux = new k8s.yaml.ConfigFile("flux-static", { file: "./files/flux_static.yaml" }, { provider });
```
This gives me the following error trying to build the stack from nothing:
```
pulumi:pulumi:Stack (pulumi-eks-experiment-dev-exp-pulumi):
    error: TypeError: Cannot read property 'map' of undefined
        at pulumi-eks-experiment/node_modules/@pulumi/yaml/yaml.ts:2993:14
        at processTicksAndRejections (internal/process/task_queues.js:97:5)
```
but if I comment out the `const flux = …` line and deploy the stack, the cluster is deployed successfully; if I then uncomment the `const flux = …` line and run it again, the `ConfigFile` is deployed successfully. I suspect it’s something I’m missing with the `Output<>` and `Input<>` stuff, but the error doesn’t seem to match the code. What’s causing my error?
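As a rough analogy (this is plain TypeScript, not the real Pulumi API): an `Output<T>` behaves like a `Promise` whose value is unknown until the underlying resource exists, so code that touches it too early can see `undefined` on a first deploy even though a later update succeeds.

```typescript
// Rough analogy only, not the Pulumi SDK: a deferred value must be
// awaited before use. Reading it too early yields undefined, which is
// how errors like "Cannot read property 'map' of undefined" can arise.
const kubeconfig: Promise<string> = Promise.resolve("apiVersion: v1");

async function main(): Promise<void> {
  const resolved = await kubeconfig;                     // wait for the value
  const parts = resolved.split(":").map(s => s.trim());  // now .map is safe
  console.log(parts);
}

main();
```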
b
i believe you need:
```typescript
const provider = new k8s.Provider("provider", {
  kubeconfig: cluster.kubeconfig,
});

const flux = new k8s.yaml.ConfigFile("flux-static", { file: "./files/flux_static.yaml" }, { provider: provider });
```
h
Did you just add `: provider`? I ask because those are functionally equivalent: if the key name and the variable name you’re passing are identical, you can just write what I did.
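A minimal standalone illustration of that shorthand (plain TypeScript; the `provider` object here is a hypothetical stand-in, not a real Pulumi provider):

```typescript
// ES2015 shorthand property names: when the variable name matches the
// key, { provider } and { provider: provider } produce identical objects.
const provider = { name: "k8s" };

const longhand = { provider: provider };
const shorthand = { provider };

console.log(JSON.stringify(longhand) === JSON.stringify(shorthand)); // true
```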
b
yep, that's good to know. I do think that's causing your issues in this case
h
FYI other people are seeing this identical problem. https://pulumi-community.slack.com/archives/CJ909TL6P/p1654836300810979 I’m pretty sure I tried what you are suggesting, but I will try again. It’s a bit of a slow cycle time when spinning up/down EKS clusters.
b
yep, it appears to be a bug unfortunately 😞
@happy-raincoat-89168 what version of the provider are you using?
h
How do I determine that?
b
run `pulumi about` in your project directory
h
```
Dependencies:
NAME            VERSION
@pulumi/eks     0.40.0
@pulumi/pulumi  3.34.1
@types/node     14.18.21
@pulumi/aws     5.9.0
@pulumi/awsx    0.40.0
```
b
no kubernetes provider?
h
```
Plugins
NAME        VERSION
aws         5.9.0
docker      3.2.0
eks         0.40.0
kubernetes  3.19.4
nodejs      unknown
```
It’s possible I have it set up incorrectly
b
have you done `npm install @pulumi/kubernetes`? i ask because I believe there's a fix for this in `3.19.3`
h
actually, it’s not in my package.json
trying that
Looks like `pulumi up` still gives the same error after `npm install`; now using 3.19.4
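For anyone following along, a quick way to sanity-check that an installed version is at or past a fix release (a hypothetical helper sketch, not part of any Pulumi tooling):

```typescript
// Hypothetical helper: numeric comparison of dotted version strings,
// e.g. to check that @pulumi/kubernetes is at least 3.19.3.
function atLeast(version: string, minimum: string): boolean {
  const a = version.split(".").map(Number);
  const b = minimum.split(".").map(Number);
  for (let i = 0; i < 3; i++) {
    if ((a[i] ?? 0) > (b[i] ?? 0)) return true;
    if ((a[i] ?? 0) < (b[i] ?? 0)) return false;
  }
  return true; // versions are equal
}

console.log(atLeast("3.19.4", "3.19.3")); // true
console.log(atLeast("3.19.2", "3.19.3")); // false
```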
b
okay thanks for checking, i'm following up on the bug
h
Thank you
@billowy-army-68599 Just wanted to check in and see if you learned anything on ^^^^^ No pressure. I’m not sure how fast things move on this project, but like I said wanted to check in.
b
ah yes. can you check you're actually using the right provider version? you can see the one attached to the provider by doing `pulumi stack export`
h
yeah, I’ll try it and let you know
"provider": "urn:pulumi:dev-exp-pulumi::pulumi-eks-experiment::pulumi:providers:aws::default_5_9_0::c98df31b-4fcd-4da4-a201-1dbf97261d67"
Is this what you’re looking for?
"provider": "urn:pulumi:dev-exp-pulumi::pulumi-eks-experiment::pulumi:providers:eks::default::83fd5e78-f400-4901-a9ef-a6fc9ea0efc5"
```json
{
    "urn": "urn:pulumi:dev-exp-pulumi::pulumi-eks-experiment::pulumi:providers:aws::default_5_9_0",
    "custom": true,
    "id": "c98df31b-4fcd-4da4-a201-1dbf97261d67",
    "type": "pulumi:providers:aws",
    "inputs": {
        "region": "us-west-2",
        "version": "5.9.0"
    },
    "outputs": {
        "region": "us-west-2",
        "version": "5.9.0"
    },
    "parent": "urn:pulumi:dev-exp-pulumi::pulumi-eks-experiment::pulumi:pulumi:Stack::pulumi-eks-experiment-dev-exp-pulumi",
    "sequenceNumber": 1
},
```
b
it's specifically the kubernetes provider I'm looking for. search for `3.19`
h
```json
{
    "urn": "urn:pulumi:dev-exp-pulumi::pulumi-eks-experiment::eks:index:Cluster$pulumi:providers:kubernetes::dev-exp-pulumi-eks-k8s",
    "custom": true,
    "id": "d03b2117-53e6-4997-bdf5-069065076adf",
    "type": "pulumi:providers:kubernetes",
    "inputs": {
        "kubeconfig": "{\"apiVersion\":\"v1\",\"clusters\":[{\"cluster\":{\"server\":\"https://BEFA413B2734A367A980319426EB945C.gr7.us-west-2.eks.amazonaws.com\",\"certificate-authority-data\":\"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeU1EWXlPREUzTURjeU0xb1hEVE15TURZeU5URTNNRGN5TTFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTmtwClh6SlFWUlBqRG9nei9Qdk5WaEppSUlQT1hsWTY4WHF2dDVpK3VOZUs3MWdmSjM1VmJ2dEQ0RDdVWnJ6eTB5cFYKZHBYK25NcElOUGRueS80QitnVzZ2TktXcmZhRGI0Y3ljUEFGUllneDcwa0NkYTZtQ2JOYk9saHVPSVFjdVZpOQpBdzBNTlVqT2dyS291bTFhUlgzM3I3dlRtWm8zN0RXYStLbVczdUZRS3ZHakxFS1BEY09rZEtlTUtJZm5EUVBUCnlES1FVeGpvbnVEMXBzVGdwSnp6TjByZDFRZnJIQ3J1aWVIWGw5RXNOaENXb3lLOUpXa001N3MrVjFZR2V4eUIKa2JMcjZYRURqN2pFNXNZU214ZU5ieTBHTzRnSlMrRG5Sb2xGa3c0K2t1RkdNSVpQaG5iN0x3b3U2SU5kMThsZApEUWlUbE5VQjhleE4wQW93M05rQ0F3RUFBYU5aTUZjd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZQaFJvMjhTUjNaQUJvZzJvQXFrQkhsdUtndFZNQlVHQTFVZEVRUU8KTUF5Q0NtdDFZbVZ5Ym1WMFpYTXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBS2Z0VDRYM1BiZm9zZU5EKzBHSAppTWZUSmY4amo1ZnN4NkcxNlM5VjBIYU1oWUo1RVd6TjROOEU5cEE3bDMyaXp4OVArUVpZbW9zWGduTHdxU3lICmVVSlI4MlBBd014Znd0L3JycmN2aG9VMEJ2S1BPdHJpWHQ2TW00eGpyU01TbVdYb2RHUzZodlNnMjB5SExxTzMKK0d6eDJnbUtWd1RCOU1UaE9UalJmUEFoU0ZsWUhYUHhNTlhzN25RQkxXR1ZEcjFlOE5tbmNPUWxaWmF5Y21iegpzY281QWdkYjNtWGpVNEJaR0dZMm12K0QveVdvQndaUkN5VXMxSXo1eHA2eCtRblRCdzJOc3VZMnJia202WW8yCkRzZzU5dXVKNUt5NlpFVVBramltWFZ6UmQ2YzFEVzNrRi9yY2R0WlliSmo4bUVpVGttRnYwcy9DOXZ0c291cFIKcUk4PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==\"},\"name\":\"kubernetes\"}],\"contexts\":[{\"context\":{\"cluster\":\"kubernetes\",\"user\":\"aws\"},\"name\":\"aws\"}],\"current-context\":\"aws\",\"kind\":\"Config\",\"users\":[{\"name\":\"aws\",\"user\":{\"exec\":{\"apiVersion\":\"client.authentication.k8s.io/v1beta1\",\"command\":\"aws\",\"args\":[\"eks\",\"get-token\",\"--cluster-name\",\"dev-exp-pulumi-eksCluster-cdd6498\"],\"env\":[{\"name\":\"KUBERNETES_EXEC_INFO\",\"value\":\"{\\\"apiVersion\\\": \\\"client.authentication.k8s.io/v1beta1\\\"}\"},{\"name\":\"AWS_PROFILE\",\"value\":\"default\"}]}}}]}",
        "version": "3.19.4"
    },
    "outputs": {
        "kubeconfig": "{\"apiVersion\":\"v1\",\"clusters\":[{\"cluster\":{\"server\":\"https://BEFA413B2734A367A980319426EB945C.gr7.us-west-2.eks.amazonaws.com\",\"certificate-authority-data\":\"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeU1EWXlPREUzTURjeU0xb1hEVE15TURZeU5URTNNRGN5TTFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTmtwClh6SlFWUlBqRG9nei9Qdk5WaEppSUlQT1hsWTY4WHF2dDVpK3VOZUs3MWdmSjM1VmJ2dEQ0RDdVWnJ6eTB5cFYKZHBYK25NcElOUGRueS80QitnVzZ2TktXcmZhRGI0Y3ljUEFGUllneDcwa0NkYTZtQ2JOYk9saHVPSVFjdVZpOQpBdzBNTlVqT2dyS291bTFhUlgzM3I3dlRtWm8zN0RXYStLbVczdUZRS3ZHakxFS1BEY09rZEtlTUtJZm5EUVBUCnlES1FVeGpvbnVEMXBzVGdwSnp6TjByZDFRZnJIQ3J1aWVIWGw5RXNOaENXb3lLOUpXa001N3MrVjFZR2V4eUIKa2JMcjZYRURqN2pFNXNZU214ZU5ieTBHTzRnSlMrRG5Sb2xGa3c0K2t1RkdNSVpQaG5iN0x3b3U2SU5kMThsZApEUWlUbE5VQjhleE4wQW93M05rQ0F3RUFBYU5aTUZjd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZQaFJvMjhTUjNaQUJvZzJvQXFrQkhsdUtndFZNQlVHQTFVZEVRUU8KTUF5Q0NtdDFZbVZ5Ym1WMFpYTXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBS2Z0VDRYM1BiZm9zZU5EKzBHSAppTWZUSmY4amo1ZnN4NkcxNlM5VjBIYU1oWUo1RVd6TjROOEU5cEE3bDMyaXp4OVArUVpZbW9zWGduTHdxU3lICmVVSlI4MlBBd014Znd0L3JycmN2aG9VMEJ2S1BPdHJpWHQ2TW00eGpyU01TbVdYb2RHUzZodlNnMjB5SExxTzMKK0d6eDJnbUtWd1RCOU1UaE9UalJmUEFoU0ZsWUhYUHhNTlhzN25RQkxXR1ZEcjFlOE5tbmNPUWxaWmF5Y21iegpzY281QWdkYjNtWGpVNEJaR0dZMm12K0QveVdvQndaUkN5VXMxSXo1eHA2eCtRblRCdzJOc3VZMnJia202WW8yCkRzZzU5dXVKNUt5NlpFVVBramltWFZ6UmQ2YzFEVzNrRi9yY2R0WlliSmo4bUVpVGttRnYwcy9DOXZ0c291cFIKcUk4PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==\"},\"name\":\"kubernetes\"}],\"contexts\":[{\"context\":{\"cluster\":\"kubernetes\",\"user\":\"aws\"},\"name\":\"aws\"}],\"current-context\":\"aws\",\"kind\":\"Config\",\"users\":[{\"name\":\"aws\",\"user\":{\"exec\":{\"apiVersion\":\"client.authentication.k8s.io/v1beta1\",\"command\":\"aws\",\"args\":[\"eks\",\"get-token\",\"--cluster-name\",\"dev-exp-pulumi-eksCluster-cdd6498\"],\"env\":[{\"name\":\"KUBERNETES_EXEC_INFO\",\"value\":\"{\\\"apiVersion\\\": \\\"client.authentication.k8s.io/v1beta1\\\"}\"},{\"name\":\"AWS_PROFILE\",\"value\":\"default\"}]}}}]}",
        "version": "3.19.4"
    },
    "parent": "urn:pulumi:dev-exp-pulumi::pulumi-eks-experiment::eks:index:Cluster::dev-exp-pulumi",
    "dependencies": [
        "urn:pulumi:dev-exp-pulumi::pulumi-eks-experiment::eks:index:Cluster$aws:eks/cluster:Cluster::dev-exp-pulumi-eksCluster"
    ],
    "propertyDependencies": {
        "kubeconfig": [
            "urn:pulumi:dev-exp-pulumi::pulumi-eks-experiment::eks:index:Cluster$aws:eks/cluster:Cluster::dev-exp-pulumi-eksCluster"
        ]
    },
    "sequenceNumber": 1
}
```
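For reference, provider versions can also be pulled out of a `pulumi stack export` programmatically; a sketch assuming the resource shape shown in the excerpt above (field names taken from that JSON):

```typescript
// Sketch: collect provider versions from exported stack resources.
// The resource shape is assumed from the `pulumi stack export` excerpt.
interface ExportedResource {
  type: string;
  inputs?: { version?: string };
}

function providerVersions(resources: ExportedResource[]): Record<string, string> {
  const versions: Record<string, string> = {};
  for (const r of resources) {
    if (r.type.startsWith("pulumi:providers:") && r.inputs?.version) {
      // "pulumi:providers:kubernetes" -> "kubernetes"
      versions[r.type.split(":").pop()!] = r.inputs.version;
    }
  }
  return versions;
}

console.log(providerVersions([
  { type: "pulumi:providers:aws", inputs: { version: "5.9.0" } },
  { type: "pulumi:providers:kubernetes", inputs: { version: "3.19.4" } },
]));
```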
b
okay, we're looking into this, it'll be next week unfortunately
h
That’s totally fine. Thanks for the update. Let me know if I can run anything
Hi @billowy-army-68599, just wanted to check in and see if the team has looked at this issue yet? Looked like you were aiming for last week.
b
Sorry for the delay, took some PTO yesterday. It’s still in progress
h
@billowy-army-68599, no worries. Is there a publicly visible issue/ticket that I can watch so I don’t have to keep bugging you? Thanks!
b
h
Thanks!