# aws
a
Hello, I currently have an EKS cluster that I'm trying to update from version 1.23 to 1.27. The `pulumi preview` fails with the following error log, seemingly from trying to create:
error: Running program '/home/luke/Chthonic/pulumi/infrastructure' failed with an unhandled exception:
    TypeError: Cannot read properties of null (reading 'data')
        at /home/luke/Chthonic/pulumi/infrastructure/node_modules/@pulumi/cluster.ts:578:103
        at /home/luke/Chthonic/pulumi/infrastructure/node_modules/@pulumi/pulumi/output.js:250:35
        at Generator.next (<anonymous>)
        at /home/luke/Chthonic/pulumi/infrastructure/node_modules/@pulumi/pulumi/output.js:21:71
        at new Promise (<anonymous>)
        at __awaiter (/home/luke/Chthonic/pulumi/infrastructure/node_modules/@pulumi/pulumi/output.js:17:12)
        at applyHelperAsync (/home/luke/Chthonic/pulumi/infrastructure/node_modules/@pulumi/pulumi/output.js:229:12)
        at /home/luke/Chthonic/pulumi/infrastructure/node_modules/@pulumi/pulumi/output.js:183:65
        at processTicksAndRejections (node:internal/process/task_queues:95:5)

    unhandled rejection: CONTEXT(1339): resource:luketest-serve[pulumi:providers:kubernetes]
    STACK_TRACE:
    Error
        at Object.debuggablePromise (/home/luke/Chthonic/pulumi/common/node_modules/@pulumi/pulumi/runtime/debuggable.js:69:75)
        at Object.registerResource (/home/luke/Chthonic/pulumi/common/node_modules/@pulumi/pulumi/runtime/resource.js:219:18)
        at new Resource (/home/luke/Chthonic/pulumi/common/node_modules/@pulumi/pulumi/resource.js:215:24)
        at new CustomResource (/home/luke/Chthonic/pulumi/common/node_modules/@pulumi/pulumi/resource.js:307:9)
        at new ProviderResource (/home/luke/Chthonic/pulumi/common/node_modules/@pulumi/pulumi/resource.js:336:9)
        at new Provider (/home/luke/Chthonic/pulumi/common/node_modules/@pulumi/provider.ts:52:9)
        at Object.exports.k8sProvider (/home/luke/Chthonic/pulumi/common/cluster-outputs.ts:12:38)
        at Object.exports.clusterBuilder (/home/luke/Chthonic/pulumi/infrastructure/cluster.ts:96:40)
        at /home/luke/Chthonic/pulumi/infrastructure/index.ts:24:43
        at Generator.next (<anonymous>)
This does not happen when updating the version to 1.24, but for 1.25 and above the same error occurs. I have seen something similar on the Pulumi GitHub, in this issue: https://github.com/pulumi/pulumi-eks/issues/676. Can anyone provide any insight into what's happening? EKS version 1.24 will no longer be supported at the beginning of next year, and 1.23 goes out of AWS support at the end of September.
s
For Kubernetes in general, you typically can't upgrade an existing cluster across multiple minor versions in one jump like that. I don't know if this applies to EKS or not, but you may need to take a stepwise approach (1.23 -> 1.24, 1.24 -> 1.25, and so on).
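To illustrate the stepwise approach, here's a minimal sketch of what the upgrade plan would look like. This isn't Pulumi-specific; `upgradePath` is a hypothetical helper, and each returned version would correspond to one edit of the cluster's `version` setting followed by a `pulumi up`:

```typescript
// Kubernetes clusters are generally upgraded one minor version at a time,
// so a jump like 1.23 -> 1.27 must be broken into intermediate steps.
function upgradePath(from: string, to: string): string[] {
    const [fromMajor, fromMinor] = from.split(".").map(Number);
    const [toMajor, toMinor] = to.split(".").map(Number);
    if (fromMajor !== toMajor || toMinor <= fromMinor) {
        throw new Error(`unsupported upgrade: ${from} -> ${to}`);
    }
    const steps: string[] = [];
    for (let minor = fromMinor + 1; minor <= toMinor; minor++) {
        steps.push(`${fromMajor}.${minor}`);
    }
    return steps;
}

// Each entry is one version bump + `pulumi up`:
console.log(upgradePath("1.23", "1.27")); // [ '1.24', '1.25', '1.26', '1.27' ]
```

In this case the error reproduces even on the single 1.24 -> 1.25 step, so the stepwise plan alone doesn't resolve it, but it's the expected shape of a multi-version upgrade.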
a
Oh, I maybe should have mentioned that I tried going stepwise, from 1.24 to 1.25, and got the same error.
s
Ah, I must’ve missed that, my apologies! In that case, I would add your information to the issue above (it does sound like it's related to your problem). In the meantime, and I recognize this isn’t the ideal path, I’d start looking into migrating from your old cluster to a new one.