# aws

astonishing-journalist-77684

01/31/2024, 10:22 PM
I have all the latest versions of the `@pulumi/*` modules in our project that uses EKS and Fargate. It looks like the Fargate profile for `kube-system` is not reliably created when bringing up a new cluster. This results in the `coredns` pods never being scheduled, which effectively breaks all downstream operations since they all rely on DNS. I've tried a number of things to resolve this, but it seems like a defect. Has anyone else run into this before?
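For context, a minimal cluster definition that exercises this path might look like the following (a sketch; the resource name is illustrative). `fargate: true` is the pulumi-eks option that asks the provider to create a Fargate profile selecting the `default` and `kube-system` namespaces, which is what `coredns` needs in order to be scheduled:

```typescript
import * as eks from "@pulumi/eks";

// Illustrative repro sketch: with `fargate: true`, pulumi-eks is expected
// to create a Fargate profile covering `default` and `kube-system`, so
// the coredns pods have somewhere to run.
const cluster = new eks.Cluster("my-cluster", {
    fargate: true,
});

export const kubeconfig = cluster.kubeconfig;
```

This is an infrastructure definition, not standalone runnable code; deploying it requires a configured AWS account and `pulumi up`.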

salmon-account-74572

02/01/2024, 12:22 AM
Would you mind raising an issue (with as much of the relevant code as you're able to share) in the GitHub repo for EKS? https://github.com/pulumi/pulumi-eks

astonishing-journalist-77684

02/01/2024, 3:38 PM
Hi Scott, I am working on reproduction steps. I am trying to rule out that I've done something incorrect, but it does seem like this is a new issue. I'll put an issue together once I'm confident it's not a mistake on our end.

salmon-account-74572

02/01/2024, 5:36 PM
Fair. If we can help further, let me know.

astonishing-journalist-77684

02/01/2024, 6:06 PM
it definitely appears that the Fargate profiles are no longer created for the `default` or `kube-system` namespaces, or perhaps there is a race condition. It's difficult to tell, but from my testing they're not in place, which causes the `coredns` pods to be unschedulable, and because there's no DNS, everything downstream (for us) fails.

just following up - I was able to work around the issue by creating a `waitFor` method that I could await, passing it the `pulumi.Output<T>` properties, and now I get a consistent cluster up 👍
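The message above doesn't include the workaround code, but a minimal sketch of a `waitFor` helper along these lines might look like the following. The internals are assumptions (only the `waitFor` name and the fact that it awaits `pulumi.Output<T>` values come from the message); `OutputLike` is a hypothetical stand-in for the `apply` subset of `pulumi.Output<T>`, which has the same shape:

```typescript
// OutputLike mirrors the subset of pulumi.Output<T> used here: apply().
interface OutputLike<T> {
    apply<U>(fn: (value: T) => U): OutputLike<U>;
}

// Wrap an Output-like value in a Promise so it can be awaited directly,
// forcing the program to pause until the value is actually resolved
// before creating resources that depend on it.
function waitFor<T>(output: OutputLike<T>): Promise<T> {
    return new Promise<T>((resolve) => {
        output.apply((value) => {
            resolve(value);
            return value;
        });
    });
}
```

In a real program you would `await waitFor(...)` on the relevant outputs (for example, a Fargate profile's properties) before wiring up resources that assume `coredns` is schedulable.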

salmon-account-74572

02/08/2024, 9:06 PM
This might still be worth opening an issue for, if you have a few minutes to do so