# aws
a
I have all the latest versions of the `@pulumi/*` modules in our project that uses EKS and Fargate. It looks like the Fargate profile for `kube-system` is not reliably created when bringing up a new cluster. This results in the `coredns` pods never being scheduled, which effectively breaks all downstream operations since they all rely on DNS. I've tried a number of things to resolve this, but it seems like a defect. Has anyone else run into this before?
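For context, a minimal sketch of the kind of cluster declaration being described (the resource name is illustrative, not from the thread). With `fargate: true`, `@pulumi/eks` is expected to create a Fargate profile covering the `default` and `kube-system` namespaces so that the `coredns` pods can be scheduled:

```typescript
import * as eks from "@pulumi/eks";

// Illustrative cluster declaration (the name "example-cluster" is an assumption).
// With `fargate: true`, pulumi-eks should create a Fargate profile selecting the
// `default` and `kube-system` namespaces; the issue described above is that this
// profile does not reliably appear, leaving the coredns pods unschedulable.
const cluster = new eks.Cluster("example-cluster", {
    fargate: true,
});

export const kubeconfig = cluster.kubeconfig;
```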
s
Would you mind raising an issue (with as much of the relevant code as you're able to share) in the GitHub repo for EKS? https://github.com/pulumi/pulumi-eks
a
Hi Scott, I am working on reproduction steps. I'm trying to rule out that I've done something incorrect, but it does seem like a new issue. I'll put an issue together once I'm confident the problem isn't on our end.
s
Fair. If we can help further, let me know.
a
It definitely appears that the Fargate profiles are no longer created for the `default` or `kube-system` namespaces, or perhaps there is a race condition. It's difficult to tell, but from my testing they're not in place, which causes the `coredns` pods to be unschedulable, and because there's no DNS, everything downstream (for us) fails.
Just following up: I was able to work around the issue by creating a `waitFor` method that I could await, passing it the `pulumi.Output<T>` properties, and now I get a consistent cluster up 👍
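For reference, a minimal sketch of what such a helper might look like; the actual implementation isn't shown in the thread, so this is only an assumption built on the standard `pulumi.Output.apply` API:

```typescript
import * as pulumi from "@pulumi/pulumi";

// Hypothetical waitFor helper: wraps a pulumi.Output<T> in a Promise<T> that
// resolves once the output's value becomes known, so later setup steps can be
// sequenced behind it with `await`. Note that `apply` callbacks only run when
// the value is actually available (i.e. during an update, not during preview).
function waitFor<T>(output: pulumi.Output<T>): Promise<T> {
    return new Promise<T>((resolve) => {
        output.apply((value) => {
            resolve(value);
            return value;
        });
    });
}

// Illustrative usage (names are assumptions, not from the thread):
// const kubeconfig = await waitFor(cluster.kubeconfig);
```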
s
This might still be worth opening an issue, if you have a few minutes to do so