# aws
Posting here in case it's helpful for anyone else trying to deploy to EKS with fargate:
oh is this a good idea? I was thinking of trying pulumi-operator to deploy Stacks into an EKS cluster from a management EKS cluster, but maybe Fargate is smarter? @stocky-restaurant-98004 suggestions?
hi Andrew - relatively new to pulumi (and fargate fwiw) but my team has been using this approach until recently and we haven't found a way to get that working. The Pulumi AI also recommends this path but if you can send me documentation for a different approach, I'd be happy to try it out.
my original conception was to create an EKS cluster and install only the pulumi-operator in it. From there, use CI/CD with GitHub Actions to define Stacks that include the application EKS cluster, child helm charts, container images, etc., so you basically encapsulate your kubernetes orchestration and production workloads as objects managed by the pulumi-operator
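To make that concrete: in the Pulumi Kubernetes Operator, a Stack is itself a Kubernetes custom resource. A minimal sketch might look like the following; the names, repo URL, and secret name here are all placeholders, and the exact spec fields depend on which operator version you run:

```yaml
apiVersion: pulumi.com/v1
kind: Stack
metadata:
  name: app-cluster            # hypothetical name
  namespace: pulumi-operator
spec:
  stack: myorg/app-cluster/prod               # placeholder org/project/stack
  projectRepo: https://github.com/myorg/infra # placeholder repo
  branch: main
  accessTokenSecret: pulumi-api-secret        # Secret holding PULUMI_ACCESS_TOKEN
```

Applying a resource like this tells the operator to clone the repo and run `pulumi up` for that stack inside the cluster.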
that wouldn't address how to configure fargate successfully
(which isn't kubernetes specific, it's an AWS technology for having on-demand worker nodes vs. statically provisioned EC2 based nodes)
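For anyone following along: on EKS, Fargate scheduling is driven by Fargate profiles, which match pods by namespace (and optionally labels) and run the matched pods on on-demand Fargate capacity instead of EC2 nodes. A hedged eksctl-style sketch, with made-up cluster name and namespaces:

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: demo-cluster   # placeholder
  region: us-west-2    # placeholder
fargateProfiles:
  - name: default
    selectors:
      # pods in these namespaces get scheduled onto Fargate
      - namespace: default
      - namespace: pulumi-operator
```

Pods that don't match any profile selector still need a regular (EC2-backed) node group to land on.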
I think this is really smart as the pulumi-operator will do CRUD on your workloads.
I concur it does not help with getting fargate spun up.
which is what the issue I filed is in reference to 😅
fwiw, I wrote my own Pulumi package to manage EKS to work around all the bugs in pulumi-eks
it creates components for fargate profiles, installs all the needed addons and some extra goodies like Karpenter
wow lots of good stuff in there
do you have a suggestion for helm releases? The latest version from Pulumi is completely broken; I think I have to use a bash script.
ah I see this
```typescript
const wordpress = new kubernetes.helm.v3.Release("wordpress", {
    chart: "wordpress",
    repositoryOpts: {
        repo: "<>",
    },
    values: {
        wordpressUsername: "lbrlabs",
        wordpressPassword: "correct-horse-battery-stable",
        wordpressEmail: "<|>",
        ingress: {
            enabled: true,
            ingressClassName: "external",
            hostname: "",
            tls: true,
            annotations: {
                "<|>": "letsencrypt-prod",
                "<|>": "true",
            },
        },
    },
}, { provider: provider });

export const kubeconfig = cluster.kubeconfig;
```
the version I have pinned works for me
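In case it helps, pinning a known-good provider version is straightforward in package.json; the exact version shown below is just an illustration, use whichever one works for you:

```json
{
  "dependencies": {
    "@pulumi/pulumi": "^3.0.0",
    "@pulumi/kubernetes": "3.22.1"
  }
}
```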
@billowy-army-68599 this seems like a bug in a Pulumi-supported module, wouldn’t you agree?
yes, there’s quite a few - that package needs completely overhauling imo