Powered by Linen
kubernetes
  • f

    fast-easter-23401

    01/17/2022, 9:23 PM
Hello there folks, I'm facing this issue while trying to pulumi up:
    warning: resource plugin kubernetes is expected to have version >=3.13.0, but has 3.13.0-alpha.1640142079+cb2803c5.dirty; the wrong version may be on your path, or this may be a bug in the plugin
    How can I determine whether the root problem is a misconfigured value in my environment or a problem in the underlying plugin?
    • 1
    • 1
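A common way to narrow this down (a hedged sketch, not from the thread itself — the alpha version string is copied from the warning above) is to check which plugin binaries Pulumi actually sees, remove the stray pre-release build, and reinstall the pinned release:

```shell
# List installed resource plugins and their versions; look for the
# 3.13.0-alpha... build the warning mentions.
pulumi plugin ls

# Remove the dev/alpha build of the kubernetes plugin.
pulumi plugin rm resource kubernetes 3.13.0-alpha.1640142079+cb2803c5.dirty

# Reinstall the released version the program expects.
pulumi plugin install resource kubernetes v3.13.0

# The warning also suggests a PATH problem: check for a stray
# pulumi-resource-kubernetes binary shadowing the installed plugin.
which -a pulumi-resource-kubernetes
```

If `which -a` shows a binary outside `~/.pulumi/plugins`, that locally built one is likely the culprit rather than a bug in the plugin.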
  • b

    bumpy-summer-9075

    01/18/2022, 2:36 PM
I deployed a helm chart using pulumi and on every up, pulumi recreates a secret for a reason that escapes me:
--kubernetes:core/v1:secret: (delete-replaced)
                [id=sonarqube/sonarqube-postgresql]
                [urn=urn:pulumi:dev::infra-do-cluster::company:kubernetes:Sonarqube$kubernetes:helm.sh/v3:Chart$kubernetes:core/v1:secret::sonarqube/sonarqube-postgresql]
                [provider=urn:pulumi:dev::infra-do-cluster::pulumi:providers:kubernetes::doks::01acb2ec-8062-4904-9f58-b2144e4043f3]
            +-kubernetes:core/v1:secret: (replace)
                [id=sonarqube/sonarqube-postgresql]
                [urn=urn:pulumi:dev::infra-do-cluster::company:kubernetes:Sonarqube$kubernetes:helm.sh/v3:Chart$kubernetes:core/v1:secret::sonarqube/sonarqube-postgresql]
                [provider=urn:pulumi:dev::infra-do-cluster::pulumi:providers:kubernetes::doks::01acb2ec-8062-4904-9f58-b2144e4043f3]
              ~ data: {
                }
            ++kubernetes:core/v1:secret: (create-replacement)
                [id=sonarqube/sonarqube-postgresql]
                [urn=urn:pulumi:dev::infra-do-cluster::company:kubernetes:Sonarqube$kubernetes:helm.sh/v3:Chart$kubernetes:core/v1:secret::sonarqube/sonarqube-postgresql]
                [provider=urn:pulumi:dev::infra-do-cluster::pulumi:providers:kubernetes::doks::01acb2ec-8062-4904-9f58-b2144e4043f3]
              ~ data: {
                }
    Anyone know why this occurs? It doesn't break anything but is an annoyance šŸ˜•
    b
    b
    • 3
    • 3
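A likely cause (an assumption, not confirmed in the thread): many postgresql charts generate a fresh random password on each template render, so the rendered Secret's data differs on every `pulumi up` and triggers a replace. One hedged workaround is a chart transformation that tells Pulumi to ignore data changes on that Secret — the chart/repo names below are placeholders:

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

// Hypothetical sketch: chart name, repo, and secret name are placeholders.
const sonarqube = new k8s.helm.v3.Chart("sonarqube", {
    chart: "sonarqube",
    namespace: "sonarqube",
    fetchOpts: { repo: "https://example.com/helm-charts" },
    transformations: [
        // Each rendered object passes through here before registration;
        // for the churning Secret, tell Pulumi to ignore `data` diffs.
        (obj: any, opts: pulumi.CustomResourceOptions) => {
            if (obj.kind === "Secret" && obj.metadata?.name === "sonarqube-postgresql") {
                opts.ignoreChanges = ["data"];
            }
        },
    ],
});
```

Alternatively, pinning the password explicitly in the chart's values (the exact key is chart-dependent) makes the render deterministic and removes the diff at the source.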
  • b

    busy-island-31180

    01/18/2022, 10:42 PM
I’m trying to use the Kubernetes provider for Pulumi to deploy several resources, and I’m using patterns like myService.metadata.name in order to pass information around. I’ve found that this ends up creating dependencies, and in some cases, deadlock situations. In this example, a service will not become ā€œhealthyā€ until there are pods registered to it. Yet, when I’m using something like ArgoRollouts, there’s a pointer back to my service name:
strategy:
  # Blue-green update strategy
  blueGreen:
    # Reference to service that the rollout modifies as the active service.
    # Required.
    activeService: active-service
So, because of this, Pulumi wants to deploy the service first, and the rollout second. But the service never becomes healthy, as it’s waiting on the rollout. The workaround is to define a variable and pass its value to all the places that need the service name. I was also thinking of a way to tell Pulumi to not block other resources from being created while this one becomes healthy? The data is available regardless of health state. For k8s resources, I feel like a lot of the data is static, and yet it’s treated as dynamic, thus I can’t easily make these references. (I’m coming from Jsonnet, which makes this quite easy.) Has anyone else run into these types of problems? Do you have any patterns or practices that make the code feel ā€œrefactor proofā€? What about the case when you don’t know the service name? (Maybe it’s being returned as part of a higher-level function.)
    b
    • 2
    • 21
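The workaround described above — holding the name in a plain string so no dependency edge is created — can be sketched like this (resource names and the app label are illustrative):

```typescript
import * as k8s from "@pulumi/kubernetes";

// Since Kubernetes names are chosen by you, they can be plain strings.
// Referencing a string creates no dependency edge, so Pulumi can create the
// Service and the Rollout in parallel and the health deadlock never forms.
const activeServiceName = "active-service";

const svc = new k8s.core.v1.Service("active-service", {
    metadata: { name: activeServiceName },
    spec: { selector: { app: "my-app" }, ports: [{ port: 80 }] },
});

// Argo Rollouts has no typed SDK here, so model it as a CustomResource.
const rollout = new k8s.apiextensions.CustomResource("my-rollout", {
    apiVersion: "argoproj.io/v1alpha1",
    kind: "Rollout",
    spec: {
        strategy: {
            blueGreen: {
                // Same plain string -- no dependency on svc's health.
                activeService: activeServiceName,
            },
        },
    },
});
```

If the name comes from a higher-level function, returning the plain string alongside the resource (rather than reading it back off `metadata.name`) keeps the same property.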
  • m

    microscopic-library-98015

    01/19/2022, 9:12 AM
Has anyone experienced a segfault like this?
    FirmNav/firmnav/staging (pulumi:pulumi:Stack)
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x1ec736c]
goroutine 61 [running]:
k8s.io/apimachinery/pkg/apis/meta/v1/unstructured.(*Unstructured).GetNamespace(...)
	/home/runner/go/pkg/mod/k8s.io/apimachinery@v0.22.1/pkg/apis/meta/v1/unstructured/unstructured.go:234
github.com/pulumi/pulumi-kubernetes/provider/v3/pkg/await.(*deploymentInitAwaiter).Await(0xc0037b7078, 0x0, 0x0)
	/home/runner/work/pulumi-kubernetes/pulumi-kubernetes/provider/pkg/await/deployment.go:153 +0xec
github.com/pulumi/pulumi-kubernetes/provider/v3/pkg/await.glob..func2(0xc0003025e8, 0x2897aa0, 0xc0005b41c0, 0xc000e9ea80, 0x7a, 0xc00379b050, 0x7, 0xc0035e1100, 0xc000726960, 0xc001967d58, ...)
	/home/runner/work/pulumi-kubernetes/pulumi-kubernetes/provider/pkg/await/awaiters.go:147 +0x125
github.com/pulumi/pulumi-kubernetes/provider/v3/pkg/await.Update(0x2897aa0, 0xc0005b41c0, 0xc0003025e8, 0xc000e9ea80, 0x7a, 0xc00379b050, 0x7, 0x0, 0x0, 0xc000726960, ...)
	/home/runner/work/pulumi-kubernetes/pulumi-kubernetes/provider/pkg/await/await.go:430 +0xb35
github.com/pulumi/pulumi-kubernetes/provider/v3/pkg/provider.(*kubeProvider).Update(0xc0002e0f00, 0x2897b48, 0xc0037d6060, 0xc00376b700, 0xc0002e0f00, 0x21cf001, 0xc0037d13c0)
	/home/runner/work/pulumi-kubernetes/pulumi-kubernetes/provider/pkg/provider/provider.go:2166 +0xfc5
github.com/pulumi/pulumi/sdk/v3/proto/go._ResourceProvider_Update_Handler.func1(0x2897b48, 0xc0037d6060, 0x23ef780, 0xc00376b700, 0x23db960, 0x3ab62a8, 0x2897b48, 0xc0037d6060)
	/home/runner/go/pkg/mod/github.com/pulumi/pulumi/sdk/v3@v3.19.0/proto/go/provider.pb.go:2638 +0x8b
github.com/grpc-ecosystem/grpc-opentracing/go/otgrpc.OpenTracingServerInterceptor.func1(0x2897b48, 0xc003799380, 0x23ef780, 0xc00376b700, 0xc0037a0540, 0xc0037894e8, 0x0, 0x0, 0x284d6a0, 0xc0003a8e70)
	/home/runner/go/pkg/mod/github.com/grpc-ecosystem/grpc-opentracing@v0.0.0-20180507213350-8e809c8a8645/go/otgrpc/server.go:57 +0x30a
github.com/pulumi/pulumi/sdk/v3/proto/go._ResourceProvider_Update_Handler(0x247e9a0, 0xc0002e0f00, 0x2897b48, 0xc003799380, 0xc00378b740, 0xc000704a00, 0x2897b48, 0xc003799380, 0xc0037a4000, 0x2d84)
	/home/runner/go/pkg/mod/github.com/pulumi/pulumi/sdk/v3@v3.19.0/proto/go/provider.pb.go:2640 +0x150
google.golang.org/grpc.(*Server).processUnaryRPC(0xc000421880, 0x28b48f8, 0xc0002e1500, 0xc00379d200, 0xc0007e3b60, 0x3a52770, 0x0, 0x0, 0x0)
	/home/runner/go/pkg/mod/google.golang.org/grpc@v1.38.0/server.go:1286 +0x52b
google.golang.org/grpc.(*Server).handleStream(0xc000421880, 0x28b48f8, 0xc0002e1500, 0xc00379d200, 0x0)
	/home/runner/go/pkg/mod/google.golang.org/grpc@v1.38.0/server.go:1609 +0xd0c
google.golang.org/grpc.(*Server).serveStreams.func1.2(0xc000777b70, 0xc000421880, 0x28b48f8, 0xc0002e1500, 0xc00379d200)
	/home/runner/go/pkg/mod/google.golang.org/grpc@v1.38.0/server.go:934 +0xab
created by google.golang.org/grpc.(*Server).serveStreams.func1
	/home/runner/go/pkg/mod/google.golang.org/grpc@v1.38.0/server.go:932 +0x1fd
error: update failed
app-staging-deployment (firmnav:app:App$kubernetes:apps/v1:Deployment)
error: transport is closing
    It’s consistently failing in CI, though it did work once out of like 10 tries.. I can’t see what should’ve changed on our end to suddenly cause it. For info we’re using the GCP and kubernetes providers
    s
    • 2
    • 1
  • p

    prehistoric-kite-30979

    01/24/2022, 7:53 PM
    Is there a good way in native Pulumi to get the token (secret) of a service account I created. ServiceAccount.secrets will give me the name of the token created, so I think I just need to be able to read a secret created outside of Pulumi?
    f
    • 2
    • 4
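A hedged sketch of reading that token back (resource names are placeholders): on clusters where the ServiceAccount controller still auto-creates token Secrets (pre-1.24 behavior), ServiceAccount.secrets yields the generated name, and k8s.core.v1.Secret.get can read that existing, non-Pulumi-managed secret by "namespace/name" ID:

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

const sa = new k8s.core.v1.ServiceAccount("ci-bot", {
    metadata: { namespace: "default" },
});

// sa.secrets holds references to the auto-created token secret(s);
// take the first one's name and read the secret it points to.
const tokenSecretName = sa.secrets.apply(s => s[0].name);
const tokenSecret = k8s.core.v1.Secret.get(
    "ci-bot-token",
    pulumi.interpolate`default/${tokenSecretName}`,
);

// Secret.data values are base64-encoded; decode and keep it secret.
export const token = pulumi.secret(
    tokenSecret.data.apply(d => Buffer.from(d["token"], "base64").toString("utf-8")),
);
```

Note (assumption worth checking for your cluster version): on Kubernetes 1.24+ token Secrets are no longer auto-created for ServiceAccounts, so there you would create the token Secret explicitly instead.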
  • b

    brave-room-27374

    01/25/2022, 3:38 AM
Hi, I am using apple M1 and getting 403 HTTP error fetching plugin while doing pulumi up; basically, I can't install the kubernetes plugin
    šŸ˜€ āžœ  pulumi plugin install resource kubernetes v1.0.6
    [resource plugin kubernetes-1.0.6] installing
error: [resource plugin kubernetes-1.0.6] downloading from : 403 HTTP error fetching plugin from https://get.pulumi.com/releases/plugins/pulumi-resource-kubernetes-v1.0.6-darwin-arm64.tar.gz
    o
    m
    s
    • 4
    • 11
  • b

    brave-room-27374

    01/25/2022, 3:39 AM
I followed the troubleshooting document, but that does not work either: https://www.pulumi.com/docs/troubleshooting/#i-dont-have-access-to-an-intel-based-computer
  • b

    brave-room-27374

    01/25/2022, 3:39 AM
    any further guide to make it work on apple M1 chip?
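A hedged explanation (an inference from the 403, not confirmed in the thread): the darwin-arm64 archive for such an old plugin release was likely never published, so the download legitimately 404s/403s. Two common workarounds:

```shell
# Option 1 (preferred): upgrade the provider SDK so the project requests a
# plugin version that does ship a darwin-arm64 build -- e.g. for a
# Node.js project (adjust for your language SDK):
npm install @pulumi/kubernetes@latest
pulumi up

# Option 2: install the darwin-x64 (Intel) build of the Pulumi CLI and run
# it under Rosetta 2; an amd64 CLI fetches the -darwin-amd64 plugin
# archives, which do exist for old releases. This is the approach the
# troubleshooting doc linked above describes.
```

The same reasoning applies to the digitalocean-2.6.0 error below — the fix is requesting a plugin version that has an arm64 artifact, not retrying the download.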
  • g

    great-tomato-45422

    01/27/2022, 7:07 PM
[resource plugin digitalocean-2.6.0] downloading from : 403 HTTP error fetching plugin from https://get.pulumi.com/releases/plugins/pulumi-resource-digitalocean-v2.6.0-darwin-arm64.tar.gz
  • g

    great-tomato-45422

    01/27/2022, 7:07 PM
    pulumi is asking for me to install a plugin. then erroring when installing it
  • g

    great-tomato-45422

    01/27/2022, 7:07 PM
    haven't run pulumi in a few months. not sure what all this is about.
  • g

    great-tomato-45422

    01/27/2022, 7:34 PM
    hmm, creating a new stack seems to be happier. I wish I could fix the old though
  • g

    great-tomato-45422

    01/27/2022, 7:34 PM
    I can stack export and look around. not sure what to edit though
    s
    • 2
    • 1
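For inspecting and repairing the old stack, the usual export/edit/import cycle looks like this (a sketch — the jq query is illustrative; inspect your own state file before changing anything):

```shell
# Export the stack's state to a file for inspection.
pulumi stack export --file state.json

# Look around: resource types and provider references live under
# .deployment.resources; stale plugin versions show up in the
# "provider" URN strings of each resource.
jq '.deployment.resources[].type' state.json | sort -u

# After editing state.json (carefully, with a backup), load it back.
pulumi stack import --file state.json
```

Editing exported state is a last resort; a backup copy of the original export makes it reversible.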
  • n

    narrow-judge-54785

    01/28/2022, 9:53 AM
Hi, I'm trying to register a CRD to my cluster but I'm getting the error
error: no resource plugin 'crds' found in the workspace or on your $PATH
If I check the documentation page for CRDs, at the bottom it points me to the kubernetes plugin, which I have installed. I couldn't find any specific resource plugin named crds, so couldn't get the tag either. For reference, I converted these CRDs for snapshot volumes using the crd2pulumi CLI. Anyone have an idea what I'm missing here? FYI, I registered these CRDs using the yaml files and kubectl just fine, but I want to get them into pulumi. Thx already!
    o
    • 2
    • 2
  • c

    chilly-plastic-75584

    01/28/2022, 6:56 PM
Trying to figure out the behavior with an update on pod annotations here, and a conflict that I think was caused by a manual edit I wasn't aware of. Not sure if it's a red herring though. I put details here if anyone can help me out: https://github.com/pulumi/pulumi/discussions/8874 Cheers
  • s

    sparse-park-68967

    02/04/2022, 7:04 PM
    Hi Folks! FYI - The Helm Release support is now Generally Available as of v3.15.0 of the Kubernetes provider: https://www.pulumi.com/blog/helm-release-resource-for-kubernetes-generally-available/
    šŸš€ 2
    šŸ™Œ 1
  • m

    most-lighter-95902

    02/05/2022, 4:49 AM
    I keep getting this error - works fine if I destroy the stack and re-up it, but I can’t update the stack:
  • m

    most-lighter-95902

    02/05/2022, 4:49 AM
panic: runtime error: invalid memory address or nil pointer dereference
    [signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x2ac51cc]
    goroutine 66 [running]:
    k8s.io/apimachinery/pkg/apis/meta/v1/unstructured.(*Unstructured).GetNamespace(...)
        /home/runner/go/pkg/mod/k8s.io/apimachinery@v0.22.1/pkg/apis/meta/v1/unstructured/unstructured.go:234
    github.com/pulumi/pulumi-kubernetes/provider/v3/pkg/await.(*deploymentInitAwaiter).Await(0xc002e5b078, 0x0, 0x0)
        /home/runner/work/pulumi-kubernetes/pulumi-kubernetes/provider/pkg/await/deployment.go:153 +0xec
    github.com/pulumi/pulumi-kubernetes/provider/v3/pkg/await.glob..func2(0xc000ad9a58, 0x3495fa0, 0xc000af6d80, 0xc002585220, 0xa0, 0xc002e05050, 0x7, 0xc002e69980, 0xc000710ed0, 0xc0019c5900, ...)
        /home/runner/work/pulumi-kubernetes/pulumi-kubernetes/provider/pkg/await/awaiters.go:147 +0x125
    github.com/pulumi/pulumi-kubernetes/provider/v3/pkg/await.Update(0x3495fa0, 0xc000af6d80, 0xc000ad9a58, 0xc002585220, 0xa0, 0xc002e05050, 0x7, 0x0, 0x0, 0xc000710ed0, ...)
        /home/runner/work/pulumi-kubernetes/pulumi-kubernetes/provider/pkg/await/await.go:430 +0xb35
    github.com/pulumi/pulumi-kubernetes/provider/v3/pkg/provider.(*kubeProvider).Update(0xc000583500, 0x3496048, 0xc002e66c60, 0xc002e08f00, 0xc000583500, 0x2dccf01, 0xc002e69000)
        /home/runner/work/pulumi-kubernetes/pulumi-kubernetes/provider/pkg/provider/provider.go:2166 +0xfc5
    github.com/pulumi/pulumi/sdk/v3/proto/go._ResourceProvider_Update_Handler.func1(0x3496048, 0xc002e66c60, 0x2fed860, 0xc002e08f00, 0x2fd9900, 0x46b6dc8, 0x3496048, 0xc002e66c60)
        /home/runner/go/pkg/mod/github.com/pulumi/pulumi/sdk/v3@v3.19.0/proto/go/provider.pb.go:2638 +0x8b
    github.com/grpc-ecosystem/grpc-opentracing/go/otgrpc.OpenTracingServerInterceptor.func1(0x3496048, 0xc002e24270, 0x2fed860, 0xc002e08f00, 0xc002e26140, 0xc002e15ba8, 0x0, 0x0, 0x344ba20, 0xc0003b8f40)
        /home/runner/go/pkg/mod/github.com/grpc-ecosystem/grpc-opentracing@v0.0.0-20180507213350-8e809c8a8645/go/otgrpc/server.go:57 +0x30a
    github.com/pulumi/pulumi/sdk/v3/proto/go._ResourceProvider_Update_Handler(0x307cb40, 0xc000583500, 0x3496048, 0xc002e24270, 0xc002dfb380, 0xc000b32460, 0x3496048, 0xc002e24270, 0xc002e30000, 0x46e6)
        /home/runner/go/pkg/mod/github.com/pulumi/pulumi/sdk/v3@v3.19.0/proto/go/provider.pb.go:2640 +0x150
    google.golang.org/grpc.(*Server).processUnaryRPC(0xc000437dc0, 0x34b2f78, 0xc000583980, 0xc002f6c000, 0xc000b50330, 0x4653730, 0x0, 0x0, 0x0)
        /home/runner/go/pkg/mod/google.golang.org/grpc@v1.38.0/server.go:1286 +0x52b
    google.golang.org/grpc.(*Server).handleStream(0xc000437dc0, 0x34b2f78, 0xc000583980, 0xc002f6c000, 0x0)
        /home/runner/go/pkg/mod/google.golang.org/grpc@v1.38.0/server.go:1609 +0xd0c
    google.golang.org/grpc.(*Server).serveStreams.func1.2(0xc000b04e30, 0xc000437dc0, 0x34b2f78, 0xc000583980, 0xc002f6c000)
        /home/runner/go/pkg/mod/google.golang.org/grpc@v1.38.0/server.go:934 +0xab
    created by google.golang.org/grpc.(*Server).serveStreams.func1
        /home/runner/go/pkg/mod/google.golang.org/grpc@v1.38.0/server.go:932 +0x1fd
    g
    • 2
    • 2
  • m

    most-lighter-95902

    02/05/2022, 4:49 AM
    Anyone run into this?
  • h

    high-grass-3103

    02/05/2022, 2:12 PM
    Hi! I don't have any experience with k8s, so I'm most likely doing it wrong. I'm hiding my k8s server behind a bastion server and only use it via local cli. I don't see a simple way for pulumi to set up ssh tunnelling... what's the industry practice here? Is it safe to expose k8s api to the public?
    q
    s
    • 3
    • 5
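Exposing the API server publicly is generally discouraged; the usual practice is to keep it private and tunnel through the bastion. A hedged sketch (host names, the API server's private address, and ports are all placeholders):

```shell
# Forward local port 6443 through the bastion to the private API server.
# -N: no remote command, just the tunnel.
ssh -N -L 6443:10.0.0.5:6443 user@bastion.example.com &

# Point the kubeconfig cluster entry at the tunnel. --tls-server-name keeps
# certificate validation working even though we connect via 127.0.0.1.
kubectl config set-cluster my-cluster \
  --server=https://127.0.0.1:6443 \
  --tls-server-name=10.0.0.5
```

With the kubeconfig set up this way, both kubectl and the Pulumi Kubernetes provider (which reads the same kubeconfig) go through the tunnel without any Pulumi-specific support.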
  • s

    square-car-84996

    02/07/2022, 2:53 AM
i'd like to generate random passwords -> store in a kubernetes secret -> and read those in subsequent pulumi up runs... i tried using k8s.core.v1.Secret.get but it's hard failing due to #3364. It seems that if I just have a normal Secret resource, every time my code `up`s it just generates new random passwords... Has anyone successfully done something like this?
    h
    q
    • 3
    • 9
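One pattern that avoids Secret.get entirely (a sketch — resource names are placeholders): let Pulumi own the password via @pulumi/random. A random.RandomPassword is stored (encrypted) in the stack state and stays stable across `pulumi up` runs unless its inputs or `keepers` change, so nothing needs to be read back from the cluster:

```typescript
import * as k8s from "@pulumi/kubernetes";
import * as random from "@pulumi/random";

// Generated once, then persisted in the stack state; subsequent `up`s
// reuse the same value instead of regenerating it.
const dbPassword = new random.RandomPassword("db-password", {
    length: 24,
    special: false,
});

const dbSecret = new k8s.core.v1.Secret("db-credentials", {
    metadata: { namespace: "default" },
    stringData: {
        // .result is already marked secret by the random provider.
        password: dbPassword.result,
    },
});
```

If regeneration is happening with code like this, a changing input (e.g. a recomputed `keepers` value) is the usual cause.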
  • c

    chilly-plastic-75584

    02/07/2022, 7:10 PM
    Need a pointer (no pun intended) on this issue: https://github.com/pulumi/pulumi/discussions/8939 Getting mixed up on string pointer/input/output.
    b
    • 2
    • 22
  • c

    chilly-plastic-75584

    02/08/2022, 2:10 AM
    🧵 Is Config able to load a nested struct?
    b
    • 2
    • 47
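Yes — structured config can be read as a typed object. A sketch in TypeScript (the config key and shape are illustrative):

```typescript
import * as pulumi from "@pulumi/pulumi";

// Given Pulumi.<stack>.yaml containing, e.g.:
//
//   config:
//     myproj:cluster:
//       name: primary
//       nodes: 3
//       labels:
//         env: dev
//
interface ClusterConfig {
    name: string;
    nodes: number;
    labels: { [key: string]: string };
}

const cfg = new pulumi.Config();
// requireObject deserializes the nested structure into the interface;
// getObject is the optional variant.
const cluster = cfg.requireObject<ClusterConfig>("cluster");
```

The Go SDK has an equivalent, to my understanding: `config.New(ctx, "")` followed by `cfg.RequireObject("cluster", &target)` with a struct pointer.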
  • c

    chilly-plastic-75584

    02/08/2022, 6:49 PM
    Added Question: Kubernetes kubeconfig setting being set in stack from kubeconfig json as secret
    b
    o
    • 3
    • 5
  • c

    chilly-plastic-75584

    02/08/2022, 8:56 PM
One more - I'm busy today šŸ˜‰ Thank you for your effort Pulumi Pros šŸ™ What change triggers a redeploy of a Kubernetes ConfigMap? #8952
    b
    q
    o
    • 4
    • 58
  • s

    square-car-84996

    02/10/2022, 12:43 PM
i have a helm release i'm deploying with pulumi. Does anyone have a good pattern for making API calls to configure the service after it is created/updated? This helm chart is severely lacking in the ability to configure it declaratively in the chart values, so I need to configure it post-deployment.
    q
    • 2
    • 15
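One hedged pattern for this (chart, repo, and endpoint below are placeholders): run the post-deploy configuration as a command.local.Command from @pulumi/command that depends on the Release and re-runs when the deployed version changes:

```typescript
import * as k8s from "@pulumi/kubernetes";
import * as command from "@pulumi/command";

const release = new k8s.helm.v3.Release("my-svc", {
    chart: "my-chart",
    repositoryOpts: { repo: "https://charts.example.com" },
});

// Runs after the release is deployed; `triggers` makes it re-run whenever
// the chart version changes, so upgrades get re-configured too.
const configure = new command.local.Command("configure-my-svc", {
    create: "curl -sf -X POST https://my-svc.example.com/api/v1/settings -d @settings.json",
    triggers: [release.version],
}, { dependsOn: [release] });
```

The trade-off: the command's effects live outside Pulumi's state, so deletes/rollbacks of that configuration need a corresponding `delete` command (or manual cleanup).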
  • s

    square-car-84996

    02/10/2022, 2:25 PM
is there any way in pulumi to temporarily open a kubectl port-forward to a service?
    q
    • 2
    • 2
  • c

    chilly-plastic-75584

    02/10/2022, 8:58 PM
    🧵 Related to the prior question port forwarding... I'd like to optimize the workflow for my team a bit since we aren't currently setup to do local kubernetes development, only docker. The problem is a bunch of shared services we need to access for this to function in a microservices style context. .... (thread has details)
    q
    • 2
    • 6
  • r

    ripe-shampoo-80285

    02/12/2022, 5:39 PM
    https://pulumi-community.slack.com/archives/C84L4E3N1/p1644687520969139
    q
    • 2
    • 1
  • c

    chilly-plastic-75584

    02/14/2022, 6:12 PM
I need to add a new cluster for a multi cluster deployment. As I'm still learning K8s:
• Do I just loop the provider and run 2 iterations to deploy through all? Doesn't make sense to me to consider it as a new stack.
• OR is there an EASY way to do both so that a "light weight" blue green consistency is achieved like a normal pulumi create/replace and do both clusters as "one"? (I guess that's sorta like a multiplexer for providers?) Key is I want to keep it simple. Right now everyone else is just using kubectl, so any improvement on that approach on multi-cluster would be good. (Agent deploying has access to reach all clusters)
ā—¦ I wasn't certain if idiomatic design patterns for kubernetes expect cluster app versions to be matching, or if it's normal on issues to have them get out of sync etc.
    šŸ‘€ 1
    b
    • 2
    • 8
šŸ‘‰ bump! Even if just a quick answer - wondering about multi-region (2 clusters at least) deployments, and if I should just add a list of clusters to iterate through, or something else? Not finding a lot of clear reading on this.
b

bored-table-20691

02/15/2022, 6:16 PM
I think it’s really up to you how you want to structure it. I’m not quite sure what you mean about blue/green relative to your original question
c

chilly-plastic-75584

02/15/2022, 6:19 PM
Pulumi does blue -> green "light" via create-before-delete: the replacement is created and its health checks pass before the old resource is deleted.
So it's working well without dealing with full copies of the entire app.
Now, I have been told n clusters (multi-region right now) need my deployment. I am not sure if I just loop through a list of clusters and deploy to each, or if there is a better practice with pulumi. I was trying to figure out: if, say, 1 cluster failed to get my updated deployment and the other worked, a loop would just let them be on different versions (not sure that's normal). Trying to figure out if I need some way of saying all deployments = success, NOT just one cluster being ok while the other is a failure.
Still learning k8s, so not sure what is an idiomatic pattern for k8s with deployments like this. I'm assuming right now most folks are just taking deployment yaml and kubectl apply-ing on both clusters with no real logic of "make sure all succeeded".
b

bored-table-20691

02/15/2022, 7:07 PM
I think this is more up to you and less about Pulumi - that is, what level of granularity do you want to do things at, ensure success, handle rollbacks, etc. Pulumi can be the tool that helps you achieve the desired state, but for something like what you’re describing, you need some higher-level orchestration to reason about it.
c

chilly-plastic-75584

02/15/2022, 7:35 PM
Ok, assuming the current solution is just raw kubectl yaml to each in a loop, I'm assuming I'll just add a list of clusters to my stack and have it loop the entire stack through each. The alternative I can think of is that I'd need to use components more and make my stack deploy the resources to different providers as two unique parts of the stack instead of a loop, which seems overcomplicated and annoying.
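The loop approach discussed here can be sketched as follows (a hedged sketch — the config key, kubeconfig source, and app details are placeholders):

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

// e.g. stack config holding one kubeconfig per region.
const cfg = new pulumi.Config();
const clusters: { name: string; kubeconfig: string }[] =
    cfg.requireObject("clusters");

for (const cluster of clusters) {
    // One explicit provider per cluster.
    const provider = new k8s.Provider(`k8s-${cluster.name}`, {
        kubeconfig: cluster.kubeconfig,
    });

    // The same deployment pushed through each provider. If any cluster's
    // rollout fails its health checks, the whole `pulumi up` fails, which
    // gives "all clusters succeeded or the update is a failure" semantics.
    new k8s.apps.v1.Deployment(`app-${cluster.name}`, {
        spec: {
            selector: { matchLabels: { app: "my-app" } },
            replicas: 2,
            template: {
                metadata: { labels: { app: "my-app" } },
                spec: { containers: [{ name: "app", image: "my-app:1.2.3" }] },
            },
        },
    }, { provider });
}
```

A failed `up` leaves clusters on mixed versions until the next successful run, so rollback/retry policy still needs deciding; splitting clusters into per-region stacks is the usual alternative when their lifecycles diverge.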