fast-easter-23401
01/17/2022, 9:23 PM
`pulumi up`:
warning: resource plugin kubernetes is expected to have version >=3.13.0, but has 3.13.0-alpha.1640142079+cb2803c5.dirty; the wrong version may be on your path, or this may be a bug in the plugin
How can I determine whether the root problem is a misconfigured value in my environment or a problem in the underlying plugin?
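The `+cb2803c5.dirty` suffix usually means a locally built dev binary of the plugin is being picked up from $PATH ahead of the released one, which would point at the environment rather than a plugin bug. One way to make the expected version explicit is the standard `version` resource option; a minimal TypeScript sketch (the namespace is just a placeholder, not from the thread):

import * as k8s from "@pulumi/kubernetes";

// Pin the kubernetes provider plugin to the release the program expects.
// If a stray locally built binary on $PATH is being selected, pinning
// surfaces the mismatch immediately.
const probe = new k8s.core.v1.Namespace("version-probe", {}, {
    version: "3.13.0",
});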
bumpy-summer-9075
01/18/2022, 2:36 PM
Every time I run `up`, pulumi recreates a secret for a reason that escapes me:
--kubernetes:core/v1:secret: (delete-replaced)
    [id=sonarqube/sonarqube-postgresql]
    [urn=urn:pulumi:dev::infra-do-cluster::company:kubernetes:Sonarqube$kubernetes:helm.sh/v3:Chart$kubernetes:core/v1:secret::sonarqube/sonarqube-postgresql]
    [provider=urn:pulumi:dev::infra-do-cluster::pulumi:providers:kubernetes::doks::01acb2ec-8062-4904-9f58-b2144e4043f3]
+-kubernetes:core/v1:secret: (replace)
    [id=sonarqube/sonarqube-postgresql]
    [urn=urn:pulumi:dev::infra-do-cluster::company:kubernetes:Sonarqube$kubernetes:helm.sh/v3:Chart$kubernetes:core/v1:secret::sonarqube/sonarqube-postgresql]
    [provider=urn:pulumi:dev::infra-do-cluster::pulumi:providers:kubernetes::doks::01acb2ec-8062-4904-9f58-b2144e4043f3]
  ~ data: {
    }
++kubernetes:core/v1:secret: (create-replacement)
    [id=sonarqube/sonarqube-postgresql]
    [urn=urn:pulumi:dev::infra-do-cluster::company:kubernetes:Sonarqube$kubernetes:helm.sh/v3:Chart$kubernetes:core/v1:secret::sonarqube/sonarqube-postgresql]
    [provider=urn:pulumi:dev::infra-do-cluster::pulumi:providers:kubernetes::doks::01acb2ec-8062-4904-9f58-b2144e4043f3]
  ~ data: {
    }
Anyone know why this occurs? It doesn't break anything but is an annoyance 🙂
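A common cause of this pattern with Helm charts is a template that generates a fresh random password on each render, so the rendered Secret's data changes on every `pulumi up` and triggers a replace. A hedged sketch of pinning the value through chart values; the repo URL and the `postgresql.postgresqlPassword` key are assumptions about this chart's layout, not taken from the thread:

import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

const config = new pulumi.Config();

// Feed the chart a stable password from stack config so the template
// stops re-randomizing the Secret on every render.
const sonarqube = new k8s.helm.v3.Chart("sonarqube", {
    chart: "sonarqube",
    fetchOpts: { repo: "https://oteemo.github.io/charts" }, // assumed repo
    values: {
        postgresql: { postgresqlPassword: config.requireSecret("pgPassword") },
    },
});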
busy-island-31180
01/18/2022, 10:42 PM
I use references like `myService.metadata.name` in order to pass information around. I've found that this ends up creating dependencies, and in some cases, deadlock situations.
In this example, a service will not become "healthy" until there are pods registered to it. Yet, when I'm using something like ArgoRollouts, there's a pointer back to my service name:
strategy:
  # Blue-green update strategy
  blueGreen:
    # Reference to service that the rollout modifies as the active service.
    # Required.
    activeService: active-service
So, because of this, Pulumi wants to deploy the service first and the rollout second. But the service never becomes healthy, as it's waiting on the rollout.
The workaround is to define a variable and pass its value to all the places that need the service name, as in the sketch below.
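A minimal TypeScript sketch of that workaround, with hypothetical names: the service name lives in a plain string, so the Rollout never reads an output of the Service and no dependency edge is created between them:

import * as k8s from "@pulumi/kubernetes";

// The name is an ordinary string decided up front, not a resource output.
const activeServiceName = "active-service";

const service = new k8s.core.v1.Service("active-service", {
    metadata: { name: activeServiceName },
    spec: { selector: { app: "my-app" }, ports: [{ port: 80 }] },
});

// Referencing the string (rather than service.metadata.name) means Pulumi
// sees no dependency from the Rollout to the Service, so neither waits on
// the other. The Rollout's selector/template are elided for brevity.
const rollout = new k8s.apiextensions.CustomResource("my-rollout", {
    apiVersion: "argoproj.io/v1alpha1",
    kind: "Rollout",
    spec: {
        strategy: { blueGreen: { activeService: activeServiceName } },
    },
});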
I was also wondering if there is a way to tell Pulumi not to block other resources from being created while this one becomes healthy? The data is available regardless of health state.
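pulumi-kubernetes does ship an annotation for exactly this: `pulumi.com/skipAwait` makes the provider treat the resource as ready immediately instead of waiting on its health checks, so dependents are not blocked. A small sketch:

import * as k8s from "@pulumi/kubernetes";

// With skipAwait, Pulumi marks the Service ready as soon as it is created,
// rather than waiting for endpoints to register behind it.
const service = new k8s.core.v1.Service("active-service", {
    metadata: {
        name: "active-service",
        annotations: { "pulumi.com/skipAwait": "true" },
    },
    spec: { selector: { app: "my-app" }, ports: [{ port: 80 }] },
});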
For k8s resources, I feel like a lot of the data is static, and yet it's treated as dynamic, so I can't easily make these references. (I'm coming from Jsonnet, which makes this quite easy.)
Has anyone else run into these types of problems? Do you have any patterns or practices that make the code feel "refactor proof"?
What about the case when you don't know the service name? (Maybe it's being returned as part of a higher-level function.)
microscopic-library-98015
01/19/2022, 9:12 AM
FirmNav/firmnav/staging (pulumi:pulumi:Stack)
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x1ec736c]
goroutine 61 [running]:
k8s.io/apimachinery/pkg/apis/meta/v1/unstructured.(*Unstructured).GetNamespace(...)
    /home/runner/go/pkg/mod/k8s.io/apimachinery@v0.22.1/pkg/apis/meta/v1/unstructured/unstructured.go:234
github.com/pulumi/pulumi-kubernetes/provider/v3/pkg/await.(*deploymentInitAwaiter).Await(0xc0037b7078, 0x0, 0x0)
    /home/runner/work/pulumi-kubernetes/pulumi-kubernetes/provider/pkg/await/deployment.go:153 +0xec
github.com/pulumi/pulumi-kubernetes/provider/v3/pkg/await.glob..func2(0xc0003025e8, 0x2897aa0, 0xc0005b41c0, 0xc000e9ea80, 0x7a, 0xc00379b050, 0x7, 0xc0035e1100, 0xc000726960, 0xc001967d58, ...)
    /home/runner/work/pulumi-kubernetes/pulumi-kubernetes/provider/pkg/await/awaiters.go:147 +0x125
github.com/pulumi/pulumi-kubernetes/provider/v3/pkg/await.Update(0x2897aa0, 0xc0005b41c0, 0xc0003025e8, 0xc000e9ea80, 0x7a, 0xc00379b050, 0x7, 0x0, 0x0, 0xc000726960, ...)
    /home/runner/work/pulumi-kubernetes/pulumi-kubernetes/provider/pkg/await/await.go:430 +0xb35
github.com/pulumi/pulumi-kubernetes/provider/v3/pkg/provider.(*kubeProvider).Update(0xc0002e0f00, 0x2897b48, 0xc0037d6060, 0xc00376b700, 0xc0002e0f00, 0x21cf001, 0xc0037d13c0)
    /home/runner/work/pulumi-kubernetes/pulumi-kubernetes/provider/pkg/provider/provider.go:2166 +0xfc5
github.com/pulumi/pulumi/sdk/v3/proto/go._ResourceProvider_Update_Handler.func1(0x2897b48, 0xc0037d6060, 0x23ef780, 0xc00376b700, 0x23db960, 0x3ab62a8, 0x2897b48, 0xc0037d6060)
    /home/runner/go/pkg/mod/github.com/pulumi/pulumi/sdk/v3@v3.19.0/proto/go/provider.pb.go:2638 +0x8b
github.com/grpc-ecosystem/grpc-opentracing/go/otgrpc.OpenTracingServerInterceptor.func1(0x2897b48, 0xc003799380, 0x23ef780, 0xc00376b700, 0xc0037a0540, 0xc0037894e8, 0x0, 0x0, 0x284d6a0, 0xc0003a8e70)
    /home/runner/go/pkg/mod/github.com/grpc-ecosystem/grpc-opentracing@v0.0.0-20180507213350-8e809c8a8645/go/otgrpc/server.go:57 +0x30a
github.com/pulumi/pulumi/sdk/v3/proto/go._ResourceProvider_Update_Handler(0x247e9a0, 0xc0002e0f00, 0x2897b48, 0xc003799380, 0xc00378b740, 0xc000704a00, 0x2897b48, 0xc003799380, 0xc0037a4000, 0x2d84)
    /home/runner/go/pkg/mod/github.com/pulumi/pulumi/sdk/v3@v3.19.0/proto/go/provider.pb.go:2640 +0x150
google.golang.org/grpc.(*Server).processUnaryRPC(0xc000421880, 0x28b48f8, 0xc0002e1500, 0xc00379d200, 0xc0007e3b60, 0x3a52770, 0x0, 0x0, 0x0)
    /home/runner/go/pkg/mod/google.golang.org/grpc@v1.38.0/server.go:1286 +0x52b
google.golang.org/grpc.(*Server).handleStream(0xc000421880, 0x28b48f8, 0xc0002e1500, 0xc00379d200, 0x0)
    /home/runner/go/pkg/mod/google.golang.org/grpc@v1.38.0/server.go:1609 +0xd0c
google.golang.org/grpc.(*Server).serveStreams.func1.2(0xc000777b70, 0xc000421880, 0x28b48f8, 0xc0002e1500, 0xc00379d200)
    /home/runner/go/pkg/mod/google.golang.org/grpc@v1.38.0/server.go:934 +0xab
created by google.golang.org/grpc.(*Server).serveStreams.func1
/home/runner/go/pkg/mod/google.golang.org/grpc@v1.38.0/server.go:932 +0x1fd
error: update failed
app-staging-deployment (firmnav:app:App$kubernetes:apps/v1:Deployment)
error: transport is closing
It's consistently failing in CI, though it did work once out of like 10 tries. I can't see what should've changed on our end to suddenly cause it. For info, we're using the GCP and kubernetes providers.
brave-room-27374
01/25/2022, 3:38 AM
I'm on an Apple M1 and getting `403 HTTP error fetching plugin` while doing `pulumi up`; basically, I can't install the kubernetes plugin:
$ pulumi plugin install resource kubernetes v1.0.6
[resource plugin kubernetes-1.0.6] installing
error: [resource plugin kubernetes-1.0.6] downloading from : 403 HTTP error fetching plugin from https://get.pulumi.com/releases/plugins/pulumi-resource-kubernetes-v1.0.6-darwin-arm64.tar.gz
brave-room-27374
01/25/2022, 3:39 AM
I tried the steps in the troubleshooting docs as well, but that does not work either:
https://www.pulumi.com/docs/troubleshooting/#i-dont-have-access-to-an-intel-based-computer
narrow-judge-54785
01/28/2022, 9:53 AM
I'm getting `error: no resource plugin 'crds' found in the workspace or on your $PATH`. If I check the documentation page for CRDs, at the bottom it points me to the kubernetes plugin, which I have installed. I couldn't find any specific resource plugin named crds, so I couldn't get the tag either. For reference, I converted these CRDs for snapshot volumes using the crd2pulumi CLI. Anyone have an idea what I'm missing here? FYI, I registered these CRDs using the yaml files and kubectl just fine, but I want to get them into pulumi. Thx already!
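While debugging the generated package, one way to sidestep the missing 'crds' plugin entirely is the generic `k8s.apiextensions.CustomResource` type, which needs only the kubernetes plugin. A sketch with a hypothetical VolumeSnapshotClass (the driver name is made up):

import * as k8s from "@pulumi/kubernetes";

// VolumeSnapshotClass keeps `driver` and `deletionPolicy` at the top level
// rather than under `spec`; CustomResource passes extra properties through.
const snapClass = new k8s.apiextensions.CustomResource("snapclass", {
    apiVersion: "snapshot.storage.k8s.io/v1",
    kind: "VolumeSnapshotClass",
    metadata: { name: "csi-snapclass" },
    driver: "hypothetical.csi.example.com",
    deletionPolicy: "Delete",
});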
most-lighter-95902
02/05/2022, 4:49 AM
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x2ac51cc]
goroutine 66 [running]:
k8s.io/apimachinery/pkg/apis/meta/v1/unstructured.(*Unstructured).GetNamespace(...)
    /home/runner/go/pkg/mod/k8s.io/apimachinery@v0.22.1/pkg/apis/meta/v1/unstructured/unstructured.go:234
github.com/pulumi/pulumi-kubernetes/provider/v3/pkg/await.(*deploymentInitAwaiter).Await(0xc002e5b078, 0x0, 0x0)
    /home/runner/work/pulumi-kubernetes/pulumi-kubernetes/provider/pkg/await/deployment.go:153 +0xec
github.com/pulumi/pulumi-kubernetes/provider/v3/pkg/await.glob..func2(0xc000ad9a58, 0x3495fa0, 0xc000af6d80, 0xc002585220, 0xa0, 0xc002e05050, 0x7, 0xc002e69980, 0xc000710ed0, 0xc0019c5900, ...)
    /home/runner/work/pulumi-kubernetes/pulumi-kubernetes/provider/pkg/await/awaiters.go:147 +0x125
github.com/pulumi/pulumi-kubernetes/provider/v3/pkg/await.Update(0x3495fa0, 0xc000af6d80, 0xc000ad9a58, 0xc002585220, 0xa0, 0xc002e05050, 0x7, 0x0, 0x0, 0xc000710ed0, ...)
    /home/runner/work/pulumi-kubernetes/pulumi-kubernetes/provider/pkg/await/await.go:430 +0xb35
github.com/pulumi/pulumi-kubernetes/provider/v3/pkg/provider.(*kubeProvider).Update(0xc000583500, 0x3496048, 0xc002e66c60, 0xc002e08f00, 0xc000583500, 0x2dccf01, 0xc002e69000)
    /home/runner/work/pulumi-kubernetes/pulumi-kubernetes/provider/pkg/provider/provider.go:2166 +0xfc5
github.com/pulumi/pulumi/sdk/v3/proto/go._ResourceProvider_Update_Handler.func1(0x3496048, 0xc002e66c60, 0x2fed860, 0xc002e08f00, 0x2fd9900, 0x46b6dc8, 0x3496048, 0xc002e66c60)
    /home/runner/go/pkg/mod/github.com/pulumi/pulumi/sdk/v3@v3.19.0/proto/go/provider.pb.go:2638 +0x8b
github.com/grpc-ecosystem/grpc-opentracing/go/otgrpc.OpenTracingServerInterceptor.func1(0x3496048, 0xc002e24270, 0x2fed860, 0xc002e08f00, 0xc002e26140, 0xc002e15ba8, 0x0, 0x0, 0x344ba20, 0xc0003b8f40)
    /home/runner/go/pkg/mod/github.com/grpc-ecosystem/grpc-opentracing@v0.0.0-20180507213350-8e809c8a8645/go/otgrpc/server.go:57 +0x30a
github.com/pulumi/pulumi/sdk/v3/proto/go._ResourceProvider_Update_Handler(0x307cb40, 0xc000583500, 0x3496048, 0xc002e24270, 0xc002dfb380, 0xc000b32460, 0x3496048, 0xc002e24270, 0xc002e30000, 0x46e6)
    /home/runner/go/pkg/mod/github.com/pulumi/pulumi/sdk/v3@v3.19.0/proto/go/provider.pb.go:2640 +0x150
google.golang.org/grpc.(*Server).processUnaryRPC(0xc000437dc0, 0x34b2f78, 0xc000583980, 0xc002f6c000, 0xc000b50330, 0x4653730, 0x0, 0x0, 0x0)
    /home/runner/go/pkg/mod/google.golang.org/grpc@v1.38.0/server.go:1286 +0x52b
google.golang.org/grpc.(*Server).handleStream(0xc000437dc0, 0x34b2f78, 0xc000583980, 0xc002f6c000, 0x0)
    /home/runner/go/pkg/mod/google.golang.org/grpc@v1.38.0/server.go:1609 +0xd0c
google.golang.org/grpc.(*Server).serveStreams.func1.2(0xc000b04e30, 0xc000437dc0, 0x34b2f78, 0xc000583980, 0xc002f6c000)
    /home/runner/go/pkg/mod/google.golang.org/grpc@v1.38.0/server.go:934 +0xab
created by google.golang.org/grpc.(*Server).serveStreams.func1
/home/runner/go/pkg/mod/google.golang.org/grpc@v1.38.0/server.go:932 +0x1fd
square-car-84996
02/07/2022, 2:53 AM
`pulumi up` runs... I tried using `k8s.core.v1.Secret.get`, but it's hard failing due to #3364. It seems that if I just have a normal Secret resource, every time my code `up`s it just generates new random passwords... Has anyone successfully done something like this?
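One pattern that avoids re-rolling credentials on every `up` is `@pulumi/random`: the generated value is persisted in the stack's state and reused on subsequent runs. A minimal sketch with hypothetical names:

import * as k8s from "@pulumi/kubernetes";
import * as random from "@pulumi/random";

// Generated once, then stored in state; later `pulumi up` runs reuse the
// same value instead of producing a new password.
const dbPassword = new random.RandomPassword("db-password", {
    length: 24,
    special: false,
});

const secret = new k8s.core.v1.Secret("db-credentials", {
    stringData: { password: dbPassword.result },
});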
square-car-84996
02/10/2022, 2:25 PM
`kubectl port-forward` to a service?
chilly-plastic-75584
02/15/2022, 6:19 PM
My `n` clusters (multi-region right now) need my deployment.
I am not sure if I should just loop through a list of clusters and deploy to each (see the sketch below), or if there is a better practice with Pulumi. I was trying to figure out: if, say, 1 cluster failed to get my updated deployment and the other worked, a loop would just let them be on different versions (not sure that's normal).
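A sketch of the loop approach with one explicit `k8s.Provider` per cluster (kubeconfigs and names here are placeholders): every Deployment is an independent resource in the same stack, so a failure in any one cluster fails the whole `pulumi up`, and the next run converges whichever cluster lagged:

import * as k8s from "@pulumi/kubernetes";

// Placeholder kubeconfigs; in practice these might come from stack config
// or from the outputs of whatever created the clusters.
const clusters = [
    { name: "us-east", kubeconfig: "<kubeconfig-yaml-us-east>" },
    { name: "eu-west", kubeconfig: "<kubeconfig-yaml-eu-west>" },
];

const labels = { app: "my-app" };

for (const cluster of clusters) {
    // One explicit provider per target cluster.
    const provider = new k8s.Provider(cluster.name, {
        kubeconfig: cluster.kubeconfig,
    });

    // The same Deployment, instantiated once per cluster under a distinct name.
    new k8s.apps.v1.Deployment(`app-${cluster.name}`, {
        spec: {
            replicas: 2,
            selector: { matchLabels: labels },
            template: {
                metadata: { labels },
                spec: { containers: [{ name: "app", image: "nginx:1.21" }] },
            },
        },
    }, { provider });
}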
Trying to figure out if I need some way of saying all deployments = success, NOT just one cluster being OK but the other being a failure.