sparse-butcher-73713 (12/23/2021, 9:04 AM)
sparse-butcher-73713 (12/23/2021, 9:04 AM)
sparse-butcher-73713 (12/23/2021, 10:00 AM)
mammoth-honey-6147 (12/23/2021, 1:52 PM)
sparse-butcher-73713 (12/23/2021, 5:46 PM)
sparse-butcher-73713 (12/23/2021, 5:47 PM)
bland-lamp-16797 (12/27/2021, 3:48 PM)
most-lighter-95902 (01/01/2022, 5:19 PM)
Using the eks package and having issues.
most-lighter-95902 (01/01/2022, 5:20 PM)
new eks.ManagedNodeGroup(defaultNodeGroupName, {
    cluster: {
        ...cluster.core,
        nodeGroupOptions: {
            keyName: 'key_name'
        }
    },
    ...
})
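It is not clear that nodeGroupOptions.keyName applies here: keyName belongs to the self-managed node group options in pulumi-eks, while ManagedNodeGroup wraps aws.eks.NodeGroup, which takes SSH settings under remoteAccess.ec2SshKey instead. A sketch of the argument shape only, with every name here an assumption to be checked against the pulumi-eks API docs:

```python
# Shape of the arguments only; in real code these would be passed to
# eks.ManagedNodeGroup alongside the cluster object. All names hypothetical.
managed_node_group_args = {
    "nodeGroupName": "default",
    "remoteAccess": {
        # EC2 key pair name: the managed-node-group counterpart of keyName.
        "ec2SshKey": "key_name",
    },
}
```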
most-lighter-95902 (01/01/2022, 5:20 PM)
most-lighter-95902 (01/01/2022, 5:21 PM)
hundreds-article-77945 (01/03/2022, 8:36 PM)
busy-island-31180 (01/04/2022, 1:49 AM)
tall-photographer-1935 (01/06/2022, 2:34 AM)
def add_ingress_ip_address(obj, opts):
    if obj['kind'] == 'Ingress':
        obj['metadata']['annotations']['kubernetes.io/ingress.global-static-ip-name'] = ...  # dynamic, based on the created IP address name
Is that possible?
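A possible approach (a sketch, not confirmed in the thread): have the transformation close over a value computed elsewhere in the program. The helper below assumes `ip_name` is a plain string, such as the logical name you gave the address resource; whether a Pulumi Output can be assigned directly into the annotation dict inside a transformation is worth verifying against the provider docs.

```python
def make_add_ingress_ip(ip_name):
    """Build a pulumi_kubernetes-style transformation that pins an
    Ingress to a named global static IP. `ip_name` is a plain string."""
    def add_ingress_ip_address(obj, opts=None):
        if obj.get('kind') == 'Ingress':
            annotations = obj.setdefault('metadata', {}).setdefault('annotations', {})
            annotations['kubernetes.io/ingress.global-static-ip-name'] = ip_name
    return add_ingress_ip_address

# Applying it to a bare manifest dict shows the effect:
manifest = {'kind': 'Ingress', 'metadata': {}}
make_add_ingress_ip('web-ip')(manifest)
```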
freezing-umbrella-80278 (01/06/2022, 1:33 PM)
bored-table-20691 (01/06/2022, 11:46 PM)
With the pulumi-eks eks.Cluster type, how do I get the name of that cluster (e.g. if I need to pass it to an annotation or an IAM role or some such)? Specifically in Go; I've seen several examples in TS (specifically, cluster.eksCluster.name).
hundreds-article-77945 (01/11/2022, 2:13 PM)
modern-boots-64313 (01/12/2022, 1:58 AM)
Ran pulumi up, then checked the pulumi log file; it still shows the resources in the state. Any idea why?
fast-easter-23401 (01/17/2022, 9:23 PM)
On pulumi up:
warning: resource plugin kubernetes is expected to have version >=3.13.0, but has 3.13.0-alpha.1640142079+cb2803c5.dirty; the wrong version may be on your path, or this may be a bug in the plugin
How can I determine whether the root problem is a misconfigured value in my environment or a problem in the underlying plugin?
bumpy-summer-9075 (01/18/2022, 2:36 PM)
On up, Pulumi recreates a secret for a reason that escapes me:
--kubernetes:core/v1:secret: (delete-replaced)
    [id=sonarqube/sonarqube-postgresql]
    [urn=urn:pulumi:dev::infra-do-cluster::company:kubernetes:Sonarqube$kubernetes:helm.sh/v3:Chart$kubernetes:core/v1:secret::sonarqube/sonarqube-postgresql]
    [provider=urn:pulumi:dev::infra-do-cluster::pulumi:providers:kubernetes::doks::01acb2ec-8062-4904-9f58-b2144e4043f3]
+-kubernetes:core/v1:secret: (replace)
    [id=sonarqube/sonarqube-postgresql]
    [urn=urn:pulumi:dev::infra-do-cluster::company:kubernetes:Sonarqube$kubernetes:helm.sh/v3:Chart$kubernetes:core/v1:secret::sonarqube/sonarqube-postgresql]
    [provider=urn:pulumi:dev::infra-do-cluster::pulumi:providers:kubernetes::doks::01acb2ec-8062-4904-9f58-b2144e4043f3]
    ~ data: {
    }
++kubernetes:core/v1:secret: (create-replacement)
    [id=sonarqube/sonarqube-postgresql]
    [urn=urn:pulumi:dev::infra-do-cluster::company:kubernetes:Sonarqube$kubernetes:helm.sh/v3:Chart$kubernetes:core/v1:secret::sonarqube/sonarqube-postgresql]
    [provider=urn:pulumi:dev::infra-do-cluster::pulumi:providers:kubernetes::doks::01acb2ec-8062-4904-9f58-b2144e4043f3]
    ~ data: {
    }
Anyone know why this occurs? It doesn't break anything, but it is an annoyance.
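One common cause, which is an assumption here since the thread does not confirm it: charts such as the bundled postgresql generate a random credential on every render when none is supplied in values, so each pulumi up produces different Secret data and the provider replaces the resource. Pinning the value in the chart's values makes renders stable. A minimal model of that behavior:

```python
import secrets

def render_chart_secret(values):
    # Model of a Helm template along the lines of:
    #   postgresql-password: {{ .Values.postgresqlPassword | default (randAlphaNum 10) }}
    password = values.get("postgresqlPassword") or secrets.token_hex(8)
    return {"kind": "Secret", "data": {"postgresql-password": password}}

# Unpinned: two renders differ, so Pulumi sees changed data and replaces.
a = render_chart_secret({})
b = render_chart_secret({})

# Pinned: renders are identical, so there is nothing to replace.
c = render_chart_secret({"postgresqlPassword": "from-config"})
d = render_chart_secret({"postgresqlPassword": "from-config"})
```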
busy-island-31180 (01/18/2022, 10:42 PM)
I use myService.metadata.name in order to pass information around. I've found that this ends up creating dependencies, and in some cases, deadlock situations.
In this example, a service will not become "healthy" until there are pods registered to it. Yet, when I'm using something like Argo Rollouts, there's a pointer back to my service name:
strategy:
  # Blue-green update strategy
  blueGreen:
    # Reference to service that the rollout modifies as the active service.
    # Required.
    activeService: active-service
So, because of this, Pulumi wants to deploy the service first and the rollout second. But the service never becomes healthy, as it's waiting on the rollout.
The workaround is to define a variable and pass its value to all the places that need the service name.
I was also thinking of a way to tell Pulumi not to block other resources from being created while this one becomes healthy. The data is available regardless of health state.
For k8s resources, I feel like a lot of the data is static, and yet it's treated as dynamic, so I can't easily make these references. (I'm coming from Jsonnet, which makes this quite easy.)
Has anyone else run into these types of problems? Do you have any patterns or practices that make the code feel "refactor proof"?
What about the case when you don't know the service name (maybe it's being returned as part of a higher-level function)?
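The variable workaround described above can be sketched as follows, with all names hypothetical: keep the service name as a plain string constant that both manifests reference, so the rollout never reads the service resource's output and no dependency edge (and thus no health-wait ordering) is created between them.

```python
# The shared constant: known before any resource is created.
ACTIVE_SERVICE = "active-service"

def service_manifest():
    # What would be passed to a k8s Service resource.
    return {
        "apiVersion": "v1",
        "kind": "Service",
        "metadata": {"name": ACTIVE_SERVICE},
    }

def rollout_manifest():
    # The Argo Rollout references the same constant instead of
    # myService.metadata.name, so Pulumi sees no dependency between them.
    return {
        "apiVersion": "argoproj.io/v1alpha1",
        "kind": "Rollout",
        "spec": {"strategy": {"blueGreen": {"activeService": ACTIVE_SERVICE}}},
    }
```

For the higher-level-function case, the function can return the plain name string alongside the resource it creates, so callers still get a static value to reference.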
microscopic-library-98015 (01/19/2022, 9:12 AM)
FirmNav/firmnav/staging (pulumi:pulumi:Stack)
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x1ec736c]
goroutine 61 [running]:
k8s.io/apimachinery/pkg/apis/meta/v1/unstructured.(*Unstructured).GetNamespace(...)
    /home/runner/go/pkg/mod/k8s.io/apimachinery@v0.22.1/pkg/apis/meta/v1/unstructured/unstructured.go:234
github.com/pulumi/pulumi-kubernetes/provider/v3/pkg/await.(*deploymentInitAwaiter).Await(0xc0037b7078, 0x0, 0x0)
    /home/runner/work/pulumi-kubernetes/pulumi-kubernetes/provider/pkg/await/deployment.go:153 +0xec
github.com/pulumi/pulumi-kubernetes/provider/v3/pkg/await.glob..func2(0xc0003025e8, 0x2897aa0, 0xc0005b41c0, 0xc000e9ea80, 0x7a, 0xc00379b050, 0x7, 0xc0035e1100, 0xc000726960, 0xc001967d58, ...)
    /home/runner/work/pulumi-kubernetes/pulumi-kubernetes/provider/pkg/await/awaiters.go:147 +0x125
github.com/pulumi/pulumi-kubernetes/provider/v3/pkg/await.Update(0x2897aa0, 0xc0005b41c0, 0xc0003025e8, 0xc000e9ea80, 0x7a, 0xc00379b050, 0x7, 0x0, 0x0, 0xc000726960, ...)
    /home/runner/work/pulumi-kubernetes/pulumi-kubernetes/provider/pkg/await/await.go:430 +0xb35
github.com/pulumi/pulumi-kubernetes/provider/v3/pkg/provider.(*kubeProvider).Update(0xc0002e0f00, 0x2897b48, 0xc0037d6060, 0xc00376b700, 0xc0002e0f00, 0x21cf001, 0xc0037d13c0)
    /home/runner/work/pulumi-kubernetes/pulumi-kubernetes/provider/pkg/provider/provider.go:2166 +0xfc5
github.com/pulumi/pulumi/sdk/v3/proto/go._ResourceProvider_Update_Handler.func1(0x2897b48, 0xc0037d6060, 0x23ef780, 0xc00376b700, 0x23db960, 0x3ab62a8, 0x2897b48, 0xc0037d6060)
    /home/runner/go/pkg/mod/github.com/pulumi/pulumi/sdk/v3@v3.19.0/proto/go/provider.pb.go:2638 +0x8b
github.com/grpc-ecosystem/grpc-opentracing/go/otgrpc.OpenTracingServerInterceptor.func1(0x2897b48, 0xc003799380, 0x23ef780, 0xc00376b700, 0xc0037a0540, 0xc0037894e8, 0x0, 0x0, 0x284d6a0, 0xc0003a8e70)
    /home/runner/go/pkg/mod/github.com/grpc-ecosystem/grpc-opentracing@v0.0.0-20180507213350-8e809c8a8645/go/otgrpc/server.go:57 +0x30a
github.com/pulumi/pulumi/sdk/v3/proto/go._ResourceProvider_Update_Handler(0x247e9a0, 0xc0002e0f00, 0x2897b48, 0xc003799380, 0xc00378b740, 0xc000704a00, 0x2897b48, 0xc003799380, 0xc0037a4000, 0x2d84)
    /home/runner/go/pkg/mod/github.com/pulumi/pulumi/sdk/v3@v3.19.0/proto/go/provider.pb.go:2640 +0x150
google.golang.org/grpc.(*Server).processUnaryRPC(0xc000421880, 0x28b48f8, 0xc0002e1500, 0xc00379d200, 0xc0007e3b60, 0x3a52770, 0x0, 0x0, 0x0)
    /home/runner/go/pkg/mod/google.golang.org/grpc@v1.38.0/server.go:1286 +0x52b
google.golang.org/grpc.(*Server).handleStream(0xc000421880, 0x28b48f8, 0xc0002e1500, 0xc00379d200, 0x0)
    /home/runner/go/pkg/mod/google.golang.org/grpc@v1.38.0/server.go:1609 +0xd0c
google.golang.org/grpc.(*Server).serveStreams.func1.2(0xc000777b70, 0xc000421880, 0x28b48f8, 0xc0002e1500, 0xc00379d200)
    /home/runner/go/pkg/mod/google.golang.org/grpc@v1.38.0/server.go:934 +0xab
created by google.golang.org/grpc.(*Server).serveStreams.func1
    /home/runner/go/pkg/mod/google.golang.org/grpc@v1.38.0/server.go:932 +0x1fd
error: update failed
app-staging-deployment (firmnav:app:App$kubernetes:apps/v1:Deployment)
error: transport is closing
It's consistently failing in CI, though it did work once out of like 10 tries. I can't see what should've changed on our end to suddenly cause it. For info, we're using the GCP and kubernetes providers.
prehistoric-kite-30979 (01/24/2022, 7:53 PM)
brave-room-27374 (01/25/2022, 3:38 AM)
On an Apple M1, getting a 403 HTTP error fetching plugin while doing pulumi up; basically, I can't install the kubernetes plugin:
$ pulumi plugin install resource kubernetes v1.0.6
[resource plugin kubernetes-1.0.6] installing
error: [resource plugin kubernetes-1.0.6] downloading from : 403 HTTP error fetching plugin from https://get.pulumi.com/releases/plugins/pulumi-resource-kubernetes-v1.0.6-darwin-arm64.tar.gz
brave-room-27374 (01/25/2022, 3:39 AM)
Tried the troubleshooting guide but that does not work as well: https://www.pulumi.com/docs/troubleshooting/#i-dont-have-access-to-an-intel-based-computer
brave-room-27374 (01/25/2022, 3:39 AM)
great-tomato-45422 (01/27/2022, 7:07 PM)
great-tomato-45422 (01/27/2022, 7:07 PM)
great-tomato-45422 (01/27/2022, 7:07 PM)
great-tomato-45422 (01/27/2022, 7:34 PM)