gorgeous-egg-16927
03/03/2020, 5:21 PM
chilly-waiter-18319
03/05/2020, 4:32 PM
some-spring-67797
03/05/2020, 8:00 PM
These resources are in an unknown state because the Pulumi CLI was interrupted while waiting for changes to these resources to complete
Does anyone know of a way to recover from this?
I deployed about 100 deployments in a loop, so I'm trying to avoid manually taking down each pod.
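For anyone hitting the same message later: the usual recovery (rather than deleting each deployment by hand) is to clear the pending operations out of the stack's state and then let a refresh reconcile with what actually exists. Roughly, assuming a file name of state.json:
pulumi stack export --file state.json
# remove the entries under the pending operations section of state.json
pulumi stack import --file state.json
pulumi refresh
The refresh re-reads the live objects, so deployments that did finish creating stay in the stack instead of being torn down.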
damp-painter-36857
03/08/2020, 9:52 AM
default namespace, to be specific), but if I fetch it with ns = Namespace.get(...) and then try to modify ns.metadata.annotations, Pulumi bombs out claiming the property is read-only. Any ideas for a workaround? I've tried defining it as plain old YAML and using ConfigFile, but that fails too.
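Not from the thread, but one possible workaround: outputs of Namespace.get are read-only, so instead you can let Pulumi adopt the live namespace with the import resource option and then manage its annotations like any other resource. A rough sketch (the annotation key is made up, and the imported definition generally has to match the live object before you start layering on changes):
import * as k8s from "@pulumi/kubernetes";

const ns = new k8s.core.v1.Namespace("default-ns", {
    metadata: {
        name: "default",
        // Once the import succeeds, annotations added here become ordinary in-place updates.
        annotations: {
            "example.com/team": "platform",   // hypothetical annotation
        },
    },
}, { import: "default" });   // adopt the existing namespace into the stack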
polite-motherboard-78438
03/08/2020, 12:08 PM
brave-ambulance-98491
03/09/2020, 2:29 PM
When I create a kubernetes.Provider with an inline kubeconfig string, the string (with cluster access tokens) isn't being treated as a secret. Is there any way to mark this as a secret, so that the cluster's cleartext root credentials aren't available in diffs and the state store?
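One thing to try, sketched with placeholder names: wrap the kubeconfig value with pulumi.secret() so it is encrypted in the state store, and mark the provider's kubeconfig output as a secret too.
import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

const cfg = new pulumi.Config();
const rawKubeconfig = cfg.require("kubeconfig");   // placeholder source of the kubeconfig text

const provider = new k8s.Provider("cluster", {
    kubeconfig: pulumi.secret(rawKubeconfig),      // encrypted in state and masked in diffs
}, {
    additionalSecretOutputs: ["kubeconfig"],       // also mark the provider's output as secret
});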
worried-city-86458
03/11/2020, 12:07 AM
gorgeous-elephant-23271
03/11/2020, 9:47 PM
quiet-morning-24895
03/11/2020, 11:21 PM
kubernetes:core:ConfigMap (eks-cluster-nodeAccess):
error: configured Kubernetes cluster is unreachable: unable to load schema information from the API server: the server has asked for the client to provide credentials
I believe this has something to do with my ConfigGroup and/or provider parameter. Can anyone shed some light on this?
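A guess at the usual cause, with placeholder names throughout: if the ConfigGroup isn't handed the EKS cluster's provider explicitly, it falls back to the ambient kubeconfig, and those default credentials are what the API server is rejecting.
import * as eks from "@pulumi/eks";
import * as k8s from "@pulumi/kubernetes";

const cluster = new eks.Cluster("my-cluster");     // placeholder cluster

const manifests = new k8s.yaml.ConfigGroup("node-access", {
    files: ["manifests/*.yaml"],                   // placeholder path
}, { provider: cluster.provider });                // target the EKS cluster, not the local kubeconfig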
brainy-garden-89849
03/12/2020, 1:38 AM
brave-ambulance-98491
03/14/2020, 11:24 PM
panic: interface conversion: interface {} is resource.PropertyMap, not string
goroutine 28 [running]:
github.com/pulumi/pulumi/pkg/resource.PropertyValue.StringValue(...)
    /home/travis/gopath/pkg/mod/github.com/pulumi/pulumi@v1.6.1/pkg/resource/properties.go:359
github.com/pulumi/pulumi-kubernetes/pkg/provider.parseKubeconfigPropertyValue(0x2386280, 0xc0001fea50, 0x2475423, 0xa, 0xc0001b2508)
    /home/travis/gopath/src/github.com/pulumi/pulumi-kubernetes/pkg/provider/util.go:85 +0x169
github.com/pulumi/pulumi-kubernetes/pkg/provider.(*kubeProvider).DiffConfig(0xc000014000, 0x26e1260, 0xc0001fe9f0, 0xc0001380e0, 0xc000014000, 0x2275301, 0xc00031a0c0)
    /home/travis/gopath/src/github.com/pulumi/pulumi-kubernetes/pkg/provider/provider.go:278 +0x61b
github.com/pulumi/pulumi/sdk/proto/go._ResourceProvider_DiffConfig_Handler.func1(0x26e1260, 0xc0001fe9f0, 0x23a5e80, 0xc0001380e0, 0x23c0a00, 0x33070c0, 0x26e1260, 0xc0001fe9f0)
    /home/travis/gopath/pkg/mod/github.com/pulumi/pulumi@v1.6.1/sdk/proto/go/provider.pb.go:1504 +0x86
github.com/grpc-ecosystem/grpc-opentracing/go/otgrpc.OpenTracingServerInterceptor.func1(0x26e1260, 0xc000577200, 0x23a5e80, 0xc0001380e0, 0xc00000cb20, 0xc00000cb40, 0x0, 0x0, 0x26a05e0, 0xc0000cf7b0)
    /home/travis/gopath/pkg/mod/github.com/grpc-ecosystem/grpc-opentracing@v0.0.0-20171105060200-01f8541d5372/go/otgrpc/server.go:61 +0x36e
github.com/pulumi/pulumi/sdk/proto/go._ResourceProvider_DiffConfig_Handler(0x2402f60, 0xc000014000, 0x26e1260, 0xc000577200, 0xc0000de3c0, 0xc0004d2040, 0x26e1260, 0xc000577200, 0xc000331300, 0x101f)
    /home/travis/gopath/pkg/mod/github.com/pulumi/pulumi@v1.6.1/sdk/proto/go/provider.pb.go:1506 +0x14b
google.golang.org/grpc.(*Server).processUnaryRPC(0xc00034e300, 0x26fdc00, 0xc00045b500, 0xc00015a200, 0xc000436180, 0x32d3258, 0x0, 0x0, 0x0)
    /home/travis/gopath/pkg/mod/google.golang.org/grpc@v1.21.1/server.go:998 +0x46a
google.golang.org/grpc.(*Server).handleStream(0xc00034e300, 0x26fdc00, 0xc00045b500, 0xc00015a200, 0x0)
    /home/travis/gopath/pkg/mod/google.golang.org/grpc@v1.21.1/server.go:1278 +0xd97
google.golang.org/grpc.(*Server).serveStreams.func1.1(0xc00039bd30, 0xc00034e300, 0x26fdc00, 0xc00045b500, 0xc00015a200)
    /home/travis/gopath/pkg/mod/google.golang.org/grpc@v1.21.1/server.go:717 +0xbb
created by google.golang.org/grpc.(*Server).serveStreams.func1
    /home/travis/gopath/pkg/mod/google.golang.org/grpc@v1.21.1/server.go:715 +0xa1
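The panic says the value handed to PropertyValue.StringValue was a resource.PropertyMap rather than a string, i.e. parseKubeconfigPropertyValue received a structured kubeconfig instead of text. A hedged sketch of the usual shape of the fix, using an EKS cluster purely as an example source of the kubeconfig object:
import * as eks from "@pulumi/eks";
import * as k8s from "@pulumi/kubernetes";

const cluster = new eks.Cluster("my-cluster");   // placeholder source of a kubeconfig object

const provider = new k8s.Provider("k8s", {
    // Serialize the object to a string; JSON is valid YAML, so the provider accepts it,
    // and DiffConfig no longer trips over a PropertyMap where it expects a string.
    kubeconfig: cluster.kubeconfig.apply(JSON.stringify),
});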
fast-dinner-32080
03/17/2020, 4:16 PM
brave-ambulance-98491
03/17/2020, 9:24 PM
Upgrading my pulumi.eks.Cluster version wants me to tear down everything, which is really not ideal.
enough-greece-61665
03/18/2020, 1:31 AM
Has anyone deployed prometheus-operator in their k8s cluster (https://github.com/helm/charts/tree/master/stable/prometheus-operator)?
I found an issue related to this chart (https://github.com/pulumi/pulumi-kubernetes/issues/824), which helped alleviate some of the issues, but I'm still stuck with some errors...
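For reference, a rough sketch of the chart install; the values block is an assumption about this chart's knobs, the underlying point being that Pulumi does not execute Helm hooks, so hook-based pieces such as the admission-webhook jobs are a common source of leftover errors and often get disabled:
import * as k8s from "@pulumi/kubernetes";

const prometheusOperator = new k8s.helm.v2.Chart("prometheus-operator", {
    chart: "prometheus-operator",
    version: "8.12.3",                                              // placeholder version
    fetchOpts: { repo: "https://kubernetes-charts.storage.googleapis.com" },
    values: {
        prometheusOperator: {
            admissionWebhooks: { enabled: false },                  // assumed chart value
            tlsProxy: { enabled: false },                           // assumed chart value
        },
    },
});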
enough-greece-61665
03/18/2020, 3:36 AM
billowy-army-68599
03/18/2020, 4:39 AM
billowy-army-68599
03/18/2020, 7:32 PM
gorgeous-animal-95046
03/19/2020, 4:51 PM
kubernetesx? I can't seem to find the right magic.
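In case the missing magic is the import itself: the npm package is @pulumi/kubernetesx, conventionally imported as kx. A tiny sketch of its style (the pod spec is just an example):
import * as kx from "@pulumi/kubernetesx";

const pb = new kx.PodBuilder({
    containers: [{ image: "nginx", ports: { http: 80 } }],
});
const deployment = new kx.Deployment("nginx", {
    spec: pb.asDeploymentSpec({ replicas: 1 }),
});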
breezy-hamburger-69619
03/19/2020, 4:56 PM
billions-scientist-31826
03/20/2020, 1:24 PM
@pulumi/eks? I created my first cluster using the createNodeGroup method. That worked, but I can't set ignoreChanges: ["desiredCapacity"]. I tried using the new NodeGroup() that takes pulumi.ComponentResourceOptions, where I can set ignoreChanges, but then my pods can't resolve DNS. I also saw there are createNodeGroup() and createManagedNodeGroup() functions too. So, 2 questions:
• Which is the recommended way?
• Why, when using new NodeGroup(), can't my pods in that node group resolve DNS?
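On the ignoreChanges half of the question, a rough sketch of a standalone NodeGroup with desiredCapacity ignored (names and sizes are placeholders; IAM and instance-profile wiring omitted). One easy-to-miss detail with the standalone resource is passing the cluster's Kubernetes provider through the providers option so the node group is attached to the right cluster:
import * as eks from "@pulumi/eks";

const cluster = new eks.Cluster("my-cluster");     // placeholder cluster

const workers = new eks.NodeGroup("workers", {
    cluster: cluster,
    instanceType: "t3.medium",
    minSize: 1,
    maxSize: 5,
    desiredCapacity: 3,
}, {
    providers: { kubernetes: cluster.provider },   // easy to forget with the standalone resource
    ignoreChanges: ["desiredCapacity"],            // keep autoscaler-driven changes out of diffs
});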
breezy-hamburger-69619
03/20/2020, 5:47 PM
incalculable-engineer-92975
03/20/2020, 6:03 PM
billowy-army-68599
03/23/2020, 8:20 PM
crooked-helicopter-55521
03/25/2020, 4:05 PM
PriorityClass. I'm trying to do the following (on GCP):
const schedulingPriorities = new k8s.scheduling.v1.PriorityClassList("scheduler-priorities", {
    items: [
        new k8s.scheduling.v1.PriorityClass("selector-spread-priority", {
            value: 10,
            metadata: {namespace: "default"}
        })
    ]
});
but it fails with the error:
resource scheduler-priorities-5ynfh088 was not successfully created by the Kubernetes API server : failed to determine if the following GVK is namespaced: scheduling.k8s.io/v1, Kind=PriorityClassList
Anyone know what's going wrong there? It seems like the PriorityClassList doesn't take a namespace of its own.
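A sketch of the likely fix: PriorityClass is cluster-scoped, so it takes no namespace, and the *List kinds aren't objects the API server will persist, so declaring the classes individually avoids both problems (the description is just an example):
import * as k8s from "@pulumi/kubernetes";

const selectorSpread = new k8s.scheduling.v1.PriorityClass("selector-spread-priority", {
    value: 10,
    globalDefault: false,
    description: "Spread pods across nodes first",   // placeholder description
});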
gorgeous-egg-16927
03/25/2020, 8:12 PM
famous-bear-66383
03/26/2020, 11:54 AM
/etc/ssl/certs/.
When I want to automate this with Pulumi like so:
const cacertificates = new k8s.yaml.ConfigFile("ca-certificates", {
    file: "ca-certificates.yaml",
});
ca-certificates.yaml is a large file (about 271 KB) containing GlobalSign Root CA.
It works only if I use kubectl create.
Reading about related issues like https://github.com/argoproj/argo-cd/issues/820, it seems Pulumi also uses kubectl apply semantics to manage resources, which leads to a failure in my case:
resource default/ca-certificates was not successfully created by the Kubernetes API server : ConfigMap "ca-certificates" is invalid: metadata.annotations: Too long: must have at most 262144 characters
Any hints or workarounds for this problem?
Appreciated :)
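Not a fix from the thread, just a hedged idea: the 262144-character cap applies to the per-object annotations (the last-applied-configuration annotation ends up holding the whole object), so splitting the bundle across several smaller ConfigMaps keeps each one under the limit. The file name, key names, and chunk size below are assumptions, and the volume mounts then need to reference each piece:
import * as fs from "fs";
import * as k8s from "@pulumi/kubernetes";

const bundle = fs.readFileSync("ca-certificates.pem", "utf8");
const certs = bundle.split(/(?=-----BEGIN CERTIFICATE-----)/);

const perMap = 50;   // certificates per ConfigMap, arbitrary
for (let i = 0; i * perMap < certs.length; i++) {
    new k8s.core.v1.ConfigMap(`ca-certificates-${i}`, {
        metadata: { name: `ca-certificates-${i}` },
        data: { [`bundle-${i}.pem`]: certs.slice(i * perMap, (i + 1) * perMap).join("") },
    });
}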
brave-ambulance-98491
03/26/2020, 3:37 PM
I have a Job that I want to guarantee completes before the rest of my stack runs. I feel like the two Pulumi ways to do this are: 1) add a dependsOn to the rest of the items in the stack, or 2) put the rest of the stack in an `apply`d function off of one of the job's outputs. The first option is a lot of busywork and passing arguments around, but when I tried the second, it produced alarming previews implying all my Kubernetes objects would be deleted and recreated with each deploy.
I think what I'm looking for doesn't exist: something like a function on Resource that looks like:
myResource.andThen(() => { /* stuff that happens only after the resource is created */ });
Does such a function already exist? Is there a pattern to achieve this other than plumbing dependsOn down the call stack?
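There's no andThen on Resource, but one lower-boilerplate pattern is to build a single options object once and reuse it for everything downstream, rather than threading dependsOn through each call. Names and images below are placeholders; note that whether dependsOn waits for the Job to actually complete (rather than just be created) depends on the provider's await behavior for Jobs:
import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

const migration = new k8s.batch.v1.Job("db-migration", {
    spec: {
        template: {
            spec: {
                restartPolicy: "Never",
                containers: [{ name: "migrate", image: "example/migrator:latest" }],
            },
        },
    },
});

// Everything that must come after the job takes this one options object.
const afterMigration: pulumi.CustomResourceOptions = { dependsOn: [migration] };

const labels = { app: "web" };
const web = new k8s.apps.v1.Deployment("web", {
    spec: {
        selector: { matchLabels: labels },
        template: {
            metadata: { labels },
            spec: { containers: [{ name: "web", image: "nginx" }] },
        },
    },
}, afterMigration);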
breezy-gold-44713
03/26/2020, 6:48 PM
crooked-helicopter-55521
03/26/2020, 8:05 PM
core.v1.ConfigMap object. Having a bit of trouble; curious if anyone has any ideas (threading the details).
brave-ambulance-98491
03/29/2020, 10:00 PM
My Deployment resources show up as "replace" when I update ConfigMap values. Ideally, these should just "update" in that case. Is this a known issue?
better-rainbow-14549
03/30/2020, 8:32 AM
brave-ambulance-98491
03/30/2020, 3:57 PM
Deployment, so I don't love it.
gorgeous-egg-16927
03/30/2020, 4:43 PM
brave-ambulance-98491
03/30/2020, 4:44 PM
gorgeous-egg-16927
03/30/2020, 4:57 PM
brave-ambulance-98491
03/30/2020, 4:59 PM
ConfigMap name, where the Deployment spec isn't changing at all. Also, did the preview show a replace or an update for the Deployment for you?
gorgeous-egg-16927
03/30/2020, 4:59 PM
update
brave-ambulance-98491
03/30/2020, 5:07 PM
orange-policeman-59119
05/20/2020, 1:53 AM
# many lines removed
~ spec : {
~ template: {
~ spec : {
~ containers : [
~ [0]: {
~ env : [
# many lines removed
~ [14]: {
~ name : "ENV_VAR_A" => "SOME_NEW_ENV_VAR"
- value : "env-var-a-value"
+ valueFrom: {
+ secretKeyRef: {
+ key : "a-key"
+ name: "on-a-secret-resource"
}
}
}
+ [15]: {
+ name : "ENV_VAR_A"
+ value: "env-var-a-value"
}
Is the reason why because:
• A new env var was introduced at a higher position
• This shifted all the other env vars "down"
• And during that shift, the secretKeyRef was added/removed?
If so, I think that's a defect; we already add an annotation to our pod spec with the hash of the secret key data, so that when the secret updates, our pods do too:
~ template: {
    metadata: {
        annotations: {
            checksum/secrets: "[secret]"
        }
brave-ambulance-98491
05/20/2020, 4:55 PM
Are you hard-coding the name of the ConfigMap (not having Pulumi generate a name for you)? My suspicion was that this was what triggered this bug.
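For anyone comparing notes later, a sketch of the two naming modes being discussed: with auto-naming Pulumi appends a random suffix to the object name, while an explicit metadata.name keeps the same object and updates it in place. The thread's suspicion is that the auto-named flavor is what produced the replace previews:
import * as k8s from "@pulumi/kubernetes";

// Auto-named: no metadata.name, so Pulumi generates one with a random suffix.
const autoNamed = new k8s.core.v1.ConfigMap("app-config", {
    data: { "config.yaml": "key: value" },
});

// Explicitly named: the ConfigMap keeps the name "app-config" across updates.
const fixedName = new k8s.core.v1.ConfigMap("app-config-fixed", {
    metadata: { name: "app-config" },
    data: { "config.yaml": "key: value" },
});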