incalculable-dream-27508 (05/19/2020, 9:46 AM):

cuddly-smartphone-89735 (05/19/2020, 10:23 AM):
The azure.containerservice.KubernetesCluster resource provides the property kubeAdminConfigRaw, which we feed directly into the k8s provider like so:

    let cluster = new k8s.Provider(name, { kubeconfig: aks.kubeAdminConfigRaw });

This works perfectly fine, except that because of the conservative diffing in https://github.com/pulumi/pulumi-kubernetes/blob/master/provider/pkg/provider/provider.go#L270, every change on the AKS resource will trigger a complete recreation of all resources that use this k8s provider instance. Note that the kubeconfig is mostly a means of authentication, not actually something stateful itself...
Does anyone have the same problem? Any solutions? 🙂
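For context on why any AKS change cascades: the linked provider code compares the old and new provider configuration essentially as opaque values, so even a pure credential rotation inside the kubeconfig string registers as a provider replacement. A simplified sketch of the effective check (providerConfigChanged is an illustrative name, not the provider's actual function):

```typescript
// The provider treats kubeconfig as an opaque string: any byte difference
// (rotated certs, refreshed tokens, reordered fields) reads as a change,
// even though it still points at the same cluster.
// Illustrative reduction of the conservative diff, not the provider's code.
function providerConfigChanged(oldKubeconfig: string, newKubeconfig: string): boolean {
  return oldKubeconfig !== newKubeconfig;
}

const before = "apiVersion: v1\nclusters: [...]\nusers: [{token: aaa}]";
const after = "apiVersion: v1\nclusters: [...]\nusers: [{token: bbb}]"; // only the auth token rotated
console.log(providerConfigChanged(before, after)); // prints "true"
```

Since a changed provider config is treated as a replacement, everything parented to that provider gets replaced with it, which matches the behavior described above.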
elegant-twilight-45010 (05/19/2020, 11:10 AM):

limited-rainbow-51650 (05/19/2020, 1:14 PM):
gorgeous-animal-95046 (05/19/2020, 5:39 PM):
Deployment which I believe has the same spec, but Pulumi is giving me a diff where the entire containers array is going to be deleted. When I dump out my deployment object in the code, it has the containers array filled in. I'm guessing I'm wrong and the state doesn't match for the import. Is this a Pulumi bug where it can't diff the array contents?
orange-policeman-59119 (05/20/2020, 2:08 AM):
deleteBeforeReplace is not used, causing the deployment to fail: deployment names are unique within a namespace, so the creation fails.
I have two questions:
1. Is there a way to force the Kubernetes provider to perform an "update"?
2. Are there best practices for maintaining large lists of env vars, to avoid surprising updates like the one shown in the link above, where inserting an env var "out of order" resulted in a bizarre diff?
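On question 2, one low-tech way to make env-var diffs order-independent (a sketch, not an official Pulumi recommendation): sort the list by name before putting it on the container spec, so an insertion shows up as exactly one added entry rather than a positional shuffle. The EnvVar shape mirrors the Kubernetes API; sortEnvVars is an illustrative helper.

```typescript
// Kubernetes container env entries are an ordered array, so inserting a
// variable in the middle diffs as a positional change of everything after it.
// Sorting by name first makes the diff show only the entry that changed.
// NOTE: env order matters when variables reference each other via $(VAR),
// so only do this when entries are independent.
interface EnvVar {
  name: string;
  value?: string;
}

function sortEnvVars(env: EnvVar[]): EnvVar[] {
  return [...env].sort((a, b) => a.name.localeCompare(b.name));
}

const env = sortEnvVars([
  { name: "LOG_LEVEL", value: "info" },
  { name: "API_URL", value: "https://example.internal" },
  { name: "DB_HOST", value: "postgres" },
]);
console.log(env.map((e) => e.name).join(",")); // prints "API_URL,DB_HOST,LOG_LEVEL"
```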
kind-mechanic-53546 (05/21/2020, 1:53 AM):

important-jackal-88836 (05/22/2020, 3:35 AM):
suppressDeprecationWarnings is on by default now.
wet-noon-14291 (05/22/2020, 9:03 PM):
ConfigFile and an actual manifest. Running the manifest with kubectl apply -f works, but it fails through Pulumi. The error message is:

    error: resource default was not successfully created by the Kubernetes API server: namespaces "default" already exists

So I guess Pulumi does something different from kubectl apply. Is there a way to run a manifest that updates a namespace?
The part in the code I have now looks like this (F#):

    ConfigFile("proxy_inject",
        ConfigFileArgs(
            File = input "manifests/proxy_inject.yaml"
        )) |> ignore
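ConfigFile does differ from kubectl apply here: Pulumi wants to create and own each object, and creating a namespace that already exists fails. ConfigFile accepts a transformations option (functions run on each parsed manifest object before registration), and one workaround sometimes shared is to rewrite the clashing object into an empty v1/List so nothing is created for it. A hedged TypeScript sketch (skipDefaultNamespace is an illustrative name, and the same option should be reachable from the F# SDK via ConfigFileArgs):

```typescript
// A ConfigFile/Chart transformation is just a function that mutates each
// parsed manifest object. Rewriting a resource into an empty "v1/List"
// effectively drops it from the deployment, so an already-existing
// namespace is left alone instead of being re-created.
function skipDefaultNamespace(obj: any): void {
  if (obj.kind === "Namespace" && obj.metadata?.name === "default") {
    obj.apiVersion = "v1";
    obj.kind = "List";
    obj.items = [];
    delete obj.metadata;
  }
}

// Example objects as they would come out of the parsed YAML:
const ns = { apiVersion: "v1", kind: "Namespace", metadata: { name: "default" } };
const dep = { apiVersion: "apps/v1", kind: "Deployment", metadata: { name: "web" } };
[ns, dep].forEach(skipDefaultNamespace);
console.log(ns.kind, dep.kind); // prints "List Deployment"
```

Whether skipping the namespace is preferable to importing the existing one into the stack depends on who should own it afterwards.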
wet-noon-14291 (05/27/2020, 9:50 AM):
appsettings.json file in the frontend app, but that doesn't work since Pulumi adds some characters. I guess one way could be to get the name from the environment somehow instead.
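The "some characters" are Pulumi's auto-naming: by default Pulumi appends a random suffix to a resource's physical name so a replacement can be created before the old one is deleted. Specifying metadata.name explicitly opts out of this, at the cost of delete-before-create on replacement. A rough sketch of the behavior (the suffix generation here is illustrative, not Pulumi's exact algorithm):

```typescript
import * as crypto from "crypto";

// Pulumi-style auto-naming: logical name plus a short random suffix, so a
// replacement "frontend-svc" can briefly coexist with the old one.
// Illustrative only; Pulumi's real suffix generation is internal.
function autoName(logicalName: string): string {
  const suffix = crypto.randomBytes(4).toString("hex").slice(0, 7);
  return `${logicalName}-${suffix}`;
}

const physical = autoName("frontend-svc");
console.log(physical); // e.g. "frontend-svc-0a1b2c3"
```

With an explicit metadata.name the physical name is stable and can be hard-coded in appsettings.json; otherwise, injecting the generated name into the app's environment, as suggested above, avoids the mismatch.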
wet-noon-14291 (05/27/2020, 12:40 PM):

gorgeous-egg-16927 (05/28/2020, 12:25 AM):
prehistoric-account-60014 (05/29/2020, 4:33 PM):
initContainers? Would something like https://github.com/pulumi/pulumi-kubernetes/pull/633 help with waiting for a migration Job to finish before deploying other resources?
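For reference, the await logic in that PR boils down to watching the Job's status until it reports completion; conceptually the readiness check compares status.succeeded against spec.completions. The jobComplete helper below is an illustrative reduction of that idea, not the provider's actual code:

```typescript
// Minimal shapes from the Kubernetes batch/v1 Job API.
interface JobSpec {
  completions?: number; // defaults to 1 when unset
}
interface JobStatus {
  succeeded?: number;
  failed?: number;
}

// A Job is "complete" once the number of succeeded pods reaches
// spec.completions (1 if unspecified).
function jobComplete(spec: JobSpec, status: JobStatus): boolean {
  const wanted = spec.completions ?? 1;
  return (status.succeeded ?? 0) >= wanted;
}

console.log(jobComplete({}, { succeeded: 1 })); // prints "true"
console.log(jobComplete({ completions: 3 }, { succeeded: 2 })); // prints "false"
```

With await logic like that in place, putting the migration Job in dependsOn of the dependent resources would give the "migrate before deploy" ordering asked about.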
full-dress-10026 (05/29/2020, 8:01 PM):

full-dress-10026 (05/29/2020, 8:12 PM):

full-dress-10026 (05/29/2020, 8:13 PM):

billowy-army-68599:

full-dress-10026 (05/29/2020, 8:23 PM):

billowy-army-68599:

few-apartment-82932 (06/01/2020, 2:11 PM):

few-apartment-82932 (06/01/2020, 2:12 PM):
few-apartment-82932 (06/01/2020, 2:14 PM):
selector in the code, and the README shows svc/frontend for the port forwarding, which does not exist there), but even deploying locally on minikube I can't access the nginx forward there.
abundant-airplane-93796 (06/04/2020, 1:20 AM):
pulumi preview giving different behavior to pulumi up when using GKE? I'm finding that preview fails to properly use gcloud to authenticate with the cluster, causing the preview to not see any existing Kubernetes resources. up, on the other hand, works as expected.
famous-jelly-72366 (06/04/2020, 7:53 AM):
helm repo add ...
dazzling-sundown-39670 (06/04/2020, 1:09 PM):
bitnami/phpmyadmin, but the maximum upload size is too low for me and they don't offer a way to set it. So I would like to override the config file with a different one.
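phpMyAdmin's upload cap ultimately comes from PHP's own limits, so whatever override mechanism is used, the override file needs to raise the two PHP directives below. The 64M values are arbitrary examples, and how to mount such a fragment into the bitnami/phpmyadmin chart depends on the chart version:

```ini
; php.ini overrides for larger phpMyAdmin uploads (example values)
upload_max_filesize = 64M
post_max_size = 64M
```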
miniature-rose-15269 (06/04/2020, 8:11 PM):

bitter-dentist-28132 (06/05/2020, 2:23 AM):
configured Kubernetes cluster is unreachable: unable to load schema information from the API server: the server has asked for the client to provide credentials. Any idea why this might be the case? I'm explicitly getting the kubeconfig and providing it as the provider, so it's not like some sort of ambient credential problem.
jolly-bear-34819 (06/05/2020, 10:47 AM):

    new k8s.helm.v3.Chart(name, {
        chart: "nginx-ingress",
        version: "1.36.3",
        fetchOpts: {
            repo: "https://kubernetes-charts.storage.googleapis.com",
        },
        values: {
            controller: {
                service: {
                    loadBalancerIP: props.backendIpAddress,
                    annotations: {
                        "service.beta.kubernetes.io/azure-load-balancer-internal": "true",
                    },
                },
            },
        },
    });

In my case the cluster gets recreated and the Kubernetes provider tries to deploy the helm chart on the new cluster with the same IP address.
Because the IP address is still allocated by the old load balancer, it will fail.
Do you have any ideas on how to delete the old service/load balancer before the new one gets created?
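One possible angle (hedged, not verified against this exact failure mode): Chart options include a transformations list, functions invoked with each rendered object plus the resource options Pulumi will apply to it. Forcing deleteBeforeReplace on Services would make Pulumi delete the old load balancer, releasing its IP, before creating the new one. A sketch with plain objects (serviceDeleteBeforeReplace is an illustrative name):

```typescript
// Mirrors the one field of pulumi.CustomResourceOptions we need here.
interface ResourceOptions {
  deleteBeforeReplace?: boolean;
}

// A Chart transformation receives each rendered manifest object and the
// resource options Pulumi will use for it. Forcing deleteBeforeReplace on
// Services should tear down the old load balancer (freeing its IP) before
// the replacement is created. Illustrative sketch; verify against your
// provider version.
function serviceDeleteBeforeReplace(obj: any, opts: ResourceOptions): void {
  if (obj.kind === "Service") {
    opts.deleteBeforeReplace = true;
  }
}

const svc = { kind: "Service", metadata: { name: "nginx-ingress-controller" } };
const opts: ResourceOptions = {};
serviceDeleteBeforeReplace(svc, opts);
console.log(opts.deleteBeforeReplace); // prints "true"
```

Wired up as `transformations: [serviceDeleteBeforeReplace]` in the ChartOpts; the trade-off is a short outage while the Service is recreated instead of a failed create.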
famous-jelly-72366 (06/05/2020, 2:54 PM):

busy-soccer-65968 (06/05/2020, 7:10 PM):
metadata.name to my secret. Then my deployment diff shows [secret] and does a delete-replace instead of simply updating the deployment spec. If I do not include metadata.name in my secret and update the stringData, then the deployment does an update instead of delete-replace. It also does not show the [secret] diff. Is this expected?