dry-teacher-74595
08/19/2021, 9:04 PM
const cluster = new aws.eks.Cluster("cluster", {...})
const namespace = new k8s.core.v1.Namespace("dev-namespace", {...})
const secret = new k8s.core.v1.Secret("dev-secret", {...})
this works when I'm developing locally, but when my CI pipeline tries to run pulumi up
I get an error like this:
error: configured Kubernetes cluster is unreachable: unable to load Kubernetes client configuration from kubeconfig file: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
can someone point me to some docs on what this KUBERNETES_MASTER environment variable is? or how to pass the cluster credentials from new eks.Cluster() to the new k8s.Namespace() call?
best-summer-38252
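For the credentials question above: a minimal sketch of the usual fix for this error, assuming the high-level @pulumi/eks package (whose Cluster exposes a kubeconfig output); resource names are illustrative. The idea is to build an explicit k8s.Provider from the cluster's kubeconfig instead of letting the Kubernetes resources fall back to the local ~/.kube/config, which is exactly what a CI runner lacks:

```typescript
import * as eks from "@pulumi/eks";
import * as k8s from "@pulumi/kubernetes";

const cluster = new eks.Cluster("cluster", {});

// Explicit provider built from the cluster's kubeconfig output, so the
// program never consults the ambient kubeconfig on the machine running it.
const provider = new k8s.Provider("eks-provider", {
    kubeconfig: cluster.kubeconfig.apply(JSON.stringify),
});

// Every k8s resource is pinned to that provider via resource options.
const namespace = new k8s.core.v1.Namespace("dev-namespace", {}, { provider });
const secret = new k8s.core.v1.Secret(
    "dev-secret",
    { stringData: { example: "value" } },  // illustrative payload
    { provider },
);
```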
08/19/2021, 11:56 PM
breezy-bear-50708
08/20/2021, 12:38 PM
dry-teacher-74595
08/22/2021, 7:55 PM
pulumi state delete urn:pulumi:main::infra::eks:index:Cluster::cluster --force
error: This resource can't be safely deleted because the following resources depend on it:
* "cluster-eksClusterSecurityGroup" (urn:pulumi:main::infra::eks:index:Cluster$aws:ec2/securityGroup:SecurityGroup::cluster-eksClusterSecurityGroup)
* "cluster-eksRole" (urn:pulumi:main::infra::eks:index:Cluster$eks:index:ServiceRole::cluster-eksRole)
* "cluster-instanceRole" (urn:pulumi:main::infra::eks:index:Cluster$eks:index:ServiceRole::cluster-instanceRole)
is the string in brackets the URN?
this error message also looks weird to me: the URN in the error message is different from the one I passed?
➜ infra git:(main) ✗ pulumi state delete --force urn:pulumi:main::infra::eks:index:Cluster$kubernetes:storage.k8s.io/v1:StorageClass::cluster-gp2
warning: This command will edit your stack's state directly. Confirm? Yes
error: No such resource "urn:pulumi:main::infra::eks:index:Clusterer-gp2" exists in the current state
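A likely explanation for the truncated URN in that "No such resource" error (offered as a hedge, not confirmed in the thread): the URN contains a `$`, so the unquoted `$kubernetes...` portion gets mangled by the shell before pulumi ever sees it. Single-quoting the URN passes it through literally:

```shell
# In sh/bash, '$kubernetes' in an unquoted (or double-quoted) URN is expanded
# as a shell variable; since it is normally unset, the URN silently collapses:
mangled="urn:pulumi:main::infra::eks:index:Cluster$kubernetes:storage.k8s.io/v1:StorageClass::cluster-gp2"
echo "$mangled"

# Single quotes preserve the URN exactly as written:
pulumi state delete --force 'urn:pulumi:main::infra::eks:index:Cluster$kubernetes:storage.k8s.io/v1:StorageClass::cluster-gp2'
```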
brainy-lion-38675
08/23/2021, 2:10 PM
brainy-lion-38675
08/23/2021, 2:36 PM
ripe-shampoo-80285
08/23/2021, 3:57 PM
colossal-car-2729
08/25/2021, 9:51 AM
straight-cartoon-24485
08/25/2021, 8:55 PM
pulumi up?
Something like pulumi up --diff?
billowy-vr-96461
08/26/2021, 6:19 PM
kubectl rollout restart...
or similar? If not, how do you usually go about this?
dry-teacher-74595
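One common pattern for forcing a rollout from Pulumi (a sketch, not the thread's confirmed answer): change a pod-template annotation, which is the same mechanism kubectl rollout restart uses. The annotation key and trigger value below are illustrative:

```typescript
import * as k8s from "@pulumi/kubernetes";

// Hypothetical trigger: bump this value (e.g. a config hash or release tag)
// whenever the pods should be recycled; any change to the pod template
// causes Kubernetes to perform a rolling update of the Deployment.
const restartTrigger = process.env.CONFIG_VERSION ?? "v1";

const app = new k8s.apps.v1.Deployment("app", {
    spec: {
        selector: { matchLabels: { app: "app" } },
        replicas: 2,
        template: {
            metadata: {
                labels: { app: "app" },
                annotations: {
                    // kubectl rollout restart sets
                    // kubectl.kubernetes.io/restartedAt the same way.
                    "example.com/restartedAt": restartTrigger,
                },
            },
            spec: { containers: [{ name: "app", image: "nginx:1.21" }] },
        },
    },
});
```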
08/26/2021, 10:48 PM
eksctl
it says no cluster found, even though the region is correct. Has anyone had similar issues before?
dry-teacher-74595
08/26/2021, 11:29 PM
eager-hydrogen-72542
08/27/2021, 12:36 PM
colossal-car-2729
08/27/2021, 2:13 PM
alert-mechanic-59024
08/29/2021, 11:54 AM
proud-pizza-80589
08/30/2021, 9:27 AM
dry-teacher-74595
08/31/2021, 7:47 PM
rapid-soccer-18092
09/01/2021, 6:29 AM
cert-manager
Helm chart and setting up a LetsEncrypt cluster issuer using Pulumi in our Azure Kubernetes cluster. We are using Kubernetes version 1.21.2 and cert-manager 1.5.3. When running pulumi up
I get the following error:
kubernetes:cert-manager.io/v1:ClusterIssuer (cert-manager-letsencrypt):
error: creation of resource cert-manager/letsencrypt failed because the Kubernetes API server reported that the apiVersion for this resource does not exist. Verify that any required CRDs have been created: no matches for kind "ClusterIssuer" in version "cert-manager.io/v1"
error: update failed
When running pulumi up
again it succeeds and the letsencrypt ClusterIssuer is correctly created. I don't want to have to run pulumi up
consecutive times to reach a successful deployment. Can anyone see what the issue is here?
prehistoric-translator-89978
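The usual cause of the "no matches for kind ClusterIssuer" error above is that the ClusterIssuer races with the CRDs the cert-manager chart installs during the same deployment. A hedged sketch of one fix, assuming the chart is also managed by this Pulumi program: make the issuer depend on the chart's resources. The repo URL follows the jetstack docs; names are illustrative and the ACME spec is elided:

```typescript
import * as k8s from "@pulumi/kubernetes";

// Install cert-manager including its CRDs.
const certManager = new k8s.helm.v3.Chart("cert-manager", {
    chart: "cert-manager",
    version: "1.5.3",
    namespace: "cert-manager",
    fetchOpts: { repo: "https://charts.jetstack.io" },
    values: { installCRDs: true },
});

// Gate the ClusterIssuer on the chart's resources so the CRD is registered
// before the API server is asked to create a cert-manager.io/v1 object.
const issuer = new k8s.apiextensions.CustomResource("letsencrypt", {
    apiVersion: "cert-manager.io/v1",
    kind: "ClusterIssuer",
    metadata: { name: "letsencrypt" },
    spec: { /* ACME configuration elided */ },
}, { dependsOn: certManager.ready });
```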
09/01/2021, 11:52 PM
render_yaml_to_directory
without it affecting the state of my k8s cluster?
ripe-shampoo-80285
09/02/2021, 3:55 PM
future-refrigerator-88869
09/02/2021, 8:50 PM
purple-traffic-44372
09/03/2021, 2:26 PM
busy-house-95123
09/05/2021, 7:13 PM
ChartOpts
initializer. 😄
brash-cricket-30050
09/08/2021, 8:00 AM
Is there a way to do --set-file
on a k8s.helm.v3.Chart? Regular --set
is done through ChartOpts.values
, but it's unclear to me how to supply a file as a value.
I'm trying to get an install of Linkerd via Helm going, by porting https://linkerd.io/2.10/tasks/install-helm/ to Pulumi code.
proud-pizza-80589
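Since --set-file just reads a file and passes its contents as a string value, the Pulumi-side equivalent is doing the read yourself (e.g. with Node's fs) and placing the contents under the matching key in ChartOpts.values. A sketch under that assumption; the value keys follow Linkerd's Helm docs, and the certificate paths are illustrative:

```typescript
import * as fs from "fs";
import * as k8s from "@pulumi/kubernetes";

// Equivalent of:
//   helm install linkerd2 \
//     --set-file identityTrustAnchorsPEM=ca.crt \
//     --set-file identity.issuer.tls.crtPEM=issuer.crt \
//     --set-file identity.issuer.tls.keyPEM=issuer.key ...
const linkerd = new k8s.helm.v3.Chart("linkerd2", {
    chart: "linkerd2",
    fetchOpts: { repo: "https://helm.linkerd.io/stable" },
    values: {
        identityTrustAnchorsPEM: fs.readFileSync("ca.crt", "utf8"),
        identity: {
            issuer: {
                tls: {
                    crtPEM: fs.readFileSync("issuer.crt", "utf8"),
                    keyPEM: fs.readFileSync("issuer.key", "utf8"),
                },
            },
        },
    },
});
```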
09/08/2021, 1:25 PM
gray-hamburger-90102
09/09/2021, 10:20 AM
redis.namespace
property. I want to retrieve it from the helm chart output, as I need to ensure the secret retrieval for redisPassword
happens after the helm chart has deployed.
const redis = new k8s.helm.v3.Chart("tyk-redis", {
    fetchOpts: {
        repo: "https://charts.bitnami.com/bitnami",
    },
    repo: "bitnami",
    chart: "redis",
    namespace: tykFargateProfile.selectors.apply(selectors => selectors[0].namespace),
});
const redisPassword = k8s.core.v1.Secret.get("redisPassword", `${/*namespace here*/}/tyk-redis`).data.apply(data => data["redis-password"]);
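One possible shape for this (a sketch, not a confirmed answer): keep the namespace Output in its own variable, build the secret id with pulumi.interpolate, and pass the chart's ready output as dependsOn so the read waits for the chart. tykFargateProfile is assumed from the snippet above:

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

// Keep the namespace Output in a variable so it can be reused below.
const ns = tykFargateProfile.selectors.apply(s => s[0].namespace);

const redis = new k8s.helm.v3.Chart("tyk-redis", {
    fetchOpts: { repo: "https://charts.bitnami.com/bitnami" },
    chart: "redis",
    namespace: ns,
});

// interpolate builds "<namespace>/tyk-redis" from the Output, and
// dependsOn: redis.ready defers the read until the chart's resources exist.
const redisPassword = k8s.core.v1.Secret.get(
    "redisPassword",
    pulumi.interpolate`${ns}/tyk-redis`,
    { dependsOn: redis.ready },
).data.apply(d => d["redis-password"]);
```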
polite-shoe-79877
09/09/2021, 11:56 AM
orange-policeman-59119
09/09/2021, 8:23 PM
brave-ambulance-98491
09/09/2021, 9:42 PM
Deployment
that uses a ConfigMap, and also snags the name property of the ConfigMap for injecting as an environment variable. The abbreviated code is:
const myConfigMap = new k8s.core.v1.ConfigMap(...);
const deployment = new k8s.apps.v1.Deployment(
    "my-deployment",
    {
        spec: {
            template: {
                spec: {
                    containers: [
                        {
                            name: "example",
                            env: [
                                {
                                    name: "CONFIG_MAP_NAME",
                                    value: myConfigMap.metadata.name,
                                },
                            ],
                        },
                    ],
                },
            },
        },
    },
);
My problem is that when I'm running an update, Pulumi does:
1. Create new myConfigMap.
2. Delete old myConfigMap.
3. Run update on deployment, including changes to point to new myConfigMap.
This leaves a period of time between step 2 ending and step 3 ending where the injected name in my old version of deployment no longer points to a valid ConfigMap in our namespace.
What I want to have Pulumi do is reverse steps 2 & 3:
1. Create new myConfigMap.
2. Run update on deployment, including changes to point to new myConfigMap.
3. Delete old myConfigMap.
Is there a way to do this in Pulumi (have cleanup / deletion happen conditional on other resource updates completing)?
busy-journalist-6936
09/10/2021, 6:02 PM