most-lighter-95902
10/22/2021, 3:00 AM
proud-pizza-80589
10/22/2021, 10:16 AM
bulky-area-51023
10/23/2021, 3:41 PM
I'm getting a secret from the stack and generating a Kubernetes manifest from it (e.g. an environment variable), but the secret doesn't seem to be substituted into the string value. I'm aware that the return value of require_secret is Output[T], but even a passthrough apply(lambda val: val) doesn't work. I think the main problem is that the secret object is not a direct argument of the subsequent resource. Any piece of advice would be very appreciated.
wooden-receptionist-75654
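Not an authoritative answer, but the usual pattern for the question above: an Output can never be turned into a plain string outside of apply, so the entire string the secret participates in has to be built inside the apply callback (or with pulumi.Output.all / Output.concat). A minimal sketch, with the actual Pulumi call left as a comment:

```python
# The combination must happen inside .apply, where the value is a plain string.
def render_env(name: str, value: str) -> str:
    # This runs inside apply(...), so `value` is the real secret here.
    return f"{name}={value}"

# With the real SDK (not executed here):
# env_value = config.require_secret("token").apply(lambda v: render_env("TOKEN", v))
```

If the secret has to flow into a whole manifest, the same rule applies: build the manifest inside apply, or pass the Output itself as the field value and let Pulumi resolve it at deploy time.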
10/25/2021, 10:45 AM
I'm using the azure-native.containerservice lib to create an AKS cluster, and I'd also like to deploy k8s RBAC objects with the kubernetes lib.
I have something like:
// Creating AKS
const cluster = new containerservice.ManagedCluster(...)
// Getting a kubeconfig
const creds = pulumi.all([cluster.name, resourceGroup.name]).apply(([clusterName, rgName]) => {
    return containerservice.listManagedClusterUserCredentials({
        resourceGroupName: rgName,
        resourceName: clusterName,
    });
});
const encoded = creds.kubeconfigs[0].value;
const kubeconfig = encoded.apply(enc => Buffer.from(enc, "base64").toString());
// Creating the provider
const aksProvider = new k8s.Provider("aks", {
    kubeconfig: kubeconfig,
});
// And deploying a role
const devsGroupRole = new k8s.rbac.v1.Role("pulumi-devs", {...})
When I run it locally with pulumi up, I get an auth request:
To sign in, use a web browser to open the page https://microsoft.com/devicelogin
Am I missing something?
cuddly-tailor-40542
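One thing worth checking for the device-login prompt above: listManagedClusterUserCredentials returns a kubeconfig that authenticates through Azure AD (hence the browser prompt), while listManagedClusterAdminCredentials embeds client certificates and needs no interactive login. A hedged sketch, assuming admin credentials are acceptable for your setup:

```typescript
// With azure-native (not executed here), swap the credentials call:
// const creds = pulumi.all([cluster.name, resourceGroup.name]).apply(([resourceName, resourceGroupName]) =>
//     containerservice.listManagedClusterAdminCredentials({ resourceGroupName, resourceName }));

// The kubeconfig value comes back base64-encoded either way:
function decodeKubeconfig(encoded: string): string {
    return Buffer.from(encoded, "base64").toString("utf8");
}
```

Alternatively, keep the user credentials and configure the provider/cluster for non-interactive AAD auth; the admin-credentials route is just the simplest workaround.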
10/25/2021, 10:20 PM
How can I keep cluster.KubeConfig secret? I don't want its value printed to the command line; I just need it to create the provider in the meta-stack, called from the kubernetes stack.
green-park-28305
10/26/2021, 7:32 AM
cluster: eks.Cluster = eks.Cluster(f"{cluster_name}-cluster",
    name=cluster_name,
    ....
    node_group_options=eks.ClusterNodeGroupOptionsArgs(
        cloud_formation_tags={
            "Name": "EKS Worker Node"
        },
        encrypt_root_block_device=True,
    ),
    ...
)
eks.ManagedNodeGroup(f"{cluster_name}-node-group-" + str(i),
    cluster=cluster.core,
    node_group_name=f"{cluster_name}-managed-node-group-" + str(i),
    ....
)
witty-belgium-75866
10/27/2021, 9:34 AM
How do I convert this kubectl command to python-pulumi? It patches coredns:
kubectl patch deployment coredns \
    -n kube-system \
    --type json \
    -p='[{"op": "remove", "path": "/spec/template/metadata/annotations/eks.amazonaws.com~1compute-type"}]'
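Not a direct Pulumi answer (as far as I know, pulumi-kubernetes had no first-class JSON-patch resource at this point), but the patch itself is just data, so one workaround is to apply it with the official kubernetes Python client alongside the Pulumi program. A sketch, with the client calls left as comments:

```python
# The JSON patch from the kubectl command above, expressed as data.
# "~1" is the JSON-Pointer escape for "/" inside the annotation key.
patch = [{
    "op": "remove",
    "path": "/spec/template/metadata/annotations/eks.amazonaws.com~1compute-type",
}]

# Applying it with the official client (not executed here):
# from kubernetes import client, config
# config.load_kube_config()
# client.AppsV1Api().patch_namespaced_deployment(
#     name="coredns", namespace="kube-system", body=patch)
```

If the Deployment is managed by Pulumi itself, a transformation that drops the annotation from the spec would be the more idiomatic route.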
thanks!
ripe-exabyte-83007
10/27/2021, 1:33 PM
ripe-shampoo-80285
10/28/2021, 10:28 PM
victorious-exabyte-70545
10/29/2021, 6:26 PM
modern-city-46733
10/30/2021, 3:31 PM
Are there still kubectl commands that just don't convert to Pulumi yet? (for example, Helm chart deploys with hooks)
colossal-car-2729
11/02/2021, 1:34 PM
I can't see the actual change to the image field in containers with pulumi preview --diff:
~ kubernetes:apps/v1:Deployment: (update)
[id=default/selfservice-s99yer46]
...
~ spec: {
~ template: {
~ spec: {
~ containers: [
~ [0]: {
}
[1]: <null>
~ [2]: {
}
]
}
}
}
How can pulumi show the actual image diff?
average-market-57523
11/02/2021, 6:04 PM
salmon-raincoat-19475
11/02/2021, 7:02 PM
[…] the Output class in Python, which seems to suggest that it needs to stay in the Output type so that Pulumi objects are realized....
Is there a different approach I should be taking? Thanks!
future-window-78560
11/05/2021, 12:57 PM
I'm installing the aws-load-balancer-controller Helm chart, but it doesn't seem to pass values properly. I'm trying to override image.repository to use a repo in eu-west-1, but it's still using the default us-west-2 region (and failing to pull images). Any ideas? Possibly a bug, or am I just not passing the value in the right way?
square-coat-62279
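For the values question above, one common cause (an assumption, since the code isn't shown): Helm values passed to Pulumi must be nested objects; a flat dotted key like "image.repository" is passed through verbatim and the chart never sees it. A sketch:

```typescript
// Silently ignored by the chart: a flat dotted key is not a nested value.
const wrong: Record<string, unknown> = { "image.repository": "my-registry/aws-load-balancer-controller" };

// Seen by the chart: keys nested the way the chart's values.yaml nests them.
const right = {
    image: { repository: "my-registry/aws-load-balancer-controller" },
};
```

(The registry name here is a placeholder, not the real regional ECR address.)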
11/07/2021, 10:13 AM
if (helmConfigs.secret) {
    let k8sSecret = new k8s.core.v1.Secret(`${helmConfigs.secret.name}`, {
        metadata: {
            name: helmConfigs.secret.name,
            namespace: helmConfigs.namespace
        },
        data: {
            [helmConfigs.secret.key]: helmConfigs.secret.value
        }
    });
}
input:
helm: {
    "cert-manager": {
        namespace: "cert-manager",
        version: "v1.5.3",
        repo: "https://charts.jetstack.io",
        chart: "cert-manager",
        values: require(`./google/helm_values/cert_manager`),
        secret: {}
    },
    "external-dns-public": {
        namespace: "external-dns",
        version: "3.1.1",
        repo: "https://charts.bitnami.com/bitnami",
        chart: "external-dns",
        values: require(`./google/helm_values/external_dns_public`),
        secret: {
            name: "external-dns-public",
            key: "credentials.json",
            value: "dnskey.privateKey"
        }
    }
}
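Two things often trip people up in the Secret snippet above: a dynamic key needs a computed property name ([key]: value), and the data field must be base64-encoded, whereas stringData takes plain strings and lets Kubernetes do the encoding. A sketch of the data-building part (field names taken from the input above):

```typescript
// Build the Secret's stringData with a dynamic key; Kubernetes base64-encodes
// stringData itself, so no manual encoding is needed.
function secretStringData(key: string, value: string): Record<string, string> {
    return { [key]: value };
}

// Usage (not executed here):
// new k8s.core.v1.Secret(cfg.name, {
//     metadata: { name: cfg.name, namespace },
//     stringData: secretStringData(cfg.key, cfg.value),
// });
```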
brainy-appointment-20633
11/07/2021, 8:29 PM
broad-helmet-79436
11/08/2021, 10:19 AM
I'm trying to import a StatefulSet that has been managed with kubectl edit.
My StatefulSet is called prometheus-grafana and runs in a namespace called monitoring.
The Kubernetes cluster runs in GCP on version 1.20.10-gke.1600.
kubectl tells me a few things (of particular interest, that the apiVersion is apps/v1):
$ kubectl get statefulset --context dev -n monitoring prometheus-grafana -o json
{
"apiVersion": "apps/v1",
"kind": "StatefulSet",
[…]
}
When I try to import the resource with Pulumi, however, the preview shows me some really old data:
// index.ts
new kubernetes.apps.v1.StatefulSet(
'prometheus-grafana',
{},
{ import: 'monitoring/prometheus-grafana', provider: kubernetesProvider }
);
$ pulumi preview --stack dev --diff
[…]
= ├─ kubernetes:apps/v1:StatefulSet prometheus-grafana import [diff: -spec~apiVersion,metadata]; 1 warni
= kubernetes:apps/v1:StatefulSet: (import)
[id=monitoring/prometheus-grafana]
[urn=urn:pulumi:dev::folio::kubernetes:apps/v1:StatefulSet::prometheus-grafana]
[…]
[provider=urn:pulumi:dev::folio::pulumi:providers:kubernetes::kubernetes_provider::a326d7af-28d0-4aa9-b5e1-7017e0244985]
~ apiVersion: "apps/v1beta2" => "apps/v1"
[…]
This is especially surprising because I'm using new kubernetes.apps.v1.StatefulSet(…), which I believe shouldn't see anything with apiVersion apps/v1beta2.
From the looks of it, the Pulumi preview shows me the very first version of the StatefulSet that was deployed three years ago. I've also modified one of the containers to add an environment variable, and updated the Docker image multiple times using kubectl edit.
I figured that was really weird, so I decided to log the imported StatefulSet to see if it’s somehow an issue with the diffing and/or preview:
const statefulSet = new kubernetes.apps.v1.StatefulSet(
'prometheus-grafana',
{},
{ import: 'monitoring/prometheus-grafana', provider: kubernetesProvider }
);
statefulSet.apiVersion.apply(apiVersion =>
console.log({ apiVersion })
);
$ pulumi up --stack dev
[…]
Diagnostics:
pulumi:pulumi:Stack (folio-dev):
{ apiVersion: 'apps/v1' }
I double-checked that this code is indeed printing the imported resource by logging the metadata
and containers as well. It sure seems to me like the code is doing what I intended.
On a related note, I also see the correct data with this code:
const res = kubernetes.apps.v1.StatefulSet.get(
'p-g',
'monitoring/prometheus-grafana',
{ provider: kubernetesProvider }
);
res.apiVersion.apply(apiVersion => console.log({ apiVersion }));
If I’m right so far, I believe it means that the imported resource has its apiVersion
set to apps/v1
, but the Pulumi resource differ and/or preview for some reason believe they’re working with the Very First Version of the StatefulSet that was deployed in 2018.
I know that the Kubernetes API does some caching, particularly on requests to watch
a resource, by checking the resourceVersion
property on the objects. I verified that the StatefulSet’s resourceVersion
gets updated when I change the container spec’s image
with kubectl edit
, though.
I have no idea if it matters, but the StatefulSet was deployed by Cloud Marketplace (https://console.cloud.google.com/marketplace/product/google/prometheus), which means it has some ownerReferences
set. Still, I don’t really see that triggering this behaviour, since my diagnostics/logs show that Pulumi has the correct data to work with.
I've put quite a few hours into debugging this, and now I'm at a complete loss. Am I missing something obvious? 😅
sparse-spring-91820
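A hypothesis for the import mystery above (not confirmed here): the Pulumi Kubernetes provider may recover a resource's inputs on import from the kubectl.kubernetes.io/last-applied-configuration annotation when it is present, and kubectl edit does not update that annotation, so it could still hold the spec that Cloud Marketplace applied in 2018. A sketch of checking what the annotation contains:

```typescript
// Parse the last-applied-configuration annotation from a fetched object.
// Plain data is used here; the same lookup works on the live object.
interface K8sObject {
    metadata?: { annotations?: Record<string, string> };
}

function lastApplied(obj: K8sObject): any | undefined {
    const raw = obj.metadata?.annotations?.["kubectl.kubernetes.io/last-applied-configuration"];
    return raw === undefined ? undefined : JSON.parse(raw);
}
```

If that annotation really does hold an apps/v1beta2 spec, deleting or re-applying it before the import would be worth trying.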
11/09/2021, 9:42 AM
const nginx = new k8s.helm.v3.Chart('nginx',
    {
        namespace,
        chart: 'nginx-ingress',
        version: '1.24.4',
        fetchOpts: { repo: 'https://charts.helm.sh/stable/' },
        values: {
            controller: {
                annotations: {
                    'service.beta.kubernetes.io/aws-load-balancer-ssl-cert': 'arn:aws:acm:us-east-1:XXXXXXXXXXXX:certificate/XXXXXXXXXXXXXXXXXXXXXXXXXXXXX',
                    'service.beta.kubernetes.io/aws-load-balancer-type': 'alb',
                    'service.beta.kubernetes.io/aws-load-balancer-backend-protocol': 'http',
                    'service.beta.kubernetes.io/aws-load-balancer-ssl-ports': 'https'
                },
                publishService: { enabled: true }
            }
        }
    },
    { providers: { kubernetes: options.provider } }
);
I tried a lot of variations but none of them worked for me. I end up with the error 400 Bad Request "Plain HTTP request was sent to HTTPS port", or, in another case, I get an auto-generated "Kubernetes Ingress Controller Fake Certificate", which shows a "Not secure" flag in the browser because that certificate is not signed by an authority the browser trusts.
Has anyone else gotten nginx-ingress working with a certificate generated by ACM?
limited-rainbow-51650
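Not authoritative, but for the ACM question above: those service.beta.kubernetes.io/* annotations have to land on the controller's Service, which for this chart means controller.service.annotations rather than controller.annotations; and when TLS terminates at the load balancer, the HTTPS listener should target the controller's plain-HTTP port, which is the usual fix for "plain HTTP request was sent to HTTPS port". A sketch of the values (chart-specific key names are assumptions based on stable/nginx-ingress):

```typescript
// Values sketch for stable/nginx-ingress with TLS terminated at an AWS ELB.
const values = {
    controller: {
        service: {
            annotations: {
                "service.beta.kubernetes.io/aws-load-balancer-ssl-cert": "arn:aws:acm:...",
                "service.beta.kubernetes.io/aws-load-balancer-backend-protocol": "http",
                "service.beta.kubernetes.io/aws-load-balancer-ssl-ports": "https",
            },
            // Send decrypted traffic from the LB's 443 listener to the pod's http port.
            targetPorts: { https: "http" },
        },
        publishService: { enabled: true },
    },
};
```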
11/09/2021, 11:13 AM
How do I modify a Deployment using a transformation? I seem to fail at setting it the correct way.
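A transformation is just a function that mutates the resource's args in place before registration, so targeting Deployments means checking obj.kind and editing the object. A minimal sketch (the real callback also receives an opts argument, and "my-namespace" is an illustrative value, since the thread doesn't say which field is being set):

```typescript
// A chart transformation: mutate matching resources in place.
function forceDeploymentNamespace(obj: any): void {
    if (obj !== undefined && obj.kind === "Deployment") {
        obj.metadata = obj.metadata ?? {};
        obj.metadata.namespace = "my-namespace"; // illustrative value
    }
}

// Usage (not executed here):
// new k8s.helm.v3.Chart("nginx",
//     { chart: "nginx-ingress", transformations: [forceDeploymentNamespace] },
//     { provider });
```

Returning nothing and mutating in place is the expected shape; returning a new object from the callback is not.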