dry-keyboard-94795
01/05/2023, 4:01 PM
Import doesn't seem to be documented on a lot of Kubernetes resources. Have the docs not been generated correctly, or is this intentional?
Example: Ingress
many-helicopter-89037
01/06/2023, 6:08 AM
We use renderYamlToDirectory as an option to dump YAML files into our CD system. It works great, with one caveat: how are the file names generated? Currently it generates names like apps_v1-deployment-default-nginx-008d52b7.yaml. I want to add a prefix to the file names so that kubectl apply respects the order of the resources. Is that possible?
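A possible workaround (a sketch only — the render directory and the kind-to-prefix map below are assumptions, not Pulumi behaviour): post-process the rendered directory after pulumi up and rename the files so kubectl apply -f picks them up in the desired order.
import * as fs from "fs";
import * as path from "path";

// Rename rendered manifests with a numeric prefix so `kubectl apply -f` orders them.
const renderDir = "./rendered/1-manifest";                     // assumed render directory
const order: Record<string, string> = {                        // assumed desired ordering
    namespace: "00", secret: "10", configmap: "20", deployment: "50", ingress: "90",
};

for (const file of fs.readdirSync(renderDir)) {
    const kind = Object.keys(order).find(k => file.toLowerCase().includes(`-${k}-`));
    if (kind && !/^\d\d-/.test(file)) {
        fs.renameSync(path.join(renderDir, file), path.join(renderDir, `${order[kind]}-${file}`));
    }
}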
flat-engineer-30260
01/06/2023, 12:25 PM
If I run something like pulumi config set --secret kubernetes:kubeconfig --path ~/.kube/config, it won't load the file content, and it's hard to parse the YAML into a string and set it as a Pulumi secret. The only way that worked was to read it from a local file: pulumi_k8s = kubernetes.Provider("pulumi_k8s", kubeconfig=(lambda path: open(path).read())("kubeconfig")), but it is not secure to store the kubeconfig file in GitHub. How did you do that? Confusing...
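One pattern that avoids committing the kubeconfig to the repo (a sketch; the config key name "kubeconfig" is an assumption): pipe the file into an encrypted stack config value once, then read it back as a secret in the program.
// Set it once from a shell that has the file:
//   cat ~/.kube/config | pulumi config set --secret kubeconfig
import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

const cfg = new pulumi.Config();
const kubeconfig = cfg.requireSecret("kubeconfig");   // Output<string>, stored encrypted in the stack config

const provider = new k8s.Provider("k8s", { kubeconfig });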
proud-pizza-80589
01/09/2023, 6:20 PM
square-laptop-45713
01/10/2023, 6:53 PM
Our deployment creates `Secret`s used for encryption and mounts those as volumes. At some point during an update, the `Secret`s are no longer in the k8s cluster, but Pulumi believes they are. These missing `Secret`s are preventing pods from starting, and the pods are stuck in the creation state (ContainerCreating or CreateContainerConfigError). I've attempted refreshing multiple times and Pulumi still believes these `Secret`s are there; they are never updated in the state. I did find these entries in the job run logs from a job I ran to preview stack changes a few hours ago (we're using GH Actions):
-- kubernetes:core/v1:Secret ***/***-dev-***-jobservice delete original
+- kubernetes:core/v1:Secret ***/***-dev-***-jobservice replace [diff: ~data]
++ kubernetes:core/v1:Secret ***/***-dev-***-jobservice create replacement [diff: ~data]
kubernetes:core/v1:Secret ***/***-dev-***-trivy
-- kubernetes:core/v1:Secret ***/***-dev-***-registry delete original
+- kubernetes:core/v1:Secret ***/***-dev-***-registry replace [diff: ~data]
++ kubernetes:core/v1:Secret ***/***-dev-***-registry create replacement [diff: ~data]
kubernetes:core/v1:ConfigMap ***/***-dev-***-core
-- kubernetes:core/v1:Secret ***/***-dev-***-core delete original
+- kubernetes:core/v1:Secret ***/***-dev-***-core replace [diff: ~data]
++ kubernetes:core/v1:Secret ***/***-dev-***-core create replacement [diff: ~data]
The -dev-***-trivy is the only Secret that remains in the cluster.
bland-pharmacist-96854
01/17/2023, 12:25 PM
I set create_oidc_provider to true. This creates the OIDC identity provider in IAM, but it does not associate it with the cluster.
bitter-twilight-16606
01/19/2023, 12:30 PM
quiet-laptop-13439
01/24/2023, 11:33 AM
gorgeous-minister-41131
01/24/2023, 6:23 PM
eager-lifeguard-95876
01/27/2023, 9:22 AM
value_yaml_files isn't working for me. I tried deploying a Helm release and passing that argument. I can see the resource being updated, but when I check the resources they have the chart's defaults… any idea what I could be missing?
quiet-leather-94755
01/27/2023, 3:57 PM
astonishing-dress-81433
01/29/2023, 8:47 AM
I'm using a k8s.helm.v3.Chart resource for a fairly simple application:
const chart = new k8s.helm.v3.Chart("daskhub", {
    version: "2023.1.0",
    chart: "daskhub",
    namespace: "dev",
    fetchOpts: {
        repo: "https://helm.dask.org/",
    },
}, { providers: { kubernetes: cluster.provider } });
Running pulumi up fails with:
Error: invocation of kubernetes:helm:template returned an error: failed to generate YAML for specified Helm chart: failed to pull chart: Get "https://helm.dask.org/daskhub-2023.1.0.tgz": dial tcp [2606:4700:3033::6815:2751]:443: connect: no route to host
The interesting thing is that the URL above seems to be perfectly valid. Does anyone have thoughts on what is going on here? Thanks!
rhythmic-whale-48997
01/30/2023, 9:46 AM
I'm using renderYamlToDirectory and then creating the rendered file on GitHub with @pulumi/github. However, I need to call pulumi up twice. Is there a way to dump the YAML files and then read them in one go? I already tried dependsOn, but it's not working.
Sample code that I'm using for PoC
import * as fs from "fs";
import * as path from "path";
import { load } from "js-yaml";
import * as k8s from "@pulumi/kubernetes";
import * as kx from "@pulumi/kubernetesx";
import * as github from "@pulumi/github";

// Instantiate a Kubernetes Provider and specify the render directory.
const provider = new k8s.Provider("render-yaml", {
    renderYamlToDirectory: "./samples/rendered",
    enableServerSideApply: true
});

const s3 = new kx.Secret("credentials-s3", {
    metadata: {
        name: "credentials-s3"
    },
    stringData: {
        "access-key-id": "access",
        "secret-access-key": "secret"
    }
}, {
    provider
});

// Read back the rendered manifests and push the ConfigMaps to GitHub.
// Note: this runs at program-evaluation time, i.e. against the files rendered
// by the *previous* `pulumi up`, which is why two runs are currently needed.
fs.readdirSync(path.resolve(__dirname, "samples/rendered/1-manifest/")).forEach(file => {
    const fileContent = fs.readFileSync(path.resolve(__dirname, `samples/rendered/1-manifest/${file}`), "utf-8");
    if (file.includes("v1-configmap")) {
        const manifest = load(fileContent) as { metadata: { name: string; namespace: string } };
        new github.RepositoryFile(`files-${manifest.metadata.name}`, {
            repository: tenantsRepository.name,
            file: `./tenants/${manifest.metadata.namespace}/configmaps/${manifest.metadata.name}.yaml`,
            content: fileContent,
            branch: "master",
            overwriteOnCreate: true
        });
    }
});
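One way to avoid the two-pass problem (a sketch only — the ConfigMap name, namespace, and data are illustrative, and tenantsRepository refers to the repository resource from the snippet above): build the file content from the program's own values as an Output instead of re-reading the rendered files from disk, so the GitHub file is wired into the same dependency graph and a single pulumi up suffices.
import * as pulumi from "@pulumi/pulumi";
import * as github from "@pulumi/github";
import { dump } from "js-yaml";

// Illustrative data — in practice this would be the same values passed to the k8s resource.
const configMapData = { "app.conf": "key: value" };

const manifestYaml: pulumi.Output<string> = pulumi.output(configMapData).apply(data =>
    dump({
        apiVersion: "v1",
        kind: "ConfigMap",
        metadata: { name: "tenant-config", namespace: "tenant-a" },   // illustrative names
        data,
    })
);

new github.RepositoryFile("files-tenant-config", {
    repository: tenantsRepository.name,   // from the snippet above
    file: "./tenants/tenant-a/configmaps/tenant-config.yaml",
    content: manifestYaml,
    branch: "master",
    overwriteOnCreate: true,
});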
prehistoric-toddler-40668
01/30/2023, 3:04 PM
Helm release "monitoring/kube-prometheus-stack" was created, but failed to initialize completely. Use Helm CLI to investigate.: failed to become available within allocated timeout. Error: Helm Release monitoring/kube-prometheus-stack: the server could not find the requested resource
After retrying with pulumi up everything is fine and it deploys. Can anyone help please? 🙂
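If the chart simply needs longer than the default await window, two knobs on helm.v3.Release can help (a sketch; the chart and repo values are placeholders): a larger timeout, or skipAwait to stop Pulumi from waiting for readiness at all.
import * as k8s from "@pulumi/kubernetes";

const monitoring = new k8s.helm.v3.Release("kube-prometheus-stack", {
    chart: "kube-prometheus-stack",
    namespace: "monitoring",
    repositoryOpts: { repo: "https://prometheus-community.github.io/helm-charts" },
    timeout: 900,       // seconds — CRD-heavy charts can need more than the default
    // skipAwait: true, // alternative: don't block on resources becoming ready
});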
many-knife-65312
01/30/2023, 5:46 PM
Neither pulumi refresh nor pulumi preview works, because the provider endpoint is invalid.
helpful-baker-38839
01/30/2023, 10:09 PM
For a Job deployed with pulumi_kubernetes.yaml.ConfigFile — is it possible to get the logs from the run of that job using Pulumi? Ideally to capture and store them somewhere (S3 maybe), but I'd even settle for printing them in the CI output.
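Pulumi itself doesn't fetch pod logs, but one workaround (a sketch — the job name, namespace, and the jobManifest dependency are assumptions) is to shell out via the @pulumi/command provider once the Job resource exists, so the output shows up in the CI log and as a stack output.
import * as command from "@pulumi/command";

// Assumes `jobManifest` is the ConfigFile/Job created elsewhere and that kubectl
// in the CI environment is pointed at the right cluster.
const jobLogs = new command.local.Command("fetch-job-logs", {
    create: "kubectl wait --for=condition=complete --timeout=300s job/my-job -n my-namespace && " +
            "kubectl logs job/my-job -n my-namespace",   // hypothetical job name/namespace
}, { dependsOn: [jobManifest] });

export const logs = jobLogs.stdout;   // printed with `pulumi up`, and can be shipped to S3 from CI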
strong-microphone-65970
01/31/2023, 6:11 PM
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: example
  namespace: exampleNS
spec:
  postRenderers:
    - kustomize:
        patchesStrategicMerge:
          - apiVersion: v1
            kind: Secret
            metadata:
              name: example-regsecret
            data:
              .dockerconfigjson:
I was trying to make something like this work in Pulumi but have not had any luck so far.
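A rough Pulumi analogue of that post-render patch (a sketch under assumptions — the chart name, repo, and patched value are placeholders) is a transformation on a helm.v3.Chart, which can mutate the rendered manifests before they are applied:
import * as k8s from "@pulumi/kubernetes";

const chart = new k8s.helm.v3.Chart("example", {
    chart: "example-chart",                                    // placeholder
    fetchOpts: { repo: "https://example.github.io/charts" },   // placeholder
    transformations: [
        // Roughly what the kustomize strategic-merge patch above does:
        // overwrite a field on the rendered Secret before it is applied.
        (obj: any) => {
            if (obj.kind === "Secret" && obj.metadata?.name === "example-regsecret") {
                obj.data = { ...obj.data, ".dockerconfigjson": "<base64 value>" };
            }
        },
    ],
});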
polite-summer-58169
02/01/2023, 1:11 PM
polite-summer-58169
02/06/2023, 1:08 PM
gorgeous-minister-41131
02/06/2023, 10:53 PM
billowy-horse-79629
02/08/2023, 11:13 AM
I'm updating a Datadog Helm release with (roughly) these values:
values: {
    FileSystemId: fileSystemId,
    datadog: {
        agents: {
            volumeMounts: [
                {
                    name: "log-mount",
                    mountPath: "logs",
                }
            ],
            volumes: [
                {
                    name: "log-mount",
                    persistentVolumeClaim: {
                        claimName: "log-mount"
                    }
                }
            ]
        },
        clusterAgent: {},
        datadog: {
            confd: {
                "rabbitmq.yaml": {
                    ad_identifiers: [
                        "rabbitmq"
                    ],
                    init_config: {},
                    instances: [{
                        rabbitmq_api_url: "f-rabbitmq.rabbitmq.svc:15672/api/",
                        username: "datadog",
                        password: "datadog"
                    }]
                }
            },
            ...
This is the error:
kubernetes:helm.sh/v3:Release (datadog-chart):
    error: error reading from server: EOF

pulumi:pulumi:Stack (permit-main-stg):
    panic: runtime error: invalid memory address or nil pointer dereference
    [signal SIGSEGV: segmentation violation code=0x2 addr=0x20 pc=0x101ea0848]

    goroutine 1301 [running]:
    github.com/pulumi/pulumi-kubernetes/provider/v3/pkg/provider.setReleaseAttributes(0x1400017e960, 0x0, 0x0)
        /home/runner/work/pulumi-kubernetes/pulumi-kubernetes/provider/pkg/provider/helm_release.go:1149 +0x178
    github.com/pulumi/pulumi-kubernetes/provider/v3/pkg/provider.(*helmReleaseProvider).helmUpdate(0x14001f61f10, 0x1400017e960, 0x1400017eb40)
        /home/runner/work/pulumi-kubernetes/pulumi-kubernetes/provider/pkg/provider/helm_release.go:588 +0x51c
    github.com/pulumi/pulumi-kubernetes/provider/v3/pkg/provider.(*helmReleaseProvider).Update(0x14001f61f10, {0x1027c6fe8, 0x140023b15c0}, 0x140021a9700)
        /home/runner/work/pulumi-kubernetes/pulumi-kubernetes/provider/pkg/provider/helm_release.go:979 +0x45c
    github.com/pulumi/pulumi-kubernetes/provider/v3/pkg/provider.(*kubeProvider).Update(0x14000817b00, {0x1027c6fe8, 0x140023b15c0}, 0x140021a9700)
        /home/runner/work/pulumi-kubernetes/pulumi-kubernetes/provider/pkg/provider/provider.go:2124 +0x1434
    github.com/pulumi/pulumi/sdk/v3/proto/go._ResourceProvider_Update_Handler.func1({0x1027c6fe8, 0x140023b15c0}, {0x102682820, 0x140021a9700})
        /home/runner/go/pkg/mod/github.com/pulumi/pulumi/sdk/v3@v3.26.1/proto/go/provider.pb.go:2665 +0x7c
    github.com/grpc-ecosystem/grpc-opentracing/go/otgrpc.OpenTracingServerInterceptor.func1({0x1027c6fe8, 0x14002383710}, {0x102682820, 0x140021a9700}, 0x140023967e0, 0x140022e3d10)
        /home/runner/go/pkg/mod/github.com/grpc-ecosystem/grpc-opentracing@v0.0.0-20180507213350-8e809c8a8645/go/otgrpc/server.go:57 +0x3bc
    github.com/pulumi/pulumi/sdk/v3/proto/go._ResourceProvider_Update_Handler({0x1027045e0, 0x14000817b00}, {0x1027c6fe8, 0x14002383710}, 0x1400238daa0, 0x1400078d080)
        /home/runner/go/pkg/mod/github.com/pulumi/pulumi/sdk/v3@v3.26.1/proto/go/provider.pb.go:2667 +0x150
    google.golang.org/grpc.(*Server).processUnaryRPC(0x14000960c40, {0x1028046b8, 0x140005836c0}, 0x140023907e0, 0x140002a9680, 0x103b55850, 0x0)
        /home/runner/go/pkg/mod/google.golang.org/grpc@v1.45.0/server.go:1282 +0xc5c
    google.golang.org/grpc.(*Server).handleStream(0x14000960c40, {0x1028046b8, 0x140005836c0}, 0x140023907e0, 0x0)
        /home/runner/go/pkg/mod/google.golang.org/grpc@v1.45.0/server.go:1619 +0xa34
    google.golang.org/grpc.(*Server).serveStreams.func1.2(0x140008b2fe0, 0x14000960c40, {0x1028046b8, 0x140005836c0}, 0x140023907e0)
        /home/runner/go/pkg/mod/google.golang.org/grpc@v1.45.0/server.go:921 +0x94
    created by google.golang.org/grpc.(*Server).serveStreams.func1
        /home/runner/go/pkg/mod/google.golang.org/grpc@v1.45.0/server.go:919 +0x1f0
flat-planet-10000
02/08/2023, 11:13 AM
I'm trying to install gitlab-runner on AKS from the Helm chart, but pulumi up fails.
Here is the definition:
export const runner = new k8s.helm.v3.Chart(
    "gitlab-runner",
    {
        chart: "gitlab/gitlab-runner",
        version: "0.49.1",
        fetchOpts: {
            repo: "https://charts.gitlab.io",
        },
        values: {
            gitlabUrl: config.gitlabUrl,
            runnerRegistrationToken: config.gitlabRunnerToken,
            runUntagged: false,
            "rbac.create": true,
            namespace,
        },
    },
    { provider: cluster.k8sProvider },
);
The chart exists (checked with helm search repo -l gitlab/gitlab-runner):
NAME                  CHART VERSION  APP VERSION  DESCRIPTION
gitlab/gitlab-runner  0.49.1         15.8.1       GitLab Runner
But pulumi up gives me the following error:
Diagnostics:
pulumi:pulumi:Stack (danube-dev):
error: Running program '/home/klemens/work/danube/deployment/pulumi/danube' failed with an unhandled exception:
Error: invocation of kubernetes:helm:template returned an error: failed to generate YAML for specified Helm chart: failed to pull chart: chart "gitlab/gitlab-runner" version "0.49.1" not found in https://charts.gitlab.io repository
at Object.callback (/home/klemens/work/danube/deployment/pulumi/danube/node_modules/@pulumi/runtime/invoke.ts:172:33)
at Object.onReceiveStatus (/home/klemens/work/danube/deployment/pulumi/danube/node_modules/@grpc/grpc-js/src/client.ts:338:26)
at Object.onReceiveStatus (/home/klemens/work/danube/deployment/pulumi/danube/node_modules/@grpc/grpc-js/src/client-interceptors.ts:426:34)
at Object.onReceiveStatus (/home/klemens/work/danube/deployment/pulumi/danube/node_modules/@grpc/grpc-js/src/client-interceptors.ts:389:48)
at /home/klemens/work/danube/deployment/pulumi/danube/node_modules/@grpc/grpc-js/src/call-stream.ts:276:24
at processTicksAndRejections (node:internal/process/task_queues:78:11)
Does anyone know what the problem could be, or how I can investigate further?
Thanks in advance
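The error suggests the provider is looking for a chart literally named "gitlab/gitlab-runner" inside https://charts.gitlab.io. A possible fix (a sketch, untested here): when fetchOpts.repo is set, pass the bare chart name without the Helm CLI's local "gitlab/" repo alias.
export const runner = new k8s.helm.v3.Chart("gitlab-runner", {
    chart: "gitlab-runner",              // bare chart name, not "gitlab/gitlab-runner"
    version: "0.49.1",
    fetchOpts: { repo: "https://charts.gitlab.io" },
    values: { /* ...same values as above... */ },
}, { provider: cluster.k8sProvider });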
wooden-room-54680
02/08/2023, 1:25 PM
I tried setting the name argument in k8s.helm.v3.Release, but it didn't fix the naming issue. Any way around that?
curved-kitchen-24115
02/09/2023, 12:52 AM
I deploy a Helm chart with pulumi up. That Helm chart results in a DaemonSet and a Deployment. I'd like to trigger a rollout of the Deployment after the Helm chart applies. I feel like I may be able to achieve this with Server-Side Apply, but I'm failing to see how. If anyone has any pointers I'd be very grateful!
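One Server-Side Apply approach (a sketch under assumptions — the Deployment name, namespace, provider, and chart variables are placeholders) is to patch the pod template with the annotation kubectl rollout restart uses, and make the patch depend on the chart:
import * as k8s from "@pulumi/kubernetes";

// Bumping this annotation triggers a rolling restart of the Deployment — the same
// mechanism `kubectl rollout restart` uses. The timestamp changes on every
// `pulumi up`, so this restarts the Deployment on each run.
const restart = new k8s.apps.v1.DeploymentPatch("restart-after-chart", {
    metadata: {
        name: "my-deployment",          // placeholder: the Deployment created by the chart
        namespace: "my-namespace",      // placeholder
        annotations: { "pulumi.com/patchForce": "true" },   // take field ownership if the chart's manager conflicts
    },
    spec: {
        template: {
            metadata: {
                annotations: { "kubectl.kubernetes.io/restartedAt": new Date().toISOString() },
            },
        },
    },
}, { provider, dependsOn: [chart] });   // provider must have Server-Side Apply enabled; `chart` is the Helm chart resource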
bland-pharmacist-96854
02/09/2023, 8:29 AM
purple-beach-36424
02/14/2023, 12:23 AM
polite-summer-58169
02/14/2023, 8:19 AM
dazzling-oxygen-84405
02/15/2023, 1:52 PM
Our CI previews have started failing with:
warning: configured Kubernetes cluster is unreachable: unable to load schema information from the API server: Get "https://172.18.3.244:6443/openapi/v2?timeout=32s": dial tcp 172.18.3.244:6443: i/o timeout
error: Preview failed: failed to read resource state due to unreachable cluster. If the cluster has been deleted, you can edit the pulumi state to remove this resource
This is mostly expected, because the preview runs in CI, which doesn't have access to the VPN the cluster is on. However, this setup worked for several months until sometime between two weeks ago and today, so I'm curious whether anything has changed here. Downgrading the Pulumi version doesn't seem to make a difference, and we haven't changed the version of the Kubernetes provider.
flat-planet-10000
02/16/2023, 1:30 PM
I have haproxy/kubernetes-ingress installed from the Helm chart with:
export const loadBalancer = new k8s.helm.v3.Chart(
    "kubernetes-ingress",
    {
        chart: "kubernetes-ingress",
        version: "1.28.1",
        fetchOpts: {
            repo: "https://haproxytech.github.io/helm-charts",
        },
        values: {
            controller: {
                service: {
                    type: "LoadBalancer",
                },
            },
        },
        namespace: ingressNs.metadata.name,
    },
    { provider: cluster.k8sProvider },
);
And now on every pulumi up it wants to change the metadata from [secret] to a new object, like this:
~ [13]: {
        apiVersion: "v1"
        data      : [secret]
      ~ id        : "haproxy/kubernetes-ingress-default-cert" => output<string>
        kind      : "Secret"
      - metadata  : [secret]
      + metadata  : {
          + annotations: {
              + helm.sh/hook              : "pre-install"
              + helm.sh/hook-delete-policy: "before-hook-creation"
            }
          + labels     : {
              + app.kubernetes.io/instance  : "kubernetes-ingress"
              + app.kubernetes.io/managed-by: "pulumi"
              + app.kubernetes.io/name      : "kubernetes-ingress"
              + app.kubernetes.io/version   : "1.9.3"
              + helm.sh/chart               : "kubernetes-ingress-1.28.1"
            }
          + name       : "kubernetes-ingress-default-cert"
          + namespace  : "haproxy"
        }
        type      : "kubernetes.io/tls"
        urn       : "urn:pulumi:dev::danube::kubernetes:helm.sh/v3:Chart$kubernetes:core/v1:Secret::haproxy/kubernetes-ingress-default-cert"
      }
  ]
How can I tell Pulumi that id and metadata have not actually changed?
Thanks in advance
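One workaround sometimes used for a spurious diff like this (a sketch only — whether it is appropriate depends on why the diff appears) is to set ignoreChanges on the affected Secret through a Chart transformation, so Pulumi stops proposing the metadata update:
import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

export const loadBalancer = new k8s.helm.v3.Chart("kubernetes-ingress", {
    chart: "kubernetes-ingress",
    version: "1.28.1",
    fetchOpts: { repo: "https://haproxytech.github.io/helm-charts" },
    values: { /* ...as above... */ },
    namespace: ingressNs.metadata.name,
    transformations: [
        // Suppress the diff on the default-cert Secret's metadata.
        (obj: any, opts: pulumi.CustomResourceOptions) => {
            if (obj.kind === "Secret" && obj.metadata?.name === "kubernetes-ingress-default-cert") {
                opts.ignoreChanges = ["metadata"];
            }
        },
    ],
}, { provider: cluster.k8sProvider });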
early-plumber-68898
02/19/2023, 11:25 AM