chilly-garage-80867
09/18/2020, 7:27 PM
Exception: invocation of kubernetes:yaml:decode returned an error: error converting YAML to JSON: yaml: line 128: mapping values are not allowed in this context
brash-waiter-73733
09/19/2020, 12:34 PM
I'm using render_yaml_to_directory to output YAML files from my Python code.
However, I noticed (as documented) that this means I can’t then deploy those resources to the cluster with Pulumi as well.
Does anyone have a pattern for doing both? i.e.
1. run pulumi up
2. deploy to cluster
3. generate YAML
I tried:
• Passing provider and a list of providers, but that seemed to revert to defaults ❌
• Using a config as a toggle between two providers, but this led to state problems ❌
• Abstracting the resources, and then applying twice in the same script, but Pulumi complains about them having the same name ❌
I appreciate this has a nice BETA FEATURE warning. I’d be interested if anyone has a pattern for doing the above, or if this might be supported in the future.
:param pulumi.Input[str] render_yaml_to_directory: BETA FEATURE - If present, render resource manifests to this directory. In this mode, resources will not
be created on a Kubernetes cluster, but the rendered manifests will be kept in sync with changes
to the Pulumi program. This feature is in developer preview, and is disabled by default.
Note that some computed Outputs such as status fields will not be populated
since the resources are not created on a Kubernetes cluster. These Output values will remain undefined,
and may result in an error if they are referenced by other resources. Also note that any secret values
used in these resources will be rendered in plaintext to the resulting YAML.
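One workaround to try (a sketch, not from the thread): create one explicit provider that renders to a directory, define the resources in a function, and instantiate them twice with distinct names so the URNs don't collide. Shown in TypeScript for illustration; the provider option is render_yaml_to_directory in Python. Resource names and values here are placeholders.

import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

// Renders manifests to ./rendered instead of talking to a cluster.
const renderProvider = new k8s.Provider("render", { renderYamlToDirectory: "rendered" });

// Define resources once, then instantiate them twice with distinct names so the
// URNs don't collide: once against the default (cluster) provider, once rendered.
function appResources(suffix: string, opts?: pulumi.CustomResourceOptions) {
    return new k8s.core.v1.ConfigMap(`app-config-${suffix}`, {
        data: { greeting: "hello" },
    }, opts);
}

appResources("deploy");                                // applied to the cluster
appResources("render", { provider: renderProvider });  // written as YAML under ./rendered

The obvious downside is that every resource is tracked twice in the stack's state, once per copy.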
salmon-account-74572
09/21/2020, 7:49 PM
salmon-account-74572
09/21/2020, 8:28 PM
Is there a way to get the kustomize support to also render YAML to a directory? (It appears as if Pulumi's kustomize support is acting more like kubectl -k as opposed to standalone kustomize.)
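For concreteness, a sketch of what's being asked (the directory path and names are placeholders; whether the kustomize output actually lands in the render directory is exactly the open question):

import * as k8s from "@pulumi/kubernetes";

const renderProvider = new k8s.Provider("render", { renderYamlToDirectory: "rendered" });

// Expand the kustomization like standalone kustomize would, but ideally
// write the result to ./rendered rather than applying it to a cluster.
const app = new k8s.kustomize.Directory("overlay", {
    directory: "./overlays/staging",
}, { provider: renderProvider });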
bitter-application-91815
09/23/2020, 8:34 AM
bitter-application-91815
09/23/2020, 8:34 AM
bitter-application-91815
09/23/2020, 8:34 AM
kubectl apply -f https://raw.githubusercontent.com/kubernetes/autoscaler/master/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-autodiscover.yaml
kubectl -n kube-system annotate deployment.apps/cluster-autoscaler cluster-autoscaler.kubernetes.io/safe-to-evict="false"
bitter-application-91815
09/23/2020, 8:35 AM
bitter-application-91815
09/23/2020, 8:35 AM
0922 12:44:54.845750 1 aws_manager.go:265] Failed to regenerate ASG cache: cannot autodiscover ASGs: AccessDenied: User: arn:aws:sts::919601712473:assumed-role/staging-f-exec-node-role-7ca4419/i-01b2a7eb09e0d618b is not authorized to perform: autoscaling:DescribeTags
status code: 403, request id: 1a3191cd-ffda-4ee8-bdd9-8f3af2b1af93
F0922 12:44:54.845780 1 aws_cloud_provider.go:382] Failed to create AWS Manager: cannot autodiscover ASGs: AccessDenied: User: arn:aws:sts::919601712473:assumed-role/staging-f-exec-node-role-7ca4419/i-01b2a7eb09e0d618b is not authorized to perform: autoscaling:DescribeTags
status code: 403, request id: 1a3191cd-ffda-4ee8-bdd9-8f3af2b1af93
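The 403 above suggests the node instance role lacks the cluster-autoscaler auto-discovery permissions. A minimal sketch of granting them with Pulumi; the role name is taken from the error message, and the action list mirrors the upstream cluster-autoscaler example policy:

import * as aws from "@pulumi/aws";

// Attach the cluster-autoscaler permissions to the node instance role
// referenced in the AccessDenied error above.
const autoscalerPolicy = new aws.iam.RolePolicy("cluster-autoscaler", {
    role: "staging-f-exec-node-role-7ca4419",
    policy: JSON.stringify({
        Version: "2012-10-17",
        Statement: [{
            Effect: "Allow",
            Action: [
                "autoscaling:DescribeAutoScalingGroups",
                "autoscaling:DescribeAutoScalingInstances",
                "autoscaling:DescribeLaunchConfigurations",
                "autoscaling:DescribeTags",
                "autoscaling:SetDesiredCapacity",
                "autoscaling:TerminateInstanceInAutoScalingGroup",
                "ec2:DescribeLaunchTemplateVersions",
            ],
            Resource: "*",
        }],
    }),
});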
bitter-application-91815
09/23/2020, 8:36 AM
bitter-application-91815
09/23/2020, 8:36 AM
bitter-application-91815
09/23/2020, 1:05 PM
bitter-application-91815
09/23/2020, 1:05 PM
salmon-account-74572
09/23/2020, 10:16 PM
The docs say of the kustomize support that transformations "happen in memory, and are not persisted to disk." Is this still true if the RenderYamlToDirectory property is set on the Kubernetes provider?
salmon-account-74572
09/23/2020, 11:00 PM
I have this Kustomization:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
and I'd like to add this to that YAML:
patchesJson6902:
- path: name.json
  target:
    group: infrastructure.cluster.x-k8s.io
    kind: AWSCluster
    name: base
    version: v1alpha3
Is this possible using a transformation?
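Not an answer from the thread, but one way to get roughly the same effect as that JSON6902 patch is to mutate the rendered object directly in a transformation (the directory path and the new name below are placeholders):

import * as k8s from "@pulumi/kubernetes";

const overlay = new k8s.kustomize.Directory("overlay", {
    directory: "./overlays/staging", // placeholder path to the kustomization above
    transformations: [
        (obj: any) => {
            // Roughly what a patchesJson6902 entry targeting
            // infrastructure.cluster.x-k8s.io AWSCluster "base" would do.
            if (obj.kind === "AWSCluster" && obj.metadata?.name === "base") {
                obj.metadata.name = "my-cluster"; // placeholder replacement value
            }
        },
    ],
});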
fierce-memory-34976
09/24/2020, 2:46 PM
message: 'the HPA was unable to compute the replica count: missing request for cpu'
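That HPA condition usually means the target Deployment's containers declare no CPU request, so the HPA has nothing to compute utilization against. A minimal sketch with placeholder names and image:

import * as k8s from "@pulumi/kubernetes";

const app = new k8s.apps.v1.Deployment("app", {
    spec: {
        replicas: 1,
        selector: { matchLabels: { app: "app" } },
        template: {
            metadata: { labels: { app: "app" } },
            spec: {
                containers: [{
                    name: "app",
                    image: "nginx:1.19", // placeholder image
                    resources: {
                        // A CPU request is required for a CPU-based HPA to compute utilization.
                        requests: { cpu: "100m" },
                    },
                }],
            },
        },
    },
});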
bitter-application-91815
09/24/2020, 8:05 PM
bitter-application-91815
09/24/2020, 8:05 PM
witty-vegetable-61961
09/25/2020, 9:48 PM
worried-city-86458
09/29/2020, 3:35 AM
The vpc-resource-controller-role cluster role, vpc-resource-controller-role-binding cluster role binding, and vpc-resource-controller service account already exist after standing up a new EKS cluster, while the vpc-resource-controller deployment does not exist. The cluster role is also different from the guide's download link and is missing config map access (so it needs modification anyway),
... so I want to delete the lot iff they exist (and are not managed by Pulumi) and create new ones with consistent vpc-resource-controller names throughout, which would leave duplicates for the cluster role and cluster role binding, and clashes with the service account, if I can't delete the pre-existing ones first. 🤔
worried-city-86458
09/30/2020, 5:11 AM
vpc-admission-webhook?
melodic-printer-39640
09/30/2020, 12:00 PM
const nginxController = new k8s.helm.v3.Chart(`core1-${stack}`, {
    version: "3.3.0",
    chart: "ingress-nginx",
    fetchOpts: {
        repo: "https://kubernetes.github.io/ingress-nginx"
    },
    values: {
        controller: {
            admissionWebhooks: {
                enabled: true,
                patch: {
                    enabled: true
                }
            },
            service: {
                annotations: {
                    "external-dns.alpha.kubernetes.io/hostname": "mydomain.net"
                },
                externalTrafficPolicy: 'Local',
            },
            config: {
                "use-forwarded-headers": 'true'
            }
        }
    }
}, { provider: cluster.provider });
melodic-printer-39640
09/30/2020, 12:00 PM
melodic-printer-39640
09/30/2020, 12:01 PM
nutritious-flower-51098
09/30/2020, 12:13 PM
melodic-printer-39640
09/30/2020, 12:59 PM
If I remove "helm.sh/hook": pre-install,pre-upgrade from metadata/annotations in the Job template, then Pulumi creates the Job. It looks like those hooks are not supported properly: I got this problem while doing a clean install.
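Pulumi's Chart resource renders the templates client-side, so Helm hooks never run as hooks. If the hook-annotated Job does show up in the rendered output, one workaround (an assumption, not confirmed in the thread) is to strip the annotation with a transformation so the Job is created like any other resource; the chart name and repo below are placeholders:

import * as k8s from "@pulumi/kubernetes";

const chart = new k8s.helm.v3.Chart("my-chart", {
    chart: "example",
    fetchOpts: { repo: "https://example.github.io/charts" },
    transformations: [
        (obj: any) => {
            const annotations = obj?.metadata?.annotations;
            if (annotations && annotations["helm.sh/hook"]) {
                // Remove hook annotations so the resource is treated as a plain manifest.
                delete annotations["helm.sh/hook"];
                delete annotations["helm.sh/hook-delete-policy"];
            }
        },
    ],
});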
melodic-printer-39640
09/30/2020, 1:52 PM
limited-rainbow-51650
09/30/2020, 3:33 PM
I'm adding two exports to my Pulumi project from a k8s Service resource but only a single output is created:
exports.temporalFrontendEndpoint = temporal_frontendService.spec.externalName;
exports.temporalFrontEndName = temporal_frontendService.metadata.name;
results in:
--outputs:--
+ temporalFrontEndName: "temporal-frontend-vw2qfr65"
Any idea why the externalName is not created as an output?
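Likely because spec.externalName is only populated for Services of type ExternalName; for other Service types that field is undefined, and exports that resolve to undefined are dropped from the stack outputs. Exporting a field the Service actually has avoids that (the field chosen below is an assumption about what's wanted):

exports.temporalFrontEndName = temporal_frontendService.metadata.name;
// For a ClusterIP/LoadBalancer Service, export a field that is populated:
exports.temporalFrontendClusterIp = temporal_frontendService.spec.clusterIP;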
mammoth-afternoon-82670
09/30/2020, 6:54 PM
mammoth-afternoon-82670
09/30/2020, 6:54 PM
mammoth-afternoon-82670
09/30/2020, 6:54 PM