few-painting-77267
09/07/2020, 8:06 AM
few-painting-77267
09/07/2020, 8:07 AM
new Secret("<>", {
    metadata: {
        name: "<>",
        namespace: "default"
    },
    stringData: {
        auth: password.result.apply(p => {
            let hash = crypto.createHash('md5');
            hash.update(p);
            return hash.digest('base64')
        }).apply(p => `<>:${p}`)
    }
})
few-painting-77267
09/07/2020, 8:07 AM
eager-analyst-8893
09/07/2020, 2:21 PM
const istio = new k8s.yaml.ConfigFile("istio-1.7.yaml", {
    file: "istio/istio-1.7.yaml",
}, {dependsOn: istioSystemNamespace, providers: {kubernetes: k8sProvider}, customTimeouts: {create: "10m"}});
But maybe there is a more beautiful way to install it?
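One possibly cleaner route, if the Istio version in use publishes official Helm charts (newer releases, roughly 1.8 onwards, ship base and istiod charts; for 1.7 the ConfigFile approach above may still be the supported path), is to install them with k8s.helm.v3.Chart. A sketch, with the chart repo URL and chart names taken as assumptions:
import * as k8s from "@pulumi/kubernetes";

// Assumption: the Istio release in use publishes official Helm charts at this repo.
const istioRepo = "https://istio-release.storage.googleapis.com/charts";

const istioBase = new k8s.helm.v3.Chart("istio-base", {
    chart: "base",
    namespace: istioSystemNamespace.metadata.name,
    fetchOpts: { repo: istioRepo },
}, { provider: k8sProvider, dependsOn: [istioSystemNamespace] });

const istiod = new k8s.helm.v3.Chart("istiod", {
    chart: "istiod",
    namespace: istioSystemNamespace.metadata.name,
    fetchOpts: { repo: istioRepo },
}, { provider: k8sProvider, dependsOn: [istioBase] });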
nutritious-flower-51098
09/08/2020, 1:48 PM
const argocd = new kubernetes.helm.v3.Chart("argo", {
    repo: "argo",
    chart: "argo-cd",
    fetchOpts: {
        repo: "https://argoproj.github.io/argo-helm",
    },
    namespace: argoNamespace.metadata.name,
}, {
    provider: kubernetesProvider,
    dependsOn: [argoNamespace]
})
limited-rainbow-51650
09/08/2020, 6:36 PM
StatefulSet? This is what I have so far:
attachPostgreSQLConfiguration(args: pulumi.ResourceTransformationArgs): pulumi.ResourceTransformationResult | undefined {
    let props: pulumi.Input<kubernetes.types.input.apps.v1.StatefulSet> = args.props;
    let env: kubernetes.types.input.core.v1.EnvVar[] = props.spec.template.spec.containers[0].env;
    env = env.concat([
        {
            name: 'ORTHANC__POSTGRESQL__ENABLE_SSL',
            value: 'true'
        }
    ]);
    pulumi.log.info(`In psql plugin transformation: ${util.inspect(env)}`)
    props.spec.template.spec.containers[0].env = env
    return { props: props, opts: args.opts }
}
But I get a TypeScript error on props.spec saying Object is possibly 'undefined'
limited-rainbow-51650
09/08/2020, 6:37 PM
error TS2339: Property 'template' does not exist on type 'Input<StatefulSetSpec>'
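The Input<...> wrapper is what trips TypeScript up here: it can't prove that spec, template, and friends are plain objects rather than Promises or Outputs. If the StatefulSet props are built from plain object literals (an assumption), one common workaround is to cast the nested inputs to their plain shapes inside the transformation. A sketch:
attachPostgreSQLConfiguration(args: pulumi.ResourceTransformationArgs): pulumi.ResourceTransformationResult | undefined {
    // Assumes the StatefulSet props were provided as plain object literals,
    // so the nested Input<> values can safely be treated as their plain shapes.
    const props = args.props as kubernetes.types.input.apps.v1.StatefulSet;
    const spec = props.spec as kubernetes.types.input.apps.v1.StatefulSetSpec;
    const template = spec.template as kubernetes.types.input.core.v1.PodTemplateSpec;
    const podSpec = template.spec as kubernetes.types.input.core.v1.PodSpec;
    const containers = podSpec.containers as kubernetes.types.input.core.v1.Container[];
    const env = (containers[0].env ?? []) as kubernetes.types.input.core.v1.EnvVar[];
    containers[0].env = env.concat([
        { name: 'ORTHANC__POSTGRESQL__ENABLE_SSL', value: 'true' },
    ]);
    pulumi.log.info(`In psql plugin transformation: ${util.inspect(env)}`);
    return { props, opts: args.opts };
}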
dazzling-sundown-39670
09/09/2020, 8:47 AM
pulumi up. I read somewhere that I should do a CronJob without a schedule but the TypeScript didn't like that. Any other suggestions?
clever-byte-21551
09/09/2020, 5:28 PMp, err := providers.NewProvider(ctx, "kubernetes", &providers.ProviderArgs{
Kubeconfig: pulumi.StringPtr(k.kubeConfig),
})
if err != nil {
return errors.Wrap(err, "failed to create k8s provider")
}
_, err = helmv3.NewChart(ctx, "my-chart", helmv3.ChartArgs{
Path: pulumi.String(filepath.Join(chartsPath, "my-chart")),
Namespace: pulumi.String("some-ns"),
}, pulumi.Provider(p))
if err != nil {
return errors.Wrap(err, "failed to deploy chart")
}
I’m receiving this error:
failed to deploy k8s stack: failed to update stack: code: 0
, stdout: Updating (yarin/test-aws/k8s_us-west-2):
pulumi:pulumi:Stack: (same)
[urn=urn:pulumi:yarin/test-aws/k8s_us-west-2::test-aws::pulumi:pulumi:Stack::test-aws-yarin/test-aws/k8s_us-west-2]
+ pulumi:providers:kubernetes: (create)
[urn=urn:pulumi:yarin/test-aws/k8s_us-west-2::test-aws::pulumi:providers:kubernetes::kubernetes]
kubeconfig: \"CENSORED\"
+ 1 created
1 unchanged
Duration: 5s
, stderr: engine: 127.0.0.1:57240 resmon: 127.0.0.1:57252 error: expected '::' in provider reference ''
: failed to run update: failed to run inline program and shutdown gracefully: rpc error: code = Unavailable desc = transport is closing"
So I’m not sure what
expected '::' in provider reference ''
is supposed to mean.
worried-needle-99800
09/09/2020, 8:20 PM
crd2pulumi, a CLI tool that generates typed CustomResources based on a k8s CRD, has moved to a stand-alone repo at pulumi/crd2pulumi.
You can download the latest release here, which now supports Python and C#. Please reach out to me if you run into any bugs!
kind-mechanic-53546
09/09/2020, 11:42 PM
worried-ambulance-50217
09/10/2020, 12:23 AM
limited-rainbow-51650
09/11/2020, 2:38 PM
pulumi is hanging/waiting during a preview (or the preview part of up)? This started happening since I deployed cert-manager v0.16.1 on a k8s 1.15 cluster.
clever-byte-21551
09/14/2020, 7:36 AM
hundreds-receptionist-31352
09/14/2020, 3:38 PM
faint-motherboard-95438
09/15/2020, 2:44 PM
abundant-airplane-93796
09/17/2020, 1:08 AM
istio-ingressgateway service as a resource so that I can extract the value of some annotations to use in some later resources.
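If the Service already exists on the cluster (for example created by the Istio install rather than by this program), one option is to read it with the static Service.get and pull the annotations off the resulting outputs. A sketch, assuming it lives in istio-system and that an explicit provider (k8sProvider) is in scope; the annotation key is hypothetical:
import * as k8s from "@pulumi/kubernetes";

// Read the existing Service; for namespaced resources the ID is "<namespace>/<name>".
const gw = k8s.core.v1.Service.get(
    "istio-ingressgateway",
    "istio-system/istio-ingressgateway",
    { provider: k8sProvider },  // assumption: an explicit provider is in scope
);

// metadata is an Output; apply() extracts a single annotation value for later resources.
const lbAnnotation = gw.metadata.apply(
    m => (m.annotations ?? {})["some.annotation/key"],  // hypothetical annotation key
);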
creamy-forest-42826
09/17/2020, 11:54 AM
pulumi.all(...).apply(...)
for (let i = 0; i < args.containerConf.replicas; i++) {
    const service = new k8s.core.v1.Service(
        `zooservice-${i}`,
        {
            metadata: {
                name: `zooservice-${i}`,
                labels: {
                    app: ZookeeperApp,
                },
            },
            spec: {
                ports: [
                    { port: 2181, name: "client" },
                    { port: 2888, name: "server" },
                    { port: 3888, name: "leader-election" },
                ],
            },
        },
        {
            parent: this,
        }
    );
    this.zkServices.push(service);
}
const inputIPs = this.getZookeeperServicesIPs();
pulumi.all(inputIPs).apply((servicesIPs) => {
    servicesIPs.forEach((ip) => {
        console.log(ip);
    });
    for (let i = 0; i < args.containerConf.replicas; i++) {
        const zookeeper = new Zookeeper(
            `zookeeper-${i}`,
            {
                containerConf: args.containerConf,
                imagePullSecrets: args.imagePullSecrets,
                serviceIPs: servicesIPs,
            },
            {
                parent: this,
            }
        );
        this.zookeepers.push(zookeeper);
    }
});
PS: I cannot use DNS names.
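Creating resources inside apply is generally discouraged, since pulumi preview can't see what the callback will create. Because resource arguments accept Outputs, one alternative is to hand the unresolved IPs straight to the component. A sketch, assuming the Zookeeper component's serviceIPs argument can be typed as pulumi.Input<string>[] (or pulumi.Output<string>[]):
// The unresolved Outputs can be passed directly to the component;
// Pulumi resolves them before the underlying resources are created.
const inputIPs = this.getZookeeperServicesIPs();
for (let i = 0; i < args.containerConf.replicas; i++) {
    const zookeeper = new Zookeeper(
        `zookeeper-${i}`,
        {
            containerConf: args.containerConf,
            imagePullSecrets: args.imagePullSecrets,
            serviceIPs: inputIPs,  // assumption: typed as pulumi.Input<string>[]
        },
        { parent: this },
    );
    this.zookeepers.push(zookeeper);
}
Inside Zookeeper, the IPs can then be combined wherever a concrete value is needed (for example pulumi.all(args.serviceIPs).apply(...) as a container env value) without constructing resources in the callback.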
dazzling-sundown-39670
09/17/2020, 2:11 PM
pulumi up: it takes down all my cluster nodes and starts them again because of a new template body. Can I avoid this? This time I didn't even change anything related to the cluster.
09/18/2020, 7:27 PMException: invocation of kubernetes:yaml:decode returned an error: error converting YAML to JSON: yaml: line 128: mapping values are not allowed in this context
brash-waiter-73733
09/19/2020, 12:34 PM
I'm using render_yaml_to_directory to output YAML files from my Python code.
However, I noticed (as documented) that this means I can’t then deploy those resources to the cluster with Pulumi as well.
Does anyone have a pattern for doing both? i.e.
1. run pulumi up
2. deploy to cluster
3. generate YAML
I tried:
• Passing providers and a list of providers, but that didn't seem to revert to defaults ❌
• Using a config as a toggle between two providers, but this led to state problems ❌
• Abstracting the resources, and then applying twice in the same script, but Pulumi complains about them having the same name ❌
Appreciate this has a nice BETA FEATURE warning. I'd be interested if anyone has a pattern for doing the above, or if this might be supported in the future.
:param pulumi.Input[str] render_yaml_to_directory: BETA FEATURE - If present, render resource manifests to this directory. In this mode, resources will not
be created on a Kubernetes cluster, but the rendered manifests will be kept in sync with changes
to the Pulumi program. This feature is in developer preview, and is disabled by default.
Note that some computed Outputs such as status fields will not be populated
since the resources are not created on a Kubernetes cluster. These Output values will remain undefined,
and may result in an error if they are referenced by other resources. Also note that any secret values
used in these resources will be rendered in plaintext to the resulting YAML.
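For the "abstract the resources and apply twice" route, the name clash can be avoided by parameterising the Pulumi resource names and instantiating the set once per provider: one provider that talks to the cluster and one with renderYamlToDirectory set. A sketch in TypeScript (the Python provider option is the same idea as render_yaml_to_directory); defineApp and the resource names here are illustrative, not an existing API:
import * as k8s from "@pulumi/kubernetes";

// Two providers: one that applies to the cluster, one that only renders manifests.
const clusterProvider = new k8s.Provider("cluster", {});
const renderProvider = new k8s.Provider("render", {
    renderYamlToDirectory: "rendered",  // BETA: writes manifests instead of applying them
});

// Illustrative helper: creates the app's resources with a per-provider name suffix
// so the two instantiations don't collide in the state.
function defineApp(suffix: string, provider: k8s.Provider) {
    return new k8s.core.v1.ConfigMap(`app-config-${suffix}`, {
        data: { greeting: "hello" },
    }, { provider });
}

defineApp("deploy", clusterProvider);   // applied to the cluster
defineApp("render", renderProvider);    // written to ./rendered as YAML
One caveat: the two copies get different auto-generated metadata.name values, so if the rendered YAML must match what was deployed, set metadata.name explicitly.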
salmon-account-74572
09/21/2020, 7:49 PM
salmon-account-74572
09/21/2020, 8:28 PM
kustomize support to also render YAML to a directory? (It appears as if Pulumi's kustomize support is acting more like kubectl -k as opposed to standalone kustomize.)
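Since renderYamlToDirectory is a provider-level option, it should in principle also cover the resources that kustomize.Directory registers through that provider, though that is an assumption rather than something verified here. A minimal sketch, with the kustomization path as a placeholder:
import * as k8s from "@pulumi/kubernetes";

// Provider that renders manifests to disk instead of applying them (BETA).
const renderProvider = new k8s.Provider("render", {
    renderYamlToDirectory: "rendered",
});

// Run the kustomization through the rendering provider; "./kustomize-dir" is a placeholder path.
const app = new k8s.kustomize.Directory("app", {
    directory: "./kustomize-dir",
}, { provider: renderProvider });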
bitter-application-91815
09/23/2020, 8:34 AM
bitter-application-91815
09/23/2020, 8:34 AM
bitter-application-91815
09/23/2020, 8:34 AM
kubectl apply -f https://raw.githubusercontent.com/kubernetes/autoscaler/master/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-autodiscover.yaml
kubectl -n kube-system annotate deployment.apps/cluster-autoscaler cluster-autoscaler.kubernetes.io/safe-to-evict="false"
bitter-application-91815
09/23/2020, 8:35 AM
bitter-application-91815
09/23/2020, 8:35 AM
0922 12:44:54.845750 1 aws_manager.go:265] Failed to regenerate ASG cache: cannot autodiscover ASGs: AccessDenied: User: arn:aws:sts::919601712473:assumed-role/staging-f-exec-node-role-7ca4419/i-01b2a7eb09e0d618b is not authorized to perform: autoscaling:DescribeTags
status code: 403, request id: 1a3191cd-ffda-4ee8-bdd9-8f3af2b1af93
F0922 12:44:54.845780 1 aws_cloud_provider.go:382] Failed to create AWS Manager: cannot autodiscover ASGs: AccessDenied: User: arn:aws:sts::919601712473:assumed-role/staging-f-exec-node-role-7ca4419/i-01b2a7eb09e0d618b is not authorized to perform: autoscaling:DescribeTags
status code: 403, request id: 1a3191cd-ffda-4ee8-bdd9-8f3af2b1af93
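That 403 means the node instance role lacks the autoscaling permissions cluster-autoscaler needs for ASG auto-discovery. One way to grant them from Pulumi is to attach the policy from the cluster-autoscaler AWS docs to the node role. A TypeScript sketch; nodeRole is a stand-in for however the node group's IAM role is exposed in your program:
import * as aws from "@pulumi/aws";

// Permissions from the cluster-autoscaler AWS cloudprovider docs, attached to the
// role the nodes run under (here assumed to be available as nodeRole).
const autoscalerPolicy = new aws.iam.RolePolicy("cluster-autoscaler", {
    role: nodeRole.name,  // assumption: the node group's IAM role resource
    policy: JSON.stringify({
        Version: "2012-10-17",
        Statement: [{
            Effect: "Allow",
            Action: [
                "autoscaling:DescribeAutoScalingGroups",
                "autoscaling:DescribeAutoScalingInstances",
                "autoscaling:DescribeLaunchConfigurations",
                "autoscaling:DescribeTags",
                "autoscaling:SetDesiredCapacity",
                "autoscaling:TerminateInstanceInAutoScalingGroup",
                "ec2:DescribeLaunchTemplateVersions",
            ],
            Resource: "*",
        }],
    }),
});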
bitter-application-91815
09/23/2020, 8:36 AM
bitter-application-91815
09/23/2020, 8:36 AM