# kubernetes
b
I was wondering if it is possible to have the config on the stack spec be map[string]interface{}? For instance if I want some nested maps in my config it currently isn’t possible
The use case I am thinking about would be having some generic components (programs) that could be used with Kubevela
h
i think this is essentially https://github.com/pulumi/pulumi-kubernetes-operator/issues/258 as a workaround, if you’re using v2 of the operator, you should be able to mount a configmap into your program’s directory containing whatever structure you want
b
> i think this is essentially https://github.com/pulumi/pulumi-kubernetes-operator/issues/258

Yes, reading the description it is

> as a workaround, if you’re using v2 of the operator, you should be able to mount a configmap into your program’s directory containing whatever structure you want

hmm I’ll look to see how this works. Got a link?
h
https://www.pulumi.com/blog/pulumi-kubernetes-operator-2-0/ in particular the workspaceTemplate field on the Stack is how you can customize the workspace’s pod
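As a rough sketch of what that looks like, a Stack's `workspaceTemplate.spec` embeds the workspace settings, including a `podTemplate` for the underlying pod (the stack name and image below are placeholders):

```yaml
# Minimal sketch of customizing the workspace pod via workspaceTemplate.
# Stack name, org/project/stack, and image tag are placeholder values.
apiVersion: pulumi.com/v1
kind: Stack
metadata:
  name: example-stack
spec:
  stack: org/project/dev
  workspaceTemplate:
    spec:
      image: pulumi/pulumi:latest   # override the workspace image
      podTemplate:
        spec:
          # initContainers, volumes, env, etc. can be added here
          containers: []
```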
b
I am trying this out and I am getting
```
cmd.serve  unable to get the project settings: unable to find project settings in workspace
```
any pointers to what the cause may be?
h
we can make this error more helpful, but it indicates it’s not finding a Pulumi.yaml in the configured directory
b
interesting
I am using a programRef inside of a Stack
I also find it interesting that programs seem to get stored as flux artifacts?
The tar.gz has a Pulumi.yaml so that checks out
> You simply inject an init container, mount the `share` volume to `/share`, and then place the project files into `/share/workspace`
This seems like it might be the issue. As a workaround to get structured config into the program, I had created an init container that dropped in the stack config (i.e. the Pulumi.dev.yaml file), and then expected the Pulumi.yaml file to be put there per the normal flow. However, the normal flow looks like it fetches the artifact for the programRef and then tries to symlink that dir to /share/workspace
h
ah ok so this might be an issue with customizing programRefs specifically
b
yeah or I am not doing it right
basic gist of what I tried
```yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: s3-bucket-stack-config
data:
  Pulumi.dev.yaml: |
    aws:region: us-east-1
    s3-bucket:name: my-bucket-test
---
apiVersion: pulumi.com/v1
kind: Program
metadata:
  name: s3-bucket
program:
  outputs:
    bucketName: "${bucket.bucket}"
  resources:
    bucket:
      type: aws:s3/Bucket
      properties:
        bucketPrefix: "${s3-bucket:name}-"
        acl: "private"
        tags:
          Name: "${s3-bucket:name}"
---
apiVersion: pulumi.com/v1
kind: Stack
metadata:
  name: s3-bucket-stack
spec:
  stack: org/s3-op-project/dev
  envRefs:
    PULUMI_ACCESS_TOKEN:
      type: Secret
      secret:
        name: pulumi-api-secret
        key: accessToken
    AWS_ACCESS_KEY_ID:
      type: Secret
      secret:
        name: pulumi-aws-secrets
        key: AWS_ACCESS_KEY_ID
    AWS_SECRET_ACCESS_KEY:
      type: Secret
      secret:
        name: pulumi-aws-secrets
        key: AWS_SECRET_ACCESS_KEY
  programRef:
    name: s3-bucket
  workspaceTemplate:
    spec:
      image: pulumi/pulumi:3.142.0-arm64
      podTemplate:
        spec:
          initContainers:
          - name: extra
            image: busybox
            command: ["sh", "-c", "mkdir -p /share/workspace && cp /stack-config/Pulumi.dev.yaml /share/workspace/Pulumi.dev.yaml"]
            volumeMounts:
            - name: share
              mountPath: /share
            - name: stack-config
              mountPath: /stack-config
          containers: []
          volumes:
          - name: stack-config
            configMap:
              name: s3-bucket-stack-config
```
h
instead of an init container, try mounting the cm in the pulumi container with a mount path of /share/workspace/Pulumi.dev.yaml
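That suggestion could be sketched like this, using a `subPath` mount so only the one file lands in the workspace directory (the container name `pulumi` is an assumption about the workspace pod's main container):

```yaml
# Hedged sketch: mount the ConfigMap file directly into the pulumi
# container instead of copying it with an init container. The container
# name "pulumi" and the /share/workspace path are assumptions.
workspaceTemplate:
  spec:
    podTemplate:
      spec:
        containers:
        - name: pulumi
          volumeMounts:
          - name: stack-config
            mountPath: /share/workspace/Pulumi.dev.yaml
            subPath: Pulumi.dev.yaml   # mount only this key as a file
        volumes:
        - name: stack-config
          configMap:
            name: s3-bucket-stack-config
```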
b
ok, trying
Hitting a different error, but Monday is a new day and I will pick it back up then
Ever hit this?
```
pulumi  error: installing plugin; run pulumi plugin install resource aws v6.64.0 to retry manually: getting current user: user: Current requires cgo or $USER set in environment
```
h
@icy-controller-6092 that can happen if your user doesn’t have a home directory. are you seeing it during the built-in plugin install step or during something else?
b
I figured it out I think
I am on arm, so I built my own kitchen-sink image. I see now you are using the nonroot one, which sets up the user, so I set my target to that and rebuilt
seems to work now