# kubernetes
s
I would like to tack on to Liran's question about GH Actions. I can get this to work locally, but it does not work in GH. Does anyone have any suggestions?
```yaml
name: Pulumi preview 🚀
on:
  pull_request:
    paths:
    - 'eks/eks-dev/system/**'

jobs:
  preview:
    name: Preview
    runs-on: ubuntu-22.04
    steps:
      - uses: actions/checkout@v3
      - name: Setup node 18
        uses: actions/setup-node@v4
        with:
          node-version: 18
          registry-url: 'https://npm.pkg.github.com/'
          scope: '@openphone'
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-region: ${{ secrets.AWS_DEFAULT_REGION }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
      - name: npm install
        working-directory: ./eks/eks-dev/system
        run: npm install
        env:
          NODE_AUTH_TOKEN: ${{ secrets.GPR_ACCESS_TOKEN }}
      - name: print helm version
        run: helm version
      - name: Install latest helm
        uses: azure/setup-helm@v3
      - name: print helm version (again)
        run: helm version
      - uses: pulumi/actions@v4
        with:
          command: preview
          stack-name: openphone/dev-eks/dev-charts-system #this is the stack we are working on
          work-dir: ./eks/eks-dev/system
        env:
          PULUMI_ACCESS_TOKEN: ${{ secrets.PULUMI_ACCESS_TOKEN }}
```
d
What kind of errors are you seeing? Do you have any config set on your AWS provider, or for AWS in your stack/project configs (the Pulumi YAML files)?
s
Oh, that is a good question. As far as I know I am only using the k8s provider and a minimal Pulumi.yaml file.
```typescript
import * as aws from '@pulumi/aws'
import * as k8s from '@pulumi/kubernetes'
import * as pulumi from '@pulumi/pulumi'
import * as awsx from '@pulumi/awsx'

// output the stack name to pulumi so as to use config verification
export const stack = pulumi.getStack()

const config = new pulumi.Config('infra');

// generateKubeconfig (a Pulumi Output) is defined elsewhere; not shown in this snippet
export const kubeconfigString = generateKubeconfig.apply((kubeconfig) => JSON.stringify(kubeconfig))
export const k8sProvider: k8s.Provider = new k8s.Provider('k8sProvider', {
  kubeconfig: kubeconfigString,
})
```
That is about it.
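The Pulumi.yaml is just the usual couple of lines, roughly this (from memory, not the exact file):

```yaml
# Typical minimal Pulumi.yaml for a Node/TypeScript program (project name is a guess)
name: dev-charts-system
runtime: nodejs
```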
d
OK, but what's the error?
Too late, I'm traveling now. The most common issue I see with AWS working locally but not in CI is that people set the AWS profile in the Pulumi config instead of using environment variables or proper tooling to manage AWS profiles.
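For example, a stack config like this (hypothetical file and profile name, not necessarily yours) works locally because the profile exists in ~/.aws, but breaks on a runner that only has the key env vars from configure-aws-credentials:

```yaml
# Hypothetical Pulumi.dev-charts-system.yaml -- illustrating the failure mode, not the actual file
config:
  aws:profile: my-local-profile  # resolves against ~/.aws/credentials locally, doesn't exist on the CI runner
```

If something like that is in your stack file, dropping it and letting the provider pick up the env vars usually removes the local-vs-CI difference.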
s
Sorry it took so long, I had to look it up. There are two at present.
```
Run npm install
  npm install
  shell: /usr/bin/bash -e {0}
  env:
    NPM_CONFIG_USERCONFIG: /home/runner/work/_temp/.npmrc
    NODE_AUTH_TOKEN: ***
    AWS_DEFAULT_REGION: ***
    AWS_REGION: ***
    AWS_ACCESS_KEY_ID: ***
    AWS_SECRET_ACCESS_KEY: ***
Error: An error occurred trying to start process '/usr/bin/bash' with working directory '/home/runner/work/pulumi-deployments/pulumi-deployments/./eks/eks-dev/system'. No such file or directory
```
This was another error, but I need to check if it is still an issue:
```
kubernetes:helm.sh/v3:Release resource 'cert-manager': property chart value {cert-manager} has a problem: chart requires kubeVersion: >= 1.22.0-0 which is incompatible with Kubernetes v1.20.0; check the chart name and repository configuration.
```
d
It might be worth trying without the `./` prefix, I don't remember using it.
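i.e. the same npm install step with the plain relative path, something like this (untested):

```yaml
# Same npm install step as above, just without the ./ prefix (untested suggestion)
- name: npm install
  working-directory: eks/eks-dev/system
  run: npm install
  env:
    NODE_AUTH_TOKEN: ${{ secrets.GPR_ACCESS_TOKEN }}
```

The work-dir on the pulumi/actions step could get the same treatment if it turns out to matter.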