# general
c
Friendly unsolicited advice: You guys should really consider slowly replacing the entirety of Gitlab's CI/CD offering. I already see a bit of it with the docker provider, and someone really needs to do to them what you're doing to terraform. Gitlab-ci.yaml files are just not powerful enough.
1
l
Have you tried using on-prem runners and custom images? There shouldn't be anything that you can't do using GitLabCI
The free runners are limited, but that's the nature of free...
c
We use an on-prem instance and custom images. But here's an example of a problem we have: Our frontend is React. React doesn't actually have any concept of runtime environment variables; they have to be present in an .env file at compilation. So the env vars have to be included when the react app is built.
You can probably see where I'm going with this. There's code duplication involved if you want to have an env var dictate, for example, a shared value between frontend and backend, or a property of an IaaS resource that you'd rather just retrieve dynamically via pulumi up.
In an ideal world I could go clone a monorepo and when I ran an apply, everything that remained modified or unbuilt would be recompiled or built by pulumi (maybe even in remote runners?). Pulumi already has the graph, it just needs to be extended to the code deployed onto applications.
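One way to bridge React's compile-time-only env vars (a minimal sketch, not the poster's actual setup — the file names and key prefix here are assumptions) is a small script that renders the frontend `.env` from a single shared config file, so the frontend build and the backend read the same source of truth:

```python
import json
import pathlib


def render_env(shared_config_path: str, env_path: str) -> None:
    """Render a React .env file from a shared JSON config so the
    frontend build and the backend read the same values."""
    shared = json.loads(pathlib.Path(shared_config_path).read_text())
    # Create React App only exposes variables prefixed with REACT_APP_
    # to the compiled bundle.
    lines = [f"REACT_APP_{key.upper()}={value}" for key, value in shared.items()]
    pathlib.Path(env_path).write_text("\n".join(lines) + "\n")
```

Run before `npm run build`; a Pulumi program could write `shared.json` from stack outputs so the value never gets duplicated by hand.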
l
That has nothing to do with GitLab though. That's just build-in-one-container-and-run-in-a-different-one problems.
And they're very solvable. Generating files, building images on the fly, mounting in interesting ways, generous use of vaults...
Example: scaling env vars / shared env files is often solved through shared value maps like redis.
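As a toy illustration of the shared-value-map idea (a plain dict stands in for redis here; a real setup would use `redis.Redis` with the same get/set shape):

```python
class SharedValues:
    """Tiny stand-in for a shared key-value store such as redis.
    One pipeline job publishes a value; a later job (or a pulumi
    program) reads it, instead of duplicating it in CI YAML."""

    def __init__(self):
        self._store = {}  # a real implementation would talk to redis

    def set(self, key: str, value: str) -> None:
        self._store[key] = value

    def get(self, key: str) -> str:
        return self._store[key]


# The build job publishes the digest of the image it just built...
store = SharedValues()
store.set("image_digest/examiner", "sha256:abc123")
# ...and the deploy job consumes it later.
print(store.get("image_digest/examiner"))
```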
c
I'm not saying the problem is unsolvable; I solved a similar problem via redis when it comes to figuring out the hash of the latest built container. But it requires a bunch of engineering around gitlab, and code duplication within gitlab. It would be much simpler for me to tell pulumi about the dependency between a pod and its docker image, and between the build process for that image and some infrastructure details, etc. Here's another one: I'd like to build every docker container (there are a lot) under a certain directory hierarchy with slight tweaks in tags/naming. This means repeating this set of gitlab code, mostly, for each app, instead of iterating over the directory:
```yaml
examiner_src:
  stage: build_containers
  image:
    name: gcr.io/kaniko-project/executor:debug-edge
    entrypoint: [""]
  variables:
    WORKDIR: "$CI_PROJECT_DIR/src/backend/examiner"
  artifacts:
    paths:
      - src/pulumi/digests
  script:
    - mkdir -p /kaniko/.docker
    - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > /kaniko/.docker/config.json
    - /kaniko/executor --cache=true --digest-file=src/pulumi/digests/examiner --context $WORKDIR --dockerfile $WORKDIR/Dockerfile --cache-repo $CI_REGISTRY/${CI_PROJECT_PATH}/cache --destination $CI_REGISTRY/${CI_PROJECT_PATH}/${CI_COMMIT_REF_NAME}/examiner:latest
  rules:
    - changes:
        - src/backend/examiner/**/*
```
It's possible that gitlab has some feature that I don't know about that has solved this, or that they have some templating/directory-enumeration system I can use. But if Gitlab gave me the kind of interface for writing these .yml files as Pulumi gave me for creating infrastructure, I wouldn't even have to find out about it and Gitlab wouldn't have to build it. I'd just write the code.
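For what it's worth, the duplication in the snippet above can be attacked today with a small generator script plus GitLab's `include:` or a dynamic child pipeline: enumerate the app directories and emit one kaniko job per app. A rough sketch, with the paths and naming assumed from the snippet above:

```python
import pathlib
from string import Template

# $$ escapes to a literal $, so $CI_* variables survive for GitLab.
JOB_TEMPLATE = Template("""\
${name}_src:
  stage: build_containers
  image:
    name: gcr.io/kaniko-project/executor:debug-edge
    entrypoint: [""]
  variables:
    WORKDIR: "$$CI_PROJECT_DIR/src/backend/${name}"
  script:
    - /kaniko/executor --context $$WORKDIR --dockerfile $$WORKDIR/Dockerfile --destination $$CI_REGISTRY/$${CI_PROJECT_PATH}/$${CI_COMMIT_REF_NAME}/${name}:latest
  rules:
    - changes:
        - src/backend/${name}/**/*
""")


def generate_jobs(backend_dir: str) -> str:
    """Emit one GitLab CI job per app directory containing a Dockerfile."""
    jobs = []
    for app in sorted(pathlib.Path(backend_dir).iterdir()):
        if (app / "Dockerfile").exists():
            jobs.append(JOB_TEMPLATE.substitute(name=app.name))
    return "\n".join(jobs)
```

A first pipeline stage runs this and hands the generated YAML to a `trigger: include:` child pipeline, so adding an app is just adding a directory.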
This is obviously not a ding @ pulumi btw; you guys are so competent at design that I'm just saying it could be a good idea to extend the model you've built to other areas
l
Yep, there's a downside to yaml-based development 🙂 That's what Pulumi is looking to solve. For the particular issue of "lots of Dockerfiles", I'd probably step away from using GitLabCI directly, and do it via a single docker-compose.yml that has a service for each Dockerfile. GitLabCI can call `docker-compose build`.
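A sketch of what that single file might look like (service names, paths, and registry are made up):

```yaml
# docker-compose.yml — one service per Dockerfile; `docker-compose build`
# builds them all, so GitLabCI needs only a single job to invoke it.
services:
  examiner:
    build:
      context: ./src/backend/examiner
    image: registry.example.com/myproject/examiner:latest
  scheduler:
    build:
      context: ./src/backend/scheduler
    image: registry.example.com/myproject/scheduler:latest
```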
c
+1 for Dean's unsolicited suggestion. There are a lot of problems yaml CI pipelines cannot solve in a DRY way. With GitHub Actions for example, it's impossible to have a top-level env var refer to another top-level env var. I'm forced to hardcode a value twice. I also cannot have parts of a workflow run depending on which branch I'm on. If I want a sliiiightly different workflow for another branch, I have to copypaste the entire file and change the one thing I want to be different. Huge pains.
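To make the first complaint concrete (a minimal fragment, not from the poster's repo): the `env` context isn't available inside workflow-level `env`, so this doesn't interpolate:

```yaml
# .github/workflows/ci.yml
env:
  BASE_URL: https://api.example.com
  # Invalid: the `env` context can't be used in workflow-level `env`,
  # so the /health URL has to be hardcoded a second time instead.
  HEALTH_URL: ${{ env.BASE_URL }}/health
```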
🙌 2
l
Currently suffering from exactly this with BitBucket pipelines. Several pipelines have duplicated unit test sections.. 😞
☹️ 1
w
This is something we’d love to support - and or foster an ecosystem project to support. One challenge we’ve had previously when looking at this is the lack of good REST APIs over this workflow configuration, making it difficult to build desired state systems around this. Curious if folks here have thoughts on concrete approaches a Pulumi provider could take to manage this?
l
It seems to me to be outside Pulumi's domain. It's closer to docker-compose, k8s or something like that. The advantages of using managed build systems like GitLab / Actions / BuildKite /... always come with flipsides like restricted permissions and limited DSLs/YAML configs. To get around those flipsides, I'd simply build my own build platform (using Pulumi), and use the managed build platform only to trigger a build...
But right now, the managed build system is good enough for me 🙂 Even with the workarounds.
c
The ability to use pulumi to manage/queue builds on a custom-made build platform is sort of exactly what I would be looking for. In my case, an extension to the docker provider so that it can queue builds on remote hosts/k8s pods would mean I'd have the primitives to build this sort of thing myself.
l
This has suddenly got a lot easier on AWS 🙂 They've just announced `kubectl exec`
c
https://www.pulumi.com/docs/reference/pkg/docker/image/#imageregistry ^Something like this, fed a compute component (or maybe just a k8s api key) generalized out to arbitrary builds, not just docker.
That I can depends_on later in my pulumi code. Then expanded to some sort of resources for stages in a build process like linting/checks, etc