# getting-started
h
Hello, I'm looking for guidance on how to structure our GitHub repositories. First of all, I need help grasping the concept of a "Project". Let's say I have a web server, a load balancer, and an app server. Should the definition of that infrastructure go into one project or three projects?
Assuming it's three "Projects" like this:
```
├── ec2/
│   ├── index.ts
│   └── config.yaml
├── load-balancer/
│   ├── index.ts
│   └── config.yaml
├── app-server/
│   ├── index.ts
│   └── config.yaml
```
How do I then iterate through this with GitHub Actions? I really don't want to:
```yaml
- name: Build Each Project
  run: |
    cd ec2
    pulumi preview --non-interactive
    cd ..
    cd load-balancer
    pulumi preview --non-interactive
```
Which brings me to the conclusion that maybe a monorepo setup is not optimal.
But then if I were to use a separate repo for each of the "projects" above, how would I notify, for example, the load-balancer project about a change that happened in the ec2 project?
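From what I've read, Pulumi's StackReference seems built for exactly this: one project reading another stack's outputs. A rough sketch of what the load-balancer project could look like; the "myorg/ec2/dev" name and the vpcId/instanceId output names are my own placeholders:
```typescript
// load-balancer/index.ts — consume outputs exported by the ec2 project.
// Assumes ec2/index.ts exports something like:
//   export const vpcId = vpc.id; export const instanceId = instance.id;
import * as pulumi from "@pulumi/pulumi";
import * as aws from "@pulumi/aws";

// "myorg/ec2/dev" is a placeholder for <org>/<project>/<stack>.
const ec2 = new pulumi.StackReference("myorg/ec2/dev");

// Register the ec2 project's instance with this project's target group.
const tg = new aws.lb.TargetGroup("web", {
    port: 80,
    protocol: "HTTP",
    vpcId: ec2.getOutput("vpcId"),
});
new aws.lb.TargetGroupAttachment("web", {
    targetGroupArn: tg.arn,
    targetId: ec2.getOutput("instanceId"),
});
```
The "notify" part would then reduce to re-running `pulumi up` for the load-balancer stack whenever the ec2 stack changes, e.g. by triggering its workflow after the ec2 one succeeds.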
The examples above are not what I'm building. I'm moving a complex infrastructure from Terraform to Pulumi. There are dozens of different AWS services in use.
So perhaps a "project" is a "web service" in this example. In it, I would define EC2 instance(s), a load balancer, a target group, an app server, a db server, etc.?
I would love to hear or see examples of how you have structured complex projects.
Really appreciate any input.
i
Why not have each project run its own pipeline, yielding build artifacts? Do you want to use Pulumi to build the artifacts? I use Pulumi to manage repos, setting env vars, permissions and so on, but the rest is up to whatever you use for CI: GitLab CI in my case, GitHub Actions in yours. Or do you want to deploy it from the pipeline and use Pulumi for that?
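By "manage repos" I mean roughly this kind of thing (a sketch with the @pulumi/github provider; the repo name and the config key are made up):
```typescript
import * as pulumi from "@pulumi/pulumi";
import * as github from "@pulumi/github";

// Declare the repository itself as infrastructure.
const repo = new github.Repository("load-balancer", {
    name: "load-balancer",
    visibility: "private",
});

// Seed a CI secret so the repo's own pipeline can run Pulumi.
const cfg = new pulumi.Config();
new github.ActionsSecret("pulumi-token", {
    repository: repo.name,
    secretName: "PULUMI_ACCESS_TOKEN",
    plaintextValue: cfg.requireSecret("pulumiCiToken"), // made-up config key
});
```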
h
Artifacts are built on their own in GitHub Actions. I will use Pulumi strictly to define the infrastructure that the application runs on.
Ah, but I get what you're saying… each project can be a separate workflow file.
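Or even one workflow that fans out with a build matrix. A rough sketch, assuming each directory is its own Pulumi project with a `dev` stack, Node.js dependencies, and a `PULUMI_ACCESS_TOKEN` secret:
```yaml
name: preview
on: pull_request
jobs:
  preview:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        project: [ec2, load-balancer, app-server]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
        working-directory: ${{ matrix.project }}
      # Official Pulumi action; work-dir points it at one project per matrix job.
      - uses: pulumi/actions@v5
        with:
          command: preview
          stack-name: dev
          work-dir: ${{ matrix.project }}
        env:
          PULUMI_ACCESS_TOKEN: ${{ secrets.PULUMI_ACCESS_TOKEN }}
```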
i
Then I would do something like this:
```
/mystuff/codethings/lb
/mystuff/codethings/ws
/mystuff/codethings/ui
/mystuff/gitops/deploystuff
```
The latter I would use to deploy stuff.
So you have one single place where you tell whatever automates your deployment: deploy this version of X, this version of Y, and that version of Z.
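E.g. one file in deploystuff that pins what gets deployed (the file name and keys here are just an example):
```yaml
# deploystuff/versions.yaml — single source of truth for deployed versions
lb: 1.4.2
ws: 2.0.1
ui: 0.9.7
```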
You could even use Renovate on the deploystuff repo: it will pick up newer versions, create a merge/pull request, and run a pipeline for it that does some test deployment…
h
So that last deploystuff project would have to iterate through all the projects.
Ok, I need to run for a bit but have something to think about.
Thanks @incalculable-mouse-24065
i
No problemo. I would suggest not trying to do too many new things at the same time. Break the knowledge domains down into pieces, see which ones should come first, tackle those, then combine. That's generally advisable and works for everything but marriage 😉
h
It's all new to me, so yeah.
i
I would go about it like this:
1. Figure out how to build your code locally.
2. Figure out how to do it with GitHub Actions.
3. Figure out how to deploy it with Pulumi locally.
4. Figure out how to deploy it with Pulumi using GitHub Actions.

Then you'll always have something local to fall back on for debugging. Nothing sucks more than having to wait for a pipeline just to run into an error 😉
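For step 3 that can be as small as this (assuming a stack named dev already exists):
```sh
cd load-balancer
pulumi stack select dev   # pick the stack to work against
pulumi preview            # inspect the planned changes without applying
pulumi up --yes           # apply once the preview looks right
```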
b
h
Hi @billowy-army-68599, I really like the article you shared above. I'm mostly going with that or a very similar structure. Thanks!