# general
m
Hi all. Is there any documentation/code available that describes the differences between Pulumi outputs and Pulumi configuration, and when to use which? I've had some discussions with my colleagues on how to approach certain things:
• We split our infrastructure into separate Pulumi projects so they can be executed separately.
• However, there are some dependencies between these projects:
  ◦ The first project creates a GCP project.
  ◦ The second project creates basic networking components (VPC, subnets, etc.).
• Later projects might require information that was created/updated in earlier projects.
My colleagues propose to use the Automation API and write that information into the dependent projects' configuration, so they pick up the updated config on their next run. I see a few downsides here:
• As far as I understand Pulumi config, it resides in a .yaml file, so I expect it is always read from there, which implies a Git commit/push/pull before it's shared. That could create weird syncing issues.
• It feels like managing dependencies the wrong way around: the code delivering the shared data would have to know which projects depend on it.
My stance is that you should retrieve provisioning information from the Pulumi project's state, either by using the get_ functions or by reading it from the outputs (which requires you to export them). The purpose of Pulumi config/secrets is different from that of state/outputs, but I have trouble putting it into the right words to convince them. Any tips, thoughts, or links that can help us settle the discussion? (I'm also happy to be wrong here :)) (Please let me know if I should rather repost this in a specific channel.)
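For concreteness, here's a minimal sketch of what I understand my colleagues are proposing (the work dir, stack name, and config key are made up):

    from pulumi import automation as auto

    # The producing project pushes a value into a dependent project's
    # stack config; names and paths here are hypothetical.
    dependent = auto.select_stack(stack_name="prod", work_dir="../networking")
    dependent.set_config("gcpProjectId", auto.ConfigValue(value="my-gcp-project-id"))
    # This writes to ../networking/Pulumi.prod.yaml, which still needs a
    # commit/push before anyone else sees it.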
v
We use StackReferences here at my place to leverage the stack outputs. It's the neatest way we have found, and it's also handy that it's programmatic, so changes will be detected automatically the next time you run pulumi. One challenge, obviously, is triggering a pulumi run of a dependent service when it needs updating. Something in my personal backlog is to investigate using the pulumi stack graph command to drive automated pulumi ups from dependent stacks.
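A minimal sketch in Python (the org/project/stack name and output keys are just examples):

    import pulumi

    # Reference another project's stack by "<org>/<project>/<stack>".
    network = pulumi.StackReference("my-org/networking/prod")

    # Exported outputs from that stack come back as Outputs here, so
    # Pulumi tracks the cross-stack dependency for you.
    vpc_id = network.get_output("vpc_id")
    subnet_id = network.require_output("subnet_id")  # fails if not exported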
Pulumi config is completely different and should be used for project-specific configuration such as environment, region, etc.
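e.g. (the keys are just examples):

    import pulumi

    config = pulumi.Config()                  # namespaced to this project
    region = config.require("region")         # fail fast if unset
    env = config.get("environment") or "dev"  # optional with a default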
m
Are StackReferences specific to AWS? We're using GCP/Python. Currently, we retrieve stack information using:
    from pulumi import automation, get_stack

    # project_name is defined elsewhere in our code
    stack = automation.select_stack(
        stack_name=get_stack(),
        project_name=project_name,
        program=lambda: None,  # no-op program; we only want the stack's state
    )
This returns a Stack object that includes things like the outputs.
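For what it's worth, reading the outputs off that object looks like this (the output key is made up):

    outputs = stack.outputs()          # dict of OutputValue objects
    vpc_id = outputs["vpc_id"].value   # .secret tells you if it's a secret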
With respect to updating dependent projects, that might depend a bit on the deployment setup. To keep things manageable (for now), we keep our deployment pipeline monolithic: a single pipeline that executes the pulumi updates for each project, one after another. That might change in the future, of course. We're in the midst of a cloud migration from on-prem Docker Swarm to K8s/GKE/GCP.
b
@victorious-church-57397 https://github.com/gitfool/Pulumi.Dungeon does something like that. Stack orchestration basically.
v
I don't think we are talking about the same thing, @bored-activity-40468. We have multiple stacks in the AWS accounts and need outputs from each of them, and they are stored in separate repos. This looks like wrapping all the Pulumi resources you need in a class in one repo,
and doing multiple environments, which we already do using Pulumi stacks 🙂 Thanks for the link though, will check it out further.
b
That's true; I was more talking about how, with the Automation API, that stack reference or stack dependency enforcement can be built in. Even though that example has multiple stacks in the same repo, since we get to use a real programming language, the dependency ref could just as easily be a link to another repo.
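Roughly something like this (a sketch; the stack name and work dirs are hypothetical):

    from pulumi import automation as auto

    # Run the projects in dependency order; each later project can pull
    # the earlier ones' outputs via StackReference at runtime.
    for work_dir in ["../gcp-project", "../networking", "../workloads"]:
        stack = auto.select_stack(stack_name="prod", work_dir=work_dir)
        stack.up(on_output=print)  # stream engine output as it runs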