#general

brainy-napkin-70882

03/28/2024, 6:26 AM
I'm running a loop in a bash script that runs
pulumi config refresh --stack prod
on every micro-stack I have in a mono-repo. On occasion, I will get the following error and it varies in terms of which stack outputs this error on each run:
error: no previous deployment
The state is stored in s3 and all stacks have initialized previously. I'm not getting any further error messages to help debug this issue. What do I do?
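A minimal reconstruction of the kind of loop being described might look like this (the `stacks/` directory layout and the helper name are assumptions, not from the thread):

```shell
# Hypothetical sketch of the failing loop: run a command inside every
# micro-stack directory in the mono-repo. Layout is an assumption.
for_each_stack() {
  local dir
  for dir in stacks/*/; do
    (cd "$dir" && "$@") || echo "failed in $dir" >&2
  done
}

# e.g. for_each_stack pulumi config refresh --stack prod
```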
s

stale-twilight-75626

03/28/2024, 6:40 AM
Have you tried turning the logging up with -v ?
b

brainy-napkin-70882

03/28/2024, 6:52 AM
I have, something like -v=4, and I don't get any more verbose logs
s

stale-twilight-75626

03/28/2024, 6:55 AM
What happens if you run the refresh command on the same stack twice?
One of the ones that produces the error, that is.
If it works after a retry, I would suspect you're getting rate limited by the Pulumi API and the HTTP 429 isn't making it back to you.
I would consider putting in some sleeps as an experiment.
If it doesn't work after a retry, then something is probably up with your iteration logic.
I'm heading to bed but lmk how it goes
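The sleep-and-retry experiment suggested here could be sketched as a small wrapper (the function name, attempt count, and delay are my own assumptions):

```shell
# Hypothetical helper: retry a command a few times, sleeping between
# attempts, to test the rate-limit theory. All numbers are assumptions.
retry_with_sleep() {
  local attempts="$1" delay="$2"
  shift 2
  local i=1
  while [ "$i" -le "$attempts" ]; do
    "$@" && return 0
    sleep "$delay"
    i=$((i + 1))
  done
  return 1
}

# e.g. retry_with_sleep 5 10 pulumi config refresh --stack prod
```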
b

brainy-napkin-70882

03/28/2024, 1:45 PM
I put in some sleeps, as high as 60s even, and it still gave the same error (not on the same stack each time). For a bit more context, I'm trying to run this script in GitLab CI, where each stack refreshes based on the stack configuration stored in S3, then run
pulumi preview
on each stack. I ended up committing each stack's configuration file (
Pulumi.<stack>.yaml
) to my git repo too, and removed the
pulumi config refresh --stack prod
command from the script, and the script runs fine afterwards. So the issue lies with the
pulumi config refresh
command, but I can't tell what exactly.
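The reworked flow, with each Pulumi.<stack>.yaml committed to git and the refresh step dropped, might look roughly like this (the `stacks/` layout, stack name, and helper name are assumptions):

```shell
# Sketch of the revised CI script: no `pulumi config refresh`; each
# stack's Pulumi.<stack>.yaml is already in the repo, so go straight
# to preview. Directory layout and stack name are assumptions.
preview_all() {
  local dir
  for dir in stacks/*/; do
    (cd "$dir" && pulumi preview --stack prod) || return 1
  done
}
```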

stale-twilight-75626

03/28/2024, 4:52 PM
Yeah I would highly recommend staying with the stack.yaml configs. The configuration for each stack is kept in version control and all changes must go through your repo and automation. Having a single source of truth is critical for infrastructure as code to work.

brainy-napkin-70882

03/28/2024, 5:11 PM
Thanks, I was a bit iffy about what the workflow would look like if I store the stack.yaml files in git. Wouldn't this mean I would always have to init the stack locally, then commit the generated stack.yaml file, for any CI automation involving previews or deploys to run? Initializing the stack locally means a developer needs write access to the backend (S3). Versus, if I do not store the stack.yaml files in git, I can either run pulumi config refresh to get the latest config from the backend if it exists, or create the stack if it doesn't, both through CI automation. I'm trying to understand if these are the common workflows.

stale-twilight-75626

03/28/2024, 5:27 PM
If you pull down the repo with all your pulumi files in it then you don't need to init the stack again.
Why are your stack.yaml files changing so much? I hardly touch mine
But yes, all your developers will need write permission to the backend
or you can set up CI to run all of the
pulumi up
commands
but that iteration loop can be pretty painful when you are first getting started

brainy-napkin-70882

03/28/2024, 5:38 PM
The already initialized stack.yaml files aren't changing. I'm thinking about in the event of having to create a new stack, a developer would need to initialize the stack locally (which requires write) then commit the stack.yaml file. My initial intention is to use CI to initialize/create new stacks so that adhoc write permissions don't need to be granted to developers.
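The create-if-missing step from CI could be sketched with `pulumi stack select` falling back to `pulumi stack init` (both are real subcommands; the wrapper name is an assumption):

```shell
# Hypothetical CI helper: select the stack if it already exists in the
# S3 backend, otherwise create it. Safe to re-run on every pipeline,
# so no ad hoc write permissions are needed by developers.
ensure_stack() {
  local stack="$1"
  pulumi stack select "$stack" 2>/dev/null || pulumi stack init "$stack"
}
```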

stale-twilight-75626

03/28/2024, 6:15 PM
Yeah, that can work, but I'm curious why you consider write access a special case? If a dev can't make changes to the backend, they can't even get a preview of what the Pulumi changes would be. I don't know how they would be able to work effectively. Am I missing something?

brainy-napkin-70882

03/28/2024, 9:43 PM
I'm using Pulumi for managing snowflake infrastructure and I only have 1 environment, being a production account. Developers having continuous write permissions is a no go for security reasons, but temporary elevated privileges are possible. My thought was to preview changes on opening a merge request and CI will run a pipeline that runs the necessary commands. This felt like the easiest path of adoption for a tool that we haven't used before.

stale-twilight-75626

03/28/2024, 9:55 PM
yeah that's a good idea. Then deploy the changes when you merge to the "production" branch
you will probably have to troubleshoot it for a while
once stable, then I would merge it to main
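The MR-preview / merge-deploy split discussed here could be driven by GitLab's predefined CI variables, roughly like this (`CI_MERGE_REQUEST_IID` and `CI_COMMIT_BRANCH` are real GitLab variables; the branch name, stack name, and function are assumptions):

```shell
# Hypothetical CI entrypoint: preview on merge requests, deploy when
# the "production" branch is built. Branch/stack names are assumptions.
run_ci_step() {
  if [ -n "${CI_MERGE_REQUEST_IID:-}" ]; then
    pulumi preview --stack prod
  elif [ "${CI_COMMIT_BRANCH:-}" = "production" ]; then
    pulumi up --yes --stack prod
  else
    echo "nothing to do"
  fi
}
```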