brave-angle-33257 10/16/2018, 7:14 PM
bitter-oil-46081 10/16/2018, 9:57 PM
The mismatch between the tool the upstream terraform provider uses to manage dependencies and the one Pulumi uses makes it hard to ingest updates from upstream. Ideally we'd have an automated process for this, but every time we've tried, we've needed manual intervention to get something that works. @stocky-spoon-28903 is going to start looking at how we can actually stop having this class of problems, but for now the problem is basically that we can't take in new code from upstream, so we can't run tfgen over it. tfgen and tfbridge don't work on the terraform provider binary. Instead, we need the correct set of terraform sources (plus dependencies) as well as some pulumi sources to build both tfgen and the pulumi version of the provider, and getting a consistent set of sources is not easy.
brave-angle-33257 10/16/2018, 10:09 PM
brave-angle-33257 10/16/2018, 10:30 PM
brave-angle-33257 10/18/2018, 8:12 PM
bitter-oil-46081 10/19/2018, 12:40 AM
> I was thinking we could have our automations deploy using the standard cloud backend but then maybe export the state file and save it in a versioned object store after each deploy. Is that a feasible failsafe?

That would work. We do something similar for our key services: after an update we export the checkpoint and store it in a separate versioned S3 bucket. In the disaster recovery case we'd pull down the checkpoint from the bucket, use `pulumi login --local` to connect to the local backend, and then `pulumi stack import`.
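A minimal sketch of that backup-and-restore flow, assuming a hypothetical bucket name (`my-state-backups`) and stack name (`prod`); the bucket has S3 versioning enabled so every deploy's checkpoint is retained:

```shell
#!/usr/bin/env sh
set -eu

# --- After each deploy: export the checkpoint and push it to versioned S3 ---
pulumi stack export --stack prod --file checkpoint.json
aws s3 cp checkpoint.json s3://my-state-backups/prod/checkpoint.json

# --- Disaster recovery: pull the checkpoint and import into a local backend ---
aws s3 cp s3://my-state-backups/prod/checkpoint.json checkpoint.json
pulumi login --local                 # switch from the cloud backend to local state
pulumi stack init prod               # recreate the stack locally (skip if it exists)
pulumi stack import --stack prod --file checkpoint.json
```

If a bad checkpoint was uploaded, `aws s3api list-object-versions` on the bucket lets you fetch an earlier version of the object instead.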
brave-angle-33257 10/19/2018, 1:09 AM