# general
f
Any suggestions for understanding why pulumi wants to replace some instances? In AWS, I've got a cluster of machines stood up over a year ago, on Pulumi 0.16.8 I think. Now on Pulumi 1.14 and it wants to replace all the instances.
It looks like Pulumi assigns data differently now or something. There's been no change to the resources, so there should be no disconnect.
Wondering if there is some undocumented migration of state that should be applied?
@big-piano-35669 Hey Joe, you reached out on twitter, I appreciate that. I've got no gist to reproduce. It's an elasticsearch cluster stood up in AWS. Used 0.16.8 to stand it up. Nothing has changed (except some stop/starts thanks to AWS). I upgraded Pulumi (1.14) and then ran a `pulumi preview`, and it wants to trash the instances. I should note I've made no changes; I just wanted to confirm everything is cool. Unfortunately I cannot seem to downgrade and test, since it complained about an unknown checkpoint version. Being elasticsearch, trashing the instances is the last thing I want to do. Not sure what else I can provide; I could provide the diff maybe? But I've hand edited the state even and gotten to the point it shows no diff, yet it still wants to replace them.
b
@microscopic-florist-22719 @white-balloon-205 @faint-table-42725 Any thoughts on this? We've definitely had some state and diffing logic changes since 0.16.8 -- is there any trick for a smooth upgrade?
f
If nothing’s changed and you expected no changes, I would expect a `pulumi refresh` followed by a `pulumi preview` to generate no diff.
I would try something like:
`pulumi stack export > backup.json`
so you have a backup of your state file, just in case.
Then attempt a
`pulumi refresh`
and see if things look sane; accept the changes to your state, then run
`pulumi preview`
afterward to ensure nothing will change.
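The backup-then-refresh sequence above can be sketched in Python (the SDK language used in this thread). This is a minimal, hypothetical helper, assuming the Pulumi CLI is on `PATH`; the function names and backup path are illustrative, not from the thread. It uses `pulumi stack export --file`, which is equivalent to redirecting stdout:

```python
import subprocess

def safe_refresh_commands(backup_path="backup.json"):
    """Return the CLI invocations for a backup-then-refresh sanity check."""
    return [
        ["pulumi", "stack", "export", "--file", backup_path],  # snapshot state first
        ["pulumi", "refresh"],                                 # reconcile state with AWS
        ["pulumi", "preview"],                                 # should now show no diff
    ]

def run_safe_refresh(backup_path="backup.json"):
    # Run each step in order, stopping on the first failure.
    for cmd in safe_refresh_commands(backup_path):
        subprocess.run(cmd, check=True)
```

Keeping the exported `backup.json` around means you can always `pulumi stack import --file backup.json` to roll the state back if the refresh does something surprising.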
f
That’s the path I followed. The refresh changed a few things (I can’t recall what now) but nothing substantial. The diff has weird things in it; most of it is fields that disappeared, like `cpuCoreCount`, `ebsOptimized`, `id`, etc. Also, the `userData` went from some form of hash to the literal value.
let me see if i can sanitize a diff and post it
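(Editor's note on the `userData` oddity: the change may be purely representational. Older AWS providers commonly stored a hash of the user data rather than the literal value. As a hedged sketch, assuming a SHA-1 digest, which is what the underlying Terraform provider historically used for non-base64 user data, you can check whether the old state value is just a digest of the new literal one:)

```python
import hashlib

def user_data_sha1(user_data: str) -> str:
    """SHA-1 hex digest of an instance's user data."""
    return hashlib.sha1(user_data.encode("utf-8")).hexdigest()

# If the old state held user_data_sha1(script) and the new diff shows the
# literal script, the value itself never changed -- only its representation.
```

If the digests match, the `userData` portion of the diff is noise from the provider upgrade rather than a real change to the instance.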
f
Sure — thanks, that’ll be helpful. Or feel free to DM me if you don’t want it in channel
f
DM’d it. I think it’s sanitized, but you know how the internet is. 🙂
f
thanks
f
Ok, after battling my nemesis Python and versioning, I’m back to a happy state.
1. The primary issue was an updated Pulumi CLI (1.14) with an outdated SDK (0.16.4) and AWS provider (0.16.2). Is there some way I should have known it needed upgrading?
2. Even after upgrading the SDK and AWS provider, for some reason the security groups were ultimately missing from the original state when they shouldn’t have been. Maybe a bug that was fixed over time? After hand editing the state, it’s all good.
3. Lee was a tremendous help in solving this. Not only did he discover the mismatched versioning, but his Python recommendations were spot on and got me running again. Thank you!
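(Editor's note: the root cause in #1, a 1.x CLI driving a 0.16.x SDK, is the kind of thing a simple major-version check could surface. A hypothetical sketch of such a check; the function names and warning text are illustrative, not part of any Pulumi API:)

```python
def major(version: str) -> int:
    """Extract the major component of a semver-like version string."""
    return int(version.split(".")[0])

def version_mismatch_warning(cli_version: str, sdk_version: str):
    """Return a warning string when CLI and SDK majors differ, else None."""
    if major(cli_version) != major(sdk_version):
        return (f"warning: pulumi CLI {cli_version} does not match "
                f"SDK {sdk_version}; consider upgrading the SDK")
    return None
```

With the versions from this thread, `version_mismatch_warning("1.14.0", "0.16.4")` would have produced a warning before any diff was ever computed.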
f
Awesome! Great to hear you’re back in a good state!
Re: #1, that’s a great question. We can probably do a better job of checking for outdated dependencies and warning about them. I actually thought we did this for CLI-SDK mismatches (though not for providers)
Do you mind filing an issue?
f
Sure thing
Hope that is enough information
f
Yup — thanks so much!
b
Woohoo! 🎉