# general
h
I have kind of an interesting question I'd like to ask here, or in any other spot where anyone might have advice. We want to update some MongoDB cloud resources so we can set a new property, "defaultWriteConcern: majority". The problem is we're on an old version of the provider, so we can't set that field. So today we're trying to upgrade the pulumi/mongodbatlas packages from 1.7.0 to 3.3.0, and we're seeing that the connectionStrings property changed from a string to a connectionString[] array. The plan was to upgrade the package, run a pulumi refresh, and get the existing cluster resource updated so we could read from the connectionStrings array and store values in a Kubernetes secret later in our project. That pulumi refresh after the upgrade doesn't seem to update the shape of the resource for us, though. We still see connectionStrings looking like it used to, and not the array of values the new package/provider says we should be using. Any advice on what we might do to get the resource updated so we can set that value?
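Roughly what we're going for, if I've got the 3.x schema right (hedged sketch; everything besides the write concern is a placeholder, not our real config):
```typescript
import * as mongodbatlas from "@pulumi/mongodbatlas";

// Rough end state on pulumi/mongodbatlas 3.x; all args except
// advancedConfiguration are made-up placeholders.
const cluster = new mongodbatlas.Cluster("main-cluster", {
    projectId: "<atlas-project-id>",
    providerName: "AWS",
    providerRegionName: "US_EAST_1",
    providerInstanceSizeName: "M10",
    advancedConfiguration: {
        defaultWriteConcern: "majority", // the property we're upgrading for
    },
});

// 1.7.0 shape: cluster.connectionStrings.privateSrv    (single object)
// 3.3.0 shape: cluster.connectionStrings[0].privateSrv (array of objects)
export const privateSrv = cluster.connectionStrings[0].privateSrv;
```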
p
Resources have a provider attached, which is version-specific. Update the provider, then the code, and run a pulumi up with a refresh. Does this help?
h
I will give this a try, thank you.
I think what happened is we updated the code and provider, tried to run the refresh, and just got the error message about connectionStrings, because our code was still treating it as something other than an array.
p
Thanks, Kevin. Are you still having an issue with the error message?
h
Hi Matthew, sorry it's been a while since I've tried this, but it didn't actually help in this case. What happened here is that there's a field in the outputs, cluster.connectionStrings.privateSrv, that used to be supported on provider version 1.7. A change turned connectionStrings into an array that we now have to pull the field from. That said, we're already saving that old privateSrv output to our k8s cluster in a secret so our apps can use it, so the pulumi up fails even when I try to upgrade the provider, because of this change.
```
Diagnostics:
  pulumi:pulumi:Stack (mongo-dev):
    Found incompatible versions of @pulumi/pulumi. Differing major versions are not supported.
      Version 3.5.1 referenced at node_modules/@pulumi/kubernetes/node_modules/@pulumi/pulumi/package.json
      Version 2.24.1 referenced at node_modules/@pulumi/pulumi/package.json

    error: Running program '/home/circleci/project' failed with an unhandled exception:
    TSError: ⨯ Unable to compile TypeScript:
    secrets.ts(24,44): error TS2339: Property 'privateSrv' does not exist on type 'Output<ClusterConnectionString[]>'.
```
I've exported the stack and confirmed it's still on provider version 1.7 like you mentioned. Right now, I'm honestly going to have to see if I can't save and then manipulate the cluster outputs in the stack's state file.
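If I'm reading the new types right, the code-side fix would be something like this, once @pulumi/pulumi is aligned to a single major version in package.json (that's what the "differing major versions" error above is about):
```typescript
// `cluster` is the mongodbatlas.Cluster from our program. With the 3.x
// types, connectionStrings is Output<ClusterConnectionString[]>, so
// privateSrv has to come off an element of the array.
const privateSrv = cluster.connectionStrings.apply(
    cs => cs[0]?.privateSrv ?? "",
);
```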
p
Hi Kevin - hope you had a nice weekend. I'm passing this along internally and will circle back.
Hi Kevin, does this help? If not, we may want to get a call together.
• Pin the version in package.json (this puts the stack back into a state where it runs cleanly with no changes)
• Run an npm update to grab the new package
• Update the code in your Pulumi program to satisfy the new types
• Add aliases where needed if the provider needs a refresh (rough sketch below)
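For the alias step, it's roughly this shape on the resource options; the old name and the args here are just illustrative:
```typescript
import * as mongodbatlas from "@pulumi/mongodbatlas";

// Illustrative sketch: an alias maps a resource back to the URN it had
// before a rename/re-parenting, so Pulumi updates it in place instead of
// doing a delete-and-replace. All args here are placeholders.
const cluster = new mongodbatlas.Cluster("main-cluster", {
    projectId: "<atlas-project-id>",
    providerName: "AWS",
    providerRegionName: "US_EAST_1",
    providerInstanceSizeName: "M10",
}, {
    aliases: [{ name: "old-cluster-name" }],
});
```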
h
So I think the problem for us is more specific: it's because we already grab and use a resource output. We literally take that connection string off the Mongo resource output and put it in a Kubernetes secret that all of our apps read on startup.
The refresh didn't really help; we gave that a try first. I started the work last week to run the pulumi up with the new provider version, then reverted, and still bumped into that error. I'm thinking we'll likely hard-code the secret value as a string, run the pulumi up with the new provider to change the resource shape, then in another commit pull the connection string back out and repopulate the secret.
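Roughly, the interim step would look like this (the config key is made up), so nothing reads the cluster output while the provider swap happens:
```typescript
import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

const config = new pulumi.Config();

// Step 1: temporarily feed the secret a hard-coded value from config so it
// no longer depends on the cluster output during the provider upgrade.
// Step 2 (a later commit): point this back at cluster.connectionStrings.
const mongoSecret = new k8s.core.v1.Secret("mongo-connection", {
    stringData: {
        connectionString: config.requireSecret("mongoPrivateSrv"), // made-up key
    },
});
```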
So I think I'm almost through the problem, but I've run into yet another issue that doesn't seem like it's been accounted for in the upgrade process. I get the cluster resource updated with the settings I want, tagged with the new resource provider, but I get an "initialization error":
```
mongodbatlas:index:Cluster (main-cluster):
    error: 1 error occurred:
    	* updating urn:pulumi:dev::mongo::mongodbatlas:index/cluster:Cluster::main-cluster: 1 error occurred:
    	* Invalid address to set: []string{"snapshot_backup_policy", "0", "policies", "0", "id"}
```
The problem here is that the latest Mongo resource doesn't require an ID set in policies, but the old resource had IDs set, and as far as I can tell it's not a configurable option on a cluster resource. So I'm again at a loss for how to get my project updated to the latest and greatest, even ignoring the output problem I'm trying to work around.
Opening up the stack state, I definitely see the error.
```
"initErrors": [
    "updating urn:pulumi:dev::mongo::mongodbatlas:index/cluster:Cluster::main-cluster: 1 error occurred:\n\t* Invalid address to set: []string{\"snapshot_backup_policy\", \"0\", \"policies\", \"0\", \"id\"}\n\n"
],
```
As best I can tell, the Mongo cluster resource sets a default snapshot backup policy with an ID that's no longer required in the latest provider.
And I just deleted the initErrors, because it looked like the resource got updated properly, and that was enough to do the trick. Dev is now green and updated.
I had to remove the cluster address output and manually edit the stack state to remove the init errors.
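For anyone else who hits this, the manual edit boiled down to something like the following (the filename is made up; keep a backup of the export):
```typescript
import * as fs from "fs";

// Rough sketch of the state surgery, run between
//   pulumi stack export --file stack.json
// and
//   pulumi stack import --file stack.json
const path = "stack.json";
const stack = JSON.parse(fs.readFileSync(path, "utf8"));
for (const res of stack.deployment?.resources ?? []) {
    delete res.initErrors; // drop the stale init errors on each resource
}
fs.writeFileSync(path, JSON.stringify(stack, null, 4) + "\n");
```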
This was definitely a harder upgrade, and I'm thinking there are a few Terraform and Mongo nuances that caused some of this heartburn for us.
p
Hi Kevin, apologies, I had lost track of this thread. Thank you for writing this up; I'll pass it along.
h
Thanks, Matthew. All that said, I think I still have a lingering problem with this pulumi mongodbatlas provider. I replied to an open bug in that GitHub repo.
Basically I can get the project to work now, but only if I set the cloud backup flag on my clusters to false.
This looks like the last lingering problem for my upgrade. I learned that if you use outputs at all like I do, you're really open to upgrades failing if the resource changes its shape in any way.
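For the record, the workaround looks like this; everything except the backup flag is a placeholder:
```typescript
import * as mongodbatlas from "@pulumi/mongodbatlas";

// cloudBackup: false is the only change that matters here; with it enabled,
// the upgraded provider still trips over the legacy snapshot_backup_policy
// state. The rest of the args are placeholders.
const cluster = new mongodbatlas.Cluster("main-cluster", {
    projectId: "<atlas-project-id>",
    providerName: "AWS",
    providerRegionName: "US_EAST_1",
    providerInstanceSizeName: "M10",
    cloudBackup: false,
});
```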