# getting-started
Ok, weird pulumi issue: if I try to run a second pulumi task (action? idk the specific vocabulary, but if I run pulumi.Run() a second time) it runs through most of it well but at the end errors with
error: Duplicate resource URN 'urn:pulumi:masterapi::masterapi::pulumi:pulumi:Stack::masterapi-masterapi'; try giving it a unique name
which, after hours of troubleshooting, I realised was NOT a duplicate resource I made in Equinix (my provider) but something in pulumi trying to make a second stack. So I ask: why is pulumi trying to make the stack, and how can I tell it not to?
can you share your code?
Sorry for not replying, I'll share a snippet in a few
https://imagen.click/p/90b569cb this is the pulumi.run section that's run every time the http request is called @billowy-army-68599
I have an answer for this 🙂
// This is horrible, why does pulumi require their own types
you're running this from automation API? is there anything else missing?
for example, what's in here?
dbout := db.Create(&nodeInfo)
if dbout.Error != nil {
	log.Errorln("Error inserting node info into DB: ", dbout.Error)
	return dbout.Error
}
Sorry for the poor scheduling again. That code snippet is run upon a GET HTTP request. As the code I work on isn't formally open source, I do have to be cautious with releasing it. My organization has no policy on code sharing, so I'm taking the liberty of saying it's ok if I redact important information.
The db object is just a gorm database which is currently not being read from (as I haven't gotten a successful query to have anything to read), and the nodeInfo struct is part of a protobuf, but for now I'm just using it as a struct inside golang.
https://imagen.click/p/ff42becf @billowy-army-68599 The entire function; there is no data processing elsewhere and pulumi is not called anywhere else for now
The only reason this could be happening is that you're bringing the stack up twice at the same time. By that I mean the pulumi.Run part
It looks like you're using automation api?
I don't think I know what you mean by automation API?
And yes, pulumi.Run is run twice in parallel, that is required for my implementation
It seems like pulumi only wants to be run once, but that seems like a very unhelpful configuration, since I would then ask "why is this inside of a golang script instead of just directly in terraform or similar"
I will also point out, not with the intent to blame nor shame, only for transparency, that the time I have to get at least a working demo of this is running short. By the end of the weekend (real weekend, not holiday weekend) I need to either have pulumi cooperating or I need to ditch pulumi entirely and just move to the pure Equinix API.
If you're running it twice in parallel with the same stack configuration that's your problem.
It reads like there is a misunderstanding of how pulumi is meant to be executed. You can't invoke pulumi.Run in parallel within the same process if you are not using the Automation API, and even with the Automation API you can't invoke it in parallel against the same stack.
I'm not super familiar with the golang API, but it looks like you are trying to invoke the standard pulumi entrypoint in an HTTP endpoint. That pulumi entrypoint is only meant to be used in short-lived console apps exclusively invoked by the pulumi CLI. If you want to call pulumi programmatically from an HTTP endpoint, you need to use the Automation API that exists for that purpose.
Here's an example of a golang pulumi inline program using Automation API, which looks like what you are trying to do: https://github.com/pulumi/automation-api-examples/tree/main/go/inline_program Here is a getting started guide: https://www.pulumi.com/docs/guides/automation-api/getting-started-automation-api/
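The shape of the linked example can be sketched roughly as below. This is a minimal, hedged sketch, not the asker's actual code: the stack name "dev", project name "masterapi", and the exported value are illustrative placeholders, and running it requires a configured Pulumi backend and provider credentials.

```go
package main

import (
	"context"
	"fmt"
	"os"

	"github.com/pulumi/pulumi/sdk/v3/go/auto"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)

func main() {
	ctx := context.Background()

	// The deploy function replaces what you would normally pass to
	// pulumi.Run() in a CLI-driven program.
	deployFunc := func(pctx *pulumi.Context) error {
		// ... declare your Equinix resources here ...
		pctx.Export("example", pulumi.String("ok"))
		return nil
	}

	// UpsertStackInlineSource creates the stack if it doesn't exist and
	// selects it if it does, so calling the endpoint repeatedly does not
	// try to create the same stack twice.
	stack, err := auto.UpsertStackInlineSource(ctx, "dev", "masterapi", deployFunc)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	// Up runs the deployment, like `pulumi up` from the CLI.
	res, err := stack.Up(ctx)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(res.Summary.Result)
}
```

Because the Automation API drives the engine in-process, this is what makes "pulumi from an HTTP handler" viable at all.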
Ok so what I see is a few things:
I should modify my API to allow multiple nodes to be requested at once
Every time I run pulumi I need to make a new stack (and somehow manage that...)
The Automation API is a requirement
I don't have time today to actually apply the new API, but this has all been useful and I'll be back in a few days if I have any other issues
I should modify my API to allow multiple nodes to be requested at once
I'm not sure what you mean by this
Every time I run pulumi I need to make a new stack
Not necessarily, only if your HTTP action is only doing greenfield deployments. If you are updating existing infrastructure then you will need to fetch an existing stack. But yes, you will need to manage your stacks somehow if the intention is to have multiple deployments.
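One way to handle the create-or-fetch choice per request might look like the sketch below. This is an assumption-laden illustration: the project name "masterapi" and the nodeID parameter are invented for the example, and the upsert call does the create-vs-select decision for you.

```go
package main

import (
	"context"
	"fmt"

	"github.com/pulumi/pulumi/sdk/v3/go/auto"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)

// selectStack picks a stack for an HTTP request, keyed by a caller-supplied
// ID, so that each logical deployment lives in its own stack.
func selectStack(ctx context.Context, nodeID string, program pulumi.RunFunc) (auto.Stack, error) {
	stackName := fmt.Sprintf("node-%s", nodeID)
	// Upsert creates the stack on the first request for this ID and
	// selects the existing one on later requests, so updates target the
	// same stack instead of colliding with a duplicate-URN error.
	return auto.UpsertStackInlineSource(ctx, stackName, "masterapi", program)
}
```

Note that even with per-request stacks, two concurrent requests for the *same* stack still need to be serialized by the caller.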
The Automation API is a requirement
Yes if you want to invoke the pulumi engine programmatically instead of via the CLI that is what the Automation API is for.
Good luck!
I'll also mention here (instead of nowhere) that I do not have slack on my phone, so I only check it when I am on my laptop which is not necessarily a regular event, so I can't promise that I'll be super responsive here
I should modify my API to allow multiple nodes to be requested at once
As it stands, the code I have written creates one machine for every HTTP request made, and the intention was to just make multiple requests if you need multiple servers. But it seems that I should not do that, and should instead allow each request (for now just assuming I'll only be creating nodes) to spin up multiple machines.
I get that saying "API" is a bit ambiguous
Oh I see. Yes I would make the desired number of nodes an input to the request and pipe that into your pulumi function, if your desire is for those nodes to belong to the same pulumi stack.
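Piping the count into the pulumi function could be sketched like this. The resource call itself is deliberately left as a comment because it depends on the Equinix provider SDK in use; the function and variable names here are hypothetical.

```go
package main

import (
	"fmt"

	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)

// makeProgram returns an inline pulumi program that declares `count`
// machines in a single stack.
func makeProgram(count int) pulumi.RunFunc {
	return func(ctx *pulumi.Context) error {
		for i := 0; i < count; i++ {
			// Resource names must be unique within a stack, hence the index.
			name := fmt.Sprintf("node-%d", i)
			_ = name // e.g. create your Equinix device here with this name
		}
		return nil
	}
}
```

The key detail is the indexed name: reusing one fixed resource name for every machine would reproduce the same duplicate-URN error inside the stack.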