# general
l
Generally, you don't need to check if a resource exists before creating it. The two important concepts (in my mind) are:
1. There is only one state for a given stack. If you are managing the stack from multiple machines, the machines should share state, whether that be in the Pulumi backend or a self-managed one like S3.
2. Existing resources (created outside of Pulumi) should be imported first, then the import code should be removed, and the resources can be maintained within Pulumi and its state from then on.
There are situations where you might do things differently, but these are the general guidelines.
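For example, adopting an existing role with Pulumi's `import` resource option might look roughly like this (a minimal sketch in TypeScript; the role name and trust policy are illustrative assumptions, and the declared properties must match the real resource):
```typescript
import * as aws from "@pulumi/aws";

// Hypothetical role "app-role" that already exists in the account,
// created outside of Pulumi. The declared properties must match what
// actually exists in AWS, otherwise the import will fail with a diff.
const appRole = new aws.iam.Role("app-role", {
    name: "app-role",
    assumeRolePolicy: JSON.stringify({
        Version: "2012-10-17",
        Statement: [{
            Effect: "Allow",
            Principal: { Service: "ec2.amazonaws.com" },
            Action: "sts:AssumeRole",
        }],
    }),
}, { import: "app-role" }); // for aws.iam.Role, the import ID is the role name
```
Once the first `pulumi up` has adopted the resource into state, the `import` option can be removed and the resource is managed by Pulumi from then on.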
(Also, general tip when pasting errors etc.: copied text is much easier to deal with than a copied picture of text.)
👍 1
b
@little-cartoon-10569 the state in my case is not shared; each project clone deploys a fresh copy of infrastructure. And that's where I run into problems with service roles. If the first project clone creates the service role (and only 1 can be created), the second project clone must somehow check whether it needs to create it or not.
l
If it's the same infrastructure, then the state should be shared. If it's different infrastructure, then do you know why the resources are conflicting? Can you make them non-conflicting, perhaps by putting the name of the project / stack into the name of the resource?
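Something along these lines (a sketch; the bucket is just a stand-in resource):
```typescript
import * as pulumi from "@pulumi/pulumi";
import * as aws from "@pulumi/aws";

// Prefix resource names with the project and stack so that parallel
// deployments from different clones/branches don't collide.
const prefix = `${pulumi.getProject()}-${pulumi.getStack()}`;

const assets = new aws.s3.Bucket(`${prefix}-assets`);
```
(That won't help for service-linked roles, though, since AWS fixes their names.)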
It may be that you need a separate shared project for the one-and-only-one resources, like this service role.
And then use StackReferences in all the other projects to find the service role.
That's the pattern I've adopted: I have a shared-resources project that gets deployed once per AWS account, and a project for all the env-specific (dev, test, preprod etc.) resources. All those resources use StackReferences to get shared things like RDS instances, AD, VPCs, security groups, etc.
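A minimal sketch of that pattern (the organization, project, stack, and output names below are made up):
```typescript
import * as pulumi from "@pulumi/pulumi";

// Reference the shared-resources stack that is deployed once per AWS account.
const shared = new pulumi.StackReference("my-org/shared-resources/prod");

// Consume values exported as stack outputs by the shared project.
const serviceRoleArn = shared.getOutput("serviceLinkedRoleArn");
const vpcId = shared.getOutput("vpcId");
```
The shared project just needs to `export` those values as stack outputs so the other projects can look them up.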
b
That's an interesting approach 🤔
b
I agree with @little-cartoon-10569. You have 2 options here (because service-linked roles are global and you can only have one):
- separate the service-linked role into its own project and use stack references
- make it configurable:
```typescript
import * as pulumi from "@pulumi/pulumi";
const config = new pulumi.Config();

// "enableServiceLinkedRole" is an illustrative config key; only the one
// stack that owns the role sets it to true.
if (config.getBoolean("enableServiceLinkedRole")) {
    // create the service-linked role here
}
```
b
Ok that gives me something to think about, thanks for your suggestions! ❤️
👍 1
@billowy-army-68599 @little-cartoon-10569 sorry but I still don't see how your suggestions help: if I have a clone of my project on my PC and my laptop, those are 2 completely decoupled copies of the project. The `.pulumi` state lives on separate computers. And I can't share that state over S3, because I may work on different branches on different machines, so they must have their own copies of infrastructure. Running the first deploy on each machine will result in Pulumi attempting to create that service role, but since I'm using the same AWS account, it will obviously fail. I was hoping to check if the service role already exists in my AWS account, and only create it if it doesn't. That works - partially. The problem is that the machine that initially created that service role will have the role's state in its `.pulumi` folder, and the second deploy will try to remove the service role if I don't return it from `dependsOn`. And that's where the weird error I originally posted shows up.
l
Why does the state live on different computers? The most common way of working is to have a shared state, in app.pulumi.com or S3. It isn't possible to have Pulumi manage one resource from multiple states: you would need to implement the logic around managing those resources yourself.
b
are you using local state?
b
local state yes
you would use shared state for CI in production, sure. But during development, and within the team, each developer has his own account, and his own state locally on his machine.
l
Then isn't the service role separate too? One per account?
b
no, you only have one per account
l
Yes, that's what I meant..
If each developer has their own account, then why is there duplication of service roles?
b
1 per account, exactly. And that's what I'm fighting - I work on a laptop and my home PC - 1 account.
but we are all cloning the same github repo, with the same base Pulumi setup.
I just clone the project and run `pulumi up`. Doing it on 2 of my machines causes problems since it's 1 account.
l
Ah. Then you could share the state with yourself, maybe via OneDrive? Or S3, or whatever.
b
Sometimes we have 4-5 clones of the same repo but on different branches.
l
You could also split the project into two, and manage the service role only from one computer..
b
you'll have to remove the service-linked role from the ES project, there's no other way. You can't even do a `get` in this particular case because it's a promise and isn't resolved until runtime
put the service linked role in its own project and only run it from 1 place, and just assume it exists. this is why shared state exists
l
Could you use the SDK to check for the existence of the service role, instead of Pulumi?
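Something like this, with the AWS SDK for JavaScript v3 (a sketch; the helper name is made up, and the Elasticsearch service-linked role is usually named AWSServiceRoleForAmazonElasticsearchService):
```typescript
import { IAMClient, GetRoleCommand } from "@aws-sdk/client-iam";

// Returns true if the role already exists in the account, false if not.
async function serviceRoleExists(roleName: string): Promise<boolean> {
    const iam = new IAMClient({});
    try {
        await iam.send(new GetRoleCommand({ RoleName: roleName }));
        return true;
    } catch (err: any) {
        // IAM reports a missing role as NoSuchEntity.
        if (err?.name === "NoSuchEntityException") {
            return false;
        }
        throw err; // anything else (permissions, throttling) should surface
    }
}
```
The program could then skip creating the role when it already exists, though the caveat about the role lingering in one machine's local state still applies.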
b
@little-cartoon-10569 I think so, I don't have any limitations regarding using aws-sdk directly.
b
you can do that, but it's a lot of hoop jumping.
b
and so is separating a project for service roles, which again will hit the same wall on 2 of my own machines :D
l
It could work. As @billowy-army-68599 said, it's more work, and I think it would be more brittle (synchronous code dependent on network connections, etc.), but it might solve your problem.
b
not mocking your suggestion in any way, it all just sounds too complicated.
I think I'll have to try whatever we have available.
b
yeah, you're kind of boxed in by using local state on multiple machines though; this is a unique resource which can only exist once. If it's not in the state, there's not much you can do
l
I recommend changing the way you work to use shared state, and just have one shared state per developer (in Pulumi, probably...). But if you can't, then you might consider the SDK...
b
I would really recommend setting up a shared state for your dev branches and using stacks per developer instead
💯 1
l
In addition to solving this problem, it will protect you against a pile of similar problems you haven't yet encountered...
👍 1
b
yeah, this is the first one I encountered 😞
good stuff guys, thanks a lot for your time 🚀
👍 1
I'll discuss it with my team tomorrow and see what the options are for us.