full-continent-98866

04/27/2022, 11:36 AM
Hi all, 🙂👋 I'm using Pulumi to deploy resources on AWS, and I'm running into a limitation. When I want to create, for example, 15 new WorkSpaces in AWS using Pulumi, Pulumi deploys the first 6 WorkSpaces, then once those 6 are finished it creates the next 6, and then the last 3 to complete the 15. This isn't just WorkSpaces; the same behavior occurs with any other type of AWS resource. I'm using the AWS Classic provider. In Terraform I don't have this limitation. I understand that the Classic providers use the Terraform bridge that the Terraform community built back when the cloud providers didn't have CRUD interfaces. I would like to know if there is any way to pass Pulumi the same flags that we set in Terraform so we can get past that limit. Any guidance would be great! Thanks everyone!!!
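(For reference: in Terraform this limit is controlled by the -parallelism flag on apply; the Pulumi CLI exposes an analogous --parallel flag, and the Python Automation API surfaces the same option. A minimal sketch, assuming a stack object already exists:)

# roughly analogous to `terraform apply -parallelism=15`;
# the CLI equivalent would be `pulumi up --parallel 15`
up_result = stack.up(parallel=15, on_output=print)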

billowy-army-68599

04/27/2022, 2:21 PM
@full-continent-98866 what error do you get?
can you share your code?

full-continent-98866

04/27/2022, 2:40 PM
Hi jaxxstorm, I don't get any errors. I just hit this limit on how many resources are created in parallel, and I can't increase that number. I'm writing a program that uses the Pulumi Automation API to deploy my infrastructure and new resources without the CLI. I'm working with Python. This is how I initialize the stack and configure my environment to deploy the new resources:
print("successfully initialized stack")

# for inline programs, we must manage plugins ourselves
print("installing plugins...")
stack.workspace.install_plugin("aws", "v5.2.0")
print("plugins installed")

# set stack configuration specifying the AWS region to deploy
print("setting up config")
stack.set_config("aws:region", auto.ConfigValue(value=AWS_REGION))
# stack.set_config("aws:profile", auto.ConfigValue(value=AWS_PROFILE))
print("config set")

print("refreshing stack...")
stack.refresh(on_output=print)
print("refresh complete")

if destroy:
    print("destroying stack...")
    stack.destroy(on_output=print)
    print("stack destroy complete")
    sys.exit()

print("updating stack...")
up_res = stack.up(parallel=100, on_output=print)
print(f"update summary: \n{json.dumps(up_res.summary.resource_changes, indent=4)}")
I tried it with the stack.up command, adding the parallel flag, but it doesn't work as expected. I've set that parameter to 1, for example, and that causes Pulumi to create one resource at a time, which is fine. But when I increase the value to, say, 100, Pulumi still doesn't create more than 6 resources at the same time.
This is the behavior I'm trying to describe. I need to increase the number of resources created at the same time as much as possible, because otherwise my implementation becomes impractical: it makes the process of creating the resources very slow.
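(For context, the snippet above assumes stack was created earlier. A minimal sketch of that setup with an inline program; the project/stack names and program body below are hypothetical:)

def pulumi_program():
    # hypothetical inline program; the real one declares the WorkSpaces
    ...

stack = auto.create_or_select_stack(
    stack_name="dev",
    project_name="aws-workspaces",
    program=pulumi_program,
)
print("successfully initialized stack")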

billowy-army-68599

04/27/2022, 3:00 PM
huh, interesting. I'll check with the engineering team

lemon-agent-27707

04/27/2022, 3:17 PM
I'm curious how you determined that only six resources at a time are being created? By watching the output logs? I'd be curious to see the difference in runtime between the Automation API program and the same program running via
pulumi up
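(One way to compare the two, assuming the Automation API program above: time the update from Python and compare against a timed CLI run of the same project:)

import time

start = time.perf_counter()
up_res = stack.up(parallel=100, on_output=print)
print(f"automation api update took {time.perf_counter() - start:.1f}s")
# compare against the CLI, e.g. by timing `pulumi up --yes --parallel 100`
# with your shell's time builtin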

full-continent-98866

04/27/2022, 3:23 PM
When the program has to create the AWS WorkSpaces I described at the beginning, in the AWS console I see 6 WorkSpaces being created at first; once approximately 20 minutes have passed (the time it takes to create them), it continues with another 6, and so on until all of them are created.
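(A programmatic way to confirm the same observation, polling WorkSpaces states with boto3 instead of watching the console; the region and polling interval are placeholders:)

import time
import boto3

client = boto3.client("workspaces", region_name="us-east-1")  # placeholder region

while True:
    workspaces = client.describe_workspaces()["Workspaces"]
    pending = [w for w in workspaces if w["State"] == "PENDING"]
    print(f"{len(pending)} WorkSpaces currently being created")
    time.sleep(60)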

lemon-agent-27707

04/30/2022, 1:54 PM
This would be unexpected. If you have a minimal repro, it would be great to open a bug: github.com/pulumi/pulumi/issues/new

full-continent-98866

05/02/2022, 7:07 PM
I have done a new test with my code, without using the Automation API, running it through the CLI instead. What I could verify is that through the CLI it does not show the same behavior: more than 6 WorkSpaces are created at the same time.

billowy-army-68599

05/02/2022, 7:13 PM
@full-continent-98866 can you please open a github issue? github.com/pulumi/pulumi

full-continent-98866

05/02/2022, 8:50 PM
Hello everyone! I have been able to solve the issue I was having that limited how many resources my deploy created in parallel. Fortunately, the problem was neither in Pulumi nor in the AWS provider. Thank you all for your time!

billowy-army-68599

05/04/2022, 1:45 PM
@full-continent-98866 what was the issue?

full-continent-98866

05/04/2022, 1:46 PM
I was using Pulumi together with AWS CodeBuild, and the problem was in the CodeBuild configuration

billowy-army-68599

05/04/2022, 1:47 PM
What specifically?

full-continent-98866

05/09/2022, 5:59 PM
Sorry jaxxstorm, I just saw the message. The solution was to change the CodeBuild instance type to a higher-performance one, with more RAM and vCPUs
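(For anyone hitting the same wall: a minimal sketch of setting a larger CodeBuild compute type with the Pulumi AWS Classic provider. The role, image, and source below are placeholders, not the poster's actual configuration:)

import pulumi_aws as aws

project = aws.codebuild.Project(
    "deploy-project",
    service_role=codebuild_role.arn,  # hypothetical role defined elsewhere
    artifacts=aws.codebuild.ProjectArtifactsArgs(type="NO_ARTIFACTS"),
    environment=aws.codebuild.ProjectEnvironmentArgs(
        # a larger compute type means more vCPUs/RAM for the build
        compute_type="BUILD_GENERAL1_LARGE",
        image="aws/codebuild/amazonlinux2-x86_64-standard:4.0",
        type="LINUX_CONTAINER",
    ),
    source=aws.codebuild.ProjectSourceArgs(
        type="GITHUB",
        location="https://github.com/example/repo.git",  # placeholder
    ),
)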