# getting-started
Hey everyone! I've had a good first run at using Pulumi to manage AWS resources with Python so far. However, yesterday we ran into a problem that has us perplexed. I'm using the automation API to provision ECS resources from within a Lambda. That has worked hundreds of times but now it seems to hang forever trying to run this line:
```python
vpc = aws.ec2.get_vpc(id=vpc_id)
```
I've turned on a ton of logging, and here are the final bits of it:
```
EngineEvent(sequence=76, timestamp=1742319040, cancel_event=None, stdout_event=None, diagnostic_event=DiagnosticEvent(message='I0318 17:30:40.812898      69 provider.go:1818] tf.Provider[aws].Invoke(aws:ec2/getVpc:getVpc) executing\n\n', color='never', severity='debug', stream_id=None, ephemeral=None, urn=None, prefix='debug: '), prelude_event=None, summary_event=None, resource_pre_event=None, res_outputs_event=None, res_op_failed_event=None, policy_event=None, start_debugging_event=None)
EngineEvent(sequence=77, timestamp=1742319040, cancel_event=None, stdout_event=None, diagnostic_event=DiagnosticEvent(message='I0318 17:30:40.812926      69 rpc.go:292] Unmarshaling property for RPC[tf.Provider[aws].Invoke(aws:ec2/getVpc:getVpc).args]: id={vpc-004240e0ec2083e37}\n\n', color='never', severity='debug', stream_id=None, ephemeral=None, urn=None, prefix='debug: '), prelude_event=None, summary_event=None, resource_pre_event=None, res_outputs_event=None, res_op_failed_event=None, policy_event=None, start_debugging_event=None)
EngineEvent(sequence=78, timestamp=1742319040, cancel_event=None, stdout_event=None, diagnostic_event=DiagnosticEvent(message='I0318 17:30:40.813145      69 schema.go:649] Created Terraform input: id = vpc-004240e0ec2083e37\n\n', color='never', severity='debug', stream_id=None, ephemeral=None, urn=None, prefix='debug: '), prelude_event=None, summary_event=None, resource_pre_event=None, res_outputs_event=None, res_op_failed_event=None, policy_event=None, start_debugging_event=None)
EngineEvent(sequence=79, timestamp=1742319040, cancel_event=None, stdout_event=None, diagnostic_event=DiagnosticEvent(message='I0318 17:30:40.813294      69 schema.go:659] Terraform input id = "vpc-004240e0ec2083e37"\n\n', color='never', severity='debug', stream_id=None, ephemeral=None, urn=None, prefix='debug: '), prelude_event=None, summary_event=None, resource_pre_event=None, res_outputs_event=None, res_op_failed_event=None, policy_event=None, start_debugging_event=None)
EngineEvent(sequence=80, timestamp=1742319040, cancel_event=None, stdout_event=None, diagnostic_event=DiagnosticEvent(message='I0318 17:30:40.813412      69 schema.go:659] Terraform input __defaults = []interface {}{}\n\n', color='never', severity='debug', stream_id=None, ephemeral=None, urn=None, prefix='debug: '), prelude_event=None, summary_event=None, resource_pre_event=None, res_outputs_event=None, res_op_failed_event=None, policy_event=None, start_debugging_event=None)
```
Things I've tried so far:
1. Destroyed and deleted the stack
2. Reverted pulumi and pulumi_aws to a previous version (although this has worked on the most recent one)
3. Set parallelism to 1

For completeness, here are the automation API calls:
```python
stack = auto.create_or_select_stack(
    stack_name=stack_name,
    project_name=project_name,
    program=model_services_stack(model_list),
    opts=auto.LocalWorkspaceOptions(
        secrets_provider=SECRET_PROVIDER,
        project_settings=auto.ProjectSettings(
            name=project_name,
            runtime="python",
            backend=auto.ProjectBackend(BACKEND_URL),
        ),
        stack_settings={
            stack_name: auto.StackSettings(secrets_provider=SECRET_PROVIDER)
        },
    ),
)

stack.workspace.install_plugin("aws", PULUMI_AWS_VERSION)
stack.set_config("aws:region", auto.ConfigValue(value=AWS_REGION))
stack.refresh(on_output=print)
up_res = stack.up(
    on_output=print,
    on_event=print,
    diff=SHOW_DIFF,
    continue_on_error=CONTINUE_ON_ERROR,
    log_flow=True,
    log_to_std_err=True,
    debug=LOG_LEVEL == "DEBUG",
    suppress_progress=True,
    log_verbosity=1000 if LOG_LEVEL == "DEBUG" else 0,
    parallel=PARALLEL,
)
```