# general
b
When running the code below, `pulumi up` had to be run twice in order to create the resources. The first time it creates the table and the second time it creates the item inside the table.
```python
import json
import pulumi
from pulumi_aws import dynamodb

table_default_name = dynamodb.Table(
    'pulumi-poc-test-table',
    hash_key='id',
    attributes=[
        {'name': 'id', 'type': 'S'}
    ],
    write_capacity=1,
    read_capacity=1
)

debug_object_1 = \
table_default_name.name.apply(lambda table_name: \
    dynamodb.TableItem(
        'test_item',
        hash_key='id',
        item=json.dumps({'id': {'S': 'yes'}}),
        table_name=table_name
    )
)
```
b
Any specific reason you are using an apply for the second resource?
b
I want it to be "dependent" on the creation of the first one. i.e. don't insert until this table is there. I need it for a test in my case, but it might also be a legitimate use case (e.g. some configuration table)
This is the log of what I am doing [png screenshot attached]
b
So by using `table_name=table_default_name.name` in the table item, you are forcing a dependency between them
That should be enough for the item to be created after the table is created
Worst case, you can use `CustomResourceOptions` and pass a `depends_on` to the `TableItem`
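For reference, a minimal sketch of that fallback in Python (where the equivalent of `CustomResourceOptions` is `pulumi.ResourceOptions`), reusing the table from the snippet above:

```python
import json
import pulumi
from pulumi_aws import dynamodb

# Reference the table's name directly and also declare an explicit
# depends_on, so the item is only created after the table exists.
test_item = dynamodb.TableItem(
    'test_item',
    hash_key='id',
    item=json.dumps({'id': {'S': 'yes'}}),
    table_name=table_default_name.name,
    opts=pulumi.ResourceOptions(depends_on=[table_default_name])
)
```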
b
in the log above (sorry for the png format) it shows that I need to run `pulumi up` twice in order for the actual creation to happen (although it does detect it the first time around)
b
I’m almost 100% sure that’s because you are using the apply
By removing that and passing the `table_name` explicitly, you should be ok and it should happen in one run
b
I see what you mean. I will try that.
b
so like this:
```python
import json
import pulumi
from pulumi_aws import dynamodb

table_default_name = dynamodb.Table(
    'pulumi-poc-test-table',
    hash_key='id',
    attributes=[
        {'name': 'id', 'type': 'S'}
    ],
    write_capacity=1,
    read_capacity=1
)

item = dynamodb.TableItem(
    'test_item',
    hash_key='id',
    item=json.dumps({'id': {'S': 'yes'}}),
    table_name=table_default_name.name
)
```
like that
b
right, that worked! (no apply).
b
(sorry was just taking off on a flight)
perfect 🙂
b
now about the lambda thing: how should we use it going forward? it seems to generate a dependency correctly, but it requires two `pulumi up` runs. in this case it was not necessary, but I have a case where I believe it is
b
and it should be 1 run right?
b
one run yes
b
well, the apply is more for being able to grab Pulumi outputs and then transform them into something else
it's not really required for the creation of resources unless you really need to transform the output of something like a list and loop over it inside the lambda
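For example, a minimal sketch of that kind of transformation (assuming names like `bucket` and `glue_deps` from the Glue code further down), where the apply only turns outputs into a plain string that is then passed as a resource input:

```python
import pulumi

# Combine the bucket name and every dependency key into one Output,
# then transform them into the comma-separated string Glue expects.
extra_py_files = pulumi.Output.all(bucket.bucket, *[dep.key for dep in glue_deps]).apply(
    lambda args: ','.join(f's3://{args[0]}/{key}' for key in args[1:])
)
```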
b
processing_job = \
    bucket.bucket.apply(lambda bucket_name: \
    entrypoint_script.key.apply(lambda script_key: \
    ...
    command={
        'name': 'pythonshell',
        'pythonVersion': '3',
        'scriptLocation': f's3://{bucket_name}/{script_key}'
    },
would something like that be a valid use case? it was the only way I figured out how to pass a script location
and it works correctly (no second `pulumi up` needed)
b
yes 🙂
exactly that
b
I must admit I am not really sure what advice to give the team except to only use lambda if they have a good reason to 🙂
but thanks a lot!
b
it's for doing out-of-band things - FWIW, I think the shelling out would be able to be wrapped in a DynamicResource, but using apply to manage other resources will cause it to have to run a second time as it's not part of the plan 🙂
it's done after the fact
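A bare skeleton of what wrapping that shelling-out in a dynamic resource could look like (the provider, resource name and command handling here are illustrative placeholders, not the actual implementation being discussed):

```python
import subprocess
from pulumi.dynamic import Resource, ResourceProvider, CreateResult

class ShellProvider(ResourceProvider):
    def create(self, props):
        # The out-of-band command runs inside the resource's create step,
        # so it is part of the normal plan instead of an after-the-fact apply.
        stdout = subprocess.check_output(props['command'], shell=True, text=True)
        return CreateResult(id_=props['name'], outs={**props, 'stdout': stdout})

class ShellCommand(Resource):
    def __init__(self, name, command, opts=None):
        # 'stdout' starts as None so the engine treats it as an output
        # to be filled in by the provider.
        super().__init__(ShellProvider(), name, {'name': name, 'command': command, 'stdout': None}, opts)
```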
b
sorry to come back to this. but do you know what's the relevant difference between the original dynamodb example and the following glue one? Is it the resource type or something in how I use it? In both cases I start from one resource and use apply to define another one. Is it because in the second case I use a string formatting operation to manipulate it before passing it? Is it because the parameter type on the new resource is different in the lambda case?
```python
processing_job = \
    bucket.bucket.apply(lambda bucket_name: \
    entrypoint_script.key.apply(lambda script_key: \
    framework_lib.key.apply(lambda framework_lib_key: \
    pulumi.Output.all(*[dep.key for dep in glue_deps]).apply(lambda dep_keys: \
        glue.Job(**name.me('pulumi-poc-process-file'),
                role_arn=glue_role.arn,
                command={
                    'name': 'pythonshell',
                    'pythonVersion':'3',
                    'scriptLocation': f's3://{bucket_name}/{script_key}'
                    },
                opts=pulumi.ResourceOptions(depends_on=[entrypoint_script, bucket]),
                default_arguments={
                    # [f's3://{bucket_name}/{dep_key}' for dep_key in dep_keys] + 
                    '--extra-py-files': ','.join([f's3://{bucket_name}/{framework_lib_key}'])
                }
                )
                        
))))
```
b
you mean this?
`s3://{bucket_name}/{script_key}`
b
yes. I put the entire thing there to emphasize that the entire usage pattern is similar to the dynamodb one. I understand it's needed here and not needed in the other code. I just can't figure out the guidance on when I am explicitly not allowed to use `.apply` or it will behave strangely.
b
so apply is used when you want to transform an output of 1 resource to be an input of another resource
e.g.
```typescript
const fooAlertChannel = new newrelic.AlertChannel("foo", {
        configuration: {
            include_json_attachment: "1",
            recipients: "foo@example.com",
        },
        type: "email",
    });

const fooAlertPolicy = new newrelic.AlertPolicy("foo", {});

const fooAlertPolicyChannel = new newrelic.AlertPolicyChannel("foo", {
    channelId: fooAlertChannel.id.apply(id => parseInt(id)),
    policyId: fooAlertPolicy.id.apply(id => parseInt(id)),
});
```
We are using apply here to transform the string id into an int for use in another resource
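Applying that to the Glue example, a hedged sketch (reusing `bucket`, `entrypoint_script`, `framework_lib` and `glue_role` from the earlier snippet, with a plain string in place of the `name.me(...)` helper): the applies only build strings, and the Job itself is declared at the top level so it lands in the first plan:

```python
import pulumi
from pulumi_aws import glue

# Turn outputs into plain-string inputs; no resources are created inside the lambdas.
script_location = pulumi.Output.all(bucket.bucket, entrypoint_script.key).apply(
    lambda args: f's3://{args[0]}/{args[1]}'
)
extra_py_files = pulumi.Output.all(bucket.bucket, framework_lib.key).apply(
    lambda args: f's3://{args[0]}/{args[1]}'
)

# Declared at the top level, so it is created in the same `pulumi up` as its inputs.
processing_job = glue.Job(
    'pulumi-poc-process-file',
    role_arn=glue_role.arn,
    command={
        'name': 'pythonshell',
        'pythonVersion': '3',
        'scriptLocation': script_location,
    },
    default_arguments={
        '--extra-py-files': extra_py_files,
    },
)
```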
b
right, so if the expected input is not a string but a pulumi resource that would not work. maybe it would be a good feature request for this to fail instead of resulting in a two-step apply? I can think of testing scenarios where this would end up being confusing. but for now I think I understand it enough to avoid it
thank you!
b
👍 glad I could kinda help 🙂