# general
p
can someone link me some python code showing usage of StackReference/get_output to get the outputs from a previously created stack as input to another resource? I have a use case of trying to create an AWS IAM policy in stack2 requiring an ARN and URL string exported in stack1
however, if you're using it for an AWS IAM policy that won't work directly, it'll need to be done inside an apply, because an AWS IAM policy only takes a string
p
what is annoying about that is, stack1 is created and already exists, so I'm referencing a resource that has current values for the bits that I need as strings
stack1 already has outputs created which have string values in the state file
b
yeah, the value is going to be eventually resolved, because it needs to be grabbed from stack1, so it becomes an output
just because the values exist in the AWS console, doesn't mean the values exist in the context of the stack you're creating. the path is:
• retrieve the values from stack1
• wait for that value to be returned
• then, do something with that value
p
be nicer if there was a get_current_value_of_output_as_str() function
b
that's the thing, it can't ever be a string. Anything that is not known at runtime (ie, when the pulumi program runs) is always going to be an output
if you have to query some remote API value in Pulumi, it'll be an output
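(Aside for readers: the behaviour described above can be sketched in plain Python. This is a toy model, not the real `pulumi.Output` API — the class name, the resolver mechanics, and the `_resolve` hook are all invented for illustration — but it shows why a remote value stays wrapped, and why `apply` describes "what to do once the value is known" rather than returning a string.)

```python
import json


class Output:
    """Toy stand-in for pulumi.Output: wraps a value that resolves later."""

    def __init__(self, resolver):
        # The resolver is only called when the engine resolves the value,
        # which may be milliseconds or hours after the program is declared.
        self._resolver = resolver

    def apply(self, fn):
        # Chain a transformation; fn runs once the wrapped value is known.
        return Output(lambda: fn(self._resolver()))

    def _resolve(self):
        # Engine-side hook; user code never calls this directly.
        return self._resolver()


# Pretend this queries stack1's state for an exported ARN.
arn_output = Output(lambda: "arn:aws:iam::123456789012:oidc-provider/example")

# You can't str() the output; you describe what to do once it resolves:
policy_output = arn_output.apply(lambda arn: json.dumps({"Federated": arn}))
```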
p
but at runtime I'm referencing an external stack that already exists and has static values
so why the need to wait for a result that is already there?
b
does the stack you're executing know the values that are in the remote stack?
p
that's what I'm trying to achieve… I could just go read them from the stack1 state file directly with python I guess, as I know the config strings I need are in there
from an orchestration perspective I always build stack1 then build stack2
thus when stack2 is building, all stack1 resources already have static values
b
I'm not really following what the desired outcome is here, is this to just avoid having to use an apply?
p
I'm seeking to understand why it is needed, as I regularly have this issue with outputs that I need to be strings
my assumption is that it is catering to the situation where both stack1 and stack2 are being created at the same time, and thus the values may not be ready for use yet in stack1
anyhow, I'm wrapping it in an apply, still without a firm grasp on why this level of complexity is needed
b
that's not true, no. Any value that Pulumi has to retrieve from a remote API, whether it's the result of creating a resource, retrieving a stack reference, or doing a `.get` function lookup, will be an output. That's just how Pulumi works: it turns any remote value into an output. It does this because the amount of time it may take to retrieve that value is anywhere between 1ms and <amount of time it takes to provision an EKS cluster>. So while the value exists in your stack file somewhere, it could take 1ms or it could take 5 hours to retrieve. Pulumi uses `apply` to deal with that wait time. You can't "convert" an output into a string; what you can do is wait for the output value to resolve, or be known, and then deal with that result. Using an apply is like saying "once this value has been returned" (whether that's an EKS ARN or a stack reference) "do something with the result". Anything inside the apply happens once the output has been resolved. I wrote a blog post about this earlier in the year which lots of folks have told me helps their understanding: https://www.leebriggs.co.uk/blog/2021/05/09/pulumi-apply.html
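(Aside for readers: the lambda you hand to `apply` is just an ordinary string-to-string function, so it can be written and tested on its own, outside Pulumi. A sketch under assumptions: the `StringEquals` operator and the `sts:AssumeRoleWithWebIdentity` action are typical for EKS OIDC trust policies but are not shown in this chat, which used `StringLike` and never got as far as the Action.)

```python
import json


def build_assume_role_policy(oidc_arn: str, oidc_url: str) -> str:
    """Build an EKS OIDC trust policy once both stack outputs have resolved.

    Plain str -> str, so in Pulumi it could be wired up as:
      pulumi.Output.all(arn_out, url_out).apply(
          lambda args: build_assume_role_policy(*args))
    """
    # IAM condition keys use the bare issuer, without the https:// scheme.
    issuer = oidc_url.replace("https://", "")
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Federated": oidc_arn},
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringEquals": {f"{issuer}:aud": "sts.amazonaws.com"},
            },
        }],
    })
```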
p
appreciate the reading which i will review promptly once i have this code working with the apply 😉
b
happy to help if you want to share, based on what you've asked, I assume you're doing something EKS OIDC?
p
```python
ekscreate_stackref = pulumi.StackReference(f"{pulumi_stack_info['name']}-create")

longhorn_backup_role = aws.iam.Role(
    f'longhorn-{pulumi_stack_info["region"]}-backup-role',
    description='IAM role to allow longhorn to backup to s3 regional bucket',
    force_detach_policies=True,
    assume_role_policy=pulumi.Output.all(
        ekscreate_stackref.get_output('oidc_provider_arn'),
        ekscreate_stackref.get_output('oidc_provider_url'),
    ).apply(
        lambda args: json.dumps(
            {
                "Version": "2012-10-17",
                "Statement": [
                    {
                        "Effect": "Allow",
                        "Principal": {
                            "Federated": args[0],
                        },
```
that's the top of the current chunk for the role creation
the problem is in assume_role_policy, where I need to reference the OIDC bits created in the `-create` stack
aaaaand my pulumi deploy node just barfed with oome . . . might need to upsize memory
b
@steep-sunset-89396 mind jumping in here to help out, I need to logout for the day. you'll need to do:
```python
ekscreate_stackref = pulumi.StackReference(f"{pulumi_stack_info['name']}-create")

ekscreate_stackref.get_output('oidc_provider_url').apply(
    lambda oidc_url: ...  # create your role inside the apply
)
```
s
Hey folks, sorry I was catching up on the whole thread.
Hey Brett, how are you doing mate ?
p
hi buddy doing ok
except my ec2 node that i deploy pulumi from keeps barfing
```
Diagnostics:
  pulumi:providers:aws (eu-west-1):
    error: could not read plugin [/home/bmeehan/.pulumi/plugins/resource-aws-v4.28.0/pulumi-resource-aws] stdout: EOF

  pulumi:pulumi:Stack (itplat-ipd-eks-use1-configure):
    fatal error: runtime: out of memory
    runtime stack:
    runtime.throw(0x9606908, 0x16)
```
time to scale up i think
s
What instance type are you using ?
p
i think it's a t3.medium
s
I'm surprised you're running OOM. 4GB should be plenty. Do you have anything else running on it ?
p
yeah lots of mandatory environment stuff
i killed some and got a run in... i guess my apply is almost right
```
Diagnostics:
  aws:iam:Role (longhorn-us-east-1-backup-role):
    error: 1 error occurred:
        * 1 error occurred:
        * creating inline policy (s3_policy): MalformedPolicyDocument: Actions/Condition can contain only one colon.
        status code: 400, request id: 4eb8b37f-587a-4b92-a4b8-e8ee0b2fc29a
```
just checking the changeset
im using
```python
"Condition": {
    "StringLike": {f"{args[1]}:aud".replace('https://', ''): "sts.amazonaws.com"},
},
```
where args[1] is ekscreate_stackref.get_output('oidc_provider_url')
results in
```
<snip>\"Condition\": {\"StringLike\": {\"oidc.eks.us-east-1.amazonaws.com/id/<the_id_redacted>:aud\": \"sts.amazonaws.com\"}}}]}"
```
s
yes, the HTTP 400 indicates the content of the policy is incorrect, but your logic seems right.
If someone reads this and wonders how this got solved...
"Action": ["s3:PutObject", "s3:GetObject", "s3:ListBucket", "s3:ListBucketMultipartUploads", "s3:DeleteObject" "s3:ListMultipartUploadParts"]
vs
"Action": ["s3:PutObject", "s3:GetObject", "s3:ListBucket", "s3:ListBucketMultipartUploads", "s3:DeleteObject", "s3:ListMultipartUploadParts"]
Yes, there was a missing comma, which is still valid Python (adjacent string literals are concatenated), so the two actions were silently merged into one. That generated the following policy
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "GrantLonghornBackupstoreAccess0",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:ListBucket",
        "s3:ListBucketMultipartUploads",
        "s3:DeleteObjects3:ListMultipartUploadParts"
      ],
      "Resource": [
        "arn:aws:s3:::aureq-us-east-1/*",
        "arn:aws:s3:::aureq-us-east-1"
      ]
    }
  ]
}
```
And AWS didn't like it.
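(Aside for readers: the root cause is Python's implicit concatenation of adjacent string literals, which never raises a SyntaxError. A minimal reproduction — note the merged action contains two colons, which is exactly what AWS's "Actions/Condition can contain only one colon" error was complaining about:)

```python
# A missing comma between adjacent string literals doesn't fail;
# Python silently joins them into a single list element.
actions = [
    "s3:DeleteObject"  # <- missing comma here
    "s3:ListMultipartUploadParts",
]
# actions is now ["s3:DeleteObjects3:ListMultipartUploadParts"],
# a single malformed action with two colons.
```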
l
I blame AWS.