# python
Hello there, we know we can export values programmatically by using `pulumi.export("ip", "")`, but is there a way to get that exported value back from that key using Python?
I think you're asking about StackReferences
Thanks @millions-furniture-75402! If it's in the same stack, is this the way to go?
If it's in the same stack, why wouldn't you already have the variable for use?
You can, but I don't think you should. There is a smell here. You should have a local variable that you're exporting as a stack output.
In my script, I am creating one resource that depends on the creation of another resource, and I want to pass the dependency's resource ID as a parameter when creating the dependent resource. Those resource creations are also handled by various modules in my code base. That's why I am looking for such a feature.
If you want to use the output of one resource as the input of another, that conversion happens automatically - you do not need to write any additional code to handle this.
Thanks @stocky-restaurant-98004. Okay, but I'm not sure how to tell the dependent job to use that exported value.
I'll explain my question a little bit more. In my scripts, I have a Python module that generates GCP taxonomies. In that function, I export the taxonomy resource ID under a known key for later use. I also have another module that generates policy tags for taxonomies. Consider the following scenario: I need to create a policy tag for a previously created taxonomy, so I must pass the taxonomy resource ID to the policy tag function, and I can only obtain it from the outputs. Can you give me an example of how to do this? I want to get the taxonomy resource ID from the exported key programmatically and pass it into the policy tag creation module.
This is all within a single stack, correct?
Yes @stocky-restaurant-98004
You can just return the values you need from one function to another.
`pulumi.export` is for a stack output, which you'd either consume from the command line outside of Pulumi, or in another stack via a StackReference.
Pulumi itself will keep track of the dependencies with Inputs and Outputs.
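To make the "return values from one function to another" advice concrete, here is a minimal sketch of the pattern using plain Python stand-ins (the class and function names here are hypothetical, not real Pulumi resources): the taxonomy module returns the object it creates, and the policy-tag module accepts it as a parameter, so no stack output lookup is needed within the same program.

```python
class Taxonomy:
    """Stand-in for a taxonomy resource; in Pulumi, `id` would be an Output."""
    def __init__(self, name):
        self.name = name
        self.id = f"projects/demo/locations/us/taxonomies/{name}"


class PolicyTag:
    """Stand-in for a policy tag resource."""
    def __init__(self, name, taxonomy_id):
        self.name = name
        self.taxonomy = taxonomy_id


def create_taxonomy(name):
    # Module A: create the taxonomy and RETURN it to the caller,
    # instead of only exporting its ID as a stack output.
    return Taxonomy(name)


def create_policy_tag(name, taxonomy):
    # Module B: accept the taxonomy as a plain parameter and use its ID.
    # With real Pulumi resources, passing taxonomy.id here is what lets
    # the engine track the dependency automatically.
    return PolicyTag(name, taxonomy.id)


tax = create_taxonomy("pii")
tag = create_policy_tag("email", tax)
```

With real Pulumi resources the shape is identical; the only difference is that `taxonomy.id` is an `Output`, and Pulumi resolves it and orders the creations for you.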
I am not sure how to express those dependencies. Can you send me an example? From time to time I might have to provision different resources, and sometimes I might need to point a previously created resource at a new resource. In such a case, how can I use these Inputs and Outputs?
In this example (completed here so it runs; the `website` and `BucketObject` arguments follow the standard Pulumi static-website tutorial):

```python
import os
import mimetypes

import pulumi
from pulumi_aws import s3

web_bucket = s3.Bucket('s3-website-bucket',
    website=s3.BucketWebsiteArgs(index_document="index.html"))

content_dir = "www"
for file in os.listdir(content_dir):
    filepath = os.path.join(content_dir, file)
    mime_type, _ = mimetypes.guess_type(filepath)
    obj = s3.BucketObject(file,
        bucket=web_bucket.id,
        source=pulumi.FileAsset(filepath),
        content_type=mime_type)
```

`web_bucket.id` is an Output of `web_bucket`. It becomes an input of each `obj` via the `bucket` argument. This happens automatically: Pulumi figures out what needs to be created in what order.
For those provisions, the scripts run multiple times, so I can't return a value from one function to another.
Pulumi is declarative. If you run `pulumi up` to completion and run it again, it will not create anything new, because it knows things are already in the state you declared.
Got it. Thanks a lot @stocky-restaurant-98004 for the support. I'll try to resolve my problem based on your valuable details.
Have you gone through any of our workshops? They might help. What cloud are you using?
Not yet; I went through some YouTube tutorials and the official Pulumi docs.
Currently I am using GCP.
I'm working on a GCP Python tutorial that should be released shortly.
Great 🙏
Found an alternative way to get exported values for later use:

```python
import subprocess

resource_id = subprocess.run(
    ['pulumi', 'stack', 'output', exported_key_name],
    stdout=subprocess.PIPE,
).stdout.decode('utf-8').strip()
```
that's a more fragile way of doing it than the automation api, fyi
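For reference, the Automation API version looks roughly like this. This is a sketch, assuming the stack already exists, you are logged in to a Pulumi backend, and the working directory contains the Pulumi project; `"dev"` and `"taxonomy_id"` are placeholder names, not values from this thread:

```python
from pulumi import automation as auto

# Select an existing stack; work_dir points at the Pulumi project directory.
stack = auto.select_stack(stack_name="dev", work_dir=".")

# outputs() returns a dict of OutputValue objects keyed by export name;
# .value holds the plain Python value of each stack output.
outputs = stack.outputs()
resource_id = outputs["taxonomy_id"].value
```

Unlike shelling out to the CLI, this gives you structured values and real exceptions instead of parsing stdout.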
yeah, it would be preferable if the pulumi library were enriched with functionality to get those exported values.
Right, that's stack references, or refactoring your code.
yeah, got it. I haven't tried stack references yet; will try that as well.