# general
a
Hi, is there a way to specify dependencies on a function? For example, `pulumi_aws.iam.get_policy_document`? I would love to be able to combine multiple policy documents, since resources like SNS and SQS can only have a single policy, but I don't see a way to do that. A particular example: if I have an SNS topic and two SQS queues, each with its own KMS key, SNS has to know everything about the queues and the keys beforehand so that it can set the policy all at once. This is not great in a dynamic environment.
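(The "combine multiple policy documents" part can be done with plain Python before anything touches Pulumi: IAM-style policy documents are just JSON, so merging their `Statement` arrays into one document works around the single-policy limit. A minimal sketch; `merge_policy_documents` is an illustrative helper name, not a Pulumi or AWS API:)

```python
import json


def merge_policy_documents(*documents: dict) -> dict:
    """Combine the Statement arrays of several IAM-style policy
    documents into a single policy document."""
    merged = {"Version": "2012-10-17", "Statement": []}
    for doc in documents:
        statements = doc.get("Statement", [])
        # "Statement" may be a single dict or a list of dicts.
        if isinstance(statements, dict):
            statements = [statements]
        merged["Statement"].extend(statements)
    return merged


queue_policy = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": "sqs:SendMessage", "Resource": "*"}],
}
key_policy = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": "kms:Decrypt", "Resource": "*"}],
}

combined = merge_policy_documents(queue_policy, key_policy)
print(json.dumps(combined, indent=2))
```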
d
You can wrap the functions in output apply calls, or use the `_output` variant of the functions. This lets you use outputs from other resources. https://www.pulumi.com/docs/concepts/inputs-outputs/
a
That's not the problem for me. The problem is being able to delay the creation of the policy resource until I have the combined result of all those policy documents.
Say that I have a module that creates an SNS topic
Then I have other modules that create various SQS queues
The policy of the SNS topic needs to know about all the queues, so that I can have those permissions set correctly
d
If the policy is using the outputs from the queues, then it'll wait until the queues are created before providing the final policy
a
I know, but the problem is that I don't know which queues will exist.
To clarify even further, I'm using Python and I create the SNS topic with a specific class
Then I have a method that subscribes an SQS queue to the topic
Setting the policy of the SNS topic can only be done at the very end, once every queue has subscribed
d
Hmm. So the problem you're facing is that the SQS queue method that sets up the subscription can be called an arbitrary number of times, and you need a way to create the TopicPolicy resource after all the calls have been made? Plus it's in Python
a
Correct
This all stems from the fact that SNS topics, SQS queues and KMS keys only allow a single policy, unlike IAM roles
d
This is an interesting one. I know a workaround for typescript using promises. I'm going to grab a laptop and think about how I'd do it in python
a
And you can't "attach" statements
Thanks, I appreciate it
I'll read about promises as well
My guess is that this will boil to a language feature (or lack thereof), not a Pulumi thing
d
Yeah, promises are a language feature of javascript
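(For reference, Python's closest standard-library analogue to a JavaScript promise is `asyncio.Future`: a placeholder whose result is filled in later, so awaiting it blocks until someone resolves it. A minimal sketch of that pattern:)

```python
import asyncio


async def main() -> str:
    loop = asyncio.get_running_loop()
    future: asyncio.Future = loop.create_future()

    # Resolve the future later, from somewhere else in the program
    # (e.g. once all queues have been registered).
    loop.call_soon(future.set_result, "all queues registered")

    # Awaiting the future suspends until set_result is called.
    return await future


result = asyncio.run(main())
print(result)  # all queues registered
```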
a
I'm also going to read on async/await with Python, looks promising
d
async/await is a possibility, and you can use them inside output `.apply` methods. What I'm thinking is having an output that "finalises" when you call another method after you've finished defining all your sqs queues
Might not need async actually. What you can do is have the method just create the queue + queue policy. Then at the end once you're sure all methods have been called, call a final method that makes the TopicSubscriptions + TopicPolicy
something like this:
import pulumi
import pulumi_aws as aws


class QueueFactory:
    def __init__(self, topic: aws.sns.Topic):
        self._topic = topic
        self._queues: dict[str, aws.sqs.Queue] = {}
    
    def add(self, name: str, **other):
        queue = self._queues[name] = aws.sqs.Queue(name, aws.sqs.QueueArgs(
            policy="",  # whatever this needs to be
            **other,  # pass any extra queue arguments through
        ))
        return queue

    def subscribe(self):
        # use an explicit resource name rather than the topic's private _name
        policy = aws.sns.TopicPolicy("topicPolicy", aws.sns.TopicPolicyArgs(
            arn=self._topic.arn,
            policy="",  # build policy here based on self._queues
        ))
        for qn, queue in self._queues.items():
            aws.sns.TopicSubscription(qn, aws.sns.TopicSubscriptionArgs(
                endpoint=queue.arn,
                protocol="sqs",
                topic=self._topic.arn,
            ), pulumi.ResourceOptions(depends_on=[policy]))



sns_topic = aws.sns.Topic("snsTopic")

qf = QueueFactory(sns_topic)
qf.add("queue1")
qf.add("queue2")
qf.subscribe()
a
Hi @dry-keyboard-94795 thanks for this. I actually had something similar, but I needed a way to automatically finalise everything and trigger the creation of the resources. I finally found what I was looking for in Python. As a reminder, I wanted to be able to subscribe various queues to a topic without being stuck with the policy limitations. Here's my proposed solution:
1. There is an `SNS` class to create an SNS topic.
2. That `SNS` class has a method `subscribe` that allows a queue to subscribe to it.
3. The `subscribe` method creates the subscription resource, but does not actually set a policy. Instead it adds that information to a variable that will be used later to create the policy resource. Let's call that variable `policy_statements`. Eventually we will have many policy statements, but we can only create the actual policy at the very end.
4. On the `SNS` class we also define a method `create_policy` that creates a policy resource based on the statements in `policy_statements`.
5. We use the `atexit` Python package and register the `create_policy` method with it.
6. The `atexit` package guarantees that `create_policy` will run at the end of the Python program, which means the policy will be created with all the right information, because by then we have all the information.
I haven't tested this with Pulumi yet, but if it works it's a very simple solution for this family of problems and will make me very happy indeed. I'll post here when I have actually tested it.
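(The idea, sketched without Pulumi so the timing is easy to see; `TopicPolicyBuilder` and its methods are illustrative names, not the real class:)

```python
import atexit
import json


class TopicPolicyBuilder:
    """Collects policy statements and builds one combined policy at exit."""

    def __init__(self) -> None:
        self.policy_statements: list = []
        self.final_policy = None
        # Run create_policy automatically when the interpreter exits.
        atexit.register(self.create_policy)

    def subscribe(self, queue_arn: str) -> None:
        # The real method would also create the Subscription resource;
        # here it only records the statement for later.
        self.policy_statements.append(
            {"Effect": "Allow", "Action": "sns:Publish", "Resource": queue_arn}
        )

    def create_policy(self) -> None:
        # Called once, after every subscribe() has been made.
        self.final_policy = json.dumps(
            {"Version": "2012-10-17", "Statement": self.policy_statements}
        )


builder = TopicPolicyBuilder()
builder.subscribe("arn:aws:sqs:us-east-1:123456789012:queue1")
builder.subscribe("arn:aws:sqs:us-east-1:123456789012:queue2")
# create_policy() fires at interpreter exit with both statements collected.
```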
d
I don't think atexit will work, as the python program keeps running after the stack is constructed to allow injection via output's `.apply` method
a
Unfortunately you are right regarding `atexit`:
....
      File "/usr/lib64/python3.11/asyncio/tasks.py", line 659, in ensure_future
        return _ensure_future(coro_or_future, loop=loop)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/usr/lib64/python3.11/asyncio/tasks.py", line 680, in _ensure_future
        return loop.create_task(coro_or_future)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/usr/lib64/python3.11/asyncio/base_events.py", line 434, in create_task
        self._check_closed()
      File "/usr/lib64/python3.11/asyncio/base_events.py", line 519, in _check_closed
        raise RuntimeError('Event loop is closed')
    RuntimeError: Event loop is closed
I believe this error happens because Pulumi's event loop is finished and has moved on, so if you try to do anything else it's already too late. What I needed was an atexit specific for Pulumi.
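(The same RuntimeError can be reproduced with nothing but the standard library: scheduling anything on an event loop that has already been closed fails the same way, which is what happens when `atexit` fires after Pulumi's loop has finished.)

```python
import asyncio


async def too_late() -> None:
    pass


loop = asyncio.new_event_loop()
loop.close()

coro = too_late()
try:
    # Scheduling work on a closed loop raises immediately.
    loop.create_task(coro)
except RuntimeError as exc:
    coro.close()  # suppress the "coroutine was never awaited" warning
    print(exc)  # Event loop is closed
```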
Back to the drawing board
d
It's probably easier to just have your own finaliser at the end of your `__main__.py` file