# python
w
Hi, I’m trying to deploy multiple resources of the same kind with the dynamic provider. I thought that each resource would be handled by its own instance of my provider class. It seems that this does not work like a ‘normal’ Python program:
• I tried to write logs into multiple files, one for each of the resources that I create. Pulumi seems to ignore the code in the `create` method of the provider class: in the `create` method, I get a logger with `logger = logging.getLogger()`, and each of these loggers gets its own configuration with an individual file. However, only one file is created and everything is written there, as if only the last of the `logging.getLogger()` instances were really created. This is extremely confusing.
• When using an `AuthorizationManagementClient` from `azure.mgmt.authorization`, I get really confusing `409 Conflict` responses that make me suspect something similar to the logger issue is happening here as well.
Could anyone with more experience with dynamic providers give me some insights?
e
Dynamic providers take the provider code and use dill to serialize it and transmit it to a new resource-provider process. So there should only be one copy of the provider code running, but it will be in a separate process from your Pulumi program.
w
I’m not sure whether I understand this correctly: does that mean I can use a dynamic resource provider to create only a single instance of a resource? What do I do if I want to deploy multiple instances with different configurations?
e
Does that mean, I can use a dynamic resource provider to create only a single instance of a resource?
No. One provider instance can handle multiple resource instances. Have you read the various docs we have on dynamic providers? Might help: https://www.pulumi.com/docs/intro/concepts/resources/dynamic-providers/ https://www.pulumi.com/blog/dynamic-providers/ https://github.com/pulumi/examples/tree/master/aws-py-dynamicresource
w
I’ve read the docs; that’s how I implemented the provider for my case (Azure PIM eligibility / assignment requests). However, I can’t find anything in the docs about the behavior I observed with the log files and the 409 errors that I described above. In a Python program, I expect each instance of an object to behave independently of other instances. In your first reply, you wrote that “there should only be one copy of the provider code running”. This is what got me confused. Anyway, I’m still stuck on the question of how to write multiple log files / avoid the 409 error. The 409 persists for the runtime of the Pulumi program. If I abort it and restart, the second resource gets created directly without a hassle. However, I don’t consider planning to abort and restart IaC programs in a pipeline an option.
Thanks for your replies and taking care of me, btw! 🙏
e
“However, only one file is created and everything is written there (as if only the last of the `logging.getLogger()` instances were really created). This is extremely confusing.”
So if that’s in the provider `create` call, then it will get called multiple times (once for each instance that’s created), but the behaviour of `getLogger()` with no name is to always return the same root logger instance. That’s why you’re only seeing one file.
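This can be reproduced outside Pulumi entirely. A minimal sketch (the resource names `resource_a` / `resource_b` are hypothetical): bare `logging.getLogger()` always returns the one shared root logger, while loggers retrieved by distinct names are distinct objects that can each get their own `FileHandler`.

```python
import logging
import os
import tempfile

# Calling getLogger() with no name always returns the same root logger,
# so "configuring" it once per resource just reconfigures one shared object.
assert logging.getLogger() is logging.getLogger()

# Per-resource loggers need distinct names; each can then write its own file.
log_dir = tempfile.mkdtemp()
for name in ("resource_a", "resource_b"):  # hypothetical resource names
    logger = logging.getLogger(name)
    handler = logging.FileHandler(os.path.join(log_dir, f"{name}.log"))
    logger.addHandler(handler)
    logger.warning("created %s", name)

print(sorted(os.listdir(log_dir)))  # one log file per named logger
```

Named loggers are cached by name, so calling `logging.getLogger("resource_a")` again from anywhere in the process returns the same per-resource logger with its handler still attached.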
w
Seems the `409` problems were caused by something only existing once that I thought existed multiple times: I initialized an ID with `uuid.uuid4()` as the default value of a function parameter. This is a bad idea, because such code only gets executed once, when the function is defined, and not each time the function is called. The logging problems were caused by the fact that `logging.basicConfig` configures the global root logger.
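The default-argument pitfall described above can be shown in plain Python (the function names here are hypothetical, not from the actual provider): the default expression is evaluated once at definition time, so every call without an explicit argument reuses the same UUID. The usual fix is a `None` sentinel with the ID generated inside the function body.

```python
import uuid

# Pitfall: the default expression runs exactly once, when the function
# is defined, so every call shares one UUID (hence e.g. duplicate
# request names that a service answers with 409 Conflict).
def make_request_bad(request_id=uuid.uuid4()):
    return request_id

assert make_request_bad() == make_request_bad()  # same ID every call

# Fix: use a sentinel default and generate the ID per call.
def make_request_good(request_id=None):
    if request_id is None:
        request_id = uuid.uuid4()
    return request_id

assert make_request_good() != make_request_good()  # fresh ID every call
```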