# azure
v
Hi folks, I am trying to create a depends_on to a resource with a different API version and it is not working. Has anyone had issues with dependencies on resources with different API versions?
from pulumi import ResourceOptions
import pulumi_azure_native as azure_native
from pulumi_azure_native import machinelearningservices
from pulumi_azure_native.machinelearningservices.v20221201preview import OnlineEndpoint
endpoint_variant = OnlineEndpoint(
    f"{environment}-roberta-base-cola-endpoint",
    opts=ResourceOptions(depends_on=[cola_version]),
    online_endpoint_properties={
        "auth_mode": "Key",
        "public_network_access": "disabled",
        "egress_public_network_access": "enabled"
    },
    endpoint_name=f"{environment}-cola-endpoint",
    workspace_name=f"{environment}-machinelearningWorkspace",
    resource_group_name=resource_group.name,
    location="westus2",
    kind="Managed",
    identity=azure_native.machinelearningservices.IdentityArgs(
        type="UserAssigned",
        user_assigned_identities={
            f"/subscriptions/{subscription_id}/resourceGroups/{stack_name}-machinelearning/providers/Microsoft.ManagedIdentity/userAssignedIdentities/{stack_name}-ml-identity": {}
        }
    ),
)

deployment_roberta_base_cola = machinelearningservices.OnlineDeployment(
    "deployment-roberta-base-cola",
    opts=ResourceOptions(protect=False, depends_on=[endpoint_variant]),
a
Is it possible to use v20221201preview only?
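A sketch of that suggestion, assuming OnlineDeployment is also exposed under the v20221201preview module (worth verifying in the SDK before relying on it):

# Sketch only: import both resources from the same versioned module
# so endpoint and deployment share one API version. Assumes the
# v20221201preview namespace also contains OnlineDeployment.
from pulumi_azure_native.machinelearningservices.v20221201preview import (
    OnlineDeployment,
    OnlineEndpoint,
)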
Sharing the error message would be helpful.
v
No error. Just that the resource creation starts before its dependency is done creating. I believe they start at the same time.
Like the dependency is ignored.
a
Interesting. It’s worth creating a bug ticket.
v
okay. any workarounds? maybe adding a delay?
a
Well, I guess you could, but how would you make sure the delay interval is long enough? Does OnlineDeployment take any args? You could create an implicit dependency by referencing some of the OnlineEndpoint's outputs in the OnlineDeployment args, which forces output resolution.
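A minimal sketch of that implicit-dependency idea, reusing the names from the earlier snippet (the exact set of OnlineDeployment arguments shown here is an assumption, not a complete working deployment):

# Sketch only: replace the explicit depends_on with an implicit
# dependency edge. Because endpoint_name references an output of
# endpoint_variant, Pulumi must wait for the endpoint to finish
# creating before it resolves the input and creates the deployment.
deployment_roberta_base_cola = machinelearningservices.OnlineDeployment(
    "deployment-roberta-base-cola",
    # Output reference instead of a hardcoded string:
    endpoint_name=endpoint_variant.name,
    workspace_name=f"{environment}-machinelearningWorkspace",
    resource_group_name=resource_group.name,
    opts=ResourceOptions(protect=False),
)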
v
oh good point. I will check endpoint outputs.
and see if I can plug something into deployment args.