
    better-pencil-34948

    3 months ago
    I've had to destroy a k8s cluster from under pulumi. pulumi really wants to refresh state by checking things on the pods, which no longer exist. Is there a way to get a toposorted list of resources that depend on another resource, so I can just run down the list and delete all of the k8s-dependent ones?

    billowy-army-68599

    3 months ago
    pulumi stack --show-urns
    should help
    you can also export the full state using
    pulumi stack export
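    (A minimal sketch of reading that export, assuming Pulumi's v3 state layout, where each entry in deployment.resources carries a urn plus optional dependencies, parent, and provider fields; the state.json filename is just an example:)

        import json

        # produced by: pulumi stack export > state.json
        with open("state.json") as f:
            state = json.load(f)

        # list every resource URN and the URNs it depends on
        for res in state["deployment"]["resources"]:
            print(res["urn"])
            for dep in res.get("dependencies", []):
                print("  depends on:", dep)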

    better-pencil-34948

    3 months ago
    1. That's just a list without any specific guarantees. 2. I'm really not thrilled about having to reconstruct the tree/DAG and implement my own algorithms on top of it.
    so, yeah, if I have to, I'll bust out the ole Algorithms memories and go. I was hoping there'd be a canned way to handle this common case.

    billowy-army-68599

    3 months ago
    just to make sure I'm understanding:
    • you have a program with a kubernetes cluster and resources in it
    • you deleted the cluster manually?
    • pulumi now doesn't know the cluster has gone and wants to refresh the resources
    is that right?

    better-pencil-34948

    3 months ago
    yep
    so it attempts to refresh, times out, errors.
    (ideally this is something I can tell pulumi: "if you error when refreshing, mark it as deleted, ok")

    billowy-army-68599

    3 months ago
    what is in the pulumi stack other than the kubernetes resources and the cluster?

    better-pencil-34948

    3 months ago
    variety of cloud resources - networks, database, etc

    sparse-park-68967

    3 months ago
    are you actually invoking a
    pulumi refresh
    ?

    better-pencil-34948

    3 months ago
    I did. Multiple times. 🙂

    sparse-park-68967

    3 months ago
    Yeah - we used to treat a failure to resolve the cluster as the cluster being deleted, but that caused other issues, so the default behavior was made more conservative here: https://github.com/pulumi/pulumi-kubernetes/pull/1522. If you open a feature request to opt in to the assume-deleted behavior, I'm happy to pursue it.

    better-pencil-34948

    3 months ago
    fantastic

    sparse-park-68967

    3 months ago
    In essence you would want to delete the resources that depend on the affected provider. In theory this would have worked with a
    pulumi destroy -t <provider URN>
    but we also seem to have gotten more conservative with the delete in the case of a failure to resolve the cluster. So the above approach should enable both to work.
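    (For illustration, that targeted destroy might look like the following; the URN here is hypothetical, and newer CLI versions also accept --target-dependents to cascade to dependent resources:)
        pulumi destroy --target 'urn:pulumi:dev::myproj::pulumi:providers:kubernetes::k8s-provider' --target-dependents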

    better-pencil-34948

    3 months ago
    Yes, basically I want to have cascading deletes / taints on-demand.
    anyway, for now I'll bust out the Algorithms course and write a toposort for the json DAG for precise manual deletion
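    (A sketch of that toposort, under the same v3 state-layout assumption as above: treat dependencies, parent, and provider references as edges, collect the transitive dependents of a target URN via post-order DFS, and print them leaves-first, which is a safe deletion order. The filename and argument handling are placeholders:)

        import json
        import sys
        from collections import defaultdict

        target = sys.argv[1]  # URN to "taint", e.g. the dead cluster's provider
        with open("state.json") as f:  # from: pulumi stack export > state.json
            state = json.load(f)

        # dependents[u] = URNs that directly reference u
        dependents = defaultdict(set)
        for res in state["deployment"]["resources"]:
            refs = set(res.get("dependencies", []))
            if res.get("parent"):
                refs.add(res["parent"])
            if res.get("provider"):
                # provider refs look like "<urn>::<uuid>"; strip the uuid
                refs.add(res["provider"].rsplit("::", 1)[0])
            for ref in refs:
                dependents[ref].add(res["urn"])

        # post-order DFS: the deepest dependents come out first, so deleting
        # in this order never removes something another resource still needs
        order, seen = [], set()
        def visit(urn):
            if urn in seen:
                return
            seen.add(urn)
            for d in dependents[urn]:
                visit(d)
            order.append(urn)

        visit(target)
        for urn in order:
            if urn != target:
                print(urn)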

    sparse-park-68967

    3 months ago
    if you search for the provider URN, trimming out any resources with a reference to that URN from the stack state should do it.
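    (A sketch of that trim, again assuming the v3 layout: drop every resource whose provider, parent, or dependencies reference the doomed URN, repeating until a fixed point so transitive dependents go too, then feed the result to pulumi stack import. Filenames are placeholders:)

        import json
        import sys

        target = sys.argv[1]  # the provider URN being trimmed out
        with open("state.json") as f:
            state = json.load(f)
        resources = state["deployment"]["resources"]

        # grow the doomed set until no remaining resource references it
        doomed = {target}
        while True:
            more = {
                r["urn"] for r in resources
                if r["urn"] not in doomed and (
                    doomed & set(r.get("dependencies", []))
                    or r.get("parent") in doomed
                    or r.get("provider", "").rsplit("::", 1)[0] in doomed
                )
            }
            if not more:
                break
            doomed |= more

        state["deployment"]["resources"] = [
            r for r in resources if r["urn"] not in doomed
        ]
        with open("trimmed.json", "w") as f:
            json.dump(state, f, indent=2)
        # then: pulumi stack import --file trimmed.json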

    better-pencil-34948

    3 months ago
    😬

    sparse-park-68967

    3 months ago
    Thanks for opening the issue, I agree with the utility there.

    better-pencil-34948

    3 months ago
    Tangentially: dunno how common this is, but that wouldn't really work 100% for our case. I generate an EKS cluster, then deploy an ingress into the cluster. The ingress generates (under the hood) an AWS ELB. I then go in with the AWS provider and bind a DNS record to the new ELB.
    consequently, I have to essentially apply a taint to a point in the DAG and cascade it down to the leaf resources for deletion
    in our specific use case, I anticipate some number of providers (think 10-50) for some of our work. they'll be bound together to deliver functionality.

    sparse-park-68967

    3 months ago
    Is the ELB being cleaned up when you nuke the cluster?

    better-pencil-34948

    3 months ago
    should be, heh. I think the state today for me is so hairy that 🤷
    looks like the answer is no, but that's almost certainly because the actual teardown code for the ingress never ran. It's not managed by pulumi (although I do go in to get information from it after the ingress finishes standing up).

    broad-dog-22463

    3 months ago
    Hi folks, I have an initial PR out for review for this - https://asciinema.org/a/0Xwo1OaCs4YWfxL07V4F3xZck