python
  • incalculable-sundown-82514 (02/08/2019, 8:29 PM)
    pulumi destroy deletes everything in your stack, if that's what you want
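    For reference, a typical teardown flow looks something like this (a sketch; the stack name "dev" is hypothetical):

    ```shell
    # Preview what would be deleted without touching anything
    pulumi preview --stack dev

    # Delete every resource in the stack (prompts for confirmation)
    pulumi destroy --stack dev

    # Optionally remove the now-empty stack and its history
    pulumi stack rm dev
    ```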
  • incalculable-sundown-82514 (02/08/2019, 8:29 PM)
    I'm not sure what's going on with the provider (will open an issue)
  • little-river-49422 (02/08/2019, 8:29 PM)
    it often can't
  • incalculable-sundown-82514 (02/08/2019, 8:29 PM)
    why can't it?
  • incalculable-sundown-82514 (02/08/2019, 8:30 PM)
    what happens?
  • little-river-49422 (02/08/2019, 8:31 PM)
    not sure, need to scroll. right now it decided to work, so my console is blocked 🙂
  • incalculable-sundown-82514 (02/08/2019, 8:31 PM)
    it should always work 😛
  • little-river-49422 (02/08/2019, 8:31 PM)
    well, in an ideal world...
  • incalculable-sundown-82514 (02/08/2019, 8:32 PM)
    specifically, I meant that if it doesn't, you should file bugs on us
  • incalculable-sundown-82514 (02/08/2019, 8:32 PM)
    we designed it for the exact purpose you're describing of a "clean slate"
  • little-river-49422 (02/08/2019, 8:32 PM)
    ok, give me 30 seconds 😉
  • incalculable-sundown-82514 (02/08/2019, 8:32 PM)
    no worries!
  • incalculable-sundown-82514 (02/08/2019, 8:32 PM)
    we just want pulumi destroy to work since it's really important 😛
  • little-river-49422 (02/08/2019, 8:34 PM)
    for example, I've deleted the Kubernetes cluster that was used by the stack (I have a two-phase setup: the first stack installs Kubernetes, then a second stack creates application copies inside it), but the application stack is not aware of that, and when it tries to do the deletion it just throws:
    > pulumi destroy
    Previewing destroy (feature-server-deployment):
    
         Type                               Name                                           Plan
     -   pulumi:pulumi:Stack                azure-py-kubernetes-feature-server-deployment  delete
     -   ├─ kubernetes:core:Secret          storageacct                                    delete
     -   ├─ kubernetes:core:Namespace       featureserverdeployment                        delete
     -   ├─ pulumi:providers:kubernetes     application_provider                           delete
     -   ├─ azure:signalr:Service           signalr                                        delete
     -   ├─ azure:storage:Account           storage                                        delete
     -   ├─ kubernetes:core:ServiceAccount  orleans-rbac                                   delete
     -   └─ azure:core:ResourceGroup        rg-branch                                      delete
    
    Resources:
        - 8 to delete
    
    Do you want to perform this destroy? yes
    Destroying (feature-server-deployment):
    
         Type                       Name                                           Status                  Info
         pulumi:pulumi:Stack        azure-py-kubernetes-feature-server-deployment                          3 messages
     -   └─ kubernetes:core:Secret  storageacct                                    **deleting failed**     1 error
    
    Diagnostics:
      kubernetes:core:Secret (storageacct):
        error: Plan apply failed: the cache has not been filled yet
    
      pulumi:pulumi:Stack (azure-py-kubernetes-feature-server-deployment):
        E0208 23:32:34.891272    8220 memcache.go:126] couldn't get current server API group list; will keep using cached value. (Get https://dns-8920cff4.hcp.westeurope.azmk8s.io:443/api?timeout=32s: dial tcp: lookup dns-8920cff4.hcp.westeurope.azmk8s.io: no such host)
        warning: Cluster failed to report its version number; falling back to 1.9%!(EXTRA bool=false)
        E0208 23:32:35.124311    8220 memcache.go:126] couldn't get current server API group list; will keep using cached value. (Get https://dns-8920cff4.hcp.westeurope.azmk8s.io:443/api?timeout=32s: dial tcp: lookup dns-8920cff4.hcp.westeurope.azmk8s.io: no such host)

    Permalink: https://app.pulumi.com/4c74356b41/azure-py-kubernetes/feature-server-deployment/updates/37
    error: update failed
  • little-river-49422 (02/08/2019, 8:35 PM)
    I thought I had another case, but it appears I'm tripping
  • little-river-49422 (02/08/2019, 8:35 PM)
    I can only find this
  • incalculable-sundown-82514 (02/08/2019, 8:35 PM)
    oh, yeah - you should delete your kubernetes resource stack before deleting the cluster
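    The ordering advice above can be sketched as: tear down the application stack while the cluster is still reachable, and only then tear down the cluster stack. Stack names here are hypothetical:

    ```shell
    # 1. Destroy the Kubernetes resource (application) stack first,
    #    while its provider can still talk to the cluster
    pulumi destroy --stack feature-server-deployment --yes

    # 2. Only then destroy the stack that owns the cluster itself
    pulumi destroy --stack kubernetes-cluster --yes
    ```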
  • gorgeous-egg-16927 (02/08/2019, 8:35 PM)
    The "Cluster failed to report its version number" message means it can't access the k8s cluster
  • little-river-49422 (02/08/2019, 8:36 PM)
    well, I really think you should have a force-delete resources option
  • little-river-49422 (02/08/2019, 8:36 PM)
    @gorgeous-egg-16927 I know. It's deleted, lol
  • incalculable-sundown-82514 (02/08/2019, 8:36 PM)
    We have pulumi state delete for that purpose: https://pulumi.io/reference/cli/pulumi_state_delete.html
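    A sketch of pruning an orphaned resource from state with that command (the URN shown is hypothetical; look up the real one first):

    ```shell
    # List the URNs of every resource in the current stack
    pulumi stack --show-urns

    # Remove one resource from the stack's state without asking the
    # provider to delete it (useful when the cluster is already gone)
    pulumi state delete 'urn:pulumi:feature-server-deployment::azure-py-kubernetes::kubernetes:core/v1:Secret::storageacct'
    ```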
  • incalculable-sundown-82514 (02/08/2019, 8:36 PM)
    if you want to just remove stuff from your stack
  • gorgeous-egg-16927 (02/08/2019, 8:37 PM)
    let me see if we have an issue for that situation. It seems like people will run into that if they're managing the cluster in a separate stack
  • little-river-49422 (02/08/2019, 8:37 PM)
    ok, can you add an --all parameter? 🙂
  • little-river-49422 (02/08/2019, 8:38 PM)
    I don't think it makes any sense to manage kubernetes and applications on kubernetes in the same stack
  • little-river-49422 (02/08/2019, 8:38 PM)
    it's just unmanageable
  • little-river-49422 (02/08/2019, 8:38 PM)
    what if I want a new namespace for each branch?
  • little-river-49422 (02/08/2019, 8:38 PM)
    what if I have 50 applications, each with 20 namespaces? 🙂
  • incalculable-sundown-82514 (02/08/2019, 8:40 PM)
    Pulumi isn't opinionated at all, you can totally do that if you want
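    The two-phase layout discussed above can be sketched in Python (stack names, config keys, and output names are hypothetical; this assumes the phase 1 cluster stack exports its kubeconfig):

    ```python
    import pulumi
    from pulumi_kubernetes import Provider
    from pulumi_kubernetes.core.v1 import Namespace

    # Phase 2 (application) stack: read the kubeconfig exported by the
    # phase 1 (cluster) stack. The stack path and output key are hypothetical.
    cluster = pulumi.StackReference("4c74356b41/azure-py-kubernetes/cluster")
    kubeconfig = cluster.get_output("kubeconfig")

    # Explicit provider pointing at that cluster, so each branch stack
    # can manage its own namespace independently of the cluster's lifecycle.
    k8s = Provider("application_provider", kubeconfig=kubeconfig)

    # One namespace per branch: name it after the current stack,
    # e.g. "feature-server-deployment".
    branch = pulumi.get_stack()
    ns = Namespace(
        "branch-namespace",
        metadata={"name": branch},
        opts=pulumi.ResourceOptions(provider=k8s),
    )
    ```

    Because the namespace and its contents live in their own stack, destroying a branch's resources never risks the cluster itself; the failure mode in the log above only appears when the cluster disappears before the application stack is destroyed.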
  • incalculable-sundown-82514 (02/08/2019, 8:41 PM)
    The question here is what should the kubernetes provider do if it is asked to delete a resource and it can't talk to the cluster
Powered by Linen