chilly-photographer-60932
07/05/2019, 10:59 PM
Refreshing (MINDBODY-Platform/naveen):
Permalink: <https://app.pulumi.com/MINDBODY-Platform/aws-arcus-kitchensink/naveen/updates/51>
error: the current deployment has 1 resource(s) with pending operations:
* urn:pulumi:naveen::aws-arcus-kitchensink::arcus:cluster$eks:index:Cluster$eks:index:NodeGroup$aws:ec2/launchConfiguration:LaunchConfiguration::monitoring-nodeLaunchConfiguration, interrupted while deleting
These resources are in an unknown state because the Pulumi CLI was interrupted while
waiting for changes to these resources to complete. You should confirm whether or not the
operations listed completed successfully by checking the state of the appropriate provider.
For example, if you are using AWS, you can confirm using the AWS Console.
Once you have confirmed the status of the interrupted operations, you can repair your stack
using 'pulumi stack export' to export your stack to a file. For each operation that succeeded,
remove that operation from the "pending_operations" section of the file. Once this is complete,
use 'pulumi stack import' to import the repaired stack.
refusing to proceed
pulumi cancel didn’t help.
white-balloon-205
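For reference, the export/edit/import repair that the error message describes would look roughly like this (a sketch only: the file name is arbitrary, the stack is naveen per the URN above, and which entries to remove from pending_operations depends on what you confirm in the AWS Console):

pulumi stack export --stack naveen --file stack.json
# Edit stack.json: for each interrupted operation that actually completed in AWS,
# delete its entry from the "pending_operations" array.
pulumi stack import --stack naveen --file stack.json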
chilly-photographer-60932
07/06/2019, 10:25 PM
aws:ec2:LaunchConfiguration (monitoring-nodeLaunchConfiguration):
error: pre-step event returned an error: failed to verify snapshot: resource urn:pulumi:naveen::aws-arcus-kitchensink::arcus:cluster$eks:index:Cluster$kubernetes:core/v1:ConfigMap::local-dev-naveen-nodeAccess refers to unknown provider urn:pulumi:naveen::aws-arcus-kitchensink::arcus:cluster$eks:index:Cluster$pulumi:providers:kubernetes::local-dev-naveen-eks-k8s::0f92a33a-3eea-4721-b1e7-423c3686666b
pulumi:pulumi:Stack (aws-arcus-kitchensink-naveen):
error: update failed
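One way to see which state entries still reference the provider that the snapshot check reports as unknown (an assumed troubleshooting step, not something suggested in the thread) is to export the stack and search the JSON for that provider:

pulumi stack export --stack naveen --file stack.json
# Look for resources whose "provider" field points at the unknown kubernetes provider URN:
grep -n "local-dev-naveen-eks-k8s" stack.json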
white-balloon-205
You can run pulumi destroy yourself, presumably after doing a pulumi cancel. If you can't successfully delete some resources (I'd love to understand what these are), then you can manually delete them in your cloud provider and run https://www.pulumi.com/docs/reference/cli/pulumi_state_delete/ to remove them from your Pulumi stack so you can continue to destroy other resources. You can also force-delete the stack yourself with pulumi stack rm -f, though that will orphan all of its resources (and you will then want to clean them out manually in your cloud provider).
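Putting that advice together, the recovery path would look roughly like this (a sketch; the URN is the LaunchConfiguration from the pending-operations error above, and which commands you actually need depends on what fails to delete):

pulumi cancel
pulumi destroy
# If a resource refuses to delete, remove it in the AWS Console first, then drop it
# from the stack's state (URN copied from the error above):
pulumi state delete 'urn:pulumi:naveen::aws-arcus-kitchensink::arcus:cluster$eks:index:Cluster$eks:index:NodeGroup$aws:ec2/launchConfiguration:LaunchConfiguration::monitoring-nodeLaunchConfiguration'
# Last resort: delete the stack record entirely, which orphans any remaining resources:
pulumi stack rm -f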