sparse-intern-71089 (07/05/2019, 10:59 PM): [message not captured in the archive]
white-balloon-205: [message not captured in the archive]
chilly-photographer-60932 (07/06/2019, 10:25 PM):
aws:ec2:LaunchConfiguration (monitoring-nodeLaunchConfiguration):
    error: pre-step event returned an error: failed to verify snapshot: resource urn:pulumi:naveen::aws-arcus-kitchensink::arcus:cluster$eks:index:Cluster$kubernetes:core/v1:ConfigMap::local-dev-naveen-nodeAccess refers to unknown provider urn:pulumi:naveen::aws-arcus-kitchensink::arcus:cluster$eks:index:Cluster$pulumi:providers:kubernetes::local-dev-naveen-eks-k8s::0f92a33a-3eea-4721-b1e7-423c3686666b

pulumi:pulumi:Stack (aws-arcus-kitchensink-naveen):
    error: update failed
chilly-photographer-60932 (07/06/2019, 10:26 PM): [several follow-up messages not captured in the archive]

white-balloon-205 (07/08/2019, 7:18 PM):
You can run `pulumi destroy` yourself, presumably after doing a `pulumi cancel` first. If there are any resources you can't successfully delete (I'd love to understand what these are), you can manually delete them in your cloud provider and run `pulumi state delete` (https://www.pulumi.com/docs/reference/cli/pulumi_state_delete/) to remove them from your Pulumi stack so you can continue to destroy other resources.
You can also force-delete the stack yourself with `pulumi stack rm -f`, though that will then orphan all resources (and you will then want to clean them out manually in your cloud provider).
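Putting those steps together, a minimal shell sketch of the recovery sequence might look like the following. The URN shown is the one from the error output above; in general, `pulumi stack --show-urns` lists the URNs in the current stack state.
```
# 1. Clear the stuck/failed update so the stack is no longer locked.
pulumi cancel

# 2. Try to tear down the stack's resources normally.
pulumi destroy

# 3. If a resource refuses to delete, remove it in the cloud provider first,
#    then drop it from Pulumi's state and re-run the destroy.
#    (URN taken from the error above; quote it so the shell keeps the '$'s.)
pulumi state delete 'urn:pulumi:naveen::aws-arcus-kitchensink::arcus:cluster$eks:index:Cluster$kubernetes:core/v1:ConfigMap::local-dev-naveen-nodeAccess'
pulumi destroy

# 4. Last resort: force-remove the stack, which orphans anything still running.
pulumi stack rm -f
```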