# general
@early-musician-41645 Pulumi can’t destroy it if you’re not authenticated as a user with permissions. You could try to destroy it from the console, but that would imply you do have a user account with permissions?
The bit at the end, where it says it couldn’t find the cluster version, is troubling, but it might be related to the fact that you don’t have access to the cluster?
cc @gorgeous-egg-16927
Suppose someone created a stack with an EKS cluster but never granted cluster access to anyone else. Now that person is gone, and we need to tear down the cluster and all Pulumi-managed resources in that stack. How can that be done?
The `destroy` is failing when trying to delete Charts and ConfigMaps, but there's no access to those. We can delete the cluster just fine from the AWS console if needed.
If you don't have permission for the Kubernetes resources, then you need to delete them from the console. We create the EKS cluster and the aws-auth ConfigMap with a shared admin role, through a provider that assumes it, so everybody who can assume that role is able to destroy the cluster and its resources. We don't use that role in Kubernetes, though; there we use another role, which has no permissions on any AWS resource and exists only to be assumed, and the RBAC rules decide what it can do in Kubernetes.
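Roughly this shape, as a minimal sketch (the ARNs, role names, and groups here are placeholders, not our real setup):

```typescript
import * as aws from "@pulumi/aws";
import * as eks from "@pulumi/eks";

// Provider that assumes the shared admin role. Anyone who can assume this
// role can stand up or tear down the cluster and its AWS resources.
const adminProvider = new aws.Provider("admin", {
    assumeRole: { roleArn: "arn:aws:iam::123456789012:role/eks-admin" }, // placeholder
});

// A second role with no AWS permissions of its own; it exists only to be
// assumed, and Kubernetes RBAC decides what it can do in-cluster.
const k8sRole = new aws.iam.Role("k8s-users", {
    assumeRolePolicy: JSON.stringify({
        Version: "2012-10-17",
        Statement: [{
            Effect: "Allow",
            Action: "sts:AssumeRole",
            Principal: { AWS: "arn:aws:iam::123456789012:root" }, // placeholder account
        }],
    }),
}, { provider: adminProvider });

// The cluster is created through the admin provider, and the Kubernetes-only
// role is mapped into the aws-auth ConfigMap so RBAC governs what it can do.
const cluster = new eks.Cluster("cluster", {
    roleMappings: [{
        roleArn: k8sRole.arn,
        username: "k8s-user",
        groups: ["dev-group"],
    }],
}, { providers: { aws: adminProvider } });
```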
@early-musician-41645 Ah, I take your question to not actually be about permissions, but instead about how you reconcile the Pulumi state?
I’d just delete it in the console, and then run `pulumi refresh`.
I finally got it resolved with a bit of export+modify+import magic: I just took the k8s Chart and ConfigMap resources out of the state.
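For anyone hitting the same thing, the dance was roughly: `pulumi stack export --file stack.json`, delete the stuck Chart and ConfigMap entries from the `resources` array in the JSON, then `pulumi stack import --file stack.json` and run `pulumi destroy` again. (`stack.json` is just an arbitrary filename.)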