# general
Next question - it seems that I'm no longer authorized to modify an EKS cluster I created and I'm seeing errors like this:
Do you want to perform this destroy? yes
Destroying (tableau/online-vnext-10az):

     Type                                Name                              Status                  Info
     pulumi:pulumi:Stack                 eks-cluster-online-vnext-10az                             2 messages
 -   ├─ kubernetes:core:ConfigMap        kube-system/online-splunk-config  **deleting failed**     1 error
 -   └─ kubernetes:extensions:DaemonSet  kube-system/splunk-forwarder      **deleting failed**     1 error

  kubernetes:core:ConfigMap (kube-system/online-splunk-config):
    error: Plan apply failed: Unauthorized

  pulumi:pulumi:Stack (eks-cluster-online-vnext-10az):
    warning: Cluster failed to report its version number; falling back to 1.9%!(EXTRA bool=false)
    warning: Cluster failed to report its version number; falling back to 1.9%!(EXTRA bool=false)

  kubernetes:extensions:DaemonSet (kube-system/splunk-forwarder):
    error: Plan apply failed: Unauthorized

error: update failed
Any ideas on how I can modify/destroy it?
@early-musician-41645 pulumi can’t destroy it if you’re not auth’d as a user with permissions. You could try to destroy it from the console, but that would imply you do have a user account with permissions?
The bit at the end there where it’s saying that it couldn’t find the cluster version is troubling, but it might be related to the fact that you don’t have access to the cluster?
cc @gorgeous-egg-16927
Suppose someone created a stack with an EKS cluster but never granted cluster access to anyone else. Now that person is gone, and we need to tear down the cluster and all Pulumi-managed resources in that stack. How can that be done?
The destroy is failing when trying to delete charts and ConfigMaps, but there's no access to those. We can delete the cluster itself just fine from the AWS console if needed.
If you don't have permission for the Kubernetes resources, then you need to delete them from the console. We create the EKS cluster and the aws-auth ConfigMap with a shared admin role, through an assumed provider, so everybody who can assume that role is able to destroy the cluster and its resources. But we don't use that role inside Kubernetes; we use a second role, which has no permission for any AWS resource and only exists to be assumed, and the RBAC rules decide what that role can do in Kubernetes.
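For context, the access model described above lives in the aws-auth ConfigMap in kube-system, which maps IAM roles to Kubernetes identities. A minimal sketch of that setup might look like this (the ARNs, usernames, and group names are placeholders, not taken from this thread):

```yaml
# kube-system/aws-auth: maps IAM roles to Kubernetes RBAC identities.
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    # Shared admin role (the one Pulumi assumes): full cluster access.
    - rolearn: arn:aws:iam::111111111111:role/eks-admin   # placeholder ARN
      username: pulumi-admin
      groups:
        - system:masters
    # Second role with no AWS permissions of its own; what it can do
    # inside the cluster is decided entirely by RBAC rules.
    - rolearn: arn:aws:iam::111111111111:role/eks-user    # placeholder ARN
      username: eks-user
      groups:
        - developers
```

If nobody else was ever added to mapRoles (or mapUsers), no one but the cluster creator can authenticate, which is exactly the situation being described.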
@early-musician-41645 Ah, I take your question to not actually be about permissions, but instead about how you reconcile the Pulumi state?
I’d just delete it in the console, and then run
pulumi refresh
I finally got it resolved by doing a bit of export+modify+import magic
just took the k8s chart and configmap resources out of the state
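The export+modify+import step above can be sketched roughly as follows. `pulumi stack export` and `pulumi stack import --file <file>` are real CLI commands; the filtering helper below is a hypothetical script, not part of Pulumi, and it assumes the exported checkpoint has the usual `deployment.resources` layout:

```python
import json

# Resource types to drop from the state: the Kubernetes resources that
# can no longer be deleted because cluster access was lost. (These two
# types come from the error output earlier in the thread.)
DROP_TYPES = (
    "kubernetes:core:ConfigMap",
    "kubernetes:extensions:DaemonSet",
)

def strip_resources(checkpoint: dict) -> dict:
    """Remove unreachable Kubernetes resources from an exported
    Pulumi checkpoint, so a later destroy won't try to delete them."""
    resources = checkpoint["deployment"]["resources"]
    kept = [
        r for r in resources
        # URNs embed the resource type, e.g.
        # urn:pulumi:stack::proj::kubernetes:core:ConfigMap::name
        if not any(t in r.get("urn", "") for t in DROP_TYPES)
    ]
    checkpoint["deployment"]["resources"] = kept
    return checkpoint

def filter_checkpoint_file(src: str, dst: str) -> None:
    """Read `pulumi stack export` output from src, write the filtered
    checkpoint to dst for `pulumi stack import --file dst`."""
    with open(src) as f:
        cp = json.load(f)
    with open(dst, "w") as f:
        json.dump(strip_resources(cp), f, indent=2)
```

Usage would be along the lines of: `pulumi stack export > stack.json`, run the script to produce `stack.out.json`, then `pulumi stack import --file stack.out.json`. After that, the destroy no longer tries to touch the Kubernetes resources, and the cluster itself can be destroyed or deleted from the console.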