# kubernetes


02/03/2024, 7:11 AM
Hi all! We're in a weird position in one of our stacks. We manually deleted the Kubernetes cluster in the Azure Portal (to see how well we can recover in case that happens). Normally, we would then use `PULUMI_K8S_DELETE_UNREACHABLE=true` in a preview step to remove the 'dangling' resources from the Pulumi state; however, this no longer seems to work. For example:
```
❯ PULUMI_K8S_DELETE_UNREACHABLE=true pulumi refresh --target "urn:pulumi:k8s-pr-959::kaas-ts::kubernetes:core/v1:ServiceAccount::k8s-pr-959-external-secrets-identity-service-account"
Previewing refresh (k8s-pr-959):
     Type                                  Name                                                  Plan        Info
     pulumi:pulumi:Stack                   kaas-ts-k8s-pr-959                                                1 error
 ~   └─ kubernetes:core/v1:ServiceAccount  k8s-pr-959-external-secrets-identity-service-account  refresh     1 error; 1 warning

  pulumi:pulumi:Stack (kaas-ts-k8s-pr-959):
    error: preview failed

  kubernetes:core/v1:ServiceAccount (k8s-pr-959-external-secrets-identity-service-account):
    warning: configured Kubernetes cluster is unreachable: unable to load schema information from the API server: Get "<>": EOF
    error: Preview failed: failed to read resource state due to unreachable cluster. If the cluster was deleted, you can remove this resource from Pulumi state by rerunning the operation with the PULUMI_K8S_DELETE_UNREACHABLE environment variable set to "true"
```
Of course, I can manually edit the state and remove the resources myself, but this setting did work before. Any ideas on how I can fix this?
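For reference, the manual workaround doesn't require hand-editing the state file: Pulumi has a built-in command for dropping a single resource from state. A minimal sketch, using the URN from the failing refresh above:

```shell
# Remove the dangling resource from the stack's state directly.
# This only edits state; no calls are made to the (deleted) cluster.
pulumi state delete \
  "urn:pulumi:k8s-pr-959::kaas-ts::kubernetes:core/v1:ServiceAccount::k8s-pr-959-external-secrets-identity-service-account"
```

This still has to be repeated per dangling resource, which is why the `PULUMI_K8S_DELETE_UNREACHABLE` path is preferable when it works.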
Apparently the Kubernetes provider had a `deleteUnreachable` setting in its provider config in the state. Manually overwriting that setting to `true` allowed the refresh to continue. Shouldn't the environment variable take precedence over the provider option?
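A sketch of that state edit without opening an editor, assuming the provider resource has type `pulumi:providers:kubernetes` and an input field named `deleteUnreachable` (as we observed in our exported state; verify against your own export first, and note the value is stored as the string "true"):

```shell
# Export the stack's state, flip the provider's deleteUnreachable input,
# and re-import the patched state. Requires the Pulumi CLI and jq.
pulumi stack export --file state.json

jq '(.deployment.resources[]
    | select(.type == "pulumi:providers:kubernetes")
    | .inputs.deleteUnreachable) = "true"' state.json > patched.json

pulumi stack import --file patched.json
```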
Okay, I figured out why the provider option was set. Apparently, when you set the `PULUMI_K8S_DELETE_UNREACHABLE` environment variable to `false`, that value gets persisted into the provider config on every mutation (which happened in our pipelines). Setting the environment variable conditionally (and leaving it unset instead of 'false') fixed this. I think this either warrants a change or should be documented a bit better.
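The conditional-export fix can be sketched like this (the `DELETE_UNREACHABLE` pipeline variable name is our own, purely illustrative):

```shell
# Hypothetical pipeline step: only export PULUMI_K8S_DELETE_UNREACHABLE when
# it should actually be "true". Leaving it unset on the "false" path means a
# literal "false" never gets persisted into the provider config in state.
if [ "${DELETE_UNREACHABLE:-}" = "true" ]; then
  export PULUMI_K8S_DELETE_UNREACHABLE=true
fi
```

After this guard, the pipeline runs `pulumi up` (or `pulumi refresh`) as usual.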