# kubernetes
@gorgeous-egg-16927 What's a good way to handle pre-existing resources that I don't want to import into pulumi management, but that I do want to delete to clear the way for pulumi-managed resources? Specifically, looking at https://docs.aws.amazon.com/eks/latest/userguide/windows-support.html, which is kind of a mess:
the cluster role,
cluster role binding, and
service account already exist after standing up a new EKS cluster, while the
deployment does not. The pre-existing cluster role also differs from the one at the guide's download link and is missing config map access (so it needs modification anyway). So I want to delete the lot iff they exist (and are not managed by pulumi), then create new ones with consistent
names throughout. If I can't delete the pre-existing ones first, I'd be left with duplicates of the cluster role and cluster role binding, and a name clash with the service account. 🤔
Effectively, I want a one-off "delete if it exists and is not managed by pulumi" that becomes a no-op on subsequent runs, so it's safe to leave the code in; in fact it will still be needed if the EKS cluster is ever rebuilt.
Why don’t you just use the k8s SDK to do that? I had a similar issue updating the existing aws-auth config map in an EKS cluster, and that’s how I solved it.
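[Editor's note] The SDK suggestion above could be sketched in Python with the official kubernetes client. This is a hedged sketch, not a confirmed pattern from the thread: the `app.kubernetes.io/managed-by` label used to recognize pulumi-managed objects is an illustrative assumption — verify what marker (label or annotation) your pulumi provider actually sets before relying on it.

```python
# Sketch of "delete if it exists and is NOT managed by pulumi".
# ASSUMPTION: pulumi-managed objects carry an identifying label; the
# label key/value below are illustrative, not confirmed.

PULUMI_MANAGED_LABEL = "app.kubernetes.io/managed-by"

def should_delete(labels):
    """True for resources that are not marked as pulumi-managed."""
    return (labels or {}).get(PULUMI_MANAGED_LABEL) != "pulumi"

def delete_unmanaged_cluster_role(name):
    # Imports kept inside the function so the pure helper above
    # works without the kubernetes package installed.
    from kubernetes import client, config
    from kubernetes.client.rest import ApiException

    config.load_kube_config()
    rbac = client.RbacAuthorizationV1Api()
    try:
        role = rbac.read_cluster_role(name)
    except ApiException as exc:
        if exc.status == 404:
            return  # already gone: safe no-op on subsequent runs
        raise
    if should_delete(role.metadata.labels):
        rbac.delete_cluster_role(name)
```

Because a missing resource short-circuits to a no-op, the call stays safe to leave in the program permanently, matching the "still needed if the cluster is rebuilt" requirement.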
Yeah, I don’t have a computer in front of me to get the link, but the EKS lib does something like that, using a dynamic provider to update an existing config map.
You could likely use a dynamic provider to wrap that logic based on a native SDK or kubectl
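[Editor's note] Wrapping that logic in a dynamic provider (supported in pulumi's Python and TypeScript SDKs, though not .NET, as noted later in the thread) might look like the sketch below. Resource names are illustrative; it assumes `kubectl` is on the PATH, and the commented-out provider classes additionally assume the `pulumi` package.

```python
# Hedged sketch: a dynamic provider that shells out to kubectl to
# clear pre-existing, unmanaged resources before pulumi creates its own.
import subprocess

def kubectl_delete_args(kind, name, namespace=None):
    # --ignore-not-found makes the delete idempotent, so re-runs are no-ops.
    args = ["kubectl", "delete", kind, name, "--ignore-not-found"]
    if namespace:
        args += ["--namespace", namespace]
    return args

# The dynamic-provider wrapper (requires `import pulumi.dynamic as dynamic`):
#
# class CleanupProvider(dynamic.ResourceProvider):
#     def create(self, props):
#         for kind, name in props["targets"]:
#             subprocess.run(kubectl_delete_args(kind, name), check=True)
#         return dynamic.CreateResult(id_="cleanup", outs=props)
#
# class Cleanup(dynamic.Resource):
#     def __init__(self, resource_name, targets, opts=None):
#         super().__init__(CleanupProvider(), resource_name,
#                          {"targets": targets}, opts)
```

Other resources can then `depends_on` the `Cleanup` resource so the deletion happens before they are created.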
We do have an open issue for that as well - you can search for “kubectl apply -f” in the k8s provider issues.
Yeah I commented on that issue, ~1 year ago now, with a need for read-modify-write of external resources (not managed by pulumi)
Also, I'm doing everything in dotnet/C# now, and AFAIK dynamic resources are not supported there yet.
So somehow I need to run a conditional side effect, and have a pulumi-managed resource depend on that side effect having completed before it runs itself.
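[Editor's note] One way to express "side effect first, then resource" without dynamic providers is the pulumi-command provider's `local.Command`, which, unlike dynamic providers, is also available for .NET. A hedged Python sketch (resource names illustrative; the commented-out part assumes the `pulumi`, `pulumi_kubernetes`, and `pulumi_command` packages):

```python
# Hedged sketch: sequence a one-off cleanup before a pulumi-managed
# resource using depends_on.

def cleanup_command(kind, name):
    # Idempotent delete: safe to leave in the program permanently.
    return f"kubectl delete {kind} {name} --ignore-not-found"

# import pulumi
# import pulumi_kubernetes as k8s
# from pulumi_command import local
#
# cleanup = local.Command(
#     "pre-cleanup",
#     create=cleanup_command("clusterrole", "old-role"),
# )
# role = k8s.rbac.v1.ClusterRole(
#     "new-role",
#     opts=pulumi.ResourceOptions(depends_on=[cleanup]),
# )
```

Because `--ignore-not-found` makes the command a no-op once the pre-existing resources are gone, re-running the program (or rebuilding the cluster) behaves as the thread requires.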
(I avoid the issue of modifying the aws-auth config map by waiting for the API server before creating the nodes, so the config map doesn't exist before I create it with pulumi.)