# kubernetes
I am trying to deploy `https://github.com/kuberne...` as a `pulumi_kubernetes.helm.v3.Chart`, but it fails because:
csidrivers.storage.k8s.io "efs.csi.aws.com" already exists
This happens because the chart includes a pre-install Helm hook to remove the existing efs-csi driver, and Pulumi doesn't support Helm hooks. What is the best way for me to work around this?
I haven’t tested it for your specific case, but you may be able to use the SkipCRDRendering option that was added in v3.2.0: https://github.com/pulumi/pulumi-kubernetes/pull/1572
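Untested, but roughly something like this. Note the chart name, repo URL, and the Python spelling `skip_crd_rendering` are my guesses (the PR adds the option as `SkipCRDRendering` in the Go SDK; I'm assuming the usual snake_case mapping in `ChartOpts`):

```python
import pulumi_kubernetes as k8s

# Sketch only: chart name and repo URL are hypothetical, and
# `skip_crd_rendering` assumes the snake_case form of the
# SkipCRDRendering option added in pulumi-kubernetes v3.2.0.
chart = k8s.helm.v3.Chart(
    "aws-efs-csi-driver",
    k8s.helm.v3.ChartOpts(
        chart="aws-efs-csi-driver",
        fetch_opts=k8s.helm.v3.FetchOpts(
            repo="https://kubernetes-sigs.github.io/aws-efs-csi-driver",
        ),
        # Skip rendering the CRD objects bundled with the chart, so Pulumi
        # doesn't try to recreate cluster-scoped objects that already exist.
        skip_crd_rendering=True,
    ),
)
```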
@gorgeous-egg-16927 I could use some additional details to understand how this option would help me, or how to use it. I searched through the docs but found nothing, and there isn't much detail in the PR comments either.
Currently I'm getting around it by calling the Kubernetes API directly to remove the existing CSI driver, like this:
```python
import kubernetes

# load_kube_config populates the client's default configuration in place,
# so ApiClient() can be constructed without passing a configuration object
kubernetes.config.load_kube_config(
    config_file='~/.kube/config',
    context='arn:aws:eks:us-east-1:6299999991:cluster/itplat-eks-cluster',
)
with kubernetes.client.ApiClient() as api_client:
    k8s_api_instance = kubernetes.client.StorageV1beta1Api(api_client)
    csi_read_response = k8s_api_instance.read_csi_driver("efs.csi.aws.com")
    print(f'the efs.csi api_version was {csi_read_response.api_version}')
    if csi_read_response.api_version == 'storage.k8s.io/v1':
        csi_delete_response = k8s_api_instance.delete_csi_driver("efs.csi.aws.com")
```
but it feels like a really hacky workaround for Pulumi not supporting Helm hooks
So, back on this… I'd like to give some context. We are creating a single Pulumi stack which deploys everything needed for an EKS cluster, then creates the cluster, then populates it with essential components (Crossplane pieces, CSI drivers, etc.) that our application-service Helm charts require to pre-exist.
The Helm deployment uses a pre-install hook to remove the existing EFS CSI driver and install its own, which does not work in Pulumi because of https://github.com/pulumi/pulumi-kubernetes/issues/555
So I'm trying to work around / emulate that CSI driver removal using:
```python
csi_delete_response = k8s_api_instance.delete_csi_driver("efs.csi.aws.com")
```
but I'm having trouble referencing the kubeconfig created in the same stack. I'm trying things like:
```python
configuration = kubernetes.config.load_kube_config(config_file='~/.kube/config.pulumi', context='arn:aws:eks:us-east-1:6299999521:cluster/itplat-eks-cluster')
```
@billowy-army-68599 I'm using the kubeconfig code which you pointed me at… any suggestions about how to pass that same config resource through, or a suggestion about a better design?
The issue I'm having is that the kubeconfig doesn't exist at the preview stage, so I can't use it as a valid config for kubernetes.config.load_kube_config.
There isn't a great solution for this, I'm afraid. It's part of the reason I suggested using the EKS provider, which fixes a lot of these issues (they're AWS-specific). Your best bet is to build a dynamic provider for it: https://www.pulumi.com/blog/dynamic-providers/
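Roughly like this (untested sketch; all names are hypothetical). A dynamic provider's `create` only runs during `pulumi up`, never during preview, so by depending on the cluster resource the kubeconfig will exist by the time the deletion logic runs:

```python
import pulumi
import pulumi.dynamic as dynamic

class CsiDriverCleanupProvider(dynamic.ResourceProvider):
    def create(self, props):
        # import inside create so the provider serializes cleanly
        import kubernetes
        kubernetes.config.load_kube_config(
            config_file=props["kubeconfig_path"],
            context=props["context"],
        )
        with kubernetes.client.ApiClient() as api_client:
            api = kubernetes.client.StorageV1beta1Api(api_client)
            try:
                api.delete_csi_driver(props["driver_name"])
            except kubernetes.client.rest.ApiException as e:
                if e.status != 404:  # already gone is fine
                    raise
        return dynamic.CreateResult(id_=props["driver_name"], outs=props)

class CsiDriverCleanup(dynamic.Resource):
    def __init__(self, name, props, opts=None):
        super().__init__(CsiDriverCleanupProvider(), name, props, opts)

# Hypothetical usage, assuming `cluster` is your EKS cluster resource:
# cleanup = CsiDriverCleanup(
#     "remove-efs-csi-driver",
#     {
#         "kubeconfig_path": "~/.kube/config.pulumi",
#         "context": cluster_context,
#         "driver_name": "efs.csi.aws.com",
#     },
#     opts=pulumi.ResourceOptions(depends_on=[cluster]),
# )
```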