purple-plumber-90981
05/28/2021, 2:16 AM
I am trying to install https://github.com/kubernetes-sigs/aws-efs-csi-driver/tree/master/charts/aws-efs-csi-driver
as a pulumi_kubernetes.helm.v3.Chart, but it fails because csidrivers.storage.k8s.io "efs.csi.aws.com" already exists.
This happens because the chart includes a pre-install helm hook to remove the existing efs-csi driver, and pulumi doesn't support helm hooks (https://github.com/pulumi/pulumi-kubernetes/issues/555). What is the best way for me to work around this?

gorgeous-egg-16927
05/28/2021, 4:04 PM

purple-plumber-90981
05/31/2021, 3:16 AMconfiguration = kubernetes.config.load_kube_config(config_file='~/.kube/config', context='arn:aws:eks:us-east-1:6299999991:cluster/itplat-eks-cluster')
with kubernetes.client.ApiClient(configuration) as api_client:
k8s_api_instance = kubernetes.client.StorageV1beta1Api(api_client)
csi_read_response = k8s_api_instance.read_csi_driver("<http://efs.csi.aws.com|efs.csi.aws.com>")
print(f'the efs.csi api_version was {csi_read_response.api_version}')
if csi_read_response.api_version == '<http://storage.k8s.io/v1|storage.k8s.io/v1>':
csi_delete_response = k8s_api_instance.delete_csi_driver("<http://efs.csi.aws.com|efs.csi.aws.com>")
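[Editorial note: an alternative workaround, not from the thread. Instead of deleting the CSIDriver out of band, you can strip the chart's hook resources before Pulumi registers them, using a resource transformation. `skip_helm_hooks` is a hypothetical helper name; the trick of rewriting an unwanted object into an empty `v1/List` (which the provider then renders as nothing) is the commonly documented way to drop a resource in a pulumi-kubernetes transformation.]

```python
# Sketch: drop any rendered chart resource that carries a helm.sh/hook
# annotation, so pre-install hook objects are never created by Pulumi.
# Transformations mutate the rendered manifest dict in place, which is the
# contract for entries in pulumi_kubernetes ChartOpts(transformations=[...]).

def skip_helm_hooks(obj, opts=None):
    annotations = obj.get("metadata", {}).get("annotations", {})
    if "helm.sh/hook" in annotations:
        # Rewriting the object as an empty List makes the provider emit nothing.
        obj.clear()
        obj.update({"apiVersion": "v1", "kind": "List", "items": []})

# Standalone demo (no Pulumi needed): a hook-annotated Job is blanked out,
# while an ordinary CSIDriver object passes through untouched.
hook_job = {
    "apiVersion": "batch/v1",
    "kind": "Job",
    "metadata": {"name": "delete-efs-csi-driver",
                 "annotations": {"helm.sh/hook": "pre-install"}},
}
csi_driver = {
    "apiVersion": "storage.k8s.io/v1",
    "kind": "CSIDriver",
    "metadata": {"name": "efs.csi.aws.com"},
}
skip_helm_hooks(hook_job)
skip_helm_hooks(csi_driver)
print(hook_job["kind"])    # List
print(csi_driver["kind"])  # CSIDriver
```

In the Pulumi program this would be wired up as `Chart(..., ChartOpts(..., transformations=[skip_helm_hooks]))`. Whether dropping the hook is sufficient here depends on whether the pre-existing efs.csi.aws.com CSIDriver object is compatible with the one the chart wants to install.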
https://github.com/kubernetes-sigs/aws-efs-csi-driver
The helm deployment uses a pre-install hook to remove the existing EFS CSI driver and install its own, which does not work in Pulumi because of https://github.com/pulumi/pulumi-kubernetes/issues/555
configuration = kubernetes.config.load_kube_config(config_file='~/.kube/config.pulumi', context='arn:aws:eks:us-east-1:6299999521:cluster/itplat-eks-cluster')
https://github.com/pulumi/examples/blob/master/aws-py-eks/utils.py
which you pointed me at… any suggestions about how to pass that same config resource to kubernetes.config.load_kube_config?

billowy-army-68599
06/03/2021, 5:42 AM
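[Editorial note: a possible answer to the kubeconfig question above, not from the thread. The aws-py-eks example builds the kubeconfig as a plain dict, and since YAML is a superset of JSON, that dict can simply be dumped to a temp file and handed to `load_kube_config(config_file=...)`; newer kubernetes Python clients also offer `kubernetes.config.load_kube_config_from_dict`, which skips the file entirely. The kubeconfig below is a trimmed, hypothetical stand-in for what utils.py generates.]

```python
import json
import tempfile

# Hypothetical, trimmed kubeconfig shaped like the dict the aws-py-eks
# example's kubeconfig generator returns (names and server are invented).
kubeconfig = {
    "apiVersion": "v1",
    "kind": "Config",
    "clusters": [{"name": "itplat-eks-cluster",
                  "cluster": {"server": "https://example.eks.amazonaws.com",
                              "certificate-authority-data": "REDACTED"}}],
    "contexts": [{"name": "aws",
                  "context": {"cluster": "itplat-eks-cluster", "user": "aws"}}],
    "current-context": "aws",
    "users": [{"name": "aws", "user": {}}],
}

# A JSON dump is a valid YAML document, so this file is a usable kubeconfig.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(kubeconfig, f)
    path = f.name

print(path)
# The kubernetes client can then load it the usual way:
#   kubernetes.config.load_kube_config(config_file=path)
# or, without any file at all (client >= v12 or so):
#   kubernetes.config.load_kube_config_from_dict(kubeconfig)
```

In a Pulumi program the kubeconfig is an Output, so the dump-and-load step would need to run inside an `.apply(...)` once the value resolves.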