# general
d
actually no, it’s an ENI that was attached to a now-deleted EKS worker
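One possible manual workaround (not discussed in this thread) is to find and delete the ENIs the CNI left behind so `pulumi destroy` can proceed. A sketch using the AWS SDK for JavaScript/TypeScript; the region and the `cluster.k8s.amazonaws.com/name` tag filter are assumptions about how the CNI tags its ENIs:

```typescript
// Clean up ENIs left "available" after their worker node was deleted, so the
// rest of the stack can be destroyed. Region and tag key below are assumptions.
import * as AWS from "aws-sdk";

const ec2 = new AWS.EC2({ region: "us-west-2" });

async function cleanupLeakedEnis(clusterName: string) {
    const res = await ec2.describeNetworkInterfaces({
        Filters: [
            { Name: "status", Values: ["available"] },
            // The VPC CNI tags the ENIs it creates with the cluster name (assumed tag key).
            { Name: "tag:cluster.k8s.amazonaws.com/name", Values: [clusterName] },
        ],
    }).promise();

    for (const eni of res.NetworkInterfaces ?? []) {
        console.log(`deleting leaked ENI ${eni.NetworkInterfaceId}`);
        await ec2.deleteNetworkInterface({ NetworkInterfaceId: eni.NetworkInterfaceId! }).promise();
    }
}

cleanupLeakedEnis("my-cluster").catch(err => {
    console.error(err);
    process.exit(1);
});
```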
w
I’ve seen this most commonly with ENIs that the AWS VPC CNI allocates. There have been a lot of fixes in the last few releases of the CNI to address issues like this. If you’re on the latest version of the `pulumi-eks` package, it should pull in the latest CNI by default, which includes fixes for similar failures we’d occasionally seen in our own CI.
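A hedged sketch of configuring the VPC CNI through `pulumi-eks` rather than taking the bundled default; the `vpcCniOptions.image` override and the exact image tag are assumptions and should be checked against the options supported by the `pulumi-eks` version in use:

```typescript
import * as eks from "@pulumi/eks";

const cluster = new eks.Cluster("cluster", {
    vpcCniOptions: {
        // Pin a specific aws-k8s-cni release that includes the ENI-cleanup fixes
        // (registry and tag are illustrative, not confirmed for every region/version).
        image: "602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon-k8s-cni:v1.6.3",
        // Keep fewer idle ENIs attached per node, so fewer are left to reap on teardown.
        warmEniTarget: 1,
    },
});

export const kubeconfig = cluster.kubeconfig;
```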
d
yep, I found this, and it sounds like what I’m seeing https://github.com/pulumi/pulumi-eks/issues/194
sadly I’m already using the latest version of `pulumi-eks`
w
See also https://github.com/pulumi/pulumi-eks/pull/214 which cross references some of the relevant CNI fixes.
d
yep, had a look through that - not sure what the fix is though, if any?
is there a way to disable vpc-cni installation with `pulumi-eks`? For my purposes it seems like more trouble than it’s worth
w
My understanding is that EKS more or less requires VPC CNI. See for example https://github.com/aws/amazon-vpc-cni-k8s/issues/176.
d
afaik vpc-cni isn’t required, and you can use a “standard” overlay network instead https://github.com/aws/amazon-vpc-cni-k8s/issues/176
mainly I’m just wondering what my options are, as I seem to have fallen at the first hurdle here: run Pulumi to create a vanilla EKS cluster with workers, install nginx-ingress, uninstall nginx-ingress, run `pulumi destroy`, hit this error. This doesn’t seem very edge-casey to me
(this is obviously more an issue with EKS’s general suckiness, not Pulumi)
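For context, a minimal sketch of the repro described above, assuming TypeScript with `@pulumi/eks` and `@pulumi/kubernetes`; resource names and the Helm chart source are illustrative:

```typescript
import * as eks from "@pulumi/eks";
import * as k8s from "@pulumi/kubernetes";

// Vanilla EKS cluster with a default worker node group.
const cluster = new eks.Cluster("repro", {
    desiredCapacity: 2,
    minSize: 1,
    maxSize: 2,
});

// nginx-ingress installed via Helm into the new cluster. Removing this resource
// and running `pulumi up`, then `pulumi destroy`, is the sequence that hit the
// leaked-ENI error.
const nginx = new k8s.helm.v3.Chart("nginx-ingress", {
    chart: "ingress-nginx",
    fetchOpts: { repo: "https://kubernetes.github.io/ingress-nginx" },
}, { provider: cluster.provider });

export const kubeconfig = cluster.kubeconfig;
```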