# general
s
For my `helm.v3.Chart`, I now have:
{
    providers: { kubernetes: eksCluster.provider },
    dependsOn: [eksCluster, istioIngress],
  },
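For context, that fragment is the resource-options argument to the `helm.v3.Chart` constructor. A rough sketch of the surrounding call would look like this (the chart name and fetchOpts below are placeholders; only the options block comes from the snippet above, and `eksCluster` / `istioIngress` are resources defined elsewhere in the stack):
// Sketch only: chart name and repo are placeholders, not from the original message
const chart = new k8s.helm.v3.Chart(
  'example-chart',
  {
    chart: 'example-chart',
    fetchOpts: { repo: 'https://charts.example.com' },
  },
  {
    providers: { kubernetes: eksCluster.provider }, // k8s provider exposed by the eks.Cluster component
    dependsOn: [eksCluster, istioIngress],          // wait for the cluster and the Istio ingress first
  },
);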
I'm getting a bunch of this:
error: Error: invocation of kubernetes:helm:template returned an error: failed to generate YAML for specified Helm chart: could not get server version from Kubernetes: Get "...": x509: certificate is not valid for any names, but wanted to match ...
I've tried this with only `dependsOn` and only `providers` as well, thinking there may be a conflict. Based on the speed of the run, I thought maybe this was because it's not waiting for the k8s cluster to come up, but perhaps there's something else going on. Any ideas?
b
how are you defining `eksCluster.provider`?
s
const eksCluster = new eks.Cluster(eksClusterName, {
  // ...
});
Using `@pulumi/eks`
b
okay define the provider explicitly using the kubeconfig output instead
s
The cluster was deploying ok by itself.
Ok, I'll give that a shot, thanks.
@billowy-army-68599 Unfortunately, getting the same thing with that. Here's what I'm doing:
const eksClusterProvider = new k8s.Provider(`${deployTag}-cluster-provider`, {
  kubeconfig: eksCluster.kubeconfig.apply((k) => k.json),
});
b
you shouldn’t need that apply
I’ll try to put an example together tomorrow
s
And:
{
  providers: { kubernetes: eksClusterProvider },
},
Ok, I can ditch it. Just `JSON.stringify(eksCluster.kubeconfig)` then?
Well... that's not working either 😉
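Side note: `JSON.stringify` on a raw `pulumi.Output` serializes the Output wrapper rather than the resolved kubeconfig, so it can't produce a usable kubeconfig string. If a JSON string were actually needed, the stringify would have to happen inside an apply, roughly:
// Sketch: stringify inside the apply so it runs on the resolved kubeconfig object,
// yielding an Output<string> that the provider's kubeconfig argument can accept
const kubeconfigJson = eksCluster.kubeconfig.apply((k) => JSON.stringify(k));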
b
no just pass `eksCluster.kubeconfig`
s
Just tried that, same.
const eksClusterProvider = new k8s.Provider(`${deployTag}-cluster-provider`, {
  kubeconfig: eksCluster.kubeconfig,
});
b
can you share the full code please
s
const eksClusterName = `${deployTag}-cluster`;
const eksCluster = new eks.Cluster(eksClusterName, {
  // Put the cluster in the new VPC created earlier
  vpcId: eksVpc.vpcId,
  // Public subnets will be used for load balancers
  publicSubnetIds: eksVpc.publicSubnetIds,
  // Private subnets will be used for cluster nodes
  privateSubnetIds: eksVpc.privateSubnetIds,
  // Change configuration values to change any of the following settings
  instanceType: eksNodeInstanceType,
  desiredCapacity: desiredClusterSize,
  minSize: minClusterSize,
  maxSize: maxClusterSize,
  // Do not give the worker nodes public IP addresses
  nodeAssociatePublicIpAddress: false,
  // Uncomment the next two lines for a private cluster (VPN access required)
  // endpointPrivateAccess: true,
  // endpointPublicAccess: false
});

const eksClusterProvider = new k8s.Provider(`${deployTag}-cluster-provider`, {
  kubeconfig: eksCluster.kubeconfig,
});

// Exports
export const kubeconfig = eksCluster.kubeconfig;

// Deploy the AWS EFS CSI driver chart
const awsEfsCsiDriver = new k8s.helm.v3.Chart(
  'aws-efs-csi-driver',
  {
    namespace: 'kube-system',
    chart: 'aws-efs-csi-driver',
    fetchOpts: {
      repo: 'https://kubernetes-sigs.github.io/aws-efs-csi-driver/',
    },
  },
  {
    providers: { kubernetes: eksClusterProvider },
    //dependsOn: eksCluster,
  },
);
b
this worked for me:
import * as aws from '@pulumi/aws';
import * as eks from '@pulumi/eks';
import * as k8s from '@pulumi/kubernetes';
import * as awsx from '@pulumi/awsx';

const vpc = new awsx.ec2.Vpc('vpc', {});

const eksCluster = new eks.Cluster("foo", {
  // Put the cluster in the new VPC created earlier
  vpcId: vpc.vpcId,
  // Public subnets will be used for load balancers
  publicSubnetIds: vpc.publicSubnetIds,
  // Private subnets will be used for cluster nodes
  privateSubnetIds: vpc.privateSubnetIds,
  // Change configuration values to change any of the following settings
  instanceType: "t3.medium",
  desiredCapacity: 1,
  minSize: 1,
  maxSize: 1,
  // Do not give the worker nodes public IP addresses
  nodeAssociatePublicIpAddress: false,
  // Uncomment the next two lines for a private cluster (VPN access required)
  // endpointPrivateAccess: true,
  // endpointPublicAccess: false
});

const eksClusterProvider = new k8s.Provider(`cluster-provider`, {
  kubeconfig: eksCluster.kubeconfig,
});

// Exports
export const kubeconfig = eksCluster.kubeconfig;

// Deploy the AWS EFS CSI driver chart
const awsEfsCsiDriver = new k8s.helm.v3.Chart(
  'aws-efs-csi-driver',
  {
    namespace: 'kube-system',
    chart: 'aws-efs-csi-driver',
    fetchOpts: {
      repo: 'https://kubernetes-sigs.github.io/aws-efs-csi-driver/',
    },
  }, { provider: eksClusterProvider },
);
however you need to use `k8s.helm.v3.Release` for that chart because it contains helm hooks
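For reference, a minimal sketch of the same deployment switched to `k8s.helm.v3.Release` (reusing the `eksClusterProvider` and chart settings from the example above) might look like:
// Sketch only: same chart and provider as above, but deployed through the Helm Release resource,
// which performs a real Helm install and therefore runs the chart's hooks
const awsEfsCsiDriver = new k8s.helm.v3.Release(
  'aws-efs-csi-driver',
  {
    namespace: 'kube-system',
    chart: 'aws-efs-csi-driver',
    repositoryOpts: {
      repo: 'https://kubernetes-sigs.github.io/aws-efs-csi-driver/',
    },
  },
  { provider: eksClusterProvider },
);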
s
Ok, right on, thanks!