# general

salmon-musician-36333

04/23/2023, 9:28 PM
For my `helm.v3.Chart`, I now have:
{
    providers: { kubernetes: eksCluster.provider },
    dependsOn: [eksCluster, istioIngress],
  },
I'm getting a bunch of this:
error: Error: invocation of kubernetes:helm:template returned an error: failed to generate YAML for specified Helm chart: could not get server version from Kubernetes: Get "...": x509: certificate is not valid for any names, but wanted to match ...
I've tried this with only `dependsOn` and only `providers` as well, thinking there may be a conflict. Based on the speed of the run, I thought maybe this was because it's not waiting for the k8s cluster to come up, but perhaps there's something else going on. Any ideas?

billowy-army-68599

04/23/2023, 9:45 PM
how are you defining `eksCluster.provider`?

salmon-musician-36333

04/23/2023, 9:50 PM
const eksCluster = new eks.Cluster(eksClusterName, {
  // ...
});
Using `@pulumi/eks`

billowy-army-68599

04/23/2023, 9:53 PM
okay define the provider explicitly using the kubeconfig output instead

salmon-musician-36333

04/23/2023, 9:53 PM
The cluster was deploying ok by itself.
Ok, I'll give that a shot, thanks.
@billowy-army-68599 Unfortunately, getting the same thing with that. Here's what I'm doing:
const eksClusterProvider = new k8s.Provider(`${deployTag}-cluster-provider`, {
  kubeconfig: eksCluster.kubeconfig.apply((k) => k.json),
});

billowy-army-68599

04/23/2023, 9:59 PM
you shouldn’t need that apply
I’ll try to put an example together tomorrow

salmon-musician-36333

04/23/2023, 9:59 PM
And:
{
  providers: { kubernetes: eksClusterProvider },
},
Ok, I can ditch it. Just `JSON.stringify(eksCluster.kubeconfig)` then?
Well... that's not working either 😉

billowy-army-68599

04/23/2023, 10:03 PM
no, just pass `eksCluster.kubeconfig`

salmon-musician-36333

04/23/2023, 10:03 PM
Just tried that, same.
const eksClusterProvider = new k8s.Provider(`${deployTag}-cluster-provider`, {
  kubeconfig: eksCluster.kubeconfig,
});

billowy-army-68599

04/23/2023, 10:03 PM
can you share the full code please

salmon-musician-36333

04/23/2023, 10:04 PM
const eksClusterName = `${deployTag}-cluster`;
const eksCluster = new eks.Cluster(eksClusterName, {
  // Put the cluster in the new VPC created earlier
  vpcId: eksVpc.vpcId,
  // Public subnets will be used for load balancers
  publicSubnetIds: eksVpc.publicSubnetIds,
  // Private subnets will be used for cluster nodes
  privateSubnetIds: eksVpc.privateSubnetIds,
  // Change configuration values to change any of the following settings
  instanceType: eksNodeInstanceType,
  desiredCapacity: desiredClusterSize,
  minSize: minClusterSize,
  maxSize: maxClusterSize,
  // Do not give the worker nodes public IP addresses
  nodeAssociatePublicIpAddress: false,
  // Uncomment the next two lines for a private cluster (VPN access required)
  // endpointPrivateAccess: true,
  // endpointPublicAccess: false
});

const eksClusterProvider = new k8s.Provider(`${deployTag}-cluster-provider`, {
  kubeconfig: eksCluster.kubeconfig,
});

// Exports
export const kubeconfig = eksCluster.kubeconfig;

// Deploy the AWS EFS CSI driver chart
const awsEfsCsiDriver = new k8s.helm.v3.Chart(
  'aws-efs-csi-driver',
  {
    namespace: 'kube-system',
    chart: 'aws-efs-csi-driver',
    fetchOpts: {
      repo: 'https://kubernetes-sigs.github.io/aws-efs-csi-driver/',
    },
  },
  {
    providers: { kubernetes: eksClusterProvider },
    //dependsOn: eksCluster,
  },
);

billowy-army-68599

04/23/2023, 10:46 PM
this worked for me:
import * as aws from '@pulumi/aws';
import * as eks from '@pulumi/eks';
import * as k8s from '@pulumi/kubernetes';
import * as awsx from '@pulumi/awsx';

const vpc = new awsx.ec2.Vpc('vpc', {})

const eksCluster = new eks.Cluster("foo", {
  // Put the cluster in the new VPC created earlier
  vpcId: vpc.vpcId,
  // Public subnets will be used for load balancers
  publicSubnetIds: vpc.publicSubnetIds,
  // Private subnets will be used for cluster nodes
  privateSubnetIds: vpc.privateSubnetIds,
  // Change configuration values to change any of the following settings
  instanceType: "t3.medium",
  desiredCapacity: 1,
  minSize: 1,
  maxSize: 1,
  // Do not give the worker nodes public IP addresses
  nodeAssociatePublicIpAddress: false,
  // Uncomment the next two lines for a private cluster (VPN access required)
  // endpointPrivateAccess: true,
  // endpointPublicAccess: false
});

const eksClusterProvider = new k8s.Provider(`cluster-provider`, {
  kubeconfig: eksCluster.kubeconfig,
});

// Exports
export const kubeconfig = eksCluster.kubeconfig;

// Deploy the AWS EFS CSI driver chart
const awsEfsCsiDriver = new k8s.helm.v3.Chart(
  'aws-efs-csi-driver',
  {
    namespace: 'kube-system',
    chart: 'aws-efs-csi-driver',
    fetchOpts: {
      repo: 'https://kubernetes-sigs.github.io/aws-efs-csi-driver/',
    },
  }, { provider: eksClusterProvider },
);
however, you need to use `k8s.helm.v3.Release` for that chart, because it contains Helm hooks
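The thread doesn't show the `Release` variant, so here is a hedged sketch of what that last suggestion could look like: the same chart declared with `k8s.helm.v3.Release`, which performs a real `helm install` and therefore executes chart hooks (unlike `Chart`, which only templates manifests client-side). The `eksClusterProvider` name is borrowed from the code earlier in the thread and is declared here only so the snippet type-checks on its own.

```typescript
import * as k8s from '@pulumi/kubernetes';

// Sketch only: `eksClusterProvider` stands in for the provider built
// from eksCluster.kubeconfig earlier in the thread.
declare const eksClusterProvider: k8s.Provider;

// Release drives the Helm install engine itself, so chart hooks
// (pre-install/post-install jobs, etc.) actually run.
const awsEfsCsiDriver = new k8s.helm.v3.Release(
  'aws-efs-csi-driver',
  {
    namespace: 'kube-system',
    chart: 'aws-efs-csi-driver',
    repositoryOpts: {
      repo: 'https://kubernetes-sigs.github.io/aws-efs-csi-driver/',
    },
  },
  { provider: eksClusterProvider },
);
```

Note that `Release` takes `repositoryOpts.repo` where `Chart` used `fetchOpts.repo`; the resource options (`provider`) are passed the same way.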

salmon-musician-36333

04/23/2023, 10:55 PM
Ok, right on, thanks!