# aws
f
For people creating EC2 instance templates with EBS volumes: do you create the filesystem and mount the drive using userdata, with `mkfs`, `mount`, etc. as described here? Or is there a streamlined way of doing all that?
l
That's generally used only for mounting shared volumes. If you want a new volume, the template allows you to create the volume at instance creation time. The reference code at the top of https://www.pulumi.com/registry/packages/aws/api-docs/ec2/launchtemplate/ shows all the parameters.
f
So, yes, I create a launch template as follows:
```typescript
new aws.ec2.LaunchTemplate(..., {
  blockDeviceMappings: [
    {
      deviceName: '/dev/sdf',
      ebs: {
        volumeSize: 100,
      },
    },
  ],
});
```
When I create an instance from that template, it does create the EBS volume. But that volume doesn't have a filesystem and it isn't mounted; those are the parts I was asking about.
l
Ah sorry, yes, my mistake. To put a particular filesystem on it, I've always snapshotted my preferred "empty" volume and used its snapshotId. I don't know if there's another way to do it.
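A minimal sketch of that snapshot approach, assuming a placeholder snapshot ID taken from an empty volume that was already formatted with the desired filesystem; the relevant launch-template parameter is `blockDeviceMappings[].ebs.snapshotId`:

```typescript
import * as aws from "@pulumi/aws";

// Sketch of the snapshot approach: restore the data volume from a snapshot
// of an empty, pre-formatted XFS volume. The snapshot ID and resource name
// below are placeholders.
const emptyXfsSnapshotId = "snap-0123456789abcdef0";

new aws.ec2.LaunchTemplate("with-preformatted-volume", {
  blockDeviceMappings: [
    {
      deviceName: "/dev/sdf",
      ebs: {
        volumeSize: 100,
        // The volume is created from the snapshot, filesystem included.
        snapshotId: emptyXfsSnapshotId,
      },
    },
  ],
});
```

Userdata still has to mount the volume; the snapshot only spares the `mkfs` step.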
g
An empty snapshot is a cool idea! Without a provisioner (Ansible, Packer), I used `userdata`:

```typescript
const pgDataVolume = new aws.ebs.Volume(
  rcName("pgdata"),
  {
    availabilityZone: subnetOutput.availabilityZone,
    size: 20,
    type: "gp3",
    encrypted: true,
  },
  { parent },
);

new aws.ec2.VolumeAttachment(
  rcName("pgdata-attachment"),
  {
    deviceName: "/dev/sdf",
    instanceId: this.instance.id,
    volumeId: pgDataVolume.id,
  },
  { parent },
);
```

And the userdata script:

```bash
echo "Mounting PGDATA volume"
# Map the EBS device name (/dev/sdf) to the NVMe device name the kernel
# actually assigned, using ebsnvme-id (ships with Amazon Linux).
VOLUME_NAME=$(lsblk | grep disk | awk '{print $1}' | while read disk; do echo -n "$disk " && ebsnvme-id -b /dev/$disk; done | grep /dev/sdf | awk '{print $1}')
echo "VOLUME_NAME - $VOLUME_NAME"

MOUNT_POINT=$(lsblk -o MOUNTPOINT -nr /dev/$VOLUME_NAME)
echo "MOUNT_POINT - $MOUNT_POINT"
# if the volume is not mounted, mount it
if [[ -z "$MOUNT_POINT" ]]
then
  MOUNT_POINT=/data
  FILE_SYSTEM=$(lsblk -o FSTYPE -nr /dev/$VOLUME_NAME)
  echo "FILE_SYSTEM - $FILE_SYSTEM"

  # Only create the filesystem if the volume doesn't already have one,
  # so re-running this on an existing volume doesn't wipe the data.
  if [[ $FILE_SYSTEM != 'xfs' ]]
  then
      mkfs -t xfs /dev/$VOLUME_NAME
  fi

  mkdir -p $MOUNT_POINT
  mount /dev/$VOLUME_NAME $MOUNT_POINT

  # Persist the mount across reboots via fstab, keyed by filesystem UUID.
  cp /etc/fstab /etc/fstab.orig
  VOLUME_ID=$(lsblk -o UUID -nr /dev/$VOLUME_NAME)

  if [[ -n $VOLUME_ID ]]
  then
    tee -a /etc/fstab <<EOF
UUID=$VOLUME_ID  $MOUNT_POINT  xfs  defaults,nofail  0  2
EOF
  fi
fi
echo "Mounting finished: $MOUNT_POINT"

export MOUNT_POINT=/data
export POSTGRES_DIR=$MOUNT_POINT/postgresql
```
f
@great-sunset-355 that's essentially what I ended up doing. Was hoping for something less messy, but it seems like that's what's available
l
The snapshot solution is just a little bit tidier 🙂
At least in code.
g
Cloud-init is another option, but it may end up even more painful because it's awful YAML. Gosh, I do NOT miss servers! https://stackoverflow.com/a/53194483/3580789
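For what it's worth, a minimal sketch of that cloud-init route, using its `fs_setup` and `mounts` modules in place of the shell script above. The resource name is a placeholder, and the `/dev/sdf` device path is an assumption: on Nitro instances the device shows up under an NVMe name, which is exactly the mapping problem the `ebsnvme-id` script handles by hand.

```typescript
import * as aws from "@pulumi/aws";

// Sketch: let cloud-init format and mount the data volume.
// fs_setup skips mkfs if a filesystem already exists (overwrite: false);
// mounts writes the fstab entry and mounts the volume.
const cloudConfig = `#cloud-config
fs_setup:
  - device: /dev/sdf
    filesystem: xfs
    overwrite: false
mounts:
  - [/dev/sdf, /data, xfs, "defaults,nofail", "0", "2"]
`;

new aws.ec2.LaunchTemplate("with-cloud-init", {
  // Launch templates expect base64-encoded user data.
  userData: Buffer.from(cloudConfig).toString("base64"),
  blockDeviceMappings: [
    { deviceName: "/dev/sdf", ebs: { volumeSize: 100 } },
  ],
});
```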