I'm using an EFS volume on my EC2 instance (Using Amazon linux AMI). I am able to mount the volume easily. If I try to shell into the server and run something like:
sudo mount -t nfs -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 fs-xxxxxxxxx.efs.us-southwest-2.amazonaws.com:/efs
But when I add a shell script inside the user data section of my instance and then boot it, nothing displays there, how do I find this bug/problem? Are there some logs or something in the filesystem I can take reference from? I don't get to see any errors, just no mounted drive. Any help is appreciated.
I'm using the following shell script:
#!/bin/bash
# Make sure all packages are up-to-date
yum update -y
# Make sure that NFS utilities and AWS CLI utilities are available
yum install -y jq nfs-utils python27 python27-pip awscli
pip install --upgrade awscli
# Name of the EFS filesystem (match what was created in EFS)
EFS_FILE_SYSTEM_NAME="xxxx.efs.ap-southeast-2.amazonaws.com"
# Gets the EC2 availability zone for the current ECS instance
EC2_AVAIL_ZONE="us-west-1a"
# Gets the EC2 region for the current ECS instance
EC2_REGION="US West(N.Claifornia)"
# Creates the mount-point for the EFS filesystem
DIR_TGT="efs"
mkdir "${DIR_TGT}"
# Get the EFS filesystem ID.
EFS_FILE_SYSTEM_ID="$(/usr/local/bin/aws efs describe-file-systems --region "${EC2_REGION}" | jq '.FileSystems[]' | jq "select(.Name==\"${EFS_FILE_SYSTEM_NAME}\")" | jq -r '.FileSystemId')"
if [ -z "${EFS_FILE_SYSTEM_ID}" ]; then
echo "ERROR: variable not set" 1> /etc/efssetup.log
exit
fi
# Create the mount source path
DIR_SRC="${EC2_AVAIL_ZONE}.${EFS_FILE_SYSTEM_ID}.efs.${EC2_REGION}.amazonaws.com"
# Actually mount the EFS filesystem
mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,soft,timeo=600,retrans=2 "${DIR_SR