DevOps has become the gold standard in modern IT. But what is DevOps? DevOps is the collaboration between development and operations teams that enables continuous delivery of applications and services to end users. It is a process achieved with the help of multiple tools, one of which is Ansible.

For those acquainted with Ansible, its capabilities need no introduction. It gives you the power to automate deployments and configurations not just on a single server but across a whole network.
Today, we'll discuss how to mount an AWS Elastic File System (EFS) onto multiple Elastic Compute Cloud (EC2) instances across multiple availability zones in a region.
Ansible Playbook
Requirement: Mount an already existing EFS onto multiple EC2 instances across availability zones.
How To:
- Fetch the name of the EFS and the path on which it is to be mounted from the EC2 tags
- Using the name fetched in the step above, retrieve the file system ID of the EFS
- Using the file system ID, mount the EFS onto the EC2 instances
Assumptions:
- The playbook below is called from a parent playbook that holds all the environment/inventory-related details
- The user(s)/resource(s) running commands on the servers via Ansible have appropriate permissions to access resources on the EC2 instances
- The flavor of the operating system used is RHEL 5 or later
- The region in which the EC2 instances are located is known to the deployer
- The file system on AWS (EFS) has already been created via Terraform/CloudFormation/console or any other method of choice
- The file system name and the path on which it is to be mounted on the EC2 have been added as tags to the EC2 instances
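Before diving into the playbook, here is a rough sketch of how the tags and the parent-playbook invocation might look. The tag keys (efs_name, efs_mount_path), the file names, and the inventory group are hypothetical placeholders, not fixed names:

```yaml
# Hypothetical EC2 tags (set via console, Terraform, CloudFormation, etc.):
#   efs_name       -> my-app-efs         (the Name tag of the EFS)
#   efs_mount_path -> /mnt/efs/app-data  (where the EFS should be mounted)

# parent-playbook.yml -- assumed wrapper that supplies inventory/environment details
- hosts: app_servers               # inventory group holding the EC2 instances
  gather_facts: yes
  tasks:
    - include_tasks: mount-efs.yml # the playbook shown below
```

The exact wiring will vary with how your inventory and environments are organized; the point is only that the tasks below run on each EC2 host in the group.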
---
# The exact command to install the NFS utilities will depend on the flavor of the OS used.
- name: "Install nfs utils"
  yum:
    name: nfs-utils
    state: present
  register: package_install

- debug:
    var: package_install
- name: "Get EFS name from EC2 tags"
  shell: |
    aws ec2 describe-instances \
      --filters "Name=private-ip-address,Values=<Value>" \
      --query "Reservations[*].Instances[*].Tags[?Key=='<tag-name>'].Value" \
      --output text
  register: fs_name

- set_fact:
    file_system_name: "{{ fs_name.stdout_lines[0] }}"
- name: "Get mount path from EC2 tags"
  shell: |
    aws ec2 describe-instances \
      --filters "Name=private-ip-address,Values=<Value>" \
      --query "Reservations[*].Instances[*].Tags[?Key=='<tag-name>'].Value" \
      --output text
  register: fs_mount_path

- set_fact:
    efs_mount_dir: "{{ fs_mount_path.stdout_lines[0] }}"
- name: "Get file system id"
  shell: |
    aws efs describe-file-systems \
      --query "FileSystems[?Name=='{{ file_system_name }}'].FileSystemId" \
      --output text
  register: file_system_id_var

- set_fact:
    file_system_id: "{{ file_system_id_var.stdout_lines[0] }}"
- name: "Check whether the mount directory exists"
  stat:
    path: "{{ efs_mount_dir }}"
  register: efs_dir

- name: "Report if the directory already exists"
  debug:
    msg: "EFS mount already exists"
  when: efs_dir.stat.exists
- name: "Unmount existing EFS mount"
  shell: |
    umount "{{ efs_mount_dir }}"
  when: efs_dir.stat.exists
  # Elevate permissions in case they are insufficient (optional)
  become: yes
  become_user: root
  become_method: sudo
  register: unmount_log
- name: "Create EFS mount directory"
  file:
    path: "{{ efs_mount_dir }}"
    state: directory
    mode: '0755'
    owner: "<app-user>"
    group: "<app-user-group>"
  # Elevate permissions in case they are insufficient (optional)
  become: yes
  become_user: root
  become_method: sudo
  when: not efs_dir.stat.exists
- name: "Mount EFS"
  shell: |
    mount -t nfs \
      -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport \
      "{{ file_system_id }}.efs.<region>.amazonaws.com:/" "{{ efs_mount_dir }}"
  when: not efs_dir.stat.exists
  # Elevate permissions in case they are insufficient (optional)
  become: yes
  become_user: root
  become_method: sudo
  register: mount_efs_var
- name: "Check whether EFS was mounted properly"
  debug:
    msg: "EFS is mounted properly on {{ efs_mount_dir }}"
  when: mount_efs_var.rc is defined and mount_efs_var.rc == 0