Ansible: Automating Server Provisioning and Configuration
Ansible is an open-source software tool used to automate the provisioning, configuration management, and deployment of servers and applications. You can use Ansible to automate tasks on one or more servers or run a distributed application across multiple servers. For a multi-server setup, completing the Initial Server Setup for each server can be time-consuming. Using Ansible will speed up the process with automation playbooks.
Ansible is agentless, so you do not need to install any Ansible component on the servers you want to manage. Those servers are the Ansible hosts, and they must run Python 3 and OpenSSH, both of which come pre-installed on Ubuntu 22.04 and most other Linux distributions. The Ansible control node is the machine that initiates the automation; it can run any compatible Unix-like operating system, or Windows if the Windows Subsystem for Linux (WSL) is installed.
In this tutorial, you’ll use Ansible to automate the initial server setup of multiple Ubuntu 22.04 servers. You’ll accomplish the following initial setup tasks on all the servers:
- Updating installed packages
- Adding a non-root user with admin privileges
- Enabling SSH access for that non-root user
- Enabling the firewall
- Changing the port for SSH access and using the firewall to protect against brute-force attacks and boost the overall security of the servers
- Disabling remote login for the root account
- Making sure critical services are active
- Removing package dependencies that are no longer required
Because you’ll use Ansible to run a comprehensive playbook defining each task, those tasks will be completed using just one command and without you needing to log in to the servers individually. You can run an optional secondary playbook to automate server management after the initial server setup.
Prerequisites
To complete this tutorial, you will need:
- Ansible installed on a machine that will act as your control node, which can be your local machine or a remote Linux server. To install and configure Ansible on Ubuntu 22.04 (or another operating system), refer to the official Ansible installation guide.
- If your control node is a remote Ubuntu 22.04 server, be sure to set it up using the Initial Server Setup and create its SSH key pair as well.
- Git installed on the control node. Install Git for popular Linux distributions.
- Two or more Ubuntu 22.04 servers and the public IPv4 address of each server. No prior setup is required, as you’ll use Ansible to automate the setup in Step 6, but you must have SSH access to these servers from the Ansible control node mentioned above.
- If your control node is a remote Ubuntu 22.04 server, be sure to use ssh-copy-id to copy its public key to each of the hosts.
Step 1 — Modifying the SSH Client Configuration File on Your Control Node
You’ll modify a directive in your control node’s SSH client configuration file in this step. After making this change, you’ll no longer be prompted to accept the SSH key fingerprint of remote machines, as they will be accepted automatically. Manually accepting the SSH key fingerprints for each remote machine can be tedious, so this modification solves a scaling issue when using Ansible to automate the initial setup of multiple servers.
While you can use Ansible’s known_hosts module to accept the SSH key fingerprint for a single host automatically, this tutorial deals with multiple hosts, so it is more effective to modify the SSH client configuration file on the control node (typically, your local machine).
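For reference, a single-host alternative might look roughly like the following play run on the control node. This is only a minimal sketch, not part of this tutorial’s repository; it uses the ansible.builtin.known_hosts module with an ssh-keyscan pipe lookup, and host1-public-ip-address is a placeholder:
- name: Accept the SSH key fingerprint for a single host (sketch)
  hosts: localhost
  connection: local
  tasks:
    - name: Add the host key to the control node's known_hosts file
      ansible.builtin.known_hosts:
        name: host1-public-ip-address
        # Scan the host's ed25519 key; adjust the key type if your hosts use a different one.
        key: "{{ lookup('ansible.builtin.pipe', 'ssh-keyscan -t ed25519 host1-public-ip-address') }}"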
To begin, launch a terminal application on your control node and, using nano or your favorite text editor, open the SSH client configuration file:
sudo nano /etc/ssh/ssh_config
Find the line that contains the StrictHostKeyChecking directive. Uncomment it and change the value so that it reads as follows:
StrictHostKeyChecking accept-new
Save and close the file. You do not need to reload or restart the SSH daemon because you only modified the SSH client configuration file.
Note: If you do not wish to change the value of StrictHostKeyChecking from ask to accept-new permanently, you can revert it to the default after running the playbooks later in this tutorial. While changing the value means your system accepts SSH key fingerprints automatically, it will reject subsequent connections from the same hosts if their fingerprints change. This behavior means the accept-new change is not as much of a security risk as changing the directive’s value to no.
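If you do revert the change later, setting the directive back to its default value (or simply re-commenting the line) is enough:
StrictHostKeyChecking ask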
Now that you have updated the SSH directive, you’ll begin configuring Ansible in the next steps.
Step 2 — Configuring the Ansible Hosts File
The Ansible hosts file (also called the inventory file) contains information about the Ansible hosts. This information may include group names, aliases, domain names, and IP addresses. The file is located by default in the /etc/ansible directory. In this step, you’ll add the IP addresses of the Ansible hosts you spun up in the Prerequisites section so that you can run your Ansible playbooks against them.
To begin, open the hosts file using nano or your favorite text editor:
sudo nano /etc/ansible/hosts
After the introductory comments in the file, add the following lines:
host1 ansible_host=host1-public-ip-address
host2 ansible_host=host2-public-ip-address
host3 ansible_host=host3-public-ip-address
[initial]
host1
host2
host3
[ongoing]
host1
host2
host3
host1, host2, and host3 are aliases for each host on which you want to automate the initial server setup. Using aliases makes it easier to reference the hosts elsewhere. ansible_host is an Ansible connection variable and, in this case, points to the IP address of each target host.
[initial] and [ongoing] are sample group names for the Ansible hosts. Choose group names that make it easy to know what the hosts are used for. Grouping hosts in this manner makes it possible to address them as a unit, and hosts can belong to more than one group. The hosts in this tutorial have been assigned to two different groups because they’ll be used in two different playbooks: [initial] for the initial server setup in Step 6 and [ongoing] for the later server management in Step 8.
hostN-public-ip-address is the IP address of each Ansible host. Be sure to replace host1-public-ip-address and the subsequent lines with the IP addresses of the servers that will be part of the automation.
When you’re finished modifying the file, save and close it.
Defining the hosts in the inventory file helps you to specify which hosts will be set up with Ansible automation. In the next step, you’ll clone the repository with sample playbooks to automate multi-server setup.
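Optionally, before moving on, you can confirm that Ansible parses the inventory the way you expect by listing the groups and their hosts with the standard ansible-inventory command:
ansible-inventory --graph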
Step 3 — Cloning the Ansible Ubuntu Initial Server Setup Repository
In this step, you’ll clone a sample repository from GitHub containing the necessary files for this automation.
This repo contains three files for a sample multi-server automation: initial.yml, ongoing.yml, and vars/default.yml. The initial.yml file is the main playbook that contains the plays and tasks you’ll run against the Ansible hosts for initial setup. The ongoing.yml file contains tasks you’ll run against the hosts for ongoing maintenance after the initial server setup. The vars/default.yml file contains variables that are called in both playbooks in Step 6 and Step 8.
To clone the repo, type the following command:
git clone https://github.com/do-community/ansible-ubuntu.git
Alternatively, if you’ve added your SSH key to your GitHub account, you can clone the repo using:
git clone git@github.com:do-community/ansible-ubuntu.git
You will now have a folder named ansible-ubuntu in your working directory. Change into it:
cd ansible-ubuntu
That will be your working directory for the rest of this tutorial.
In this step, you acquired the sample files for automating the setup of multiple Ubuntu 22.04 servers with Ansible. To prepare the files with information specific to your hosts, you will next update the vars/default.yml file to work with your setup.
Step 4 — Modifying the Ansible Variables
The playbooks reference some information that may need to change over time. Placing that information in one variable file and calling the variables from the playbooks is more efficient than hard-coding it, so in this step you will modify the variables in the vars/default.yml file to match your preferences and setup needs.
To begin, open the file with nano or your favorite text editor:
nano vars/default.yml
You will review the contents of the file, which include the following variables:
create_user: sammy
ssh_port: 5995
copy_local_key: "{{ lookup('file', lookup('env','HOME') + '/.ssh/id_rsa.pub') }}"
Setting Up Ansible Variables
The value of the create_user variable should be the name of the sudo user that will be created on each host. In this case, it is sammy, but you can name the user whatever you like.
The ssh_port variable holds the SSH port you’ll use to connect to the Ansible hosts after setup. The default port for SSH is 22, but changing it will significantly reduce the number of automated attacks hitting your servers. This change is optional but will boost the security posture of your hosts. Choose a lesser-known port between 1024 and 65535 that is not in use by another application on the Ansible hosts. In this example, you are using port 5995.
Ensuring Port Availability
Note: If your control node is running a Linux distribution, pick a number higher than 1023 and grep for it in /etc/services. For example, run grep 5995 /etc/services to check whether 5995 is in use. If there is no output, the port does not exist in that file and you may assign it to the variable. If your control node is not running a Linux distribution and you don’t know where to find its equivalent on your system, you can consult the Service Name and Transport Protocol Port Number Registry.
Managing SSH Keys
The copy_local_key variable references your control node’s SSH public key file. If the name of that file is id_rsa.pub, then you don’t need to make any changes to that line. Otherwise, change it to match the name of your control node’s SSH public key file, which you can find in the control node’s ~/.ssh directory. When you run the main playbook in Step 6, after a user with sudo privileges has been created, the Ansible controller will copy the public key file to that user’s home directory, which enables you to log in as that user via SSH after the initial server setup.
Finalizing Configuration
When you’re finished modifying the file, save and close it.
Now that you’ve assigned values to the variables in vars/default.yml, Ansible will be able to call those variables while executing the playbooks in Step 6 and Step 8. In the next step, you’ll use Ansible Vault to create and secure the password for the user that will be created on each host.
Step 5 — Using Ansible Vault To Create An Encrypted Password File
Ansible Vault is used to create and encrypt files and variables that can be referenced in playbooks. Using Ansible Vault ensures that sensitive information is not transmitted in plaintext while executing a playbook. In this step, you’ll create and encrypt a file containing variables whose values will be used to create a password for the sudo user on each host. By using Ansible Vault in this manner, you ensure the password is not referenced in plaintext in the playbooks during and after the initial server setup.
Creating and Configuring the Vault File
Still in the ansible-ubuntu directory, use the following command to create and open a vault file:
ansible-vault create secret
When prompted, enter and confirm a password that will be used to encrypt the secret file. This is the vault password. You’ll need the vault password while running the playbooks in Step 6 and Step 8, so do not forget it.
After entering and confirming the vault password, the secret file will open in the text editor set in the shell’s EDITOR environment variable. Add these lines to the file, replacing the values for type_a_strong_password_here and type_a_salt_here:
password: type_a_strong_password_here
password_salt: type_a_salt_here
The value of the password variable will be the actual password for the sudo user you’ll create on each host. The password_salt variable holds a salt, which is any long, random value used when generating hashed passwords. You can use an alphabetical or alphanumeric string, but a numeric string alone may not work. Adding a salt when generating a hashed password makes it more difficult to guess the password or crack the hashing. Both variables are called while executing the playbooks in Step 6 and Step 8.
Important Notes About the Salt
Note: In testing, we found that a salt made up of only numeric characters led to problems running the playbook in Step 6 and Step 8. However, a salt made up of only alphabetical characters worked. An alphanumeric salt should also work. Keep that in mind when you specify a salt.
Saving the Vault File and Next Steps
When you’re finished modifying the file, save and close it.
You have now created an encrypted password file with variables that will be used to create a password for the sudo user on the hosts. In the next step, you’ll automate the initial setup of the servers you specified in Step 2 by running the main Ansible playbook.
Step 6 — Running the Main Playbook Against Your Ansible Hosts
In this step, you’ll use Ansible to automate the initial server setup of as many servers as you specified in your inventory file. You’ll begin by reviewing the tasks defined in the main playbook. Then, you will execute the playbook against the hosts.
An Ansible playbook is made up of one or more plays with one or more tasks associated with each play. The sample playbook you’ll run against your Ansible hosts contains two plays with a total of 14 tasks.
Before you run the playbook, you’ll review each task involved in its setup process. To begin, open the file with nano or your favorite text editor:
nano initial.yml
Play 1:
The first section of the file contains the following keywords that affect the behavior of the play:
initial.yml
- name: Initial server setup tasks
  hosts: initial
  remote_user: root
  vars_files:
    - vars/default.yml
    - secret
name is a short description of the play, which will display in the terminal as the play runs. The hosts keyword indicates which hosts are the play’s target. In this case, the pattern passed to the keyword is the group name of the hosts you specified in the /etc/ansible/hosts file in Step 2. You use the remote_user keyword to tell the Ansible controller the username to use to log in to the hosts (in this case, root). The vars_files keyword points to the files containing variables the play will reference when executing the tasks.
With this setup, the Ansible controller will attempt to log in to the hosts as the root user via SSH port 22. For each host it can log in to, it will report an ok response; otherwise, it will report the server as unreachable. It then executes the play’s tasks on the hosts it could log in to. If you were completing this setup manually, this automation replaces logging in to each host with ssh root@host-ip-address.
Following the keywords section is a list of tasks to be executed sequentially. As with the play, each task starts with a name that provides a short description of what the task will accomplish.
Task 1: Update cache
The first task in the playbook updates the package database:
initial.yml
- name: update cache
  ansible.builtin.apt:
    update_cache: yes
This task will update the package database using the ansible.builtin.apt module, which is why it is defined with update_cache: yes. It accomplishes the same thing as when you log in to an Ubuntu server and type sudo apt update, often a prelude to updating all installed packages.
Task 2: Update all installed packages
The second task in the playbook updates packages:
initial.yml
- name: Update all installed packages
  ansible.builtin.apt:
    name: "*"
    state: latest
Like the first task, this task also calls the ansible.builtin.apt module. Here, you make sure all installed packages are up to date using a wildcard to specify packages (name: "*") and state: latest, which is the equivalent of logging in to your servers and running the sudo apt upgrade -y command.
Task 3: Make sure NTP service is running
The third task in the playbook ensures the Network Time Protocol (NTP) Daemon is active:
initial.yml
- name: Make sure NTP service is running
  ansible.builtin.systemd:
    state: started
    name: systemd-timesyncd
This task calls the ansible.builtin.systemd module to ensure that systemd-timesyncd, the NTP daemon, is running (state: started). Keeping the NTP service running ensures that your servers keep the same time, which is helpful when running a distributed application on those servers.
Task 4: Make sure we have a sudo group
The fourth task in the playbook verifies that there’s a sudo group:
initial.yml
- name: Make sure we have a 'sudo' group
  ansible.builtin.group:
    name: sudo
    state: present
This task calls the ansible.builtin.group module to check that a group named sudo exists on the hosts (state: present). Because the next task depends on the presence of a sudo group on the hosts, this task ensures the group exists so that the next task does not fail.
Task 5: Create a user with sudo privileges
The fifth task in the playbook creates your non-root user with sudo privileges:
initial.yml
- name: Create a user with sudo privileges
  ansible.builtin.user:
    name: "{{ create_user }}"
    state: present
    groups: sudo
    append: true
    create_home: true
    shell: /bin/bash
    password: "{{ password | password_hash('sha512', password_salt) }}"
    update_password: on_create
Here, you create a user on each host by calling the ansible.builtin.user module and appending the sudo group to the user’s groups. The user’s name is derived from the value of the create_user variable specified in the vars/default.yml file. This task also ensures that a home directory is created for the user and assigned the proper shell.
Using the password parameter and a combination of the password and password_salt variables, the password_hash filter applies the SHA-512 cryptographic hash algorithm to generate a hashed password for the user. Paired with the encrypted secret vault file, the password is never handled in plaintext within the playbooks. With update_password: on_create, you ensure that the hashed password is only set when the user is first created; if you rerun the playbook, the password will not be regenerated.
Task 6: Set authorized key for remote user
The sixth task in the playbook sets the key for your user:
initial.yml
- name: Set authorized key for remote user
  ansible.posix.authorized_key:
    user: "{{ create_user }}"
    state: present
    key: "{{ copy_local_key }}"
With this task, you copy your public SSH key to the hosts by calling the ansible.posix.authorized_key module. The value of user is the name of the user created on the hosts in the previous task, and key points to the key to be copied. Both values come from variables defined in the vars/default.yml file. This task has the same effect as running the ssh-copy-id command manually.
Task 7: Disable remote login for root
The seventh task in the playbook disables remote login for the root user:
initial.yml
- name: Disable remote login for root
  ansible.builtin.lineinfile:
    path: /etc/ssh/sshd_config
    state: present
    regexp: '^PermitRootLogin yes'
    line: 'PermitRootLogin no'
This task calls the ansible.builtin.lineinfile module. It searches the /etc/ssh/sshd_config file for a line starting with PermitRootLogin using a regular expression (regexp) and replaces it with the value of line. This ensures that remote login with the root account will fail after the play runs; only remote login with the user account created in Task 5 will succeed. Disabling remote root login means that only regular users may log in, and privilege escalation (for example, using sudo) is required to gain admin privileges.
Task 8: Change the SSH port
The eighth task in the playbook changes the SSH port:
initial.yml
- name: Change the SSH port
  ansible.builtin.lineinfile:
    path: /etc/ssh/sshd_config
    state: present
    regexp: '^#Port 22'
    line: 'Port "{{ ssh_port }}"'
Because SSH listens on the well-known port 22, it tends to be subject to automated attacks targeting that port. By changing the port SSH listens on, you reduce the number of automated attacks hitting the hosts. This task uses the ansible.builtin.lineinfile module to search for the port line in the SSH daemon’s configuration file and change its value to the custom port number defined in the ssh_port variable from the vars/default.yml file. After the SSH daemon is restarted, SSH connections on port 22 will no longer be possible.
Task 9: UFW – Allow SSH connections
The ninth task in the playbook allows SSH traffic:
initial.yml
- name: UFW - Allow SSH connections
  community.general.ufw:
    rule: allow
    port: "{{ ssh_port }}"
This task calls the community.general.ufw module to allow SSH traffic through the firewall. Notice that the SSH port is not 22 but the custom port number you specified earlier. This task is the equivalent of running the ufw allow 5995/tcp command manually (where 5995 is the example custom SSH port).
Task 10: Brute-force attempt protection for SSH
The tenth task guards against brute-force attacks:
initial.yml
- name: Brute-force attempt protection for SSH
  community.general.ufw:
    rule: limit
    port: "{{ ssh_port }}"
    proto: tcp
This task uses the community.general.ufw module with a rate-limiting rule (rule: limit) that denies access to an IP address that has failed six or more connection attempts to the SSH port within a 30-second timeframe. The proto parameter specifies the protocol (in this case, TCP).
Task 11: UFW – Deny other incoming traffic and enable UFW
The eleventh task in the playbook enables the firewall and denies other incoming traffic:
initial.yml
- name: UFW - Deny other incoming traffic and enable UFW
  community.general.ufw:
    state: enabled
    policy: deny
    direction: incoming
This task calls the community.general.ufw module to enable the firewall (state: enabled) and set a default policy that denies all incoming traffic (policy: deny) unless it is explicitly allowed, as it was for SSH traffic in Task 9.
Task 12: Remove dependencies that are no longer required
The twelfth task in this playbook cleans up package dependencies:
initial.yml
- name: Remove dependencies that are no longer required
  ansible.builtin.apt:
    autoremove: yes
This task removes package dependencies that are no longer required on the server by calling the ansible.builtin.apt module. It is equivalent to manually running the sudo apt autoremove command.
Task 13: Restart the SSH daemon
The thirteenth task restarts the SSH daemon:
initial.yml
- name: Restart the SSH daemon
  ansible.builtin.systemd:
    state: restarted
    name: ssh
This task calls the ansible.builtin.systemd module to restart the SSH daemon, which applies the configuration changes made earlier in the playbook, such as changing the SSH port and disabling remote root login.
Play 2: Rebooting Hosts After Initial Setup
This play begins after all tasks in Play 1 have been successfully completed. The play’s configuration is defined by the following keywords:
initial.yml
- name: Rebooting hosts after initial setup
  hosts: initial
  port: "{{ ssh_port }}"
  remote_user: "{{ create_user }}"
  become: true
  vars_files:
    - vars/default.yml
    - secret
  vars:
    ansible_become_pass: "{{ password }}"
The hosts keyword specifies the group of hosts targeted by this play, in this case the initial group from the inventory file created in Step 2. The port keyword points to the custom SSH port configured in Step 4, since the default port 22 is no longer accessible.
In Play 1, the Ansible controller logged in as the root user. With remote root login now disabled, the remote_user keyword specifies the sudo user created in Task 5 of Play 1. The become keyword enables privilege escalation, allowing the Ansible controller to execute tasks that require root privileges via sudo.
The vars_files keyword points to the variable files, including the secret file encrypted in Step 5, which contains the sudo password variable. The ansible_become_pass variable points to that stored password, ensuring secure privilege escalation.
Task 14: Reboot All Hosts
This task reboots all the hosts configured in the initial group:
initial.yml
- name: Reboot all hosts
  ansible.builtin.reboot:
Rebooting ensures that any updates to the kernel or system libraries take effect before additional configurations or applications are installed. This task calls the ansible.builtin.reboot module to reboot all hosts defined in the play.
Full Playbook File
The complete playbook for initial server setup, including both plays, is as follows:
initial.yml
- name: Initial server setup tasks
  hosts: initial
  remote_user: root
  vars_files:
    - vars/default.yml
    - secret

  tasks:
    - name: update cache
      ansible.builtin.apt:
        update_cache: yes

    - name: Update all installed packages
      ansible.builtin.apt:
        name: "*"
        state: latest

    - name: Make sure NTP service is running
      ansible.builtin.systemd:
        state: started
        name: systemd-timesyncd

    - name: Make sure we have a 'sudo' group
      ansible.builtin.group:
        name: sudo
        state: present

    - name: Create a user with sudo privileges
      ansible.builtin.user:
        name: "{{ create_user }}"
        state: present
        groups: sudo
        append: true
        create_home: true
        shell: /bin/bash
        password: "{{ password | password_hash('sha512', password_salt) }}"
        update_password: on_create

    - name: Set authorized key for remote user
      ansible.posix.authorized_key:
        user: "{{ create_user }}"
        state: present
        key: "{{ copy_local_key }}"

    - name: Disable remote login for root
      ansible.builtin.lineinfile:
        path: /etc/ssh/sshd_config
        state: present
        regexp: '^PermitRootLogin yes'
        line: 'PermitRootLogin no'

    - name: Change the SSH port
      ansible.builtin.lineinfile:
        path: /etc/ssh/sshd_config
        state: present
        regexp: '^#Port 22'
        line: 'Port "{{ ssh_port }}"'

    - name: UFW - Allow SSH connections
      community.general.ufw:
        rule: allow
        port: "{{ ssh_port }}"

    - name: Brute-force attempt protection for SSH
      community.general.ufw:
        rule: limit
        port: "{{ ssh_port }}"
        proto: tcp

    - name: UFW - Deny other incoming traffic and enable UFW
      community.general.ufw:
        state: enabled
        policy: deny
        direction: incoming

    - name: Remove dependencies that are no longer required
      ansible.builtin.apt:
        autoremove: yes

    - name: Restart the SSH daemon
      ansible.builtin.systemd:
        state: restarted
        name: ssh

- name: Rebooting hosts after initial setup
  hosts: initial
  port: "{{ ssh_port }}"
  remote_user: "{{ create_user }}"
  become: true
  vars_files:
    - vars/default.yml
    - secret
  vars:
    ansible_become_pass: "{{ password }}"

  tasks:
    - name: Reboot all hosts
      ansible.builtin.reboot:
Running the Playbook
Now you can run the playbook. First, check the syntax of the playbook file using the following command:
ansible-playbook --syntax-check --ask-vault-pass initial.yml
You’ll be prompted for the vault password you created in Step 5. If there are no errors in the YAML syntax, the output will be:
Output
playbook: initial.yml
Once the syntax check is successful, you may run the playbook using the following command:
ansible-playbook --ask-vault-pass initial.yml
Again, you’ll be prompted for the vault password. After successful authentication, the Ansible controller will log in to each host as the root user and execute all the tasks defined in the playbook. Instead of manually running ssh root@node-ip-address on each server, Ansible will connect to all the hosts specified in the /etc/ansible/hosts file and run the tasks automatically.
Sample Output
For the sample hosts in this tutorial, it took approximately three minutes for Ansible to complete the tasks across three hosts. Upon completion, you will see an output like this:
Output
PLAY RECAP *****************************************************************************************************
host1 : ok=16 changed=11 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host2 : ok=16 changed=11 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host3 : ok=16 changed=11 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
Each task that executes successfully, plus the implicit fact-gathering step at the start of each play, counts toward the number in the ok column. In this case, the 14 tasks across two plays and the two fact-gathering steps were all evaluated successfully, resulting in a count of 16. The changed column indicates tasks that resulted in changes on the hosts, with 11 tasks leading to changes.
The unreachable column shows the number of hosts the Ansible controller could not connect to, which is 0 in this case. Similarly, no tasks failed, so the failed column is also 0.
A task is counted as skipped when its conditions (usually defined with the when keyword) are not met. In this playbook, no tasks were skipped. The last two columns, rescued and ignored, relate to error handling and were not triggered in this run.
Verification
You have now successfully automated the initial server setup of multiple Ubuntu 22.04 servers using Ansible. To verify that the setup was successful, log in to one of the hosts and confirm that all expected configurations and updates are in place. This process is detailed in Step 7.
Step 7 — Checking the Server Setup Manually (Optional)
To confirm the results reported in the play recap at the end of the previous step, you can log in to one of your hosts and verify the setup manually. These checks are optional and intended mainly for learning purposes, since the play recap already accurately reports what was completed.
Start by logging in to one of the hosts using the following command:
ssh -p 5995 sammy@host1-public-ip-address
Here, the -p option points to the custom SSH port number configured in Task 8 (5995 in this example), and sammy is the user created in Task 5. If you are able to log in to the host as that user via the custom port, you know that Ansible completed those tasks successfully.
Once logged in, check if you are able to update the package database:
sudo apt update
If you’re prompted for a password and can authenticate with the password configured for the user in Task 5, it confirms that Ansible successfully created the user and set their password.
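While you are still logged in, you can also optionally confirm the firewall and SSH changes the playbook applied, for example by checking the UFW status and the directives that were edited in the SSH daemon’s configuration file:
sudo ufw status
grep -E '^Port|^PermitRootLogin' /etc/ssh/sshd_config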
Step 8 — Using Ansible for Ongoing Maintenance of the Hosts (Optional)
While the initial server setup playbook executed in Step 6 is scalable, it does not manage the hosts beyond that initial configuration. For continued maintenance, you can run the ongoing.yml playbook included in the repository you cloned in Step 3. This step explains how to use that playbook for routine server maintenance.
Before running the playbook, open it with nano or your favorite text editor to review its contents:
nano ongoing.yml
Play 1: Ongoing Maintenance
The playbook contains a single play with tasks designed for server maintenance. The following keywords define the behavior of the play:
ongoing.yml
- hosts: ongoing
  port: "{{ ssh_port }}"
  remote_user: "{{ create_user }}"
  become: true
  vars_files:
    - vars/default.yml
    - secret
  vars:
    ansible_become_pass: "{{ password }}"
Other than the group passed to the hosts keyword, these are the same keywords used in the second play of the setup playbook.
After the keywords is a list of tasks to be executed sequentially. As in the setup playbook, each task in the maintenance playbook starts with a name that provides a short description of what the task will accomplish.
Task 1: Update Cache
The first task updates the package database:
ongoing.yml
- name: update cache
  ansible.builtin.apt:
    update_cache: yes
This task will update the package database using the ansible.builtin.apt module, which is why it is defined with update_cache: yes. It accomplishes the same thing as when you log in to an Ubuntu server and type sudo apt update, which is often a prelude to installing a package or updating all installed packages.
Task 2: Update All Installed Packages
The second task updates packages:
ongoing.yml
- name: Update all installed packages
  ansible.builtin.apt:
    name: "*"
    state: latest
Like the first task, this task also calls the ansible.builtin.apt module. Here, you ensure that all installed packages are up to date using a wildcard to specify packages (name: "*") and state: latest, which is the equivalent of logging in to your servers and running the sudo apt upgrade -y command.
Task 3: Make Sure NTP Service Is Running
The third task ensures the NTP Daemon is active:
ongoing.yml
- name: Make sure NTP service is running
  ansible.builtin.systemd:
    state: started
    name: systemd-timesyncd
Active services on a server might fail for a variety of reasons, so you want to make sure that such services remain active. This task calls the ansible.builtin.systemd module to ensure that systemd-timesyncd, the NTP daemon, remains active (state: started).
Task 4: UFW – Is It Running?
The fourth task checks the status of the UFW firewall:
ongoing.yml
- name: UFW - Is it running?
  ansible.builtin.command: ufw status
  register: ufw_status
You can check the status of the UFW firewall on Ubuntu with the sudo ufw status command. The first line of the output will read either Status: active or Status: inactive. This task uses the ansible.builtin.command module to run the same command, then saves (register) the output in the ufw_status variable. The value of that variable is queried in the next task.
Task 5: UFW – Enable UFW and Deny Incoming Traffic
The fifth task will re-enable the UFW firewall if it has been stopped:
ongoing.yml
- name: UFW - Enable UFW and deny incoming traffic
  community.general.ufw:
    state: enabled
  when: "'inactive' in ufw_status.stdout"
This task calls the community.general.ufw module to enable the firewall only when the term inactive appears in the output stored in the ufw_status variable. If the firewall is already active, the when condition is not met and the task is marked as skipped.
Task 6: Remove Dependencies That Are No Longer Required
The sixth task in this playbook cleans up package dependencies:
ongoing.yml
- name: Remove dependencies that are no longer required
  ansible.builtin.apt:
    autoremove: yes
This task removes package dependencies that are no longer required on the server by calling the ansible.builtin.apt module, which is the equivalent of running the sudo apt autoremove command.
Task 7: Check If Reboot Is Required
The seventh task checks if a reboot is required:
ongoing.yml
- name: Check if reboot required
  ansible.builtin.stat:
    path: /var/run/reboot-required
  register: reboot_required
On Ubuntu, a newly installed or upgraded package signals that a reboot is required for its changes to take effect by creating the /var/run/reboot-required file. This task calls the ansible.builtin.stat module to check for the existence of that file, then saves (register) the result in the reboot_required variable.
Task 8: Reboot If Required
The eighth task will reboot the server if necessary:
ongoing.yml
- name: Reboot if required
  ansible.builtin.reboot:
  when: reboot_required.stat.exists == true
By querying the reboot_required variable from Task 7, this task calls the ansible.builtin.reboot module to reboot the hosts only when the /var/run/reboot-required file exists. If a reboot is required, the task is marked as changed; otherwise, it is marked as skipped.
Complete Ongoing Maintenance Playbook
The full playbook file for ongoing maintenance is as follows:
ongoing.yml
- hosts: ongoing
  port: "{{ ssh_port }}"
  remote_user: "{{ create_user }}"
  become: true
  vars_files:
    - vars/default.yml
    - secret
  vars:
    ansible_become_pass: "{{ password }}"

  tasks:
    - name: update cache
      ansible.builtin.apt:
        update_cache: yes

    - name: Update all installed packages
      ansible.builtin.apt:
        name: "*"
        state: latest

    - name: Make sure NTP service is running
      ansible.builtin.systemd:
        state: started
        name: systemd-timesyncd

    - name: UFW - Is it running?
      ansible.builtin.command: ufw status
      register: ufw_status

    - name: UFW - Enable UFW and deny incoming traffic
      community.general.ufw:
        state: enabled
      when: "'inactive' in ufw_status.stdout"

    - name: Remove dependencies that are no longer required
      ansible.builtin.apt:
        autoremove: yes

    - name: Check if reboot required
      ansible.builtin.stat:
        path: /var/run/reboot-required
      register: reboot_required

    - name: Reboot if required
      ansible.builtin.reboot:
      when: reboot_required.stat.exists == true
After reviewing the maintenance playbook, you can now proceed to run it. Start by checking the syntax of the playbook file with the following command:
ansible-playbook --syntax-check --ask-vault-pass ongoing.yml
When prompted, enter the vault password you created in Step 5. If there are no errors with the YAML syntax, the output will be:
Output
playbook: ongoing.yml
Once the syntax check is successful, you can run the playbook using the following command:
ansible-playbook --ask-vault-pass ongoing.yml
You will again be prompted for your vault password. After successful authentication, the Ansible controller will log in to each host as the user specified in the playbook (for example, sammy) and execute the tasks. Rather than running the ssh -p 5995 sammy@host_ip_address command on each server manually, Ansible connects to the hosts in the ongoing group in the inventory file and runs the tasks automatically.
Sample Output
If the playbook completes successfully, the output will look similar to this:
Output
PLAY RECAP *****************************************************************************************************
host1 : ok=7 changed=2 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
host2 : ok=7 changed=2 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
host3 : ok=7 changed=2 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
The play recap shows the results of the playbook run:
- The ok column indicates the total number of tasks evaluated successfully. In this case, each host had 7 tasks evaluated.
- The changed column shows tasks that made changes to the hosts. Here, 2 tasks led to changes.
- The unreachable column shows the number of hosts the controller could not log in to. This value is 0, indicating no connection issues.
- The failed column shows tasks that failed, so this value is also 0.
- The skipped column shows tasks that were skipped because their conditions (set with the when parameter) were not met. In this case, 2 tasks were skipped.
- The rescued and ignored columns relate to error handling and were not triggered in this playbook run.
Conclusion
In this tutorial, you used Ansible to automate the initial setup and ongoing maintenance of multiple Ubuntu 22.04 servers. The initial server setup playbook allowed you to configure essential security settings, create users, and update software packages with a single command, saving time and reducing manual effort. The maintenance playbook enabled you to perform regular updates, ensure critical services are running, and streamline server management tasks.
Ansible’s ability to scale automation across multiple servers makes it an invaluable tool for managing distributed applications, clusters, or any environment that requires frequent configuration updates. By using variables, encrypted vaults, and modular playbooks, you can ensure that sensitive information is protected and configurations remain consistent across your infrastructure.
For more information on how to extend your automation workflows, consider exploring Ansible’s official documentation and community resources. Whether you’re deploying applications, managing configurations, or enforcing security policies, Ansible provides a robust and flexible platform to meet your needs.
If you have additional steps or tasks to automate, you can expand your playbooks or create new ones to address specific requirements. Automation not only improves efficiency but also reduces errors, making it a cornerstone of modern system administration.