Automating system administration by using RHEL system roles
Consistent and repeatable configuration of RHEL deployments across multiple hosts with Red Hat Ansible Automation Platform playbooks
Abstract
Providing feedback on Red Hat documentation
We appreciate your feedback on our documentation. Let us know how we can improve it.
Submitting feedback through Jira (account required)
- Log in to the Jira website.
- Click Create in the top navigation bar.
- Enter a descriptive title in the Summary field.
- Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation.
- Click Create at the bottom of the dialogue.
Chapter 1. Introduction to RHEL system roles
By using RHEL system roles, you can remotely manage the system configurations of multiple RHEL systems across major versions of RHEL.
The following describes important terms and concepts in an Ansible environment:
- Control node
A control node is the system from which you run Ansible commands and playbooks. Your control node can be an Ansible Automation Platform, Red Hat Satellite, or a RHEL host. For more information, see Preparing a control node on RHEL 9.
Important: RHEL 9 contains ansible-core 2.14. This Ansible version supports managing RHEL 7, RHEL 8, and RHEL 9 nodes. To manage RHEL 10 nodes, you require a RHEL 10 control node.
- Managed node
- Managed nodes are the servers and network devices that you manage with Ansible. Managed nodes are also sometimes called hosts. Ansible does not have to be installed on managed nodes. For more information, see Preparing a managed node.
- Ansible playbook
- In a playbook, you define the configuration you want to achieve on your managed nodes or a set of steps for the system on the managed node to perform. Playbooks are Ansible’s configuration, deployment, and orchestration language.
- Inventory
- In an inventory file, you list the managed nodes and specify information such as IP address for each managed node. In the inventory, you can also organize the managed nodes by creating and nesting groups for easier scaling. An inventory file is also sometimes called a hostfile.
- Available roles and modules on a Red Hat Enterprise Linux 9 control node
Roles provided by the rhel-system-roles package:
- ad_integration: Active Directory integration
- aide: Advanced Intrusion Detection Environment
- bootloader: GRUB boot loader management
- certificate: Certificate issuance and renewal
- cockpit: Web console installation and configuration
- crypto_policies: System-wide cryptographic policies
- fapolicyd: File access policy daemon configuration
- firewall: Firewalld management
- ha_cluster: HA cluster management
- journald: Systemd journald management
- kdump: Kernel dump management
- kernel_settings: Kernel settings management
- logging: Logging configuration
- metrics: Performance monitoring and metrics
- nbde_client: Network Bound Disk Encryption client
- nbde_server: Network Bound Disk Encryption server
- network: Networking configuration
- podman: Podman container management
- postfix: Postfix configuration
- postgresql: PostgreSQL configuration
- rhc: Subscribing RHEL and configuring the Insights client
- selinux: SELinux management
- ssh: SSH client configuration
- sshd: SSH server configuration
- storage: Storage management
- systemd: Managing systemd units
- timesync: Time synchronization
- tlog: Terminal session recording
- vpn: IPsec VPN configuration
Roles provided by the ansible-collection-microsoft-sql package:
- microsoft.sql.server: Microsoft SQL Server
Modules provided by the ansible-collection-redhat-rhel_mgmt package:
- rhel_mgmt.ipmi_boot: Setting boot devices
- rhel_mgmt.ipmi_power: Setting the system power state
- rhel_mgmt.redfish_command: Managing out-of-band (OOB) controllers
- rhel_mgmt.redfish_info: Querying information from OOB controllers
- rhel_mgmt.redfish_config: Managing BIOS, UEFI, and OOB controller settings
Chapter 2. Preparing a control node and managed nodes to use RHEL system roles
Before you can use individual RHEL system roles to manage services and settings, you must prepare the control node and managed nodes.
2.1. Preparing a control node on RHEL 9
Before using RHEL system roles, you must configure a control node. This system then configures the managed hosts from the inventory according to the playbooks.
Prerequisites
- The system is registered to the Customer Portal.
- A Red Hat Enterprise Linux Server subscription is attached to the system.
- Optional: An Ansible Automation Platform subscription is attached to the system.
Procedure
Create a user named ansible to manage and run playbooks:

[root@control-node]# useradd ansible

Switch to the newly created ansible user:

[root@control-node]# su - ansible

Perform the rest of the procedure as this user.
Create an SSH public and private key:
Use the suggested default location for the key file.
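For example, you might generate the key pair as follows (a sketch; the key type and prompts depend on your system, and pressing Enter accepts the default file location):

[ansible@control-node]$ ssh-keygen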
- Optional: To prevent Ansible from prompting you for the SSH key password each time you establish a connection, configure an SSH agent.
Create the ~/.ansible.cfg file with the following content (a minimal sketch of such a configuration follows the list below):

Note: Settings in the ~/.ansible.cfg file have a higher priority and override settings from the global /etc/ansible/ansible.cfg file.

With these settings, Ansible performs the following actions:
- Manages hosts in the specified inventory file.
- Uses the account set in the remote_user parameter when it establishes SSH connections to managed nodes.
- Uses the sudo utility to execute tasks on managed nodes as the root user.
- Prompts for the root password of the remote user every time you apply a playbook. This is recommended for security reasons.
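A minimal sketch of an ~/.ansible.cfg file that is consistent with the behavior described above (the inventory path assumes the ~/inventory file created in the next step; adjust the values for your environment):

[defaults]
# Inventory file that lists the managed nodes
inventory = /home/ansible/inventory
# User account used for SSH connections to managed nodes
remote_user = ansible

[privilege_escalation]
# Execute tasks on managed nodes as root through sudo
become = True
become_method = sudo
become_user = root
# Prompt for the password each time a playbook is applied
become_ask_pass = True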
Create an ~/inventory file in INI or YAML format that lists the hostnames of managed hosts. You can also define groups of hosts in the inventory file. For example, the following is an inventory file in the INI format with three hosts and one host group named US:

managed-node-01.example.com

[US]
managed-node-02.example.com ansible_host=192.0.2.100
managed-node-03.example.com

Note that the control node must be able to resolve the hostnames. If the DNS server cannot resolve certain hostnames, add the ansible_host parameter next to the host entry to specify its IP address.

Install RHEL system roles:
On a RHEL host without Ansible Automation Platform, install the rhel-system-roles package:

[root@control-node]# dnf install rhel-system-roles

This command installs the collections in the /usr/share/ansible/collections/ansible_collections/redhat/rhel_system_roles/ directory, and the ansible-core package as a dependency.

On Ansible Automation Platform, perform the following steps as the ansible user:

- Define Red Hat automation hub as the primary source for content in the ~/.ansible.cfg file.
- Install the redhat.rhel_system_roles collection from Red Hat automation hub:

  [ansible@control-node]$ ansible-galaxy collection install redhat.rhel_system_roles

  This command installs the collection in the ~/.ansible/collections/ansible_collections/redhat/rhel_system_roles/ directory.
Next step
- Prepare the managed nodes. For more information, see Preparing a managed node.
2.2. Preparing a managed node
Managed nodes are the systems listed in the inventory, which the control node configures according to the playbook. You do not have to install Ansible on managed hosts.
Prerequisites
- You prepared the control node. For more information, see Preparing a control node on RHEL 9.
- You have SSH access from the control node.

Important: Direct SSH access as the root user is a security risk. To reduce this risk, you will create a local user on this node and configure a sudo policy when preparing a managed node. Ansible on the control node can then use the local user account to log in to the managed node and run playbooks as different users, such as root.
Procedure
Create a user named ansible:

[root@managed-node-01]# useradd ansible

The control node later uses this user to establish an SSH connection to this host.
Set a password for the ansible user:

[root@managed-node-01]# passwd ansible
Changing password for user ansible.
New password: <password>
Retype new password: <password>
passwd: all authentication tokens updated successfully.

You must enter this password when Ansible uses sudo to perform tasks as the root user.

Install the ansible user's SSH public key on the managed node:

Log in to the control node as the ansible user, and copy the SSH public key to the managed node:

[ansible@control-node]$ ssh-copy-id managed-node-01.example.com
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/ansible/.ssh/id_rsa.pub"
The authenticity of host 'managed-node-01.example.com (192.0.2.100)' can't be established.
ECDSA key fingerprint is SHA256:9bZ33GJNODK3zbNhybokN/6Mq7hu3vpBXDrCxe7NAvo.

When prompted, connect by entering yes:

Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys

When prompted, enter the password.

Verify the SSH connection by remotely executing a command on the control node:

[ansible@control-node]$ ssh managed-node-01.example.com whoami
ansible
Create a sudo configuration for the ansible user:

Create and edit the /etc/sudoers.d/ansible file by using the visudo command:

[root@managed-node-01]# visudo /etc/sudoers.d/ansible

The benefit of using visudo over a normal editor is that this utility provides basic checks, such as for parse errors, before installing the file.

Configure a sudoers policy in the /etc/sudoers.d/ansible file that meets your requirements, for example:

To grant permissions to the ansible user to run all commands as any user and group on this host after entering the ansible user's password, use:

ansible ALL=(ALL) ALL

To grant permissions to the ansible user to run all commands as any user and group on this host without entering the ansible user's password, use:

ansible ALL=(ALL) NOPASSWD: ALL

Alternatively, configure a more fine-grained policy that matches your security requirements. For further details on sudoers policies, see the sudoers(5) manual page.
Verification
Verify that you can execute commands from the control node on all managed nodes:

The hard-coded all group dynamically contains all hosts listed in the inventory file.
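For example, you can use the Ansible ping module (a sketch; the output is abridged, and the host names depend on your inventory):

[ansible@control-node]$ ansible all -m ping
BECOME password: <password>
managed-node-01.example.com | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
...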
Verify that privilege escalation works correctly by running the whoami utility on all managed nodes by using the Ansible command module:

[ansible@control-node]$ ansible all -m command -a whoami
BECOME password: <password>
managed-node-01.example.com | CHANGED | rc=0 >>
root
...

If the command returns root, you configured sudo on the managed nodes correctly.
Chapter 3. Ansible vault
You can use Ansible vault to encrypt sensitive data, such as passwords and API keys, in your playbooks.
Storing sensitive data in plain text in variables or other Ansible-compatible files is a security risk because any user with access to those files can read the sensitive data.
With Ansible vault, you can encrypt, decrypt, view, and edit sensitive information. This information can be included in:
- Variable files inserted in an Ansible Playbook
- Host and group variables
- Variable files passed as arguments when executing the playbook
- Variables defined in Ansible roles
You can use Ansible vault to securely manage individual variables, entire files, or even structured data like YAML files. This data can then be safely stored in a version control system or shared with team members without exposing sensitive information.
Files are protected with symmetric encryption of the Advanced Encryption Standard (AES256), where a single password or passphrase is used both to encrypt and decrypt the data. Note that the way this is done has not been formally audited by a third party.
To simplify management, it makes sense to set up your Ansible project so that sensitive variables and all other variables are kept in separate files or directories. Then you can protect the files containing sensitive variables with the ansible-vault command.
- Creating an encrypted file
The following command prompts you for a new vault password. Then it opens a file for storing sensitive variables using the default editor.
# ansible-vault create vault.yml
New Vault password: <vault_password>
Confirm New Vault password: <vault_password>

- Viewing an encrypted file
The following command prompts you for your existing vault password. Then it displays the sensitive contents of an already encrypted file.
# ansible-vault view vault.yml
Vault password: <vault_password>
my_secret: "yJJvPqhsiusmmPPZdnjndkdnYNDjdj782meUZcw"

- Editing an encrypted file
The following command prompts you for your existing vault password. Then it opens the already encrypted file for you to update the sensitive variables using the default editor.
# ansible-vault edit vault.yml
Vault password: <vault_password>

- Encrypting an existing file
The following command prompts you for a new vault password. Then it encrypts an existing unencrypted file.
# ansible-vault encrypt vault.yml
New Vault password: <vault_password>
Confirm New Vault password: <vault_password>
Encryption successful

- Decrypting an existing file
The following command prompts you for your existing vault password. Then it decrypts an existing encrypted file.
# ansible-vault decrypt vault.yml
Vault password: <vault_password>
Decryption successful

- Changing the password of an encrypted file
The following command prompts you for your original vault password, then for the new vault password.
# ansible-vault rekey vault.yml
Vault password: <vault_password>
New Vault password: <vault_password>
Confirm New Vault password: <vault_password>
Rekey successful

- Basic application of Ansible vault variables in a playbook
You read in the file with variables (vault.yml) in the vars_files section of your Ansible Playbook, and you use the curly brackets the same way you would do with your ordinary variables. Then you either run the playbook with the ansible-playbook --ask-vault-pass command and enter the password manually, or you save the password in a separate file and run the playbook with the ansible-playbook --vault-password-file /path/to/my/vault-password-file command. A sketch of such a playbook is shown below.
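A minimal sketch of such a playbook, assuming that vault.yml defines a variable named my_secret; the host name and the task are illustrative:

---
- name: Example play that uses a vault-encrypted variable
  hosts: managed-node-01.example.com
  vars_files:
    - vault.yml
  tasks:
    - name: Show that the encrypted variable is available
      ansible.builtin.debug:
        msg: "The value of my_secret is {{ my_secret }}"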
Chapter 4. Joining RHEL systems to an Active Directory by using RHEL system roles
If your organization uses Microsoft Active Directory (AD) to centrally manage users, groups, and other resources, you can join your Red Hat Enterprise Linux (RHEL) host to this AD. By using the ad_integration RHEL system role, you can automate the integration of a RHEL system into an Active Directory (AD) domain.
For example, if a host is joined to AD, AD users can then log in to RHEL and you can make services on the RHEL host available for authenticated AD users.
The ad_integration role is for deployments using direct AD integration without an Identity Management (IdM) environment. For IdM environments, use the ansible-freeipa roles.
4.1. Joining RHEL to an Active Directory domain by using the ad_integration RHEL system role
You can use the ad_integration RHEL system role to automate the process of joining RHEL to an Active Directory (AD) domain.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- The managed node uses a DNS server that can resolve AD DNS entries.
- You have the credentials of an AD account that has permissions to join computers to the domain.
- You have ensured that the required ports for communication with AD are open on the managed node.
Procedure
Store your sensitive variables in an encrypted file:
Create the vault:
$ ansible-vault create ~/vault.yml
New Vault password: <vault_password>
Confirm New Vault password: <vault_password>

After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:

usr: administrator
pwd: <password>

Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example, ~/playbook.yml, with the following content (a sketch of such a playbook follows the variable descriptions below):

The settings specified in the example playbook include the following:
ad_integration_allow_rc4_crypto: <true|false>
Configures whether the role activates the AD-SUPPORT crypto policy on the managed node. By default, RHEL does not support the weak RC4 encryption but, if Kerberos in your AD still requires RC4, you can enable this encryption type by setting ad_integration_allow_rc4_crypto: true. Omit the variable or set it to false if Kerberos uses AES encryption.

ad_integration_timesync_source: <time_server>
Specifies the NTP server to use for time synchronization. Kerberos requires a synchronized time among AD domain controllers and domain members to prevent replay attacks. If you omit this variable, the ad_integration role does not use the timesync RHEL system role to configure time synchronization on the managed node.
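A minimal sketch of such a playbook, assuming the vault created in the previous step (with the usr and pwd keys) and illustrative host, domain, and time-server names:

---
- name: Join the managed node to an Active Directory domain
  hosts: managed-node-01.example.com
  vars_files:
    - ~/vault.yml
  vars:
    # Domain, user, and time server names are illustrative
    ad_integration_realm: ad.example.com
    ad_integration_user: "{{ usr }}"
    ad_integration_password: "{{ pwd }}"
    ad_integration_timesync_source: time-server.example.com
    ad_integration_allow_rc4_crypto: false
  roles:
    - rhel-system-roles.ad_integration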
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.ad_integration/README.md file on the control node.

Validate the playbook syntax:
$ ansible-playbook --ask-vault-pass --syntax-check ~/playbook.yml

Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook --ask-vault-pass ~/playbook.yml
Verification
Check if AD users, such as administrator, are available locally on the managed node:

$ ansible managed-node-01.example.com -m command -a 'getent passwd administrator@ad.example.com'
administrator@ad.example.com:*:1450400500:1450400513:Administrator:/home/administrator@ad.example.com:/bin/bash
Chapter 5. Configuring the GRUB boot loader by using RHEL system roles
By using the bootloader RHEL system role, you can automate the configuration and management tasks related to the GRUB2 boot loader.
This role currently supports configuring the GRUB2 boot loader, which runs on the following CPU architectures:
- AMD and Intel 64-bit architectures (x86-64)
- The 64-bit ARM architecture (ARMv8.0)
- IBM Power Systems, Little Endian (POWER9)
5.1. Updating the existing boot loader entries by using the bootloader RHEL system role
You can use the bootloader RHEL system role to update the existing entries in the GRUB boot menu in an automated fashion. This way you can efficiently pass specific kernel command-line parameters that can optimize the performance or behavior of your systems.
For example, on systems where detailed boot messages from the kernel and init system are not necessary, you can use the bootloader role to apply the quiet parameter to your existing boot loader entries on your managed nodes and achieve a cleaner, less cluttered, and more user-friendly booting experience.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- You identified the kernel that corresponds to the boot loader entry you want to update.
Procedure
Create a playbook file, for example, ~/playbook.yml, with the following content (a sketch of such a playbook follows the variable descriptions below):

The settings specified in the example playbook include the following:
kernel
Specifies the kernel connected with the boot loader entry that you want to update.

options
Specifies the kernel command-line parameters to update for your chosen boot loader entry (kernel).

bootloader_reboot_ok: true
The role detects that a reboot is required for the changes to take effect and performs a restart of the managed node.
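A minimal sketch of such a playbook, using the quiet parameter mentioned above; the host name and the kernel path are illustrative and must match an existing boot entry on your managed nodes:

---
- name: Update an existing boot loader entry
  hosts: managed-node-01.example.com
  vars:
    bootloader_settings:
      # Kernel that identifies the boot loader entry to update (illustrative path)
      - kernel:
          path: /boot/vmlinuz-5.14.0-362.8.1.el9_3.x86_64
        # Kernel command-line parameters to apply to that entry
        options:
          - name: quiet
            state: present
    # Reboot the managed node so the change takes effect
    bootloader_reboot_ok: true
  roles:
    - rhel-system-roles.bootloader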
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.bootloader/README.md file on the control node.

Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml

Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
Check that your specified boot loader entry has updated kernel command-line parameters:
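For example, you could query the boot entries remotely with the grubby utility (a sketch; output not shown):

$ ansible managed-node-01.example.com -m ansible.builtin.command -a 'grubby --info=ALL'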
5.4. Collecting the boot loader configuration information by using the bootloader RHEL system role
You can use the bootloader RHEL system role to gather information about the GRUB boot loader entries in an automated fashion. You can use this information to verify the correct configuration of system boot parameters, such as kernel and initial RAM disk image paths.
As a result, you can for example:
- Prevent boot failures.
- Revert to a known good state when troubleshooting.
- Be sure that security-related kernel command-line parameters are correctly configured.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
Create a playbook file, for example, ~/playbook.yml, with the following content (a sketch of such a playbook is shown below):
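A minimal sketch of such a playbook, assuming the role's fact-gathering variable and an illustrative host; the debug task only prints the collected facts:

---
- name: Collect boot loader configuration information
  hosts: managed-node-01.example.com
  vars:
    # Only gather facts about the existing boot loader entries
    bootloader_gather_facts: true
  roles:
    - rhel-system-roles.bootloader
  tasks:
    - name: Display the collected boot loader facts
      ansible.builtin.debug:
        var: bootloader_facts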
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.bootloader/README.md file on the control node.

Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml

Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
After you run the preceding playbook on the control node, you will see command-line output similar to the following example:

The command-line output shows the following notable configuration information about the boot entry:
args
Command-line parameters passed to the kernel by the GRUB2 boot loader during the boot process. They configure various settings and behaviors of the kernel, initramfs, and other boot-time components.

id
Unique identifier assigned to each boot entry in a boot loader menu. It consists of the machine ID and the kernel version.

root
The root file system that the kernel mounts and uses as the primary file system during boot.
Chapter 6. Requesting certificates from a CA and creating self-signed certificates by using RHEL system roles
Many services, such as web servers, use TLS to encrypt connections with clients. By using the certificate RHEL system role, you can automate the generation of private keys on managed nodes. Additionally, the role configures the certmonger service to request a certificate from a CA.
For testing purposes, you can use the certificate role to create self-signed certificates instead of requesting a signed certificate from a CA.
6.1. Requesting a new certificate from an IdM CA by using the certificate RHEL system role
By using the certificate RHEL system role, you can automate the process of creating a private key and letting the certmonger service request a certificate from the Identity Management (IdM) certificate authority (CA). By default, certmonger will also renew the certificate before it expires.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- The managed node is a member of an IdM domain and the domain uses the IdM-integrated CA.
Procedure
Create a playbook file, for example, ~/playbook.yml, with the following content (a sketch of such a playbook follows the variable descriptions below):

The settings specified in the example playbook include the following:
name: <path_or_file_name>
Defines the name or path of the generated private key and certificate file:
- If you set the variable to web-server, the role stores the private key in the /etc/pki/tls/private/web-server.key and the certificate in the /etc/pki/tls/certs/web-server.crt files.
- If you set the variable to a path, such as /tmp/web-server, the role stores the private key in the /tmp/web-server.key and the certificate in the /tmp/web-server.crt files. Note that the directory you use must have the cert_t SELinux context set. You can use the selinux RHEL system role to manage SELinux contexts.

ca: ipa
Defines that the role requests the certificate from an IdM CA.
dns: <hostname_or_list_of_hostnames>
Sets the hostnames that the Subject Alternative Names (SAN) field in the issued certificate contains. You can use a wildcard (*) or specify multiple names in YAML list format.

principal: <kerberos_principal>
Optional: Sets the Kerberos principal that should be included in the certificate.

run_before: <command>
Optional: Defines a command that certmonger should execute before requesting the certificate from the CA.

run_after: <command>
Optional: Defines a command that certmonger should execute after it received the issued certificate from the CA.
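A minimal sketch of such a playbook, using the variables described above; the host name, certificate name, DNS name, and Kerberos principal are illustrative:

---
- name: Request a certificate from the IdM CA
  hosts: managed-node-01.example.com
  vars:
    certificate_requests:
      - name: web-server
        dns: www.example.com
        principal: HTTP/www.example.com@EXAMPLE.COM
        ca: ipa
  roles:
    - rhel-system-roles.certificate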
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.certificate/README.md file on the control node.

Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml

Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
List the certificates that the certmonger service manages:
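For example, you can run the getcert utility remotely on the managed node (a sketch; output not shown):

$ ansible managed-node-01.example.com -m command -a 'getcert list'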
6.2. Requesting a new self-signed certificate by using the certificate RHEL system role
If you require a TLS certificate for a test environment, you can use a self-signed certificate. By using the certificate RHEL system role, you can automate the process of creating a private key and letting the certmonger service create a self-signed certificate.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
Create a playbook file, for example, ~/playbook.yml, with the following content (a sketch of such a playbook follows the variable descriptions below):

The settings specified in the example playbook include the following:
name: <path_or_file_name>
Defines the name or path of the generated private key and certificate file:
- If you set the variable to web-server, the role stores the private key in the /etc/pki/tls/private/web-server.key and the certificate in the /etc/pki/tls/certs/web-server.crt files.
- If you set the variable to a path, such as /tmp/web-server, the role stores the private key in the /tmp/web-server.key and the certificate in the /tmp/web-server.crt files. Note that the directory you use must have the cert_t SELinux context set. You can use the selinux RHEL system role to manage SELinux contexts.

ca: self-sign
Defines that the role creates a self-signed certificate.

dns: <hostname_or_list_of_hostnames>
Sets the hostnames that the Subject Alternative Names (SAN) field in the issued certificate contains. You can use a wildcard (*) or specify multiple names in YAML list format.
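A minimal sketch of such a playbook; the host name, certificate name, and DNS name are illustrative:

---
- name: Create a self-signed certificate
  hosts: managed-node-01.example.com
  vars:
    certificate_requests:
      - name: web-server
        dns: www.example.com
        ca: self-sign
  roles:
    - rhel-system-roles.certificate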
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.certificate/README.md file on the control node.

Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml

Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
List the certificates that the certmonger service manages:
Chapter 7. Installing and configuring web console by using RHEL system roles
With the cockpit RHEL system role, you can automatically deploy and enable the web console on multiple RHEL systems.
7.1. Installing the web console by using the cockpit RHEL system role
You can use the cockpit system role to automate installing and enabling the RHEL web console on multiple systems.
You use the cockpit system role to:
- Install the RHEL web console.
- Allow the firewalld and selinux system roles to configure the system for opening new ports.
- Set the web console to use a certificate from the ipa trusted certificate authority instead of using a self-signed certificate.
You do not have to call the firewall or certificate system roles in the playbook to manage the firewall or create the certificate. The cockpit system role calls them automatically as needed.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
Create a playbook file, for example, ~/playbook.yml, with the following content (a sketch of such a playbook follows the variable descriptions below):

The settings specified in the example playbook include the following:
cockpit_manage_selinux: true
Allow using the selinux system role to configure SELinux for setting up the correct port permissions on the websm_port_t SELinux type.

cockpit_manage_firewall: true
Allow the cockpit system role to use the firewalld system role for adding ports.

cockpit_certificates: <YAML_dictionary>
By default, the RHEL web console uses a self-signed certificate. Alternatively, you can add the cockpit_certificates variable to the playbook and configure the role to request certificates from an IdM certificate authority (CA) or to use an existing certificate and private key that is available on the managed node.

For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.cockpit/README.md file on the control node.
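A minimal sketch of such a playbook, using the variables described above; the host name is illustrative, and the optional cockpit_certificates dictionary is omitted:

---
- name: Install and enable the RHEL web console
  hosts: managed-node-01.example.com
  vars:
    cockpit_manage_selinux: true
    cockpit_manage_firewall: true
  roles:
    - rhel-system-roles.cockpit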
Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml

Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Chapter 8. Setting a custom cryptographic policy by using RHEL system roles
By using the crypto_policies RHEL system role, you can quickly and consistently configure custom cryptographic policies across many systems in an automated fashion.
Custom cryptographic policies are a set of rules and configurations that manage the use of cryptographic algorithms and protocols. These policies help you to maintain a protected, consistent, and manageable security environment across multiple systems and applications.
8.1. Enhancing security with the FUTURE cryptographic policy using the crypto_policies RHEL system role
You can use the crypto_policies RHEL system role to configure the FUTURE cryptographic policy on your managed nodes.
The FUTURE policy helps to achieve, for example:
- Future-proofing against emerging threats: anticipates advancements in computational power.
- Enhanced security: stronger encryption standards require longer key lengths and more secure algorithms.
- Compliance with high-security standards: for example, in healthcare, telco, and finance, data sensitivity is high and the availability of strong cryptography is critical.
Typically, FUTURE is suitable for environments handling highly sensitive data, preparing for future regulations, or adopting long-term security strategies.
Legacy systems or software might not support the more modern and stricter algorithms and protocols enforced by the FUTURE policy. For example, older systems might not support TLS 1.3 or larger key sizes. This could lead to compatibility problems.
Also, using strong algorithms usually increases the computational workload, which could negatively affect your system performance.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
Create a playbook file, for example, ~/playbook.yml, with the following content (a sketch of such a playbook follows the variable descriptions below):

The settings specified in the example playbook include the following:
crypto_policies_policy: FUTURE
Configures the required cryptographic policy (FUTURE) on the managed node. It can be either the base policy or a base policy with some sub-policies. The specified base policy and sub-policies have to be available on the managed node. The default value is null. It means that the configuration is not changed and the crypto_policies RHEL system role will only collect the Ansible facts.

crypto_policies_reboot_ok: true
Causes the system to reboot after the cryptographic policy change to make sure all of the services and applications will read the new configuration files. The default value is false.
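A minimal sketch of such a playbook (the host name is illustrative):

---
- name: Configure the FUTURE cryptographic policy
  hosts: managed-node-01.example.com
  vars:
    crypto_policies_policy: FUTURE
    crypto_policies_reboot_ok: true
  roles:
    - rhel-system-roles.crypto_policies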
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.crypto_policies/README.md file on the control node.

Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml

Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Because the FIPS:OSPP system-wide subpolicy contains further restrictions for cryptographic algorithms required by the Common Criteria (CC) certification, the system is less interoperable after you set it. For example, you cannot use RSA and DH keys shorter than 3072 bits, additional SSH algorithms, and several TLS groups. Setting FIPS:OSPP also prevents connecting to the Red Hat Content Delivery Network (CDN) infrastructure. Furthermore, you cannot integrate Active Directory (AD) into IdM deployments that use FIPS:OSPP; communication between RHEL hosts using FIPS:OSPP and AD domains might not work, or some AD accounts might not be able to authenticate.
Note that your system is not CC-compliant after you set the FIPS:OSPP cryptographic subpolicy. The only correct way to make your RHEL system compliant with the CC standard is by following the guidance provided in the cc-config package. See the Common Criteria section on the Product compliance Red Hat Customer Portal page for a list of certified RHEL versions, validation reports, and links to CC guides hosted at the National Information Assurance Partnership (NIAP) website.
Verification
On the control node, create another playbook named, for example, verify_playbook.yml (a sketch of such a playbook follows the variable description below):

The settings specified in the example playbook include the following:
crypto_policies_active
An exported Ansible fact that contains the currently active policy name in the format as accepted by the crypto_policies_policy variable.
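A minimal sketch of such a verification playbook; because crypto_policies_policy is not set, the role only collects facts, and the debug task prints the crypto_policies_active fact (the host name is illustrative):

---
- name: Verify the active cryptographic policy
  hosts: managed-node-01.example.com
  roles:
    - rhel-system-roles.crypto_policies
  tasks:
    - name: Display the currently active policy
      ansible.builtin.debug:
        var: crypto_policies_active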
Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/verify_playbook.yml

Run the playbook:
$ ansible-playbook ~/verify_playbook.yml
TASK [debug] **************************
ok: [host] => {
    "crypto_policies_active": "FUTURE"
}

The crypto_policies_active variable shows the active policy on the managed node.
Chapter 9. Restricting the execution of applications by using the fapolicyd RHEL system role
By using the fapolicyd software framework, you can restrict the execution of applications based on a user-defined policy. The framework also verifies the integrity of applications before execution. You can automate the configuration of fapolicyd by using the fapolicyd RHEL system role.
The fapolicyd service prevents the execution of unauthorized applications only when they run as regular users, not as root.
9.1. Preventing users from executing untrustworthy code by using the fapolicyd RHEL system role
You can automate the installation and configuration of the fapolicyd service by using the fapolicyd RHEL system role.
With this role, you can remotely configure the service to allow users to execute only trusted applications, for example, the ones which are listed in the RPM database and in an allow list. Additionally, the service can perform integrity checks before it executes an allowed application.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
Create a playbook file, for example, ~/playbook.yml, with the following content (a sketch of such a playbook follows the variable descriptions below):

The settings specified in the example playbook include the following:
fapolicyd_setup_permissive: <true|false>
Enables or disables sending policy decisions to the kernel for enforcement. For debugging and testing purposes, set this variable to false.

fapolicyd_setup_integrity: <type>
Defines the integrity checking method. You can set one of the following values:
- none (default): Disables integrity checking.
- size: The service compares only the file sizes of allowed applications.
- ima: The service checks the SHA-256 hash that the kernel's Integrity Measurement Architecture (IMA) stored in a file's extended attribute. Additionally, the service performs a size check. Note that the role does not configure the IMA kernel subsystem. To use this option, you must manually configure the IMA subsystem.
- sha256: The service compares the SHA-256 hash of allowed applications.

fapolicyd_setup_trust: <trust_backends>
Defines the list of trust backends. If you include the file backend, specify the allowed executable files in the fapolicyd_add_trusted_file list.
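A minimal sketch of such a playbook; the host name, the trust backend value, and the trusted file path are illustrative assumptions, not values from the original example:

---
- name: Restrict application execution with fapolicyd
  hosts: managed-node-01.example.com
  vars:
    fapolicyd_setup_permissive: false
    fapolicyd_setup_integrity: sha256
    # Trust the RPM database and an explicit file list (illustrative)
    fapolicyd_setup_trust: rpmdb,file
    fapolicyd_add_trusted_file:
      - /usr/local/bin/my_allowed_application
  roles:
    - rhel-system-roles.fapolicyd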
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.fapolicyd/README.md file on the control node.

Validate the playbook syntax:
$ ansible-playbook ~/playbook.yml --syntax-check

Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
Execute a binary application that is not on the allow list as a user:
$ ansible managed-node-01.example.com -m command -a 'su -c "/bin/not_authorized_application " <user_name>'
bash: line 1: /bin/not_authorized_application: Operation not permitted
non-zero return code
Chapter 10. Configuring firewalld by using RHEL system roles
RHEL system roles are a set of content for the Ansible automation utility. Together with the Ansible automation utility, this content provides a consistent configuration interface to remotely manage multiple systems at once.
The rhel-system-roles package contains the rhel-system-roles.firewall RHEL system role. This role was introduced for automated configurations of the firewalld service.
With the firewall RHEL system role you can configure many different firewalld parameters, for example:
- Zones
- The services for which packets should be allowed
- Granting, rejecting, or dropping traffic access to ports
- Forwarding of ports or port ranges for a zone
10.1. Resetting the firewalld settings by using the firewall RHEL system role
The firewall RHEL system role supports automating a reset of firewalld settings to their defaults. This efficiently removes insecure or unintentional firewall rules and simplifies management.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
Create a playbook file, for example, ~/playbook.yml, with the following content (a sketch of such a playbook follows the variable description below):

The settings specified in the example playbook include the following:
previous: replaced
Removes all existing user-defined settings and resets the firewalld settings to defaults. If you combine the previous: replaced parameter with other settings, the firewall role removes all existing settings before applying new ones.

For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.firewall/README.md file on the control node.
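A minimal sketch of such a playbook (the host name is illustrative):

---
- name: Reset firewalld to its default settings
  hosts: managed-node-01.example.com
  vars:
    firewall:
      - previous: replaced
  roles:
    - rhel-system-roles.firewall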
Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml

Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
Run this command on the control node to remotely check that all firewall configuration on your managed node was reset to its default values:
# ansible managed-node-01.example.com -m ansible.builtin.command -a 'firewall-cmd --list-all-zones'
10.2. Forwarding incoming traffic in firewalld from one local port to a different local port by using the firewall RHEL system role
You can use the firewall RHEL system role to remotely configure forwarding of incoming traffic from one local port to a different local port.
For example, if you have an environment where multiple services co-exist on the same machine and need the same default port, port conflicts are likely to occur. These conflicts can disrupt services and cause downtime. With the firewall RHEL system role, you can efficiently forward traffic to alternative ports to ensure that your services can run simultaneously without modifying their configuration.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
Create a playbook file, for example, ~/playbook.yml, with the following content (a sketch of such a playbook follows the variable descriptions below):

The settings specified in the example playbook include the following:
forward_port: 8080/tcp;443
Traffic coming to the local port 8080 using the TCP protocol is forwarded to port 443.

runtime: true
Enables changes in the runtime configuration. The default is set to true.

For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.firewall/README.md file on the control node.
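A minimal sketch of such a playbook, using the forward_port value described above (the host name is illustrative):

---
- name: Forward incoming traffic from local port 8080 to port 443
  hosts: managed-node-01.example.com
  vars:
    firewall:
      - forward_port: 8080/tcp;443
        state: enabled
        runtime: true
  roles:
    - rhel-system-roles.firewall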
Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml

Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
On the control node, run the following command to remotely check the forwarded-ports on your managed node:
# ansible managed-node-01.example.com -m ansible.builtin.command -a 'firewall-cmd --list-forward-ports'
managed-node-01.example.com | CHANGED | rc=0 >>
port=8080:proto=tcp:toport=443:toaddr=
10.3. Configuring a firewalld DMZ zone by using the firewall RHEL system role
You can use the firewall RHEL system role to configure a zone to allow certain traffic. For example, you can configure the dmz zone with the enp1s0 interface to allow HTTPS traffic, so that external users can access your web servers.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
Create a playbook file, for example, ~/playbook.yml, with the following content (a sketch of such a playbook is shown below):
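A minimal sketch of such a playbook for the dmz example described above (the host name is illustrative):

---
- name: Allow HTTPS traffic in the dmz zone
  hosts: managed-node-01.example.com
  vars:
    firewall:
      - zone: dmz
        interface: enp1s0
        service: https
        state: enabled
        runtime: true
        permanent: true
  roles:
    - rhel-system-roles.firewall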
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.firewall/README.md file on the control node.

Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml

Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
On the control node, run the following command to remotely check the information about the dmz zone on your managed node:
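For example (a sketch; output not shown):

# ansible managed-node-01.example.com -m ansible.builtin.command -a 'firewall-cmd --info-zone=dmz'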
Chapter 11. Configuring a high-availability cluster by using RHEL system roles
With the ha_cluster system role, you can configure and manage a high-availability cluster that uses the Pacemaker high availability cluster resource manager.
11.1. Variables of the ha_cluster RHEL system role
In an ha_cluster RHEL system role playbook, you define the variables for a high availability cluster according to the requirements of your cluster deployment.
The variables you can set for an ha_cluster RHEL system role are as follows:
ha_cluster_enable_repos-
A boolean flag that enables the repositories containing the packages that are needed by the
ha_clusterRHEL system role. When this variable is set totrue, the default value, you must have active subscription coverage for RHEL and the RHEL High Availability Add-On on the systems that you will use as your cluster members or the system role will fail. ha_cluster_enable_repos_resilient_storage-
(RHEL 9.4 and later) A boolean flag that enables the repositories containing resilient storage packages, such as
dlmorgfs2. For this option to take effect,ha_cluster_enable_reposmust be set totrue. The default value of this variable isfalse. ha_cluster_manage_firewall(RHEL 9.2 and later) A boolean flag that determines whether the
ha_clusterRHEL system role manages the firewall. Whenha_cluster_manage_firewallis set totrue, the firewall high availability service and thefence-virtport are enabled. Whenha_cluster_manage_firewallis set tofalse, theha_clusterRHEL system role does not manage the firewall. If your system is running thefirewalldservice, you must set the parameter totruein your playbook.You can use the
ha_cluster_manage_firewallparameter to add ports, but you cannot use the parameter to remove ports. To remove ports, use thefirewallsystem role directly.In RHEL 9.2 and later, the firewall is no longer configured by default, because it is configured only when
ha_cluster_manage_firewallis set totrue.ha_cluster_manage_selinux(RHEL 9.2 and later) A boolean flag that determines whether the
ha_clusterRHEL system role manages the ports belonging to the firewall high availability service using theselinuxRHEL system role. Whenha_cluster_manage_selinuxis set totrue, the ports belonging to the firewall high availability service are associated with the SELinux port typecluster_port_t. Whenha_cluster_manage_selinuxis set tofalse, theha_clusterRHEL system role does not manage SELinux.If your system is running the
selinuxservice, you must set this parameter totruein your playbook. Firewall configuration is a prerequisite for managing SELinux. If the firewall is not installed, the managing SELinux policy is skipped.You can use the
ha_cluster_manage_selinuxparameter to add policy, but you cannot use the parameter to remove policy. To remove policy, use theselinuxRHEL system role directly.ha_cluster_cluster_presentA boolean flag which, if set to
true, determines that HA cluster will be configured on the hosts according to the variables passed to the role. Any cluster configuration not specified in the playbook and not supported by the role will be lost.If
ha_cluster_cluster_presentis set tofalse, all HA cluster configuration will be removed from the target hosts.The default value of this variable is
true.The following example playbook removes all cluster configuration on
node1andnode2Copy to Clipboard Copied! Toggle word wrap Toggle overflow ha_cluster_start_on_boot-
A boolean flag that determines whether cluster services will be configured to start on boot. The default value of this variable is
true. ha_cluster_install_cloud_agents-
(RHEL 9.5 and later) A boolean flag that determines whether resource and fence agents for cloud environments are installed. These agents are not installed by default. Alternately, you can specify the packages for cloud environments by using the
ha_cluster_fence_agent_packagesandha_cluster_extra_packagesvariables. The default value of this variable isfalse. ha_cluster_fence_agent_packages-
List of fence agent packages to install. The default value of this variable is
fence-agents-all,fence-virt. ha_cluster_extra_packagesList of additional packages to be installed. The default value of this variable is no packages.
This variable can be used to install additional packages not installed automatically by the role, for example custom resource agents.
It is possible to specify fence agents as members of this list. However,
ha_cluster_fence_agent_packagesis the recommended role variable to use for specifying fence agents, so that its default value is overridden.ha_cluster_hacluster_password-
A string value that specifies the password of the
haclusteruser. Thehaclusteruser has full access to a cluster. To protect sensitive data, vault encrypt the password, as described in Encrypting content with Ansible Vault. There is no default password value, and this variable must be specified. ha_cluster_hacluster_qdevice_password-
(RHEL 9.3 and later) A string value that specifies the password of the
haclusteruser for a quorum device. This parameter is needed only if theha_cluster_quorumparameter is configured to use a quorum device of typenetand the password of thehaclusteruser on the quorum device is different from the password of thehaclusteruser specified with theha_cluster_hacluster_passwordparameter. Thehaclusteruser has full access to a cluster. To protect sensitive data, vault encrypt the password, as described in Encrypting content with Ansible Vault. There is no default value for this password. ha_cluster_corosync_key_srcThe path to Corosync
authkeyfile, which is the authentication and encryption key for Corosync communication. It is highly recommended that you have a uniqueauthkeyvalue for each cluster. The key should be 256 bytes of random data.If you specify a key for this variable, it is recommended that you vault encrypt the key, as described in Encrypting content with Ansible Vault.
If no key is specified, a key already present on the nodes will be used. If nodes do not have the same key, a key from one node will be distributed to other nodes so that all nodes have the same key. If no node has a key, a new key will be generated and distributed to the nodes.
If this variable is set,
ha_cluster_regenerate_keysis ignored for this key.The default value of this variable is null.
ha_cluster_pacemaker_key_srcThe path to the Pacemaker
authkeyfile, which is the authentication and encryption key for Pacemaker communication. It is highly recommended that you have a uniqueauthkeyvalue for each cluster. The key should be 256 bytes of random data.If you specify a key for this variable, it is recommended that you vault encrypt the key, as described in Encrypting content with Ansible Vault.
If no key is specified, a key already present on the nodes will be used. If nodes do not have the same key, a key from one node will be distributed to other nodes so that all nodes have the same key. If no node has a key, a new key will be generated and distributed to the nodes.
If this variable is set,
ha_cluster_regenerate_keysis ignored for this key.The default value of this variable is null.
ha_cluster_fence_virt_key_srcThe path to the
fence-virtorfence-xvmpre-shared key file, which is the location of the authentication key for thefence-virtorfence-xvmfence agent.If you specify a key for this variable, it is recommended that you vault encrypt the key, as described in Encrypting content with Ansible Vault.
If no key is specified, a key already present on the nodes will be used. If nodes do not have the same key, a key from one node will be distributed to other nodes so that all nodes have the same key. If no node has a key, a new key will be generated and distributed to the nodes. If the
ha_clusterRHEL system role generates a new key in this fashion, you should copy the key to your nodes' hypervisor to ensure that fencing works.If this variable is set,
ha_cluster_regenerate_keysis ignored for this key.The default value of this variable is null.
ha_cluster_pcsd_public_key_src, ha_cluster_pcsd_private_key_src
The path to the
pcsdTLS certificate and private key. If this is not specified, a certificate-key pair already present on the nodes will be used. If a certificate-key pair is not present, a random new one will be generated.If you specify a private key value for this variable, it is recommended that you vault encrypt the key, as described in Encrypting content with Ansible Vault.
If these variables are set,
ha_cluster_regenerate_keysis ignored for this certificate-key pair.The default value of these variables is null.
ha_cluster_pcsd_certificates(RHEL 9.2 and later) Creates a
pcsdprivate key and certificate using thecertificateRHEL system role.If your system is not configured with a
pcsdprivate key and certificate, you can create them in one of two ways:-
Set the
ha_cluster_pcsd_certificatesvariable. When you set theha_cluster_pcsd_certificatesvariable, thecertificateRHEL system role is used internally and it creates the private key and certificate forpcsdas defined. -
Do not set the
ha_cluster_pcsd_public_key_src,ha_cluster_pcsd_private_key_src, or theha_cluster_pcsd_certificatesvariables. If you do not set any of these variables, theha_clusterRHEL system role will createpcsdcertificates by means ofpcsditself. The value ofha_cluster_pcsd_certificatesis set to the value of the variablecertificate_requestsas specified in thecertificateRHEL system role. For more information about thecertificateRHEL system role, see Requesting certificates using RHEL system roles.
The following operational considerations apply to the use of the
ha_cluster_pcsd_certificates variable:
Unless you are using IPA and joining the systems to an IPA domain, the
certificateRHEL system role creates self-signed certificates. In this case, you must explicitly configure trust settings outside of the context of RHEL system roles. System roles do not support configuring trust settings. -
When you set the
ha_cluster_pcsd_certificatesvariable, do not set theha_cluster_pcsd_public_key_srcandha_cluster_pcsd_private_key_srcvariables. -
When you set the
ha_cluster_pcsd_certificatesvariable,ha_cluster_regenerate_keysis ignored for this certificate - key pair.
The default value of this variable is
[].For an example
ha_clusterRHEL system role playbook that creates TLS certificates and key files in a high availability cluster, see Creating pcsd TLS certificates and key files for a high availability cluster.ha_cluster_regenerate_keys-
A boolean flag which, when set to
true, determines that pre-shared keys and TLS certificates will be regenerated. For more information about when keys and certificates will be regenerated, see the descriptions of theha_cluster_corosync_key_src,ha_cluster_pacemaker_key_src,ha_cluster_fence_virt_key_src,ha_cluster_pcsd_public_key_src, andha_cluster_pcsd_private_key_srcvariables. -
The default value of this variable is
false. ha_cluster_pcs_permission_listConfigures permissions to manage a cluster using
pcsd. The items you configure with this variable are as follows:-
type-userorgroup -
name- user or group name allow_list- Allowed actions for the specified user or group:-
read- View cluster status and settings -
write- Modify cluster settings except permissions and ACLs -
grant- Modify cluster permissions and ACLs -
full- Unrestricted access to a cluster including adding and removing nodes and access to keys and certificates
-
-
The structure of the
ha_cluster_pcs_permission_list variable and its default values are as follows:
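A sketch of the structure with its commonly documented defaults; verify the current defaults in the README.md file on your control node:

ha_cluster_pcs_permission_list:
  - type: group
    name: haclient
    allow_list:
      - grant
      - read
      - write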
ha_cluster_cluster_name
The name of the cluster. This is a string value with a default of my-cluster.

ha_cluster_transport
-
type(optional) - Transport type:knet,udp, orudpu. Theudpandudputransport types support only one link. Encryption is always disabled forudpandudpu. Defaults toknetif not specified. -
options(optional) - List of name-value dictionaries with transport options. -
links(optional) - List of list of name-value dictionaries. Each list of name-value dictionaries holds options for one Corosync link. It is recommended that you set thelinknumbervalue for each link. Otherwise, the first list of dictionaries is assigned by default to the first link, the second one to the second link, and so on. -
compression(optional) - List of name-value dictionaries configuring transport compression. Supported only with theknettransport type. -
crypto(optional) - List of name-value dictionaries configuring transport encryption. By default, encryption is enabled. Supported only with theknettransport type.
-
For a list of allowed options, see the
pcs -h cluster setuphelp page or thesetupdescription in theclustersection of thepcs(8) man page. For more detailed descriptions, see thecorosync.conf(5) man page.The structure of the
ha_cluster_transport variable is as follows:
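A minimal sketch of the structure, using placeholder option names and values:

ha_cluster_transport:
  type: knet
  options:
    - name: option1_name
      value: option1_value
  links:
    -
      - name: linknumber
        value: 1
      - name: link1_option_name
        value: link1_option_value
  compression:
    - name: compression1_name
      value: compression1_value
  crypto:
    - name: crypto1_name
      value: crypto1_value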
For an example ha_cluster RHEL system role playbook that configures a transport method, see Configuring Corosync values in a high availability cluster.

ha_cluster_totem
pcs -h cluster setuphelp page or thesetupdescription in theclustersection of thepcs(8) man page. For a more detailed description, see thecorosync.conf(5) man page.The structure of the
ha_cluster_totemvariable is as follows:Copy to Clipboard Copied! Toggle word wrap Toggle overflow For an example
ha_clusterRHEL system role playbook that configures a Corosync totem, see Configuring Corosync values in a high availability cluster.ha_cluster_quorum(RHEL 9.1 and later) Configures cluster quorum. You can configure the following items for cluster quorum:
-
options(optional) - List of name-value dictionaries configuring quorum. Allowed options are:auto_tie_breaker,last_man_standing,last_man_standing_window, andwait_for_all. For information about quorum options, see thevotequorum(5) man page. device(optional) - (RHEL 9.2 and later) Configures the cluster to use a quorum device. By default, no quorum device is used.-
model (mandatory) - Specifies a quorum device model. Only net is supported.
model_options (optional) - List of name-value dictionaries configuring the specified quorum device model. For model net, you must specify the host and algorithm options. Use the pcs-address option to set a custom pcsd address and port to connect to the qnetd host. If you do not specify this option, the role connects to the default pcsd port on the host.
generic_options(optional) - List of name-value dictionaries setting quorum device options that are not model specific. heuristics_options(optional) - List of name-value dictionaries configuring quorum device heuristics.For information about quorum device options, see the
corosync-qdevice(8) man page. The generic options aresync_timeoutandtimeout. For modelnetoptions see thequorum.device.netsection. For heuristics options, see thequorum.device.heuristicssection.To regenerate a quorum device TLS certificate, set the
ha_cluster_regenerate_keysvariable totrue.
-
-
The structure of the
ha_cluster_quorum variable is as follows:
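A minimal sketch of the structure; the host, algorithm, and other values are illustrative placeholders:

ha_cluster_quorum:
  options:
    - name: auto_tie_breaker
      value: 1
  device:
    model: net
    model_options:
      - name: host
        value: qnetd.example.com
      - name: algorithm
        value: lms
    generic_options:
      - name: sync_timeout
        value: 5000
    heuristics_options:
      - name: mode
        value: sync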
For an example ha_cluster RHEL system role playbook that configures cluster quorum, see Configuring Corosync values in a high availability cluster. For an example ha_cluster RHEL system role playbook that configures a cluster using a quorum device, see Configuring a high availability cluster using a quorum device.

ha_cluster_sbd_enabled
(RHEL 9.1 and later) A boolean flag which determines whether the cluster can use the SBD node fencing mechanism. The default value of this variable is
false. For exampleha_clustersystem role playbooks that enable SBD, see Configuring a high availability cluster with SBD node fencing by using the ha_cluster_node_options variable and Configuring a high availability cluster with SBD node fencing by using the ha_cluster variable. ha_cluster_sbd_options(RHEL 9.1 and later) List of name-value dictionaries specifying SBD options. For information about these options, see the
Configuration via environmentsection of thesbd(8) man page.Supported options are:
-
delay-start- defaults tofalse, documented asSBD_DELAY_START -
startmode- defaults toalways, documented asSBD_START_MODE -
timeout-action- defaults toflush,reboot, documented asSBD_TIMEOUT_ACTION -
watchdog-timeout- defaults to5, documented asSBD_WATCHDOG_TIMEOUT
-
Watchdog and SBD devices can be configured on a node to node basis in one of two variables:
-
ha_cluster_node_options, which you define in a playbook file (RHEL 9.5 and later). For an exampleha_clusterRHEL system role playbook that uses theha_cluster_node_optionsvariable to configure node by node SBD options, see Configuring a high availability cluster with SBD node fencing by using the ha_cluster_node_options variable. -
ha_cluster, which you define in an inventory file. For an example procedure that configures node to node SBD options in an inventory file, see Configuring a high availability cluster with SBD node fencing by using the ha_cluster variable.
-
ha_cluster_cluster_propertiesList of sets of cluster properties for Pacemaker cluster-wide configuration. Only one set of cluster properties is supported.
The structure of a set of cluster properties is as follows:
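A minimal sketch of the structure, using placeholder property names and values:

ha_cluster_cluster_properties:
  - attrs:
      - name: property1_name
        value: property1_value
      - name: property2_name
        value: property2_value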
By default, no properties are set.
The following example playbook configures a cluster consisting of
node1 and node2 and sets the stonith-enabled and no-quorum-policy cluster properties, as shown in the sketch below.
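A minimal sketch of such a playbook; the cluster name, password placeholder, and property values shown are illustrative:

- name: Configure cluster properties
  hosts: node1 node2
  vars:
    ha_cluster_cluster_name: my-new-cluster
    ha_cluster_hacluster_password: <password>
    ha_cluster_manage_firewall: true
    ha_cluster_manage_selinux: true
    ha_cluster_cluster_properties:
      - attrs:
          - name: stonith-enabled
            value: 'true'
          - name: no-quorum-policy
            value: stop
  roles:
    - rhel-system-roles.ha_cluster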
ha_cluster_node_options
(RHEL 9.4 and later) This variable defines settings which vary from one cluster node to another. It sets the options for the specified nodes, but does not specify which nodes form the cluster. You specify which nodes form the cluster with the hosts parameter in an inventory or a playbook.

The items you configure with this variable are as follows:
-
node_name(mandatory) - Name of the node for which to define Pacemaker node attributes. It must match a name defined for a node. -
pcs_address(optional) - (RHEL 9.5 and later) Address used bypcsto communicate with the node. You can specify a name, a FQDN or an IP address. You can specify a port as well. -
corosync_addresses(optional) - (RHEL 9.5 and later) List of addresses used by Corosync. All nodes must have the same number of addresses. The order of the addresses must be the same for all nodes, so that the addresses belonging to a particular link are specified in the same position for all nodes. -
sbd_watchdog_modules(optional) - (RHEL 9.5 and later) Watchdog kernel modules to be loaded, which create/dev/watchdog*devices. Defaults to an empty list if not set. -
sbd_watchdog_modules_blocklist(optional) - (RHEL 9.5 and later) Watchdog kernel modules to be unloaded and blocked. Defaults to an empty list if not set. -
sbd_watchdog(optional) - (RHEL 9.5 and later) Watchdog device to be used by SBD. Defaults to/dev/watchdogif not set. -
sbd_devices(optional) - (RHEL 9.5 and later) Devices to use for exchanging SBD messages and for monitoring. Defaults to an empty list if not set. Always refer to the devices using the long, stable device name (/dev/disk/by-id/). -
attributes(optional) - List of sets of Pacemaker node attributes for the node. Currently, only one set is supported. The first set is used and the rest are ignored. -
utilization(optional) - (RHEL 9.5 and later) List of sets of the node’s utilization. The field value must be an integer. Currently, only one set is supported. The first set is used and the rest are ignored.
-
The structure of the
ha_cluster_node_options variable is as follows:
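A minimal sketch of the structure; node names, addresses, watchdog modules, and devices are placeholders:

ha_cluster_node_options:
  - node_name: node1
    pcs_address: node1-address
    corosync_addresses:
      - 192.168.1.11
      - 192.168.2.11
    sbd_watchdog_modules:
      - iTCO_wdt
    sbd_watchdog: /dev/watchdog1
    sbd_devices:
      - /dev/disk/by-id/000001
    attributes:
      - attrs:
          - name: attribute1_name
            value: attribute1_value
    utilization:
      - attrs:
          - name: utilization1_name
            value: 1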
By default, no node options are defined.

For an example
ha_clusterRHEL system role playbook that includes node options configuration, see Configuring a high availability cluster with node attributes.ha_cluster_resource_primitivesThis variable defines pacemaker resources configured by the RHEL system role, including fencing resources. You can configure the following items for each resource:
-
id(mandatory) - ID of a resource. -
agent(mandatory) - Name of a resource or fencing agent, for exampleocf:pacemaker:Dummyorstonith:fence_xvm. It is mandatory to specifystonith:for STONITH agents. For resource agents, it is possible to use a short name, such asDummy, instead ofocf:pacemaker:Dummy. However, if several agents with the same short name are installed, the role will fail as it will be unable to decide which agent should be used. Therefore, it is recommended that you use full names when specifying a resource agent. -
instance_attrs(optional) - List of sets of the resource’s instance attributes. Currently, only one set is supported. The exact names and values of attributes, as well as whether they are mandatory or not, depend on the resource or fencing agent. -
meta_attrs(optional) - List of sets of the resource’s meta attributes. Currently, only one set is supported. -
copy_operations_from_agent(optional) - (RHEL 9.3 and later) Resource agents usually define default settings for resource operations, such asintervalandtimeout, optimized for the specific agent. If this variable is set totrue, then those settings are copied to the resource configuration. Otherwise, clusterwide defaults apply to the resource. If you also define resource operation defaults for the resource with theha_cluster_resource_operation_defaultsrole variable, you can set this tofalse. The default value of this variable istrue. operations(optional) - List of the resource’s operations.-
action(mandatory) - Operation action as defined by pacemaker and the resource or fencing agent. -
attrs(mandatory) - Operation options, at least one option must be specified. (RHEL 9.5 and later)
-
-
utilization(optional) - List of sets of the resource’s utilization. Thevaluefield must be an integer. Only one set is supported, so the first set is used and the rest are ignored.
-
The structure of the resource definition that you configure with the
ha_cluster RHEL system role is as follows:
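A minimal sketch of the structure, using placeholder IDs and attribute names; the agents shown are the examples mentioned above:

ha_cluster_resource_primitives:
  - id: resource1-id
    agent: ocf:pacemaker:Dummy
    instance_attrs:
      - attrs:
          - name: attribute1_name
            value: attribute1_value
    meta_attrs:
      - attrs:
          - name: meta_attribute1_name
            value: meta_attribute1_value
    copy_operations_from_agent: true
    operations:
      - action: monitor
        attrs:
          - name: interval
            value: 30s
  - id: fence1-id
    agent: stonith:fence_xvm
    instance_attrs:
      - attrs:
          - name: attribute2_name
            value: attribute2_value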
By default, no resources are defined.

For an example
ha_clusterRHEL system role playbook that includes resource configuration, see Configuring a high availability cluster with fencing and resources.ha_cluster_resource_groupsThis variable defines pacemaker resource groups configured by the system role. You can configure the following items for each resource group:
-
id(mandatory) - ID of a group. -
resources(mandatory) - List of the group’s resources. Each resource is referenced by its ID and the resources must be defined in theha_cluster_resource_primitivesvariable. At least one resource must be listed. -
meta_attrs(optional) - List of sets of the group’s meta attributes. Currently, only one set is supported.
-
The structure of the resource group definition that you configure with the
ha_cluster RHEL system role is as follows:
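A minimal sketch of the structure, using placeholder IDs:

ha_cluster_resource_groups:
  - id: group1-id
    resources:
      - resource1-id
      - resource2-id
    meta_attrs:
      - attrs:
          - name: group_meta_attribute1_name
            value: group_meta_attribute1_value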
By default, no resource groups are defined.

For an example
ha_clusterRHEL system role playbook that includes resource group configuration, see Configuring a high availability cluster with fencing and resources.ha_cluster_resource_clonesThis variable defines pacemaker resource clones configured by the system role. You can configure the following items for a resource clone:
-
resource_id(mandatory) - Resource to be cloned. The resource must be defined in theha_cluster_resource_primitivesvariable or theha_cluster_resource_groupsvariable. -
promotable(optional) - Indicates whether the resource clone to be created is a promotable clone, indicated astrueorfalse. -
id(optional) - Custom ID of the clone. If no ID is specified, it will be generated. A warning will be displayed if this option is not supported by the cluster. -
meta_attrs(optional) - List of sets of the clone’s meta attributes. Currently, only one set is supported.
-
The structure of the resource clone definition that you configure with the
ha_cluster RHEL system role is as follows:
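A minimal sketch of the structure, using placeholder IDs:

ha_cluster_resource_clones:
  - resource_id: resource-to-be-cloned
    promotable: true
    id: custom-clone-id
    meta_attrs:
      - attrs:
          - name: clone_meta_attribute1_name
            value: clone_meta_attribute1_value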
By default, no resource clones are defined.

For an example
ha_clusterRHEL system role playbook that includes resource clone configuration, see Configuring a high availability cluster with fencing and resources.ha_cluster_resource_defaults(RHEL 9.3 and later) This variable defines sets of resource defaults. You can define multiple sets of defaults and apply them to resources of specific agents using rules. The defaults you specify with the
ha_cluster_resource_defaultsvariable do not apply to resources which override them with their own defined values.Only meta attributes can be specified as defaults.
You can configure the following items for each defaults set:
-
id(optional) - ID of the defaults set. If not specified, it is autogenerated. -
rule(optional) - Rule written usingpcssyntax defining when and for which resources the set applies. For information on specifying a rule, see theresource defaults set createsection of thepcs(8) man page. -
score(optional) - Weight of the defaults set. -
attrs(optional) - Meta attributes applied to resources as defaults.
-
The structure of the
ha_cluster_resource_defaults variable is as follows:
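A minimal sketch of the structure, assuming the defaults sets are listed under a meta_attrs key (only meta attributes can be specified as defaults); the ID, rule, and attribute values are placeholders:

ha_cluster_resource_defaults:
  meta_attrs:
    - id: defaults-set-1-id
      rule: <rule_expression>
      score: INFINITY
      attrs:
        - name: meta_attribute1_name
          value: meta_attribute1_value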
For an example ha_cluster RHEL system role playbook that configures resource defaults, see Configuring a high availability cluster with resource and resource operation defaults.

ha_cluster_resource_operation_defaults
ha_cluster_resource_operation_defaultsvariable do not apply to resource operations which override them with their own defined values. By default, theha_clusterRHEL system role configures resources to define their own values for resource operations. For information about overriding these defaults with theha_cluster_resource_operations_defaultsvariable, see the description of thecopy_operations_from_agentitem inha_cluster_resource_primitives.Only meta attributes can be specified as defaults.
The structure of the
ha_cluster_resource_operations_defaultsvariable is the same as the structure for theha_cluster_resource_defaultsvariable, with the exception of how you specify a rule. For information about specifying a rule to describe the resource operation to which a set applies, see theresource op defaults set createsection of thepcs(8) man page.ha_cluster_stonith_levels(RHEL 9.4 and later) This variable defines STONITH levels, also known as fencing topology. Fencing levels configure a cluster to use multiple devices to fence nodes. You can define alternative devices in case one device fails and you can require multiple devices to all be executed successfully to consider a node successfully fenced. For more information on fencing levels, see Configuring fencing levels in Configuring and managing high availability clusters.
You can configure the following items when defining fencing levels:
-
level(mandatory) - Order in which to attempt the fencing level. Pacemaker attempts levels in ascending order until one succeeds. -
target(optional) - Name of a node this level applies to. You must specify one of the following three selections:
-
target_pattern- POSIX extended regular expression matching the names of the nodes this level applies to. -
target_attribute- Name of a node attribute that is set for the node this level applies to. -
target_attributeandtarget_value- Name and value of a node attribute that is set for the node this level applies to.
-
resource_ids (mandatory) - List of fencing resources that must all be tried for this level.

By default, no fencing levels are defined.
-
The structure of the fencing levels definition that you configure with the
ha_cluster RHEL system role is as follows:
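A minimal sketch of the structure, using placeholder node and fencing resource names:

ha_cluster_stonith_levels:
  - level: 1
    target: node1
    resource_ids:
      - fence-device-1
  - level: 2
    target: node1
    resource_ids:
      - fence-device-2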
For an example ha_cluster RHEL system role playbook that configures fencing levels, see Configuring a high availability cluster with fencing levels.

ha_cluster_constraints_location
This variable defines resource location constraints. Resource location constraints indicate which nodes a resource can run on. You can specify a resource by a resource ID or by a pattern, which can match more than one resource. You can specify a node by a node name or by a rule.
You can configure the following items for a resource location constraint:
-
resource(mandatory) - Specification of a resource the constraint applies to. -
node(mandatory) - Name of a node the resource should prefer or avoid. -
id(optional) - ID of the constraint. If not specified, it will be autogenerated. options(optional) - List of name-value dictionaries.score- Sets the weight of the constraint.-
A positive
scorevalue means the resource prefers running on the node. -
A negative
scorevalue means the resource should avoid running on the node. -
A
scorevalue of-INFINITYmeans the resource must avoid running on the node. -
If
scoreis not specified, the score value defaults toINFINITY.
By default no resource location constraints are defined.
The structure of a resource location constraint specifying a resource ID and node name is as follows:
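A minimal sketch of the structure, using placeholder IDs and an illustrative score:

ha_cluster_constraints_location:
  - resource:
      id: resource1-id
    node: node1
    id: constraint1-id
    options:
      - name: score
        value: 100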
The items that you configure for a resource location constraint that specifies a resource pattern are the same items that you configure for a resource location constraint that specifies a resource ID, with the exception of the resource specification itself. The item that you specify for the resource specification is as follows:
-
pattern(mandatory) - POSIX extended regular expression resource IDs are matched against.
-
The structure of a resource location constraint specifying a resource pattern and node name is as follows:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow You can configure the following items for a resource location constraint that specifies a resource ID and a rule:
resource(mandatory) - Specification of a resource the constraint applies to.-
id(mandatory) - Resource ID. -
role(optional) - The resource role to which the constraint is limited:Started,Unpromoted,Promoted.
-
-
rule(mandatory) - Constraint rule written usingpcssyntax. For further information, see theconstraint locationsection of thepcs(8) man page. - Other items to specify have the same meaning as for a resource constraint that does not specify a rule.
The structure of a resource location constraint that specifies a resource ID and a rule is as follows:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow The items that you configure for a resource location constraint that specifies a resource pattern and a rule are the same items that you configure for a resource location constraint that specifies a resource ID and a rule, with the exception of the resource specification itself. The item that you specify for the resource specification is as follows:
-
pattern(mandatory) - POSIX extended regular expression resource IDs are matched against.
-
The structure of a resource location constraint that specifies a resource pattern and a rule is as follows:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow For an example
ha_clusterRHEL system role playbook that creates a cluster with resource constraints, see Configuring a high availability cluster with resource constraints.ha_cluster_constraints_colocationThis variable defines resource colocation constraints. Resource colocation constraints indicate that the location of one resource depends on the location of another one. There are two types of colocation constraints: a simple colocation constraint for two resources, and a set colocation constraint for multiple resources.
You can configure the following items for a simple resource colocation constraint:
resource_follower(mandatory) - A resource that should be located relative toresource_leader.-
id(mandatory) - Resource ID. -
role(optional) - The resource role to which the constraint is limited:Started,Unpromoted,Promoted.
-
resource_leader(mandatory) - The cluster will decide where to put this resource first and then decide where to putresource_follower.-
id(mandatory) - Resource ID. -
role(optional) - The resource role to which the constraint is limited:Started,Unpromoted,Promoted.
-
-
id(optional) - ID of the constraint. If not specified, it will be autogenerated. options(optional) - List of name-value dictionaries.score- Sets the weight of the constraint.-
Positive
scorevalues indicate the resources should run on the same node. -
Negative
scorevalues indicate the resources should run on different nodes. -
A
scorevalue of+INFINITYindicates the resources must run on the same node. -
A
scorevalue of-INFINITYindicates the resources must run on different nodes. -
If
scoreis not specified, the score value defaults toINFINITY.
By default no resource colocation constraints are defined.
The structure of a simple resource colocation constraint is as follows:
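A minimal sketch of the structure, using placeholder IDs and illustrative roles and score:

ha_cluster_constraints_colocation:
  - resource_follower:
      id: resource1-id
      role: Started
    resource_leader:
      id: resource2-id
      role: Promoted
    id: constraint1-id
    options:
      - name: score
        value: -5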
You can configure the following items for a resource set colocation constraint:
resource_sets(mandatory) - List of resource sets.-
resource_ids(mandatory) - List of resources in a set. -
options(optional) - List of name-value dictionaries fine-tuning how resources in the sets are treated by the constraint.
-
-
id(optional) - Same values as for a simple colocation constraint. -
options(optional) - Same values as for a simple colocation constraint.
The structure of a resource set colocation constraint is as follows:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow For an example
ha_clusterRHEL system role playbook that creates a cluster with resource constraints, see Configuring a high availability cluster with resource constraints.ha_cluster_constraints_orderThis variable defines resource order constraints. Resource order constraints indicate the order in which certain resource actions should occur. There are two types of resource order constraints: a simple order constraint for two resources, and a set order constraint for multiple resources.
You can configure the following items for a simple resource order constraint:
resource_first(mandatory) - Resource that theresource_thenresource depends on.-
id(mandatory) - Resource ID. -
action(optional) - The action that must complete before an action can be initiated for theresource_thenresource. Allowed values:start,stop,promote,demote.
-
resource_then(mandatory) - The dependent resource.-
id(mandatory) - Resource ID. -
action(optional) - The action that the resource can execute only after the action on theresource_firstresource has completed. Allowed values:start,stop,promote,demote.
-
-
id(optional) - ID of the constraint. If not specified, it will be autogenerated. -
options(optional) - List of name-value dictionaries.
By default no resource order constraints are defined.
The structure of a simple resource order constraint is as follows:
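A minimal sketch of the structure, using placeholder IDs; the symmetrical option is shown only as an illustrative name-value entry:

ha_cluster_constraints_order:
  - resource_first:
      id: resource1-id
      action: start
    resource_then:
      id: resource2-id
      action: start
    id: constraint1-id
    options:
      - name: symmetrical
        value: 'false'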
You can configure the following items for a resource set order constraint:
resource_sets(mandatory) - List of resource sets.-
resource_ids(mandatory) - List of resources in a set. -
options(optional) - List of name-value dictionaries fine-tuning how resources in the sets are treated by the constraint.
-
-
id(optional) - Same values as for a simple order constraint. -
options(optional) - Same values as for a simple order constraint.
The structure of a resource set order constraint is as follows:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow For an example
ha_clusterRHEL system role playbook that creates a cluster with resource constraints, see Configuring a high availability cluster with resource constraints.ha_cluster_constraints_ticketThis variable defines resource ticket constraints. Resource ticket constraints indicate the resources that depend on a certain ticket. There are two types of resource ticket constraints: a simple ticket constraint for one resource, and a ticket order constraint for multiple resources.
You can configure the following items for a simple resource ticket constraint:
resource(mandatory) - Specification of a resource the constraint applies to.-
id(mandatory) - Resource ID. -
role(optional) - The resource role to which the constraint is limited:Started,Unpromoted,Promoted.
-
-
ticket(mandatory) - Name of a ticket the resource depends on. -
id(optional) - ID of the constraint. If not specified, it will be autogenerated. options(optional) - List of name-value dictionaries.-
loss-policy(optional) - Action to perform on the resource if the ticket is revoked.
-
By default no resource ticket constraints are defined.
The structure of a simple resource ticket constraint is as follows:
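A minimal sketch of the structure, using a placeholder resource ID and ticket name:

ha_cluster_constraints_ticket:
  - resource:
      id: resource1-id
    ticket: ticket1
    id: constraint1-id
    options:
      - name: loss-policy
        value: stop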
You can configure the following items for a resource set ticket constraint:
resource_sets(mandatory) - List of resource sets.-
resource_ids(mandatory) - List of resources in a set. -
options(optional) - List of name-value dictionaries fine-tuning how resources in the sets are treated by the constraint.
-
-
ticket(mandatory) - Same value as for a simple ticket constraint. -
id(optional) - Same value as for a simple ticket constraint. -
options(optional) - Same values as for a simple ticket constraint.
The structure of a resource set ticket constraint is as follows:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow For an example
ha_clusterRHEL system role playbook that creates a cluster with resource constraints, see Configuring a high availability cluster with resource constraints.ha_cluster_acls(RHEL 9.5 and later) This variable defines ACL roles, users and groups.
You can configure the following items for
acl_roles:-
id(mandatory) - ID of an ACL role. -
description(optional) - Description of the ACL role. permissions(optional) - List of ACL role permissions.-
kind(mandatory) - The access being granted. Allowed values areread,write, anddeny. -
xpath(optional) - An XPath specification selecting an XML element in the CIB to which the permission applies. It is mandatory to specify exactly one of the items:xpathorreference. -
reference(optional) - The ID of an XML element in the CIB to which the permission applies. It is mandatory to specify exactly one of the items: xpath or reference. The ID must exist.
-
-
You can configure the following items for
acl_users:-
id(mandatory) - ID of an ACL user. -
roles(optional) - List of ACL Role IDs assigned to the user.
-
You can configure the following items for
acl_group:-
id(mandatory) - ID of an ACL group. -
roles(optional) - List of ACL Role IDs assigned to the group.
-
The structure of an ACL definition is as follows:
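A minimal sketch of the structure; the IDs and XPath are placeholders, and the plural acl_groups key name is assumed here:

ha_cluster_acls:
  acl_roles:
    - id: acl-role-operator
      description: Read-only access to cluster status
      permissions:
        - kind: read
          xpath: /cib
  acl_users:
    - id: alice
      roles:
        - acl-role-operator
  acl_groups:
    - id: operators
      roles:
        - acl-role-operator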
To enable ACLs in the cluster, you must configure the enable-acl cluster property:

ha_cluster_cluster_properties:
  - attrs:
      - name: enable-acl
        value: 'true'

For an example
ha_clusterRHEL system role playbook that creates a cluster with ACL roles, users, and groups, see Configuring a high availability cluster that implements access control lists (ACLS) by using the RHEL system role.ha_cluster_alerts(RHEL 9.5 and later) This variable defines Pacemaker alerts.
NoteThe
ha_clusterrole configures the cluster to call external programs to handle alerts. You must provide the programs and distribute them to cluster nodes.You can configure the following items for
alerts:-
id(mandatory) - ID of an alert -
path (mandatory) - Path to the alert agent executable.
description(optional) - Description of the alert. -
instance_attrs (optional) - List of sets of the alert's instance attributes. Only one set is supported. The first set is used and the rest are ignored.
meta_attrs (optional) - List of sets of the alert's meta attributes. Only one set is supported. The first set is used and the rest are ignored.
recipients (optional) - List of the alert's recipients.
-
You can configure the following items for
recipients:-
value(mandatory) - Value of a recipient. -
id(optional) - ID of the recipient. -
description(optional) - Description of the recipient. -
instance_attrs(optional) - List of sets of the recipient’s instance attributes. Only one set is supported. The first set is used and the rest are ignored. -
meta_attrs(optional) - List of sets of the recipient’s meta attributes. Only one set is supported. The first set is used and the rest are ignored.
-
The structure of an alert definition is as follows:
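A minimal sketch of the structure; the alert agent path, recipient value, and attribute names are placeholders:

ha_cluster_alerts:
  - id: alert1
    path: /var/lib/pacemaker/alert_file.sh
    description: Log cluster events to a file
    instance_attrs:
      - attrs:
          - name: alert_attribute1_name
            value: alert_attribute1_value
    meta_attrs:
      - attrs:
          - name: alert_meta_attribute1_name
            value: alert_meta_attribute1_value
    recipients:
      - value: /var/log/cluster_alerts.log
        id: recipient1
        description: Destination file for the alert agent
        instance_attrs:
          - attrs:
              - name: recipient_attribute1_name
                value: recipient_attribute1_value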
For an example
ha_clusterRHEL system role playbook that configures a cluster with alerts, see Configuring alerts for a high availability cluster by using the ha_cluster RHEL system role.ha_cluster_qnetd(RHEL 9.2 and later) This variable configures a
qnetdhost which can then serve as an external quorum device for clusters.You can configure the following items for a
qnetdhost:-
present(optional) - Iftrue, configure aqnetdinstance on the host. Iffalse, removeqnetdconfiguration from the host. The default value isfalse. If you set thistrue, you must setha_cluster_cluster_presenttofalse. -
start_on_boot(optional) - Configures whether theqnetdinstance should start automatically on boot. The default value istrue. -
regenerate_keys(optional) - Set this variable totrueto regenerate theqnetdTLS certificate. If you regenerate the certificate, you must either re-run the role for each cluster to connect it to theqnetdhost again or runpcsmanually.
-
You cannot run
qnetdon a cluster node because fencing would disruptqnetdoperation.For an example
ha_clusterRHEL system role playbook that configures a cluster using a quorum device, see Configuring a cluster using a quorum device.
11.2. Specifying an inventory for the ha_cluster RHEL system role Copy linkLink copied to clipboard!
When configuring an HA cluster using the ha_cluster RHEL system role playbook, you configure the names and addresses of the nodes for the cluster in an inventory.
For each node in an inventory, you can optionally specify the following items:
-
node_name- the name of a node in a cluster. -
pcs_address- an address used bypcsto communicate with the node. It can be a name, FQDN or an IP address and it can include a port number. -
corosync_addresses- list of addresses used by Corosync. All nodes which form a particular cluster must have the same number of addresses. The order of the addresses must be the same for all nodes, so that the addresses belonging to a particular link are specified in the same position for all nodes.
The following example shows an inventory with targets node1 and node2. node1 and node2 must either be fully qualified domain names, or it must otherwise be possible to connect to the nodes under these names, for example because the names are resolvable through the /etc/hosts file.
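A minimal sketch of such an inventory; the node names, pcs addresses, and Corosync addresses are placeholders:

all:
  hosts:
    node1:
      ha_cluster:
        node_name: node-A
        pcs_address: node1-address
        corosync_addresses:
          - 192.168.1.11
          - 192.168.2.11
    node2:
      ha_cluster:
        node_name: node-B
        pcs_address: node2-address:2224
        corosync_addresses:
          - 192.168.1.12
          - 192.168.2.12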
In RHEL 9.1 and later, you can optionally configure watchdog and SBD devices for each node in an inventory. All SBD devices must be shared to and accessible from all nodes. Watchdog devices can be different for each node as well. For an example procedure that configures SBD node fencing in an inventory file, see Configuring a high availability cluster with SBD node fencing by using the ha_cluster variable.
11.3. Creating pcsd TLS certificates and key files for a high availability cluster Copy linkLink copied to clipboard!
You can use the ha_cluster RHEL system role to create Transport Layer Security (TLS) certificates and key files in a high availability cluster. When you run this playbook, the ha_cluster RHEL system role uses the certificate RHEL system role internally to manage TLS certificates.
The connection between cluster nodes is secured using TLS encryption. By default, the pcsd daemon generates self-signed certificates. For many deployments, however, you may want to replace the default certificates with certificates issued by a certificate authority of your company and apply your company certificate policies for pcsd.
The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
-
The account you use to connect to the managed nodes has
sudopermissions on them. - The systems that you will use as your cluster members have active subscription coverage for RHEL and the RHEL High Availability Add-On.
- The inventory file specifies the cluster nodes as described in Specifying an inventory for the ha_cluster RHEL system role.
- RHEL 9.2 and later
- For general information about creating an inventory file, see Preparing a control node on RHEL 9.
Procedure
Store your sensitive variables in an encrypted file:
Create the vault:
ansible-vault create ~/vault.yml
$ ansible-vault create ~/vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>Copy to Clipboard Copied! Toggle word wrap Toggle overflow After the
ansible-vault createcommand opens an editor, enter the sensitive data in the<key>: <value>format:cluster_password: <cluster_password>
cluster_password: <cluster_password>Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example,
~/playbook.yml, with the following content:
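A minimal sketch of such a playbook, assuming the vault file created in the previous step and a self-signed pcsd certificate named FILENAME:

- name: Create TLS certificates and key files in a high availability cluster
  hosts: node1 node2
  vars_files:
    - ~/vault.yml
  tasks:
    - name: Configure the cluster and its pcsd TLS certificate
      ansible.builtin.include_role:
        name: rhel-system-roles.ha_cluster
      vars:
        ha_cluster_cluster_name: my-new-cluster
        ha_cluster_hacluster_password: "{{ cluster_password }}"
        ha_cluster_manage_firewall: true
        ha_cluster_manage_selinux: true
        ha_cluster_pcsd_certificates:
          - name: FILENAME
            common_name: "{{ ansible_hostname }}"
            ca: self-sign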
The settings specified in the example playbook include the following:

ha_cluster_cluster_name: <cluster_name>
The name of the cluster you are creating.
ha_cluster_hacluster_password: <password>-
The password of the
haclusteruser. Thehaclusteruser has full access to a cluster. ha_cluster_manage_firewall: true-
A variable that determines whether the
ha_clusterRHEL system role manages the firewall. ha_cluster_manage_selinux: true-
A variable that determines whether the
ha_clusterRHEL system role manages the ports of the firewall high availability service using theselinuxRHEL system role. ha_cluster_pcsd_certificates: <certificate_properties>-
A variable that creates a self-signed
pcsdcertificate and private key files in/var/lib/pcsd. In this example, thepcsdcertificate has the file nameFILENAME.crtand the key file is namedFILENAME.key.
For details about all variables used in the playbook, see the
/usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.mdfile on the control node.Validate the playbook syntax:
ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml
$ ansible-playbook --syntax-check --ask-vault-pass ~/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
ansible-playbook --ask-vault-pass ~/playbook.yml
$ ansible-playbook --ask-vault-pass ~/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow
11.4. Configuring a high availability cluster running no resources Copy linkLink copied to clipboard!
You can use the ha_cluster system role to configure a basic cluster in a simple, automatic way. Once you have created a basic cluster, you can use the pcs command-line interface to configure the other cluster components and behaviors on a resource-by-resource basis.
This example configures a basic two-node cluster with no fencing configured using the minimum required parameters.
The ha_cluster system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
-
The account you use to connect to the managed nodes has
sudopermissions on them. - The systems that you will use as your cluster members have active subscription coverage for RHEL and the RHEL High Availability Add-On.
- The inventory file specifies the cluster nodes as described in Specifying an inventory for the ha_cluster system role. For general information about creating an inventory file, see Preparing a control node on RHEL 9.
Procedure
Store your sensitive variables in an encrypted file:
Create the vault:
ansible-vault create ~/vault.yml
$ ansible-vault create ~/vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>Copy to Clipboard Copied! Toggle word wrap Toggle overflow After the
ansible-vault createcommand opens an editor, enter the sensitive data in the<key>: <value>format:cluster_password: <cluster_password>
cluster_password: <cluster_password>Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example,
~/playbook.yml, with the following content:
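A minimal sketch of such a playbook, assuming the vault file created in the previous step:

- name: Create a basic two-node cluster
  hosts: node1 node2
  vars_files:
    - ~/vault.yml
  tasks:
    - name: Configure a cluster running no resources
      ansible.builtin.include_role:
        name: rhel-system-roles.ha_cluster
      vars:
        ha_cluster_cluster_name: my-new-cluster
        ha_cluster_hacluster_password: "{{ cluster_password }}"
        ha_cluster_manage_firewall: true
        ha_cluster_manage_selinux: true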
The settings specified in the example playbook include the following:

ha_cluster_cluster_name: <cluster_name>
The name of the cluster you are creating.
ha_cluster_hacluster_password: <password>-
The password of the
haclusteruser. Thehaclusteruser has full access to a cluster. ha_cluster_manage_firewall: true-
A variable that determines whether the
ha_clusterRHEL system role manages the firewall. ha_cluster_manage_selinux: true-
A variable that determines whether the
ha_clusterRHEL system role manages the ports of the firewall high availability service using theselinuxRHEL system role.
For details about all variables used in the playbook, see the
/usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.mdfile on the control node.Validate the playbook syntax:
ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml
$ ansible-playbook --syntax-check --ask-vault-pass ~/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
ansible-playbook --ask-vault-pass ~/playbook.yml
$ ansible-playbook --ask-vault-pass ~/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow
11.5. Configuring a high availability cluster with fencing and resources Copy linkLink copied to clipboard!
The specific components of a cluster configuration depend on your individual needs, which vary between sites. You can use the ha_cluster RHEL system role to configure a cluster with a fencing device, cluster resources, resource groups, and a cloned resource.
The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
-
The account you use to connect to the managed nodes has
sudopermissions on them. - The systems that you will use as your cluster members have active subscription coverage for RHEL and the RHEL High Availability Add-On.
- The inventory file specifies the cluster nodes as described in Specifying an inventory for the ha_cluster RHEL system role. For general information about creating an inventory file, see Preparing a control node on RHEL 9.
Procedure
Store your sensitive variables in an encrypted file:
Create the vault:
ansible-vault create ~/vault.yml
$ ansible-vault create ~/vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>Copy to Clipboard Copied! Toggle word wrap Toggle overflow After the
ansible-vault createcommand opens an editor, enter the sensitive data in the<key>: <value>format:cluster_password: <cluster_password>
cluster_password: <cluster_password>Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example,
~/playbook.yml, with the following content:Copy to Clipboard Copied! Toggle word wrap Toggle overflow The settings specified in the example playbook include the following:
ha_cluster_cluster_name: <cluster_name>- The name of the cluster you are creating.
ha_cluster_hacluster_password: <password>-
The password of the
haclusteruser. Thehaclusteruser has full access to a cluster. ha_cluster_manage_firewall: true-
A variable that determines whether the
ha_clusterRHEL system role manages the firewall. ha_cluster_manage_selinux: true-
A variable that determines whether the
ha_clusterRHEL system role manages the ports of the firewall high availability service using theselinuxRHEL system role. ha_cluster_resource_primitives: <cluster_resources>- A list of resource definitions for the Pacemaker resources configured by the ha_cluster RHEL system role, including fencing
ha_cluster_resource_groups: <resource_groups>-
A list of resource group definitions configured by the
ha_clusterRHEL system role. ha_cluster_resource_clones: <resource_clones>-
A list of resource clone definitions configured by the
ha_clusterRHEL system role.
For details about all variables used in the playbook, see the
/usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.mdfile on the control node.Validate the playbook syntax:
ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml
$ ansible-playbook --syntax-check --ask-vault-pass ~/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
ansible-playbook --ask-vault-pass ~/playbook.yml
$ ansible-playbook --ask-vault-pass ~/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow
11.6. Configuring a high availability cluster with resource and resource operation defaults Copy linkLink copied to clipboard!
In your cluster configuration, you can change the Pacemaker default values of a resource option for all resources. You can also change the default value for all resource operations in the cluster.
For information about changing the default value of a resource option, see Changing the default value of a resource option. For information about global resource operation defaults, see Configuring global resource operation defaults.
The following example procedure uses the ha_cluster RHEL system role to create a high availability cluster that defines resource and resource operation defaults.
The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
-
The account you use to connect to the managed nodes has
sudopermissions on them. - The systems that you will use as your cluster members have active subscription coverage for RHEL and the RHEL High Availability Add-On.
- The inventory file specifies the cluster nodes as described in Specifying an inventory for the ha_cluster RHEL system role.
- RHEL 9.3 and later
- For general information about creating an inventory file, see Preparing a control node on RHEL 9.
Procedure
Store your sensitive variables in an encrypted file:
Create the vault:
ansible-vault create ~/vault.yml
$ ansible-vault create ~/vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>Copy to Clipboard Copied! Toggle word wrap Toggle overflow After the
ansible-vault createcommand opens an editor, enter the sensitive data in the<key>: <value>format:cluster_password: <cluster_password>
cluster_password: <cluster_password>Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example,
~/playbook.yml, with the following content:Copy to Clipboard Copied! Toggle word wrap Toggle overflow The settings specified in the example playbook include the following:
ha_cluster_cluster_name: <cluster_name>- The name of the cluster you are creating.
ha_cluster_hacluster_password: <password>-
The password of the
haclusteruser. Thehaclusteruser has full access to a cluster. ha_cluster_manage_firewall: true-
A variable that determines whether the
ha_clusterRHEL system role manages the firewall. ha_cluster_manage_selinux: true-
A variable that determines whether the
ha_clusterRHEL system role manages the ports of the firewall high availability service using theselinuxRHEL system role. ha_cluster_resource_defaults: <resource_defaults>- A variable that defines sets of resource defaults.
ha_cluster_resource_operation_defaults: <resource_operation_defaults>- A variable that defines sets of resource operation defaults.
For details about all variables used in the playbook, see the
/usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.mdfile on the control node.Validate the playbook syntax:
ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml
$ ansible-playbook --syntax-check --ask-vault-pass ~/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
ansible-playbook --ask-vault-pass ~/playbook.yml
$ ansible-playbook --ask-vault-pass ~/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow
11.7. Configuring a high availability cluster with fencing levels Copy linkLink copied to clipboard!
You can use the ha_cluster RHEL system role to configure high availability clusters with fencing levels. With multiple fencing devices for a node, you need to define fencing levels for those devices to determine the order that Pacemaker will use the devices to attempt to fence a node.
The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
-
The account you use to connect to the managed nodes has
sudopermissions on them. - The systems that you will use as your cluster members have active subscription coverage for RHEL and the RHEL High Availability Add-On.
- RHEL 9.4 and later
- The inventory file specifies the cluster nodes as described in Specifying an inventory for the ha_cluster RHEL system role. For general information about creating an inventory file, see Preparing a control node on RHEL 9.
Procedure
Store your sensitive variables in an encrypted file:
Create the vault:
ansible-vault create ~/vault.yml
$ ansible-vault create ~/vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>Copy to Clipboard Copied! Toggle word wrap Toggle overflow After the
ansible-vault createcommand opens an editor, enter the sensitive data in the<key>: <value>format:cluster_password: <cluster_password> fence1_password: <fence1_password> fence2_password: <fence2_password>
cluster_password: <cluster_password> fence1_password: <fence1_password> fence2_password: <fence2_password>Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example,
~/playbook.yml. This example playbook file configures a cluster running thefirewalldandselinuxservices.Copy to Clipboard Copied! Toggle word wrap Toggle overflow The settings specified in the example playbook include the following:
ha_cluster_cluster_name: <cluster_name>- The name of the cluster you are creating.
ha_cluster_hacluster_password: <password>-
The password of the
haclusteruser. Thehaclusteruser has full access to a cluster. ha_cluster_manage_firewall: true-
A variable that determines whether the
ha_clusterRHEL system role manages the firewall. ha_cluster_manage_selinux: true-
A variable that determines whether the
ha_clusterRHEL system role manages the ports of the firewall high availability service using theselinuxRHEL system role. ha_cluster_resource_primitives: <cluster_resources>- A list of resource definitions for the Pacemaker resources configured by the ha_cluster RHEL system role, including fencing
ha_cluster_stonith_levels: <stonith_levels>- A variable that defines STONITH levels, also known as fencing topology, which configure a cluster to use multiple devices to fence nodes.
For details about all variables used in the playbook, see the
/usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.mdfile on the control node.Validate the playbook syntax:
ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml
$ ansible-playbook --syntax-check --ask-vault-pass ~/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
ansible-playbook --ask-vault-pass ~/playbook.yml
$ ansible-playbook --ask-vault-pass ~/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow
11.8. Configuring a high availability cluster with resource constraints Copy linkLink copied to clipboard!
When configuring a cluster, you can specify the behavior of the cluster resources to be in line with your application requirements. You can control the behavior of cluster resources by configuring resource constraints.
You can define the following categories of resource constraints:
- Location constraints, which determine which nodes a resource can run on. For information about location constraints, see Determining which nodes a resource can run on.
- Ordering constraints, which determine the order in which the resources are run. For information about ordering constraints, see Determining the order in which cluster resources are run.
- Colocation constraints, which specify that the location of one resource depends on the location of another resource. For information about colocation constraints, see Colocating cluster resources.
- Ticket constraints, which indicate the resources that depend on a particular Booth ticket. For information about Booth ticket constraints, see Multi-site Pacemaker clusters.
The following example procedure uses the ha_cluster RHEL system role to create a high availability cluster that includes resource location constraints, resource colocation constraints, resource order constraints, and resource ticket constraints.
The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
-
The account you use to connect to the managed nodes has
sudopermissions on them. - The systems that you will use as your cluster members have active subscription coverage for RHEL and the RHEL High Availability Add-On.
- The inventory file specifies the cluster nodes as described in Specifying an inventory for the ha_cluster RHEL system role. For general information about creating an inventory file, see Preparing a control node on RHEL 9.
Procedure
Store your sensitive variables in an encrypted file:
Create the vault:
ansible-vault create ~/vault.yml
$ ansible-vault create ~/vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>Copy to Clipboard Copied! Toggle word wrap Toggle overflow After the
ansible-vault createcommand opens an editor, enter the sensitive data in the<key>: <value>format:cluster_password: <cluster_password>
cluster_password: <cluster_password>Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example,
~/playbook.yml. A minimal sketch of such a playbook follows the settings list below. The settings specified in the example playbook include the following:
ha_cluster_cluster_name: <cluster_name>- The name of the cluster you are creating.
ha_cluster_hacluster_password: <password>-
The password of the
haclusteruser. Thehaclusteruser has full access to a cluster. ha_cluster_manage_firewall: true-
A variable that determines whether the
ha_clusterRHEL system role manages the firewall. ha_cluster_manage_selinux: true-
A variable that determines whether the
ha_clusterRHEL system role manages the ports of the firewall high availability service using theselinuxRHEL system role. ha_cluster_resource_primitives: <cluster_resources>- A list of resource definitions for the Pacemaker resources configured by the ha_cluster RHEL system role, including fencing
ha_cluster_constraints_location: <location_constraints>- A variable that defines resource location constraints.
ha_cluster_constraints_colocation: <colocation_constraints>- A variable that defines resource colocation constraints.
ha_cluster_constraints_order: <order_constraints>- A variable that defines resource order constraints.
ha_cluster_constraints_ticket: <ticket_constraints>- A variable that defines Booth ticket constraints.
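A minimal sketch of such a playbook, using two example resources and the constraint structures documented in the role README, might look like the following. The node and resource names, the ocf:pacemaker:Dummy agent, and the ticket name are placeholders, and fencing resources are omitted for brevity:
---
- name: Create a cluster with resource constraints
  hosts:
    - node1
    - node2
  vars_files:
    - vault.yml
  vars:
    ha_cluster_cluster_name: <cluster_name>
    ha_cluster_hacluster_password: "{{ cluster_password }}"
    ha_cluster_manage_firewall: true
    ha_cluster_manage_selinux: true
    ha_cluster_resource_primitives:
      - id: resource1
        agent: ocf:pacemaker:Dummy
      - id: resource2
        agent: ocf:pacemaker:Dummy
    ha_cluster_constraints_location:
      # Prefer running resource1 on node1
      - resource:
          id: resource1
        node: node1
        options:
          - name: score
            value: 100
    ha_cluster_constraints_colocation:
      # Keep resource2 on the same node as resource1
      - resource_leader:
          id: resource1
        resource_follower:
          id: resource2
        options:
          - name: score
            value: INFINITY
    ha_cluster_constraints_order:
      # Start resource1 before resource2
      - resource_first:
          id: resource1
        resource_then:
          id: resource2
    ha_cluster_constraints_ticket:
      # Stop resource1 if the Booth ticket is revoked
      - resource:
          id: resource1
        ticket: ticket1
        options:
          - name: loss-policy
            value: stop
  roles:
    - rhel-system-roles.ha_cluster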
Validate the playbook syntax:
ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml
$ ansible-playbook --syntax-check --ask-vault-pass ~/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
ansible-playbook --ask-vault-pass ~/playbook.yml
$ ansible-playbook --ask-vault-pass ~/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow
11.9. Configuring Corosync values in a high availability cluster Copy linkLink copied to clipboard!
You can use the ha_cluster RHEL system role to configure Corosync values in high availability clusters.
The corosync.conf file provides the cluster parameters used by Corosync, the cluster membership and messaging layer that Pacemaker is built on. For your system configuration, you can change some of the default parameters in the corosync.conf file. In general, you should not edit the corosync.conf file directly. You can, however, configure Corosync values by using the ha_cluster RHEL system role.
The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
-
The account you use to connect to the managed nodes has
sudopermissions on them. - RHEL 9.1 and later
- The systems that you will use as your cluster members have active subscription coverage for RHEL and the RHEL High Availability Add-On.
- The inventory file specifies the cluster nodes as described in Specifying an inventory for the ha_cluster RHEL system role. For general information about creating an inventory file, see Preparing a control node on RHEL 9.
Procedure
Store your sensitive variables in an encrypted file:
Create the vault:
ansible-vault create ~/vault.yml
$ ansible-vault create ~/vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>Copy to Clipboard Copied! Toggle word wrap Toggle overflow After the
ansible-vault createcommand opens an editor, enter the sensitive data in the<key>: <value>format:cluster_password: <cluster_password>
cluster_password: <cluster_password>Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example,
~/playbook.yml. A minimal sketch of such a playbook follows the settings list below. The settings specified in the example playbook include the following:
ha_cluster_cluster_name: <cluster_name>- The name of the cluster you are creating.
ha_cluster_hacluster_password: <password>-
The password of the
haclusteruser. Thehaclusteruser has full access to a cluster. ha_cluster_manage_firewall: true-
A variable that determines whether the
ha_clusterRHEL system role manages the firewall. ha_cluster_manage_selinux: true-
A variable that determines whether the
ha_clusterRHEL system role manages the ports of the firewall high availability service using theselinuxRHEL system role. ha_cluster_transport: <transport_method>- A variable that sets the cluster transport method.
ha_cluster_totem: <totem_options>- A variable that configures Corosync totem options.
ha_cluster_quorum: <quorum_options>- A variable that configures cluster quorum options.
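A minimal sketch of such a playbook might look like the following. The node names and the specific transport, totem, and quorum option values are placeholders; check the role README for the options your cluster supports:
---
- name: Create a cluster with custom Corosync values
  hosts:
    - node1
    - node2
  vars_files:
    - vault.yml
  vars:
    ha_cluster_cluster_name: <cluster_name>
    ha_cluster_hacluster_password: "{{ cluster_password }}"
    ha_cluster_manage_firewall: true
    ha_cluster_manage_selinux: true
    ha_cluster_transport:
      type: knet
      crypto:
        - name: cipher
          value: aes256
        - name: hash
          value: sha256
    ha_cluster_totem:
      options:
        - name: token
          value: 5000
    ha_cluster_quorum:
      options:
        - name: auto_tie_breaker
          value: 1
        - name: wait_for_all
          value: 1
  roles:
    - rhel-system-roles.ha_cluster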
For details about all variables used in the playbook, see the
/usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.mdfile on the control node.Validate the playbook syntax:
ansible-playbook --ask-vault-pass --syntax-check ~/playbook.yml
$ ansible-playbook --ask-vault-pass --syntax-check ~/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
ansible-playbook --ask-vault-pass ~/playbook.yml
$ ansible-playbook --ask-vault-pass ~/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow
11.10. Exporting a cluster configuration to create a RHEL system role playbook Copy linkLink copied to clipboard!
As of RHEL 9.6, you can use the ha_cluster RHEL system role to export the Corosync configuration of a cluster into ha_cluster variables that can be fed back to the role to recreate the same cluster.
If you did not use ha_cluster to create your cluster, or if you do not have access to the original playbook for the cluster, you can use this feature to build a new playbook for creating the cluster.
When you export a cluster’s configuration by using the ha_cluster RHEL system role, not all of the variables are exported. You must manually modify the configuration to account for these variables.
The following variables are present in the export:
-
ha_cluster_cluster_present -
ha_cluster_start_on_boot -
ha_cluster_cluster_name -
ha_cluster_transport -
ha_cluster_totem -
ha_cluster_quorum -
ha_cluster_node_options- Only the node_name, corosync_addresses, and pcs_address options are present.
The following variables are not present in the export:
-
ha_cluster_hacluster_password- This is a mandatory variable for the role but it cannot be extracted from existing clusters. -
ha_cluster_corosync_key_src, ha_cluster_pacemaker_key_src, and ha_cluster_fence_virt_key_src- These variables should contain paths to files with Corosync and Pacemaker keys. Since the keys themselves are not exported, these variables are not present in the export either. These keys should be unique for each cluster.
ha_cluster_regenerate_keys- You should decide whether to use existing keys or to generate new ones.
To export the current cluster configuration, run the ha_cluster RHEL system role and set ha_cluster_export_configuration: true. This triggers the export once the role finishes configuring a cluster or a qnetd host and stores it in the ha_cluster_facts variable.
By default, ha_cluster_cluster_present is set to true and ha_cluster_qnetd.present is set to false. These settings reconfigure your cluster on the specified hosts, remove the qnetd configuration from the specified hosts, and then export the configuration. To trigger the export without modifying an existing configuration, run the role with the ha_cluster_cluster_present and ha_cluster_qnetd variables set to null.
The following procedure:
-
Exports the cluster configuration from cluster node
node1into theha_cluster_factsvariable. -
Sets the
ha_cluster_cluster_presentandha_cluster_qnetdvariables to null to ensure that running this playbook does not modify the existing cluster configuration. -
Uses the Ansible debug module to display the content of
ha_cluster_facts. -
Saves the contents of
ha_cluster_factsto a file on the control node in a YAML format for you to write a playbook around it.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
-
The account you use to connect to the managed nodes has
sudopermissions on them. - You have previously configured the high availability cluster with the configuration to export.
- You have created an inventory file on the control node, as described in Preparing a control node on RHEL 9.
Procedure
Create a playbook file, for example,
~/playbook.yml. A minimal sketch of such a playbook follows the settings list below. The settings specified in the example playbook include the following:
hosts: node1- A node containing the cluster information to export.
ha_cluster_cluster_present: null- Setting to indicate that the cluster configuration will not be changed on the specified host.
ha_cluster_qnetd: null- Setting to indicate that the qnetd host configuration will not be changed on the specified host.
ha_cluster_export_configuration: true-
A variable that determines whether to export the current cluster configuration and store it in the
ha_cluster_factsvariable, which is generated by theha_cluster_infomodule. ha_cluster_facts- A variable that contains the exported cluster configuration.
delegate_to: localhost- Specifies the control node as the location for the exported configuration file.
content: "{{ ha_cluster_facts | to_nice_yaml(sort_keys=false) }"},dest: /path/to/file,mode: "0640"- Copies the configuration file in a YAML format to /path/to/file, setting the file permissions to 0640.
Write a playbook for your system using the variables you exported to /path/to/file on the control node.
You must add the
ha_cluster_hacluster_passwordvariable, as it is a required variable but is not present in the export. Optionally, add theha_cluster_corosync_key_src,ha_cluster_pacemaker_key_src,ha_cluster_fence_virt_key_src, andha_cluster_regenerate_keysvariables if your system requires them. These variables are never exported.Validate the playbook syntax:
ansible-playbook --syntax-check ~/playbook.yml
$ ansible-playbook --syntax-check ~/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
ansible-playbook ~/playbook.yml
$ ansible-playbook ~/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow
11.11. Configuring a high availability cluster that implements access control lists (ACLs) by using the ha_cluster RHEL system role Copy linkLink copied to clipboard!
You can use the ha_cluster RHEL system role to configure high availability clusters with access control lists (ACLs). With ACLs, you can grant permission for specific local users other than user hacluster to manage a Pacemaker cluster.
A common use case for this feature is to restrict unauthorized users from accessing business-sensitive information.
By default, ACLs are not enabled. Consequently, any member of the group haclient on all nodes has full local read and write access to the cluster configuration. Users who are not members of haclient have no access. When ACLs are enabled, however, even users who are members of the haclient group have access only to what has been granted to that user by the ACLs. The root and hacluster user accounts always have full access to the cluster configuration, even when ACLs are enabled.
When you set permissions for local users with ACLs, you create a role which defines the permissions for that role. You then assign that role to a user. If you assign multiple roles to the same user, any deny permission takes precedence, then write, then read.
The following example procedure uses the ha_cluster RHEL system role to create, in an automated fashion, a high availability cluster that implements ACLs to control access to the cluster configuration.
The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
-
The account you use to connect to the managed nodes has
sudopermissions on them. - The systems that you will use as your cluster members have active subscription coverage for RHEL and the RHEL High Availability Add-On.
- The inventory file specifies the cluster nodes as described in Specifying an inventory for the ha_cluster RHEL system role. For general information about creating an inventory file, see Preparing a control node on RHEL 9.
- RHEL 9.5 and later
Procedure
Store your sensitive variables in an encrypted file:
Create the vault:
ansible-vault create ~/vault.yml
$ ansible-vault create ~/vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>Copy to Clipboard Copied! Toggle word wrap Toggle overflow After the
ansible-vault createcommand opens an editor, enter the sensitive data in the<key>: <value>format:cluster_password: <cluster_password>
cluster_password: <cluster_password>Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example,
~/playbook.yml. A minimal sketch of such a playbook follows the settings list below. The settings specified in the example playbook include the following:
ha_cluster_cluster_name: <cluster_name>- The name of the cluster you are creating.
ha_cluster_hacluster_password: <password>-
The password of the
haclusteruser. Thehaclusteruser has full access to a cluster. ha_cluster_manage_firewall: true-
A variable that determines whether the
ha_clusterRHEL system role manages the firewall. ha_cluster_manage_selinux: true-
A variable that determines whether the
ha_clusterRHEL system role manages the ports of the firewall high availability service using theselinuxRHEL system role. ha_cluster_resource_primitives: <cluster resources>-
A list of resource definitions for the Pacemaker resources configured by the
ha_clusterRHEL system role, including fencing resources. ha_cluster_cluster_properties: <cluster properties>- A list of sets of cluster properties for Pacemaker cluster-wide configuration.
ha_cluster_acls: <dictionary>- A dictionary of ACL role, user, and group values.
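A minimal sketch of such a playbook might look like the following. The user name, role ID, and permission scope are placeholders, the fencing resources and cluster properties listed above are omitted for brevity, and the ACL keys shown follow the role README; verify the exact structure against the README on your control node:
---
- name: Create a cluster that uses ACLs
  hosts:
    - node1
    - node2
  vars_files:
    - vault.yml
  vars:
    ha_cluster_cluster_name: <cluster_name>
    ha_cluster_hacluster_password: "{{ cluster_password }}"
    ha_cluster_manage_firewall: true
    ha_cluster_manage_selinux: true
    ha_cluster_acls:
      acl_roles:
        # A role that can only read the cluster configuration
        - id: operator-read
          description: Read-only access to the CIB
          permissions:
            - kind: read
              xpath: /cib
      acl_users:
        # The local user "operator1" must exist on the cluster nodes
        - id: operator1
          roles:
            - operator-read
  roles:
    - rhel-system-roles.ha_cluster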
For details about all variables used in the playbook, see the
/usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.mdfile on the control node.Validate the playbook syntax:
ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml
$ ansible-playbook --syntax-check --ask-vault-pass ~/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
ansible-playbook --ask-vault-pass ~/playbook.yml
$ ansible-playbook --ask-vault-pass ~/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow
11.12. Configuring a high availability cluster with SBD node fencing by using the ha_cluster_node_options variable Copy linkLink copied to clipboard!
You can use the ha_cluster RHEL system role to configure high availability clusters with STONITH Block Device (SBD) fencing in an automated fashion.
You must configure a Red Hat high availability cluster with at least one fencing device to ensure the cluster-provided services remain available when a node in the cluster encounters a problem. If your environment does not allow for a remotely accessible power switch to fence a cluster node, you can configure fencing by using an SBD. This device provides a node fencing mechanism for Pacemaker-based clusters through the exchange of messages by means of shared block storage. SBD integrates with Pacemaker, a watchdog device and, optionally, shared storage to arrange for nodes to reliably self-terminate when fencing is required.
With ha_cluster, you can configure watchdog and SBD devices on a node-to-node basis by using one of two variables:
-
ha_cluster_node_options: (RHEL 9.5 and later) This is a single variable you define in a playbook file. It is a list of dictionaries where each dictionary defines options for one node. -
ha_cluster: (RHEL 9.1 and later) A dictionary that defines options for one node only. You configure theha_clustervariable in an inventory file. To set different values for each node, you define the variable separately for each node.
If both the ha_cluster_node_options and ha_cluster variables contain SBD options, those in ha_cluster_node_options have precedence.
This example procedure uses the ha_cluster_node_options variable in a playbook file to configure node addresses and SBD options on a per-node basis. For an example procedure that uses the ha_cluster variable in an inventory file, see Configuring a high availability cluster with SBD node fencing by using the ha_cluster variable.
The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
-
The account you use to connect to the managed nodes has
sudopermissions on them. - The systems that you will use as your cluster members have active subscription coverage for RHEL and the RHEL High Availability Add-On.
- The inventory file specifies the cluster nodes as described in Specifying an inventory for the ha_cluster RHEL system role. For general information about creating an inventory file, see Preparing a control node on RHEL 9.
Procedure
Store your sensitive variables in an encrypted file:
Create the vault:
ansible-vault create ~/vault.yml
$ ansible-vault create ~/vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>Copy to Clipboard Copied! Toggle word wrap Toggle overflow After the
ansible-vault createcommand opens an editor, enter the sensitive data in the<key>: <value>format:cluster_password: <cluster_password>
cluster_password: <cluster_password>Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example,
~/playbook.yml. A minimal sketch of such a playbook follows the settings list below. The settings specified in the example playbook include the following:
ha_cluster_cluster_name: <cluster_name>- The name of the cluster you are creating.
ha_cluster_hacluster_password: <password>-
The password of the
haclusteruser. Thehaclusteruser has full access to a cluster. ha_cluster_manage_firewall: true-
A variable that determines whether the
ha_clusterRHEL system role manages the firewall. ha_cluster_manage_selinux: true-
A variable that determines whether the
ha_clusterRHEL system role manages the ports of the firewall high availability service using theselinuxRHEL system role. ha_cluster_sbd_enabled: true- A variable that determines whether the cluster can use the SBD node fencing mechanism.
ha_cluster_sbd_options: <sbd options>-
A list of name-value dictionaries specifying SBD options. For information about these options, see the
Configuration via environmentsection of thesbd(8) man page on your system. ha_cluster_node_options: <node options>A variable that defines settings which vary from one cluster node to another. You can configure the following SBD and watchdog items:
-
sbd_watchdog_modules- Modules to be loaded, which create/dev/watchdog*devices. -
sbd_watchdog_modules_blocklist- Watchdog kernel modules to be unloaded and blocked. -
sbd_watchdog- Watchdog device to be used by SBD. -
sbd_devices- Devices to use for exchanging SBD messages and for monitoring. Always refer to the devices using the long, stable device name (/dev/disk/by-id/).
-
ha_cluster_cluster_properties: <cluster properties>- A list of sets of cluster properties for Pacemaker cluster-wide configuration.
ha_cluster_resource_primitives: <cluster resources>-
A list of resource definitions for the Pacemaker resources configured by the
ha_clusterRHEL system role, including fencing resources.
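A minimal sketch of such a playbook might look like the following. The node names, the iTCO_wdt watchdog module, the watchdog device, and the shared disk IDs are placeholders, and cluster properties, application resources, and any fencing resources are omitted for brevity:
---
- name: Create a cluster with SBD fencing and per-node options
  hosts:
    - node1
    - node2
  vars_files:
    - vault.yml
  vars:
    ha_cluster_cluster_name: <cluster_name>
    ha_cluster_hacluster_password: "{{ cluster_password }}"
    ha_cluster_manage_firewall: true
    ha_cluster_manage_selinux: true
    ha_cluster_sbd_enabled: true
    ha_cluster_sbd_options:
      - name: delay-start
        value: 'no'
      - name: startmode
        value: always
      - name: timeout-action
        value: 'flush,reboot'
      - name: watchdog-timeout
        value: 30
    ha_cluster_node_options:
      # Per-node watchdog and SBD device settings
      - node_name: node1
        sbd_watchdog_modules:
          - iTCO_wdt
        sbd_watchdog: /dev/watchdog1
        sbd_devices:
          - /dev/disk/by-id/<shared_disk_id>
      - node_name: node2
        sbd_watchdog_modules:
          - iTCO_wdt
        sbd_watchdog: /dev/watchdog1
        sbd_devices:
          - /dev/disk/by-id/<shared_disk_id>
  roles:
    - rhel-system-roles.ha_cluster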
For details about all variables used in the playbook, see the
/usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.mdfile on the control node.Validate the playbook syntax:
ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml
$ ansible-playbook --syntax-check --ask-vault-pass ~/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
ansible-playbook --ask-vault-pass ~/playbook.yml
$ ansible-playbook --ask-vault-pass ~/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow
11.13. Configuring a high availability cluster with SBD node fencing by using the ha_cluster variable Copy linkLink copied to clipboard!
You can use the ha_cluster RHEL system role to configure high availability clusters with STONITH Block Device (SBD) fencing in an automated fashion.
You must configure a Red Hat high availability cluster with at least one fencing device to ensure the cluster-provided services remain available when a node in the cluster encounters a problem. If your environment does not allow for a remotely accessible power switch to fence a cluster node, you can configure fencing by using an SBD. This device provides a node fencing mechanism for Pacemaker-based clusters through the exchange of messages by means of shared block storage. SBD integrates with Pacemaker, a watchdog device and, optionally, shared storage to arrange for nodes to reliably self-terminate when fencing is required.
With ha_cluster, you can configure watchdog and SBD devices on a node-to-node basis by using one of two variables:
-
ha_cluster_node_options: (RHEL 9.5 and later) This is a single variable you define in a playbook file. It is a list of dictionaries where each dictionary defines options for one node. -
ha_cluster: (RHEL 9.1 and later) A dictionary that defines options for one node only. You configure theha_clustervariable in an inventory file. To set different values for each node, you define the variable separately for each node.
If both the ha_cluster_node_options and ha_cluster variables contain SBD options, those in ha_cluster_node_options have precedence.
The following example procedure uses the ha_cluster system role to create a high availability cluster with SBD fencing. This example procedure uses the ha_cluster variable in an inventory file to configure node addresses and SBD options on a per-node basis. For an example procedure that uses the ha_cluster_node_options variable in a playbook file, see Configuring a high availability cluster with SBD node fencing by using the ha_cluster_node_options variable.
The ha_cluster system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
-
The account you use to connect to the managed nodes has
sudopermissions on them. - The systems that you will use as your cluster members have active subscription coverage for RHEL and the RHEL High Availability Add-On.
Procedure
Create an inventory file for your cluster that configures watchdog and SBD devices for each node by using the
ha_cluster variable, as in the example sketched after the following settings list. The SBD and watchdog settings specified in the example inventory include the following:
sbd_watchdog_modules-
Watchdog kernel modules to be loaded, which create
/dev/watchdog*devices. sbd_watchdog_modules_blocklist- Watchdog kernel modules to be unloaded and blocked.
sbd_watchdog- Watchdog device to be used by SBD.
sbd_devices-
Devices to use for exchanging SBD messages and for monitoring. Always refer to the devices using the long, stable device name (
/dev/disk/by-id/).
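A minimal sketch of such an inventory, in YAML format, might look like the following. The node names, the iTCO_wdt watchdog module, the watchdog device, and the shared disk ID are placeholders for illustration:
all:
  hosts:
    node1:
      ha_cluster:
        sbd_watchdog_modules:
          - iTCO_wdt
        sbd_watchdog: /dev/watchdog1
        sbd_devices:
          - /dev/disk/by-id/<shared_disk_id>
    node2:
      ha_cluster:
        sbd_watchdog_modules:
          - iTCO_wdt
        sbd_watchdog: /dev/watchdog1
        sbd_devices:
          - /dev/disk/by-id/<shared_disk_id>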
For general information about creating an inventory file, see Preparing a control node on RHEL 9.
Store your sensitive variables in an encrypted file:
Create the vault:
ansible-vault create ~/vault.yml
$ ansible-vault create ~/vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>Copy to Clipboard Copied! Toggle word wrap Toggle overflow After the
ansible-vault createcommand opens an editor, enter the sensitive data in the<key>: <value>format:cluster_password: <cluster_password>
cluster_password: <cluster_password>Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example,
~/playbook.yml, as in the sketch that follows the settings list below. Because you have specified the SBD and watchdog variables in the inventory, you do not need to include them in the playbook. The settings specified in the example playbook include the following:
ha_cluster_cluster_name: cluster_name- The name of the cluster you are creating.
ha_cluster_hacluster_password: password-
The password of the
haclusteruser. Thehaclusteruser has full access to a cluster. ha_cluster_manage_firewall: true-
A variable that determines whether the
ha_clusterRHEL system role manages the firewall. ha_cluster_manage_selinux: true-
A variable that determines whether the
ha_clusterRHEL system role manages the ports of the firewall high availability service using theselinuxRHEL system role. ha_cluster_sbd_enabled: true- A variable that determines whether the cluster can use the SBD node fencing mechanism.
ha_cluster_sbd_options: sbd options-
A list of name-value dictionaries specifying SBD options. For information about these options, see the
Configuration via environmentsection of thesbd(8) man page on your system. ha_cluster_cluster_properties: cluster properties- A list of sets of cluster properties for Pacemaker cluster-wide configuration.
ha_cluster_resource_primitives: cluster resources-
A list of resource definitions for the Pacemaker resources configured by the
ha_clusterRHEL system role, including fencing resources.
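A minimal sketch of such a playbook might look like the following. The node names are placeholders, and cluster properties and resource definitions are omitted for brevity; the per-node SBD settings come from the inventory shown earlier:
---
- name: Create a cluster that uses SBD fencing
  hosts:
    - node1
    - node2
  vars_files:
    - vault.yml
  vars:
    ha_cluster_cluster_name: <cluster_name>
    ha_cluster_hacluster_password: "{{ cluster_password }}"
    ha_cluster_manage_firewall: true
    ha_cluster_manage_selinux: true
    ha_cluster_sbd_enabled: true
    ha_cluster_sbd_options:
      - name: delay-start
        value: 'no'
      - name: startmode
        value: always
      - name: timeout-action
        value: 'flush,reboot'
      - name: watchdog-timeout
        value: 30
  roles:
    - rhel-system-roles.ha_cluster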
For details about all variables used in the playbook, see the
/usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.mdfile on the control node.Validate the playbook syntax:
ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml
$ ansible-playbook --syntax-check --ask-vault-pass ~/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
ansible-playbook --ask-vault-pass ~/playbook.yml
$ ansible-playbook --ask-vault-pass ~/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow
11.14. Configuring a placement strategy for a high availability cluster by using the ha_cluster RHEL system role Copy linkLink copied to clipboard!
You can use the ha_cluster RHEL system role to create a high availability cluster in an automated fashion that configures utilization attributes to define a placement strategy.
A Pacemaker cluster allocates resources according to a resource allocation score. By default, if the resource allocation scores on all the nodes are equal, Pacemaker allocates the resource to the node with the smallest number of allocated resources. If the resources in your cluster use significantly different proportions of a node’s capacities, such as memory or I/O, the default behavior may not be the best strategy for balancing your system’s workload. In this case, you can customize an allocation strategy by configuring utilization attributes and placement strategies for nodes and resources.
For detailed information about configuring utilization attributes and placement strategies, see Configuring a node placement strategy.
The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
-
The account you use to connect to the managed nodes has
sudopermissions on them. - RHEL 9.5 and later
- The systems that you will use as your cluster members have active subscription coverage for RHEL and the RHEL High Availability Add-On.
- The inventory file specifies the cluster nodes as described in Specifying an inventory for the ha_cluster RHEL system role. For general information about creating an inventory file, see Preparing a control node on RHEL 9.
Procedure
Store your sensitive variables in an encrypted file:
Create the vault:
ansible-vault create ~/vault.yml
$ ansible-vault create ~/vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>Copy to Clipboard Copied! Toggle word wrap Toggle overflow After the
ansible-vault createcommand opens an editor, enter the sensitive data in the<key>: <value>format:cluster_password: <cluster_password>
cluster_password: <cluster_password>Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example,
~/playbook.yml. A minimal sketch of such a playbook follows the settings list below. The settings specified in the example playbook include the following:
ha_cluster_cluster_name: <cluster_name>- The name of the cluster you are creating.
ha_cluster_hacluster_password: <password>-
The password of the
haclusteruser. Thehaclusteruser has full access to a cluster. ha_cluster_manage_firewall: true-
A variable that determines whether the
ha_clusterRHEL system role manages the firewall. ha_cluster_manage_selinux: true-
A variable that determines whether the
ha_clusterRHEL system role manages the ports of the firewall high availability service using theselinuxRHEL system role. ha_cluster_cluster_properties: <cluster properties>-
List of sets of cluster properties for Pacemaker cluster-wide configuration. For utilization to have an effect, the
placement-strategyproperty must be set and its value must be different from the valuedefault. - ha_cluster_node_options: <node options>
- A variable that defines various settings which vary from cluster node to cluster node.
ha_cluster_resource_primitives: <cluster resources>A list of resource definitions for the Pacemaker resources configured by the
ha_clusterRHEL system role, including fencing resources.For details about all variables used in the playbook, see the
/usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.mdfile on the control node.
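A minimal sketch of such a playbook might look like the following. The node names, capacities, and the ocf:pacemaker:Dummy agent are placeholders, fencing resources are omitted for brevity, and the utilization structure for nodes and resources follows the role README; verify the exact keys there:
---
- name: Create a cluster with a utilization-based placement strategy
  hosts:
    - node1
    - node2
  vars_files:
    - vault.yml
  vars:
    ha_cluster_cluster_name: <cluster_name>
    ha_cluster_hacluster_password: "{{ cluster_password }}"
    ha_cluster_manage_firewall: true
    ha_cluster_manage_selinux: true
    ha_cluster_cluster_properties:
      - attrs:
          # placement-strategy must be set to a value other than "default"
          - name: placement-strategy
            value: utilization
    ha_cluster_node_options:
      # Per-node capacity, expressed as utilization attributes
      - node_name: node1
        utilization:
          - attrs:
              - name: cpu
                value: 2
              - name: memory
                value: 2048
      - node_name: node2
        utilization:
          - attrs:
              - name: cpu
                value: 4
              - name: memory
                value: 4096
    ha_cluster_resource_primitives:
      - id: resource1
        agent: ocf:pacemaker:Dummy
        # Capacity this resource consumes on the node it runs on
        utilization:
          - attrs:
              - name: cpu
                value: 1
              - name: memory
                value: 1024
  roles:
    - rhel-system-roles.ha_cluster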
Validate the playbook syntax:
ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml
$ ansible-playbook --syntax-check --ask-vault-pass ~/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
ansible-playbook --ask-vault-pass ~/playbook.yml
$ ansible-playbook --ask-vault-pass ~/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow
11.15. Configuring alerts for a high availability cluster by using the ha_cluster RHEL system role Copy linkLink copied to clipboard!
You can use the ha_cluster RHEL system role to configure alerts for high availability clusters.
When a Pacemaker event occurs, such as a resource or a node failure or a configuration change, you may want to take some external action. For example, you may want to send an email message or log to a file or update a monitoring system.
You can configure your system to take an external action by using alert agents. These are external programs that the cluster calls in the same manner as the cluster calls resource agents to handle resource configuration and operation. The cluster passes information about the event to the agent through environment variables.
The ha_cluster RHEL system role configures the cluster to call external programs to handle alerts. However, you must provide these programs and distribute them to cluster nodes.
For more detailed information about alert agents, see Triggering scripts for cluster events.
This example procedure uses the ha_cluster RHEL system role to create a high availability cluster in an automated fashion that configures a Pacemaker alert.
The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
-
The account you use to connect to the managed nodes has
sudopermissions on them. - RHEL 9.5 and later
- The systems that you will use as your cluster members have active subscription coverage for RHEL and the RHEL High Availability Add-On.
- The inventory file specifies the cluster nodes as described in Specifying an inventory for the ha_cluster RHEL system role. For general information about creating an inventory file, see Preparing a control node on RHEL 9.
Procedure
Store your sensitive variables in an encrypted file:
Create the vault:
ansible-vault create ~/vault.yml
$ ansible-vault create ~/vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>Copy to Clipboard Copied! Toggle word wrap Toggle overflow After the
ansible-vault createcommand opens an editor, enter the sensitive data in the<key>: <value>format:cluster_password: <cluster_password>
cluster_password: <cluster_password>Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example,
~/playbook.yml. A minimal sketch of such a playbook follows the settings list below. The settings specified in the example playbook include the following:
ha_cluster_cluster_name: <cluster_name>- The name of the cluster you are creating.
ha_cluster_hacluster_password: <password>-
The password of the
haclusteruser. Thehaclusteruser has full access to a cluster. ha_cluster_manage_firewall: true-
A variable that determines whether the
ha_clusterRHEL system role manages the firewall. ha_cluster_manage_selinux: true-
A variable that determines whether the
ha_clusterRHEL system role manages the ports of the firewall high availability service using theselinuxRHEL system role. ha_cluster_alerts: <alert definitions>A variable that defines Pacemaker alerts.
-
id- ID of an alert. -
path- Path to the alert agent executable. -
description- Description of the alert. -
instance_attrs- List of sets of the alert’s instance attributes. Currently, only one set is supported, so the first set is used and the rest are ignored. -
meta_attrs- List of sets of the alert’s meta attributes. Currently, only one set is supported, so the first set is used and the rest are ignored. -
recipients- List of alert’s recipients. -
value- Value of a recipient. -
id- ID of the recipient. -
description- Description of the recipient. -
instance_attrs-List of sets of the recipient’s instance attributes. Currently, only one set is supported, so the first set is used and the rest are ignored. -
meta_attrs- List of sets of the recipient’s meta attributes. Currently, only one set is supported, so the first set is used and the rest are ignored.
-
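A minimal sketch of such a playbook might look like the following. The alert agent path, the attribute names, and the recipient value are placeholders; as noted above, you must provide the alert agent yourself and distribute it to the cluster nodes:
---
- name: Create a cluster with a Pacemaker alert
  hosts:
    - node1
    - node2
  vars_files:
    - vault.yml
  vars:
    ha_cluster_cluster_name: <cluster_name>
    ha_cluster_hacluster_password: "{{ cluster_password }}"
    ha_cluster_manage_firewall: true
    ha_cluster_manage_selinux: true
    ha_cluster_alerts:
      - id: alert1
        # You must provide this agent and distribute it to the cluster nodes
        path: /var/lib/pacemaker/alert_file.sh
        description: Log cluster events to a file
        instance_attrs:
          - attrs:
              # Attribute names are whatever your alert agent expects
              - name: debug
                value: "false"
        recipients:
          - value: /var/log/cluster-alerts.log
            id: recipient1
            description: Alert log file
  roles:
    - rhel-system-roles.ha_cluster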
For details about all variables used in the playbook, see the
/usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.mdfile on the control node.Validate the playbook syntax:
ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml
$ ansible-playbook --syntax-check --ask-vault-pass ~/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
ansible-playbook --ask-vault-pass ~/playbook.yml
$ ansible-playbook --ask-vault-pass ~/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow
11.16. Configuring a high availability cluster using a quorum device Copy linkLink copied to clipboard!
Your cluster can sustain more node failures than standard quorum rules permit when you configure a separate quorum device. The quorum device acts as a lightweight arbitration device for the cluster. Use a quorum device for clusters with an even number of nodes.
With two-node clusters, the use of a quorum device can better determine which node survives in a split-brain situation.
For information about quorum devices, see Configuring quorum devices.
To configure a high availability cluster with a separate quorum device by using the ha_cluster RHEL system role, first set up the quorum device. After setting up the quorum device, you can use the device in any number of clusters.
This feature is available in RHEL 9.2 and later.
11.16.1. Configuring a quorum device Copy linkLink copied to clipboard!
You can use the ha_cluster RHEL system role to configure a quorum device for high availability clusters. Note that you cannot run a quorum device on a cluster node.
The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
-
The account you use to connect to the managed nodes has
sudopermissions on them. - The system that you will use to run the quorum device has active subscription coverage for RHEL and the RHEL High Availability Add-On.
- The inventory file specifies the quorum devices as described in Specifying an inventory for the ha_cluster RHEL system role. For general information about creating an inventory file, see Preparing a control node on RHEL 9.
Procedure
Store your sensitive variables in an encrypted file:
Create the vault:
ansible-vault create ~/vault.yml
$ ansible-vault create ~/vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>Copy to Clipboard Copied! Toggle word wrap Toggle overflow After the
ansible-vault createcommand opens an editor, enter the sensitive data in the<key>: <value>format:cluster_password: <cluster_password>
cluster_password: <cluster_password>Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example,
~/playbook-qdevice.yml. A minimal sketch of such a playbook follows the settings list below. The settings specified in the example playbook include the following:
ha_cluster_cluster_present: false-
A variable that, if set to
false, determines that all cluster configuration will be removed from the target host. ha_cluster_hacluster_password: <password>-
The password of the
haclusteruser. Thehaclusteruser has full access to a cluster. ha_cluster_manage_firewall: true-
A variable that determines whether the
ha_clusterRHEL system role manages the firewall. ha_cluster_manage_selinux: true-
A variable that determines whether the
ha_clusterRHEL system role manages the ports of the firewall high availability service using theselinuxRHEL system role. ha_cluster_qnetd: <quorum_device_options>-
A variable that configures a
qnetdhost.
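A minimal sketch of such a playbook might look like the following. The host name of the quorum device system is a placeholder, and the qnetd options shown follow the role README:
---
- name: Configure a quorum device host
  hosts: qdevice-host.example.com
  vars_files:
    - vault.yml
  vars:
    # Remove any cluster configuration from this host; it only runs qnetd
    ha_cluster_cluster_present: false
    ha_cluster_hacluster_password: "{{ cluster_password }}"
    ha_cluster_manage_firewall: true
    ha_cluster_manage_selinux: true
    ha_cluster_qnetd:
      present: true
  roles:
    - rhel-system-roles.ha_cluster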
Validate the playbook syntax:
ansible-playbook --ask-vault-pass --syntax-check ~/playbook-qdevice.yml
$ ansible-playbook --ask-vault-pass --syntax-check ~/playbook-qdevice.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
ansible-playbook --ask-vault-pass ~/playbook-qdevice.yml
$ ansible-playbook --ask-vault-pass ~/playbook-qdevice.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow
11.16.2. Configuring a cluster to use a quorum device Copy linkLink copied to clipboard!
You can use the ha_cluster RHEL system role to configure a cluster with a quorum device.
The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
-
The account you use to connect to the managed nodes has
sudopermissions on them. - The systems that you will use as your cluster members have active subscription coverage for RHEL and the RHEL High Availability Add-On.
- The inventory file specifies the cluster nodes as described in Specifying an inventory for the ha_cluster RHEL system role. For general information about creating an inventory file, see Preparing a control node on RHEL 9.
- You have configured a quorum device.
Procedure
Create a playbook file, for example,
~/playbook-cluster-qdevice.yml. A minimal sketch of such a playbook follows the settings list below. The settings specified in the example playbook include the following:
ha_cluster_cluster_name: <cluster_name>- The name of the cluster you are creating.
ha_cluster_hacluster_password: <password>-
The password of the
haclusteruser. Thehaclusteruser has full access to a cluster. ha_cluster_manage_firewall: true-
A variable that determines whether the
ha_clusterRHEL system role manages the firewall. ha_cluster_manage_selinux: true-
A variable that determines whether the
ha_clusterRHEL system role manages the ports of the firewall high availability service using theselinuxRHEL system role. ha_cluster_quorum: <quorum_parameters>- A variable that configures cluster quorum which you can use to specify that the cluster uses a quorum device.
Validate the playbook syntax:
ansible-playbook --ask-vault-pass --syntax-check ~/playbook-cluster-qdevice.yml
$ ansible-playbook --ask-vault-pass --syntax-check ~/playbook-cluster-qdevice.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
ansible-playbook --ask-vault-pass ~/playbook-cluster-qdevice.yml
$ ansible-playbook --ask-vault-pass ~/playbook-cluster-qdevice.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow
11.17. Configuring a high availability cluster with node attributes Copy linkLink copied to clipboard!
You can use Pacemaker rules to make your configuration more dynamic. For example, you can use a node attribute to assign machines to different processing groups based on time and then use that attribute when creating location constraints.
Node attribute expressions are used to control a resource based on the attributes defined by a node or nodes. For information on node attributes, see Determining resource location with rules.
The following example procedure uses the ha_cluster RHEL system role to create a high availability cluster that configures node attributes.
The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
-
The account you use to connect to the managed nodes has
sudopermissions on them. - The systems that you will use as your cluster members have active subscription coverage for RHEL and the RHEL High Availability Add-On.
- The inventory file specifies the cluster nodes as described in Specifying an inventory for the ha_cluster RHEL system role. For general information about creating an inventory file, see Preparing a control node on RHEL 9.
- RHEL 9.4 and later
Procedure
Store your sensitive variables in an encrypted file:
Create the vault:
ansible-vault create ~/vault.yml
$ ansible-vault create ~/vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>Copy to Clipboard Copied! Toggle word wrap Toggle overflow After the
ansible-vault createcommand opens an editor, enter the sensitive data in the<key>: <value>format:cluster_password: <cluster_password>
cluster_password: <cluster_password>Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example,
~/playbook.yml. A minimal sketch of such a playbook follows the settings list below. The settings specified in the example playbook include the following:
ha_cluster_cluster_name: <cluster_name>- The name of the cluster you are creating.
ha_cluster_hacluster_password: <password>-
The password of the
haclusteruser. Thehaclusteruser has full access to a cluster. ha_cluster_manage_firewall: true-
A variable that determines whether the
ha_clusterRHEL system role manages the firewall. ha_cluster_manage_selinux: true-
A variable that determines whether the
ha_clusterRHEL system role manages the ports of the firewall high availability service using theselinuxRHEL system role. ha_cluster_node_options: <node_settings>- A variable that defines various settings that vary from one cluster node to another.
For details about all variables used in the playbook, see the
/usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.mdfile on the control node.Validate the playbook syntax:
ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml
$ ansible-playbook --syntax-check --ask-vault-pass ~/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
ansible-playbook --ask-vault-pass ~/playbook.yml
$ ansible-playbook --ask-vault-pass ~/playbook.yml
11.18. Configuring an Apache HTTP server in a high availability cluster with the ha_cluster RHEL system role Copy linkLink copied to clipboard!
You can use the ha_cluster RHEL system role to configure an Apache HTTP server in a high availability cluster.
High availability clusters provide highly available services by eliminating single points of failure and by failing over services from one cluster node to another in case a node becomes inoperative. Red Hat provides a variety of documentation for planning, configuring, and maintaining a Red Hat high availability cluster. For a listing of articles that provide indexes to the various areas of Red Hat cluster documentation, see the Red Hat High Availability Add-On Documentation Guide.
The following example use case configures an active/passive Apache HTTP server in a two-node Red Hat Enterprise Linux High Availability Add-On cluster by using the ha_cluster RHEL system role. In this use case, clients access the Apache HTTP server through a floating IP address. The web server runs on one of two nodes in the cluster. If the node on which the web server is running becomes inoperative, the web server starts up again on the second node of the cluster with minimal service interruption.
This example uses an APC power switch with a host name of zapc.example.com. If the cluster does not use any other fence agents, you can optionally list only the fence agents your cluster requires when defining the ha_cluster_fence_agent_packages variable, as in this example.
The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
-
The account you use to connect to the managed nodes has
sudopermissions on them. - The systems that you will use as your cluster members have active subscription coverage for RHEL and the RHEL High Availability Add-On.
- The inventory file specifies the cluster nodes as described in Specifying an inventory for the ha_cluster RHEL system role. For general information about creating an inventory file, see Preparing a control node on RHEL 9.
- You have configured an LVM logical volume with an XFS file system, as described in Configuring an LVM volume with an XFS file system in a Pacemaker cluster.
- You have configured an Apache HTTP server, as described in Configuring an Apache HTTP Server.
- Your system includes an APC power switch that will be used to fence the cluster nodes.
Procedure
Store your sensitive variables in an encrypted file:
Create the vault:
ansible-vault create ~/vault.yml
$ ansible-vault create ~/vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>Copy to Clipboard Copied! Toggle word wrap Toggle overflow After the
ansible-vault createcommand opens an editor, enter the sensitive data in the<key>: <value>format:cluster_password: <cluster_password>
cluster_password: <cluster_password>Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example,
~/playbook.yml. A minimal sketch of such a playbook follows the settings list below. The settings specified in the example playbook include the following:
ha_cluster_cluster_name: <cluster_name>- The name of the cluster you are creating.
ha_cluster_hacluster_password: <password>-
The password of the
haclusteruser. Thehaclusteruser has full access to a cluster. ha_cluster_manage_firewall: true-
A variable that determines whether the
ha_clusterRHEL system role manages the firewall. ha_cluster_manage_selinux: true-
A variable that determines whether the
ha_clusterRHEL system role manages the ports of the firewall high availability service using theselinuxRHEL system role. ha_cluster_fence_agent_packages: <fence_agent_packages>- A list of fence agent packages to install.
ha_cluster_resource_primitives: <cluster_resources>- A list of resource definitions for the Pacemaker resources configured by the ha_cluster RHEL system role, including fencing resources.
ha_cluster_resource_groups: <resource_groups>-
A list of resource group definitions configured by the
ha_clusterRHEL system role.
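A sketch of such a playbook might look like the following. The node names z1.example.com and z2.example.com, the APC switch zapc.example.com, and the Website resource name match the rest of this example; the volume group, logical volume, floating IP address, and APC credentials are placeholders for your environment, and the sketch assumes an SNMP-capable APC switch:
---
- name: Configure an active/passive Apache HTTP server in an HA cluster
  hosts:
    - z1.example.com
    - z2.example.com
  vars_files:
    - vault.yml
  vars:
    ha_cluster_cluster_name: my_cluster
    ha_cluster_hacluster_password: "{{ cluster_password }}"
    ha_cluster_manage_firewall: true
    ha_cluster_manage_selinux: true
    ha_cluster_fence_agent_packages:
      - fence-agents-apc-snmp
    ha_cluster_resource_primitives:
      # Fence device for the APC power switch
      - id: myapc
        agent: stonith:fence_apc_snmp
        instance_attrs:
          - attrs:
              - name: ip
                value: zapc.example.com
              - name: pcmk_host_map
                value: z1.example.com:1;z2.example.com:2
              - name: username
                value: <apc_user>
              - name: password
                value: <apc_password>
      # LVM volume group, file system, floating IP, and web server
      - id: my_lvm
        agent: ocf:heartbeat:LVM-activate
        instance_attrs:
          - attrs:
              - name: vgname
                value: my_vg
              - name: vg_access_mode
                value: system_id
      - id: my_fs
        agent: ocf:heartbeat:Filesystem
        instance_attrs:
          - attrs:
              - name: device
                value: /dev/my_vg/my_lv
              - name: directory
                value: /var/www
              - name: fstype
                value: xfs
      - id: VirtualIP
        agent: ocf:heartbeat:IPaddr2
        instance_attrs:
          - attrs:
              - name: ip
                value: 198.51.100.3
              - name: cidr_netmask
                value: 24
      - id: Website
        agent: ocf:heartbeat:apache
        instance_attrs:
          - attrs:
              - name: configfile
                value: /etc/httpd/conf/httpd.conf
              - name: statusurl
                value: http://127.0.0.1/server-status
    ha_cluster_resource_groups:
      - id: apachegroup
        resource_ids:
          - my_lvm
          - my_fs
          - VirtualIP
          - Website
  roles:
    - rhel-system-roles.ha_cluster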
For details about all variables used in the playbook, see the
/usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.mdfile on the control node.Validate the playbook syntax:
ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml
$ ansible-playbook --syntax-check --ask-vault-pass ~/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
ansible-playbook --ask-vault-pass ~/playbook.yml
$ ansible-playbook --ask-vault-pass ~/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow When you use the
apacheresource agent to manage Apache, it does not usesystemd. Because of this, you must edit thelogrotatescript supplied with Apache so that it does not usesystemctlto reload Apache.Remove the following line in the
/etc/logrotate.d/httpdfile on each node in the cluster./bin/systemctl reload httpd.service > /dev/null 2>/dev/null || true
# /bin/systemctl reload httpd.service > /dev/null 2>/dev/null || trueCopy to Clipboard Copied! Toggle word wrap Toggle overflow Replace the line you removed with the following three lines, specifying
/var/run/httpd-website.pidas the PID file path where website is the name of the Apache resource. In this example, the Apache resource name isWebsite./usr/bin/test -f /var/run/httpd-Website.pid >/dev/null 2>/dev/null && /usr/bin/ps -q $(/usr/bin/cat /var/run/httpd-Website.pid) >/dev/null 2>/dev/null && /usr/sbin/httpd -f /etc/httpd/conf/httpd.conf -c "PidFile /var/run/httpd-Website.pid" -k graceful > /dev/null 2>/dev/null || true
/usr/bin/test -f /var/run/httpd-Website.pid >/dev/null 2>/dev/null && /usr/bin/ps -q $(/usr/bin/cat /var/run/httpd-Website.pid) >/dev/null 2>/dev/null && /usr/sbin/httpd -f /etc/httpd/conf/httpd.conf -c "PidFile /var/run/httpd-Website.pid" -k graceful > /dev/null 2>/dev/null || trueCopy to Clipboard Copied! Toggle word wrap Toggle overflow
Verification
From one of the nodes in the cluster, check the status of the cluster. Note that all four resources are running on the same node,
z1.example.com.If you find that the resources you configured are not running, you can run the
pcs resource debug-start resourcecommand to test the resource configuration.Copy to Clipboard Copied! Toggle word wrap Toggle overflow Once the cluster is up and running, you can point a browser to the IP address you defined as the
IPaddr2resource to view the sample display, consisting of the simple word "Hello".Hello
HelloCopy to Clipboard Copied! Toggle word wrap Toggle overflow To test whether the resource group running on
z1.example.comfails over to nodez2.example.com, put nodez1.example.cominstandbymode, after which the node will no longer be able to host resources.pcs node standby z1.example.com
[root@z1 ~]# pcs node standby z1.example.comCopy to Clipboard Copied! Toggle word wrap Toggle overflow After putting node
z1instandbymode, check the cluster status from one of the nodes in the cluster. Note that the resources should now all be running onz2.Copy to Clipboard Copied! Toggle word wrap Toggle overflow The website at the defined IP address should still display, without interruption.
To remove
z1fromstandbymode, enter the following command.pcs node unstandby z1.example.com
[root@z1 ~]# pcs node unstandby z1.example.comCopy to Clipboard Copied! Toggle word wrap Toggle overflow NoteRemoving a node from
standbymode does not in itself cause the resources to fail back over to that node. This will depend on theresource-stickinessvalue for the resources. For information about theresource-stickinessmeta attribute, see Configuring a resource to prefer its current node.
Chapter 12. Configuring the systemd journal by using RHEL system roles Copy linkLink copied to clipboard!
With the journald RHEL system role, you can automate the configuration of the systemd journal and configure persistent logging by using the Red Hat Ansible Automation Platform.
12.1. Configuring persistent logging by using the journald RHEL system role Copy linkLink copied to clipboard!
By default, the systemd journal stores logs only in a small ring buffer in /run/log/journal, which is not persistent. Rebooting the system also removes journal database logs. You can configure persistent logging consistently on multiple systems by using the journald RHEL system role.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
Create a playbook file, for example, ~/playbook.yml, with the following content; a sketch of such a playbook follows the variable descriptions.
The settings specified in the example playbook include the following:
journald_persistent: true
- Enables persistent logging.
journald_max_disk_size: <size>
- Specifies the maximum size of disk space for journal files in MB, for example, 2048.
journald_per_user: true
- Configures journald to keep log data separate for each user.
journald_sync_interval: <interval>
- Sets the synchronization interval in minutes, for example, 1.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.journald/README.md file on the control node.
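A minimal sketch of such a playbook, built only from the variables described above; the play and task names, the host name managed-node-01.example.com, and the example values are placeholders to adapt to your environment:
---
- name: Configure persistent logging
  hosts: managed-node-01.example.com
  tasks:
    - name: Configure the systemd journal by using the journald RHEL system role
      ansible.builtin.include_role:
        name: rhel-system-roles.journald
      vars:
        # Enable persistent storage and cap journal disk usage at 2048 MB
        journald_persistent: true
        journald_max_disk_size: 2048
        journald_per_user: true
        journald_sync_interval: 1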
Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Chapter 13. Configuring automatic crash dumps by using RHEL system roles
To manage kdump using Ansible, you can use the kdump role, which is one of the RHEL system roles available in RHEL 9.
Using the kdump role enables you to specify where to save the contents of the system’s memory for later analysis.
13.1. Configuring the kernel crash dumping mechanism by using the kdump RHEL system role
Kernel crash dumping is a crucial feature for diagnosing and troubleshooting system issues. When your system encounters a kernel panic or other critical failure, kernel crash dumping allows you to capture a memory dump (core dump) of the kernel's state at the time of the failure.
By using an Ansible playbook, you can set kernel crash dump parameters on multiple systems using the kdump RHEL system role. This ensures consistent settings across all managed nodes for the kdump service.
The kdump system role replaces the content in the /etc/kdump.conf and /etc/sysconfig/kdump configuration files. Previous settings are changed to those specified in the role variables, and are lost if they are not specified in the role variables.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
Create a playbook file, for example, ~/playbook.yml, with the following content; a sketch of such a playbook follows the variable descriptions.
The settings specified in the example playbook include the following:
kdump_target: <type_and_location>
- Writes vmcore to a location other than the root file system. The location refers to a partition (by name, label, or UUID) when the type is raw or file system.
kernel_settings_reboot_ok: <true|false>
- The default is false. If set to true, the system role will determine if a reboot of the managed host is necessary for the requested changes to take effect and reboot it. If set to false, the role will return the variable kernel_settings_reboot_required with a value of true, indicating that a reboot is required. In this case, a user must reboot the managed node manually.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.kdump/README.md file on the control node.
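A minimal sketch of such a playbook; the host name, the task name, and the raw partition /dev/sda1 are placeholder assumptions, and the type and location keys follow the kdump_target description above:
---
- name: Configure kdump
  hosts: managed-node-01.example.com
  tasks:
    - name: Configure the kernel crash dumping mechanism
      ansible.builtin.include_role:
        name: rhel-system-roles.kdump
      vars:
        # Write vmcore to a dedicated raw partition instead of the root file system
        kdump_target:
          type: raw
          location: /dev/sda1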
Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
Verify the kernel crash dump parameters:
$ ansible managed-node-01.example.com -m command -a 'grep crashkernel /proc/cmdline'
Chapter 14. Configuring kernel parameters permanently by using RHEL system roles
You can use the kernel_settings RHEL system role to configure kernel parameters on multiple clients simultaneously.
Simultaneous configuration has the following advantages:
- Provides a friendly interface for entering settings efficiently.
- Keeps all intended kernel parameters in one place.
After you run the kernel_settings role from the control machine, the kernel parameters are applied to the managed systems immediately and persist across reboots.
Note that RHEL system roles delivered over RHEL channels are available to RHEL customers as an RPM package in the default AppStream repository. RHEL system roles are also available as a collection to customers with Ansible subscriptions over Ansible Automation Hub.
14.1. Applying selected kernel parameters by using the kernel_settings RHEL system role
You can use the kernel_settings RHEL system role to remotely configure various kernel parameters across multiple managed operating systems with persistent effects.
For example, by using the kernel_settings role, you can configure:
- Transparent hugepages to increase performance by reducing the overhead of managing smaller pages.
- The largest packet sizes to be transmitted over the network with the loopback interface.
- Limits on files to be opened simultaneously.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
Create a playbook file, for example, ~/playbook.yml, with the following content; a sketch of such a playbook follows the variable descriptions.
The settings specified in the example playbook include the following:
kernel_settings_sysctl: <list_of_sysctl_settings>
- A YAML list of sysctl settings and the values you want to assign to these settings.
kernel_settings_transparent_hugepages: <value>
- Controls the memory subsystem Transparent Huge Pages (THP) setting. You can disable THP support (never), enable it system wide (always), or enable it inside MADV_HUGEPAGE regions (madvise).
kernel_settings_reboot_ok: <true|false>
- The default is false. If set to true, the system role will determine if a reboot of the managed host is necessary for the requested changes to take effect and reboot it. If set to false, the role will return the variable kernel_settings_reboot_required with a value of true, indicating that a reboot is required. In this case, a user must reboot the managed node manually.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.kernel_settings/README.md file on the control node.
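A minimal sketch of such a playbook; the host name and the values are placeholders, and the name/value list format for the sysctl settings is an assumption to confirm against the role README:
---
- name: Configure kernel settings
  hosts: managed-node-01.example.com
  tasks:
    - name: Apply selected kernel parameters
      ansible.builtin.include_role:
        name: rhel-system-roles.kernel_settings
      vars:
        # sysctl tunables to set persistently
        kernel_settings_sysctl:
          - name: fs.file-max
            value: 400000
          - name: net.ipv6.conf.lo.mtu
            value: 65000
        # Enable THP only inside madvise regions
        kernel_settings_transparent_hugepages: madvise
        # Allow the role to reboot the host if required
        kernel_settings_reboot_ok: true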
Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
Verify the affected kernel parameters:
# ansible managed-node-01.example.com -m command -a 'sysctl fs.file-max kernel.threads-max net.ipv6.conf.lo.mtu'
# ansible managed-node-01.example.com -m command -a 'cat /sys/kernel/mm/transparent_hugepage/enabled'
Chapter 15. Configuring logging by using RHEL system roles
You can use the logging RHEL system role to configure your local and remote hosts as logging servers in an automated fashion to collect logs from many client systems.
Logging solutions provide multiple ways of reading logs and multiple logging outputs.
For example, a logging system can receive the following inputs:
- Local files
- systemd/journal
In addition, a logging system can have the following outputs:
- Logs stored in the local files in the /var/log/ directory
- Logs forwarded to another logging system
With the logging RHEL system role, you can combine the inputs and outputs to fit your scenario. For example, you can configure a logging solution that stores inputs from journal in a local file, whereas inputs read from files are both forwarded to another logging system and stored in the local log files.
15.1. Filtering local log messages by using the logging RHEL system role
You can use the property-based filter of the logging RHEL system role to filter your local log messages based on various conditions.
You can achieve, for example:
- Log clarity: In a high-traffic environment, logs can grow rapidly. The focus on specific messages, like errors, can help to identify problems faster.
- Optimized system performance: An excessive amount of logs is usually connected with degraded system performance. Selective logging of only the important events can prevent resource depletion, which enables your systems to run more efficiently.
- Enhanced security: Efficient filtering through security messages, like system errors and failed logins, helps to capture only the relevant logs. This is important for detecting breaches and meeting compliance standards.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
Create a playbook file, for example, ~/playbook.yml, with the following content; a sketch of such a playbook follows the variable descriptions.
The settings specified in the example playbook include the following:
logging_inputs
- Defines a list of logging input dictionaries. The type: basics option covers inputs from the systemd journal or a Unix socket.
logging_outputs
- Defines a list of logging output dictionaries. The type: files option supports storing logs in local files, usually in the /var/log/ directory. The property: msg, property_op: contains, and property_value: error options specify that all logs that contain the error string are stored in the /var/log/errors.log file. The property: msg, property_op: !contains, and property_value: error options specify that all other logs are put in the /var/log/others.log file. You can replace the error value with the string by which you want to filter.
logging_flows
- Defines a list of logging flow dictionaries to specify relationships between logging_inputs and logging_outputs. The inputs: [files_input] option specifies a list of inputs, from which processing of logs starts. The outputs: [files_output0, files_output1] option specifies a list of outputs, to which the logs are sent.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.logging/README.md file on the control node.
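A minimal sketch of such a playbook; the host name and the play and task names are placeholders, the input and output names follow the files_input, files_output0, and files_output1 names quoted above, and the property_op key is an assumption to confirm against the role README:
---
- name: Deploy the logging solution
  hosts: managed-node-01.example.com
  tasks:
    - name: Filter local log messages that contain the error string
      ansible.builtin.include_role:
        name: rhel-system-roles.logging
      vars:
        logging_inputs:
          # Read logs from the systemd journal or Unix socket
          - name: files_input
            type: basics
        logging_outputs:
          # Messages that contain "error" go to /var/log/errors.log
          - name: files_output0
            type: files
            property: msg
            property_op: contains
            property_value: error
            path: /var/log/errors.log
          # All other messages go to /var/log/others.log
          - name: files_output1
            type: files
            property: msg
            property_op: "!contains"
            property_value: error
            path: /var/log/others.log
        logging_flows:
          - name: flow0
            inputs: [files_input]
            outputs: [files_output0, files_output1]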
Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
On the managed node, test the syntax of the /etc/rsyslog.conf file:
# rsyslogd -N 1
rsyslogd: version 8.1911.0-6.el8, config validation run...
rsyslogd: End of config validation run. Bye.
On the managed node, verify that the system sends messages that contain the error string to the log:
Send a test message:
# logger error
View the /var/log/errors.log log, for example:
# cat /var/log/errors.log
Aug  5 13:48:31 hostname root[6778]: error
Where hostname is the host name of the client system. Note that the log contains the user name of the user that entered the logger command, in this case root.
15.2. Applying a remote logging solution by using the logging RHEL system role
You can use the logging RHEL system role to configure centralized log management across multiple systems. The server receives remote input from the remote_rsyslog and remote_files configurations, and outputs the logs to local files in directories named by remote host names.
As a result, you can cover use cases where you need, for example:
- Centralized log management: Collecting, accessing, and managing log messages of multiple machines from a single storage point simplifies day-to-day monitoring and troubleshooting tasks. Also, this use case reduces the need to log in to individual machines to check the log messages.
- Enhanced security: Storing log messages in one central place increases the chances that they are kept in a secure and tamper-proof environment. Such an environment makes it easier to detect and respond to security incidents more effectively and to meet audit requirements.
- Improved efficiency in log analysis: Correlating log messages from multiple systems is important for fast troubleshooting of complex problems that span multiple machines or services. That way you can quickly analyze and cross-reference events from different sources.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- Define the ports in the SELinux policy of the server or client system and open the firewall for those ports. The default SELinux policy includes ports 601, 514, 6514, 10514, and 20514. To use a different port, modify the SELinux policy on the client and server systems.
Procedure
Create a playbook file, for example, ~/playbook.yml, with the following content; a sketch of both plays follows the variable descriptions.
The settings specified in the first play of the example playbook include the following:
logging_inputs
- Defines a list of logging input dictionaries. The type: remote option covers remote inputs from the other logging system over the network. The udp_ports: [ 601 ] option defines a list of UDP port numbers to monitor. The tcp_ports: [ 601 ] option defines a list of TCP port numbers to monitor. If both udp_ports and tcp_ports are set, udp_ports is used and tcp_ports is dropped.
logging_outputs
- Defines a list of logging output dictionaries. The type: remote_files option makes the output store logs in local files, separated per remote host and the program name that originated the logs.
logging_flows
- Defines a list of logging flow dictionaries to specify relationships between logging_inputs and logging_outputs. The inputs: [remote_udp_input, remote_tcp_input] option specifies a list of inputs, from which processing of logs starts. The outputs: [remote_files_output] option specifies a list of outputs, to which the logs are sent.
The settings specified in the second play of the example playbook include the following:
logging_inputs
- Defines a list of logging input dictionaries. The type: basics option covers inputs from the systemd journal or a Unix socket.
logging_outputs
- Defines a list of logging output dictionaries. The type: forwards option supports sending logs to the remote logging server over the network. The severity: info option refers to log messages of informative importance. The facility: mail option refers to the type of system program that is generating the log message. The target: <host1.example.com> option specifies the hostname of the remote logging server. The udp_port: 601 and tcp_port: 601 options define the UDP and TCP ports on which the remote logging server listens.
logging_flows
- Defines a list of logging flow dictionaries to specify relationships between logging_inputs and logging_outputs. The inputs: [basic_input] option specifies a list of inputs, from which processing of logs starts. The outputs: [forward_output0, forward_output1] option specifies a list of outputs, to which the logs are sent.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.logging/README.md file on the control node.
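A sketch of the two plays described above; the host names, the play and task names, and the group name clients are placeholders, while the input, output, and flow names follow the names quoted in the descriptions:
---
- name: Configure the server to receive remote logs
  hosts: <host1.example.com>
  tasks:
    - name: Store remote input in local files named per remote host
      ansible.builtin.include_role:
        name: rhel-system-roles.logging
      vars:
        logging_inputs:
          - name: remote_udp_input
            type: remote
            udp_ports: [601]
          - name: remote_tcp_input
            type: remote
            tcp_ports: [601]
        logging_outputs:
          - name: remote_files_output
            type: remote_files
        logging_flows:
          - name: flow_0
            inputs: [remote_udp_input, remote_tcp_input]
            outputs: [remote_files_output]

- name: Configure the clients to forward logs to the server
  hosts: clients
  tasks:
    - name: Forward journal input to the remote logging server
      ansible.builtin.include_role:
        name: rhel-system-roles.logging
      vars:
        logging_inputs:
          - name: basic_input
            type: basics
        logging_outputs:
          - name: forward_output0
            type: forwards
            severity: info
            target: <host1.example.com>
            udp_port: 601
          - name: forward_output1
            type: forwards
            facility: mail
            target: <host1.example.com>
            tcp_port: 601
        logging_flows:
          - name: flows0
            inputs: [basic_input]
            outputs: [forward_output0, forward_output1]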
Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
On both the client and the server system, test the syntax of the /etc/rsyslog.conf file:
# rsyslogd -N 1
rsyslogd: version 8.1911.0-6.el8, config validation run (level 1), master config /etc/rsyslog.conf
rsyslogd: End of config validation run. Bye.
Verify that the client system sends messages to the server:
On the client system, send a test message:
# logger test
On the server system, view the /var/log/<host2.example.com>/messages log, for example:
# cat /var/log/<host2.example.com>/messages
Aug  5 13:48:31 <host2.example.com> root[6778]: test
Where <host2.example.com> is the host name of the client system. Note that the log contains the user name of the user that entered the logger command, in this case root.
15.3. Using the logging RHEL system role with TLS
You can use the logging RHEL system role to configure a secure transfer of log messages, where one or more clients take logs from the systemd-journal service and transfer them to a remote server while using TLS.
Typically, TLS for transferring logs in a remote logging solution is used when sending sensitive data over less trusted or public networks, such as the Internet. Also, by using certificates in TLS you can ensure that the client is forwarding logs to the correct and trusted server. This prevents attacks like "man-in-the-middle".
15.3.1. Configuring client logging with TLS
You can use the logging RHEL system role to configure logging on RHEL clients and transfer logs to a remote logging system using TLS encryption.
The role creates a private key and a certificate. Next, it configures TLS on all hosts in the clients group in the Ansible inventory. The TLS protocol encrypts the message transmission for secure transfer of logs over the network.
You do not have to call the certificate RHEL system role in the playbook to create the certificate. The logging RHEL system role calls it automatically when the logging_certificates variable is set.
In order for the CA to be able to sign the created certificate, the managed nodes must be enrolled in an IdM domain.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- The managed nodes are enrolled in an IdM domain.
- If the logging server you want to configure on the managed node runs RHEL 9.2 or later and the FIPS mode is enabled, clients must either support the Extended Master Secret (EMS) extension or use TLS 1.3. TLS 1.2 connections without EMS fail. For more information, see the Red Hat Knowledgebase solution TLS extension "Extended Master Secret" enforced.
Procedure
Create a playbook file, for example, ~/playbook.yml, with the following content; a sketch of such a playbook follows the variable descriptions.
The settings specified in the example playbook include the following:
logging_certificates
- The value of this parameter is passed on to certificate_requests in the certificate RHEL system role and used to create a private key and certificate.
logging_pki_files
- Using this parameter, you can configure the paths and other settings that logging uses to find the CA, certificate, and key files used for TLS, specified with one or more of the following sub-parameters: ca_cert, ca_cert_src, cert, cert_src, private_key, private_key_src, and tls.
Note
If you are using logging_certificates to create the files on the managed node, do not use ca_cert_src, cert_src, and private_key_src, which are used to copy files not created by logging_certificates.
ca_cert
- Represents the path to the CA certificate file on the managed node. Default path is /etc/pki/tls/certs/ca.pem and the file name is set by the user.
cert
- Represents the path to the certificate file on the managed node. Default path is /etc/pki/tls/certs/server-cert.pem and the file name is set by the user.
private_key
- Represents the path to the private key file on the managed node. Default path is /etc/pki/tls/private/server-key.pem and the file name is set by the user.
ca_cert_src
- Represents the path to the CA certificate file on the control node which is copied to the target host to the location specified by ca_cert. Do not use this if using logging_certificates.
cert_src
- Represents the path to a certificate file on the control node which is copied to the target host to the location specified by cert. Do not use this if using logging_certificates.
private_key_src
- Represents the path to a private key file on the control node which is copied to the target host to the location specified by private_key. Do not use this if using logging_certificates.
tls
- Setting this parameter to true ensures secure transfer of logs over the network. If you do not want a secure wrapper, you can set tls: false.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.logging/README.md file on the control node.
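A condensed sketch of such a playbook, assuming IdM as the certificate authority; the group name clients, the certificate name, the target server, and the port are placeholders, and the exact structure of logging_certificates and logging_pki_files should be checked against the role README:
---
- name: Configure client-side logging with TLS
  hosts: clients
  tasks:
    - name: Forward journal logs to a remote server over TLS
      ansible.builtin.include_role:
        name: rhel-system-roles.logging
      vars:
        # Certificate request handled automatically by the certificate role through IdM
        logging_certificates:
          - name: logging_cert
            dns: ['managed-node-01.example.com']
            ca: ipa
        logging_pki_files:
          - ca_cert: /etc/pki/tls/certs/ca.pem
            cert: /etc/pki/tls/certs/server-cert.pem
            private_key: /etc/pki/tls/private/server-key.pem
        logging_inputs:
          - name: basic_input
            type: basics
        logging_outputs:
          - name: forward_output
            type: forwards
            target: <server.example.com>
            tcp_port: 514
            tls: true
        logging_flows:
          - name: flow0
            inputs: [basic_input]
            outputs: [forward_output]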
Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Additional resources
15.3.2. Configuring server logging with TLS
You can use the logging RHEL system role to configure logging on RHEL servers and set them to receive logs from a remote logging system using TLS encryption.
The role creates a private key and a certificate. Next, it configures TLS on all hosts in the server group in the Ansible inventory.
You do not have to call the certificate RHEL system role in the playbook to create the certificate. The logging RHEL system role calls it automatically.
In order for the CA to be able to sign the created certificate, the managed nodes must be enrolled in an IdM domain.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- The managed nodes are enrolled in an IdM domain.
- If the logging server you want to configure on the managed node runs RHEL 9.2 or later and the FIPS mode is enabled, clients must either support the Extended Master Secret (EMS) extension or use TLS 1.3. TLS 1.2 connections without EMS fail. For more information, see the Red Hat Knowledgebase solution TLS extension "Extended Master Secret" enforced.
Procedure
Create a playbook file, for example, ~/playbook.yml, with the following content; a sketch of such a playbook follows the variable descriptions.
The settings specified in the example playbook include the following:
logging_certificates
- The value of this parameter is passed on to certificate_requests in the certificate RHEL system role and used to create a private key and certificate.
logging_pki_files
- Using this parameter, you can configure the paths and other settings that logging uses to find the CA, certificate, and key files used for TLS, specified with one or more of the following sub-parameters: ca_cert, ca_cert_src, cert, cert_src, private_key, private_key_src, and tls.
Note
If you are using logging_certificates to create the files on the managed node, do not use ca_cert_src, cert_src, and private_key_src, which are used to copy files not created by logging_certificates.
ca_cert
- Represents the path to the CA certificate file on the managed node. Default path is /etc/pki/tls/certs/ca.pem and the file name is set by the user.
cert
- Represents the path to the certificate file on the managed node. Default path is /etc/pki/tls/certs/server-cert.pem and the file name is set by the user.
private_key
- Represents the path to the private key file on the managed node. Default path is /etc/pki/tls/private/server-key.pem and the file name is set by the user.
ca_cert_src
- Represents the path to the CA certificate file on the control node which is copied to the target host to the location specified by ca_cert. Do not use this if using logging_certificates.
cert_src
- Represents the path to a certificate file on the control node which is copied to the target host to the location specified by cert. Do not use this if using logging_certificates.
private_key_src
- Represents the path to a private key file on the control node which is copied to the target host to the location specified by private_key. Do not use this if using logging_certificates.
tls
- Setting this parameter to true ensures secure transfer of logs over the network. If you do not want a secure wrapper, you can set tls: false.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.logging/README.md file on the control node.
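A condensed sketch of such a playbook for the server side; the group name server, the certificate name, and the port are placeholders, and the ability to set tls: true directly on the remote input is an assumption to verify against the role README:
---
- name: Configure server-side logging with TLS
  hosts: server
  tasks:
    - name: Receive remote logs over TLS and store them per host
      ansible.builtin.include_role:
        name: rhel-system-roles.logging
      vars:
        logging_certificates:
          - name: logging_cert
            dns: ['server.example.com']
            ca: ipa
        logging_pki_files:
          - ca_cert: /etc/pki/tls/certs/ca.pem
            cert: /etc/pki/tls/certs/server-cert.pem
            private_key: /etc/pki/tls/private/server-key.pem
        logging_inputs:
          - name: remote_tcp_input
            type: remote
            tcp_ports: [514]
            tls: true
        logging_outputs:
          - name: remote_files_output
            type: remote_files
        logging_flows:
          - name: flow0
            inputs: [remote_tcp_input]
            outputs: [remote_files_output]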
Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
15.4. Using the logging RHEL system role with RELP
You can use the logging RHEL system role to configure Reliable Event Logging Protocol (RELP) between a RELP client and RELP server.
RELP is a networking protocol for data and message logging over the TCP network. It ensures reliable delivery of event messages and you can use it in environments that do not tolerate any message loss.
The RELP sender transfers log entries in the form of commands and the receiver acknowledges them once they are processed. To ensure consistency, RELP assigns a transaction number to each transferred command, which enables any kind of message recovery.
15.4.1. Configuring client logging with RELP
You can use the logging RHEL system role to configure a transfer of log messages stored locally to the remote logging system with RELP.
The RELP configuration uses Transport Layer Security (TLS) to encrypt the message transmission for secure transfer of logs over the network.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
Create a playbook file, for example, ~/playbook.yml, with the following content; a sketch of such a playbook follows the variable descriptions.
The settings specified in the example playbook include the following:
target
- This is a required parameter that specifies the host name where the remote logging system is running.
port
- Port number on which the remote logging system is listening.
tls
- Ensures secure transfer of logs over the network. If you do not want a secure wrapper, you can set the tls variable to false. By default, the tls parameter is set to true while working with RELP and requires the key and certificate triplets {ca_cert, cert, private_key} and/or {ca_cert_src, cert_src, private_key_src}:
- If the {ca_cert_src, cert_src, private_key_src} triplet is set, the default locations /etc/pki/tls/certs and /etc/pki/tls/private are used as the destination on the managed node to transfer files from the control node. In this case, the file names are identical to the original ones in the triplet.
- If the {ca_cert, cert, private_key} triplet is set, files are expected to be on the default path before the logging configuration.
- If both triplets are set, files are transferred from the local path on the control node to the specific path on the managed node.
ca_cert
- Represents the path to the CA certificate. Default path is /etc/pki/tls/certs/ca.pem and the file name is set by the user.
cert
- Represents the path to the certificate. Default path is /etc/pki/tls/certs/server-cert.pem and the file name is set by the user.
private_key
- Represents the path to the private key. Default path is /etc/pki/tls/private/server-key.pem and the file name is set by the user.
ca_cert_src
- Represents the local CA certificate file path which is copied to the managed node. If ca_cert is specified, it is copied to that location.
cert_src
- Represents the local certificate file path which is copied to the managed node. If cert is specified, it is copied to that location.
private_key_src
- Represents the local key file path which is copied to the managed node. If private_key is specified, it is copied to that location.
pki_authmode
- Accepts the authentication mode as name or fingerprint.
permitted_servers
- List of servers that will be allowed by the logging client to connect and send logs over TLS.
inputs
- List of logging input dictionaries.
outputs
- List of logging output dictionaries.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.logging/README.md file on the control node.
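A sketch of such a playbook, using the parameters described above; the host group, target host, port, certificate paths, and permitted server pattern are placeholders:
---
- name: Configure client-side logging with RELP
  hosts: clients
  tasks:
    - name: Send local logs to a RELP server over TLS
      ansible.builtin.include_role:
        name: rhel-system-roles.logging
      vars:
        logging_inputs:
          - name: basic_input
            type: basics
        logging_outputs:
          - name: relp_client
            type: relp
            target: <logging.server.example.com>
            port: 20514
            tls: true
            ca_cert: /etc/pki/tls/certs/ca.pem
            cert: /etc/pki/tls/certs/client-cert.pem
            private_key: /etc/pki/tls/private/client-key.pem
            pki_authmode: name
            permitted_servers:
              - '*.server.example.com'
        logging_flows:
          - name: relp_flow
            inputs: [basic_input]
            outputs: [relp_client]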
Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
15.4.2. Configuring server logging with RELP
You can use the logging RHEL system role to configure a server for receiving log messages from the remote logging system with RELP.
The RELP configuration uses TLS to encrypt the message transmission for secure transfer of logs over the network.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
Create a playbook file, for example, ~/playbook.yml, with the following content; a sketch of such a playbook follows the variable descriptions.
The settings specified in the example playbook include the following:
port
- Port number on which the remote logging system is listening.
tls
- Ensures secure transfer of logs over the network. If you do not want a secure wrapper, you can set the tls variable to false. By default, the tls parameter is set to true while working with RELP and requires the key and certificate triplets {ca_cert, cert, private_key} and/or {ca_cert_src, cert_src, private_key_src}:
- If the {ca_cert_src, cert_src, private_key_src} triplet is set, the default locations /etc/pki/tls/certs and /etc/pki/tls/private are used as the destination on the managed node to transfer files from the control node. In this case, the file names are identical to the original ones in the triplet.
- If the {ca_cert, cert, private_key} triplet is set, files are expected to be on the default path before the logging configuration.
- If both triplets are set, files are transferred from the local path on the control node to the specific path on the managed node.
ca_cert
- Represents the path to the CA certificate. Default path is /etc/pki/tls/certs/ca.pem and the file name is set by the user.
cert
- Represents the path to the certificate. Default path is /etc/pki/tls/certs/server-cert.pem and the file name is set by the user.
private_key
- Represents the path to the private key. Default path is /etc/pki/tls/private/server-key.pem and the file name is set by the user.
ca_cert_src
- Represents the local CA certificate file path which is copied to the managed node. If ca_cert is specified, it is copied to that location.
cert_src
- Represents the local certificate file path which is copied to the managed node. If cert is specified, it is copied to that location.
private_key_src
- Represents the local key file path which is copied to the managed node. If private_key is specified, it is copied to that location.
pki_authmode
- Accepts the authentication mode as name or fingerprint.
permitted_clients
- List of clients that will be allowed by the logging server to connect and send logs over TLS.
inputs
- List of logging input dictionaries.
outputs
- List of logging output dictionaries.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.logging/README.md file on the control node.
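A sketch of such a playbook for the receiving side, using the parameters described above; the host group, port, certificate paths, and permitted client pattern are placeholders:
---
- name: Configure server-side logging with RELP
  hosts: server
  tasks:
    - name: Receive logs from RELP clients over TLS
      ansible.builtin.include_role:
        name: rhel-system-roles.logging
      vars:
        logging_inputs:
          - name: relp_server
            type: relp
            port: 20514
            tls: true
            ca_cert: /etc/pki/tls/certs/ca.pem
            cert: /etc/pki/tls/certs/server-cert.pem
            private_key: /etc/pki/tls/private/server-key.pem
            pki_authmode: name
            permitted_clients:
              - '*.client.example.com'
        logging_outputs:
          - name: remote_files_output
            type: remote_files
        logging_flows:
          - name: relp_flow
            inputs: [relp_server]
            outputs: [remote_files_output]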
Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Chapter 16. Configuring performance monitoring with PCP by using RHEL system roles
Performance Co-Pilot (PCP) is a system performance analysis toolkit. You can use it to record and analyze performance data from many components on a RHEL system. Use the metrics RHEL system role to automate the installation and configuration of PCP, and configure Grafana to visualize PCP metrics.
16.1. Configuring Performance Co-Pilot by using the metrics RHEL system role
You can use Performance Co-Pilot (PCP) to monitor many metrics, such as CPU utilization and memory usage. For example, this can help to identify resource and performance bottlenecks. By using the metrics RHEL system role, you can remotely configure PCP on multiple hosts to record metrics.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
Create a playbook file, for example, ~/playbook.yml, with the following content; a sketch of such a playbook follows the variable descriptions.
The settings specified in the example playbook include the following:
metrics_retention_days: <number>
- Sets the number of days after which the pmlogger_daily systemd timer removes old PCP archives.
metrics_manage_firewall: <true|false>
- Defines whether the role should open the required ports in the firewalld service. If you want to remotely access PCP on the managed nodes, set this variable to true.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.metrics/README.md file on the control node.
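A minimal sketch of such a playbook, using the two variables described above; the host name, the play and task names, and the retention value are placeholders:
---
- name: Manage metrics
  hosts: managed-node-01.example.com
  tasks:
    - name: Configure Performance Co-Pilot
      ansible.builtin.include_role:
        name: rhel-system-roles.metrics
      vars:
        # Keep PCP archives for 14 days and open the required firewall ports
        metrics_retention_days: 14
        metrics_manage_firewall: true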
Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
Query a metric, for example:
# ansible managed-node-01.example.com -m command -a 'pminfo -f kernel.all.load'
Next step
16.2. Configuring Performance Co-Pilot with authentication by using the metrics RHEL system role
You can use the metrics RHEL system role to remotely configure Performance Co-Pilot (PCP) with authentication on multiple hosts.
You can enable authentication in PCP so that the pmcd service and Performance Metrics Domain Agents (PMDAs) can determine whether the user running the monitoring tools is allowed to perform an action. Authenticated users have access to metrics with sensitive information. Additionally, certain agents require authentication. For example, the bpftrace agent uses authentication to identify whether a user is allowed to load bpftrace scripts into the kernel to generate metrics.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
Store your sensitive variables in an encrypted file:
Create the vault:
$ ansible-vault create ~/vault.yml
New Vault password: <vault_password>
Confirm New Vault password: <vault_password>
After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:
metrics_usr: <username>
metrics_pwd: <password>
- Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example, ~/playbook.yml, with the following content; a sketch of such a playbook follows the variable descriptions.
The settings specified in the example playbook include the following:
metrics_retention_days: <number>
- Sets the number of days after which the pmlogger_daily systemd timer removes old PCP archives.
metrics_manage_firewall: <true|false>
- Defines whether the role should open the required ports in the firewalld service. If you want to remotely access PCP on the managed nodes, set this variable to true.
metrics_username: <username>
- The role creates this user locally on the managed node, adds the credentials to the /etc/pcp/passwd.db Simple Authentication and Security Layer (SASL) database, and configures authentication in PCP. Additionally, if you set metrics_from_bpftrace: true in the playbook, PCP uses this account to register bpftrace scripts.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.metrics/README.md file on the control node.
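A minimal sketch of such a playbook that reads the vaulted credentials; the host name and retention value are placeholders, and the metrics_password variable name is an assumption to confirm in the role README:
---
- name: Manage metrics with authentication
  hosts: managed-node-01.example.com
  vars_files:
    - ~/vault.yml
  tasks:
    - name: Configure Performance Co-Pilot with authentication
      ansible.builtin.include_role:
        name: rhel-system-roles.metrics
      vars:
        metrics_retention_days: 14
        metrics_manage_firewall: true
        metrics_username: "{{ metrics_usr }}"
        # metrics_password is assumed from the role README
        metrics_password: "{{ metrics_pwd }}"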
Validate the playbook syntax:
$ ansible-playbook --ask-vault-pass --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook --ask-vault-pass ~/playbook.yml
Verification
On a host with the pcp package installed, query a metric that requires authentication:
Query the metrics by using the credentials that you used in the playbook:
# pminfo -fmdt -h pcp://managed-node-01.example.com?username=<user> proc.fd.count
Password: <password>
proc.fd.count
    inst [844 or "000844 /var/lib/pcp/pmdas/proc/pmdaproc"] value 5
If the command succeeds, it returns the value of the proc.fd.count metric.
Run the command again, but omit the username to verify that the command fails for unauthenticated users:
# pminfo -fmdt -h pcp://managed-node-01.example.com proc.fd.count
proc.fd.count
Error: No permission to perform requested operation
Next step
16.3. Setting up Grafana by using the metrics RHEL system role to monitor multiple hosts with Performance Co-Pilot
If you have configured Performance Co-Pilot (PCP) on multiple hosts, you can use Grafana to visualize the metrics for these hosts. By using the metrics RHEL system role, you can automate the process of setting up Grafana, the PCP plug-in, and the configuration of the data sources.
If you use the metrics role to install Grafana on a host, the role also automatically installs PCP on this host.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- PCP is configured for remote access on the hosts you want to monitor.
- The host on which you want to install Grafana can access port 44321 on the PCP nodes you plan to monitor.
Procedure
Store your sensitive variables in an encrypted file:
Create the vault:
$ ansible-vault create ~/vault.yml
New Vault password: <vault_password>
Confirm New Vault password: <vault_password>
After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:
grafana_admin_pwd: <password>
- Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example, ~/playbook.yml, with the following content; a sketch of such a playbook follows the variable descriptions.
The settings specified in the example playbook include the following:
metrics_graph_service: true
- Installs Grafana and the PCP plug-in. Additionally, the role adds the PCP Vector, PCP Redis, and PCP bpftrace data sources to Grafana.
metrics_query_service: <true|false>
- Defines whether the role should install and configure Redis for centralized metric recording. If enabled, data collected from PCP clients is stored in Redis and, as a result, you can also display historical data instead of only live data.
metrics_monitored_hosts: <list_of_hosts>
- Defines the list of hosts to monitor. In Grafana, you can then display the data of these hosts and, additionally, the host that runs Grafana.
metrics_manage_firewall: <true|false>
- Defines whether the role should open the required ports in the firewalld service. If you set this variable to true, you can, for example, access Grafana remotely.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.metrics/README.md file on the control node.
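A minimal sketch of such a playbook, using the variables described above; the host names are placeholders, and the role variable that consumes the vaulted grafana_admin_pwd value is intentionally not named here because it is documented only in the role README:
---
- name: Set up Grafana with Performance Co-Pilot
  hosts: managed-node-01.example.com
  vars_files:
    - ~/vault.yml
  tasks:
    - name: Install Grafana, the PCP plug-in, and the data sources
      ansible.builtin.include_role:
        name: rhel-system-roles.metrics
      vars:
        metrics_graph_service: true
        metrics_query_service: true
        metrics_monitored_hosts:
          - <managed-node-02.example.com>
          - <managed-node-03.example.com>
        metrics_manage_firewall: true
        # The variable that consumes the vaulted grafana_admin_pwd value is
        # documented in the role README and is not named here.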
Validate the playbook syntax:
$ ansible-playbook --ask-vault-pass --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook --ask-vault-pass ~/playbook.yml
Verification
- Open http://<grafana_server_IP_or_hostname>:3000 in your browser, and log in as the admin user with the password you set in the procedure.
Display monitoring data:
To display live data:
- Click → → →
- By default, the graphs display metrics from the host that runs Grafana. To switch to a different host, enter the hostname in the hostspec field and press Enter.
- To display historical data stored in a Redis database: Create a panel with a PCP Redis data source. This requires that you set metrics_query_service: true in the playbook.
16.4. Configuring web hooks in Performance Co-Pilot by using the metrics RHEL system role
The Performance Co-Pilot (PCP) suite contains the performance metrics inference engine (PMIE) service. This service evaluates performance rules in real time. For example, you can use the default rules to detect excessive swap activities.
You can configure a host as a central PCP management site that collects the monitoring data from multiple PCP nodes. If a rule matches, this central host sends a notification to a web hook to notify other services. For example, the web hook can trigger Event-Driven Ansible to run an Ansible Automation Platform template or playbook on the host that caused the event.
By using the metrics RHEL system role, you can automate the configuration of a central PCP management host that notifies a web hook.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- PCP is configured for remote access on the hosts you want to monitor.
- The host on which you want to configure PMIE can access port 44321 on the PCP nodes you plan to monitor.
Procedure
Create a playbook file, for example, ~/playbook.yml, with the following content; a sketch of such a playbook follows the variable descriptions.
The settings specified in the example playbook include the following:
metrics_retention_days: <number>
- Sets the number of days after which the pmlogger_daily systemd timer removes old PCP archives.
metrics_manage_firewall: <true|false>
- Defines whether the role should open the required ports in the firewalld service. If you want to remotely access PCP on the managed nodes, set this variable to true.
metrics_monitored_hosts: <list_of_hosts>
- Specifies the hosts to observe.
metrics_webhook_endpoint: <URL>
- Sets the web hook endpoint to which the performance metrics inference engine (PMIE) sends notifications about detected performance issues. By default, these issues are logged to the local system only.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.metrics/README.md file on the control node.
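A minimal sketch of such a playbook; the host names, the retention value, and the web hook URL are placeholders:
---
- name: Configure a central PCP management host
  hosts: managed-node-01.example.com
  tasks:
    - name: Send PMIE notifications to a web hook
      ansible.builtin.include_role:
        name: rhel-system-roles.metrics
      vars:
        metrics_retention_days: 14
        metrics_manage_firewall: true
        metrics_monitored_hosts:
          - <managed-node-02.example.com>
          - <managed-node-03.example.com>
        metrics_webhook_endpoint: "http://<eda.example.com>:5000/endpoint"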
Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
Check the configuration summary on managed-node-01.example.com:
The last three lines confirm that PMIE is configured to monitor three systems.
Chapter 17. Configuring NBDE by using RHEL system roles
You can use the nbde_client and nbde_server RHEL system roles for automated deployments of Policy-Based Decryption (PBD) solutions by using Clevis and Tang.
The rhel-system-roles package contains these system roles, the related examples, and the reference documentation.
17.1. Using the nbde_server RHEL system role for setting up multiple Tang servers
By using the nbde_server system role, you can deploy and manage a Tang server as part of an automated disk encryption solution.
This role supports the following features:
- Rotating Tang keys
- Deploying and backing up Tang keys
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
Create a playbook file, for example, ~/playbook.yml, with the following content; a sketch of such a playbook follows the variable descriptions.
This example playbook ensures the deployment of your Tang server and a key rotation.
The settings specified in the example playbook include the following:
nbde_server_manage_firewall: true
- Use the firewall system role to manage ports used by the nbde_server role.
nbde_server_manage_selinux: true
- Use the selinux system role to manage ports used by the nbde_server role.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.nbde_server/README.md file on the control node.
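A minimal sketch of such a playbook; the host name is a placeholder, and the nbde_server_rotate_keys variable for key rotation is an assumption based on the role README:
---
- name: Deploy and manage a Tang server
  hosts: managed-node-01.example.com
  tasks:
    - name: Install Tang and rotate the keys
      ansible.builtin.include_role:
        name: rhel-system-roles.nbde_server
      vars:
        # nbde_server_rotate_keys is assumed from the role README
        nbde_server_rotate_keys: yes
        nbde_server_manage_firewall: true
        nbde_server_manage_selinux: true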
Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
On your NBDE client, verify that your Tang server works correctly by using the following command. The command must return the identical message you pass for encryption and decryption:
# ansible managed-node-01.example.com -m command -a 'echo test | clevis encrypt tang '{"url":"<tang.server.example.com>"}' -y | clevis decrypt'
test
17.2. Setting up Clevis clients with DHCP by using the nbde_client RHEL system role
The nbde_client system role enables you to deploy multiple Clevis clients in an automated way.
This role supports binding a LUKS-encrypted volume to one or more Network-Bound Disk Encryption (NBDE) servers - Tang servers. You can either preserve the existing volume encryption with a passphrase or remove it. After removing the passphrase, you can unlock the volume only by using NBDE. This is useful when a volume is initially encrypted using a temporary key or password that you should remove after you provision the system.
If you provide both a passphrase and a key file, the role uses what you have provided first. If it does not find any of these valid, it attempts to retrieve a passphrase from an existing binding.
Policy-Based Decryption (PBD) defines a binding as a mapping of a device to a slot. This means that you can have multiple bindings for the same device. The default slot is slot 1.
The nbde_client system role supports only Tang bindings. Therefore, you cannot use it for TPM2 bindings.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- A volume that is already encrypted by using LUKS.
Procedure
Create a playbook file, for example, ~/playbook.yml, with the following content; a sketch of such a playbook follows the variable descriptions.
This example playbook configures Clevis clients for automated unlocking of two LUKS-encrypted volumes when at least one of two Tang servers is available.
The settings specified in the example playbook include the following:
state: present
- The values of state indicate the configuration after you run the playbook. Use the present value for either creating a new binding or updating an existing one. Contrary to a clevis luks bind command, you can use state: present also for overwriting an existing binding in its device slot. The absent value removes a specified binding.
nbde_client_early_boot: true
- The nbde_client role ensures that networking for a Tang pin is available during early boot by default. If your scenario requires disabling this feature, add the nbde_client_early_boot: false variable to your playbook.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.nbde_client/README.md file on the control node.
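A sketch of such a playbook for two volumes and two Tang servers; the device paths, the key file path, and the server URLs are placeholders, and the nbde_client_bindings structure is an assumption based on the role README:
---
- name: Configure Clevis clients for automated unlocking
  hosts: managed-node-01.example.com
  tasks:
    - name: Bind two LUKS volumes to two Tang servers
      ansible.builtin.include_role:
        name: rhel-system-roles.nbde_client
      vars:
        # Binding structure assumed from the role README
        nbde_client_bindings:
          - device: /dev/rhel/root
            encryption_key_src: /etc/luks/keyfile
            state: present
            servers:
              - http://server1.example.com
              - http://server2.example.com
          - device: /dev/rhel/swap
            encryption_key_src: /etc/luks/keyfile
            state: present
            servers:
              - http://server1.example.com
              - http://server2.example.com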
Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
On your NBDE client, check that the encrypted volume that should be automatically unlocked by your Tang servers contains the corresponding information in its LUKS pins:
# ansible managed-node-01.example.com -m command -a 'clevis luks list -d /dev/rhel/root'
1: tang '{"url":"<http://server1.example.com/>"}'
2: tang '{"url":"<http://server2.example.com/>"}'
If you do not use the nbde_client_early_boot: false variable, verify that the bindings are available for the early boot, for example:
# ansible managed-node-01.example.com -m command -a 'lsinitrd | grep clevis-luks'
lrwxrwxrwx 1 root root 48 Jan 4 02:56 etc/systemd/system/cryptsetup.target.wants/clevis-luks-askpass.path -> /usr/lib/systemd/system/clevis-luks-askpass.path
…
17.3. Setting up static-IP Clevis clients by using the nbde_client RHEL system role
The nbde_client RHEL system role supports only scenarios with Dynamic Host Configuration Protocol (DHCP). On an NBDE client with static IP configuration, you must pass your network configuration as a kernel boot parameter.
Typically, administrators want to reuse a playbook and not maintain individual playbooks for each host to which Ansible assigns static IP addresses during early boot. In this case, you can use variables in the playbook and provide the settings in an external file. As a result, you need only one playbook and one file with the settings.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- A volume that is already encrypted by using LUKS.
Procedure
Create a file with the network settings of your hosts, for example, static-ip-settings-clients.yml, and add the values you want to dynamically assign to the hosts; a sketch of such a file follows below.
Create a playbook file, for example, ~/playbook.yml, with the following content:
This playbook reads certain values dynamically for each host listed in the ~/static-ip-settings-clients.yml file.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.network/README.md file on the control node.
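One possible layout for the static-ip-settings-clients.yml file; the keys and values shown here are purely illustrative assumptions, and the playbook must reference them, for example through vars_files, using whatever keys you define:
---
clients:
  managed-node-01.example.com:
    interface: enp1s0
    ip_v4: 192.0.2.1/24
    gateway_v4: 192.0.2.254
  managed-node-02.example.com:
    interface: enp1s0
    ip_v4: 192.0.2.2/24
    gateway_v4: 192.0.2.254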
Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Chapter 18. Configuring network settings by using RHEL system roles
By using the network RHEL system role, you can automate network-related configuration and management tasks.
18.1. Configuring an Ethernet connection with a static IP address by using the network RHEL system role with an interface name
You can use the network RHEL system role to configure an Ethernet connection with static IP addresses, gateways, and DNS settings, and assign them to a specified interface name.
To connect a Red Hat Enterprise Linux host to an Ethernet network, create a NetworkManager connection profile for the network device. By using Ansible and the network RHEL system role, you can automate this process and remotely configure connection profiles on the hosts defined in a playbook.
Typically, administrators want to reuse a playbook and not maintain individual playbooks for each host to which Ansible should assign static IP addresses. In this case, you can use variables in the playbook and maintain the settings in the inventory. As a result, you need only one playbook to dynamically assign individual settings to multiple hosts.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- A physical or virtual Ethernet device exists in the server configuration.
- The managed nodes use NetworkManager to configure the network.
Procedure
Edit the ~/inventory file, and append the host-specific settings to the host entries:
managed-node-01.example.com interface=enp1s0 ip_v4=192.0.2.1/24 ip_v6=2001:db8:1::1/64 gateway_v4=192.0.2.254 gateway_v6=2001:db8:1::fffe
managed-node-02.example.com interface=enp1s0 ip_v4=192.0.2.2/24 ip_v6=2001:db8:1::2/64 gateway_v4=192.0.2.254 gateway_v6=2001:db8:1::fffe
Create a playbook file, for example, ~/playbook.yml. This playbook reads certain values dynamically for each host from the inventory file and uses static values in the playbook for settings which are the same for all hosts.
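A minimal sketch of such a playbook follows. It reuses the inventory variables shown above; the DNS server addresses (192.0.2.200 and 2001:db8:1::ffbb) are illustrative assumptions, not values from this procedure:
---
- name: Configure the network
  hosts: managed-node-01.example.com,managed-node-02.example.com
  tasks:
    - name: Ethernet connection profile with static IP address settings
      ansible.builtin.include_role:
        name: rhel-system-roles.network
      vars:
        network_connections:
          # Values in curly braces are read from the inventory for each host
          - name: "{{ interface }}"
            interface_name: "{{ interface }}"
            type: ethernet
            autoconnect: yes
            ip:
              address:
                - "{{ ip_v4 }}"
                - "{{ ip_v6 }}"
              gateway4: "{{ gateway_v4 }}"
              gateway6: "{{ gateway_v6 }}"
              dns:
                - 192.0.2.200
                - 2001:db8:1::ffbb
            state: up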
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.network/README.md file on the control node.
Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
Query the Ansible facts of the managed node and verify the active network settings.
18.2. Configuring an Ethernet connection with a static IP address by using the network RHEL system role with a device path
You can use the network RHEL system role to configure an Ethernet connection with static IP addresses, gateways, and DNS settings, and assign them to a device based on its path instead of its name.
To connect a Red Hat Enterprise Linux host to an Ethernet network, create a NetworkManager connection profile for the network device. By using Ansible and the network RHEL system role, you can automate this process and remotely configure connection profiles on the hosts defined in a playbook.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- A physical or virtual Ethernet device exists in the server’s configuration.
- The managed nodes use NetworkManager to configure the network.
- You know the path of the device. You can display the device path by using the udevadm info /sys/class/net/<device_name> | grep ID_PATH= command.
Procedure
Create a playbook file, for example, ~/playbook.yml, with the following content. The settings specified in the example playbook include the following (a sketch of the playbook follows this list):
match-
Defines that a condition must be met in order to apply the settings. You can only use this variable with the
pathoption. path-
Defines the persistent path of a device. You can set it as a fixed path or an expression. Its value can contain modifiers and wildcards. The example applies the settings to devices that match PCI ID
0000:00:0[1-3].0, but not0000:00:02.0.
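Putting the match and path options together, such a playbook might look like this (a sketch; the profile name, IP addresses, and DNS server are illustrative assumptions, and the &! prefix in the second path entry excludes that device):
---
- name: Configure the network
  hosts: managed-node-01.example.com
  tasks:
    - name: Ethernet connection profile matched by PCI device path
      ansible.builtin.include_role:
        name: rhel-system-roles.network
      vars:
        network_connections:
          - name: example
            # Apply the profile only to devices matching these persistent paths
            match:
              path:
                - pci-0000:00:0[1-3].0
                - "&!pci-0000:00:02.0"
            type: ethernet
            autoconnect: yes
            ip:
              address:
                - 192.0.2.1/24
                - 2001:db8:1::1/64
              gateway4: 192.0.2.254
              gateway6: 2001:db8:1::fffe
              dns:
                - 192.0.2.200
            state: up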
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.network/README.md file on the control node.
Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
Query the Ansible facts of the managed node and verify the active network settings.
18.3. Configuring an Ethernet connection with a dynamic IP address by using the network RHEL system role with an interface name
You can use the network RHEL system role to configure an Ethernet connection that retrieves its IP addresses, gateways, and DNS settings from a DHCP server and IPv6 stateless address autoconfiguration (SLAAC). With this role you can assign the connection profile to the specified interface name.
To connect a Red Hat Enterprise Linux host to an Ethernet network, create a NetworkManager connection profile for the network device. By using Ansible and the network RHEL system role, you can automate this process and remotely configure connection profiles on the hosts defined in a playbook.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- A physical or virtual Ethernet device exists in the servers' configuration.
- A DHCP server and SLAAC are available in the network.
- The managed nodes use the NetworkManager service to configure the network.
Procedure
Create a playbook file, for example, ~/playbook.yml, with the following content. The settings specified in the example playbook include the following (a sketch of the playbook follows this list):
dhcp4: yes- Enables automatic IPv4 address assignment from DHCP, PPP, or similar services.
auto6: yes-
Enables IPv6 auto-configuration. By default, NetworkManager uses Router Advertisements. If the router announces the
managedflag, NetworkManager requests an IPv6 address and prefix from a DHCPv6 server.
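A minimal sketch of such a playbook, assuming the enp1s0 interface name used elsewhere in this chapter:
---
- name: Configure the network
  hosts: managed-node-01.example.com
  tasks:
    - name: Ethernet connection profile with dynamic IP address settings
      ansible.builtin.include_role:
        name: rhel-system-roles.network
      vars:
        network_connections:
          - name: enp1s0
            interface_name: enp1s0
            type: ethernet
            autoconnect: yes
            ip:
              dhcp4: yes
              auto6: yes
            state: up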
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.network/README.md file on the control node.
Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
Query the Ansible facts of the managed node and verify that the interface received IP addresses and DNS settings.
18.4. Configuring an Ethernet connection with a dynamic IP address by using the network RHEL system role with a device path
By using the network RHEL system role, you can configure an Ethernet connection to retrieve its IP addresses, gateways, and DNS settings from a DHCP server and IPv6 stateless address autoconfiguration (SLAAC). The role can assign the profile by the device’s path.
To connect a Red Hat Enterprise Linux host to an Ethernet network, create a NetworkManager connection profile for the network device. By using Ansible and the network RHEL system role, you can automate this process and remotely configure connection profiles on the hosts defined in a playbook.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- A physical or virtual Ethernet device exists in the server’s configuration.
- A DHCP server and SLAAC are available in the network.
- The managed hosts use NetworkManager to configure the network.
- You know the path of the device. You can display the device path by using the udevadm info /sys/class/net/<device_name> | grep ID_PATH= command.
Procedure
Create a playbook file, for example, ~/playbook.yml, with the following content. The settings specified in the example playbook include the following (a sketch of the playbook follows this list):
match: path-
Defines that a condition must be met in order to apply the settings. You can only use this variable with the
pathoption. path: <path_and_expressions>-
Defines the persistent path of a device. You can set it as a fixed path or an expression. Its value can contain modifiers and wildcards. The example applies the settings to devices that match PCI ID
0000:00:0[1-3].0, but not0000:00:02.0. dhcp4: yes- Enables automatic IPv4 address assignment from DHCP, PPP, or similar services.
auto6: yes-
Enables IPv6 auto-configuration. By default, NetworkManager uses Router Advertisements. If the router announces the
managedflag, NetworkManager requests an IPv6 address and prefix from a DHCPv6 server.
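Combining the path match with dynamic IP settings, such a playbook might look like this (a sketch; the profile name is an illustrative assumption):
---
- name: Configure the network
  hosts: managed-node-01.example.com
  tasks:
    - name: Ethernet connection profile matched by PCI path with dynamic IP settings
      ansible.builtin.include_role:
        name: rhel-system-roles.network
      vars:
        network_connections:
          - name: example
            match:
              path:
                - pci-0000:00:0[1-3].0
                - "&!pci-0000:00:02.0"
            type: ethernet
            autoconnect: yes
            ip:
              dhcp4: yes
              auto6: yes
            state: up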
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.network/README.md file on the control node.
Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
Query the Ansible facts of the managed node and verify that the interface received IP addresses and DNS settings.
18.5. Configuring a static Ethernet connection with 802.1X network authentication by using the network RHEL system role
By using the network RHEL system role, you can automate setting up Network Access Control (NAC) on remote hosts. You can define authentication details for clients in a playbook to ensure only authorized clients can access the network.
You can use an Ansible playbook to copy a private key, a certificate, and the CA certificate to the client, and then use the network RHEL system role to configure a connection profile with 802.1X network authentication.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- The network supports 802.1X network authentication.
- The managed nodes use NetworkManager.
The following files required for the TLS authentication exist on the control node:
- The client key is stored in the /srv/data/client.key file.
- The client certificate is stored in the /srv/data/client.crt file.
- The Certificate Authority (CA) certificate is stored in the /srv/data/ca.crt file.
Procedure
Store your sensitive variables in an encrypted file:
Create the vault:
$ ansible-vault create ~/vault.yml
New Vault password: <vault_password>
Confirm New Vault password: <vault_password>
After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:
pwd: <password>
- Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example, ~/playbook.yml, with the following content. The settings specified in the example playbook include the following (a sketch of the playbook follows this list):
ieee802_1x- This variable contains the 802.1X-related settings.
eap: tls-
Configures the profile to use the certificate-based
TLSauthentication method for the Extensible Authentication Protocol (EAP).
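The sketch below combines a copy task for the credentials with a connection profile that uses the ieee802_1x settings. The destination directory on the managed node, the identity placeholder, the static IP values, and the exact ieee802_1x option names are assumptions based on common usage of the role; verify them against the role's README:
---
- name: Configure an Ethernet connection with 802.1X authentication
  hosts: managed-node-01.example.com
  vars_files:
    - ~/vault.yml
  tasks:
    - name: Copy client key, client certificate, and CA certificate to the managed node
      ansible.builtin.copy:
        src: "{{ item }}"
        dest: /etc/pki/tls/
        mode: "0600"
      loop:
        - /srv/data/client.key
        - /srv/data/client.crt
        - /srv/data/ca.crt

    - name: Ethernet connection profile with static IP and 802.1X authentication
      ansible.builtin.include_role:
        name: rhel-system-roles.network
      vars:
        network_connections:
          - name: enp1s0
            type: ethernet
            autoconnect: yes
            ip:
              address:
                - 192.0.2.1/24
              dns:
                - 192.0.2.200
            ieee802_1x:
              identity: <user_name>
              eap: tls
              private_key: /etc/pki/tls/client.key
              private_key_password: "{{ pwd }}"
              client_cert: /etc/pki/tls/client.crt
              ca_cert: /etc/pki/tls/ca.crt
            state: up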
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.network/README.md file on the control node.
Validate the playbook syntax:
$ ansible-playbook --ask-vault-pass --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook --ask-vault-pass ~/playbook.yml
Verification
- Access resources on the network that require network authentication.
18.6. Configuring a wifi connection with 802.1X network authentication by using the network RHEL system role
By using the network RHEL system role, you can automate setting up Network Access Control (NAC) on remote hosts. You can define authentication details for clients in a playbook to ensure only authorized clients can access the network.
You can use an Ansible playbook to copy a private key, a certificate, and the CA certificate to the client, and then use the network RHEL system role to configure a connection profile with 802.1X network authentication.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- The network supports 802.1X network authentication.
- You installed the wpa_supplicant package on the managed node.
- DHCP is available in the network of the managed node.
The following files required for TLS authentication exist on the control node:
- The client key is stored in the /srv/data/client.key file.
- The client certificate is stored in the /srv/data/client.crt file.
- The CA certificate is stored in the /srv/data/ca.crt file.
Procedure
Store your sensitive variables in an encrypted file:
Create the vault:
$ ansible-vault create ~/vault.yml
New Vault password: <vault_password>
Confirm New Vault password: <vault_password>
After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:
pwd: <password>
- Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example, ~/playbook.yml, with the following content. The settings specified in the example playbook include the following:
ieee802_1x- This variable contains the 802.1X-related settings.
eap: tls-
Configures the profile to use the certificate-based
TLSauthentication method for the Extensible Authentication Protocol (EAP).
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.network/README.md file on the control node.
Validate the playbook syntax:
$ ansible-playbook --ask-vault-pass --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook --ask-vault-pass ~/playbook.yml
18.7. Configuring a network bond by using the network RHEL system role
You can use the network RHEL system role to configure a network bond and, if a connection profile for the bond’s parent device does not exist, the role can create it as well.
You can combine network interfaces in a bond to provide a logical interface with higher throughput or redundancy. To configure a bond, create a NetworkManager connection profile. By using Ansible and the network RHEL system role, you can automate this process and remotely configure connection profiles on the hosts defined in a playbook.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- Two or more physical or virtual network devices are installed on the server.
Procedure
Create a playbook file, for example, ~/playbook.yml, with the following content. The settings specified in the example playbook include the following (a sketch of the playbook follows this list):
type: <profile_type>- Sets the type of the profile to create. The example playbook creates three connection profiles: One for the bond and two for the Ethernet devices.
dhcp4: yes- Enables automatic IPv4 address assignment from DHCP, PPP, or similar services.
auto6: yes-
Enables IPv6 auto-configuration. By default, NetworkManager uses Router Advertisements. If the router announces the
managedflag, NetworkManager requests an IPv6 address and prefix from a DHCPv6 server. mode: <bond_mode>Sets the bonding mode. Possible values are:
-
balance-rr(default) -
active-backup -
balance-xor -
broadcast -
802.3ad -
balance-tlb -
balance-alb.
Depending on the mode you set, you need to set additional variables in the playbook.
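For example, a sketch of a playbook that creates an active-backup bond from the enp7s0 and enp8s0 devices; the interface names, profile names, and the controller option for the port profiles are assumptions based on the role's documented schema:
---
- name: Configure the network
  hosts: managed-node-01.example.com
  tasks:
    - name: Bond connection profile with two Ethernet ports
      ansible.builtin.include_role:
        name: rhel-system-roles.network
      vars:
        network_connections:
          # Profile for the bond device; IP settings belong here
          - name: bond0
            type: bond
            interface_name: bond0
            ip:
              dhcp4: yes
              auto6: yes
            bond:
              mode: active-backup
            state: up
          # Profiles for the Ethernet devices that act as ports of the bond
          - name: bond0-port1
            interface_name: enp7s0
            type: ethernet
            controller: bond0
            state: up
          - name: bond0-port2
            interface_name: enp8s0
            type: ethernet
            controller: bond0
            state: up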
-
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.network/README.md file on the control node.
Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
Temporarily remove the network cable from one of the network devices and check if the other device in the bond is handling the traffic.
Note that there is no method to properly test link failure events using software utilities. Tools that deactivate connections, such as nmcli, show only the bonding driver’s ability to handle port configuration changes and not actual link failure events.
18.8. Configuring VLAN tagging by using the network RHEL system role
You can use the network RHEL system role to configure VLAN tagging and, if a connection profile for the VLAN’s parent device does not exist, the role can create it as well.
If your network uses Virtual Local Area Networks (VLANs) to separate network traffic into logical networks, create a NetworkManager connection profile to configure VLAN tagging. By using Ansible and the network RHEL system role, you can automate this process and remotely configure connection profiles on the hosts defined in a playbook.
If the VLAN device requires an IP address, default gateway, and DNS settings, configure them on the VLAN device and not on the parent device.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
Create a playbook file, for example, ~/playbook.yml, with the following content. The settings specified in the example playbook include the following (a sketch of the playbook follows this list):
type: <profile_type>- Sets the type of the profile to create. The example playbook creates two connection profiles: One for the parent Ethernet device and one for the VLAN device.
dhcp4: <value>-
If set to
yes, automatic IPv4 address assignment from DHCP, PPP, or similar services is enabled. Disable the IP address configuration on the parent device. auto6: <value>-
If set to
yes, IPv6 auto-configuration is enabled. In this case, by default, NetworkManager uses Router Advertisements and, if the router announces themanagedflag, NetworkManager requests an IPv6 address and prefix from a DHCPv6 server. Disable the IP address configuration on the parent device. parent: <parent_device>- Sets the parent device of the VLAN connection profile. In the example, the parent is the Ethernet interface.
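For example, a sketch of a playbook that creates VLAN ID 10 on top of the enp1s0 Ethernet device; the VLAN ID and the interface name are illustrative assumptions:
---
- name: Configure the network
  hosts: managed-node-01.example.com
  tasks:
    - name: VLAN connection profile with an Ethernet parent
      ansible.builtin.include_role:
        name: rhel-system-roles.network
      vars:
        network_connections:
          # Parent Ethernet profile without IP configuration
          - name: enp1s0
            type: ethernet
            interface_name: enp1s0
            autoconnect: yes
            ip:
              dhcp4: no
              auto6: no
            state: up
          # VLAN profile that carries the IP configuration
          - name: enp1s0.10
            type: vlan
            vlan:
              id: 10
            parent: enp1s0
            ip:
              dhcp4: yes
              auto6: yes
            state: up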
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.network/README.md file on the control node.
Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
Verify the VLAN settings.
18.9. Configuring a network bridge by using the network RHEL system role
You can use the network RHEL system role to configure a bridge and, if a connection profile for the bridge’s parent device does not exist, the role can create it as well.
You can connect multiple networks on layer 2 of the Open Systems Interconnection (OSI) model by creating a network bridge. To configure a bridge, create a connection profile in NetworkManager. By using Ansible and the network RHEL system role, you can automate this process and remotely configure connection profiles on the hosts defined in a playbook.
If you want to assign IP addresses, gateways, and DNS settings to a bridge, configure them on the bridge and not on its ports.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- Two or more physical or virtual network devices are installed on the server.
Procedure
Create a playbook file, for example, ~/playbook.yml, with the following content. The settings specified in the example playbook include the following (a sketch of the playbook follows this list):
type: <profile_type>- Sets the type of the profile to create. The example playbook creates three connection profiles: One for the bridge and two for the Ethernet devices.
dhcp4: yes- Enables automatic IPv4 address assignment from DHCP, PPP, or similar services.
auto6: yes-
Enables IPv6 auto-configuration. By default, NetworkManager uses Router Advertisements. If the router announces the
managedflag, NetworkManager requests an IPv6 address and prefix from a DHCPv6 server.
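For example, a sketch of a playbook that creates a bridge named bridge0 with enp7s0 and enp8s0 as ports; the interface names and the controller and port_type options for the port profiles are assumptions based on the role's documented schema:
---
- name: Configure the network
  hosts: managed-node-01.example.com
  tasks:
    - name: Bridge connection profile with two Ethernet ports
      ansible.builtin.include_role:
        name: rhel-system-roles.network
      vars:
        network_connections:
          # Profile for the bridge; IP settings belong here, not on the ports
          - name: bridge0
            type: bridge
            interface_name: bridge0
            ip:
              dhcp4: yes
              auto6: yes
            state: up
          # Profiles for the Ethernet devices that act as bridge ports
          - name: bridge0-port1
            interface_name: enp7s0
            type: ethernet
            controller: bridge0
            port_type: bridge
            state: up
          - name: bridge0-port2
            interface_name: enp8s0
            type: ethernet
            controller: bridge0
            port_type: bridge
            state: up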
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.network/README.md file on the control node.
Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
Display the link status of Ethernet devices that are ports of a specific bridge.
Display the status of Ethernet devices that are ports of any bridge device:
# ansible managed-node-01.example.com -m command -a 'bridge link show'
managed-node-01.example.com | CHANGED | rc=0 >>
3: enp7s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master bridge0 state forwarding priority 32 cost 100
4: enp8s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master bridge0 state listening priority 32 cost 100
18.10. Setting the default gateway on an existing connection by using the network RHEL system role
By using the network RHEL system role, you can automate setting the default gateway in a NetworkManager connection profile. With this method, you can remotely configure the default gateway on hosts defined in a playbook.
In most situations, administrators set the default gateway when they create a connection. However, you can also set or update the default gateway setting on a previously-created connection.
You cannot use the network RHEL system role to update only specific values in an existing connection profile. The role ensures that a connection profile exactly matches the settings in a playbook. If a connection profile with the same name already exists, the role applies the settings from the playbook and resets all other settings in the profile to their defaults. To prevent resetting values, always specify the whole configuration of the network connection profile in the playbook, including the settings that you do not want to change.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
Create a playbook file, for example, ~/playbook.yml, with the following content:
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.network/README.md file on the control node.
Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
Query the Ansible facts of the managed node and verify the active network settings.
18.11. Configuring a static route by using the network RHEL system role
You can use the network RHEL system role to configure static routes.
When you run a play that uses the network RHEL system role and if the setting values do not match the values specified in the play, the role overrides the existing connection profile with the same name. To prevent resetting these values to their defaults, always specify the whole configuration of the network connection profile in the play, even if the configuration, for example the IP configuration, already exists.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
Create a playbook file, for example, ~/playbook.yml, with the following content:
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.network/README.md file on the control node.
Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
Display the IPv4 routes:
# ansible managed-node-01.example.com -m command -a 'ip -4 route'
managed-node-01.example.com | CHANGED | rc=0 >>
...
198.51.100.0/24 via 192.0.2.10 dev enp7s0
Display the IPv6 routes:
# ansible managed-node-01.example.com -m command -a 'ip -6 route'
managed-node-01.example.com | CHANGED | rc=0 >>
...
2001:db8:2::/64 via 2001:db8:1::10 dev enp7s0 metric 1024 pref medium
18.12. Routing traffic from a specific subnet to a different default gateway by using the network RHEL system role
You can use policy-based routing to configure a different default gateway for traffic from certain subnets. By using the network RHEL system role, you can automate the creation of the connection profiles, including routing tables and rules.
For example, you can configure RHEL as a router that, by default, routes all traffic to internet provider A using the default route. However, traffic received from the internal workstations subnet is routed to provider B. By using Ansible and the network RHEL system role, you can automate this process and remotely configure connection profiles on the hosts defined in a playbook.
This procedure assumes the following network topology:
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
-
The account you use to connect to the managed nodes has
sudopermissions on them. -
The managed nodes use NetworkManager and the
firewalld service. The managed nodes you want to configure have four network interfaces:
-
The
enp7s0interface is connected to the network of provider A. The gateway IP in the provider’s network is198.51.100.2, and the network uses a/30network mask. -
The
enp1s0interface is connected to the network of provider B. The gateway IP in the provider’s network is192.0.2.2, and the network uses a/30network mask. -
The
enp8s0interface is connected to the10.0.0.0/24subnet with internal workstations. -
The
enp9s0interface is connected to the203.0.113.0/24subnet with the company’s servers.
-
The
-
Hosts in the internal workstations subnet use
10.0.0.1as the default gateway. In the procedure, you assign this IP address to theenp8s0network interface of the router. -
Hosts in the server subnet use
203.0.113.1as the default gateway. In the procedure, you assign this IP address to theenp9s0network interface of the router.
Procedure
Create a playbook file, for example, ~/playbook.yml, with the following content. The settings specified in the example playbook include the following (a sketch of the playbook follows this list):
table: <value>-
Assigns the route from the same list entry as the
tablevariable to the specified routing table. routing_rule: <list>- Defines the priority of the specified routing rule and from a connection profile to which routing table the rule is assigned.
zone: <zone_name>-
Assigns the network interface from a connection profile to the specified
firewalldzone.
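A trimmed sketch showing only two of the four profiles illustrates how the table, routing_rule, and zone options fit together; the remaining profiles for provider A and the server subnet follow the same pattern, and the option names are taken from the role's documented schema, so verify them against the README:
---
- name: Configure policy-based routing
  hosts: managed-node-01.example.com
  tasks:
    - name: Route traffic from the workstation subnet to provider B
      ansible.builtin.include_role:
        name: rhel-system-roles.network
      vars:
        network_connections:
          # Uplink to provider B with its own routing table and rule
          - name: Provider-B
            interface_name: enp1s0
            type: ethernet
            autoconnect: yes
            ip:
              address:
                - 192.0.2.1/30
              route:
                - network: 0.0.0.0
                  prefix: 0
                  gateway: 192.0.2.2
                  table: 5000
              routing_rule:
                - priority: 5
                  from: 10.0.0.0/24
                  table: 5000
            zone: external
            state: up
          # Internal workstations subnet whose traffic uses table 5000
          - name: Internal-Workstations
            interface_name: enp8s0
            type: ethernet
            autoconnect: yes
            ip:
              address:
                - 10.0.0.1/24
            zone: trusted
            state: up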
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.network/README.md file on the control node.
Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
On a RHEL host in the internal workstation subnet:
Install the
traceroutepackage:dnf install traceroute
# dnf install tracerouteCopy to Clipboard Copied! Toggle word wrap Toggle overflow Use the
tracerouteutility to display the route to a host on the internet:traceroute redhat.com
# traceroute redhat.com traceroute to redhat.com (209.132.183.105), 30 hops max, 60 byte packets 1 10.0.0.1 (10.0.0.1) 0.337 ms 0.260 ms 0.223 ms 2 192.0.2.2 (192.0.2.2) 0.884 ms 1.066 ms 1.248 ms ...Copy to Clipboard Copied! Toggle word wrap Toggle overflow The output of the command displays that the router sends packets over
192.0.2.1, which is the network of provider B.
On a RHEL host in the server subnet:
Install the
traceroutepackage:dnf install traceroute
# dnf install tracerouteCopy to Clipboard Copied! Toggle word wrap Toggle overflow Use the
tracerouteutility to display the route to a host on the internet:traceroute redhat.com
# traceroute redhat.com traceroute to redhat.com (209.132.183.105), 30 hops max, 60 byte packets 1 203.0.113.1 (203.0.113.1) 2.179 ms 2.073 ms 1.944 ms 2 198.51.100.2 (198.51.100.2) 1.868 ms 1.798 ms 1.549 ms ...Copy to Clipboard Copied! Toggle word wrap Toggle overflow The output of the command displays that the router sends packets over
198.51.100.2, which is the network of provider A.
On the RHEL router that you configured using the RHEL system role:
Display the rule list:
ip rule list
# ip rule list 0: from all lookup local 5: from 10.0.0.0/24 lookup 5000 32766: from all lookup main 32767: from all lookup defaultCopy to Clipboard Copied! Toggle word wrap Toggle overflow By default, RHEL contains rules for the tables
local,main, anddefault.Display the routes in table
5000:ip route list table 5000
# ip route list table 5000 0.0.0.0/0 via 192.0.2.2 dev enp1s0 proto static metric 100 10.0.0.0/24 dev enp8s0 proto static scope link src 192.0.2.1 metric 102Copy to Clipboard Copied! Toggle word wrap Toggle overflow Display the interfaces and firewall zones:
firewall-cmd --get-active-zones
# firewall-cmd --get-active-zones external interfaces: enp1s0 enp7s0 trusted interfaces: enp8s0 enp9s0Copy to Clipboard Copied! Toggle word wrap Toggle overflow Verify that the
external zone has masquerading enabled.
18.13. Configuring an ethtool offload feature by using the network RHEL system role
You can use the network RHEL system role to automate configuring the TCP offload engine (TOE) to offload the processing of certain operations to the network controller. TOE improves network throughput.
You cannot use the network RHEL system role to update only specific values in an existing connection profile. The role ensures that a connection profile exactly matches the settings in a playbook. If a connection profile with the same name already exists, the role applies the settings from the playbook and resets all other settings in the profile to their defaults. To prevent resetting values, always specify the whole configuration of the network connection profile in the playbook, including the settings that you do not want to change.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
Create a playbook file, for example, ~/playbook.yml, with the following content. The settings specified in the example playbook include the following (a sketch of the playbook follows this list):
gro: no- Disables Generic receive offload (GRO).
gso: yes- Enables Generic segmentation offload (GSO).
tx_sctp_segmentation: no- Disables TX stream control transmission protocol (SCTP) segmentation.
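A sketch of such a playbook; the interface name and IP settings are illustrative assumptions, and, as noted above, the whole connection profile must be specified:
---
- name: Configure the network
  hosts: managed-node-01.example.com
  tasks:
    - name: Ethernet connection profile with ethtool offload features
      ansible.builtin.include_role:
        name: rhel-system-roles.network
      vars:
        network_connections:
          - name: enp1s0
            type: ethernet
            autoconnect: yes
            ip:
              dhcp4: yes
              auto6: yes
            ethtool:
              features:
                gro: no
                gso: yes
                tx_sctp_segmentation: no
            state: up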
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.network/README.md file on the control node.
Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
Query the Ansible facts of the managed node and verify the offload settings.
18.14. Configuring ethtool coalesce settings by using the network RHEL system role
Interrupt coalescing collects network packets and generates a single interrupt for multiple packets. This reduces interrupt load and maximizes throughput. You can automate the configuration of these settings in the NetworkManager connection profile by using the network RHEL system role.
You cannot use the network RHEL system role to update only specific values in an existing connection profile. The role ensures that a connection profile exactly matches the settings in a playbook. If a connection profile with the same name already exists, the role applies the settings from the playbook and resets all other settings in the profile to their defaults. To prevent resetting values, always specify the whole configuration of the network connection profile in the playbook, including the settings that you do not want to change.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
Create a playbook file, for example, ~/playbook.yml, with the following content. The settings specified in the example playbook include the following (a sketch of the playbook follows this list):
rx_frames: <value>- Sets the number of RX frames.
tx_frames: <value>- Sets the number of TX frames.
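A sketch of such a playbook; the interface name, IP settings, and the frame counts of 128 are illustrative assumptions:
---
- name: Configure the network
  hosts: managed-node-01.example.com
  tasks:
    - name: Ethernet connection profile with ethtool coalesce settings
      ansible.builtin.include_role:
        name: rhel-system-roles.network
      vars:
        network_connections:
          - name: enp1s0
            type: ethernet
            autoconnect: yes
            ip:
              dhcp4: yes
              auto6: yes
            ethtool:
              coalesce:
                rx_frames: 128
                tx_frames: 128
            state: up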
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.network/README.md file on the control node.
Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
Display the current coalesce settings of the network device.
18.15. Increasing the ring buffer size to reduce a high packet drop rate by using the network RHEL system role
Increase the size of an Ethernet device’s ring buffers if the packet drop rate causes applications to report a loss of data, timeouts, or other issues.
Ring buffers are circular buffers where an overflow overwrites existing data. The network card assigns a transmit (TX) and receive (RX) ring buffer. Receive ring buffers are shared between the device driver and the network interface controller (NIC). Data can move from NIC to the kernel through either hardware interrupts or software interrupts, also called SoftIRQs.
The kernel uses the RX ring buffer to store incoming packets until the device driver can process them. The device driver drains the RX ring, typically by using SoftIRQs, which puts the incoming packets into a kernel data structure called an sk_buff or skb to begin its journey through the kernel and up to the application that owns the relevant socket.
The kernel uses the TX ring buffer to hold outgoing packets which should be sent to the network. These ring buffers reside at the bottom of the stack and are a crucial point at which packet drop can occur, which in turn will adversely affect network performance.
You configure ring buffer settings in the NetworkManager connection profiles. By using Ansible and the network RHEL system role, you can automate this process and remotely configure connection profiles on the hosts defined in a playbook.
You cannot use the network RHEL system role to update only specific values in an existing connection profile. The role ensures that a connection profile exactly matches the settings in a playbook. If a connection profile with the same name already exists, the role applies the settings from the playbook and resets all other settings in the profile to their defaults. To prevent resetting values, always specify the whole configuration of the network connection profile in the playbook, including the settings that you do not want to change.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- You know the maximum ring buffer sizes that the device supports.
Procedure
Create a playbook file, for example, ~/playbook.yml, with the following content. The settings specified in the example playbook include the following (a sketch of the playbook follows this list):
rx: <value>- Sets the maximum number of received ring buffer entries.
tx: <value>- Sets the maximum number of transmitted ring buffer entries.
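A sketch of such a playbook; the interface name, IP settings, and the buffer sizes of 4096 are illustrative assumptions, so use values that your device supports:
---
- name: Configure the network
  hosts: managed-node-01.example.com
  tasks:
    - name: Ethernet connection profile with increased ring buffer sizes
      ansible.builtin.include_role:
        name: rhel-system-roles.network
      vars:
        network_connections:
          - name: enp1s0
            type: ethernet
            autoconnect: yes
            ip:
              dhcp4: yes
              auto6: yes
            ethtool:
              ring:
                rx: 4096
                tx: 4096
            state: up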
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.network/README.md file on the control node.
Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
Display the maximum ring buffer sizes.
18.16. Configuring an IPoIB connection by using the network RHEL system role
To configure IP over InfiniBand (IPoIB), create a NetworkManager connection profile. You can automate this process by using the network RHEL system role and remotely configure connection profiles on hosts defined in a playbook.
You can use the network RHEL system role to configure IPoIB and, if a connection profile for the InfiniBand’s parent device does not exist, the role can create it as well.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- An InfiniBand device named mlx4_ib0 is installed in the managed nodes.
- The managed nodes use NetworkManager to configure the network.
Procedure
Create a playbook file, for example, ~/playbook.yml, with the following content. The settings specified in the example playbook include the following:
type: <profile_type>- Sets the type of the profile to create. The example playbook creates two connection profiles: One for the InfiniBand connection and one for the IPoIB device.
parent: <parent_device>- Sets the parent device of the IPoIB connection profile.
p_key: <value>-
Sets the InfiniBand partition key. If you set this variable, do not set
interface_nameon the IPoIB device. transport_mode: <mode>-
Sets the IPoIB connection operation mode. You can set this variable to
datagram(default) orconnected.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.network/README.md file on the control node.
Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
Display the IP settings of the mlx4_ib0.8002 device.
Display the partition key (P_Key) of the mlx4_ib0.8002 device:
# ansible managed-node-01.example.com -m command -a 'cat /sys/class/net/mlx4_ib0.8002/pkey'
managed-node-01.example.com | CHANGED | rc=0 >>
0x8002
Display the mode of the mlx4_ib0.8002 device:
# ansible managed-node-01.example.com -m command -a 'cat /sys/class/net/mlx4_ib0.8002/mode'
managed-node-01.example.com | CHANGED | rc=0 >>
datagram
18.17. Network states for the network RHEL system role
The network RHEL system role supports state configurations in playbooks to configure the devices. For this, use the network_state variable followed by the state configurations.
Benefits of using the network_state variable in a playbook:
- Using the declarative method with the state configurations, you can configure interfaces, and the NetworkManager creates a profile for these interfaces in the background.
- With the network_state variable, you can specify the options that you require to change, and all the other options will remain the same as they are. However, with the network_connections variable, you must specify all settings to change the network connection profile.
You can set only Nmstate YAML instructions in network_state. These instructions differ from the variables you can set in network_connections.
For example, to create an Ethernet connection with dynamic IP address settings, use the following vars block in your playbook:
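The two approaches might look as follows (sketches based on the variable names described above; verify the exact option names against the role's README and the Nmstate documentation).
Playbook with state configurations:
  vars:
    network_state:
      interfaces:
        - name: enp7s0
          type: ethernet
          state: up
          ipv4:
            enabled: true
            dhcp: true
          ipv6:
            enabled: true
            autoconf: true
            dhcp: true
Regular playbook:
  vars:
    network_connections:
      - name: enp7s0
        interface_name: enp7s0
        type: ethernet
        autoconnect: yes
        ip:
          dhcp4: yes
          auto6: yes
        state: up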
For example, to only change the connection status of dynamic IP address settings that you created as above, use the following vars block in your playbook:
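For example, with network_state you list only the options that change, while the regular playbook must repeat the full profile (sketches under the same assumptions as above).
Playbook with state configurations:
  vars:
    network_state:
      interfaces:
        # Only the changed options need to be listed
        - name: enp7s0
          type: ethernet
          state: up
Regular playbook:
  vars:
    network_connections:
      - name: enp7s0
        interface_name: enp7s0
        type: ethernet
        autoconnect: yes
        ip:
          dhcp4: yes
          auto6: yes
        state: up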
Chapter 19. Managing containers by using RHEL system roles
With the podman RHEL system role, you can manage Podman configuration, containers, and systemd services that run Podman containers.
19.1. Creating a rootless container with bind mount by using the podman RHEL system role
You can use the podman RHEL system role to create rootless containers with bind mount by running an Ansible playbook and with that, manage your application configuration.
The example Ansible playbook starts two Kubernetes pods: one for a database and another for a web application. The database pod configuration is specified in the playbook, while the web application pod is defined in an external YAML file.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- The user and group webapp exist, and must be listed in the /etc/subuid and /etc/subgid files on the host.
Procedure
Create a playbook file, for example, ~/playbook.yml, with the following content. The settings specified in the example playbook include the following (a sketch of the playbook follows this list):
run_as_userandrun_as_group- Specify that containers are rootless.
kube_file_contentContains a Kubernetes YAML file defining the first container named
db. You can generate the Kubernetes YAML file by using thepodman kube generatecommand.-
The
dbcontainer is based on thequay.io/db/db:stablecontainer image. -
The
dbbind mount maps the/var/lib/dbdirectory on the host to the/var/lib/dbdirectory in the container. TheZflag labels the content with a private unshared label, therefore, only thedbcontainer can access the content.
-
The
kube_file_src: <path>-
Defines the second container. The content of the
/path/to/webapp.ymlfile on the controller node will be copied to thekube_filefield on the managed node. volumes: <list>-
A YAML list to define the source of the data to provide in one or more containers. For example, a local disk on the host (
hostPath) or other disk device. volumeMounts: <list>- A YAML list to define the destination where the individual container will mount a given volume.
podman_create_host_directories: true-
Creates the directory on the host. This instructs the role to check the kube specification for
hostPathvolumes and create those directories on the host. If you need more control over the ownership and permissions, usepodman_host_directories.
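A sketch of such a playbook, using the variable names described above; the dbuser and dbgroup account names and the container image details are hypothetical, and the :Z suffix on the mount path is one common way to request the private SELinux label:
---
- name: Manage rootless containers with bind mounts
  hosts: managed-node-01.example.com
  tasks:
    - name: Create rootless containers from Kubernetes YAML
      ansible.builtin.include_role:
        name: rhel-system-roles.podman
      vars:
        podman_create_host_directories: true
        podman_kube_specs:
          # First pod: defined inline, runs rootless as the db account
          - state: started
            run_as_user: dbuser
            run_as_group: dbgroup
            kube_file_content:
              apiVersion: v1
              kind: Pod
              metadata:
                name: db
              spec:
                containers:
                  - name: db
                    image: quay.io/db/db:stable
                    volumeMounts:
                      - name: db-data
                        mountPath: /var/lib/db:Z
                volumes:
                  - name: db-data
                    hostPath:
                      path: /var/lib/db
          # Second pod: copied from a YAML file on the control node
          - state: started
            run_as_user: webapp
            run_as_group: webapp
            kube_file_src: /path/to/webapp.yml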
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.podman/README.md file on the control node.
Validate the playbook syntax:
$ ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook --ask-vault-pass ~/playbook.yml
19.2. Creating a rootful container with Podman volume by using the podman RHEL system role
You can use the podman RHEL system role to create a rootful container with a Podman volume by running an Ansible playbook and with that, manage your application configuration.
The example Ansible playbook deploys a Kubernetes pod named ubi8-httpd running an HTTP server container from the registry.access.redhat.com/ubi8/httpd-24 image. The container’s web content is mounted from a persistent volume named ubi8-html-volume. By default, the podman role creates rootful containers.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
Create a playbook file, for example, ~/playbook.yml, with the following content. The settings specified in the example playbook include the following:
kube_file_contentContains a Kubernetes YAML file defining the first container named
db. You can generate the Kubernetes YAML file by using thepodman kube generatecommand.-
The
ubi8-httpdcontainer is based on theregistry.access.redhat.com/ubi8/httpd-24container image. -
The
ubi8-html-volumemaps the/var/www/htmldirectory on the host to the container. TheZflag labels the content with a private unshared label, therefore, only theubi8-httpdcontainer can access the content. -
The pod mounts the existing persistent volume named
ubi8-html-volumewith the mount path/var/www/html.
-
The
For details about all variables used in the playbook, see the
/usr/share/ansible/roles/rhel-system-roles.podman/README.mdfile on the control node.Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
19.3. Creating a Quadlet application with secrets by using the podman RHEL system role
You can use the podman RHEL system role to create a Quadlet application with secrets by running an Ansible playbook.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- The certificate and the corresponding private key that the web server in the container should use are stored in the ~/certificate.pem and ~/key.pem files.
Procedure
Display the contents of the certificate and private key files:
You require this information in a later step.
Store your sensitive variables in an encrypted file:
Create the vault:
$ ansible-vault create ~/vault.yml
New Vault password: <vault_password>
Confirm New Vault password: <vault_password>
After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format. Ensure that all lines in the certificate and key variables start with two spaces.
- Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example, ~/playbook.yml, with the following content. The procedure creates a WordPress content management system paired with a MySQL database. The podman_quadlet_specs role variable defines a set of configurations for the Quadlet, which refers to a group of containers or services that work together in a certain way. It includes the following specifications:
The Wordpress network is defined by the
quadlet-demonetwork unit. -
The volume configuration for MySQL container is defined by the
file_src: quadlet-demo-mysql.volumefield. -
The
template_src: quadlet-demo-mysql.container.j2field is used to generate a configuration for the MySQL container. -
Two YAML files follow:
file_src: envoy-proxy-configmap.ymlandfile_src: quadlet-demo.yml. Note that .yml is not a valid Quadlet unit type, therefore these files will just be copied and not processed as a Quadlet specification. -
The Wordpress and envoy proxy containers and configuration are defined by the
file_src: quadlet-demo.kubefield. The kube unit refers to the previous YAML files in the[Kube]section asYaml=quadlet-demo.ymlandConfigMap=envoy-proxy-configmap.yml.
Validate the playbook syntax:
$ ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook --ask-vault-pass ~/playbook.yml
Chapter 20. Configuring Postfix MTA by using RHEL system roles
You can use the postfix RHEL system role to consistently manage configurations of the Postfix mail transfer agent (MTA) in an automated fashion.
Deploying Postfix configurations is helpful when you need, for example:
- Stable mail server: enables system administrators to configure a fast and scalable server for sending and receiving emails.
- Secure communication: supports features such as TLS encryption, authentication, domain blacklisting, and more, to ensure safe email transmission.
- Improved email management and routing: implements filters and rules so that you have control over your email traffic.
The postfix_conf dictionary holds key-value pairs of the supported Postfix configuration parameters. Keys that Postfix does not recognize as supported are ignored. The postfix RHEL system role passes the key-value pairs that you provide in the postfix_conf dictionary directly to Postfix without verifying their syntax or limiting them. Therefore, the role is especially useful to those who are familiar with Postfix and know how to configure it.
20.1. Configuring Postfix as a null client for only sending outgoing emails
You can use the postfix RHEL system role to automate configuring Postfix as a null client for sending outgoing emails.
A null client is a special configuration, where the Postfix server is set up only to send outgoing emails, but not receive any incoming emails. Such a setup is widely used in scenarios where you need to send notifications, alerts, or logs; but receiving or managing emails is not needed.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
Create a playbook file, for example, ~/playbook.yml, with the following content:
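For example, a minimal playbook might look like the following sketch. The host name and the relay domain values are illustrative, and the postfix_files entry format (name, content, postmap) is an assumption to verify against the role README:
---
- name: Configure Postfix as a null client
  hosts: managed-node-01.example.com
  tasks:
    - name: Configure Postfix to send outgoing email only
      ansible.builtin.include_role:
        name: rhel-system-roles.postfix
      vars:
        postfix_conf:
          myhostname: server.example.com
          myorigin: $mydomain
          relayhost: smtp.example.com
          inet_interfaces: loopback-only
          mydestination: ""
          relay_domains: "hash:/etc/postfix/relay_domains"
        # Creates /etc/postfix/relay_domains and converts it into a lookup table
        postfix_files:
          - name: relay_domains
            postmap: true
            content: |
              example.com OK
              example.net OK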
The settings specified in the example playbook include the following:
myhostname: <server.example.com>
The internet hostname of this mail system. Defaults to the fully-qualified domain name (FQDN).
myorigin: $mydomain
The domain name that locally-posted mail appears to come from and that locally posted mail is delivered to. Defaults to $myhostname.
relayhost: <smtp.example.com>
The next-hop destination(s) for non-local mail; overrides non-local domains in recipient addresses. Defaults to an empty field.
inet_interfaces: loopback-only
Defines which network interfaces the Postfix server listens on for incoming email connections. It controls whether and how the Postfix server accepts email from the network.
mydestination
Defines which domains and hostnames are considered local.
relay_domains: "hash:/etc/postfix/relay_domains"
Specifies the domains that Postfix can forward emails to when it is acting as a relay server (SMTP relay). In this case the domains will be generated by the postfix_files variable. On RHEL 10, you have to use relay_domains: "lmdb:/etc/postfix/relay_domains".
postfix_files
Defines a list of files that will be placed in the /etc/postfix/ directory. Those files can be converted into Postfix lookup tables if needed. In this case postfix_files generates domain names for the SMTP relay.
For details about the role variables and the Postfix configuration parameters used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.postfix/README.md file and the postconf(5) manual page on the control node.
Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Chapter 21. Installing and configuring a PostgreSQL database server by using RHEL system roles
You can use the postgresql RHEL system role to automate the installation and management of the PostgreSQL database server. By default, this role also optimizes PostgreSQL by automatically configuring performance-related settings in the PostgreSQL service configuration files.
21.1. Configuring PostgreSQL with an existing TLS certificate by using the postgresql RHEL system role
You can configure PostgreSQL with TLS encryption using the postgresql RHEL system role to automate secure database setup with existing certificates and private keys.
The postgresql role cannot open ports in the firewalld service. To allow remote access to the PostgreSQL server, add a task that uses the firewall RHEL system role to your playbook.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- Both the private key of the managed node and the certificate are stored on the control node in the following files:
  - Private key: ~/<FQDN_of_the_managed_node>.key
  - Certificate: ~/<FQDN_of_the_managed_node>.crt
Procedure
Store your sensitive variables in an encrypted file:
Create the vault:
$ ansible-vault create ~/vault.yml
New Vault password: <vault_password>
Confirm New Vault password: <vault_password>
After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:
pwd: <password>
- Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example, ~/playbook.yml, with the following content:
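A minimal sketch of such a playbook follows. The PostgreSQL version, the /etc/postgresql_tls/ path, the file ownership and modes, the dictionary form of postgresql_server_conf, and the pg_hba entry keys (type, database, user, address, auth_method) are assumptions to verify against the role README; the inventory host name is expected to be the FQDN of the managed node:
---
- name: Install and configure PostgreSQL with an existing TLS certificate
  hosts: managed-node-01.example.com
  vars_files:
    - vault.yml
  tasks:
    - name: Create a directory for the TLS files
      ansible.builtin.file:
        path: /etc/postgresql_tls
        state: directory
        mode: "0755"

    - name: Ensure the postgres group exists so that the key can be group-readable
      ansible.builtin.group:
        name: postgres
        system: true

    - name: Copy the certificate and the private key to the managed node
      ansible.builtin.copy:
        src: "~/{{ inventory_hostname }}.{{ item }}"
        dest: "/etc/postgresql_tls/{{ inventory_hostname }}.{{ item }}"
        owner: root
        group: postgres
        mode: "0640"
      loop:
        - crt
        - key

    - name: Install and configure PostgreSQL
      ansible.builtin.include_role:
        name: rhel-system-roles.postgresql
      vars:
        postgresql_version: "16"
        postgresql_password: "{{ pwd }}"
        postgresql_cert_name: "/etc/postgresql_tls/{{ inventory_hostname }}"
        postgresql_server_conf:
          listen_addresses: "'*'"
        postgresql_pg_hba_conf:
          - { type: local, database: all, user: all, auth_method: peer }
          - { type: hostssl, database: all, user: all, address: "127.0.0.1/32", auth_method: scram-sha-256 }
          - { type: hostssl, database: all, user: all, address: "::1/128", auth_method: scram-sha-256 }
          - { type: hostssl, database: all, user: all, address: "192.0.2.0/24", auth_method: scram-sha-256 }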
The settings specified in the example playbook include the following:
postgresql_version: <version>
Sets the version of PostgreSQL to install. The version you can set depends on the PostgreSQL versions that are available in Red Hat Enterprise Linux running on the managed node.
You cannot upgrade or downgrade PostgreSQL by changing the postgresql_version variable and running the playbook again.
postgresql_password: <password>
Sets the password of the postgres database superuser.
You cannot change the password by changing the postgresql_password variable and running the playbook again.
postgresql_cert_name: <private_key_and_certificate_file>
Defines the path and base name of both the certificate and private key on the managed node without the .crt and .key suffixes. During the PostgreSQL configuration, the role creates symbolic links in the /var/lib/pgsql/data/ directory that refer to these files.
The certificate and private key must exist locally on the managed node. You can use tasks with the ansible.builtin.copy module to transfer the files from the control node to the managed node, as shown in the playbook.
postgresql_server_conf: <list_of_settings>
Defines postgresql.conf settings the role should set. The role adds these settings to the /etc/postgresql/system-roles.conf file and includes this file at the end of /var/lib/pgsql/data/postgresql.conf. Consequently, settings from the postgresql_server_conf variable override settings in /var/lib/pgsql/data/postgresql.conf.
Re-running the playbook with different settings in postgresql_server_conf overwrites the /etc/postgresql/system-roles.conf file with the new settings.
postgresql_pg_hba_conf: <list_of_authentication_entries>
Configures client authentication entries in the /var/lib/pgsql/data/pg_hba.conf file. For details, see the PostgreSQL documentation.
The example allows the following connections to PostgreSQL:
- Unencrypted connections by using local UNIX domain sockets.
- TLS-encrypted connections to the IPv4 and IPv6 localhost addresses.
- TLS-encrypted connections from the 192.0.2.0/24 subnet. Note that access from remote addresses is only possible if you also configure the listen_addresses setting in the postgresql_server_conf variable appropriately.
Re-running the playbook with different settings in postgresql_pg_hba_conf overwrites the /var/lib/pgsql/data/pg_hba.conf file with the new settings.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.postgresql/README.md file on the control node.
Validate the playbook syntax:
$ ansible-playbook --ask-vault-pass --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook --ask-vault-pass ~/playbook.yml
Verification
Use the postgres superuser to connect to a PostgreSQL server and execute the \conninfo meta command:
# psql "postgresql://postgres@managed-node-01.example.com:5432" -c '\conninfo'
Password for user postgres:
You are connected to database "postgres" as user "postgres" on host "192.0.2.1" at port "5432".
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, compression: off)
If the output displays a TLS protocol version and cipher details, the connection works and TLS encryption is enabled.
21.2. Configuring PostgreSQL with a TLS certificate issued from IdM by using the postgresql RHEL system role
You can configure PostgreSQL with TLS encryption using the postgresql RHEL system role to automate secure database setup with certificates issued from Identity Management (IdM) and managed by the certmonger service.
The postgresql role cannot open ports in the firewalld service. To allow remote access to the PostgreSQL server, add a task to your playbook that uses the firewall RHEL system role.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- You enrolled the managed node in an IdM domain.
Procedure
Store your sensitive variables in an encrypted file:
Create the vault:
$ ansible-vault create ~/vault.yml
New Vault password: <vault_password>
Confirm New Vault password: <vault_password>
After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:
pwd: <password>
- Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example, ~/playbook.yml, with the following content:
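A minimal sketch of such a playbook follows. The PostgreSQL version, the certificate name, the Kerberos principal and realm, the dictionary form of postgresql_server_conf, and the pg_hba entry keys are illustrative assumptions to verify against the role README:
---
- name: Install and configure PostgreSQL with an IdM-issued certificate
  hosts: managed-node-01.example.com
  vars_files:
    - vault.yml
  tasks:
    - name: Install and configure PostgreSQL
      ansible.builtin.include_role:
        name: rhel-system-roles.postgresql
      vars:
        postgresql_version: "16"
        postgresql_password: "{{ pwd }}"
        # Settings passed to the certificate role; the certificate is issued by the
        # IdM CA and renewed by the certmonger service
        postgresql_certificates:
          - name: postgresql_cert
            dns: "{{ ansible_facts['fqdn'] }}"
            ca: ipa
            principal: "postgresql/{{ ansible_facts['fqdn'] }}@EXAMPLE.COM"
        postgresql_server_conf:
          listen_addresses: "'*'"
        postgresql_pg_hba_conf:
          - { type: local, database: all, user: all, auth_method: peer }
          - { type: hostssl, database: all, user: all, address: "127.0.0.1/32", auth_method: scram-sha-256 }
          - { type: hostssl, database: all, user: all, address: "::1/128", auth_method: scram-sha-256 }
          - { type: hostssl, database: all, user: all, address: "192.0.2.0/24", auth_method: scram-sha-256 }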
The settings specified in the example playbook include the following:
postgresql_version: <version>
Sets the version of PostgreSQL to install. The version you can set depends on the PostgreSQL versions that are available in Red Hat Enterprise Linux running on the managed node.
You cannot upgrade or downgrade PostgreSQL by changing the postgresql_version variable and running the playbook again.
postgresql_password: <password>
Sets the password of the postgres database superuser.
You cannot change the password by changing the postgresql_password variable and running the playbook again.
postgresql_certificates: <certificate_role_settings>
A list of YAML dictionaries with settings for the certificate role.
postgresql_server_conf: <list_of_settings>
Defines postgresql.conf settings you want the role to set. The role adds these settings to the /etc/postgresql/system-roles.conf file and includes this file at the end of /var/lib/pgsql/data/postgresql.conf. Consequently, settings from the postgresql_server_conf variable override settings in /var/lib/pgsql/data/postgresql.conf.
Re-running the playbook with different settings in postgresql_server_conf overwrites the /etc/postgresql/system-roles.conf file with the new settings.
postgresql_pg_hba_conf: <list_of_authentication_entries>
Configures client authentication entries in the /var/lib/pgsql/data/pg_hba.conf file. For details, see the PostgreSQL documentation.
The example allows the following connections to PostgreSQL:
- Unencrypted connections by using local UNIX domain sockets.
- TLS-encrypted connections to the IPv4 and IPv6 localhost addresses.
- TLS-encrypted connections from the 192.0.2.0/24 subnet. Note that access from remote addresses is only possible if you also configure the listen_addresses setting in the postgresql_server_conf variable appropriately.
Re-running the playbook with different settings in postgresql_pg_hba_conf overwrites the /var/lib/pgsql/data/pg_hba.conf file with the new settings.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.postgresql/README.md file on the control node.
Validate the playbook syntax:
$ ansible-playbook --ask-vault-pass --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook --ask-vault-pass ~/playbook.yml
Verification
Use the postgres superuser to connect to a PostgreSQL server and execute the \conninfo meta command:
# psql "postgresql://postgres@managed-node-01.example.com:5432" -c '\conninfo'
Password for user postgres:
You are connected to database "postgres" as user "postgres" on host "192.0.2.1" at port "5432".
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, compression: off)
If the output displays a TLS protocol version and cipher details, the connection works and TLS encryption is enabled.
Chapter 22. Registering the system by using RHEL system roles
The rhc RHEL system role enables administrators to automate the registration of multiple systems with Red Hat Subscription Management (RHSM) and Satellite servers. The role also supports Insights-related configuration and management tasks by using Ansible.
By default, when you register a system by using rhc, the system is connected to Red Hat Insights. Additionally, with rhc, you can:
- Configure connections to Red Hat Insights
- Enable and disable repositories
- Configure the proxy to use for the connection
- Configure Insights remediations and auto updates
- Set the release of the system
- Configure Insights tags
22.1. Registering a system by using the rhc RHEL system role
You can register multiple systems at scale with Red Hat Subscription Management (RHSM) by using the rhc RHEL system role. By default, rhc connects the system to Red Hat Insights when you register it.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
Store your sensitive variables in an encrypted file:
Create the vault:
$ ansible-vault create ~/vault.yml
New Vault password: <vault_password>
Confirm New Vault password: <vault_password>
After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:
activationKey: <activation_key>
organizationID: <organizationID>
username: <username>
password: <password>
- Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example, ~/playbook.yml, with the following content:
To register by using an activation key and organization ID (recommended), use the following playbook:
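A minimal sketch, assuming the rhel-system-roles.rhc role name, an illustrative host, and the rhc_auth.activation_keys.keys variable layout described in the role README:
---
- name: Register the system by using an activation key
  hosts: managed-node-01.example.com
  vars_files:
    - vault.yml
  vars:
    rhc_auth:
      activation_keys:
        keys:
          - "{{ activationKey }}"
    rhc_organization: "{{ organizationID }}"
  roles:
    - rhel-system-roles.rhc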
The settings specified in the example playbook include the following:
rhc_auth: activation_keys
The key activation_keys specifies that you want to register by using the activation keys.
To register by using a username and password, use the following playbook:
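A minimal sketch of the username and password variant, with the same assumptions as above:
---
- name: Register the system by using a username and password
  hosts: managed-node-01.example.com
  vars_files:
    - vault.yml
  vars:
    rhc_auth:
      login:
        username: "{{ username }}"
        password: "{{ password }}"
  roles:
    - rhel-system-roles.rhc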
The settings specified in the example playbook include the following:
rhc_auth: login
The key login specifies that you want to register by using the username and password.
Validate the playbook syntax:
$ ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook --ask-vault-pass ~/playbook.yml
22.2. Registering a system with Satellite by using the rhc RHEL system role
When organizations use Satellite to manage systems, it is necessary to register the system through Satellite. You can remotely register your system with Satellite by using the rhc RHEL system role.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
Store your sensitive variables in an encrypted file:
Create the vault:
$ ansible-vault create ~/vault.yml
New Vault password: <vault_password>
Confirm New Vault password: <vault_password>
After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:
activationKey: <activation_key>
organizationID: <organizationID>
- Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example, ~/playbook.yml, with the following content:
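A minimal sketch of such a playbook. The rhc_server grouping of the hostname, port, and prefix values is an assumption to verify against the role README:
---
- name: Register the system with Satellite
  hosts: managed-node-01.example.com
  vars_files:
    - vault.yml
  vars:
    rhc_auth:
      activation_keys:
        keys:
          - "{{ activationKey }}"
    rhc_organization: "{{ organizationID }}"
    # The Satellite server used for registration and content
    rhc_server:
      hostname: example.com
      port: 443
      prefix: /rhsm
    rhc_baseurl: http://example.com/pulp/content
  roles:
    - rhel-system-roles.rhc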
The settings specified in the example playbook include the following:
hostname: example.com
A fully qualified domain name (FQDN) of the Satellite server for system registration and package management.
port: 443
Defines the network port used for communication with the Satellite server.
prefix: /rhsm
Specifies the URL path prefix for accessing resources on the Satellite server.
rhc_baseurl: http://example.com/pulp/content
Defines the prefix for content URLs. In a Satellite environment, the baseurl must be set to the same server where the system is registered. Refer to the hostname value to ensure the correct server is used.
Validate the playbook syntax:
$ ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook --ask-vault-pass ~/playbook.yml
22.3. Disabling the connection to Insights after the registration by using the rhc RHEL system role
When you register a system by using the rhc RHEL system role, the role enables the connection to Red Hat Insights by default. If it is not required, you can disable the connection by using the rhc RHEL system role.
Red Hat Insights is a managed service in the Hybrid Cloud Console that uses predictive analytics, remediation capabilities, and deep domain expertise to simplify complex operational tasks.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- You have registered the system.
Procedure
Create a playbook file, for example, ~/playbook.yml, with the following content:
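A minimal sketch, assuming that the state key under rhc_insights controls the Insights connection as described in the role README:
---
- name: Disable the connection to Red Hat Insights
  hosts: managed-node-01.example.com
  vars:
    rhc_insights:
      state: absent
  roles:
    - rhel-system-roles.rhc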
The settings specified in the example playbook include the following:
rhc_insights: state: absent|present
Enables or disables system registration with Red Hat Insights for proactive analytics and recommendations.
Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
22.4. Managing repositories by using the rhc RHEL system role
Enabling repositories on a RHEL system is essential for accessing, installing, and updating software packages from verified sources. You can remotely enable or disable repositories on managed nodes by using the rhc RHEL system role to ensure system security, stability, and compatibility.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- You have details of the repositories that you want to enable or disable on the managed nodes.
- You have registered the system.
Procedure
Create a playbook file, for example, ~/playbook.yml, with the following content:
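A minimal sketch, assuming the rhc_repositories list format with name and state keys; the repository names are placeholders:
---
- name: Enable and disable repositories
  hosts: managed-node-01.example.com
  vars:
    rhc_repositories:
      - name: RepositoryName
        state: enabled
      - name: AnotherRepositoryName
        state: disabled
  roles:
    - rhel-system-roles.rhc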
The settings specified in the example playbook include the following:
name: RepositoryName
Name of the repository that should be enabled.
state: enabled|disabled
Optional: enables or disables the repository. Default is enabled.
Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
22.5. Locking the system to a particular release by using the rhc RHEL system role
You can lock your system to a specific RHEL release to maintain stability and prevent unintended updates in production environments.
To ensure system stability and compatibility, it is sometimes necessary to limit the RHEL system to use only repositories from a specific minor version rather than automatically upgrading to the latest available release. Locking the system to a particular minor version helps maintain consistency in production environments, which prevents unintended updates that might introduce compatibility issues.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- You know the RHEL version to which you want to lock the system. Note that you can only lock the system to the RHEL minor version that the managed node currently runs or a later minor version.
- You have registered the system.
Procedure
Create a playbook file, for example, ~/playbook.yml, with the following content:
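A minimal sketch; the release value 9.4 is illustrative and must match a minor version that is available to the managed node:
---
- name: Lock the system to a RHEL minor release
  hosts: managed-node-01.example.com
  vars:
    rhc_release: "9.4"
  roles:
    - rhel-system-roles.rhc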
The settings specified in the example playbook include the following:
rhc_release: version
The version of RHEL to set for the system, so the available content will be limited to that version.
Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
22.6. Using a proxy server when registering the host by using the rhc RHEL system role
If your security restrictions allow access to the Internet only through a proxy server, you can specify the proxy settings in the rhc RHEL system role when you register the system.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
Store your sensitive variables in an encrypted file:
Create the vault:
$ ansible-vault create ~/vault.yml
New Vault password: <vault_password>
Confirm New Vault password: <vault_password>
After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:
username: <username>
password: <password>
proxy_username: <proxy_username>
proxy_password: <proxy_password>
- Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example, ~/playbook.yml, with the following content:
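A minimal sketch; the rhc_proxy grouping of the proxy settings is an assumption to verify against the role README:
---
- name: Register the system through a proxy server
  hosts: managed-node-01.example.com
  vars_files:
    - vault.yml
  vars:
    rhc_auth:
      login:
        username: "{{ username }}"
        password: "{{ password }}"
    rhc_proxy:
      hostname: proxy.example.com
      port: 3128
      username: "{{ proxy_username }}"
      password: "{{ proxy_password }}"
  roles:
    - rhel-system-roles.rhc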
The settings specified in the example playbook include the following:
hostname: proxy.example.com
A fully qualified domain name (FQDN) of the proxy server.
port: 3128
Defines the network port used for communication with the proxy server.
username: proxy_username
Specifies the username for authentication. This is required only if the proxy server requires authentication.
password: proxy_password
Specifies the password for authentication. This is required only if the proxy server requires authentication.
Validate the playbook syntax:
$ ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook --ask-vault-pass ~/playbook.yml
22.7. Managing auto updates of Insights rules by using the rhc RHEL system role
You can enable or disable the automatic collection rule updates for Red Hat Insights by using the rhc RHEL system role. By default, when you connect your system to Red Hat Insights, this option is enabled. You can disable it by using rhc.
If you disable this feature, you risk using outdated rule definition files and not getting the most recent validation updates.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- You have registered the system.
Procedure
Store your sensitive variables in an encrypted file:
Create the vault:
$ ansible-vault create ~/vault.yml
New Vault password: <vault_password>
Confirm New Vault password: <vault_password>
After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:
username: <username>
password: <password>
- Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example, ~/playbook.yml, with the following content:
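A minimal sketch, assuming that the autoupdate key lives under rhc_insights together with state:
---
- name: Disable automatic updates of Insights collection rules
  hosts: managed-node-01.example.com
  vars_files:
    - vault.yml
  vars:
    rhc_auth:
      login:
        username: "{{ username }}"
        password: "{{ password }}"
    rhc_insights:
      autoupdate: false
      state: present
  roles:
    - rhel-system-roles.rhc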
The settings specified in the example playbook include the following:
autoupdate: true|false
Enables or disables the automatic collection rule updates for Red Hat Insights.
Validate the playbook syntax:
$ ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook --ask-vault-pass ~/playbook.yml
22.8. Configuring Insights remediations by using the rhc RHEL system role
You can use the rhc RHEL system role to configure Insights remediations on your systems. When you connect your system to Red Hat Insights, the remediation capability is enabled by default.
You can use the rhc role to ensure your system is ready for remediation when connected directly to Red Hat. For more information about Red Hat Insights remediations, see the Red Hat Insights Remediations Guide.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- You have Insights remediations enabled.
- You have registered the system.
Procedure
Create a playbook file, for example, ~/playbook.yml, with the following content:
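A minimal sketch; the remediation key under rhc_insights is an assumption that depends on the role version, so verify it against the role README before use:
---
- name: Configure Insights remediations
  hosts: managed-node-01.example.com
  vars:
    rhc_insights:
      remediation: present
      state: present
  roles:
    - rhel-system-roles.rhc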
Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
22.9. Configuring Insights tags by using the rhc RHEL system role
You can use the rhc RHEL system role to configure Red Hat Insights tags. With these tags you can efficiently filter and group systems based on attributes, such as their location. This simplifies automation and enhances security compliance across large infrastructures.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
Store your sensitive variables in an encrypted file:
Create the vault:
$ ansible-vault create ~/vault.yml
New Vault password: <vault_password>
Confirm New Vault password: <vault_password>
After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:
username: <username>
password: <password>
- Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example, ~/playbook.yml, with the following content:
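A minimal sketch; the tags structure under rhc_insights mirrors the settings described below and can be extended with any keys you need, and the description value is a placeholder:
---
- name: Configure Insights tags
  hosts: managed-node-01.example.com
  vars_files:
    - vault.yml
  vars:
    rhc_auth:
      login:
        username: "{{ username }}"
        password: "{{ password }}"
    rhc_insights:
      tags:
        group: group-name-value
        location: location-name-value
        description: <system_description>
      state: present
  roles:
    - rhel-system-roles.rhc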
The settings specified in the example playbook include the following:
group: group-name-value
Specifies the system group for organizing and managing registered hosts.
location: location-name-value
Defines the location associated with the registered system.
description
Provides a brief summary or identifier for the registered system.
state: present|absent
Indicates the current status of the registered system.
Note: The content inside tags is a YAML structure representing the tags desired by the administrator for the configured systems. The example provided here is for illustrative purposes only and is not exhaustive. Administrators can customize the YAML structure to include any additional keys and values as needed.
Validate the playbook syntax:
$ ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook --ask-vault-pass ~/playbook.yml
22.10. Unregistering a system by using the rhc RHEL system role
You can use the rhc RHEL system role to unregister the system from the Red Hat subscription service if you no longer want to receive content from the registration server on a specific system, for example, when decommissioning the system, deleting a VM, or switching to a local content mirror.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- The system is already registered.
Procedure
Create a playbook file, for example, ~/playbook.yml, with the following content:
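A minimal sketch, assuming the rhel-system-roles.rhc role name and an illustrative host:
---
- name: Unregister the system
  hosts: managed-node-01.example.com
  vars:
    rhc_state: absent
  roles:
    - rhel-system-roles.rhc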
The settings specified in the example playbook include the following:
rhc_state: absent
Specifies that the system should be unregistered from the registration server, RHSM, or Satellite.
Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Chapter 23. Remote management with IPMI and Redfish by using the rhel_mgmt collection
With the Intelligent Platform Management Interface (IPMI) and the Redfish API, administrators can remotely manage hosts even if the operating system is not running. The rhel_mgmt Ansible collection provides modules that use IPMI and Redfish to perform certain remote operations.
23.1. Setting the boot device by using the rhel_mgmt.ipmi_boot module
You can set the boot device of a host by using the ipmi_boot module of the redhat.rhel_mgmt collection. This module uses the Intelligent Platform Management Interface (IPMI) to perform this operation.
When you use this Ansible module, three hosts are involved: the control node, the managed node, and the host with the baseboard management controller (BMC) on which the actual IPMI operation is applied. The control node executes the playbook on the managed node. The managed host connects to the remote BMC to execute the IPMI operation. For example, if you set hosts: managed-node-01.example.com and name: server.example.com in the playbook, then managed-node-01.example.com changes the setting by using IPMI on server.example.com.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
-
The account you use to connect to the managed nodes has
sudopermissions on them. -
The
ansible-collection-redhat-rhel_mgmtpackage is installed on the control node. - You have credentials to access the BMC, and these credentials have permissions to change settings.
- The managed node can access the remote BMC over the network.
Procedure
Store your sensitive variables in an encrypted file:
Create the vault:
$ ansible-vault create ~/vault.yml
New Vault password: <vault_password>
Confirm New Vault password: <vault_password>
After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:
ipmi_usr: <username>
ipmi_pwd: <password>
- Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example,
~/playbook.yml, with the following content:Copy to Clipboard Copied! Toggle word wrap Toggle overflow The settings specified in the example playbook include the following:
name: <bmc_hostname_or_ip_address>- Defines the hostname or IP address of the BMC. This is the BMC of the host on which the managed node performs the action.
port: <bmc_port_number>-
Sets the Remote Management Control Protocol (RMCP) port number. The default is
623. bootdev: <value>Sets the boot device. You can select one of the following values:
-
hd: Boots from the hard disk. -
network: Boots from network. -
optical: Boots from an optical drive, such as a DVD-ROM. -
floppy: Boots from a floppy disk. -
safe: Boots from hard drive in safe mode. -
setup: Boots into the BIOS or UEFI. -
default: Removes any IPMI-directed boot device request.
-
persistent: <true|false>-
Configures whether the remote host uses the defined setting for all future boots or only for the next one. By default, this variable is set to
false. Note that not all BMCs support setting the boot device persistently.
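A playbook that uses these settings might look like the following sketch. The host and BMC values are illustrative, and the user and password parameter names are assumptions to confirm with the ansible-doc command mentioned below:
---
- name: Manage the boot device over IPMI
  hosts: managed-node-01.example.com
  vars_files:
    - vault.yml
  tasks:
    - name: Set the boot device for the next boot
      redhat.rhel_mgmt.ipmi_boot:
        name: <bmc_hostname_or_ip_address>
        user: "{{ ipmi_usr }}"
        password: "{{ ipmi_pwd }}"
        bootdev: network
        persistent: false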
For details about all variables used in the playbook, use the ansible-doc redhat.rhel_mgmt.ipmi_boot command on the control node to display the documentation of the module.
Validate the playbook syntax:
$ ansible-playbook --ask-vault-pass --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook --ask-vault-pass ~/playbook.yml
23.2. Setting the system power state by using the rhel_mgmt.ipmi_power module
You can set the hardware state by using the ipmi_power module of the redhat.rhel_mgmt collection. For example, you can ensure that a host is powered on or hard-reset it without involvement of the operating system.
The ipmi_power module uses the Intelligent Platform Management Interface (IPMI) to perform operations.
When you use this Ansible module, three hosts are involved: the control node, the managed node, and the host with the baseboard management controller (BMC) on which the actual IPMI operation is applied. The control node executes the playbook on the managed node. The managed host connects to the remote BMC to execute the IPMI operation. For example, if you set hosts: managed-node-01.example.com and name: server.example.com in the playbook, then managed-node-01.example.com changes the setting by using IPMI on server.example.com.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
-
The account you use to connect to the managed nodes has
sudopermissions on them. -
The
ansible-collection-redhat-rhel_mgmtpackage is installed on the control node. - You have credentials to access the BMC, and these credentials have permissions to change settings.
- The managed node can access the remote BMC over the network.
Procedure
Store your sensitive variables in an encrypted file:
Create the vault:
$ ansible-vault create ~/vault.yml
New Vault password: <vault_password>
Confirm New Vault password: <vault_password>
After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:
ipmi_usr: <username>
ipmi_pwd: <password>
- Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example,
~/playbook.yml, with the following content:Copy to Clipboard Copied! Toggle word wrap Toggle overflow The settings specified in the example playbook include the following:
name: <bmc_hostname_or_ip_address>- Defines the hostname or IP address of the BMC. This is the BMC of the host on which the managed node performs the action.
port: <bmc_port_number>-
Sets the Remote Management Control Protocol (RMCP) port number. The default is
623. state: <value>Sets the state which the device should be in. You can select one of the following values:
-
on: Powers on the system. -
off: Powers off the system without notifying the operating system. -
shutdown: Requests a shutdown from the operating system. -
reset: Performs a hard reset. -
boot: Powers on the system if it was switched off, or resets the system if it was switched on.
-
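A playbook that uses these settings might look like the following sketch. The host and BMC values are illustrative, and the user and password parameter names are assumptions to confirm with the ansible-doc command mentioned below; note that the "on" value must be quoted so that YAML does not interpret it as a boolean:
---
- name: Manage the system power state over IPMI
  hosts: managed-node-01.example.com
  vars_files:
    - vault.yml
  tasks:
    - name: Ensure that the system is powered on
      redhat.rhel_mgmt.ipmi_power:
        name: <bmc_hostname_or_ip_address>
        user: "{{ ipmi_usr }}"
        password: "{{ ipmi_pwd }}"
        state: "on"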
For details about all variables used in the playbook, use the ansible-doc redhat.rhel_mgmt.ipmi_power command on the control node to display the documentation of the module.
Validate the playbook syntax:
$ ansible-playbook --ask-vault-pass --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook --ask-vault-pass ~/playbook.yml
23.3. Managing out-of-band controllers by using the rhel_mgmt.redfish_command module
You can send commands to the Redfish API to remotely manage out-of-band (OOB) controllers by using the redfish_command module of the redhat.rhel_mgmt collection. With this module, you can perform a large number of management operations.
For example, you can perform the following operations:
- Performing power management actions
- Managing virtual media
- Managing users of the OOB controller
- Updating the firmware
When you use this Ansible module, three hosts are involved: the control node, the managed node, and the host with the OOB controller on which the actual operation is performed. The control node executes the playbook on the managed node, and the managed host connects to the remote OOB controller by using the Redfish API to execute the operation. For example, if you set hosts: managed-node-01.example.com and baseuri: server.example.com in the playbook, then managed-node-01.example.com executes the operation on server.example.com.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
-
The account you use to connect to the managed nodes has
sudopermissions on them. -
The
ansible-collection-redhat-rhel_mgmtpackage is installed on the control node. - You have credentials to access the OOB controller, and these credentials have permissions to change settings.
- The managed node can access the remote OOB controller over the network.
Procedure
Store your sensitive variables in an encrypted file:
Create the vault:
$ ansible-vault create ~/vault.yml
New Vault password: <vault_password>
Confirm New Vault password: <vault_password>
After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:
redfish_usr: <username>
redfish_pwd: <password>
- Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example,
~/playbook.yml, with the following content:Copy to Clipboard Copied! Toggle word wrap Toggle overflow The settings specified in the example playbook include the following:
baseuri: <uri>- Defines the URI of the OOB controller. This is the OOB controller of the host on which the managed node performs the action.
category: <value>Sets the category of the command to execute. The following categories are available:
-
Accounts: Manages user accounts of the OOB controller. -
Chassis: Manages chassis-related settings. -
Manager: Provides access to Redfish services. -
Session: Manages Redfish login sessions. -
Systems(default): Manages machine-related settings. -
Update: Manages firmware update-related actions.
-
command: <command>- Sets the command to execute. Depending on the command, it can be necessary to set additional variables.
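For example, a power management action can be expressed as in the following sketch. The host and URI values are illustrative, and the PowerGracefulRestart command and the username and password parameter names are assumptions to confirm with the ansible-doc command mentioned below:
---
- name: Manage an out-of-band controller by using Redfish
  hosts: managed-node-01.example.com
  vars_files:
    - vault.yml
  tasks:
    - name: Gracefully restart the system through the OOB controller
      redhat.rhel_mgmt.redfish_command:
        baseuri: <uri>
        username: "{{ redfish_usr }}"
        password: "{{ redfish_pwd }}"
        category: Systems
        command: PowerGracefulRestart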
For details about all variables used in the playbook, use the ansible-doc redhat.rhel_mgmt.redfish_command command on the control node to display the documentation of the module.
Validate the playbook syntax:
$ ansible-playbook --ask-vault-pass --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook --ask-vault-pass ~/playbook.yml
23.4. Querying information from out-of-band controllers by using the rhel_mgmt.redfish_info module
You can remotely query information from out-of-band (OOB) controllers through the Redfish API by using the redfish_info module of the redhat.rhel_mgmt collection. To display the returned value, register a variable with the fetched information, and display the content of this variable.
When you use this Ansible module, three hosts are involved: the control node, the managed node, and the host with the OOB controller on which the actual operation is performed. The control node executes the playbook on the managed node, and the managed host connects to the remote OOB controller by using the Redfish API to execute the operation. For example, if you set hosts: managed-node-01.example.com and baseuri: server.example.com in the playbook, then managed-node-01.example.com executes the operation on server.example.com.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
-
The account you use to connect to the managed nodes has
sudopermissions on them. -
The
ansible-collection-redhat-rhel_mgmtpackage is installed on the control node. - You have credentials to access the OOB controller, and these credentials have permissions to query settings.
- The managed node can access the remote OOB controller over the network.
Procedure
Store your sensitive variables in an encrypted file:
Create the vault:
$ ansible-vault create ~/vault.yml
New Vault password: <vault_password>
Confirm New Vault password: <vault_password>
After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:
redfish_usr: <username>
redfish_pwd: <password>
- Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example,
~/playbook.yml, with the following content:Copy to Clipboard Copied! Toggle word wrap Toggle overflow The settings specified in the example playbook include the following:
baseuri: <uri>- Defines the URI of the OOB controller. This is the OOB controller of the host on which the managed node performs the action.
category: <value>Sets the category of the information to query. The following categories are available:
-
Accounts: User accounts of the OOB controller -
Chassis: Chassis-related settings -
Manager: Redfish services -
Session: Redfish login sessions -
Systems(default): Machine-related settings -
Update: Firmware-related settings -
All: Information from all categories.
You can also set multiple categories if you use a list, for example
["Systems", "Accounts"].-
command: <command>- Sets the query command to execute.
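For example, a query can be expressed as in the following sketch. The host and URI values are illustrative, and the GetSystemInventory command and the username and password parameter names are assumptions to confirm with the ansible-doc command mentioned below:
---
- name: Query an out-of-band controller by using Redfish
  hosts: managed-node-01.example.com
  vars_files:
    - vault.yml
  tasks:
    - name: Fetch the system inventory from the OOB controller
      redhat.rhel_mgmt.redfish_info:
        baseuri: <uri>
        username: "{{ redfish_usr }}"
        password: "{{ redfish_pwd }}"
        category: Systems
        command: GetSystemInventory
      register: result

    - name: Display the fetched information
      ansible.builtin.debug:
        var: result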
For details about all variables used in the playbook, use the ansible-doc redhat.rhel_mgmt.redfish_info command on the control node to display the documentation of the module.
Validate the playbook syntax:
$ ansible-playbook --ask-vault-pass --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook --ask-vault-pass ~/playbook.yml
23.5. Managing BIOS, UEFI, and out-of-band controllers by using the rhel_mgmt.redfish_config module
You can configure BIOS, UEFI, and out-of-band (OOB) controllers settings through the Redfish API by using the redfish_config module of the redhat.rhel_mgmt collection. This enables you to modify the settings remotely with Ansible.
When you use this Ansible module, three hosts are involved: the control node, the managed node, and the host with the OOB controller on which the actual operation is performed. The control node executes the playbook on the managed node, and the managed host connects to the remote OOB controller by using the Redfish API to execute the operation. For example, if you set hosts: managed-node-01.example.com and baseuri: server.example.com in the playbook, then managed-node-01.example.com executes the operation on server.example.com.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
-
The account you use to connect to the managed nodes has
sudopermissions on them. -
The
ansible-collection-redhat-rhel_mgmtpackage is installed on the control node. - You have credentials to access the OOB controller, and these credentials have permissions to change settings.
- The managed node can access the remote OOB controller over the network.
Procedure
Store your sensitive variables in an encrypted file:
Create the vault:
$ ansible-vault create ~/vault.yml
New Vault password: <vault_password>
Confirm New Vault password: <vault_password>
After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:
redfish_usr: <username>
redfish_pwd: <password>
- Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example,
~/playbook.yml, with the following content:Copy to Clipboard Copied! Toggle word wrap Toggle overflow The settings specified in the example playbook include the following:
baseuri: <uri>- Defines the URI of the OOB controller. This is the OOB controller of the host on which the managed node performs the action.
category: <value>Sets the category of the command to execute. The following categories are available:
-
Accounts: Manages user accounts of the OOB controller. -
Chassis: Manages chassis-related settings. -
Manager: Provides access to Redfish services. -
Session: Manages Redfish login sessions. -
Systems(default): Manages machine-related settings. -
Update: Manages firmware update-related actions.
-
command: <command>- Sets the command to execute. Depending on the command, it can be necessary to set additional variables.
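For example, a BIOS change can be expressed as in the following sketch. The host and URI values are illustrative, and the SetBiosAttributes command, the bios_attributes parameter, and the vendor-specific BootMode attribute are assumptions to confirm with the ansible-doc command mentioned below and with the hardware documentation:
---
- name: Configure an out-of-band controller by using Redfish
  hosts: managed-node-01.example.com
  vars_files:
    - vault.yml
  tasks:
    - name: Set BIOS attributes through the OOB controller
      redhat.rhel_mgmt.redfish_config:
        baseuri: <uri>
        username: "{{ redfish_usr }}"
        password: "{{ redfish_pwd }}"
        category: Systems
        command: SetBiosAttributes
        bios_attributes:
          BootMode: Uefi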
For details about all variables used in the playbook, use the ansible-doc redhat.rhel_mgmt.redfish_config command on the control node to display the documentation of the module.
Validate the playbook syntax:
$ ansible-playbook --ask-vault-pass --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook --ask-vault-pass ~/playbook.yml
Chapter 24. Configuring SELinux by using RHEL system roles
You can remotely configure and manage SELinux permissions by using the selinux RHEL system role.
For example, use the selinux role for the following tasks:
- Cleaning local policy modifications related to SELinux booleans, file contexts, ports, and logins.
- Setting SELinux policy booleans, file contexts, ports, and logins.
- Restoring file contexts on specified files or directories.
- Managing SELinux modules.
24.1. Restoring the SELinux context on directories by using the selinux RHEL system role
To remotely reset the SELinux context on directories, you can use the selinux RHEL system role. With an incorrect SELinux context, applications can fail to access the files.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
-
The account you use to connect to the managed nodes has
sudo permissions on them.
Procedure
Create a playbook file, for example, ~/playbook.yml, with the following content:
The settings specified in the example playbook include the following:
selinux_restore_dirs: <list>
Defines the list of directories on which the role should reset the SELinux context.
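For example (the directory paths, the host name, and the rhel-system-roles.selinux role name are illustrative assumptions):
---
- name: Restore SELinux contexts
  hosts: managed-node-01.example.com
  tasks:
    - name: Reset the SELinux context on the listed directories
      ansible.builtin.include_role:
        name: rhel-system-roles.selinux
      vars:
        selinux_restore_dirs:
          - /var/www
          - /srv/example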
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.selinux/README.md file on the control node.
Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
Display the SELinux context for files or directories for which you have reset the context. For example, to display the context on the /var/www/ directory, enter:
# ansible rhel9.example.com -m command -a 'ls -ldZ /var/www/'
drwxr-xr-x. 4 root root system_u:object_r:httpd_sys_content_t:s0 33 Feb 28 13:20 /var/www/
24.2. Managing SELinux network port labels by using the selinux RHEL system role Copy linkLink copied to clipboard!
If you want to run a service on a non-standard port, you must set the corresponding SELinux type label on this port so that SELinux does not deny the service access to the port. By using the selinux RHEL system role, you can automate this task and remotely assign type labels to ports.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
-
The account you use to connect to the managed nodes has
sudopermissions on them.
Procedure
Create a playbook file, for example,
~/playbook.yml, with the following content:
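For example, the following sketch labels TCP port 8090 with the http_port_t type. The port number and host name are illustrative, and the keys of the selinux_ports entry follow the role documentation:

---
- name: Manage SELinux port labels
  hosts: managed-node-01.example.com
  tasks:
    - name: Allow a service to use a non-standard port
      ansible.builtin.include_role:
        name: rhel-system-roles.selinux
      vars:
        selinux_ports:
          - ports: "8090"
            proto: tcp
            setype: http_port_t
            state: present

The settings specified in the example playbook include the following: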
ports: <port_number>- Defines the port numbers to which you want to assign the SELinux label. Separate multiple values with commas.
setype: <type_label>- Defines the SELinux type label.
For details about all variables used in the playbook, see the
/usr/share/ansible/roles/rhel-system-roles.selinux/README.mdfile on the control node.Validate the playbook syntax:
ansible-playbook --syntax-check ~/playbook.yml
$ ansible-playbook --syntax-check ~/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
ansible-playbook ~/playbook.yml
$ ansible-playbook ~/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow
Verification
Display the port numbers that have the
http_port_tlabel assigned:ansible managed-node-01.example.com -m shell -a 'semanage port --list | grep http_port_t'
# ansible managed-node-01.example.com -m shell -a 'semanage port --list | grep http_port_t' http_port_t tcp 80, 81, 443, <port_number>, 488, 8008, 8009, 8443, 9000Copy to Clipboard Copied! Toggle word wrap Toggle overflow
24.3. Deploying an SELinux module by using the selinux RHEL system role Copy linkLink copied to clipboard!
If the default SELinux policies do not meet your requirements, you can create custom modules to allow your application to access the required resources. By using the selinux RHEL system role, you can automate this process and remotely deploy SELinux modules.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
-
The account you use to connect to the managed nodes has
sudopermissions on them. - The SELinux module you want to deploy is stored in the same directory as the playbook.
The SELinux module is available in the Common Intermediate Language (CIL) or policy package (PP) format.
If you are using a PP module, ensure that
the policydb version on the managed nodes is the same as or later than the version used to build the PP module.
Procedure
Create a playbook file, for example,
~/playbook.yml, with the following content:
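For example, the following sketch deploys a CIL module that is stored next to the playbook. The module file name and the priority value are placeholders:

---
- name: Manage SELinux modules
  hosts: managed-node-01.example.com
  tasks:
    - name: Install a custom SELinux module
      ansible.builtin.include_role:
        name: rhel-system-roles.selinux
      vars:
        selinux_modules:
          - path: my_module.cil
            priority: 300
            state: enabled

The settings specified in the example playbook include the following: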
path: <module_file>- Sets the path to the module file on the control node.
priority: <value>-
Sets the SELinux module priority.
400is the default. state: <value>Defines the state of the module:
-
enabled: Install or enable the module. -
disabled: Disable a module. -
absent: Remove a module.
-
For details about all variables used in the playbook, see the
/usr/share/ansible/roles/rhel-system-roles.selinux/README.mdfile on the control node.Validate the playbook syntax:
ansible-playbook --syntax-check ~/playbook.yml
$ ansible-playbook --syntax-check ~/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
ansible-playbook ~/playbook.yml
$ ansible-playbook ~/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow
Verification
Remotely display the list of SELinux modules and filter for the one you used in the playbook:
ansible managed-node-01.example.com -m shell -a 'semodule -l | grep <module>'
# ansible managed-node-01.example.com -m shell -a 'semodule -l | grep <module>'Copy to Clipboard Copied! Toggle word wrap Toggle overflow If the module is listed, it is installed and enabled.
Chapter 25. Configuring the OpenSSH server and client by using RHEL system roles Copy linkLink copied to clipboard!
You can use the sshd RHEL system role to configure OpenSSH servers and the ssh RHEL system role to configure OpenSSH clients consistently, in an automated fashion, and on any number of RHEL systems at the same time.
Such configurations are necessary for any system where secure remote interaction is needed, for example:
- Remote system administration: securely connecting to your machine from another computer using an SSH client.
- Secure file transfers: the Secure File Transfer Protocol (SFTP) provided by OpenSSH enables you to securely transfer files between your local machine and a remote system.
- Automated DevOps pipelines: automating software deployments that require secure connection to remote servers (CI/CD pipelines).
- Tunneling and port forwarding: forwarding a local port to access a web service on a remote server behind a firewall, for example, a remote database or a development server.
- Key-based authentication: a more secure alternative to password-based logins.
- Certificate-based authentication: centralized trust management and better scalability.
- Enhanced security: disabling root logins, restricting user access, enforcing strong encryption, and other such forms of hardening ensure stronger system security.
25.1. How the sshd RHEL system role maps settings from a playbook to the configuration file Copy linkLink copied to clipboard!
In the sshd RHEL system role playbook, you can define the parameters for the server SSH configuration file. If you do not specify these settings, the role produces the sshd_config file that matches the RHEL defaults.
In all cases, booleans correctly render as yes and no in the final configuration on your managed nodes. You can use lists to define multi-line configuration items. For example:
sshd_ListenAddress:
- 0.0.0.0
- '::'
renders as:
ListenAddress 0.0.0.0
ListenAddress ::
25.2. Configuring OpenSSH servers by using the sshd RHEL system role Copy linkLink copied to clipboard!
You can use the sshd RHEL system role to configure multiple OpenSSH servers for secure remote access.
The role ensures a secure communication environment for remote users by providing the following:
- Management of incoming SSH connections from remote clients
- Credentials verification
- Secure data transfer and command execution
You can use the sshd RHEL system role alongside other RHEL system roles that change the SSH server configuration, for example, the Identity Management RHEL system roles. To prevent the configuration from being overwritten, ensure that the sshd RHEL system role uses namespaces (RHEL 8 and earlier versions) or a drop-in directory (RHEL 9 and later).
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
-
The account you use to connect to the managed nodes has
sudopermissions on them.
Procedure
Create a playbook file, for example,
~/playbook.yml, with the following content:
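For example, the following sketch disables password and root login globally and allows both only from the 192.0.2.0/24 subnet. The host name is illustrative:

---
- name: SSH server configuration
  hosts: managed-node-01.example.com
  tasks:
    - name: Configure the OpenSSH server
      ansible.builtin.include_role:
        name: rhel-system-roles.sshd
      vars:
        sshd:
          PermitRootLogin: no
          PasswordAuthentication: no
          Match:
            - Condition: "Address 192.0.2.0/24"
              PermitRootLogin: yes
              PasswordAuthentication: yes

The settings specified in the example playbook include the following: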
PasswordAuthentication: yes|no-
Controls whether the OpenSSH server (
sshd) accepts authentication from clients that use the username and password combination. Match:-
The Match block allows the root user to log in with a password only from the 192.0.2.0/24 subnet.
For details about the role variables and the OpenSSH configuration options used in the playbook, see the
/usr/share/ansible/roles/rhel-system-roles.sshd/README.mdfile and thesshd_config(5)manual page on the control node.Validate the playbook syntax:
ansible-playbook --syntax-check ~/playbook.yml
$ ansible-playbook --syntax-check ~/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
ansible-playbook ~/playbook.yml
$ ansible-playbook ~/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow
Verification
Log in to the SSH server:
ssh <username>@<ssh_server>
$ ssh <username>@<ssh_server>Copy to Clipboard Copied! Toggle word wrap Toggle overflow Verify the contents of the
sshd_configfile on the SSH server:Copy to Clipboard Copied! Toggle word wrap Toggle overflow Check that you can connect to the server as root from the
192.0.2.0/24subnet:Determine your IP address:
$ hostname -I
192.0.2.1
If the IP address is within the
192.0.2.1-192.0.2.254range, you can connect to the server.Connect to the server as
root:ssh root@<ssh_server>
$ ssh root@<ssh_server>Copy to Clipboard Copied! Toggle word wrap Toggle overflow
25.3. Using the sshd RHEL system role for non-exclusive configuration Copy linkLink copied to clipboard!
By default, applying the sshd RHEL system role overwrites the entire configuration. This may be problematic if you have previously adjusted the configuration with a different playbook. You can use the non-exclusive configuration to apply changes only to selected configuration options.
You can apply a non-exclusive configuration:
- In RHEL 8 and earlier by using a configuration snippet.
-
In RHEL 9 and later by using files in a drop-in directory. The default configuration file is already placed in the drop-in directory as
/etc/ssh/sshd_config.d/00-ansible_system_role.conf.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
-
The account you use to connect to the managed nodes has
sudopermissions on them.
Procedure
Create a playbook file, for example,
~/playbook.yml, with the following content:
For managed nodes that run RHEL 8 or earlier:
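The following sketch writes the AcceptEnv options into a namespaced snippet; the namespace value is illustrative:

---
- name: Non-exclusive sshd configuration
  hosts: managed-node-01.example.com
  tasks:
    - name: Configure sshd within a namespace
      ansible.builtin.include_role:
        name: rhel-system-roles.sshd
      vars:
        sshd_config_namespace: my-application
        sshd:
          AcceptEnv: LANG LS_COLORS EDITOR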
For managed nodes that run RHEL 9 or later:
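The following sketch writes the same options into a dedicated drop-in file; the file name is illustrative:

---
- name: Non-exclusive sshd configuration
  hosts: managed-node-01.example.com
  tasks:
    - name: Configure sshd in a drop-in file
      ansible.builtin.include_role:
        name: rhel-system-roles.sshd
      vars:
        sshd_config_file: /etc/ssh/sshd_config.d/42-my-application.conf
        sshd:
          AcceptEnv: LANG LS_COLORS EDITOR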
The settings specified in the example playbooks include the following:
sshd_config_namespace: <my-application>- The role places the configuration that you specify in the playbook into configuration snippets in the existing configuration file under the given namespace. You need to select a different namespace when running the role from a different context.
sshd_config_file: /etc/ssh/sshd_config.d/<42-my-application>.conf-
In the
sshd_config_filevariable, define the.conffile into which thesshdsystem role writes the configuration options. Use a two-digit prefix, for example42-to specify the order in which the configuration files will be applied. AcceptEnv:Controls which environment variables the OpenSSH server (
sshd) will accept from a client:-
LANG: defines the language and locale settings. -
LS_COLORS: defines the displaying color scheme for thelscommand in the terminal. -
EDITOR: specifies the default text editor for the command-line programs that need to open an editor.
-
For details about the role variables and the OpenSSH configuration options used in the playbook, see the
/usr/share/ansible/roles/rhel-system-roles.sshd/README.mdfile and thesshd_config(5)manual page on the control node.
Validate the playbook syntax:
ansible-playbook --syntax-check ~/playbook.yml
$ ansible-playbook --syntax-check ~/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
ansible-playbook ~/playbook.yml
$ ansible-playbook ~/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow
Verification
Verify the configuration on the SSH server:
For managed nodes that run RHEL 8 or earlier:
For managed nodes that run RHEL 9 or later:
# cat /etc/ssh/sshd_config.d/42-my-application.conf
# Ansible managed
#
AcceptEnv LANG LS_COLORS EDITOR
25.4. Overriding the system-wide cryptographic policy on an SSH server by using the sshd RHEL system role Copy linkLink copied to clipboard!
When the default cryptographic settings do not meet certain security or compatibility needs, you may want to override the system-wide cryptographic policy on the OpenSSH server by using the sshd RHEL system role.
Override the system-wide cryptographic policy in the following notable situations:
- Compatibility with older clients: necessity to use weaker-than-default encryption algorithms, key exchange protocols, or ciphers.
- Enforcing stronger security policies: you can disable weaker algorithms and enforce requirements that exceed the default system-wide cryptographic policies, for example, in highly secure and regulated environments.
- Performance considerations: the system defaults could enforce stronger algorithms that can be computationally intensive for some systems.
- Customizing for specific security needs: adapting for unique requirements that are not covered by the default cryptographic policies.
It is not possible to override all aspects of the cryptographic policies from the sshd RHEL system role. For example, SHA1 signatures might be forbidden on a different layer so for a more generic solution, see Setting a custom cryptographic policy by using RHEL system roles.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
-
The account you use to connect to the managed nodes has
sudopermissions on them.
Procedure
Create a playbook file, for example,
~/playbook.yml, with the following content:
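For example, the following sketch selects specific algorithms; the chosen values are illustrative, and the two sshd_sysconfig* variables are only required on RHEL 8 managed nodes, as described after the settings list:

---
- name: SSH server configuration
  hosts: managed-node-01.example.com
  tasks:
    - name: Override the system-wide cryptographic policy for sshd
      ansible.builtin.include_role:
        name: rhel-system-roles.sshd
      vars:
        sshd_sysconfig: true
        sshd_sysconfig_override_crypto_policy: true
        sshd_KexAlgorithms: ecdh-sha2-nistp521
        sshd_Ciphers: aes256-ctr
        sshd_MACs: hmac-sha2-512
        sshd_HostKeyAlgorithms: ecdsa-sha2-nistp521

The settings specified in the example playbook include the following: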
sshd_KexAlgorithms-
You can choose key exchange algorithms, for example,
ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group14-sha1, ordiffie-hellman-group-exchange-sha256. sshd_Ciphers-
You can choose ciphers, for example,
aes128-ctr,aes192-ctr, oraes256-ctr. sshd_MACs-
You can choose MACs, for example,
hmac-sha2-256,hmac-sha2-512, orhmac-sha1. sshd_HostKeyAlgorithms-
You can choose a public key algorithm, for example,
ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521, orssh-rsa.
For details about all variables used in the playbook, see the
/usr/share/ansible/roles/rhel-system-roles.sshd/README.mdfile on the control node.On RHEL 9 managed nodes, the system role writes the configuration into the
/etc/ssh/sshd_config.d/00-ansible_system_role.conffile, where cryptographic options are applied automatically. You can change the file by using thesshd_config_filevariable. However, to ensure the configuration is effective, use a file name that lexicographically precedes the/etc/ssh/sshd_config.d/50-redhat.conffile, which includes the configured crypto policies.On RHEL 8 managed nodes, you must enable override by setting the
sshd_sysconfig_override_crypto_policyandsshd_sysconfigvariables totrue.Validate the playbook syntax:
ansible-playbook --syntax-check ~/playbook.yml
$ ansible-playbook --syntax-check ~/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
ansible-playbook ~/playbook.yml
$ ansible-playbook ~/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow
Verification
You can verify the success of the procedure by opening a verbose SSH connection and checking the defined variables in the following output:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
25.5. How the ssh RHEL system role maps settings from a playbook to the configuration file Copy linkLink copied to clipboard!
In the ssh RHEL system role playbook, you can define the parameters for the client SSH configuration file. If you do not specify these settings, the role produces a global ssh_config file that matches the RHEL defaults.
In all cases, booleans correctly render as yes or no in the final configuration on your managed nodes. You can use lists to define multi-line configuration items. For example:
LocalForward:
- 22 localhost:2222
- 403 localhost:4003
renders as:
LocalForward 22 localhost:2222
LocalForward 403 localhost:4003
The configuration options are case sensitive.
25.6. Configuring OpenSSH clients by using the ssh RHEL system role Copy linkLink copied to clipboard!
You can use the ssh RHEL system role to configure multiple OpenSSH clients.
OpenSSH clients enable the local user to establish a secure connection with the remote OpenSSH server by providing the following:
- Secure connection initiation
- Credentials provision
- Negotiation with the OpenSSH server on the encryption method used for the secure communication channel
- Ability to send files securely to and from the OpenSSH server
You can use the ssh RHEL system role alongside other system roles that change SSH configuration, for example, the Identity Management RHEL system roles. To prevent the configuration from being overwritten, make sure that the ssh RHEL system role uses a drop-in directory (default in RHEL 8 and later).
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
-
The account you use to connect to the managed nodes has
sudopermissions on them.
Procedure
Create a playbook file, for example,
~/playbook.yml, with the following content:
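For example, the following sketch configures the client preferences that are described below; the host alias, host name, and user name are illustrative, and the ControlPath value is an assumption added so that connection multiplexing can work:

---
- name: SSH client configuration
  hosts: managed-node-01.example.com
  tasks:
    - name: Configure the OpenSSH client for the root user
      ansible.builtin.include_role:
        name: rhel-system-roles.ssh
      vars:
        ssh_user: root
        ssh:
          Compression: true
          ControlMaster: auto
          ControlPath: ~/.ssh/.cm%C
          Host:
            - Condition: example
              Hostname: server.example.com
              User: user1
        ssh_ForwardX11: no

The settings specified in the example playbook include the following: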
ssh_user: root-
Configures the
rootuser’s SSH client preferences on the managed nodes with certain configuration specifics. Compression: true- Compression is enabled.
ControlMaster: auto-
ControlMaster multiplexing is set to
auto. Host-
Creates alias
examplefor connecting to theserver.example.comhost as a user calleduser1. ssh_ForwardX11: no- X11 forwarding is disabled.
For details about the role variables and the OpenSSH configuration options used in the playbook, see the
/usr/share/ansible/roles/rhel-system-roles.ssh/README.mdfile and thessh_config(5)manual page on the control node.Validate the playbook syntax:
ansible-playbook --syntax-check ~/playbook.yml
$ ansible-playbook --syntax-check ~/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
ansible-playbook ~/playbook.yml
$ ansible-playbook ~/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow
Verification
Verify that the managed node has the correct configuration by displaying the SSH configuration file:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Chapter 26. Managing local storage by using RHEL system roles Copy linkLink copied to clipboard!
To manage LVM and local file systems (FS) by using Ansible, you can use the storage role. Using the storage role enables you to automate administration of file systems on disks and logical volumes on multiple machines.
26.1. Creating an XFS file system on a block device by using the storage RHEL system role Copy linkLink copied to clipboard!
You can use the storage RHEL system role to automate the creation of an XFS file system on block devices.
The storage role can create a file system only on an unpartitioned, whole disk or a logical volume (LV). It cannot create the file system on a partition.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
-
The account you use to connect to the managed nodes has
sudopermissions on them.
Procedure
Create a playbook file, for example,
~/playbook.yml, with the following content:
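For example, the following minimal playbook creates an XFS file system on the whole /dev/sdb disk; the host and device names are illustrative:

---
- name: Manage local storage
  hosts: managed-node-01.example.com
  tasks:
    - name: Create an XFS file system on a whole disk
      ansible.builtin.include_role:
        name: rhel-system-roles.storage
      vars:
        storage_volumes:
          - name: barefs
            type: disk
            disks:
              - sdb
            fs_type: xfs

The settings specified in the example playbook include the following: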
name: barefs-
The volume name (
barefsin the example) is currently arbitrary. Thestoragerole identifies the volume by the disk device listed under thedisksattribute. fs_type: <file_system>-
You can omit the
fs_typeparameter if you want to use the default file system XFS. disks: <list_of_disks_and_volumes>A YAML list of disk and LV names. To create the file system on an LV, provide the LVM setup under the
disksattribute, including the enclosing volume group. For details, see Creating or resizing a logical volume by using the storage RHEL system role.Do not provide the path to the LV device.
For details about all variables used in the playbook, see the
/usr/share/ansible/roles/rhel-system-roles.storage/README.mdfile on the control node.Validate the playbook syntax:
ansible-playbook --syntax-check ~/playbook.yml
$ ansible-playbook --syntax-check ~/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
ansible-playbook ~/playbook.yml
$ ansible-playbook ~/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow
26.2. Persistently mounting a file system by using the storage RHEL system role Copy linkLink copied to clipboard!
You can use the storage RHEL system role to persistently mount file systems to ensure they remain available across system reboots and are automatically mounted on startup. If the file system on the device you specified in the playbook does not exist, the role creates it.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
-
The account you use to connect to the managed nodes has
sudopermissions on them.
Procedure
Create a playbook file, for example,
~/playbook.yml, with the following content:
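For example, the following sketch creates an XFS file system on /dev/sdb if it does not exist and persistently mounts it on /mnt/data; the device and mount point are illustrative:

---
- name: Manage local storage
  hosts: managed-node-01.example.com
  tasks:
    - name: Persistently mount a file system
      ansible.builtin.include_role:
        name: rhel-system-roles.storage
      vars:
        storage_volumes:
          - name: barefs
            type: disk
            disks:
              - sdb
            fs_type: xfs
            mount_point: /mnt/data

For details about all variables used in the playbook, see the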
/usr/share/ansible/roles/rhel-system-roles.storage/README.mdfile on the control node.Validate the playbook syntax:
ansible-playbook --syntax-check ~/playbook.yml
$ ansible-playbook --syntax-check ~/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
ansible-playbook ~/playbook.yml
$ ansible-playbook ~/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow
26.3. Creating or resizing a logical volume by using the storage RHEL system role Copy linkLink copied to clipboard!
You can use the storage RHEL system role to create and resize LVM logical volumes. The role automatically creates volume groups if they do not exist.
Use the storage role to perform the following tasks:
- To create an LVM logical volume in a volume group consisting of many disks
- To resize an existing file system on LVM
- To express an LVM volume size in percentage of the pool’s total size
If the volume group does not exist, the role creates it. If a logical volume exists in the volume group, it is resized if the size does not match what is specified in the playbook.
If you are reducing a logical volume, to prevent data loss you must ensure that the file system on that logical volume is not using the space in the logical volume that is being reduced.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
-
The account you use to connect to the managed nodes has
sudopermissions on them.
Procedure
Create a playbook file, for example,
~/playbook.yml, with the following content:
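For example, the following sketch creates or resizes the mylv logical volume in the myvg volume group; the disk names, size, file system, and mount point are illustrative:

---
- name: Manage local storage
  hosts: managed-node-01.example.com
  tasks:
    - name: Create or resize a logical volume
      ansible.builtin.include_role:
        name: rhel-system-roles.storage
      vars:
        storage_pools:
          - name: myvg
            disks:
              - sda
              - sdb
              - sdc
            volumes:
              - name: mylv
                size: 2 GiB
                fs_type: ext4
                mount_point: /mnt/data

The settings specified in the example playbook include the following: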
size: <size>- You must specify the size by using units (for example, GiB) or percentage (for example, 60%).
For details about all variables used in the playbook, see the
/usr/share/ansible/roles/rhel-system-roles.storage/README.mdfile on the control node.Validate the playbook syntax:
ansible-playbook --syntax-check ~/playbook.yml
$ ansible-playbook --syntax-check ~/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
ansible-playbook ~/playbook.yml
$ ansible-playbook ~/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow
Verification
Verify that specified volume has been created or resized to the requested size:
ansible managed-node-01.example.com -m command -a 'lvs myvg'
# ansible managed-node-01.example.com -m command -a 'lvs myvg'Copy to Clipboard Copied! Toggle word wrap Toggle overflow
26.4. Enabling online block discard by using the storage RHEL system role Copy linkLink copied to clipboard!
You can mount an XFS file system with the online block discard option to automatically discard unused blocks.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
-
The account you use to connect to the managed nodes has
sudopermissions on them.
Procedure
Create a playbook file, for example,
~/playbook.yml, with the following content:
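For example, the following sketch mounts an XFS file system on /mnt/data with the discard option enabled; the device name is illustrative:

---
- name: Manage local storage
  hosts: managed-node-01.example.com
  tasks:
    - name: Mount a file system with online block discard
      ansible.builtin.include_role:
        name: rhel-system-roles.storage
      vars:
        storage_volumes:
          - name: barefs
            type: disk
            disks:
              - sdb
            fs_type: xfs
            mount_point: /mnt/data
            mount_options: discard

For details about all variables used in the playbook, see the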
/usr/share/ansible/roles/rhel-system-roles.storage/README.mdfile on the control node.Validate the playbook syntax:
ansible-playbook --syntax-check ~/playbook.yml
$ ansible-playbook --syntax-check ~/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
ansible-playbook ~/playbook.yml
$ ansible-playbook ~/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow
Verification
Verify that online block discard option is enabled:
ansible managed-node-01.example.com -m command -a 'findmnt /mnt/data'
# ansible managed-node-01.example.com -m command -a 'findmnt /mnt/data'Copy to Clipboard Copied! Toggle word wrap Toggle overflow
26.5. Creating and mounting a file system by using the storage RHEL system role Copy linkLink copied to clipboard!
You can use the storage RHEL system role to create and mount file systems that persist across reboots. The role automatically adds entries to /etc/fstab to ensure persistent mounting.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
-
The account you use to connect to the managed nodes has
sudopermissions on them.
Procedure
Create a playbook file, for example,
~/playbook.yml, with the following content:
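For example, the following sketch creates a labeled ext4 file system on /dev/sdb and mounts it; the device, label, and mount point are illustrative, and the label key is shown here as fs_label:

---
- name: Manage local storage
  hosts: managed-node-01.example.com
  tasks:
    - name: Create and mount a file system
      ansible.builtin.include_role:
        name: rhel-system-roles.storage
      vars:
        storage_volumes:
          - name: barefs
            type: disk
            disks:
              - sdb
            fs_type: ext4
            fs_label: label-name
            mount_point: /mnt/data

The settings specified in the example playbook include the following: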
disks: <list_of_devices>- A YAML list of device names that the role uses when it creates the volume.
fs_type: <file_system>-
Specifies the file system the role should set on the volume. You can select
xfs,ext3,ext4,swap, orunformatted. label-name: <file_system_label>- Optional: sets the label of the file system.
mount_point: <directory>-
Optional: if the volume should be automatically mounted, set the
mount_pointvariable to the directory to which the volume should be mounted.
For details about all variables used in the playbook, see the
/usr/share/ansible/roles/rhel-system-roles.storage/README.mdfile on the control node.Validate the playbook syntax:
ansible-playbook --syntax-check ~/playbook.yml
$ ansible-playbook --syntax-check ~/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
ansible-playbook ~/playbook.yml
$ ansible-playbook ~/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow
26.6. Configuring a RAID volume by using the storage RHEL system role Copy linkLink copied to clipboard!
With the storage system role, you can configure a RAID volume on RHEL by using Red Hat Ansible Automation Platform and Ansible-Core. Create an Ansible playbook with the parameters to configure a RAID volume to suit your requirements.
Device names might change in certain circumstances, for example, when you add a new disk to a system. Therefore, to prevent data loss, use persistent naming attributes in the playbook. For more information about persistent naming attributes, see Persistent naming attributes.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
-
The account you use to connect to the managed nodes has
sudopermissions on them.
Procedure
Create a playbook file, for example,
~/playbook.yml, with the following content:
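For example, the following sketch creates a RAID 0 volume named data from four disks and mounts it; the disk names, RAID level, and chunk size are illustrative, and storage_safe_mode is disabled so that the role can reformat the listed disks:

---
- name: Manage local storage
  hosts: managed-node-01.example.com
  tasks:
    - name: Create a RAID volume
      ansible.builtin.include_role:
        name: rhel-system-roles.storage
      vars:
        storage_safe_mode: false
        storage_volumes:
          - name: data
            type: raid
            disks: [sdd, sde, sdf, sdg]
            raid_level: raid0
            raid_chunk_size: 32 KiB
            mount_point: /mnt/data
            state: present

For details about all variables used in the playbook, see the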
/usr/share/ansible/roles/rhel-system-roles.storage/README.mdfile on the control node.Validate the playbook syntax:
ansible-playbook --syntax-check ~/playbook.yml
$ ansible-playbook --syntax-check ~/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
ansible-playbook ~/playbook.yml
$ ansible-playbook ~/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow
Verification
Verify that the array was correctly created:
ansible managed-node-01.example.com -m command -a 'mdadm --detail /dev/md/data'
# ansible managed-node-01.example.com -m command -a 'mdadm --detail /dev/md/data'Copy to Clipboard Copied! Toggle word wrap Toggle overflow
26.8. Configuring a stripe size for RAID LVM volumes by using the storage RHEL system role Copy linkLink copied to clipboard!
You can use the storage RHEL system role to configure stripe sizes for RAID LVM volumes.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
-
The account you use to connect to the managed nodes has
sudopermissions on them.
Procedure
Create a playbook file, for example,
~/playbook.yml, with the following content:
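For example, the following sketch creates the my_volume RAID LVM volume in the my_pool volume group with a 256 KiB stripe size; the disk names, size, and mount point are illustrative:

---
- name: Manage local storage
  hosts: managed-node-01.example.com
  tasks:
    - name: Configure a stripe size for RAID LVM volumes
      ansible.builtin.include_role:
        name: rhel-system-roles.storage
      vars:
        storage_safe_mode: false
        storage_pools:
          - name: my_pool
            type: lvm
            disks: [sdh, sdi]
            volumes:
              - name: my_volume
                size: "1 GiB"
                mount_point: "/mnt/app/shared"
                fs_type: xfs
                raid_level: raid0
                raid_stripe_size: "256 KiB"
                state: present

For details about all variables used in the playbook, see the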
/usr/share/ansible/roles/rhel-system-roles.storage/README.mdfile on the control node.Validate the playbook syntax:
ansible-playbook --syntax-check ~/playbook.yml
$ ansible-playbook --syntax-check ~/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
ansible-playbook ~/playbook.yml
$ ansible-playbook ~/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow
Verification
Verify that stripe size is set to the required size:
ansible managed-node-01.example.com -m command -a 'lvs -o+stripesize /dev/my_pool/my_volume'
# ansible managed-node-01.example.com -m command -a 'lvs -o+stripesize /dev/my_pool/my_volume'Copy to Clipboard Copied! Toggle word wrap Toggle overflow
26.9. Configuring an LVM-VDO volume by using the storage RHEL system role Copy linkLink copied to clipboard!
You can use the storage RHEL system role to create a VDO volume on LVM (LVM-VDO) with enabled compression and deduplication.
Because the storage system role uses LVM-VDO, only one volume can be created per pool.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
-
The account you use to connect to the managed nodes has
sudopermissions on them.
Procedure
Create a playbook file, for example,
~/playbook.yml, with the following content:
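For example, the following sketch creates the mylv1 LVM-VDO volume in the myvg volume group with compression and deduplication enabled; the device name, sizes, and mount point are illustrative:

---
- name: Manage local storage
  hosts: managed-node-01.example.com
  tasks:
    - name: Create an LVM-VDO volume
      ansible.builtin.include_role:
        name: rhel-system-roles.storage
      vars:
        storage_pools:
          - name: myvg
            disks:
              - sdb
            volumes:
              - name: mylv1
                compression: true
                deduplication: true
                vdo_pool_size: 10 GiB
                size: 30 GiB
                mount_point: /mnt/app/shared

The settings specified in the example playbook include the following: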
vdo_pool_size: <size>- The actual size that the volume takes on the device. You can specify the size in human-readable format, such as 10 GiB. If you do not specify a unit, it defaults to bytes.
size: <size>- The virtual size of VDO volume.
For details about all variables used in the playbook, see the
/usr/share/ansible/roles/rhel-system-roles.storage/README.mdfile on the control node.Validate the playbook syntax:
ansible-playbook --syntax-check ~/playbook.yml
$ ansible-playbook --syntax-check ~/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
ansible-playbook ~/playbook.yml
$ ansible-playbook ~/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow
Verification
View the current status of compression and deduplication:
ansible managed-node-01.example.com -m command -a 'lvs -o+vdo_compression,vdo_compression_state,vdo_deduplication,vdo_index_state'
$ ansible managed-node-01.example.com -m command -a 'lvs -o+vdo_compression,vdo_compression_state,vdo_deduplication,vdo_index_state' LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert VDOCompression VDOCompressionState VDODeduplication VDOIndexState mylv1 myvg vwi-a-v--- 3.00t vpool0 enabled online enabled onlineCopy to Clipboard Copied! Toggle word wrap Toggle overflow
26.10. Creating a LUKS2 encrypted volume by using the storage RHEL system role Copy linkLink copied to clipboard!
You can use the storage role to create and configure a volume encrypted with LUKS by running an Ansible playbook.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
-
The account you use to connect to the managed nodes has
sudopermissions on them.
Procedure
Store your sensitive variables in an encrypted file:
Create the vault:
ansible-vault create ~/vault.yml
$ ansible-vault create ~/vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>Copy to Clipboard Copied! Toggle word wrap Toggle overflow After the
ansible-vault createcommand opens an editor, enter the sensitive data in the<key>: <value>format:luks_password: <password>
luks_password: <password>Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example,
~/playbook.yml, with the following content:
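For example, the following sketch creates a LUKS encrypted XFS volume on /dev/sdb and reads the passphrase from the vault created in the previous step; the device and mount point are illustrative:

---
- name: Manage local storage
  hosts: managed-node-01.example.com
  vars_files:
    - ~/vault.yml
  tasks:
    - name: Create and configure a volume encrypted with LUKS
      ansible.builtin.include_role:
        name: rhel-system-roles.storage
      vars:
        storage_volumes:
          - name: barefs
            type: disk
            disks:
              - sdb
            fs_type: xfs
            mount_point: /mnt/data
            encryption: true
            encryption_password: "{{ luks_password }}"

For details about all variables used in the playbook, see the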
/usr/share/ansible/roles/rhel-system-roles.storage/README.mdfile on the control node.Validate the playbook syntax:
ansible-playbook --ask-vault-pass --syntax-check ~/playbook.yml
$ ansible-playbook --ask-vault-pass --syntax-check ~/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
ansible-playbook --ask-vault-pass ~/playbook.yml
$ ansible-playbook --ask-vault-pass ~/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow
Verification
Find the
luksUUIDvalue of the LUKS encrypted volume:ansible managed-node-01.example.com -m command -a 'cryptsetup luksUUID /dev/sdb'
# ansible managed-node-01.example.com -m command -a 'cryptsetup luksUUID /dev/sdb' 4e4e7970-1822-470e-b55a-e91efe5d0f5cCopy to Clipboard Copied! Toggle word wrap Toggle overflow View the encryption status of the volume:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Verify the created LUKS encrypted volume:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
26.12. Resizing physical volumes by using the storage RHEL system role Copy linkLink copied to clipboard!
With the storage system role, you can resize LVM physical volumes after resizing the underlying storage or disks from outside of the host. For example, you increased the size of a virtual disk and want to use the extra space in an existing LVM.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
-
The account you use to connect to the managed nodes has
sudopermissions on them. - The size of the underlying block storage has been changed.
Procedure
Create a playbook file, for example,
~/playbook.yml, with the following content:
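For example, the following sketch assumes that the role's grow_to_fill pool option is available, as documented in the role README, and uses it to grow the physical volumes in the myvg volume group to the full size of the underlying devices; the device name is illustrative:

---
- name: Manage local storage
  hosts: managed-node-01.example.com
  tasks:
    - name: Resize LVM physical volumes after the underlying disks grew
      ansible.builtin.include_role:
        name: rhel-system-roles.storage
      vars:
        storage_pools:
          - name: myvg
            type: lvm
            grow_to_fill: true
            disks:
              - sdf1

For details about all variables used in the playbook, see the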
/usr/share/ansible/roles/rhel-system-roles.storage/README.mdfile on the control node.Validate the playbook syntax:
ansible-playbook --syntax-check ~/playbook.yml
$ ansible-playbook --syntax-check ~/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
ansible-playbook ~/playbook.yml
$ ansible-playbook ~/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow
Verification
Display the new physical volume size:
ansible managed-node-01.example.com -m command -a 'pvs'
$ ansible managed-node-01.example.com -m command -a 'pvs' PV VG Fmt Attr PSize PFree /dev/sdf1 myvg lvm2 a-- 1,99g 1,99gCopy to Clipboard Copied! Toggle word wrap Toggle overflow
26.13. Creating an encrypted Stratis pool by using the storage RHEL system role Copy linkLink copied to clipboard!
To secure your data, you can create an encrypted Stratis pool with the storage RHEL system role. In addition to a passphrase, you can use Clevis and Tang or TPM protection as an encryption method.
You can configure Stratis encryption only on the entire pool.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
-
The account you use to connect to the managed nodes has
sudopermissions on them. - You can connect to the Tang server. For more information, see Deploying a Tang server with SELinux in enforcing mode.
Procedure
Store your sensitive variables in an encrypted file:
Create the vault:
ansible-vault create ~/vault.yml
$ ansible-vault create ~/vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>Copy to Clipboard Copied! Toggle word wrap Toggle overflow After the
ansible-vault createcommand opens an editor, enter the sensitive data in the<key>: <value>format:luks_password: <password>
luks_password: <password>Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example,
~/playbook.yml, with the following content:
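For example, the following sketch creates an encrypted Stratis pool protected by the vaulted passphrase and a Tang server; the disk names and the Tang URL are illustrative:

---
- name: Manage local storage
  hosts: managed-node-01.example.com
  vars_files:
    - ~/vault.yml
  tasks:
    - name: Create an encrypted Stratis pool
      ansible.builtin.include_role:
        name: rhel-system-roles.storage
      vars:
        storage_pools:
          - name: mypool
            type: stratis
            disks:
              - sdd
              - sde
            encryption: true
            encryption_password: "{{ luks_password }}"
            encryption_clevis_pin: tang
            encryption_tang_url: tang-server.example.com:7500

The settings specified in the example playbook include the following: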
encryption_password- Password or passphrase used to unlock the LUKS volumes.
encryption_clevis_pin-
Clevis method that you can use to encrypt the created pool. You can use
tangandtpm2. encryption_tang_url- URL of the Tang server.
For details about all variables used in the playbook, see the
/usr/share/ansible/roles/rhel-system-roles.storage/README.mdfile on the control node.Validate the playbook syntax:
ansible-playbook --ask-vault-pass --syntax-check ~/playbook.yml
$ ansible-playbook --ask-vault-pass --syntax-check ~/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
ansible-playbook --ask-vault-pass ~/playbook.yml
$ ansible-playbook --ask-vault-pass ~/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow
Verification
Verify that the pool was created with Clevis and Tang configured:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Chapter 27. Using the sudo RHEL system role Copy linkLink copied to clipboard!
You can consistently configure the /etc/sudoers files on multiple systems by using the sudo RHEL system role.
27.1. Applying custom sudoers configuration by using RHEL system roles Copy linkLink copied to clipboard!
You can use the sudo RHEL system role to apply custom sudoers configuration on your managed nodes. That way, you can define which users can run which commands on which hosts, with better configuration efficiency and more granular control.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
-
The account you use to connect to the managed nodes has
sudopermissions on them.
Procedure
Create a playbook file, for example,
~/playbook.yml, with the following content:
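For example, the following sketch grants one user the right to restart a service; the user name, host specification, and command are placeholders, and the sudo_sudoers_files structure is an assumption that follows the role's README:

---
- name: Manage sudo configuration
  hosts: managed-node-01.example.com
  tasks:
    - name: Apply a custom /etc/sudoers configuration
      ansible.builtin.include_role:
        name: rhel-system-roles.sudo
      vars:
        sudo_sudoers_files:
          - path: /etc/sudoers
            defaults:
              - "!visiblepw"
            user_specifications:
              - users:
                  - <user_name>
                hosts:
                  - ALL
                commands:
                  - /usr/bin/systemctl restart httpd

The settings specified in the playbook include the following: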
users- The list of users that the rule applies to.
hosts-
The list of hosts that the rule applies to. You can use
ALLfor all hosts. commandsThe list of commands that the rule applies to. You can use
ALLfor all commands.For details about all variables used in the playbook, see the
/usr/share/ansible/roles/rhel-system-roles.sudo/README.mdfile on the control node.
Validate the playbook syntax:
ansible-playbook --syntax-check ~/playbook.yml
$ ansible-playbook --syntax-check ~/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
ansible-playbook ~/playbook.yml
$ ansible-playbook ~/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow
Verification
On the managed node, verify that the playbook applied the new rules.
# cat /etc/sudoers | tail -n1
<user_name> <host_name>= <path_to_command_binary>
Chapter 28. Managing systemd units by using RHEL system roles Copy linkLink copied to clipboard!
By using the systemd RHEL system role, you can automate certain systemd-related tasks and perform them remotely.
You can use the systemd role for the following actions:
- Manage services
- Deploy units
- Deploy drop-in files
28.1. Managing services by using the systemd RHEL system role Copy linkLink copied to clipboard!
You can automate and remotely manage systemd units, such as starting or enabling services, by using the systemd RHEL system role.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
-
The account you use to connect to the managed nodes has
sudopermissions on them.
Procedure
Create a playbook file, for example,
~/playbook.yml, with the following content. Use only the variables for the actions that you want to perform.
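For example, the following sketch starts and enables one unit and stops another; the unit names are placeholders, and further variables such as systemd_restarted_units or systemd_masked_units follow the same list format:

---
- name: Manage systemd units
  hosts: managed-node-01.example.com
  tasks:
    - name: Start, enable, and stop services
      ansible.builtin.include_role:
        name: rhel-system-roles.systemd
      vars:
        systemd_started_units:
          - <unit_1>.service
        systemd_enabled_units:
          - <unit_1>.service
        systemd_stopped_units:
          - <unit_2>.service

For details about all variables used in the playbook, see the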
/usr/share/ansible/roles/rhel-system-roles.systemd/README.mdfile on the control node.Validate the playbook syntax:
ansible-playbook --syntax-check ~/playbook.yml
$ ansible-playbook --syntax-check ~/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
ansible-playbook ~/playbook.yml
$ ansible-playbook ~/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow
28.2. Deploying systemd drop-in files by using the systemd RHEL system role Copy linkLink copied to clipboard!
Systemd applies drop-in files on top of the settings it reads for a unit from other locations. Therefore, you can modify unit settings with drop-in files without changing the original unit file. By using the systemd RHEL system role, you can automate the process of deploying drop-in files.
The role uses the hard-coded file name 99-override.conf to store drop-in files in /etc/systemd/system/<name>.<unit_type>.d/. Note that it overrides existing files with this name in the destination directory.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
-
The account you use to connect to the managed nodes has
sudopermissions on them.
Procedure
Create a Jinja2 template with the systemd drop-in file contents. For example, create the
~/sshd.service.conf.j2 file with the following content:
{{ ansible_managed | comment }}
[Unit]
After=
After=network.target sshd-keygen.target network-online.target
This drop-in file specifies the same units in the
After setting as the original /usr/lib/systemd/system/sshd.service file and, additionally, network-online.target. With this extra target, sshd starts after the network interfaces are activated and have IP addresses assigned. This ensures that sshd can bind to all IP addresses.
Use the
<name>.<unit_type>.conf.j2convention for the file name. For example, to add a drop-in for thesshd.serviceunit, you must name the filesshd.service.conf.j2. Place the file in the same directory as the playbook.Create a playbook file, for example,
~/playbook.yml, with the following content:
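For example, the following sketch deploys the drop-in file rendered from the ~/sshd.service.conf.j2 template created in the previous step; the list entry refers to the template by the drop-in file name:

---
- name: Manage systemd units
  hosts: managed-node-01.example.com
  tasks:
    - name: Deploy a drop-in file for sshd.service
      ansible.builtin.include_role:
        name: rhel-system-roles.systemd
      vars:
        systemd_dropins:
          - sshd.service.conf

The settings specified in the example playbook include the following: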
systemd_dropins: <list_of_files>- Specifies the names of the drop-in files to deploy in YAML list format.
For details about all variables used in the playbook, see the
/usr/share/ansible/roles/rhel-system-roles.systemd/README.mdfile on the control node.Validate the playbook syntax:
ansible-playbook --syntax-check ~/playbook.yml
$ ansible-playbook --syntax-check ~/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
ansible-playbook ~/playbook.yml
$ ansible-playbook ~/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow
Verification
Verify that the role placed the drop-in file in the correct location:
ansible managed-node-01.example.com -m command -a 'ls /etc/systemd/system/sshd.service.d/'
# ansible managed-node-01.example.com -m command -a 'ls /etc/systemd/system/sshd.service.d/' 99-override.confCopy to Clipboard Copied! Toggle word wrap Toggle overflow
28.3. Deploying systemd system units by using the systemd RHEL system role Copy linkLink copied to clipboard!
You can create unit files for custom applications, and systemd reads them from the /etc/systemd/system/ directory. By using the systemd RHEL system role, you can automate the deployment of custom unit files.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
-
The account you use to connect to the managed nodes has
sudopermissions on them.
Procedure
Create a Jinja2 template with the custom systemd unit file contents. For example, create the
~/example.service.j2file with the contents for your service:Copy to Clipboard Copied! Toggle word wrap Toggle overflow Use the
<name>.<unit_type>.j2convention for the file name. For example, to create theexample.serviceunit, you must name the fileexample.service.j2. Place the file in the same directory as the playbook.Create a playbook file, for example,
~/playbook.yml, with the following content:Copy to Clipboard Copied! Toggle word wrap Toggle overflow For details about all variables used in the playbook, see the
/usr/share/ansible/roles/rhel-system-roles.systemd/README.mdfile on the control node.Validate the playbook syntax:
ansible-playbook --syntax-check ~/playbook.yml
$ ansible-playbook --syntax-check ~/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
ansible-playbook ~/playbook.yml
$ ansible-playbook ~/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow
Verification
Verify that the service is enabled and started:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
28.4. Deploying systemd user units by using the systemd RHEL system role Copy linkLink copied to clipboard!
You can create per-user unit files for custom applications, and systemd reads them from the /home/<username>/.config/systemd/user/ directory. By using the systemd RHEL system role, you can automate the deployment of custom unit files for individual users.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
-
The account you use to connect to the managed nodes has
sudopermissions on them. - The user you specify in the playbook for the systemd unit exists.
Procedure
Create a Jinja2 template with the custom systemd unit file contents. For example, create the
~/example.service.j2file with the contents for your service:Copy to Clipboard Copied! Toggle word wrap Toggle overflow Use the
<name>.<unit_type>.j2convention for the file name. For example, to create theexample.serviceunit, you must name the fileexample.service.j2. Place the file in the same directory as the playbook.Create a playbook file, for example,
~/playbook.yml, with the following content:Copy to Clipboard Copied! Toggle word wrap Toggle overflow ImportantThe
systemdRHEL system role does not create new users, and it returns an error if you specify a non-existent user in the playbook.For details about all variables used in the playbook, see the
/usr/share/ansible/roles/rhel-system-roles.systemd/README.mdfile on the control node.Validate the playbook syntax:
ansible-playbook --syntax-check ~/playbook.yml
$ ansible-playbook --syntax-check ~/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
ansible-playbook ~/playbook.yml
$ ansible-playbook ~/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow
Verification
Verify that the service is enabled and started:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Chapter 29. Configuring time synchronization by using RHEL system roles Copy linkLink copied to clipboard!
The Network Time Protocol (NTP) and Precision Time Protocol (PTP) are standards to synchronize the clock of computers over a network. By using the timesync RHEL system role, you can automate the configuration of time synchronization on RHEL.
Accurate time synchronization in networks is important because certain services rely on it. For example, Kerberos tolerates only a small time difference between the server and client to prevent replay attacks.
You can set the time service to configure in the timesync_ntp_provider variable of a playbook. If you do not set this variable, the role determines the time service based on the following factors:
-
On RHEL 8 and later:
chronyd -
On RHEL 6 and 7:
chronyd (default) or, if already installed, ntpd.
29.1. Configuring time synchronization over NTP by using the timesync RHEL system role Copy linkLink copied to clipboard!
The Network Time Protocol (NTP) synchronizes the time of a host with an NTP server over a network. By using the timesync RHEL system role, you can automate the configuration of RHEL NTP clients in your network and keep the time synchronized.
The timesync RHEL system role replaces the configuration of the specified or detected provider service on the managed host. Consequently, all settings are lost if they are not specified in the playbook.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
-
The account you use to connect to the managed nodes has
sudopermissions on them.
Procedure
Create a playbook file, for example,
~/playbook.yml, with the following content:
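For example, the following sketch configures the managed nodes to synchronize with a public NTP pool; the pool name is illustrative:

---
- name: Configure time synchronization
  hosts: managed-node-01.example.com
  tasks:
    - name: Configure NTP clients
      ansible.builtin.include_role:
        name: rhel-system-roles.timesync
      vars:
        timesync_ntp_servers:
          - hostname: 0.rhel.pool.ntp.org
            pool: yes
            iburst: yes

The settings specified in the example playbook include the following: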
pool: <yes|no>- Flags a source as an NTP pool rather than an individual host. In this case, the service expects that the name resolves to multiple IP addresses which can change over time.
iburst: yes- Enables fast initial synchronization.
For details about all variables used in the playbook, see the
/usr/share/ansible/roles/rhel-system-roles.timesync/README.mdfile on the control node.Validate the playbook syntax:
ansible-playbook --syntax-check ~/playbook.yml
$ ansible-playbook --syntax-check ~/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
ansible-playbook ~/playbook.yml
$ ansible-playbook ~/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow
Verification
Display the details about the time sources:
If the managed node runs the
chronydservice, enter:Copy to Clipboard Copied! Toggle word wrap Toggle overflow If the managed node runs the
ntpdservice, enter:Copy to Clipboard Copied! Toggle word wrap Toggle overflow
29.2. Configuring time synchronization over NTP with NTS by using the timesync RHEL system role Copy linkLink copied to clipboard!
By using the Network Time Security (NTS) mechanism, clients establish a TLS-encrypted connection to the server and authenticate Network Time Protocol (NTP) packets. By using the timesync RHEL system role, you can automate the configuration of RHEL NTP clients with NTS.
Note that you cannot mix NTS servers with non-NTS servers. In mixed configurations, NTS servers are trusted and clients do not fall back to unauthenticated NTP sources because they can be exploited in man-in-the-middle (MITM) attacks. For further details, see the authselectmode parameter description in the chrony.conf(5) man page on your system.
The timesync RHEL system role replaces the configuration of the specified or detected provider service on the managed host. Consequently, all settings are lost if they are not specified in the playbook.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
-
The account you use to connect to the managed nodes has
sudopermissions on them. -
The managed nodes use
chronyd.
Procedure
Create a playbook file, for example,
~/playbook.yml, with the following content:
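For example, the following sketch uses the NTS-enabled public servers that also appear in the verification output below; the nts option enables Network Time Security for each source:

---
- name: Configure time synchronization with NTS
  hosts: managed-node-01.example.com
  tasks:
    - name: Configure NTP clients with NTS
      ansible.builtin.include_role:
        name: rhel-system-roles.timesync
      vars:
        timesync_ntp_servers:
          - hostname: ptbtime1.ptb.de
            iburst: yes
            nts: yes
          - hostname: ptbtime2.ptb.de
            iburst: yes
            nts: yes

The settings specified in the example playbook include the following: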
iburst: yes- Enables fast initial synchronization.
For details about all variables used in the playbook, see the
/usr/share/ansible/roles/rhel-system-roles.timesync/README.mdfile on the control node.Validate the playbook syntax:
ansible-playbook --syntax-check ~/playbook.yml
$ ansible-playbook --syntax-check ~/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
If the managed node runs the chronyd service:
Display the details about the time sources:
# ansible managed-node-01.example.com -m command -a 'chronyc sources'
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* ptbtime1.ptb.de               1   6    17    55    -13us[  -54us] +/-   12ms
^- ptbtime2.ptb.de               1   6    17    56   -257us[ -297us] +/-   12ms
For sources with NTS enabled, display information that is specific to authentication of NTP sources:
# ansible managed-node-01.example.com -m command -a 'chronyc -N authdata'
Name/IP address             Mode KeyID Type KLen Last Atmp  NAK Cook CLen
=========================================================================
ptbtime1.ptb.de              NTS     1   15  256  229    0    0    8  100
ptbtime2.ptb.de              NTS     1   15  256  230    0    0    8  100
Verify that the number of reported cookies in the Cook column is larger than 0.
If the managed node runs the ntpd service, enter:
# ansible managed-node-01.example.com -m command -a 'ntpq -p'
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*ptbtime1.ptb.de .PTB.            1 8    2   64   77   23.585  967.902   0.684
-ptbtime2.ptb.de .PTB.            1 8   30   64   78   24.653  993.937   0.765
Chapter 30. Configuring a system for session recording by using RHEL system roles
Use the tlog RHEL system role to automatically record and monitor terminal session activity on your managed nodes. You can configure the recording to take place per user or user group by using the SSSD service.
The session recording solution in the tlog RHEL system role consists of the following components:
- The tlog utility
- System Security Services Daemon (SSSD)
- Optional: The web console interface
30.1. Configuring session recording for individual users by using the tlog RHEL system role
Prepare and apply an Ansible playbook to configure a RHEL system to log session recording data to the systemd journal. This enables you to record the terminal input and output of a specific user during their sessions, whether the user logs in on the console or over SSH.
The playbook installs tlog-rec-session, a terminal session I/O logging program, which acts as the login shell for a user. The role creates an SSSD configuration drop-in file that defines for which users and groups the login shell is used. Additionally, if the cockpit package is installed on the system, the playbook also installs the cockpit-session-recording package, which is a Cockpit module that allows you to view and play recordings in the web console interface.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
Create a playbook file, for example, ~/playbook.yml. A sketch of its content is shown after the following variable descriptions.
The settings specified in the example playbook include the following:
tlog_scope_sssd: <value> - The value some specifies that you want to record only certain users and groups, not all or none.
tlog_users_sssd: <list_of_users> - A YAML list of users you want to record a session from. Note that the role does not add users if they do not exist.
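The following is a minimal sketch of such a playbook. The host and the recorded-user account are placeholders; replace them with your managed node and the users you want to record.

---
- name: Configure session recording for specific users
  hosts: managed-node-01.example.com
  tasks:
    - name: Deploy session recording
      ansible.builtin.include_role:
        name: rhel-system-roles.tlog
      vars:
        # Record only the listed users, not all or none
        tlog_scope_sssd: some
        tlog_users_sssd:
          - recorded-user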
Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
Check the SSSD drop-in file’s content:
# cat /etc/sssd/conf.d/sssd-session-recording.conf
You can see that the file contains the parameters you set in the playbook.
- Log in as a user whose session will be recorded, perform some actions, and log out.
As the root user:
Display the list of recorded sessions:
# journalctl _COMM=tlog-rec-sessio
Nov 12 09:17:30 managed-node-01.example.com -tlog-rec-session[1546]: {"ver":"2.3","host":"managed-node-01.example.com","rec":"07418f2b0f334c1696c10cbe6f6f31a6-60a-e4a2","user":"demo-user",...
...
You require the value of the rec (recording ID) field in the next step.
Note that the value of the _COMM field is shortened due to a 15-character limit.
Play back a session:
# tlog-play -r journal -M TLOG_REC=<recording_id>
30.2. Excluding certain users and groups from session recording by using the tlog RHEL system role
You can use the tlog_exclude_users_sssd and tlog_exclude_groups_sssd role variables from the tlog RHEL system role to exclude users or groups from having their sessions recorded and logged in the systemd journal.
The playbook installs tlog-rec-session, a terminal session I/O logging program, which acts as the login shell for a user. The role creates an SSSD configuration drop-in file that defines for which users and groups the login shell is used. Additionally, if the cockpit package is installed on the system, the playbook also installs the cockpit-session-recording package, which is a Cockpit module that allows you to view and play recordings in the web console interface.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
Create a playbook file, for example, ~/playbook.yml. A sketch of its content is shown after the following variable descriptions.
The settings specified in the example playbook include the following:
tlog_scope_sssd: <value> - The value all specifies that you want to record all users and groups.
tlog_exclude_users_sssd: <user_list> - A YAML list of user names you want to exclude from the session recording.
tlog_exclude_groups_sssd: <group_list> - A YAML list of groups you want to exclude from the session recording.
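The following is a minimal sketch of such a playbook. The host, user, and group names are placeholders; replace them with the accounts you want to exclude.

---
- name: Configure session recording and exclude certain users and groups
  hosts: managed-node-01.example.com
  tasks:
    - name: Deploy session recording
      ansible.builtin.include_role:
        name: rhel-system-roles.tlog
      vars:
        # Record all users and groups except the excluded ones
        tlog_scope_sssd: all
        tlog_exclude_users_sssd:
          - jeff
          - james
        tlog_exclude_groups_sssd:
          - admins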
Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
Check the SSSD drop-in file’s content:
# cat /etc/sssd/conf.d/sssd-session-recording.conf
You can see that the file contains the parameters you set in the playbook.
- Log in as a user whose session will be recorded, perform some actions, and log out.
As the root user:
Display the list of recorded sessions:
# journalctl _COMM=tlog-rec-sessio
Nov 12 09:17:30 managed-node-01.example.com -tlog-rec-session[1546]: {"ver":"2.3","host":"managed-node-01.example.com","rec":"07418f2b0f334c1696c10cbe6f6f31a6-60a-e4a2","user":"demo-user",...
...
You require the value of the rec (recording ID) field in the next step.
Note that the value of the _COMM field is shortened due to a 15-character limit.
Play back a session:
# tlog-play -r journal -M TLOG_REC=<recording_id>
Chapter 31. Configuring IPsec VPN connections by using RHEL system roles
Configure IPsec VPN connections to establish encrypted tunnels over untrusted networks and ensure the integrity of data in transit. By using the RHEL system roles, you can automate the setup for use cases, such as connecting branch offices to headquarters.
The vpn RHEL system role can only create VPN configurations that use pre-shared keys (PSKs) or certificates to authenticate peers to each other.
31.1. Configuring an IPsec host-to-host VPN with PSK authentication by using the vpn RHEL system role
A host-to-host VPN establishes an encrypted connection between two devices, allowing applications to communicate safely over an insecure network. By using the vpn RHEL system role, you can automate the process of creating IPsec host-to-host connections.
For authentication, a pre-shared key (PSK) is a straightforward method that uses a single, shared secret known only to the two peers. This approach is simple to configure and ideal for basic setups where ease of deployment is a priority. However, you must keep the key strictly confidential. An attacker with access to the key can compromise the connection.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
Create a playbook file, for example, ~/playbook.yml. A sketch of its content is shown after the following variable descriptions.
The settings specified in the example playbook include the following:
hosts: <list> - Defines a YAML dictionary with the peers between which you want to configure a VPN. If an entry is not an Ansible managed node, you must specify its fully-qualified domain name (FQDN) or IP address in the hostname parameter, for example:

    ...
    - hosts:
        ...
        external-host.example.com:
          hostname: 192.0.2.1

The role configures the VPN connection on each managed node. The connections are named <peer_A>-to-<peer_B>, for example, managed-node-01.example.com-to-managed-node-02.example.com. Note that the role cannot configure Libreswan on external (unmanaged) nodes. You must manually create the configuration on these peers.
auth_method: psk - Enables PSK authentication between the peers. The role uses openssl on the control node to create the PSK.
auto: <startup_method> - Specifies the startup method of the connection. Valid values are add, ondemand, start, and ignore. For details, see the ipsec.conf(5) man page on a system with Libreswan installed. The default value of this variable is null, which means no automatic startup operation.
vpn_manage_firewall: true - Defines that the role opens the required ports in the firewalld service on the managed nodes.
vpn_manage_selinux: true - Defines that the role sets the required SELinux port type on the IPsec ports.
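The following is a minimal sketch of such a playbook, assuming both peers are managed nodes and that the connection should start automatically. Verify the variable names against the role's README.md file.

---
- name: Configure a host-to-host VPN with PSK authentication
  hosts: managed-node-01.example.com,managed-node-02.example.com
  tasks:
    - name: Create the IPsec connection between the peers
      ansible.builtin.include_role:
        name: rhel-system-roles.vpn
      vars:
        vpn_connections:
          - hosts:
              # Managed nodes need no extra parameters; external peers require a hostname entry
              managed-node-01.example.com:
              managed-node-02.example.com:
            auth_method: psk
            auto: start
        vpn_manage_firewall: true
        vpn_manage_selinux: true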
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.vpn/README.md file on the control node.
Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
Confirm that the connections are successfully started, for example:
# ansible managed-node-01.example.com -m shell -a 'ipsec trafficstatus | grep "managed-node-01.example.com-to-managed-node-02.example.com"'
...
006 #3: "managed-node-01.example.com-to-managed-node-02.example.com", type=ESP, add_time=1741857153, inBytes=38622, outBytes=324626, maxBytes=2^63B, id='@managed-node-02.example.com'
Note that this command only succeeds if the VPN connection is active. If you set the auto variable in the playbook to a value other than start, you might need to manually activate the connection on the managed nodes first.
31.2. Configuring an IPsec host-to-host VPN with separate control and data plane connections by using the vpn RHEL system role
Use the vpn RHEL system role to automate the process of creating an IPsec host-to-host VPN. To enhance security by minimizing the risk of control messages being intercepted or disrupted, configure separate connections for both the data traffic and the control traffic.
A host-to-host VPN establishes a direct, secure, and encrypted connection between two devices, allowing applications to communicate safely over an insecure network, such as the internet.
For authentication, a pre-shared key (PSK) is a straightforward method that uses a single, shared secret known only to the two peers. This approach is simple to configure and ideal for basic setups where ease of deployment is a priority. However, you must keep the key strictly confidential. An attacker with access to the key can compromise the connection.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
Create a playbook file, for example, ~/playbook.yml. A sketch of its content is shown after the following variable descriptions.
The settings specified in the example playbook include the following:
hosts: <list> - Defines a YAML dictionary with the hosts between which you want to configure a VPN. The connections are named <name>-<IP_address_A>-to-<IP_address_B>, for example control_plane_vpn-203.0.113.1-to-198.51.100.2. The role configures the VPN connection on each managed node. Note that the role cannot configure Libreswan on external (unmanaged) nodes. You must manually create the configuration on these hosts.
auth_method: psk - Enables PSK authentication between the hosts. The role uses openssl on the control node to create the pre-shared key.
auto: <startup_method> - Specifies the startup method of the connection. Valid values are add, ondemand, start, and ignore. For details, see the ipsec.conf(5) man page on a system with Libreswan installed. The default value of this variable is null, which means no automatic startup operation.
vpn_manage_firewall: true - Defines that the role opens the required ports in the firewalld service on the managed nodes.
vpn_manage_selinux: true - Defines that the role sets the required SELinux port type on the IPsec ports.
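The following is a minimal sketch of such a playbook. The control plane IP addresses match the connection name used in the verification step; the data_plane_vpn name and the data plane IP addresses are placeholders for illustration. Verify the variable names against the role's README.md file.

---
- name: Configure host-to-host VPNs for control and data traffic
  hosts: managed-node-01.example.com,managed-node-02.example.com
  tasks:
    - name: Create one IPsec connection per traffic plane
      ansible.builtin.include_role:
        name: rhel-system-roles.vpn
      vars:
        vpn_connections:
          # Connection for the control traffic
          - name: control_plane_vpn
            hosts:
              managed-node-01.example.com:
                hostname: 203.0.113.1
              managed-node-02.example.com:
                hostname: 198.51.100.2
            auth_method: psk
            auto: start
          # Connection for the data traffic (name and addresses are examples)
          - name: data_plane_vpn
            hosts:
              managed-node-01.example.com:
                hostname: 10.0.0.1
              managed-node-02.example.com:
                hostname: 10.0.0.2
            auth_method: psk
            auto: start
        vpn_manage_firewall: true
        vpn_manage_selinux: true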
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.vpn/README.md file on the control node.
Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
Confirm that the connections are successfully started, for example:
# ansible managed-node-01.example.com -m shell -a 'ipsec trafficstatus | grep "control_plane_vpn-203.0.113.1-to-198.51.100.2"'
...
006 #3: "control_plane_vpn-203.0.113.1-to-198.51.100.2", type=ESP, add_time=1741860073, inBytes=0, outBytes=0, maxBytes=2^63B, id='198.51.100.2'
Note that this command only succeeds if the VPN connection is active. If you set the auto variable in the playbook to a value other than start, you might need to manually activate the connection on the managed nodes first.
31.3. Configuring an IPsec site-to-site VPN with PSK authentication by using the vpn RHEL system role
A site-to-site VPN establishes an encrypted tunnel between two distinct networks, seamlessly linking them across an insecure public network. By using the vpn RHEL system role, you can automate the process of creating IPsec site-to-site VPN connections.
A site-to-site VPN enables devices in a branch office to access resources at a corporate headquarters just as if they were all part of the same local network.
For authentication, a pre-shared key (PSK) is a straightforward method that uses a single, shared secret known only to the two peers. This approach is simple to configure and ideal for basic setups where ease of deployment is a priority. However, you must keep the key strictly confidential. An attacker with access to the key can compromise the connection.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
Create a playbook file, for example, ~/playbook.yml. A sketch of its content is shown after the following variable descriptions.
The settings specified in the example playbook include the following:
hosts: <list> - Defines a YAML dictionary with the gateways between which you want to configure a VPN. If an entry is not an Ansible-managed node, you must specify its fully-qualified domain name (FQDN) or IP address in the hostname parameter, for example:

    ...
    - hosts:
        ...
        external-host.example.com:
          hostname: 192.0.2.1

The role configures the VPN connection on each managed node. The connections are named <gateway_A>-to-<gateway_B>, for example, managed-node-01.example.com-to-managed-node-02.example.com. Note that the role cannot configure Libreswan on external (unmanaged) nodes. You must manually create the configuration on these peers.
subnets: <yaml_list_of_subnets> - Defines subnets in classless inter-domain routing (CIDR) format that are connected through the tunnel.
auth_method: psk - Enables PSK authentication between the peers. The role uses openssl on the control node to create the PSK.
auto: <startup_method> - Specifies the startup method of the connection. Valid values are add, ondemand, start, and ignore. For details, see the ipsec.conf(5) man page on a system with Libreswan installed. The default value of this variable is null, which means no automatic startup operation.
vpn_manage_firewall: true - Defines that the role opens the required ports in the firewalld service on the managed nodes.
vpn_manage_selinux: true - Defines that the role sets the required SELinux port type on the IPsec ports.
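The following is a minimal sketch of such a playbook, assuming both gateways are managed nodes; the subnet values are examples of the networks behind each gateway. Verify the variable names against the role's README.md file.

---
- name: Configure a site-to-site VPN with PSK authentication
  hosts: managed-node-01.example.com,managed-node-02.example.com
  tasks:
    - name: Create the IPsec tunnel between the gateways
      ansible.builtin.include_role:
        name: rhel-system-roles.vpn
      vars:
        vpn_connections:
          - hosts:
              managed-node-01.example.com:
                subnets:
                  # Example subnet behind the first gateway
                  - 192.0.2.0/24
              managed-node-02.example.com:
                subnets:
                  # Example subnet behind the second gateway
                  - 203.0.113.0/24
            auth_method: psk
            auto: start
        vpn_manage_firewall: true
        vpn_manage_selinux: true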
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.vpn/README.md file on the control node.
Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
Confirm that the connections are successfully started, for example:
# ansible managed-node-01.example.com -m shell -a 'ipsec trafficstatus | grep "managed-node-01.example.com-to-managed-node-02.example.com"'
...
006 #3: "managed-node-01.example.com-to-managed-node-02.example.com", type=ESP, add_time=1741857153, inBytes=38622, outBytes=324626, maxBytes=2^63B, id='@managed-node-02.example.com'
Note that this command only succeeds if the VPN connection is active. If you set the auto variable in the playbook to a value other than start, you might need to manually activate the connection on the managed nodes first.
31.4. Configuring an IPsec mesh VPN with certificate-based authentication by using the vpn RHEL system role
An IPsec mesh creates a fully interconnected network where every server can communicate securely and directly with every other server. By using the vpn RHEL system role, you can automate configuring a VPN mesh with certificate-based authentication among managed nodes.
An IPsec mesh is ideal for distributed database clusters or high-availability environments that span multiple data centers or cloud providers. Establishing a direct, encrypted tunnel between each pair of servers ensures secure communication without a central bottleneck.
For authentication, using digital certificates managed by a Certificate Authority (CA) offers a highly secure and scalable solution. Each host in the mesh presents a certificate signed by a trusted CA. This method provides strong, verifiable authentication and simplifies user management. Access can be granted or revoked centrally at the CA, and Libreswan enforces this by checking each certificate against a certificate revocation list (CRL), denying access if a certificate appears on the list.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- You prepared a PKCS #12 file for each managed node:
Each file contains:
- The private key of the server
- The server certificate
- The CA certificate
- If required, intermediate certificates
- The files are named <managed_node_name_as_in_the_inventory>.p12.
- The files are stored in the same directory as the playbook.
The server certificate contains the following fields:
- Extended Key Usage (EKU) is set to TLS Web Server Authentication.
- Common Name (CN) or Subject Alternative Name (SAN) is set to the fully-qualified domain name (FQDN) of the host.
- X509v3 CRL distribution points contains URLs to Certificate Revocation Lists (CRLs).
Procedure
Edit the ~/inventory file, and append the cert_name variable:
managed-node-01.example.com cert_name=managed-node-01.example.com
managed-node-02.example.com cert_name=managed-node-02.example.com
managed-node-03.example.com cert_name=managed-node-03.example.com
Set the cert_name variable to the value of the common name (CN) field used in the certificate for each host. Typically, the CN field is set to the fully-qualified domain name (FQDN).
Store your sensitive variables in an encrypted file:
Create the vault:
$ ansible-vault create ~/vault.yml
New Vault password: <vault_password>
Confirm New Vault password: <vault_password>
After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:
pkcs12_pwd: <password>
- Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example, ~/playbook.yml. A sketch of its content is shown after the following variable descriptions.
The settings specified in the example playbook include the following:
opportunistic: true - Enables an opportunistic mesh among multiple hosts. The policies variable defines for which subnets and hosts traffic must or can be encrypted and which of them should continue using plain text connections.
auth_method: cert - Enables certificate-based authentication. This requires that you specify the nickname of each managed node's certificate in the inventory.
policies: <list_of_policies> - Defines the Libreswan policies in YAML list format.
The default policy is private-or-clear. To change it to private, the example playbook contains a corresponding policy for the default cidr entry.
To prevent a loss of the SSH connection during the execution of the playbook if the Ansible control node is in the same IP subnet as the managed nodes, add a clear policy for the control node's IP address. For example, if the mesh should be configured for the 192.0.2.0/24 subnet and the control node uses the IP address 192.0.2.1, you require a clear policy for 192.0.2.1/32 as shown in the playbook.
For details about policies, see the ipsec.conf(5) man page on a system with Libreswan installed.
vpn_manage_firewall: true - Defines that the role opens the required ports in the firewalld service on the managed nodes.
vpn_manage_selinux: true - Defines that the role sets the required SELinux port type on the IPsec ports.
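The following is a minimal sketch of such a playbook. It assumes the mesh covers the 192.0.2.0/24 subnet and that the control node uses the IP address 192.0.2.1, as in the policy description above. It omits the tasks that copy and import the PKCS #12 files on the managed nodes, which the complete procedure also requires; the vault is loaded only so that the PKCS #12 password is available to such tasks.

---
- name: Configure an opportunistic IPsec mesh with certificate authentication
  hosts: managed-node-01.example.com,managed-node-02.example.com,managed-node-03.example.com
  vars_files:
    - ~/vault.yml
  tasks:
    - name: Create the mesh configuration
      ansible.builtin.include_role:
        name: rhel-system-roles.vpn
      vars:
        vpn_connections:
          - opportunistic: true
            auth_method: cert
            policies:
              # Encrypt traffic by default instead of the private-or-clear default policy
              - policy: private
                cidr: default
              - policy: private
                cidr: 192.0.2.0/24
              # Keep traffic to the control node unencrypted to avoid losing the SSH connection
              - policy: clear
                cidr: 192.0.2.1/32
        vpn_manage_firewall: true
        vpn_manage_selinux: true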
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.vpn/README.md file on the control node.
Validate the playbook syntax:
$ ansible-playbook --ask-vault-pass --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook --ask-vault-pass ~/playbook.yml
Verification
On a node in the mesh, ping another node to activate the connection:
[root@managed-node-01]# ping managed-node-02.example.com
Confirm that the connection is active:
[root@managed-node-01]# ipsec trafficstatus
006 #2: "private#192.0.2.0/24"[1] ...192.0.2.2, type=ESP, add_time=1741938929, inBytes=372408, outBytes=545728, maxBytes=2^63B, id='CN=managed-node-02.example.com'
Chapter 32. Configuring Microsoft SQL Server by using RHEL system roles
You can use the microsoft.sql.server Ansible system role to automate the installation and management of Microsoft SQL Server. This role also optimizes Red Hat Enterprise Linux (RHEL) to improve the performance and throughput of SQL Server by applying the mssql TuneD profile.
During the installation, the role adds repositories for SQL Server and related packages to the managed hosts. Packages in these repositories are provided, maintained, and hosted by Microsoft.
32.1. Installing and configuring SQL Server with an existing TLS certificate by using the microsoft.sql.server Ansible system role
By using the microsoft.sql.server Ansible system role, you can automate the installation and configuration of Microsoft SQL Server with TLS encryption. In the playbook, you can use an existing private key and a TLS certificate that was issued by a certificate authority (CA).
If your application requires a Microsoft SQL Server database, you can configure SQL Server with TLS encryption to enable secure communication between the application and the database.
Depending on the RHEL version on the managed host, the version of SQL Server that you can install differs:
- RHEL 7.9: SQL Server 2017 and 2019
- RHEL 8: SQL Server 2017, 2019, and 2022
- RHEL 9.4 and later: SQL Server 2022
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- You installed the ansible-collection-microsoft-sql package or the microsoft.sql collection on the control node.
- The managed node has 2 GB or more RAM installed.
- The managed node uses one of the following versions: RHEL 7.9, RHEL 8, RHEL 9.4 or later.
- You stored the certificate in the sql_crt.pem file in the same directory as the playbook.
- You stored the private key in the sql_cert.key file in the same directory as the playbook.
- SQL clients trust the CA that issued the certificate.
Procedure
Store your sensitive variables in an encrypted file:
Create the vault:
$ ansible-vault create ~/vault.yml
New Vault password: <vault_password>
Confirm New Vault password: <vault_password>
After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:
sa_pwd: <sa_password>
- Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example, ~/playbook.yml. A sketch of its content is shown after the following variable descriptions.
The settings specified in the example playbook include the following:
mssql_tls_enable: true - Enables TLS encryption. If you enable this setting, you must also define mssql_tls_cert and mssql_tls_private_key.
mssql_tls_self_sign: false - Indicates whether the certificates that you use are self-signed or not. Based on this setting, the role decides whether to run the sqlcmd command with the -C argument to trust certificates.
mssql_tls_cert: <path> - Sets the path to the TLS certificate stored on the control node. The role copies this file to the /etc/pki/tls/certs/ directory on the managed node.
mssql_tls_private_key: <path> - Sets the path to the TLS private key on the control node. The role copies this file to the /etc/pki/tls/private/ directory on the managed node.
mssql_tls_force: true - Replaces the TLS certificate and private key in their destination directories if they exist.
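The following is a minimal sketch of such a playbook. The TLS variables correspond to the descriptions above; the remaining settings (the EULA acceptance variables, mssql_version, mssql_edition, and mssql_password) are assumptions based on the role's documented variable names, so verify them against the README.md file before use.

---
- name: Install and configure SQL Server with an existing TLS certificate
  hosts: managed-node-01.example.com
  vars_files:
    - ~/vault.yml
  tasks:
    - name: Set up SQL Server with TLS encryption
      ansible.builtin.include_role:
        name: microsoft.sql.server
      vars:
        # Assumed base settings; confirm names and accepted values in the role's README.md
        mssql_accept_microsoft_odbc_driver_17_for_sql_server_eula: true
        mssql_accept_microsoft_cli_utilities_for_sql_server_eula: true
        mssql_accept_microsoft_sql_server_standard_eula: true
        mssql_version: 2022
        mssql_edition: Developer
        mssql_password: "{{ sa_pwd }}"
        # TLS settings described above
        mssql_tls_enable: true
        mssql_tls_self_sign: false
        mssql_tls_cert: sql_crt.pem
        mssql_tls_private_key: sql_cert.key
        mssql_tls_force: true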
For details about all variables used in the playbook, see the /usr/share/ansible/roles/microsoft.sql-server/README.md file on the control node.
Validate the playbook syntax:
$ ansible-playbook --ask-vault-pass --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook --ask-vault-pass ~/playbook.yml
Verification
On the SQL Server host, use the sqlcmd utility with the -N parameter to establish an encrypted connection to SQL Server and run a query, for example:
$ /opt/mssql-tools/bin/sqlcmd -N -S server.example.com -U "sa" -P <sa_password> -Q 'SELECT SYSTEM_USER'
If the command succeeds, the connection to the server was TLS encrypted.
32.2. Installing and configuring SQL Server with a TLS certificate issued from IdM by using the microsoft.sql.server Ansible system role
By using the microsoft.sql.server Ansible system role, you can automate the installation and configuration of Microsoft SQL Server with TLS encryption.
If your application requires a Microsoft SQL Server database, you can configure SQL Server with TLS encryption to enable secure communication between the application and the database. If the SQL Server host is a member of a RHEL Identity Management (IdM) domain, the certmonger service can manage the certificate request and future renewals.
The microsoft.sql.server role uses the certificate Ansible system role to configure certmonger and request a certificate from IdM.
Depending on the RHEL version on the managed host, the version of SQL Server that you can install differs:
- RHEL 7.9: SQL Server 2017 and 2019
- RHEL 8: SQL Server 2017, 2019, and 2022
- RHEL 9.4 and later: SQL Server 2022
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- You installed the ansible-collection-microsoft-sql package or the microsoft.sql collection on the control node.
- The managed node has 2 GB or more RAM installed.
- The managed node uses one of the following versions: RHEL 7.9, RHEL 8, RHEL 9.4 or later.
- You enrolled the managed node in a Red Hat Enterprise Linux Identity Management (IdM) domain.
Procedure
Store your sensitive variables in an encrypted file:
Create the vault:
$ ansible-vault create ~/vault.yml
New Vault password: <vault_password>
Confirm New Vault password: <vault_password>
After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:
sa_pwd: <sa_password>
- Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example, ~/playbook.yml. A sketch of its content is shown after the following variable descriptions.
The settings specified in the example playbook include the following:
mssql_tls_enable: true - Enables TLS encryption. If you enable this setting, you must also define mssql_tls_certificates.
mssql_tls_certificates - A list of YAML dictionaries with settings for the certificate role.
name: <file_name> - Defines the base name of the certificate and private key. The certificate role stores the certificate in the /etc/pki/tls/certs/<file_name>.crt and the private key in the /etc/pki/tls/private/<file_name>.key file.
dns: <hostname_or_list_of_hostnames> - Sets the hostnames that the Subject Alternative Names (SAN) field in the issued certificate contains. You can use a wildcard (*) or specify multiple names in YAML list format.
ca: <ca_type> - Defines how the certificate role requests the certificate. Set the variable to ipa if the host is enrolled in an IdM domain or self-sign to request a self-signed certificate.
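The following is a minimal sketch of such a playbook. The TLS variables correspond to the descriptions above; the base installation settings are assumptions based on the role's documented variable names, so verify them against the README.md file before use.

---
- name: Install and configure SQL Server with a certificate issued by IdM
  hosts: managed-node-01.example.com
  vars_files:
    - ~/vault.yml
  tasks:
    - name: Set up SQL Server with TLS encryption
      ansible.builtin.include_role:
        name: microsoft.sql.server
      vars:
        # Assumed base settings; confirm names and accepted values in the role's README.md
        mssql_accept_microsoft_odbc_driver_17_for_sql_server_eula: true
        mssql_accept_microsoft_cli_utilities_for_sql_server_eula: true
        mssql_accept_microsoft_sql_server_standard_eula: true
        mssql_version: 2022
        mssql_edition: Developer
        mssql_password: "{{ sa_pwd }}"
        # TLS settings described above; the certificate role requests the certificate from IdM
        mssql_tls_enable: true
        mssql_tls_certificates:
          - name: sql_cert
            dns: managed-node-01.example.com
            ca: ipa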
For details about all variables used in the playbook, see the /usr/share/ansible/roles/microsoft.sql-server/README.md file on the control node.
Validate the playbook syntax:
$ ansible-playbook --ask-vault-pass --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook --ask-vault-pass ~/playbook.yml
Verification
On the SQL Server host, use the sqlcmd utility with the -N parameter to establish an encrypted connection to SQL Server and run a query, for example:
$ /opt/mssql-tools/bin/sqlcmd -N -S server.example.com -U "sa" -P <sa_password> -Q 'SELECT SYSTEM_USER'
If the command succeeds, the connection to the server was TLS encrypted.
32.3. Installing and configuring SQL Server with custom storage paths by using the microsoft.sql.server Ansible system role
When you use the microsoft.sql.server Ansible system role to install and configure a new SQL Server, you can customize the paths and modes of the data and log directories. For example, configure custom paths if you want to store databases and log files in a different directory with more storage.
If you change the data or log path and re-run the playbook, the previously used directories and all their content remain at the original path. Only new databases and logs are stored in the new location.
| Type | Directory | Mode | Owner | Group |
|---|---|---|---|---|
| Data | | [a] | | |
| Logs | | [a] | | |

[a] If the directory exists, the role preserves the mode. If the directory does not exist, the role applies the default umask on the managed node when it creates the directory.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- You installed the ansible-collection-microsoft-sql package or the microsoft.sql collection on the control node.
- The managed node has 2 GB or more RAM installed.
- The managed node uses one of the following versions: RHEL 7.9, RHEL 8, RHEL 9.4 or later.
Procedure
Store your sensitive variables in an encrypted file:
Create the vault:
$ ansible-vault create ~/vault.yml
New Vault password: <vault_password>
Confirm New Vault password: <vault_password>
After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:
sa_pwd: <sa_password>
- Save the changes, and close the editor. Ansible encrypts the data in the vault.
Edit an existing playbook file, for example ~/playbook.yml, and add the storage and log-related variables. A sketch of the resulting playbook is shown after the following description.
The settings specified in the example playbook include the following:
mssql_datadir_mode and mssql_logdir_mode - Set the permission modes. Specify the value in single quotes to ensure that the role parses the value as a string and not as an octal number.
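The following sketch shows a playbook with the storage and log-related variables added. The paths and modes match the verification step below; mssql_datadir and mssql_logdir, as well as the base installation settings, are assumptions based on the role's documented variable names, so verify them against the README.md file.

---
- name: Install and configure SQL Server with custom storage paths
  hosts: managed-node-01.example.com
  vars_files:
    - ~/vault.yml
  tasks:
    - name: Set up SQL Server with custom data and log directories
      ansible.builtin.include_role:
        name: microsoft.sql.server
      vars:
        # Assumed base settings; confirm names and accepted values in the role's README.md
        mssql_accept_microsoft_odbc_driver_17_for_sql_server_eula: true
        mssql_accept_microsoft_cli_utilities_for_sql_server_eula: true
        mssql_accept_microsoft_sql_server_standard_eula: true
        mssql_version: 2022
        mssql_edition: Developer
        mssql_password: "{{ sa_pwd }}"
        # Custom storage paths and modes; quote the modes so they are parsed as strings
        mssql_datadir: /var/lib/mssql
        mssql_datadir_mode: '0700'
        mssql_logdir: /var/log/mssql
        mssql_logdir_mode: '0700'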
For details about all variables used in the playbook, see the /usr/share/ansible/roles/microsoft.sql-server/README.md file on the control node.
Validate the playbook syntax:
$ ansible-playbook --ask-vault-pass --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook --ask-vault-pass ~/playbook.yml
Verification
Display the mode of the data directory:
$ ansible managed-node-01.example.com -m command -a 'ls -ld /var/lib/mssql/'
drwx------. 12 mssql mssql 4096 Jul 3 13:53 /var/lib/mssql/
Display the mode of the log directory:
$ ansible managed-node-01.example.com -m command -a 'ls -ld /var/log/mssql/'
drwx------. 12 mssql mssql 4096 Jul 3 13:53 /var/log/mssql/
32.4. Installing and configuring SQL Server with AD integration by using the microsoft.sql.server Ansible system role
You can integrate Microsoft SQL Server into an Active Directory (AD) to enable AD users to authenticate to SQL Server. By using the microsoft.sql.server Ansible system role, you can automate this process and remotely install and configure SQL Server accordingly.
Depending on the RHEL version on the managed host, the version of SQL Server that you can install differs:
- RHEL 7.9: SQL Server 2017 and 2019
- RHEL 8: SQL Server 2017, 2019, and 2022
- RHEL 9.4 and later: SQL Server 2022
Note that you must still perform manual steps in AD and SQL Server after you run the playbook.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- You installed the ansible-collection-microsoft-sql package or the microsoft.sql collection on the control node.
- The managed node has 2 GB or more RAM installed.
- The managed node uses one of the following versions: RHEL 7.9, RHEL 8, RHEL 9.4 or later.
- An AD domain is available in the network.
- A reverse DNS (RDNS) zone exists in AD, and it contains Pointer (PTR) resource records for each AD domain controller (DC).
- The managed host’s network settings use an AD DNS server.
The managed host can resolve the following DNS entries:
- Both the hostnames and the fully-qualified domain names (FQDNs) of the AD DCs resolve to their IP addresses.
- The IP addresses of the AD DCs resolve to their FQDNs.
Procedure
Store your sensitive variables in an encrypted file:
Create the vault:
$ ansible-vault create ~/vault.yml
New Vault password: <vault_password>
Confirm New Vault password: <vault_password>
After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:
sa_pwd: <sa_password>
sql_pwd: <SQL_AD_password>
ad_admin_pwd: <AD_admin_password>
- Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example, ~/playbook.yml. A sketch of its content is shown after the following variable descriptions.
The settings specified in the example playbook include the following:
mssql_ad_configure: true - Enables authentication against AD.
mssql_ad_join: true - Uses the ad_integration RHEL system role to join the managed node to AD. The role uses the settings from the ad_integration_realm, ad_integration_user, and ad_integration_password variables to join the domain.
mssql_ad_sql_user: <username> - Sets the name of an AD account that the role should create in AD and SQL Server for administration purposes.
ad_integration_user: <AD_user> - Sets the name of an AD user with privileges to join machines to the domain and to create the AD user specified in mssql_ad_sql_user.
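The following is a minimal sketch of such a playbook. The AD-related variables correspond to the descriptions above; mssql_ad_sql_password and the base installation settings are assumptions based on the role's documented variable names, and sqluser is a placeholder account name. Verify the names against the README.md file before use.

---
- name: Install and configure SQL Server with AD integration
  hosts: managed-node-01.example.com
  vars_files:
    - ~/vault.yml
  tasks:
    - name: Set up SQL Server and join the AD domain
      ansible.builtin.include_role:
        name: microsoft.sql.server
      vars:
        # Assumed base settings; confirm names and accepted values in the role's README.md
        mssql_accept_microsoft_odbc_driver_17_for_sql_server_eula: true
        mssql_accept_microsoft_cli_utilities_for_sql_server_eula: true
        mssql_accept_microsoft_sql_server_standard_eula: true
        mssql_version: 2022
        mssql_edition: Developer
        mssql_password: "{{ sa_pwd }}"
        # AD settings described above
        mssql_ad_configure: true
        mssql_ad_join: true
        mssql_ad_sql_user: sqluser
        mssql_ad_sql_password: "{{ sql_pwd }}"
        ad_integration_realm: ad.example.com
        ad_integration_user: Administrator
        ad_integration_password: "{{ ad_admin_pwd }}"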
For details about all variables used in the playbook, see the /usr/share/ansible/roles/microsoft.sql-server/README.md file on the control node.
Validate the playbook syntax:
$ ansible-playbook --ask-vault-pass --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook --ask-vault-pass ~/playbook.yml
Authorize AD users that should be able to authenticate to SQL Server. On the SQL Server, perform the following steps:
Obtain a Kerberos ticket for the Administrator user:
$ kinit Administrator@ad.example.com
Authorize an AD user:
$ /opt/mssql-tools/bin/sqlcmd -S. -Q 'CREATE LOGIN [AD\<AD_user>] FROM WINDOWS;'
Repeat this step for every AD user who should be able to access SQL Server.
Verification
On the managed node that runs SQL Server:
Obtain a Kerberos ticket for an AD user:
$ kinit <AD_user>@ad.example.com
Use the sqlcmd utility to log in to SQL Server and run a query, for example:
$ /opt/mssql-tools/bin/sqlcmd -S. -Q 'SELECT SYSTEM_USER'