Automating system administration by using RHEL System Roles in RHEL 8
Consistent and repeatable configuration of RHEL deployments across multiple hosts with Red Hat Ansible Automation Platform playbooks
Abstract
Making open source more inclusive
Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.
Providing feedback on Red Hat documentation
We appreciate your feedback on our documentation. Let us know how we can improve it.
Submitting feedback through Jira (account required)
- Log in to the Jira website.
- Click Create in the top navigation bar.
- Enter a descriptive title in the Summary field.
- Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation.
- Click Create at the bottom of the dialogue.
Chapter 1. Introduction to RHEL System Roles
By using RHEL System Roles, you can remotely manage the system configurations of multiple RHEL systems across major versions of RHEL. RHEL System Roles is a collection of Ansible roles and modules. To use it to configure systems, you must use the following components:
- Control node
- A control node is the system from which you run Ansible commands and playbooks. Your control node can be an Ansible Automation Platform, Red Hat Satellite, or a RHEL 9, 8, or 7 host. For more information, see Preparing a control node on RHEL 8.
- Managed node
- Managed nodes are the servers and network devices that you manage with Ansible. Managed nodes are also sometimes called hosts. Ansible does not have to be installed on managed nodes. For more information, see Preparing a managed node.
- Ansible playbook
- In a playbook, you define the configuration you want to achieve on your managed nodes or a set of steps for the system on the managed node to perform. Playbooks are Ansible’s configuration, deployment, and orchestration language.
- Inventory
- In an inventory file, you list the managed nodes and specify information such as IP address for each managed node. In an inventory, you can also organize managed nodes, creating and nesting groups for easier scaling. An inventory file is also sometimes called a hostfile.
On Red Hat Enterprise Linux 8, you can use the following roles provided by the rhel-system-roles package, which is available in the AppStream repository:
| Role name | Role description | Chapter title |
|---|---|---|
| certificate | Certificate Issuance and Renewal | Requesting certificates using RHEL System Roles |
| cockpit | Web console | Installing and configuring web console with the cockpit RHEL System Role |
| crypto_policies | System-wide cryptographic policies | Setting a custom cryptographic policy across systems |
| firewall | Firewalld | Configuring firewalld using System Roles |
| ha_cluster | HA Cluster | Configuring a high-availability cluster using System Roles |
| kdump | Kernel Dumps | Configuring kdump using RHEL System Roles |
| kernel_settings | Kernel Settings | Using Ansible roles to permanently configure kernel parameters |
| logging | Logging | Using the logging System Role |
| metrics | Metrics (PCP) | Monitoring performance using RHEL System Roles |
| microsoft.sql.server | Microsoft SQL Server | Configuring Microsoft SQL Server using the microsoft.sql.server Ansible role |
| network | Networking | Using the network RHEL System Role to manage InfiniBand connections |
| nbde_client | Network Bound Disk Encryption client | Using the nbde_client and nbde_server System Roles |
| nbde_server | Network Bound Disk Encryption server | Using the nbde_client and nbde_server System Roles |
| postfix | Postfix | Variables of the postfix role in System Roles |
| selinux | SELinux | Configuring SELinux using System Roles |
| ssh | SSH client | Configuring secure communication with the ssh System Roles |
| sshd | SSH server | Configuring secure communication with the ssh System Roles |
| storage | Storage | Managing local storage using RHEL System Roles |
| tlog | Terminal Session Recording | Configuring a system for session recording using the tlog RHEL System Role |
| timesync | Time Synchronization | Configuring time synchronization using RHEL System Roles |
| vpn | VPN | Configuring VPN connections with IPsec by using the vpn RHEL System Role |
Chapter 2. Preparing a control node and managed nodes to use RHEL System Roles
Before you can use individual RHEL System Roles to manage services and settings, you must prepare the control node and managed nodes.
2.1. Preparing a control node on RHEL 8
Before using RHEL System Roles, you must configure a control node. This system then configures the managed hosts from the inventory according to the playbooks.
Prerequisites
- RHEL 8 is installed. For more information about installing RHEL, see Installation guide.
- The system is registered to the Customer Portal.
- A Red Hat Enterprise Linux Server subscription is attached to the system.
- If available in your Customer Portal account, an Ansible Automation Platform subscription is attached to the system.
Procedure
- Install the rhel-system-roles package:

  [root@control-node]# yum install rhel-system-roles

  This command installs the ansible-core package as a dependency.

- Create a user named ansible to manage and run playbooks:

  [root@control-node]# useradd ansible

- Switch to the newly created ansible user:

  [root@control-node]# su - ansible

  Perform the rest of the procedure as this user.

- Create an SSH public and private key:

  [ansible@control-node]$ ssh-keygen
  Generating public/private rsa key pair.
  Enter file in which to save the key (/home/ansible/.ssh/id_rsa):
  Enter passphrase (empty for no passphrase): <password>
  ...

  Use the suggested default location for the key file.
- Optional: To prevent Ansible from prompting you for the SSH key password each time you establish a connection, configure an SSH agent.
- Create the ~/.ansible.cfg file. A minimal example of its content is shown after the following list.

  Note: Settings in the ~/.ansible.cfg file have a higher priority and override settings from the global /etc/ansible/ansible.cfg file.

  With these settings, Ansible performs the following actions:
- Manages hosts in the specified inventory file.
- Uses the account set in the remote_user parameter when it establishes SSH connections to managed nodes.
- Uses the sudo utility to execute tasks on managed nodes as the root user.
- Prompts for the root password of the remote user every time you apply a playbook. This is recommended for security reasons.
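  The following is a minimal sketch of such an ~/.ansible.cfg that matches the behavior described above; the inventory path and remote user name are assumptions based on this procedure:

  [defaults]
  ; inventory file created in the next step
  inventory = /home/ansible/inventory
  ; user that the control node uses for SSH connections
  remote_user = ansible

  [privilege_escalation]
  become = True
  become_method = sudo
  become_user = root
  ; prompt for the sudo password on every playbook run
  become_ask_pass = True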
- Create an ~/inventory file in INI or YAML format that lists the hostnames of managed hosts. You can also define groups of hosts in the inventory file. For example, the following is an inventory file in the INI format with three hosts and one host group named US:

  managed-node-01.example.com

  [US]
  managed-node-02.example.com ansible_host=192.0.2.100
  managed-node-03.example.com

  Note that the control node must be able to resolve the hostnames. If the DNS server cannot resolve certain hostnames, add the ansible_host parameter next to the host entry to specify its IP address.
Next steps
- Prepare the managed nodes. For more information, see Preparing a managed node.
2.2. Preparing a managed node
Managed nodes are the systems listed in the inventory, which the control node configures according to the playbook. You do not have to install Ansible on managed hosts.
Prerequisites
- You prepared the control node. For more information, see Preparing a control node on RHEL 8.
- You have SSH access from the control node.

  Important: Direct SSH access as the root user is a security risk. To reduce this risk, you will create a local user on this node and configure a sudo policy when preparing a managed node. Ansible on the control node can then use the local user account to log in to the managed node and run playbooks as different users, such as root.
Procedure
- Create a user named ansible:

  [root@managed-node-01]# useradd ansible

  The control node later uses this user to establish an SSH connection to this host.

- Set a password for the ansible user:

  [root@managed-node-01]# passwd ansible
  Changing password for user ansible.
  New password: <password>
  Retype new password: <password>
  passwd: all authentication tokens updated successfully.

  You must enter this password when Ansible uses sudo to perform tasks as the root user.

- Install the ansible user's SSH public key on the managed node:

  - Log in to the control node as the ansible user, and copy the SSH public key to the managed node:

    [ansible@control-node]$ ssh-copy-id managed-node-01.example.com
    /usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/ansible/.ssh/id_rsa.pub"
    The authenticity of host 'managed-node-01.example.com (192.0.2.100)' can't be established.
    ECDSA key fingerprint is SHA256:9bZ33GJNODK3zbNhybokN/6Mq7hu3vpBXDrCxe7NAvo.

  - When prompted, connect by entering yes:

    Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
    /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
    /usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys

  - When prompted, enter the password of the ansible user.

  - Verify the SSH connection by remotely executing a command on the control node:

    [ansible@control-node]$ ssh <managed-node-01.example.com> whoami
    ansible

- Create a sudo configuration for the ansible user:

  - Create and edit the /etc/sudoers.d/ansible file by using the visudo command:

    [root@managed-node-01]# visudo /etc/sudoers.d/ansible

    The benefit of using visudo over a normal editor is that this utility provides basic sanity checks and checks for parse errors before installing the file.

  - Configure a sudoers policy in the /etc/sudoers.d/ansible file that meets your requirements, for example:

    - To grant permissions to the ansible user to run all commands as any user and group on this host after entering the ansible user's password, use:

      ansible ALL=(ALL) ALL

    - To grant permissions to the ansible user to run all commands as any user and group on this host without entering the ansible user's password, use:

      ansible ALL=(ALL) NOPASSWD: ALL

  Alternatively, configure a more fine-grained policy that matches your security requirements. For further details on sudoers policies, see the sudoers(5) man page.
Verification
- Verify that you can execute commands from the control node on all managed nodes. The hard-coded all group dynamically contains all hosts listed in the inventory file; a sketch of such a check is shown below.
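  For example, a minimal check, assuming the inventory and sudo policy from the previous steps, uses the Ansible ping module to verify connectivity:

  [ansible@control-node]$ ansible all -m ping
  BECOME password: <password>
  managed-node-01.example.com | SUCCESS => {
      "changed": false,
      "ping": "pong"
  }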
- Verify that privilege escalation works correctly by running the whoami utility on a managed host by using the Ansible command module:

  [ansible@control-node]$ ansible managed-node-01.example.com -m command -a whoami
  BECOME password: <password>
  managed-node-01.example.com | CHANGED | rc=0 >>
  root

  If the command returns root, you configured sudo on the managed nodes correctly.
Chapter 3. Installing and Using Collections
3.1. Introduction to Ansible Collections
Ansible Collections are the new way of distributing, maintaining, and consuming automation. By combining multiple types of Ansible content such as playbooks, roles, modules, and plugins, you can benefit from improvements in flexibility and scalability.
Ansible Collections are an alternative to the traditional RHEL System Roles format. Using RHEL System Roles in the Ansible Collection format is almost the same as using them in the traditional format. The difference is that Ansible Collections use the concept of a fully qualified collection name (FQCN), which consists of a namespace and the collection name. The namespace we use is redhat and the collection name is rhel_system_roles. So, while the traditional format presents the kernel_settings role as rhel-system-roles.kernel_settings (with dashes in the prefix), the fully qualified collection name presents it as redhat.rhel_system_roles.kernel_settings (with underscores in the prefix).
The combination of a namespace and a collection name guarantees that the objects are unique. It also ensures that objects are shared across the Ansible Collections and namespaces without any conflicts.
3.2. Collections structure
Collections are a package format for Ansible content. The data structure is as below:
- docs/: local documentation for the collection, with examples, if the role provides the documentation
- galaxy.yml: source data for the MANIFEST.json that will be part of the Ansible Collection package
playbooks/: playbooks are available here
- tasks/: this holds 'task list files' for include_tasks/import_tasks usage
plugins/: all Ansible plugins and modules are available here, each in its subdirectory
- modules/: Ansible modules
- module_utils/: common code for developing modules
- lookup/: lookup plugins
- filter/: Jinja2 filter plugins
- connection/: connection plugins required if not using the default
- roles/: directory for Ansible roles
- tests/: tests for the collection’s content
3.3. Installing Collections by using the CLI
Collections are a distribution format for Ansible content that can include playbooks, roles, modules, and plugins.
You can install Collections through Ansible Galaxy, through the browser, or by using the command line.
Prerequisites
- Access and permissions to one or more managed nodes.
- Access and permissions to a control node, which is a system from which Red Hat Ansible Core configures other systems.
- On the control node:
  - The ansible-core and rhel-system-roles packages are installed.
  - An inventory file which lists the managed nodes.
Procedure
- Install the collection via RPM package:

  # yum install rhel-system-roles
After the installation is finished, the roles are available as redhat.rhel_system_roles.<role_name>. Additionally, you can find the documentation for each role at /usr/share/ansible/collections/ansible_collections/redhat/rhel_system_roles/roles/<role_name>/README.md.
Verification steps
To verify the installation, run the kernel_settings role in check mode on your localhost. You must also use the --become parameter because it is necessary for the Ansible package module. Because of check mode, the command does not change your system:
Run the following command:
$ ansible-playbook -c local -i localhost, --check --become /usr/share/ansible/collections/ansible_collections/redhat/rhel_system_roles/tests/kernel_settings/tests_default.yml
The last line of the command output should contain the value failed=0.
The comma after localhost is mandatory. You must add it even if there is only one host on the list. Without it, ansible-playbook would identify localhost as a file or a directory.
3.4. Installing Collections from Automation Hub
If you are using the Automation Hub, you can install the RHEL System Roles Collection hosted on the Automation Hub.
Prerequisites
- Access and permissions to one or more managed nodes.
- Access and permissions to a control node, which is a system from which Red Hat Ansible Core configures other systems.
- On the control node:
  - The ansible-core and rhel-system-roles packages are installed.
  - An inventory file which lists the managed nodes.
Procedure
- Define Red Hat Automation Hub as the default source for content in the ansible.cfg configuration file. See Configuring Red Hat Automation Hub as the primary source for content.
- Install the redhat.rhel_system_roles collection from the Automation Hub:

  # ansible-galaxy collection install redhat.rhel_system_roles

  After the installation is finished, the roles are available as redhat.rhel_system_roles.<role_name>. Additionally, you can find the documentation for each role at /usr/share/ansible/collections/ansible_collections/redhat/rhel_system_roles/roles/<role_name>/README.md.
Verification steps
To verify the installation, run the kernel_settings role in check mode on your localhost. You must also use the --become parameter because it is necessary for the Ansible package module. Because of check mode, the command does not change your system:
Run the following command:
$ ansible-playbook -c local -i localhost, --check --become /usr/share/ansible/collections/ansible_collections/redhat/rhel_system_roles/tests/kernel_settings/tests_default.yml
The last line of the command output should contain the value failed=0.
The comma after localhost is mandatory. You must add it even if there is only one host on the list. Without it, ansible-playbook would identify localhost as a file or a directory.
3.5. Applying a local logging System Role using Collections
Following is an example using Collections to prepare and apply an Ansible playbook to configure a logging solution on a set of separate machines.
Prerequisites
- The rhel-system-roles Collection is installed, either from an RPM package or from the Automation Hub.
Procedure
- Create a playbook that defines the required role:

  - Create a new YAML file and open it in a text editor, for example:

    # vi logging-playbook.yml

  - Insert content that applies the logging role into the YAML file; a sketch is shown after this procedure.
- Execute the playbook on a specific inventory:

  # ansible-playbook -i inventory-file logging-playbook.yml

  Where:
- inventory-file is the name of your inventory file.
- logging-playbook.yml is the playbook you use.
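A minimal sketch of such a logging playbook, assuming the redhat.rhel_system_roles.logging role with a basics system input routed to a files output; the input, output, and flow names are hypothetical:

---
- name: Deploy a basics input and a files output
  hosts: all
  roles:
    - redhat.rhel_system_roles.logging
  vars:
    logging_inputs:
      # Collect messages from the local system
      - name: system_input
        type: basics
    logging_outputs:
      # Write collected messages to local log files
      - name: files_output
        type: files
    logging_flows:
      # Connect the input to the output
      - name: flow0
        inputs: [system_input]
        outputs: [files_output]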
Verification steps
- Test the syntax of the configuration files /etc/rsyslog.conf and /etc/rsyslog.d:

  # rsyslogd -N 1
  rsyslogd: version 8.1911.0-6.el8, config validation run (level 1), master config /etc/rsyslog.conf
  rsyslogd: End of config validation run. Bye.

- Verify that the system sends messages to the log:

  - Send a test message:

    # logger test

  - View the /var/log/messages log, for example:

    # cat /var/log/messages
    Aug  5 13:48:31 hostname root[6778]: test

    The hostname is the hostname of the client system. The log displays the user name of the user that entered the logger command, in this case, root.
Chapter 4. Ansible IPMI modules in RHEL
4.1. The rhel_mgmt collection
The Intelligent Platform Management Interface (IPMI) is a specification for a set of standard protocols to communicate with baseboard management controller (BMC) devices. The IPMI modules allow you to enable and support hardware management automation. The IPMI modules are available in:
- The rhel_mgmt Collection. The package name is ansible-collection-redhat-rhel_mgmt.
- The RHEL 8 AppStream, as part of the new ansible-collection-redhat-rhel_mgmt package.
The following IPMI modules are available in the rhel_mgmt collection:
- ipmi_boot: Management of boot device order
- ipmi_power: Power management for machine
The mandatory parameters used for the IPMI Modules are:
- ipmi_boot parameters:

| Parameter name | Description |
|---|---|
| name | Hostname or IP address of the BMC |
| password | Password to connect to the BMC |
| bootdev | Device to be used on next boot * network * floppy * hd * safe * optical * setup * default |
| user | Username to connect to the BMC |
- ipmi_power parameters:

| Parameter name | Description |
|---|---|
| name | BMC hostname or IP address |
| password | Password to connect to the BMC |
| user | Username to connect to the BMC |
| state | Check whether the machine is in the desired status * on * off * shutdown * reset * boot |
4.2. Installing the rhel_mgmt Collection using the CLI
You can install the rhel_mgmt Collection using the command line.
Prerequisites
- The ansible-core package is installed.
Procedure
- Install the collection via RPM package:

  # yum install ansible-collection-redhat-rhel_mgmt

  After the installation is finished, the IPMI modules are available in the redhat.rhel_mgmt Ansible collection.
4.3. Example using the ipmi_boot module
The following example shows how to use the ipmi_boot module in a playbook to set a boot device for the next boot. For simplicity, the examples use the same host as the Ansible control host and managed host, thus executing the modules on the same host where the playbook is executed.
Prerequisites
- The rhel_mgmt collection is installed.
- The pyghmi library in the python3-pyghmi package is installed in one of the following locations:
  - The host where you execute the playbook.
  - The managed host. If you use localhost as the managed host, install the python3-pyghmi package on the host where you execute the playbook instead.
- The IPMI BMC that you want to control is accessible via network from the host where you execute the playbook, or the managed host (if not using localhost as the managed host). Note that the host whose BMC is being configured by the module is generally different from the host where the module is executing (the Ansible managed host), as the module contacts the BMC over the network using the IPMI protocol.
- You have credentials to access BMC with an appropriate level of access.
Procedure
- Create a new playbook.yml file that calls the ipmi_boot module; a sketch is shown after this procedure.
- Execute the playbook against localhost:
  # ansible-playbook playbook.yml
As a result, the output returns the value “success”.
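A minimal sketch of such a playbook, assuming a hypothetical BMC hostname and credentials; the parameters match the ipmi_boot table in Section 4.1:

---
- name: Set the boot device for the next boot
  hosts: localhost
  tasks:
    - name: Ensure the next boot uses the hard disk
      redhat.rhel_mgmt.ipmi_boot:
        name: bmc.host.example.com   # hostname or IP address of the BMC
        user: admin_user
        password: admin_password
        bootdev: hd                  # boot from hard disk on the next boot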
4.4. Example using the ipmi_power module
This example shows how to use the ipmi_power module in a playbook to check if the system is turned on. For simplicity, the examples use the same host as the Ansible control host and managed host, thus executing the modules on the same host where the playbook is executed.
Prerequisites
- The rhel_mgmt collection is installed.
- The pyghmi library in the python3-pyghmi package is installed in one of the following locations:
  - The host where you execute the playbook.
  - The managed host. If you use localhost as the managed host, install the python3-pyghmi package on the host where you execute the playbook instead.
- The IPMI BMC that you want to control is accessible via network from the host where you execute the playbook, or the managed host (if not using localhost as the managed host). Note that the host whose BMC is being configured by the module is generally different from the host where the module is executing (the Ansible managed host), as the module contacts the BMC over the network using the IPMI protocol.
- You have credentials to access BMC with an appropriate level of access.
Procedure
- Create a new playbook.yml file that calls the ipmi_power module; a sketch is shown after this procedure.
- Execute the playbook:
  # ansible-playbook playbook.yml
The output returns the value “true”.
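A minimal sketch of such a playbook, assuming a hypothetical BMC hostname and credentials; the parameters match the ipmi_power table in Section 4.1:

---
- name: Check the power state of the machine
  hosts: localhost
  tasks:
    - name: Ensure the machine is powered on
      redhat.rhel_mgmt.ipmi_power:
        name: bmc.host.example.com   # hostname or IP address of the BMC
        user: admin_user
        password: admin_password
        state: "on"                  # desired power status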
Chapter 5. The Redfish modules in RHEL
The Redfish modules for remote management of devices are now part of the redhat.rhel_mgmt Ansible collection. With the Redfish modules, you can easily use management automation on bare-metal servers and platform hardware by getting information about the servers or controlling them through an Out-Of-Band (OOB) controller, using the standard HTTPS transport and JSON format.
5.1. The Redfish modules
The redhat.rhel_mgmt Ansible collection provides the Redfish modules to support hardware management in Ansible over Redfish. The redhat.rhel_mgmt collection is available in the ansible-collection-redhat-rhel_mgmt package. To install it, see Installing the redhat.rhel_mgmt Collection using the CLI.
The following Redfish modules are available in the redhat.rhel_mgmt collection:
- redfish_info: The redfish_info module retrieves information about the remote Out-Of-Band (OOB) controller, such as systems inventory.
- redfish_command: The redfish_command module performs Out-Of-Band (OOB) controller operations like log management and user management, and power operations such as system restart, power on and off.
- redfish_config: The redfish_config module performs OOB controller operations such as changing OOB configuration, or setting the BIOS configuration.
5.2. Redfish modules parameters
The parameters used for the Redfish modules are:
- redfish_info parameters:

| Parameter name | Description |
|---|---|
| baseuri | (Mandatory) - Base URI of OOB controller. |
| category | (Mandatory) - List of categories to execute on OOB controller. The default value is ["Systems"]. |
| command | (Mandatory) - List of commands to execute on OOB controller. |
| username | Username for authentication to OOB controller. |
| password | Password for authentication to OOB controller. |

- redfish_command parameters:

| Parameter name | Description |
|---|---|
| baseuri | (Mandatory) - Base URI of OOB controller. |
| category | (Mandatory) - List of categories to execute on OOB controller. The default value is ["Systems"]. |
| command | (Mandatory) - List of commands to execute on OOB controller. |
| username | Username for authentication to OOB controller. |
| password | Password for authentication to OOB controller. |

- redfish_config parameters:

| Parameter name | Description |
|---|---|
| baseuri | (Mandatory) - Base URI of OOB controller. |
| category | (Mandatory) - List of categories to execute on OOB controller. The default value is ["Systems"]. |
| command | (Mandatory) - List of commands to execute on OOB controller. |
| username | Username for authentication to OOB controller. |
| password | Password for authentication to OOB controller. |
| bios_attributes | BIOS attributes to update. |
5.3. Using the redfish_info module
The following example shows how to use the redfish_info module in a playbook to get information about the CPU inventory. For simplicity, the example uses the same host as the Ansible control host and managed host, thus executing the modules on the same host where the playbook is executed.
Prerequisites
- The redhat.rhel_mgmt collection is installed.
- The pyghmi library in the python3-pyghmi package is installed on the managed host. If you use localhost as the managed host, install the python3-pyghmi package on the host where you execute the playbook.
- OOB controller access details.
Procedure
- Create a new playbook.yml file that calls the redfish_info module; a sketch is shown after this procedure.
- Execute the playbook against localhost:
  # ansible-playbook playbook.yml
As a result, the output returns the CPU inventory details.
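A minimal sketch of such a playbook, assuming the baseuri, username, and password variables hold your OOB controller access details:

---
- name: Get CPU inventory
  hosts: localhost
  tasks:
    - name: Retrieve the CPU inventory from the OOB controller
      redhat.rhel_mgmt.redfish_info:
        baseuri: "{{ baseuri }}"
        username: "{{ username }}"
        password: "{{ password }}"
        category: Systems
        command: GetCpuInventory
      register: result

    - name: Print the collected CPU inventory
      ansible.builtin.debug:
        msg: "{{ result }}"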
5.4. Using the redfish_command module
The following example shows how to use the redfish_command module in a playbook to turn on a system. For simplicity, the example uses the same host as the Ansible control host and managed host, thus executing the modules on the same host where the playbook is executed.
Prerequisites
- The redhat.rhel_mgmt collection is installed.
- The pyghmi library in the python3-pyghmi package is installed on the managed host. If you use localhost as the managed host, install the python3-pyghmi package on the host where you execute the playbook.
- OOB controller access details.
Procedure
- Create a new playbook.yml file that calls the redfish_command module; a sketch is shown after this procedure.
- Execute the playbook against localhost:
  # ansible-playbook playbook.yml
As a result, the system powers on.
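A minimal sketch of such a playbook, assuming the baseuri, username, and password variables hold your OOB controller access details:

---
- name: Power on a system through its OOB controller
  hosts: localhost
  tasks:
    - name: Power on the system
      redhat.rhel_mgmt.redfish_command:
        baseuri: "{{ baseuri }}"
        username: "{{ username }}"
        password: "{{ password }}"
        category: Systems
        command: PowerOn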
5.5. Using the redfish_config module
The following example shows how to use the redfish_config module in a playbook to configure a system to boot with UEFI. For simplicity, the example uses the same host as the Ansible control host and managed host, thus executing the modules on the same host where the playbook is executed.
Prerequisites
- The redhat.rhel_mgmt collection is installed.
- The pyghmi library in the python3-pyghmi package is installed on the managed host. If you use localhost as the managed host, install the python3-pyghmi package on the host where you execute the playbook.
- OOB controller access details.
Procedure
- Create a new playbook.yml file that calls the redfish_config module; a sketch is shown after this procedure.
- Execute the playbook against localhost:
  # ansible-playbook playbook.yml
As a result, the system boot mode is set to UEFI.
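A minimal sketch of such a playbook, assuming the baseuri, username, and password variables hold your OOB controller access details; note that BIOS attribute names and values are vendor-specific:

---
- name: Configure the system to boot with UEFI
  hosts: localhost
  tasks:
    - name: Set the boot mode BIOS attribute to UEFI
      redhat.rhel_mgmt.redfish_config:
        baseuri: "{{ baseuri }}"
        username: "{{ username }}"
        password: "{{ password }}"
        category: Systems
        command: SetBiosAttributes
        bios_attributes:
          BootMode: Uefi   # hypothetical attribute name; check your hardware documentation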
Chapter 6. Configuring kernel parameters permanently by using the kernel_settings RHEL System Role
As an experienced user with good knowledge of Red Hat Ansible, you can use the kernel_settings role to configure kernel parameters on multiple clients at once. This solution:
- Provides a friendly interface with efficient input setting.
- Keeps all intended kernel parameters in one place.
After you run the kernel_settings role from the control machine, the kernel parameters are applied to the managed systems immediately and persist across reboots.
Note that RHEL System Roles delivered over RHEL channels are available to RHEL customers as an RPM package in the default AppStream repository. RHEL System Roles are also available as a collection to customers with Ansible subscriptions over Ansible Automation Hub.
6.1. Introduction to the kernel_settings role
RHEL System Roles is a set of roles that provide a consistent configuration interface to remotely manage multiple systems.
The kernel_settings System Role was introduced for automated configuration of the kernel. The rhel-system-roles package contains this System Role, and also the reference documentation.
To apply the kernel parameters on one or more systems in an automated fashion, use the kernel_settings role with one or more of its role variables of your choice in a playbook. A playbook is a list of one or more plays that are human-readable, and are written in the YAML format.
You can use an inventory file to define a set of systems that you want Ansible to configure according to the playbook.
With the kernel_settings role you can configure:
- The kernel parameters using the kernel_settings_sysctl role variable
- Various kernel subsystems, hardware devices, and device drivers using the kernel_settings_sysfs role variable
- The CPU affinity for the systemd service manager and processes it forks using the kernel_settings_systemd_cpu_affinity role variable
- The kernel memory subsystem transparent hugepages using the kernel_settings_transparent_hugepages and kernel_settings_transparent_hugepages_defrag role variables
6.2. Applying selected kernel parameters using the kernel_settings role
Follow these steps to prepare and apply an Ansible playbook to remotely configure kernel parameters with persisting effect on multiple managed operating systems.
Prerequisites
- You have root permissions.
- Entitled by your RHEL subscription, you installed the ansible-core and rhel-system-roles packages on the control machine.
- An inventory of managed hosts is present on the control machine and Ansible is able to connect to them.
Procedure
- Optionally, review the inventory file for illustration purposes:
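  A minimal sketch of such an inventory, with hypothetical hostnames and a [testingservers] group:

  [testingservers]
  server1.example.com
  server2.example.com

  [webservers]
  web1.example.com
  web2.example.com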
  The file defines the [testingservers] group and other groups. It allows you to run Ansible more effectively against a specific set of systems.

- Create a configuration file to set defaults and privilege escalation for Ansible operations.
  - Create a new file and open it in a text editor, for example:
    # vi /home/jdoe/<ansible_project_name>/ansible.cfg

  - Insert content like the sketch shown below into the file.
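    A minimal sketch of such an ansible.cfg, assuming the inventory file sits in the same project directory:

    [defaults]
    inventory = ./inventory

    [privilege_escalation]
    become = true
    become_method = sudo
    become_user = root
    become_ask_pass = true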
    The [defaults] section specifies a path to the inventory file of managed hosts. The [privilege_escalation] section defines that user privileges be shifted to root on the specified managed hosts. This is necessary for successful configuration of kernel parameters. When the Ansible playbook is run, you will be prompted for the user password. The user automatically switches to root by means of sudo after connecting to a managed host.
- Create an Ansible playbook that uses the kernel_settings role.

  - Create a new YAML file and open it in a text editor, for example:

    # vi /home/jdoe/<ansible_project_name>/kernel-roles.yml

    This file represents a playbook and usually contains an ordered list of tasks, also called plays, that are run against specific managed hosts selected from your inventory file.

  - Insert content like the sketch shown below into the file.
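    A sketch of such a playbook; the four example parameters are assumptions chosen to match the kernel_settings role variables described in Section 6.1 and the changed=4 recap mentioned later:

    ---
    - name: Configure kernel settings
      hosts: testingservers
      vars:
        kernel_settings_sysctl:
          - name: fs.file-max
            value: 400000
          - name: kernel.threads-max
            value: 65536
        kernel_settings_sysfs:
          - name: /sys/class/net/lo/mtu
            value: 65000
        kernel_settings_transparent_hugepages: madvise
      roles:
        - rhel-system-roles.kernel_settings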
    The name key is optional. It associates an arbitrary string with the play as a label and identifies what the play is for. The hosts key in the play specifies the hosts against which the play is run. The value or values for this key can be provided as individual names of managed hosts or as groups of hosts as defined in the inventory file.

    The vars section represents a list of variables containing selected kernel parameter names and values to which they have to be set.

    The roles key specifies what system role is going to configure the parameters and values mentioned in the vars section.

    Note: You can modify the kernel parameters and their values in the playbook to fit your needs.
- Optionally, verify that the syntax in your play is correct:

  # ansible-playbook --syntax-check kernel-roles.yml
  playbook: kernel-roles.yml

  This example shows the successful verification of a playbook.
- Execute your playbook.

  Before Ansible runs your playbook, you are prompted for your password so that a user on managed hosts can be switched to root, which is necessary for configuring kernel parameters.

  The recap section shows that the play finished successfully (failed=0) for all managed hosts, and that 4 kernel parameters have been applied (changed=4).

- Restart your managed hosts and check the affected kernel parameters to verify that the changes have been applied and persist across reboots.
Chapter 7. Using the rhc System Role to register the system
The rhc RHEL System Role enables administrators to automate the registration of multiple systems with Red Hat Subscription Management (RHSM) and Satellite servers. The role also supports Insights-related configuration and management tasks by using Ansible.
7.1. Introduction to the rhc System Role
RHEL System Roles is a set of roles that provides a consistent configuration interface to remotely manage multiple systems. The remote host configuration (rhc) System Role enables administrators to easily register RHEL systems to Red Hat Subscription Management (RHSM) and Satellite servers. By default, when you register a system by using the rhc System Role, the system is connected to Insights. Additionally, with the rhc System Role, you can:
- Configure connections to Red Hat Insights
- Enable and disable repositories
- Configure the proxy to use for the connection
- Configure Insights remediations and auto updates
- Set the release of the system
- Configure insights tags
7.2. Registering a system by using the rhc System Role
You can register your system to Red Hat by using the rhc RHEL System Role. By default, the rhc RHEL System Role connects the system to Red Hat Insights when you register it.
Prerequisites
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- The managed nodes or groups of managed nodes on which you want to run this playbook are listed in the Ansible inventory file.
Procedure
- Create a vault to save the sensitive information:

  $ ansible-vault create secrets.yml
  New Vault password: password
  Confirm New Vault password: password

  The ansible-vault create command creates an encrypted vault file and opens it in an editor. Enter the sensitive data you want to save in the vault, for example:

  activationKey: activation_key
  username: username
  password: password

  Save the changes, and close the editor. Ansible encrypts the data in the vault. You can later edit the data in the vault by using the ansible-vault edit secrets.yml command.

- Optional: Display the vault content:

  $ ansible-vault view secrets.yml

- Create a playbook file, for example ~/registration.yml, and use one of the following options depending on the action you want to perform:
  - To register by using an activation key and organization ID (recommended), use the first sketch shown after this procedure.
  - To register by using a username and password, use the second sketch shown after this procedure.
- Run the playbook:

  # ansible-playbook ~/registration.yml --ask-vault-pass
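Two sketches of such registration playbooks, assuming the rhc role's rhc_auth and rhc_organization variables and the vault created above; replace the host and organization ID with your own values:

---
# Option 1: register by using an activation key and organization ID (recommended)
- name: Register by using an activation key
  hosts: managed-node-01.example.com
  vars_files:
    - secrets.yml
  roles:
    - role: rhel-system-roles.rhc
  vars:
    rhc_auth:
      activation_keys:
        keys: ["{{ activationKey }}"]
    rhc_organization: organizationID

---
# Option 2: register by using a username and password
- name: Register by using a username and password
  hosts: managed-node-01.example.com
  vars_files:
    - secrets.yml
  roles:
    - role: rhel-system-roles.rhc
  vars:
    rhc_auth:
      login:
        username: "{{ username }}"
        password: "{{ password }}"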
7.3. Registering a system with Satellite by using the rhc System Role
When organizations use Satellite to manage systems, it is necessary to register the system through Satellite. You can remotely register your system with Satellite by using the rhc RHEL System Role.
Prerequisites
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- The managed nodes or groups of managed nodes on which you want to run this playbook are listed in the Ansible inventory file.
Procedure
- Create a vault to save the sensitive information:

  $ ansible-vault create secrets.yml
  New Vault password: password
  Confirm New Vault password: password

  The ansible-vault create command creates an encrypted file and opens it in an editor. Enter the sensitive data you want to save in the vault, for example:

  activationKey: activation_key

  Save the changes, and close the editor. Ansible encrypts the data in the vault. You can later edit the data in the vault by using the ansible-vault edit secrets.yml command.

- Optional: Display the vault content:

  $ ansible-vault view secrets.yml

- Create a playbook file, for example ~/registration-sat.yml. Use text like the sketch shown after this procedure in ~/registration-sat.yml to register the system by using an activation key and organization ID.

- Run the playbook:
  # ansible-playbook ~/registration-sat.yml --ask-vault-pass
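A sketch of such a Satellite registration playbook; the rhc_server and rhc_baseurl values are hypothetical placeholders for your Satellite host, and the exact variable layout should be checked against the rhc role's README:

---
- name: Register to a Satellite server by using an activation key
  hosts: managed-node-01.example.com
  vars_files:
    - secrets.yml
  roles:
    - role: rhel-system-roles.rhc
  vars:
    rhc_auth:
      activation_keys:
        keys: ["{{ activationKey }}"]
    rhc_organization: organizationID
    # Hypothetical Satellite endpoints; replace with your Satellite server
    rhc_server:
      hostname: satellite.example.com
      port: 443
      prefix: /rhsm
    rhc_baseurl: https://satellite.example.com/pulp/content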
7.4. Disabling the connection to Insights after the registration by using the rhc System Role
When you register a system by using the rhc RHEL System Role, the role, by default, enables the connection to Red Hat Insights. If not required, you can disable it by using the rhc System Role.
Prerequisites
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- The managed nodes or groups of managed nodes on which you want to run this playbook are listed in the Ansible inventory file.
- The system is already registered.
Procedure
- Create a playbook file, for example ~/dis-insights.yml, and add content like the sketch shown after this procedure to it.
- Run the playbook:
  # ansible-playbook ~/dis-insights.yml
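A minimal sketch of such a playbook, assuming the rhc role's rhc_insights variable:

---
- name: Disable the connection to Red Hat Insights
  hosts: managed-node-01.example.com
  roles:
    - role: rhel-system-roles.rhc
  vars:
    rhc_insights:
      state: absent   # remove the Insights connection while keeping the system registered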
7.5. Enabling repositories by using the rhc System Role
You can remotely enable or disable repositories on managed nodes by using the rhc RHEL System Role.
Prerequisites
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- The managed nodes or groups of managed nodes on which you want to run this playbook are listed in the Ansible inventory file.
- You have details of the repositories which you want to enable or disable on the managed nodes.
- You have registered the system.
Procedure
- Create a playbook file, for example ~/configure-repos.yml:
  - To enable a repository, use the first sketch shown after this procedure.
  - To disable a repository, use the second sketch shown after this procedure.
- Run the playbook:

  # ansible-playbook ~/configure-repos.yml
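Two sketches of such playbooks, assuming the rhc role's rhc_repositories variable; RepositoryName is a placeholder for the repository ID you want to manage:

---
# Enable a repository on the managed nodes
- name: Enable a repository
  hosts: managed-node-01.example.com
  roles:
    - role: rhel-system-roles.rhc
  vars:
    rhc_repositories:
      - name: RepositoryName
        state: enabled

---
# Disable a repository on the managed nodes
- name: Disable a repository
  hosts: managed-node-01.example.com
  roles:
    - role: rhel-system-roles.rhc
  vars:
    rhc_repositories:
      - name: RepositoryName
        state: disabled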
7.6. Setting release versions by using the rhc system role
You can limit the system to use only repositories for a particular minor RHEL version instead of the latest one. This way, you can lock your system to a specific minor RHEL version.
Prerequisites
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- The managed nodes or groups of managed nodes on which you want to run this playbook are listed in the Ansible inventory file.
- You know the minor RHEL version to which you want to lock the system. Note that you can only lock the system to the RHEL minor version that the host currently runs or a later minor version.
- You have registered the system.
Procedure
- Create a playbook file, for example ~/release.yml, with content like the sketch shown after this procedure.
- Run the playbook:
  # ansible-playbook ~/release.yml
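A minimal sketch of such a playbook, assuming the rhc role's rhc_release variable; the minor version shown is a placeholder:

---
- name: Lock the system to a specific minor RHEL release
  hosts: managed-node-01.example.com
  roles:
    - role: rhel-system-roles.rhc
  vars:
    rhc_release: "8.6"   # hypothetical minor release; use the version you need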
7.7. Using a proxy server when registering the host by using the rhc System Role
If your security restrictions allow access to the Internet only through a proxy server, you can specify the proxy’s settings in the playbook when you register the system using the rhc RHEL System Role.
Prerequisites
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- The managed nodes or groups of managed nodes on which you want to run this playbook are listed in the Ansible inventory file.
Procedure
- Create a vault to save the sensitive information:

  $ ansible-vault create secrets.yml
  New Vault password: password
  Confirm New Vault password: password

  The ansible-vault create command creates an encrypted file and opens it in an editor. Enter the sensitive data you want to save in the vault, for example:

  username: username
  password: password
  proxy_username: proxyusername
  proxy_password: proxypassword

  Save the changes, and close the editor. Ansible encrypts the data in the vault. You can later edit the data in the vault by using the ansible-vault edit secrets.yml command.

- Optional: Display the vault content:

  $ ansible-vault view secrets.yml

- Create a playbook file, for example ~/configure-proxy.yml:
  - To register to the RHEL customer portal by using a proxy, use the first sketch shown after this procedure.
  - To remove the proxy server from the configuration of the Red Hat Subscription Manager service, use the second sketch shown after this procedure.
- Run the playbook:

  # ansible-playbook ~/configure-proxy.yml --ask-vault-pass
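Two sketches of such playbooks, assuming the rhc role's rhc_auth and rhc_proxy variables and the vault created above; the proxy hostname and port are placeholders, and the exact syntax for clearing the proxy should be checked against the role README:

---
# Register through a proxy server
- name: Register by using a proxy
  hosts: managed-node-01.example.com
  vars_files:
    - secrets.yml
  roles:
    - role: rhel-system-roles.rhc
  vars:
    rhc_auth:
      login:
        username: "{{ username }}"
        password: "{{ password }}"
    rhc_proxy:
      hostname: proxy.example.com
      port: 3128
      username: "{{ proxy_username }}"
      password: "{{ proxy_password }}"

---
# Remove the proxy server from the Red Hat Subscription Manager configuration
- name: Remove the proxy configuration
  hosts: managed-node-01.example.com
  roles:
    - role: rhel-system-roles.rhc
  vars:
    rhc_proxy:
      state: absent   # check the rhc role README for the exact removal syntax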
7.8. Disabling auto updates of Insights rules by using the rhc System Role
By default, when you connect your system to Red Hat Insights, automatic updates of its collection rules are enabled. You can disable these automatic updates by using the rhc RHEL System Role.
If you disable this feature, you risk using outdated rule definition files and not getting the most recent validation updates.
Prerequisites
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- The managed nodes or groups of managed nodes on which you want to run this playbook are listed in the Ansible inventory file.
- You have registered the system.
Procedure
- Create a vault to save the sensitive information:

  $ ansible-vault create secrets.yml
  New Vault password: password
  Confirm New Vault password: password

  The ansible-vault create command creates an encrypted file and opens it in an editor. Enter the sensitive data you want to save in the vault, for example:

  username: username
  password: password

  Save the changes, and close the editor. Ansible encrypts the data in the vault. You can later edit the data in the vault by using the ansible-vault edit secrets.yml command.

- Optional: Display the vault content:

  $ ansible-vault view secrets.yml

- Create a playbook file, for example ~/auto-update.yml, and add content like the sketch shown after this procedure to it.

- Run the playbook:
  # ansible-playbook ~/auto-update.yml --ask-vault-pass
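A minimal sketch of such a playbook, assuming the rhc role's rhc_insights variable and the vault created above:

---
- name: Disable automatic updates of Insights collection rules
  hosts: managed-node-01.example.com
  vars_files:
    - secrets.yml
  roles:
    - role: rhel-system-roles.rhc
  vars:
    rhc_auth:
      login:
        username: "{{ username }}"
        password: "{{ password }}"
    rhc_insights:
      autoupdate: false   # stop automatic collection rule updates
      state: present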
7.9. Disabling Insights remediations by using the rhc RHEL System Role
You can configure systems to automatically update the dynamic configuration by using the rhc RHEL System Role. When you connect your system to Red Hat Insights, it is enabled by default. You can disable it, if not required.
Enabling remediation with the rhc System Role ensures your system is ready to be remediated when connected directly to Red Hat. For systems connected to a Satellite, or Capsule, enabling remediation must be achieved differently. For more information about Red Hat Insights remediations, see Red Hat Insights Remediations Guide.
Prerequisites
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- The managed nodes or groups of managed nodes on which you want to run this playbook are listed in the Ansible inventory file.
- You have Insights remediations enabled.
- You have registered the system.
Procedure
- To disable the remediation, create a playbook file, for example ~/remediation.yml, with content like the sketch shown after this procedure.
- Run the playbook:
  # ansible-playbook ~/remediation.yml
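A minimal sketch of such a playbook, assuming the rhc role's rhc_insights.remediation setting; check the role README for the exact values supported by your version:

---
- name: Disable Insights remediation
  hosts: managed-node-01.example.com
  roles:
    - role: rhel-system-roles.rhc
  vars:
    rhc_insights:
      remediation: absent   # set to present to re-enable remediation
      state: present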
7.10. Configuring Insights tags by using the rhc system role
You can use tags for system filtering and grouping. You can also customize tags based on the requirements.
Prerequisites
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- The managed nodes or groups of managed nodes on which you want to run this playbook are listed in the Ansible inventory file.
Procedure
- Create a vault to save the sensitive information:

  $ ansible-vault create secrets.yml
  New Vault password: password
  Confirm New Vault password: password

  The ansible-vault create command creates an encrypted file and opens it in an editor. Enter the sensitive data you want to save in the vault, for example:

  username: username
  password: password

  Save the changes, and close the editor. Ansible encrypts the data in the vault. You can later edit the data in the vault by using the ansible-vault edit secrets.yml command.

- Optional: Display the vault content:

  $ ansible-vault view secrets.yml

- Create a playbook file, for example ~/tags.yml, and add content like the sketch shown after this procedure to it.

- Run the playbook:
  # ansible-playbook ~/tags.yml --ask-vault-pass
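A minimal sketch of such a playbook, assuming the rhc role's rhc_insights.tags variable and the vault created above; the tag keys and values are hypothetical:

---
- name: Configure Insights tags
  hosts: managed-node-01.example.com
  vars_files:
    - secrets.yml
  roles:
    - role: rhel-system-roles.rhc
  vars:
    rhc_auth:
      login:
        username: "{{ username }}"
        password: "{{ password }}"
    rhc_insights:
      tags:
        group: group-name-value       # hypothetical tag key and value
        location: location-name-value
      state: present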
7.11. Unregistering a system by using the rhc System Role
You can unregister the system from Red Hat if you no longer need the subscription service.
Prerequisites
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- The managed nodes or groups of managed nodes on which you want to run this playbook are listed in the Ansible inventory file.
- The system is already registered.
Procedure
- To unregister, create a playbook file, for example ~/unregister.yml, and add content like the sketch shown after this procedure to it.
- Run the playbook:
  # ansible-playbook ~/unregister.yml
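A minimal sketch of such a playbook, assuming the rhc role's rhc_state variable:

---
- name: Unregister the system
  hosts: managed-node-01.example.com
  roles:
    - role: rhel-system-roles.rhc
  vars:
    rhc_state: absent   # unregister from Red Hat and remove the subscription data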
Chapter 8. Configuring network settings by using RHEL System Roles
Administrators can automate network-related configuration and management tasks by using the network RHEL System Role.
8.1. Configuring an Ethernet connection with a static IP address by using the network RHEL System Role with an interface name
You can remotely configure an Ethernet connection using the network RHEL System Role.
For example, the procedure below creates a NetworkManager connection profile for the enp7s0 device with the following settings:
- A static IPv4 address - 192.0.2.1 with a /24 subnet mask
- A static IPv6 address - 2001:db8:1::1 with a /64 subnet mask
- An IPv4 default gateway - 192.0.2.254
- An IPv6 default gateway - 2001:db8:1::fffe
- An IPv4 DNS server - 192.0.2.200
- An IPv6 DNS server - 2001:db8:1::ffbb
- A DNS search domain - example.com
Perform this procedure on the Ansible control node.
Prerequisites
- You have prepared the control node and the managed nodes
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- The managed nodes or groups of managed nodes on which you want to run this playbook are listed in the Ansible inventory file.
- A physical or virtual Ethernet device exists in the server’s configuration.
- The managed nodes use NetworkManager to configure the network.
Procedure
- Create a playbook file, for example ~/ethernet-static-IP.yml, with content like the sketch shown after this procedure.
- Run the playbook:
  # ansible-playbook ~/ethernet-static-IP.yml
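A sketch of such a playbook using the network role's network_connections variable with the settings listed above; adjust the host and interface name to your environment:

---
- name: Configure the network
  hosts: managed-node-01.example.com
  tasks:
    - name: Configure an Ethernet connection with a static IP on enp7s0
      ansible.builtin.include_role:
        name: rhel-system-roles.network
      vars:
        network_connections:
          - name: enp7s0
            interface_name: enp7s0
            type: ethernet
            autoconnect: yes
            ip:
              address:
                - 192.0.2.1/24
                - 2001:db8:1::1/64
              gateway4: 192.0.2.254
              gateway6: 2001:db8:1::fffe
              dns:
                - 192.0.2.200
                - 2001:db8:1::ffbb
              dns_search:
                - example.com
            state: up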
8.2. Configuring an Ethernet connection with a static IP address by using the network RHEL System Role with a device path
You can remotely configure an Ethernet connection using the network RHEL System Role.
You can identify the device path with the following command:
# udevadm info /sys/class/net/<device_name> | grep ID_PATH=
For example, the procedure below creates a NetworkManager connection profile with the following settings for the device that matches the PCI ID 0000:00:0[1-3].0 expression, but not 0000:00:02.0:
-
A static IPv4 address -
192.0.2.1with a/24subnet mask -
A static IPv6 address -
2001:db8:1::1with a/64subnet mask -
An IPv4 default gateway -
192.0.2.254 -
An IPv6 default gateway -
2001:db8:1::fffe -
An IPv4 DNS server -
192.0.2.200 -
An IPv6 DNS server -
2001:db8:1::ffbb -
A DNS search domain -
example.com
Perform this procedure on the Ansible control node.
Prerequisites
- You have prepared the control node and the managed nodes
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
-
The account you use to connect to the managed nodes has
sudopermissions on them. - The managed nodes or groups of managed nodes on which you want to run this playbook are listed in the Ansible inventory file.
- A physical or virtual Ethernet device exists in the server’s configuration.
- The managed nodes use NetworkManager to configure the network.
Procedure
Create a playbook file, for example
~/ethernet-static-IP.yml, with the following content:
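A sketch under the same assumptions as the previous procedure; the connection profile uses a match condition on the device path instead of an interface name, and the profile name example is a placeholder:

---
- name: Configure the network
  hosts: managed-node-01.example.com
  tasks:
    - name: Configure an Ethernet connection with a static IP address
      include_role:
        name: rhel-system-roles.network
      vars:
        network_connections:
          - name: example
            match:
              path:
                - "pci-0000:00:0[1-3].0"
                - "&!pci-0000:00:02.0"
            type: ethernet
            autoconnect: yes
            ip:
              address:
                - 192.0.2.1/24
                - 2001:db8:1::1/64
              gateway4: 192.0.2.254
              gateway6: 2001:db8:1::fffe
              dns:
                - 192.0.2.200
                - 2001:db8:1::ffbb
              dns_search:
                - example.com
            state: up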
The match parameter in this example defines that Ansible applies the play to devices that match PCI ID 0000:00:0[1-3].0, but not 0000:00:02.0. For further details about special modifiers and wild cards you can use, see the match parameter description in the /usr/share/ansible/roles/rhel-system-roles.network/README.md file.

Run the playbook:
# ansible-playbook ~/ethernet-static-IP.yml
8.3. Configuring an Ethernet connection with a dynamic IP address by using the network RHEL System Role with an interface name
You can remotely configure an Ethernet connection using the network RHEL System Role. For connections with dynamic IP address settings, NetworkManager requests the IP settings for the connection from a DHCP server.
Perform this procedure on the Ansible control node.
Prerequisites
- You have prepared the control node and the managed nodes
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
-
The account you use to connect to the managed nodes has
sudopermissions on them. - The managed nodes or groups of managed nodes on which you want to run this playbook are listed in the Ansible inventory file.
- A physical or virtual Ethernet device exists in the server’s configuration.
- A DHCP server is available in the network
- The managed nodes use NetworkManager to configure the network.
Procedure
Create a playbook file, for example
~/ethernet-dynamic-IP.yml, with the following content:
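A minimal sketch of such a playbook, with illustrative play, host, and interface names:

---
- name: Configure the network
  hosts: managed-node-01.example.com
  tasks:
    - name: Configure an Ethernet connection with a dynamic IP address
      include_role:
        name: rhel-system-roles.network
      vars:
        network_connections:
          - name: enp7s0
            interface_name: enp7s0
            type: ethernet
            autoconnect: yes
            ip:
              dhcp4: yes
              auto6: yes
            state: up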
Run the playbook:

# ansible-playbook ~/ethernet-dynamic-IP.yml
8.4. Configuring an Ethernet connection with a dynamic IP address by using the network RHEL System Role with a device path
You can remotely configure an Ethernet connection using the network RHEL System Role. For connections with dynamic IP address settings, NetworkManager requests the IP settings for the connection from a DHCP server.
You can identify the device path with the following command:
# udevadm info /sys/class/net/<device_name> | grep ID_PATH=
Perform this procedure on the Ansible control node.
Prerequisites
- You have prepared the control node and the managed nodes
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
-
The account you use to connect to the managed nodes has
sudopermissions on them. - The managed nodes or groups of managed nodes on which you want to run this playbook are listed in the Ansible inventory file.
- A physical or virtual Ethernet device exists in the server’s configuration.
- A DHCP server is available in the network.
- The managed hosts use NetworkManager to configure the network.
Procedure
Create a playbook file, for example
~/ethernet-dynamic-IP.yml, with the following content:
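A sketch under the same assumptions, combining the device path match condition with dynamic IP settings:

---
- name: Configure the network
  hosts: managed-node-01.example.com
  tasks:
    - name: Configure an Ethernet connection with a dynamic IP address
      include_role:
        name: rhel-system-roles.network
      vars:
        network_connections:
          - name: example
            match:
              path:
                - "pci-0000:00:0[1-3].0"
                - "&!pci-0000:00:02.0"
            type: ethernet
            autoconnect: yes
            ip:
              dhcp4: yes
              auto6: yes
            state: up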
The match parameter in this example defines that Ansible applies the play to devices that match PCI ID 0000:00:0[1-3].0, but not 0000:00:02.0. For further details about special modifiers and wild cards you can use, see the match parameter description in the /usr/share/ansible/roles/rhel-system-roles.network/README.md file.

Run the playbook:
# ansible-playbook ~/ethernet-dynamic-IP.yml
8.5. Configuring VLAN tagging by using the network RHEL System Role
You can use the network RHEL System Role to configure VLAN tagging. This example adds an Ethernet connection and a VLAN with ID 10 on top of this Ethernet connection. As the child device, the VLAN connection contains the IP, default gateway, and DNS configurations.
Depending on your environment, adjust the play accordingly. For example:
-
To use the VLAN as a port in other connections, such as a bond, omit the
ipattribute, and set the IP configuration in the child configuration. -
To use team, bridge, or bond devices in the VLAN, adapt the
interface_nameandtypeattributes of the ports you use in the VLAN.
Perform this procedure on the Ansible control node.
Prerequisites
- You have prepared the control node and the managed nodes
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
-
The account you use to connect to the managed nodes has
sudopermissions on them. - The managed nodes or groups of managed nodes on which you want to run this playbook are listed in the Ansible inventory file.
Procedure
Create a playbook file, for example
~/vlan-ethernet.yml, with the following content:
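A minimal sketch of such a playbook; the host entry, the enp1s0 device name, and the IP values are illustrative, and the vlan option layout can differ slightly between role versions:

---
- name: Configure VLAN tagging
  hosts: managed-node-01.example.com
  tasks:
    - name: Configure a VLAN with ID 10 on top of an Ethernet connection
      include_role:
        name: rhel-system-roles.network
      vars:
        network_connections:
          # Ethernet profile for the underlying device, without IP configuration
          - name: enp1s0
            type: ethernet
            interface_name: enp1s0
            autoconnect: yes
            ip:
              dhcp4: no
              auto6: no
            state: up
          # VLAN profile that carries the IP, gateway, and DNS configuration
          - name: enp1s0.10
            type: vlan
            vlan:
              id: 10
            parent: enp1s0
            autoconnect: yes
            ip:
              address:
                - 192.0.2.1/24
                - 2001:db8:1::1/64
              gateway4: 192.0.2.254
              gateway6: 2001:db8:1::fffe
              dns:
                - 192.0.2.200
                - 2001:db8:1::ffbb
              dns_search:
                - example.com
            state: up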
The parent attribute in the VLAN profile configures the VLAN to operate on top of the enp1s0 device.

Run the playbook:
# ansible-playbook ~/vlan-ethernet.yml
8.6. Configuring a network bridge by using the network RHEL System Role
You can use the network RHEL System Role to configure a Linux bridge. For example, use it to configure a network bridge that uses two Ethernet devices, and sets IPv4 and IPv6 addresses, default gateways, and DNS configuration.
Set the IP configuration on the bridge and not on the ports of the Linux bridge.
Perform this procedure on the Ansible control node.
Prerequisites
- You have prepared the control node and the managed nodes
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
-
The account you use to connect to the managed nodes has
sudopermissions on them. - The managed nodes or groups of managed nodes on which you want to run this playbook are listed in the Ansible inventory file.
- Two or more physical or virtual network devices are installed on the server.
Procedure
Create a playbook file, for example
~/bridge-ethernet.yml, with the following content:
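A minimal sketch of such a playbook; the host entry, the bridge0 name, the port devices enp7s0 and enp8s0, and the IP values are placeholders, and older role versions name the port options master and slave_type instead of controller and port_type:

---
- name: Configure a network bridge
  hosts: managed-node-01.example.com
  tasks:
    - name: Configure a bridge with two Ethernet ports
      include_role:
        name: rhel-system-roles.network
      vars:
        network_connections:
          # Bridge profile that carries the IP configuration
          - name: bridge0
            type: bridge
            interface_name: bridge0
            ip:
              address:
                - 192.0.2.1/24
                - 2001:db8:1::1/64
              gateway4: 192.0.2.254
              gateway6: 2001:db8:1::fffe
              dns:
                - 192.0.2.200
                - 2001:db8:1::ffbb
              dns_search:
                - example.com
            state: up
          # Port profiles without IP configuration
          - name: bridge0-port1
            interface_name: enp7s0
            type: ethernet
            controller: bridge0
            port_type: bridge
            state: up
          - name: bridge0-port2
            interface_name: enp8s0
            type: ethernet
            controller: bridge0
            port_type: bridge
            state: up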
Run the playbook:

# ansible-playbook ~/bridge-ethernet.yml
8.7. Configuring a network bond by using the network RHEL System Role
You can use the network RHEL System Role to configure a Linux bond. For example, use it to configure a network bond in active-backup mode that uses two Ethernet devices and sets IPv4 and IPv6 addresses, default gateways, and the DNS configuration.
Set the IP configuration on the bond and not on the ports of the Linux bond.
Perform this procedure on the Ansible control node.
Prerequisites
- You have prepared the control node and the managed nodes
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
-
The account you use to connect to the managed nodes has
sudopermissions on them. - The managed nodes or groups of managed nodes on which you want to run this playbook are listed in the Ansible inventory file.
- Two or more physical or virtual network devices are installed on the server.
Procedure
Create a playbook file, for example
~/bond-ethernet.yml, with the following content:
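A minimal sketch of such a playbook, with the same caveats as the bridge example about placeholder names and the controller option name:

---
- name: Configure a network bond
  hosts: managed-node-01.example.com
  tasks:
    - name: Configure a bond in active-backup mode with two Ethernet ports
      include_role:
        name: rhel-system-roles.network
      vars:
        network_connections:
          # Bond profile that carries the IP configuration
          - name: bond0
            type: bond
            interface_name: bond0
            bond:
              mode: active-backup
            ip:
              address:
                - 192.0.2.1/24
                - 2001:db8:1::1/64
              gateway4: 192.0.2.254
              gateway6: 2001:db8:1::fffe
              dns:
                - 192.0.2.200
                - 2001:db8:1::ffbb
              dns_search:
                - example.com
            state: up
          # Port profiles without IP configuration
          - name: bond0-port1
            interface_name: enp7s0
            type: ethernet
            controller: bond0
            state: up
          - name: bond0-port2
            interface_name: enp8s0
            type: ethernet
            controller: bond0
            state: up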
Run the playbook:

# ansible-playbook ~/bond-ethernet.yml
8.8. Configuring an IPoIB connection by using the network RHEL System Role
You can use the network RHEL System Role to remotely create NetworkManager connection profiles for IP over InfiniBand (IPoIB) devices. For example, remotely add an InfiniBand connection for the mlx4_ib0 interface with the following settings by running an Ansible playbook:
-
An IPoIB device -
mlx4_ib0.8002 -
A partition key
p_key-0x8002 -
A static
IPv4address -192.0.2.1with a/24subnet mask -
A static
IPv6address -2001:db8:1::1with a/64subnet mask
Perform this procedure on the Ansible control node.
Prerequisites
- You have prepared the control node and the managed nodes
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
-
The account you use to connect to the managed nodes has
sudopermissions on them. - The managed nodes or groups of managed nodes on which you want to run this playbook are listed in the Ansible inventory file.
-
An InfiniBand device named
mlx4_ib0is installed in the managed nodes. - The managed nodes use NetworkManager to configure the network.
Procedure
Create a playbook file, for example
~/IPoIB.yml, with the following content:
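A minimal sketch of such a playbook; the host entry is illustrative, and the infiniband option layout may differ slightly between role versions:

---
- name: Configure an IPoIB connection
  hosts: managed-node-01.example.com
  tasks:
    - name: Configure IPoIB on mlx4_ib0 with partition key 0x8002
      include_role:
        name: rhel-system-roles.network
      vars:
        network_connections:
          # Profile for the parent InfiniBand device
          - name: mlx4_ib0
            interface_name: mlx4_ib0
            type: infiniband
          # IPoIB profile; with p_key set, no interface_name is defined
          - name: mlx4_ib0.8002
            type: infiniband
            autoconnect: yes
            infiniband:
              p_key: 0x8002
            parent: mlx4_ib0
            ip:
              address:
                - 192.0.2.1/24
                - 2001:db8:1::1/64
            state: up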
If you set a p_key parameter as in this example, do not set an interface_name parameter on the IPoIB device.

Run the playbook:
# ansible-playbook ~/IPoIB.yml
Verification
On the managed-node-01.example.com host, display the IP settings of the mlx4_ib0.8002 device:

Display the partition key (P_Key) of the mlx4_ib0.8002 device:

# cat /sys/class/net/mlx4_ib0.8002/pkey
0x8002

Display the mode of the mlx4_ib0.8002 device:

# cat /sys/class/net/mlx4_ib0.8002/mode
datagram
8.9. Routing traffic from a specific subnet to a different default gateway by using the network RHEL System Role
You can use policy-based routing to configure a different default gateway for traffic from certain subnets. For example, you can configure RHEL as a router that, by default, routes all traffic to Internet provider A using the default route. However, traffic received from the internal workstations subnet is routed to provider B.
To configure policy-based routing remotely and on multiple nodes, you can use the RHEL network System Role. Perform this procedure on the Ansible control node.
This procedure assumes the following network topology:
Prerequisites
- You have prepared the control node and the managed nodes
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
-
The account you use to connect to the managed nodes has
sudopermissions on the them. - The managed nodes or groups of managed nodes on which you want to run this playbook are listed in the Ansible inventory file.
-
The managed nodes use the
NetworkManager and firewalld services. The managed nodes that you want to configure have four network interfaces:
-
The
enp7s0interface is connected to the network of provider A. The gateway IP in the provider’s network is198.51.100.2, and the network uses a/30network mask. -
The
enp1s0interface is connected to the network of provider B. The gateway IP in the provider’s network is192.0.2.2, and the network uses a/30network mask. -
The
enp8s0interface is connected to the10.0.0.0/24subnet with internal workstations. -
The
enp9s0interface is connected to the203.0.113.0/24subnet with the company’s servers.
-
The
-
Hosts in the internal workstations subnet use
10.0.0.1as the default gateway. In the procedure, you assign this IP address to theenp8s0network interface of the router. -
Hosts in the server subnet use
203.0.113.1as the default gateway. In the procedure, you assign this IP address to theenp9s0network interface of the router.
Procedure
Create a playbook file, for example
~/pbr.yml, with the following content:
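A sketch of such a playbook follows; it assumes the topology described in the prerequisites, and the connection profile names, the route table number 5000, and the rule priority 5 are chosen to line up with the verification output below. The routing_rule and zone options may not be available in older role versions:

---
- name: Configure policy-based routing
  hosts: managed-node-01.example.com
  tasks:
    - name: Route traffic from the internal workstations subnet to provider B
      include_role:
        name: rhel-system-roles.network
      vars:
        network_connections:
          # Uplink to provider A (default route in the main routing table)
          - name: Provider-A
            interface_name: enp7s0
            type: ethernet
            autoconnect: yes
            ip:
              address:
                - 198.51.100.1/30
              gateway4: 198.51.100.2
              dns:
                - 198.51.100.200
            zone: external
            state: up
          # Uplink to provider B (default route only in table 5000)
          - name: Provider-B
            interface_name: enp1s0
            type: ethernet
            autoconnect: yes
            ip:
              address:
                - 192.0.2.1/30
              route:
                - network: 0.0.0.0
                  prefix: 0
                  gateway: 192.0.2.2
                  table: 5000
            zone: external
            state: up
          # Internal workstations subnet, with a rule that selects table 5000
          - name: Internal-Workstations
            interface_name: enp8s0
            type: ethernet
            autoconnect: yes
            ip:
              address:
                - 10.0.0.1/24
              route:
                - network: 10.0.0.0
                  prefix: 24
                  table: 5000
              routing_rule:
                - priority: 5
                  from: 10.0.0.0/24
                  table: 5000
            zone: trusted
            state: up
          # Server subnet
          - name: Servers
            interface_name: enp9s0
            type: ethernet
            autoconnect: yes
            ip:
              address:
                - 203.0.113.1/24
            zone: trusted
            state: up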
Run the playbook:

# ansible-playbook ~/pbr.yml
Verification
On a RHEL host in the internal workstation subnet:
Install the traceroute package:

# yum install traceroute

Use the traceroute utility to display the route to a host on the Internet:

# traceroute redhat.com
traceroute to redhat.com (209.132.183.105), 30 hops max, 60 byte packets
 1  10.0.0.1 (10.0.0.1) 0.337 ms 0.260 ms 0.223 ms
 2  192.0.2.1 (192.0.2.1) 0.884 ms 1.066 ms 1.248 ms
 ...

The output of the command displays that the router sends packets over
192.0.2.1, which is the network of provider B.
On a RHEL host in the server subnet:
Install the traceroute package:

# yum install traceroute

Use the traceroute utility to display the route to a host on the Internet:

# traceroute redhat.com
traceroute to redhat.com (209.132.183.105), 30 hops max, 60 byte packets
 1  203.0.113.1 (203.0.113.1) 2.179 ms 2.073 ms 1.944 ms
 2  198.51.100.2 (198.51.100.2) 1.868 ms 1.798 ms 1.549 ms
 ...

The output of the command displays that the router sends packets over
198.51.100.2, which is the network of provider A.
On the RHEL router that you configured using the RHEL System Role:
Display the rule list:
# ip rule list
0:      from all lookup local
5:      from 10.0.0.0/24 lookup 5000
32766:  from all lookup main
32767:  from all lookup default

By default, RHEL contains rules for the tables local, main, and default.

Display the routes in table 5000:

# ip route list table 5000
0.0.0.0/0 via 192.0.2.2 dev enp1s0 proto static metric 100
10.0.0.0/24 dev enp8s0 proto static scope link src 192.0.2.1 metric 102

Display the interfaces and firewall zones:

# firewall-cmd --get-active-zones
external
  interfaces: enp1s0 enp7s0
trusted
  interfaces: enp8s0 enp9s0

Verify that the external zone has masquerading enabled:
8.10. Configuring a static Ethernet connection with 802.1X network authentication by using the network RHEL System Role
Using the network RHEL System Role, you can automate the creation of an Ethernet connection that uses the 802.1X standard to authenticate the client. For example, remotely add an Ethernet connection for the enp1s0 interface with the following settings by running an Ansible playbook:
-
A static IPv4 address -
192.0.2.1with a/24subnet mask -
A static IPv6 address -
2001:db8:1::1with a/64subnet mask -
An IPv4 default gateway -
192.0.2.254 -
An IPv6 default gateway -
2001:db8:1::fffe -
An IPv4 DNS server -
192.0.2.200 -
An IPv6 DNS server -
2001:db8:1::ffbb -
A DNS search domain -
example.com -
802.1X network authentication using the
TLSExtensible Authentication Protocol (EAP)
Perform this procedure on the Ansible control node.
Prerequisites
- You have prepared the control node and the managed nodes
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
-
The account you use to connect to the managed nodes has
sudopermissions on them. - The managed nodes or groups of managed nodes on which you want to run this playbook are listed in the Ansible inventory file
- The network supports 802.1X network authentication.
- The managed nodes use NetworkManager.
The following files required for TLS authentication exist on the control node:
-
The client key is stored in the
/srv/data/client.keyfile. -
The client certificate is stored in the
/srv/data/client.crtfile. -
The Certificate Authority (CA) certificate is stored in the
/srv/data/ca.crtfile.
-
The client key is stored in the
Procedure
Create a playbook file, for example
~/enable-802.1x.yml, with the following content:
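A sketch of such a playbook follows; the identity, the private key password, the destination paths on the managed node, and the domain_suffix_match value are placeholders:

---
- name: Configure an Ethernet connection with 802.1X authentication
  hosts: managed-node-01.example.com
  tasks:
    - name: Copy the client key, client certificate, and CA certificate to the managed node
      copy:
        src: "{{ item.src }}"
        dest: "{{ item.dest }}"
        mode: "0600"
      loop:
        - { src: /srv/data/client.key, dest: /etc/pki/tls/private/client.key }
        - { src: /srv/data/client.crt, dest: /etc/pki/tls/certs/client.crt }
        - { src: /srv/data/ca.crt, dest: /etc/pki/ca-trust/source/anchors/ca.crt }
    - name: Configure enp1s0 with a static IP address and 802.1X
      include_role:
        name: rhel-system-roles.network
      vars:
        network_connections:
          - name: enp1s0
            type: ethernet
            autoconnect: yes
            ip:
              address:
                - 192.0.2.1/24
                - 2001:db8:1::1/64
              gateway4: 192.0.2.254
              gateway6: 2001:db8:1::fffe
              dns:
                - 192.0.2.200
                - 2001:db8:1::ffbb
              dns_search:
                - example.com
            ieee802_1x:
              identity: user_name
              eap: tls
              private_key: /etc/pki/tls/private/client.key
              private_key_password: password
              client_cert: /etc/pki/tls/certs/client.crt
              ca_cert: /etc/pki/ca-trust/source/anchors/ca.crt
              domain_suffix_match: example.com
            state: up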
Run the playbook:

# ansible-playbook ~/enable-802.1x.yml
8.11. Setting the default gateway on an existing connection by using the network RHEL System Role
You can use the network RHEL System Role to set the default gateway.
When you run a play that uses the network RHEL System Role, the system role overrides an existing connection profile with the same name if the settings do not match the values specified in the play. Therefore, always specify the whole configuration of the network connection profile in the play, even if, for example, the IP configuration already exists. Otherwise, the role resets these values to their defaults.
Depending on whether it already exists, the procedure creates or updates the enp1s0 connection profile with the following settings:
-
A static IPv4 address -
198.51.100.20with a/24subnet mask -
A static IPv6 address -
2001:db8:1::1with a/64subnet mask -
An IPv4 default gateway -
198.51.100.254 -
An IPv6 default gateway -
2001:db8:1::fffe -
An IPv4 DNS server -
198.51.100.200 -
An IPv6 DNS server -
2001:db8:1::ffbb -
A DNS search domain -
example.com
Perform this procedure on the Ansible control node.
Prerequisites
- You have prepared the control node and the managed nodes
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
-
The account you use to connect to the managed nodes has
sudopermissions on them. - The managed nodes or groups of managed nodes on which you want to run this playbook are listed in the Ansible inventory file.
Procedure
Create a playbook file, for example
~/ethernet-connection.yml, with the following content:
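A minimal sketch of such a playbook; the host entry is illustrative:

---
- name: Configure the network
  hosts: managed-node-01.example.com
  tasks:
    - name: Configure enp1s0 with a static IP address and default gateways
      include_role:
        name: rhel-system-roles.network
      vars:
        network_connections:
          - name: enp1s0
            interface_name: enp1s0
            type: ethernet
            autoconnect: yes
            ip:
              address:
                - 198.51.100.20/24
                - 2001:db8:1::1/64
              gateway4: 198.51.100.254
              gateway6: 2001:db8:1::fffe
              dns:
                - 198.51.100.200
                - 2001:db8:1::ffbb
              dns_search:
                - example.com
            state: up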
Run the playbook:

# ansible-playbook ~/ethernet-connection.yml
8.12. Configuring a static route by using the network RHEL System Role
You can use the network RHEL System Role to configure static routes.
When you run a play that uses the network RHEL System Role, the system role overrides an existing connection profile with the same name if the settings do not match the values specified in the play. Therefore, always specify the whole configuration of the network connection profile in the play, even if, for example, the IP configuration already exists. Otherwise, the role resets these values to their defaults.
Depending on whether it already exists, the procedure creates or updates the enp7s0 connection profile with the following settings:
-
A static IPv4 address -
192.0.2.1with a/24subnet mask -
A static IPv6 address -
2001:db8:1::1with a/64subnet mask -
An IPv4 default gateway -
192.0.2.254 -
An IPv6 default gateway -
2001:db8:1::fffe -
An IPv4 DNS server -
192.0.2.200 -
An IPv6 DNS server -
2001:db8:1::ffbb -
A DNS search domain -
example.com Static routes:
-
198.51.100.0/24with gateway192.0.2.10 -
2001:db8:2::/64with gateway2001:db8:1::10
-
Perform this procedure on the Ansible control node.
Prerequisites
- You have prepared the control node and the managed nodes
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
-
The account you use to connect to the managed nodes has
sudopermissions on them. - The managed nodes or groups of managed nodes on which you want to run this playbook are listed in the Ansible inventory file.
Procedure
Create a playbook file, for example
~/add-static-routes.yml, with the following content:
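A minimal sketch of such a playbook; the host entry is illustrative:

---
- name: Configure the network
  hosts: managed-node-01.example.com
  tasks:
    - name: Configure enp7s0 with static IP addresses and static routes
      include_role:
        name: rhel-system-roles.network
      vars:
        network_connections:
          - name: enp7s0
            interface_name: enp7s0
            type: ethernet
            autoconnect: yes
            ip:
              address:
                - 192.0.2.1/24
                - 2001:db8:1::1/64
              gateway4: 192.0.2.254
              gateway6: 2001:db8:1::fffe
              dns:
                - 192.0.2.200
                - 2001:db8:1::ffbb
              dns_search:
                - example.com
              route:
                - network: 198.51.100.0
                  prefix: 24
                  gateway: 192.0.2.10
                - network: "2001:db8:2::"
                  prefix: 64
                  gateway: "2001:db8:1::10"
            state: up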
Run the playbook:

# ansible-playbook ~/add-static-routes.yml
Verification
On the managed nodes:
Display the IPv4 routes:
# ip -4 route
...
198.51.100.0/24 via 192.0.2.10 dev enp7s0

Display the IPv6 routes:

# ip -6 route
...
2001:db8:2::/64 via 2001:db8:1::10 dev enp7s0 metric 1024 pref medium
8.13. Configuring an ethtool offload feature by using the network RHEL System Role
You can use the network RHEL System Role to configure ethtool features of a NetworkManager connection.
When you run a play that uses the network RHEL System Role, the system role overrides an existing connection profile with the same name if the settings do not match the values specified in the play. Therefore, always specify the whole configuration of the network connection profile in the play, even if, for example, the IP configuration already exists. Otherwise, the role resets these values to their defaults.
Depending on whether it already exists, the procedure creates or updates the enp1s0 connection profile with the following settings:
-
A static
IPv4address -198.51.100.20with a/24subnet mask -
A static
IPv6address -2001:db8:1::1with a/64subnet mask -
An
IPv4default gateway -198.51.100.254 -
An
IPv6default gateway -2001:db8:1::fffe -
An
IPv4DNS server -198.51.100.200 -
An
IPv6DNS server -2001:db8:1::ffbb -
A DNS search domain -
example.com ethtoolfeatures:- Generic receive offload (GRO): disabled
- Generic segmentation offload (GSO): enabled
- TX stream control transmission protocol (SCTP) segmentation: disabled
Perform this procedure on the Ansible control node.
Prerequisites
- You have prepared the control node and the managed nodes
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
-
The account you use to connect to the managed nodes has
sudopermissions on them. - The managed nodes or groups of managed nodes on which you want to run this playbook are listed in the Ansible inventory file.
Procedure
Create a playbook file, for example
~/configure-ethernet-device-with-ethtool-features.yml, with the following content:
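A minimal sketch of such a playbook; the host entry is illustrative, and the exact feature key names (gro, gso, tx_sctp_segmentation) may vary between role versions:

---
- name: Configure the network
  hosts: managed-node-01.example.com
  tasks:
    - name: Configure enp1s0 with ethtool offload features
      include_role:
        name: rhel-system-roles.network
      vars:
        network_connections:
          - name: enp1s0
            interface_name: enp1s0
            type: ethernet
            autoconnect: yes
            ip:
              address:
                - 198.51.100.20/24
                - 2001:db8:1::1/64
              gateway4: 198.51.100.254
              gateway6: 2001:db8:1::fffe
              dns:
                - 198.51.100.200
                - 2001:db8:1::ffbb
              dns_search:
                - example.com
            ethtool:
              features:
                gro: "no"
                gso: "yes"
                tx_sctp_segmentation: "no"
            state: up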
Run the playbook:

# ansible-playbook ~/configure-ethernet-device-with-ethtool-features.yml
8.14. Network states for the network RHEL System role
The network RHEL system role supports state configurations in playbooks to configure the devices. For this, use the network_state variable followed by the state configurations.
Benefits of using the network_state variable in a playbook:
- Using the declarative method with the state configurations, you can configure interfaces, and the NetworkManager creates a profile for these interfaces in the background.
-
With the
network_statevariable, you can specify the options that you require to change, and all the other options will remain the same as they are. However, with thenetwork_connectionsvariable, you must specify all settings to change the network connection profile.
For example, to create an Ethernet connection with dynamic IP address settings, use the following vars block in your playbook:
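As a rough sketch (the enp7s0 interface name is a placeholder, and the exact nmstate keys can vary between versions), a state-based vars block might look like the following:

vars:
  network_state:
    interfaces:
      - name: enp7s0
        type: ethernet
        state: up
        ipv4:
          enabled: true
          dhcp: true
        ipv6:
          enabled: true
          autoconf: true
          dhcp: true

The equivalent regular playbook uses the network_connections variable instead:

vars:
  network_connections:
    - name: enp7s0
      interface_name: enp7s0
      type: ethernet
      autoconnect: yes
      ip:
        dhcp4: yes
        auto6: yes
      state: up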
For example, to only change the connection status of dynamic IP address settings that you created as above, use the following vars block in your playbook:
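As a sketch under the same assumptions, the state-based vars block only needs the option that changes, for example:

vars:
  network_state:
    interfaces:
      - name: enp7s0
        type: ethernet
        state: up

With the network_connections variable, by contrast, you must repeat the whole connection profile, including the dynamic IP settings, to avoid resetting the other options to their defaults.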
Chapter 9. Configuring firewalld using System Roles
You can use the firewall System Role to configure settings of the firewalld service on multiple clients at once. This solution:
- Provides an interface with efficient input settings.
-
Keeps all intended
firewalldparameters in one place.
After you run the firewall role on the control node, the System Role applies the firewalld parameters to the managed node immediately and makes them persistent across reboots.
9.1. Introduction to the firewall RHEL System Role
RHEL System Roles is a set of contents for the Ansible automation utility. This content together with the Ansible automation utility provides a consistent configuration interface to remotely manage multiple systems.
The rhel-system-roles.firewall role from the RHEL System Roles was introduced for automated configurations of the firewalld service. The rhel-system-roles package contains this System Role, and also the reference documentation.
To apply the firewalld parameters on one or more systems in an automated fashion, use the firewall System Role variable in a playbook. A playbook is a list of one or more plays that is written in the text-based YAML format.
You can use an inventory file to define a set of systems that you want Ansible to configure.
With the firewall role you can configure many different firewalld parameters, for example:
- Zones.
- The services for which packets should be allowed.
- Granting, rejection, or dropping of traffic access to ports.
- Forwarding of ports or port ranges for a zone.
9.2. Resetting the firewalld settings using the firewall RHEL System Role
With the firewall RHEL system role, you can reset the firewalld settings to their default state. If you add the previous:replaced parameter to the variable list, the System Role removes all existing user-defined settings and resets firewalld to the defaults. If you combine the previous:replaced parameter with other settings, the firewall role removes all existing settings before applying new ones.
Perform this procedure on the Ansible control node.
Prerequisites
- You have prepared the control node and the managed nodes
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
-
The account you use to connect to the managed nodes has
sudopermissions on the them. - The managed nodes or groups of managed nodes on which you want to run this playbook are listed in the Ansible inventory file.
Procedure
Create a playbook file, for example
~/reset-firewalld.yml, with the following content:
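A minimal sketch of such a playbook; the play name and the managed-node-01.example.com host entry are placeholders:

---
- name: Reset firewalld settings
  hosts: managed-node-01.example.com
  tasks:
    - name: Reset firewalld to its default configuration
      include_role:
        name: rhel-system-roles.firewall
      vars:
        firewall:
          - previous: replaced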
Run the playbook:

# ansible-playbook ~/reset-firewalld.yml
Verification
Run this command as
root on the managed node to check all the zones:

# firewall-cmd --list-all-zones
9.3. Forwarding incoming traffic from one local port to a different local port
With the firewall role you can remotely configure firewalld parameters with persisting effect on multiple managed hosts.
Perform this procedure on the Ansible control node.
Prerequisites
- You have prepared the control node and the managed nodes
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
-
The account you use to connect to the managed nodes has
sudopermissions on the them. - The managed nodes or groups of managed nodes on which you want to run this playbook are listed in the Ansible inventory file.
Procedure
Create a playbook file, for example
~/port_forwarding.yml, with the following content:
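A minimal sketch of such a playbook; the host entry and the example ports (8080/tcp forwarded to 443) are placeholders:

---
- name: Configure port forwarding
  hosts: managed-node-01.example.com
  tasks:
    - name: Forward incoming traffic from port 8080/tcp to port 443
      include_role:
        name: rhel-system-roles.firewall
      vars:
        firewall:
          - forward_port: 8080/tcp;443;
            state: enabled
            runtime: true
            permanent: true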
Run the playbook:

# ansible-playbook ~/port_forwarding.yml
Verification
On the managed host, display the
firewalld settings:

# firewall-cmd --list-forward-ports
9.4. Configuring ports using System Roles
You can use the RHEL firewall System Role to open or close ports in the local firewall for incoming traffic and make the new configuration persist across reboots. For example you can configure the default zone to permit incoming traffic for the HTTPS service.
Perform this procedure on the Ansible control node.
Prerequisites
- You have prepared the control node and the managed nodes
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
-
The account you use to connect to the managed nodes has
sudopermissions on the them. - The managed nodes or groups of managed nodes on which you want to run this playbook are listed in the Ansible inventory file.
Procedure
Create a playbook file, for example
~/opening-a-port.yml, with the following content:
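A minimal sketch of such a playbook; the host entry is illustrative:

---
- name: Open a port for incoming HTTPS traffic
  hosts: managed-node-01.example.com
  tasks:
    - name: Allow incoming traffic on port 443/tcp in the default zone
      include_role:
        name: rhel-system-roles.firewall
      vars:
        firewall:
          - port: 443/tcp
            state: enabled
            runtime: true
            permanent: true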
The permanent: true option makes the new settings persistent across reboots.

Run the playbook:
# ansible-playbook ~/opening-a-port.yml
Verification
On the managed node, verify that the
443/tcp port associated with the HTTPS service is open:

# firewall-cmd --list-ports
443/tcp
9.5. Configuring a DMZ firewalld zone by using the firewalld RHEL System Role
As a system administrator, you can use the firewall System Role to configure a dmz zone on the enp1s0 interface to permit HTTPS traffic to the zone. In this way, you enable external users to access your web servers.
Perform this procedure on the Ansible control node.
Prerequisites
- You have prepared the control node and the managed nodes
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
-
The account you use to connect to the managed nodes has
sudopermissions on the them. - The managed nodes or groups of managed nodes on which you want to run this playbook are listed in the Ansible inventory file.
Procedure
Create a playbook file, for example
~/configuring-a-dmz.yml, with the following content:
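A minimal sketch of such a playbook; the host entry is illustrative:

---
- name: Configure a DMZ zone
  hosts: managed-node-01.example.com
  tasks:
    - name: Assign enp1s0 to the dmz zone and allow HTTPS traffic
      include_role:
        name: rhel-system-roles.firewall
      vars:
        firewall:
          - zone: dmz
            interface: enp1s0
            service: https
            state: enabled
            runtime: true
            permanent: true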
Run the playbook:

# ansible-playbook ~/configuring-a-dmz.yml
Verification
On the managed node, view detailed information about the
dmz zone:
Chapter 10. Variables of the postfix role in System Roles
The postfix role variables allow the user to install, configure, and start the postfix Mail Transfer Agent (MTA).
The following role variables are defined in this section:
-
postfix_conf: It includes key/value pairs of all the supportedpostfixconfiguration parameters. By default, thepostfix_confdoes not have a value.
postfix_conf:
  relayhost: example.com
If your scenario requires removing any existing configuration and applying the desired configuration on top of a clean postfix installation, specify the previous: replaced option within the postfix_conf dictionary:
An example with the previous: replaced option:
postfix_conf:
  relayhost: example.com
  previous: replaced
-
postfix_check: It determines whether a configuration check runs before starting postfix to verify the configuration changes. The default value is true.
For example:
postfix_check: true
-
postfix_backup: It determines whether a single backup copy of the configuration is created. By default thepostfix_backupvalue is false.
To overwrite any previous backup, run the following command:

# cp /etc/postfix/main.cf /etc/postfix/main.cf.backup
If the postfix_backup value is changed to true, you must also set the postfix_backup_multiple value to false.
For example:
postfix_backup: true
postfix_backup_multiple: false
-
postfix_backup_multiple: It determines if the role will make a timestamped backup copy of the configuration.
To keep multiple backup copies, run the following command:
# cp /etc/postfix/main.cf /etc/postfix/main.cf.$(date -Isec)
By default, the value of postfix_backup_multiple is true. The postfix_backup_multiple: true setting overrides postfix_backup. If you want to use postfix_backup, you must set postfix_backup_multiple: false.
-
postfix_manage_firewall: Integrates thepostfixrole with thefirewallrole to manage port access. By default, the variable is set tofalse. If you want to automatically manage port access from thepostfixrole, set the variable totrue. -
postfix_manage_selinux: Integrates thepostfixrole with theselinuxrole to manage port access. By default, the variable is set tofalse. If you want to automatically manage port access from thepostfixrole, set the variable totrue.
The configuration parameters cannot be removed. Before running the postfix role, set the postfix_conf to all the required configuration parameters and use the file module to remove /etc/postfix/main.cf.
Chapter 11. Configuring SELinux using System Roles
11.1. Introduction to the selinux System Role
RHEL System Roles is a collection of Ansible roles and modules that provide a consistent configuration interface to remotely manage multiple RHEL systems. The selinux System Role enables the following actions:
- Cleaning local policy modifications related to SELinux booleans, file contexts, ports, and logins.
- Setting SELinux policy booleans, file contexts, ports, and logins.
- Restoring file contexts on specified files or directories.
- Managing SELinux modules.
The following table provides an overview of input variables available in the selinux System Role.
| Role variable | Description | CLI alternative |
|---|---|---|
| selinux_policy | Chooses a policy protecting targeted processes or Multi Level Security protection. | SELINUXTYPE in /etc/selinux/config |
| selinux_state | Switches SELinux modes. | setenforce and SELINUX in /etc/selinux/config |
| selinux_booleans | Enables and disables SELinux booleans. | setsebool |
| selinux_fcontexts | Adds or removes a SELinux file context mapping. | semanage fcontext |
| selinux_restore_dirs | Restores SELinux labels in the file-system tree. | restorecon -R |
| selinux_ports | Sets SELinux labels on ports. | semanage port |
| selinux_logins | Sets users to SELinux user mapping. | semanage login |
| selinux_modules | Installs, enables, disables, or removes SELinux modules. | semanage module |
The /usr/share/doc/rhel-system-roles/selinux/example-selinux-playbook.yml example playbook installed by the rhel-system-roles package demonstrates how to set the targeted policy in enforcing mode. The playbook also applies several local policy modifications and restores file contexts in the /tmp/test_dir/ directory.
For a detailed reference on selinux role variables, install the rhel-system-roles package, and see the README.md or README.html files in the /usr/share/doc/rhel-system-roles/selinux/ directory.
11.2. Using the selinux System Role to apply SELinux settings on multiple systems
Follow the steps to prepare and apply an Ansible playbook with your verified SELinux settings.
Prerequisites
-
Access and permissions to one or more managed nodes, which are systems you want to configure with the
selinuxSystem Role. Access and permissions to a control node, which is a system from which Red Hat Ansible Core configures other systems.
On the control node:
-
The
ansible-coreandrhel-system-rolespackages are installed. - An inventory file which lists the managed nodes.
-
The
RHEL 8.0-8.5 provided access to a separate Ansible repository that contains Ansible Engine 2.9 for automation based on Ansible. Ansible Engine contains command-line utilities such as ansible, ansible-playbook, connectors such as docker and podman, and many plugins and modules. For information about how to obtain and install Ansible Engine, see the How to download and install Red Hat Ansible Engine Knowledgebase article.
RHEL 8.6 and 9.0 have introduced Ansible Core (provided as the ansible-core package), which contains the Ansible command-line utilities, commands, and a small set of built-in Ansible plugins. RHEL provides this package through the AppStream repository, and it has a limited scope of support. For more information, see the Scope of support for the Ansible Core package included in the RHEL 9 and RHEL 8.6 and later AppStream repositories Knowledgebase article.
Procedure
Prepare your playbook. You can either start from scratch or modify the example playbook installed as part of the
rhel-system-roles package:

# cp /usr/share/doc/rhel-system-roles/selinux/example-selinux-playbook.yml my-selinux-playbook.yml
# vi my-selinux-playbook.yml

Change the content of the playbook to fit your scenario. For example, the following part ensures that the system installs and enables the selinux-local-1.pp SELinux module:

selinux_modules:
  - { path: "selinux-local-1.pp", priority: "400" }

- Save the changes, and exit the text editor.
Run your playbook on the host1, host2, and host3 systems:
# ansible-playbook -i host1,host2,host3 my-selinux-playbook.yml
Chapter 12. Configuring logging by using RHEL System Roles
As a system administrator, you can use the logging System Role to configure a RHEL host as a logging server to collect logs from many client systems.
12.1. The logging System Role
With the logging System Role, you can deploy logging configurations on local and remote hosts.
To apply a logging System Role on one or more systems, you define the logging configuration in a playbook. A playbook is a list of one or more plays. Playbooks are human-readable, and they are written in the YAML format. For more information about playbooks, see Working with playbooks in Ansible documentation.
The set of systems that you want to configure according to the playbook is defined in an inventory file. For more information on creating and using inventories, see How to build your inventory in Ansible documentation.
Logging solutions provide multiple ways of reading logs and multiple logging outputs.
For example, a logging system can receive the following inputs:
- local files,
-
systemd/journal, - another logging system over the network.
In addition, a logging system can have the following outputs:
-
logs stored in the local files in the
/var/logdirectory, - logs sent to Elasticsearch,
- logs forwarded to another logging system.
With the logging System Role, you can combine the inputs and outputs to fit your scenario. For example, you can configure a logging solution that stores inputs from journal in a local file, whereas inputs read from files are both forwarded to another logging system and stored in the local log files.
12.2. logging System Role parameters
In a logging System Role playbook, you define the inputs in the logging_inputs parameter, outputs in the logging_outputs parameter, and the relationships between the inputs and outputs in the logging_flows parameter. The logging System Role processes these variables with additional options to configure the logging system. You can also enable encryption or an automatic port management.
Currently, the only available logging system in the logging System Role is Rsyslog.
logging_inputs: List of inputs for the logging solution.-
name: Unique name of the input. Used in the logging_flows: inputs list and a part of the generated config file name.
type: Type of the input element. The type specifies a task type which corresponds to a directory name in roles/rsyslog/{tasks,vars}/inputs/.
basics: Inputs configuring inputs from systemd journal or unix socket.
kernel_message: Loadimklogif set totrue. Default tofalse. -
use_imuxsock: Useimuxsockinstead ofimjournal. Default tofalse. -
ratelimit_burst: Maximum number of messages that can be emitted withinratelimit_interval. Default to20000ifuse_imuxsockis false. Default to200ifuse_imuxsockis true. -
ratelimit_interval: Interval to evaluateratelimit_burst. Default to 600 seconds ifuse_imuxsockis false. Default to 0 ifuse_imuxsockis true. 0 indicates rate limiting is turned off. -
persist_state_interval: Journal state is persisted everyvaluemessages. Default to10. Effective only whenuse_imuxsockis false.
-
-
files: Inputs configuring inputs from local files. -
remote: Inputs configuring inputs from the other logging system over network.
-
state: State of the configuration file.presentorabsent. Default topresent.
-
logging_outputs: List of outputs for the logging solution.-
files: Outputs configuring outputs to local files. -
forwards: Outputs configuring outputs to another logging system. -
remote_files: Outputs configuring outputs from another logging system to local files.
-
logging_flows: List of flows that define relationships betweenlogging_inputsandlogging_outputs. Thelogging_flowsvariable has the following keys:-
name: Unique name of the flow -
inputs: List oflogging_inputsname values -
outputs: List oflogging_outputsname values.
-
-
logging_manage_firewall: If set totrue, the variable uses thefirewallrole to automatically manage port access from within theloggingrole. -
logging_manage_selinux: If set totrue, the variable uses theselinuxrole to automatically manage port access from within theloggingrole.
12.3. Applying a local logging System Role
Prepare and apply an Ansible playbook to configure a logging solution on a set of separate machines. Each machine records logs locally.
Prerequisites
-
Access and permissions to one or more managed nodes, which are systems you want to configure with the
loggingSystem Role. Access and permissions to a control node, which is a system from which Red Hat Ansible Core configures other systems.
On the control node:
-
The
ansible-coreandrhel-system-rolespackages are installed.
-
The
RHEL 8.0-8.5 provided access to a separate Ansible repository that contains Ansible Engine 2.9 for automation based on Ansible. Ansible Engine contains command-line utilities such as ansible, ansible-playbook, connectors such as docker and podman, and many plugins and modules. For information about how to obtain and install Ansible Engine, see the How to download and install Red Hat Ansible Engine Knowledgebase article.
RHEL 8.6 and 9.0 have introduced Ansible Core (provided as the ansible-core package), which contains the Ansible command-line utilities, commands, and a small set of built-in Ansible plugins. RHEL provides this package through the AppStream repository, and it has a limited scope of support. For more information, see the Scope of support for the Ansible Core package included in the RHEL 9 and RHEL 8.6 and later AppStream repositories Knowledgebase article.
- An inventory file which lists the managed nodes.
You do not have to have the rsyslog package installed, because the System Role installs rsyslog when deployed.
Procedure
Create a playbook that defines the required role:
Create a new YAML file and open it in a text editor, for example:
# vi logging-playbook.yml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Run the playbook on a specific inventory:
# ansible-playbook -i inventory-file /path/to/file/logging-playbook.yml
-
inventory-fileis the inventory file. -
logging-playbook.ymlis the playbook you use.
-
Verification
Test the syntax of the
/etc/rsyslog.conf file:

# rsyslogd -N 1
rsyslogd: version 8.1911.0-6.el8, config validation run...
rsyslogd: End of config validation run. Bye.

Verify that the system sends messages to the log:
Send a test message:
# logger test

View the /var/log/messages log, for example:

# cat /var/log/messages
Aug 5 13:48:31 hostname root[6778]: test

Where hostname is the host name of the client system. Note that the log contains the user name of the user that entered the logger command, in this case
root.
12.4. Filtering logs in a local logging System Role
You can deploy a logging solution which filters the logs based on the rsyslog property-based filter.
Prerequisites
-
Access and permissions to one or more managed nodes, which are systems you want to configure with the
loggingSystem Role. Access and permissions to a control node, which is a system from which Red Hat Ansible Core configures other systems.
On the control node:
- Red Hat Ansible Core is installed
-
The
rhel-system-rolespackage is installed - An inventory file which lists the managed nodes.
You do not have to have the rsyslog package installed, because the System Role installs rsyslog when deployed.
Procedure
Create a new
playbook.yml file with the following content:
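A sketch of such a playbook follows; the input, output, and flow names are placeholders, and it assumes the property-based filter options property, property_op, and property_value of the files output:

---
- name: Deploying files input and configured files output
  hosts: all
  roles:
    - rhel-system-roles.logging
  vars:
    logging_inputs:
      - name: files_input
        type: basics
    logging_outputs:
      - name: files_output0
        type: files
        property: msg
        property_op: contains
        property_value: error
        path: /var/log/errors.log
      - name: files_output1
        type: files
        property: msg
        property_op: "!contains"
        property_value: error
        path: /var/log/others.log
    logging_flows:
      - name: flow0
        inputs: [files_input]
        outputs: [files_output0, files_output1]

Using this configuration, all messages that contain the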
error string are logged in /var/log/errors.log, and all other messages are logged in /var/log/others.log.

You can replace the error property value with the string by which you want to filter.

You can modify the variables according to your preferences.
Optional: Verify playbook syntax.
# ansible-playbook --syntax-check playbook.yml
# ansible-playbook -i inventory_file /path/to/file/playbook.yml
Verification
Test the syntax of the
/etc/rsyslog.conf file:

# rsyslogd -N 1
rsyslogd: version 8.1911.0-6.el8, config validation run...
rsyslogd: End of config validation run. Bye.

Verify that the system sends messages that contain the error string to the log:

Send a test message:
# logger error

View the /var/log/errors.log log, for example:

# cat /var/log/errors.log
Aug 5 13:48:31 hostname root[6778]: error

Where
hostname is the host name of the client system. Note that the log contains the user name of the user that entered the logger command, in this case root.
12.5. Applying a remote logging solution using the logging System Role
Follow these steps to prepare and apply a Red Hat Ansible Core playbook to configure a remote logging solution. In this playbook, one or more clients take logs from systemd-journal and forward them to a remote server. The server receives remote input from remote_rsyslog and remote_files and outputs the logs to local files in directories named by remote host names.
Prerequisites
-
Access and permissions to one or more managed nodes, which are systems you want to configure with the
loggingSystem Role. Access and permissions to a control node, which is a system from which Red Hat Ansible Core configures other systems.
On the control node:
-
The
ansible-coreandrhel-system-rolespackages are installed. - An inventory file which lists the managed nodes.
-
The
You do not have to have the rsyslog package installed, because the System Role installs rsyslog when deployed.
Procedure
Create a playbook that defines the required role:
Create a new YAML file and open it in a text editor, for example:
# vi logging-playbook.yml

Insert the following content into the file:
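A sketch of such a playbook follows, with one play for the server and one for the clients; the group names servers and clients, the port 601, and the input, output, and flow names are illustrative:

---
- name: Deploying remote input and remote_files output
  hosts: servers
  roles:
    - rhel-system-roles.logging
  vars:
    logging_inputs:
      - name: remote_udp_input
        type: remote
        udp_ports: [601]
      - name: remote_tcp_input
        type: remote
        tcp_ports: [601]
    logging_outputs:
      - name: remote_files_output
        type: remote_files
    logging_flows:
      - name: flow_0
        inputs: [remote_udp_input, remote_tcp_input]
        outputs: [remote_files_output]

- name: Deploying basics input and forwards output
  hosts: clients
  roles:
    - rhel-system-roles.logging
  vars:
    logging_inputs:
      - name: basic_input
        type: basics
    logging_outputs:
      - name: forward_output0
        type: forwards
        severity: info
        target: host1.example.com
        udp_port: 601
      - name: forward_output1
        type: forwards
        facility: mail
        target: host1.example.com
        tcp_port: 601
    logging_flows:
      - name: flows0
        inputs: [basic_input]
        outputs: [forward_output0, forward_output1]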
Where
host1.example.com is the logging server.

Note: You can modify the parameters in the playbook to fit your needs.
Warning: The logging solution works only with the ports defined in the SELinux policy of the server or client system and open in the firewall. The default SELinux policy includes ports 601, 514, 6514, 10514, and 20514. To use a different port, modify the SELinux policy on the client and server systems.
Create an inventory file that lists your servers and clients:
Create a new file and open it in a text editor, for example:
# vi inventory.ini
[servers]
server ansible_host=host1.example.com

[clients]
client ansible_host=host2.example.com
-
host1.example.comis the logging server. -
host2.example.comis the logging client.
-
Run the playbook on your inventory.
# ansible-playbook -i /path/to/file/inventory.ini /path/to/file/logging-playbook.yml
-
inventory.iniis the inventory file. -
logging-playbook.ymlis the playbook you created.
-
Verification
On both the client and the server system, test the syntax of the
/etc/rsyslog.conf file:

# rsyslogd -N 1
rsyslogd: version 8.1911.0-6.el8, config validation run (level 1), master config /etc/rsyslog.conf
rsyslogd: End of config validation run. Bye.

Verify that the client system sends messages to the server:
On the client system, send a test message:
# logger test

On the server system, view the /var/log/messages log, for example:

# cat /var/log/messages
Aug 5 13:48:31 host2.example.com root[6778]: test

Where
host2.example.com is the host name of the client system. Note that the log contains the user name of the user that entered the logger command, in this case root.
12.6. Using the logging System Role with TLS
Transport Layer Security (TLS) is a cryptographic protocol designed to securely communicate over the computer network.
As an administrator, you can use the logging RHEL System Role to configure secure transfer of logs using Red Hat Ansible Automation Platform.
12.6.1. Configuring client logging with TLS
By running an Ansible playbook, you can use the logging System Role to configure logging on RHEL systems that log messages locally and transfer them to a remote logging system with TLS.
This procedure configures TLS on all hosts in the clients group in the Ansible inventory. The TLS protocol encrypts the message transmission for secure transfer of logs over the network.
Prerequisites
- You have permissions to run playbooks on managed nodes on which you want to configure TLS.
- The managed nodes are listed in the inventory file on the control node.
-
The
ansibleandrhel-system-rolespackages are installed on the control node.
Procedure
Create a
playbook.yml file with the following content:
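A sketch of such a playbook follows; the local certificate paths, the input log path, and the target host are placeholders:

---
- name: Deploying files input and forwards output with certs
  hosts: clients
  roles:
    - rhel-system-roles.logging
  vars:
    logging_pki_files:
      - ca_cert_src: /local/path/to/ca_cert.pem
        cert_src: /local/path/to/cert.pem
        private_key_src: /local/path/to/key.pem
    logging_inputs:
      - name: input_name
        type: files
        input_log_path: /var/log/containers/*.log
    logging_outputs:
      - name: output_name
        type: forwards
        target: server.example.com
        tcp_port: 514
        tls: true
    logging_flows:
      - name: flow_name
        inputs: [input_name]
        outputs: [output_name]

The playbook uses the following parameters: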
logging_pki_files-
Using this parameter, you can configure TLS; you must pass the
ca_cert_src,cert_src, andprivate_key_srcparameters. ca_cert-
Represents the path to CA certificate. Default path is
/etc/pki/tls/certs/ca.pemand the file name is set by the user. cert-
Represents the path to cert. Default path is
/etc/pki/tls/certs/server-cert.pemand the file name is set by the user. private_key-
Represents the path to private key. Default path is
/etc/pki/tls/private/server-key.pemand the file name is set by the user. ca_cert_src-
Represents local CA cert file path which is copied to the target host. If
ca_certis specified, it is copied to the location. cert_src-
Represents the local cert file path which is copied to the target host. If
certis specified, it is copied to the location. private_key_src-
Represents the local key file path which is copied to the target host. If
private_keyis specified, it is copied to the location. tls-
Using this parameter ensures secure transfer of logs over the network. If you do not want a secure wrapper, you can set
tls: false.
Verify playbook syntax:
# ansible-playbook --syntax-check playbook.yml
# ansible-playbook -i inventory_file playbook.yml
12.6.2. Configuring server logging with TLS
By running an Ansible playbook, you can use the logging System Role to configure logging on RHEL systems that act as a server and receive logs from remote logging systems with TLS.
This procedure configures TLS on all hosts in the server group in the Ansible inventory.
Prerequisites
- You have permissions to run playbooks on managed nodes on which you want to configure TLS.
- The managed nodes are listed in the inventory file on the control node.
-
The
ansibleandrhel-system-rolespackages are installed on the control node.
Procedure
Create a
playbook.yml file with the following content:
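A sketch of such a playbook follows, under the same placeholder assumptions as the client example:

---
- name: Deploying remote input and remote_files output with certs
  hosts: server
  roles:
    - rhel-system-roles.logging
  vars:
    logging_pki_files:
      - ca_cert_src: /local/path/to/ca_cert.pem
        cert_src: /local/path/to/cert.pem
        private_key_src: /local/path/to/key.pem
    logging_inputs:
      - name: remote_tcp_input
        type: remote
        tcp_ports: [514]
        tls: true
    logging_outputs:
      - name: remote_files_output
        type: remote_files
    logging_flows:
      - name: flow_name
        inputs: [remote_tcp_input]
        outputs: [remote_files_output]

The playbook uses the following parameters: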
logging_pki_files-
Using this parameter, you can configure TLS; you must pass the
ca_cert_src,cert_src, andprivate_key_srcparameters. ca_cert-
Represents the path to CA certificate. Default path is
/etc/pki/tls/certs/ca.pemand the file name is set by the user. cert-
Represents the path to cert. Default path is
/etc/pki/tls/certs/server-cert.pemand the file name is set by the user. private_key-
Represents the path to private key. Default path is
/etc/pki/tls/private/server-key.pemand the file name is set by the user. ca_cert_src-
Represents local CA cert file path which is copied to the target host. If
ca_certis specified, it is copied to the location. cert_src-
Represents the local cert file path which is copied to the target host. If
certis specified, it is copied to the location. private_key_src-
Represents the local key file path which is copied to the target host. If
private_keyis specified, it is copied to the location. tls-
Using this parameter ensures secure transfer of logs over the network. If you do not want a secure wrapper, you can set
tls: true.
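A minimal sketch of such a playbook is shown here; the host group, local certificate paths, and input/output names are illustrative assumptions:

---
- name: Deploying remote input and remote_files output with certs
  hosts: server
  roles:
    - rhel-system-roles.logging
  vars:
    logging_pki_files:
      - ca_cert_src: /local/path/to/ca_cert.pem
        cert_src: /local/path/to/cert.pem
        private_key_src: /local/path/to/key.pem
    logging_inputs:
      - name: remote_input
        type: remote
        tls: true
    logging_outputs:
      - name: remote_files_output
        type: remote_files
    logging_flows:
      - name: flow_name
        inputs: [remote_input]
        outputs: [remote_files_output]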
Verify the playbook syntax:
# ansible-playbook --syntax-check playbook.yml
Run the playbook on your inventory file:
# ansible-playbook -i inventory_file playbook.yml
12.7. Using the logging System Roles with RELP
Reliable Event Logging Protocol (RELP) is a networking protocol for data and message logging over the TCP network. It ensures reliable delivery of event messages and you can use it in environments that do not tolerate any message loss.
The RELP sender transfers log entries in the form of commands, and the receiver acknowledges them once they are processed. To ensure consistency, RELP assigns a transaction number to each transferred command and uses it for any kind of message recovery.
You can consider a remote logging system in between the RELP Client and RELP Server: the RELP Client transfers the logs to the remote logging system, and the RELP Server receives all the logs sent by the remote logging system.
Administrators can use the logging System Role to configure the logging system to reliably send and receive log entries.
12.7.1. Configuring client logging with RELP
You can use the logging System Role to configure RHEL systems that log locally and transfer their logs to a remote logging system over RELP by running an Ansible playbook.
This procedure configures RELP on all hosts in the clients group in the Ansible inventory. The RELP configuration uses Transport Layer Security (TLS) to encrypt the message transmission for secure transfer of logs over the network.
Prerequisites
- You have permissions to run playbooks on managed nodes on which you want to configure RELP.
- The managed nodes are listed in the inventory file on the control node.
- The ansible and rhel-system-roles packages are installed on the control node.
Procedure
Create a playbook.yml file with the following content. A sketch of such a playbook follows the settings descriptions below. The playbook uses the following settings:
- target: This is a required parameter that specifies the host name on which the remote logging system is running.
- port: Port number on which the remote logging system is listening.
- tls: Ensures secure transfer of logs over the network. If you do not want a secure wrapper, you can set the tls variable to false. By default, the tls parameter is set to true while working with RELP and requires key and certificate triplets {ca_cert, cert, private_key} and/or {ca_cert_src, cert_src, private_key_src}:
  - If the {ca_cert_src, cert_src, private_key_src} triplet is set, the default locations /etc/pki/tls/certs and /etc/pki/tls/private are used as the destination on the managed node to transfer the files from the control node. In this case, the file names are identical to the original ones in the triplet.
  - If the {ca_cert, cert, private_key} triplet is set, the files are expected to be in the default paths before the logging configuration is applied.
  - If both triplets are set, the files are transferred from their local paths on the control node to the specified paths on the managed node.
- ca_cert: Represents the path to the CA certificate. The default path is /etc/pki/tls/certs/ca.pem and the file name is set by the user.
- cert: Represents the path to the certificate. The default path is /etc/pki/tls/certs/server-cert.pem and the file name is set by the user.
- private_key: Represents the path to the private key. The default path is /etc/pki/tls/private/server-key.pem and the file name is set by the user.
- ca_cert_src: Represents the local CA certificate file path, which is copied to the target host. If ca_cert is specified, the file is copied to that location.
- cert_src: Represents the local certificate file path, which is copied to the target host. If cert is specified, the file is copied to that location.
- private_key_src: Represents the local key file path, which is copied to the target host. If private_key is specified, the file is copied to that location.
- pki_authmode: Accepts the authentication mode as name or fingerprint.
- permitted_servers: List of servers that the logging client is allowed to connect to and send logs to over TLS.
- inputs: List of logging input dictionaries.
- outputs: List of logging output dictionaries.
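A minimal sketch of such a playbook; the host group, server name, port, certificate paths, and permitted server pattern are illustrative assumptions:

---
- name: Deploying basic input and relp output
  hosts: clients
  roles:
    - rhel-system-roles.logging
  vars:
    logging_inputs:
      - name: basic_input
        type: basics
    logging_outputs:
      - name: relp_client
        type: relp
        target: logging.server.com
        port: 20514
        tls: true
        ca_cert: /etc/pki/tls/certs/ca.pem
        cert: /etc/pki/tls/certs/client-cert.pem
        private_key: /etc/pki/tls/private/client-key.pem
        pki_authmode: name
        permitted_servers:
          - '*.server.example.com'
    logging_flows:
      - name: example_flow
        inputs: [basic_input]
        outputs: [relp_client]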
Optional: Verify the playbook syntax:
# ansible-playbook --syntax-check playbook.yml
Run the playbook:
# ansible-playbook -i inventory_file playbook.yml
12.7.2. Configuring server logging with RELP
You can use the logging System Role to configure a RHEL system as a logging server that receives logs from remote logging systems over RELP by running an Ansible playbook.
This procedure configures RELP on all hosts in the server group in the Ansible inventory. The RELP configuration uses TLS to encrypt the message transmission for secure transfer of logs over the network.
Prerequisites
- You have permissions to run playbooks on managed nodes on which you want to configure RELP.
- The managed nodes are listed in the inventory file on the control node.
- The ansible and rhel-system-roles packages are installed on the control node.
Procedure
Create a playbook.yml file with the following content. A sketch of such a playbook follows the settings descriptions below. The playbook uses the following settings:
- port: Port number on which the remote logging system is listening.
- tls: Ensures secure transfer of logs over the network. If you do not want a secure wrapper, you can set the tls variable to false. By default, the tls parameter is set to true while working with RELP and requires key and certificate triplets {ca_cert, cert, private_key} and/or {ca_cert_src, cert_src, private_key_src}:
  - If the {ca_cert_src, cert_src, private_key_src} triplet is set, the default locations /etc/pki/tls/certs and /etc/pki/tls/private are used as the destination on the managed node to transfer the files from the control node. In this case, the file names are identical to the original ones in the triplet.
  - If the {ca_cert, cert, private_key} triplet is set, the files are expected to be in the default paths before the logging configuration is applied.
  - If both triplets are set, the files are transferred from their local paths on the control node to the specified paths on the managed node.
- ca_cert: Represents the path to the CA certificate. The default path is /etc/pki/tls/certs/ca.pem and the file name is set by the user.
- cert: Represents the path to the certificate. The default path is /etc/pki/tls/certs/server-cert.pem and the file name is set by the user.
- private_key: Represents the path to the private key. The default path is /etc/pki/tls/private/server-key.pem and the file name is set by the user.
- ca_cert_src: Represents the local CA certificate file path, which is copied to the target host. If ca_cert is specified, the file is copied to that location.
- cert_src: Represents the local certificate file path, which is copied to the target host. If cert is specified, the file is copied to that location.
- private_key_src: Represents the local key file path, which is copied to the target host. If private_key is specified, the file is copied to that location.
- pki_authmode: Accepts the authentication mode as name or fingerprint.
- permitted_clients: List of clients that the logging server allows to connect and send logs over TLS.
- inputs: List of logging input dictionaries.
- outputs: List of logging output dictionaries.
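A minimal sketch of such a playbook; the host group, port, certificate paths, and permitted client pattern are illustrative assumptions:

---
- name: Deploying remote input and remote_files output
  hosts: server
  roles:
    - rhel-system-roles.logging
  vars:
    logging_inputs:
      - name: relp_server
        type: relp
        port: 20514
        tls: true
        ca_cert: /etc/pki/tls/certs/ca.pem
        cert: /etc/pki/tls/certs/server-cert.pem
        private_key: /etc/pki/tls/private/server-key.pem
        pki_authmode: name
        permitted_clients:
          - '*client.example.com'
    logging_outputs:
      - name: remote_files_output
        type: remote_files
    logging_flows:
      - name: example_flow
        inputs: [relp_server]
        outputs: [remote_files_output]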
Optional: Verify the playbook syntax:
# ansible-playbook --syntax-check playbook.yml
Run the playbook:
# ansible-playbook -i inventory_file playbook.yml
Chapter 13. Configuring the systemd journal by using the journald RHEL System Role
With the journald System Role you can automate the systemd journal, and configure persistent logging by using the Red Hat Ansible Automation Platform.
13.1. Variables for the journald RHEL System Role
The journald System Role provides a set of variables for customizing the behavior of journald logging service. The role includes the following variables:
| Role Variable | Description |
|---|---|
| journald_persistent | Use this boolean variable to configure journald to store logs persistently on disk. |
| journald_max_disk_size | Use this variable to specify the maximum size, in megabytes, that journal files can occupy on disk. Refer to the default sizing calculation described in the journald.conf(5) man page. |
| journald_max_files | Use this variable to specify the maximum number of journal files you want to keep while respecting the journald_max_disk_size limit. |
| journald_max_file_size | Use this variable to specify the maximum size, in megabytes, of a single journal file. |
| journald_per_user | Use this boolean variable to configure journald to keep log data separate for each user. |
| journald_compression | Use this boolean variable to apply compression to journal entries. |
| journald_sync_interval | Use this variable to specify the time, in minutes, after which journald synchronizes the currently used journal file to disk. |
13.2. Configuring persistent logging by using the journald System Role
As a system administrator, you can configure persistent logging by using the journald System Role. The following example shows how to set up the journald System Role variables in a playbook to achieve the following goals:
- Configuring persistent logging
- Specifying the maximum size of disk space for journal files
- Configuring journald to keep log data separate for each user
- Defining the synchronization interval
Prerequisites
- You have prepared the control node and the managed nodes
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- The managed nodes or groups of managed nodes on which you want to run this playbook are listed in the Ansible inventory file.
Procedure
Create a new playbook.yml file with the following content:
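A minimal sketch of such a playbook, using the role variables described in Section 13.1 with values that produce the result described in the next paragraph (the host group is an illustrative assumption):

---
- name: Configure persistent logging
  hosts: all
  vars:
    journald_persistent: true
    journald_max_disk_size: 2048
    journald_per_user: true
    journald_sync_interval: 1
  roles:
    - rhel-system-roles.journald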
As a result, the journald service stores your logs persistently on disk up to a maximum size of 2048 MB and keeps log data separate for each user. The synchronization happens every minute.
Optional: Verify the playbook syntax:
# ansible-playbook --syntax-check playbook.yml -i inventory_file
Run the playbook on your inventory file:
# ansible-playbook -i inventory_file /path/to/file/playbook.yml
13.3. Additional resources
- The journald.conf(5) man page
- The ansible-playbook(1) man page
Chapter 14. Configuring secure communication by using the ssh and sshd RHEL System Roles
As an administrator, you can use the sshd System Role to configure SSH servers and the ssh System Role to configure SSH clients consistently on any number of RHEL systems at the same time by using Red Hat Ansible Automation Platform.
14.1. ssh Server System Role variables
In an sshd System Role playbook, you can define the parameters for the SSH configuration file according to your preferences and limitations.
If you do not configure these variables, the System Role produces an sshd_config file that matches the RHEL defaults.
In all cases, Booleans correctly render as yes and no in sshd configuration. You can define multi-line configuration items using lists. For example:
sshd_ListenAddress:
  - 0.0.0.0
  - '::'
renders as:
ListenAddress 0.0.0.0
ListenAddress ::
Variables for the sshd System Role
sshd_enable-
If set to
False, the role is completely disabled. Defaults toTrue. sshd_skip_defaults-
If set to
True, the System Role does not apply default values. Instead, you specify the complete set of configuration defaults by using either thesshddict, orsshd_Keyvariables. Defaults toFalse. sshd_manage_service-
If set to
False, the service is not managed, which means it is not enabled on boot and does not start or reload. Defaults toTrueexcept when running inside a container or AIX, because the Ansible service module does not currently supportenabledfor AIX. sshd_allow_reload-
If set to
False,sshddoes not reload after a change of configuration. This can help with troubleshooting. To apply the changed configuration, reloadsshdmanually. Defaults to the same value assshd_manage_serviceexcept on AIX, wheresshd_manage_servicedefaults toFalsebutsshd_allow_reloaddefaults toTrue. sshd_install_serviceIf set to
True, the role installs service files for thesshdservice. This overrides files provided in the operating system. Do not set toTrueunless you are configuring a second instance and you also change thesshd_servicevariable. Defaults toFalse.The role uses the files pointed by the following variables as templates:
sshd_service_template_service (default: templates/sshd.service.j2)
sshd_service_template_at_service (default: templates/sshd@.service.j2)
sshd_service_template_socket (default: templates/sshd.socket.j2)
sshd_service
- This variable changes the sshd service name, which is useful for configuring a second sshd service instance.
sshd
- A dict that contains configuration. For example:
  sshd:
    Compression: yes
    ListenAddress:
      - 0.0.0.0
sshd_OptionName
- You can define options by using simple variables consisting of the sshd_ prefix and the option name instead of a dict. The simple variables override values in the sshd dict. For example:
  sshd_Compression: no
sshd_match and sshd_match_1 to sshd_match_9
- A list of dicts or just a dict for a Match section. Note that these variables do not override match blocks as defined in the sshd dict. All of the sources are reflected in the resulting configuration file.
Secondary variables for the sshd System Role
You can use these variables to override the defaults that correspond to each supported platform.
sshd_packages- You can override the default list of installed packages using this variable.
sshd_config_owner,sshd_config_group, andsshd_config_mode-
You can set the ownership and permissions for the
opensshconfiguration file that this role produces using these variables. sshd_config_file-
The path where this role saves the
opensshserver configuration produced. sshd_config_namespaceThe default value of this variable is null, which means that the role defines the entire content of the configuration file including system defaults. Alternatively, you can use this variable to invoke this role from other roles or from multiple places in a single playbook on systems that do not support drop-in directory. The
sshd_skip_defaultsvariable is ignored and no system defaults are used in this case.When this variable is set, the role places the configuration that you specify to configuration snippets in an existing configuration file under the given namespace. If your scenario requires applying the role several times, you need to select a different namespace for each application.
NoteLimitations of the
opensshconfiguration file still apply. For example, only the first option specified in a configuration file is effective for most of the configuration options.Technically, the role places snippets in "Match all" blocks, unless they contain other match blocks, to ensure they are applied regardless of the previous match blocks in the existing configuration file. This allows configuring any non-conflicting options from different roles invocations.
sshd_binary-
The path to the
sshdexecutable ofopenssh. sshd_service-
The name of the
sshdservice. By default, this variable contains the name of thesshdservice that the target platform uses. You can also use it to set the name of the customsshdservice when the role uses thesshd_install_servicevariable. sshd_verify_hostkeys-
Defaults to
auto. When set toauto, this lists all host keys that are present in the produced configuration file, and generates any paths that are not present. Additionally, permissions and file owners are set to default values. This is useful if the role is used in the deployment stage to make sure the service is able to start on the first attempt. To disable this check, set this variable to an empty list[]. sshd_hostkey_owner,sshd_hostkey_group,sshd_hostkey_mode-
Use these variables to set the ownership and permissions for the host keys from
sshd_verify_hostkeys. sshd_sysconfig-
On RHEL-based systems, this variable configures additional details of the
sshdservice. If set totrue, this role manages also the/etc/sysconfig/sshdconfiguration file based on the following configuration. Defaults tofalse. sshd_sysconfig_override_crypto_policy-
In RHEL, when set to
true, this variable overrides the system-wide crypto policy. Defaults tofalse. sshd_sysconfig_use_strong_rng-
On RHEL-based systems, this variable can force
sshdto reseed theopensslrandom number generator with the number of bytes given as the argument. The default is0, which disables this functionality. Do not turn this on if the system does not have a hardware random number generator.
14.2. Configuring OpenSSH servers using the sshd System Role
You can use the sshd System Role to configure multiple SSH servers by running an Ansible playbook.
You can use the sshd System Role with other System Roles that change SSH and SSHD configuration, for example the Identity Management RHEL System Roles. To prevent the configuration from being overwritten, make sure that the sshd role uses namespaces (RHEL 8 and earlier versions) or a drop-in directory (RHEL 9).
Prerequisites
- Access and permissions to one or more managed nodes, which are systems you want to configure with the sshd System Role.
- Access and permissions to a control node, which is a system from which Red Hat Ansible Core configures other systems.
  On the control node:
  - The ansible-core and rhel-system-roles packages are installed.
RHEL 8.0-8.5 provided access to a separate Ansible repository that contains Ansible Engine 2.9 for automation based on Ansible. Ansible Engine contains command-line utilities such as ansible, ansible-playbook, connectors such as docker and podman, and many plugins and modules. For information about how to obtain and install Ansible Engine, see the How to download and install Red Hat Ansible Engine Knowledgebase article.
RHEL 8.6 and 9.0 have introduced Ansible Core (provided as the ansible-core package), which contains the Ansible command-line utilities, commands, and a small set of built-in Ansible plugins. RHEL provides this package through the AppStream repository, and it has a limited scope of support. For more information, see the Scope of support for the Ansible Core package included in the RHEL 9 and RHEL 8.6 and later AppStream repositories Knowledgebase article.
- An inventory file which lists the managed nodes.
Procedure
Copy the example playbook for the sshd System Role:
# cp /usr/share/doc/rhel-system-roles/sshd/example-root-login-playbook.yml path/custom-playbook.yml
Open the copied playbook by using a text editor. The playbook configures the managed node as an SSH server so that:
- Password and root user login is disabled.
- Password and root user login is enabled only from the subnet 192.0.2.0/24.
You can modify the variables according to your preferences. For more details, see SSH Server System Role variables.
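After editing, the copied playbook can look like the following sketch; the task name and the exact Match condition are illustrative, and only the settings described above are required:

---
- hosts: all
  tasks:
    - name: Configure sshd to prevent root and password login except from a particular subnet
      include_role:
        name: rhel-system-roles.sshd
      vars:
        sshd:
          PermitRootLogin: no
          PasswordAuthentication: no
          Match:
            - Condition: "Address 192.0.2.0/24"
              PermitRootLogin: yes
              PasswordAuthentication: yes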
Optional: Verify the playbook syntax:
# ansible-playbook --syntax-check path/custom-playbook.yml
Run the playbook on your inventory file:
# ansible-playbook -i inventory_file path/custom-playbook.yml
Verification
Log in to the SSH server:
$ ssh user1@10.1.1.1
Where:
- user1 is a user on the SSH server.
- 10.1.1.1 is the IP address of the SSH server.
Check the contents of the sshd_config file on the SSH server and verify that it contains the settings from the playbook.
Check that you can connect to the server as root from the 192.0.2.0/24 subnet:
Determine your IP address:
$ hostname -I
192.0.2.1
If the IP address is within the 192.0.2.1 - 192.0.2.254 range, you can connect to the server.
Connect to the server as root:
$ ssh root@10.1.1.1
14.3. ssh System Role variables
In an ssh System Role playbook, you can define the parameters for the client SSH configuration file according to your preferences and limitations.
If you do not configure these variables, the System Role produces a global ssh_config file that matches the RHEL defaults.
In all cases, booleans correctly render as yes or no in ssh configuration. You can define multi-line configuration items using lists. For example:
LocalForward:
  - 22 localhost:2222
  - 403 localhost:4003
renders as:
LocalForward 22 localhost:2222
LocalForward 403 localhost:4003
The configuration options are case sensitive.
Variables for the ssh System Role
ssh_user-
You can define an existing user name for which the System Role modifies user-specific configuration. The user-specific configuration is saved in
~/.ssh/configof the given user. The default value is null, which modifies global configuration for all users. ssh_skip_defaults-
Defaults to
auto. If set toauto, the System Role writes the system-wide configuration file/etc/ssh/ssh_configand keeps the RHEL defaults defined there. Creating a drop-in configuration file, for example by defining thessh_drop_in_namevariable, automatically disables thessh_skip_defaultsvariable. ssh_drop_in_nameDefines the name for the drop-in configuration file, which is placed in the system-wide drop-in directory. The name is used in the template
/etc/ssh/ssh_config.d/{ssh_drop_in_name}.confto reference the configuration file to be modified. If the system does not support drop-in directory, the default value is null. If the system supports drop-in directories, the default value is00-ansible.WarningIf the system does not support drop-in directories, setting this option will make the play fail.
The suggested format is
NN-name, whereNNis a two-digit number used for ordering the configuration files andnameis any descriptive name for the content or the owner of the file.ssh- A dict that contains configuration options and their respective values.
ssh_OptionName-
You can define options by using simple variables consisting of the
ssh_prefix and the option name instead of a dict. The simple variables override values in thesshdict. ssh_additional_packages-
This role automatically installs the
opensshandopenssh-clientspackages, which are needed for the most common use cases. If you need to install additional packages, for example,openssh-keysignfor host-based authentication, you can specify them in this variable. ssh_config_fileThe path to which the role saves the configuration file produced. Default value:
-
If the system has a drop-in directory, the default value is defined by the template
/etc/ssh/ssh_config.d/{ssh_drop_in_name}.conf. -
If the system does not have a drop-in directory, the default value is
/etc/ssh/ssh_config. -
if the
ssh_uservariable is defined, the default value is~/.ssh/config.
-
If the system has a drop-in directory, the default value is defined by the template
ssh_config_owner,ssh_config_group,ssh_config_mode-
The owner, group and modes of the created configuration file. By default, the owner of the file is
root:root, and the mode is0644. Ifssh_useris defined, the mode is0600, and the owner and group are derived from the user name specified in thessh_uservariable.
14.4. Configuring OpenSSH clients using the ssh System Role
You can use the ssh System Role to configure multiple SSH clients by running an Ansible playbook.
You can use the ssh System Role with other System Roles that change SSH and SSHD configuration, for example the Identity Management RHEL System Roles. To prevent the configuration from being overwritten, make sure that the ssh role uses a drop-in directory (default from RHEL 8).
Prerequisites
- Access and permissions to one or more managed nodes, which are systems you want to configure with the ssh System Role.
- Access and permissions to a control node, which is a system from which Red Hat Ansible Core configures other systems.
  On the control node:
  - The ansible-core and rhel-system-roles packages are installed.
RHEL 8.0-8.5 provided access to a separate Ansible repository that contains Ansible Engine 2.9 for automation based on Ansible. Ansible Engine contains command-line utilities such as ansible, ansible-playbook, connectors such as docker and podman, and many plugins and modules. For information about how to obtain and install Ansible Engine, see the How to download and install Red Hat Ansible Engine Knowledgebase article.
RHEL 8.6 and 9.0 have introduced Ansible Core (provided as the ansible-core package), which contains the Ansible command-line utilities, commands, and a small set of built-in Ansible plugins. RHEL provides this package through the AppStream repository, and it has a limited scope of support. For more information, see the Scope of support for the Ansible Core package included in the RHEL 9 and RHEL 8.6 and later AppStream repositories Knowledgebase article.
- An inventory file which lists the managed nodes.
Procedure
Create a new playbook.yml file with the following content. This playbook configures the
rootuser’s SSH client preferences on the managed nodes with the following configurations:- Compression is enabled.
-
ControlMaster multiplexing is set to
auto. -
The
examplealias for connecting to theexample.comhost isuser1. -
The
examplehost alias is created, which represents a connection to theexample.comhost the withuser1user name. - X11 forwarding is disabled.
Optionally, you can modify these variables according to your preferences. For more details, see ssh System Role variables.
Optional: Verify the playbook syntax:
# ansible-playbook --syntax-check path/custom-playbook.yml
Run the playbook on your inventory file:
# ansible-playbook -i inventory_file path/custom-playbook.yml
Verification
Verify that the managed node has the correct configuration by opening the SSH configuration file in a text editor, for example:
# vi ~root/.ssh/config
After application of the example playbook shown above, the configuration file should have the following content:
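A sketch of what the resulting configuration can look like, assuming the client settings described in this procedure (compression enabled, ControlMaster set to auto, the example alias for example.com as user1, and X11 forwarding disabled); the exact rendering depends on the role version:

# Ansible managed
Host example
  Hostname example.com
  User user1
Match all
  Compression yes
  ControlMaster auto
  ForwardX11 no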
14.5. Using the sshd System Role for non-exclusive configuration
Normally, applying the sshd System Role overwrites the entire configuration. This may be problematic if you have previously adjusted the configuration, for example with a different System Role or playbook. To apply the sshd System Role for only selected configuration options while keeping other options in place, you can use the non-exclusive configuration.
In RHEL 8 and earlier, you can apply the non-exclusive configuration with a configuration snippet.
Prerequisites
- Access and permissions to one or more managed nodes, which are systems you want to configure with the sshd System Role.
- Access and permissions to a control node, which is a system from which Red Hat Ansible Core configures other systems.
  On the control node:
  - The ansible-core package is installed.
- An inventory file which lists the managed nodes.
- A playbook for a different RHEL System Role.
Procedure
Add a configuration snippet with the sshd_config_namespace variable to the playbook. When you apply the playbook to the inventory, the role adds the following snippet, if not already present, to the /etc/ssh/sshd_config file:
# BEGIN sshd system role managed block: namespace <my-application>
Match all
AcceptEnv LANG LS_COLORS EDITOR
# END sshd system role managed block: namespace <my-application>
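A playbook task that produces this snippet can look like the following sketch; the namespace value and the accepted environment variables mirror the snippet above and are otherwise illustrative:

---
- hosts: all
  tasks:
    - name: Configure sshd to accept some useful environment variables
      include_role:
        name: rhel-system-roles.sshd
      vars:
        sshd_config_namespace: my-application
        sshd:
          # Environment variables to accept
          AcceptEnv:
            LANG
            LS_COLORS
            EDITOR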
Verification
Optional: Verify the playbook syntax:
# ansible-playbook --syntax-check playbook.yml -i inventory_file
Chapter 15. Configuring VPN connections with IPsec by using the vpn RHEL System Role
With the vpn System Role, you can configure VPN connections on RHEL systems by using Red Hat Ansible Automation Platform. You can use it to set up host-to-host, network-to-network, VPN Remote Access Server, and mesh configurations.
For host-to-host connections, the role sets up a VPN tunnel between each pair of hosts in the list of vpn_connections using the default parameters, including generating keys as needed. Alternatively, you can configure it to create an opportunistic mesh configuration between all hosts listed. The role assumes that the names of the hosts under hosts are the same as the names of the hosts used in the Ansible inventory, and that you can use those names to configure the tunnels.
The vpn RHEL System Role currently supports only Libreswan, which is an IPsec implementation, as the VPN provider.
15.1. Creating a host-to-host VPN with IPsec using the vpn System Role
You can use the vpn System Role to configure host-to-host connections by running an Ansible playbook on the control node, which will configure all the managed nodes listed in an inventory file.
Prerequisites
- Access and permissions to one or more managed nodes, which are systems you want to configure with the vpn System Role.
- Access and permissions to a control node, which is a system from which Red Hat Ansible Core configures other systems.
  On the control node:
  - The ansible-core and rhel-system-roles packages are installed.
RHEL 8.0-8.5 provided access to a separate Ansible repository that contains Ansible Engine 2.9 for automation based on Ansible. Ansible Engine contains command-line utilities such as ansible, ansible-playbook, connectors such as docker and podman, and many plugins and modules. For information about how to obtain and install Ansible Engine, see the How to download and install Red Hat Ansible Engine Knowledgebase article.
RHEL 8.6 and 9.0 have introduced Ansible Core (provided as the ansible-core package), which contains the Ansible command-line utilities, commands, and a small set of built-in Ansible plugins. RHEL provides this package through the AppStream repository, and it has a limited scope of support. For more information, see the Scope of support for the Ansible Core package included in the RHEL 9 and RHEL 8.6 and later AppStream repositories Knowledgebase article.
- An inventory file which lists the managed nodes.
Procedure
Create a new
playbook.ymlfile with the following content:Copy to Clipboard Copied! Toggle word wrap Toggle overflow This playbook configures the connection
managed_node1-to-managed_node2using pre-shared key authentication with keys auto-generated by the system role. Sincevpn_manage_firewallandvpn_manage_selinuxare both set to true, thevpnrole will use thefirewallandselinuxroles to manage the ports used by thevpnrole.Optional: Configure connections from managed hosts to external hosts that are not listed in the inventory file by adding the following section to the
vpn_connectionslist of hosts:Copy to Clipboard Copied! Toggle word wrap Toggle overflow This configures two additional connections:
managed_node1-to-external_nodeandmanaged_node2-to-external_node.
The connections are configured only on the managed nodes and not on the external node.
Optional: You can specify multiple VPN connections for the managed nodes by using additional sections within
vpn_connections, for example a control plane and a data plane:Copy to Clipboard Copied! Toggle word wrap Toggle overflow -
Optional: You can modify the variables according to your preferences. For more details, see the
/usr/share/doc/rhel-system-roles/vpn/README.mdfile. Optional: Verify playbook syntax.
# ansible-playbook --syntax-check /path/to/file/playbook.yml -i /path/to/file/inventory_file
Run the playbook on your inventory file:
# ansible-playbook -i /path/to/file/inventory_file /path/to/file/playbook.yml
Verification
On the managed nodes, confirm that the connection is successfully loaded:
# ipsec status | grep connection.name
Replace connection.name with the name of the connection from this node, for example
managed_node1-to-managed_node2.
By default, the role generates a descriptive name for each connection it creates from the perspective of each system. For example, when creating a connection between managed_node1 and managed_node2, the descriptive name of this connection on managed_node1 is managed_node1-to-managed_node2 but on managed_node2 the connection is named managed_node2-to-managed_node1.
On the managed nodes, confirm that the connection is successfully started:
# ipsec trafficstatus | grep connection.name
Optional: If a connection did not successfully load, manually add the connection by entering the following command. This provides more specific information indicating why the connection failed to establish:
# ipsec auto --add connection.name
Note: Any errors that may have occurred during the process of loading and starting the connection are reported in the logs, which can be found in
/var/log/pluto.log. Because these logs are hard to parse, try to manually add the connection to obtain log messages from the standard output instead.
15.2. Creating an opportunistic mesh VPN connection with IPsec by using the vpn System Role
You can use the vpn System Role to configure an opportunistic mesh VPN connection that uses certificates for authentication by running an Ansible playbook on the control node, which will configure all the managed nodes listed in an inventory file.
Authentication with certificates is configured by defining the auth_method: cert parameter in the playbook. The vpn System Role assumes that the IPsec Network Security Services (NSS) crypto library, which is defined in the /etc/ipsec.d directory, contains the necessary certificates. By default, the node name is used as the certificate nickname. In this example, this is managed_node1. You can define different certificate names by using the cert_name attribute in your inventory.
In the following example procedure, the control node, which is the system from which you will run the Ansible playbook, shares the same classless inter-domain routing (CIDR) number as both of the managed nodes (192.0.2.0/24) and has the IP address 192.0.2.7. Therefore, the control node falls under the private policy which is automatically created for CIDR 192.0.2.0/24.
To prevent SSH connection loss during the play, a clear policy for the control node is included in the list of policies. Note that there is also an item in the policies list where the CIDR is equal to default. This is because this playbook overrides the rule from the default policy to make it private instead of private-or-clear.
Prerequisites
- Access and permissions to one or more managed nodes, which are systems you want to configure with the vpn System Role.
  - On all the managed nodes, the NSS database in the /etc/ipsec.d directory contains all the certificates necessary for peer authentication. By default, the node name is used as the certificate nickname.
- Access and permissions to a control node, which is a system from which Red Hat Ansible Core configures other systems.
  On the control node:
  - The ansible-core and rhel-system-roles packages are installed.
RHEL 8.0-8.5 provided access to a separate Ansible repository that contains Ansible Engine 2.9 for automation based on Ansible. Ansible Engine contains command-line utilities such as ansible, ansible-playbook, connectors such as docker and podman, and many plugins and modules. For information about how to obtain and install Ansible Engine, see the How to download and install Red Hat Ansible Engine Knowledgebase article.
RHEL 8.6 and 9.0 have introduced Ansible Core (provided as the ansible-core package), which contains the Ansible command-line utilities, commands, and a small set of built-in Ansible plugins. RHEL provides this package through the AppStream repository, and it has a limited scope of support. For more information, see the Scope of support for the Ansible Core package included in the RHEL 9 and RHEL 8.6 and later AppStream repositories Knowledgebase article.
- An inventory file which lists the managed nodes.
Procedure
Create a new playbook.yml file with the following content:
Note: Because vpn_manage_firewall and vpn_manage_selinux are both set to true, the vpn role uses the firewall and selinux roles to manage the ports used by the vpn role.
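A minimal sketch of such a playbook; the host names, CIDR values, and policy entries follow the example described above and are otherwise illustrative assumptions:

---
- name: Mesh VPN
  hosts: managed_node1, managed_node2, managed_node3
  roles:
    - rhel-system-roles.vpn
  vars:
    vpn_connections:
      - opportunistic: true
        auth_method: cert
        policies:
          - policy: private
            cidr: default
          - policy: private-or-clear
            cidr: 198.51.100.0/24
          - policy: private
            cidr: 192.0.2.0/24
          - policy: clear
            cidr: 192.0.2.7/32
    vpn_manage_firewall: true
    vpn_manage_selinux: true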
Optional: You can modify the variables according to your preferences. For more details, see the
/usr/share/doc/rhel-system-roles/vpn/README.mdfile. Optional: Verify playbook syntax.
# ansible-playbook --syntax-check playbook.yml
Run the playbook on your inventory file:
# ansible-playbook -i inventory_file /path/to/file/playbook.yml
Chapter 16. Setting a custom cryptographic policy by using the crypto-policies RHEL System Role
As an administrator, you can use the crypto_policies RHEL System Role to quickly and consistently configure custom cryptographic policies across many different systems using the Ansible Core package.
16.1. crypto_policies System Role variables and facts
In a crypto_policies System Role playbook, you can define the parameters for the crypto_policies configuration file according to your preferences and limitations.
If you do not configure any variables, the System Role does not configure the system and only reports the facts.
Selected variables for the crypto_policies System Role
crypto_policies_policy- Determines the cryptographic policy the System Role applies to the managed nodes. For details about the different crypto policies, see System-wide cryptographic policies .
crypto_policies_reload-
If set to
yes, the affected services, currently theipsec,bind, andsshdservices, reload after applying a crypto policy. Defaults toyes. crypto_policies_reboot_ok-
If set to
yes, and a reboot is necessary after the System Role changes the crypto policy, it setscrypto_policies_reboot_requiredtoyes. Defaults tono.
Facts set by the crypto_policies System Role
crypto_policies_active- Lists the currently selected policy.
crypto_policies_available_policies - Lists all policies available on the system.
crypto_policies_available_subpolicies - Lists all subpolicies available on the system.
16.2. Setting a custom cryptographic policy using the crypto_policies System Role
You can use the crypto_policies System Role to configure a large number of managed nodes consistently from a single control node.
Prerequisites
- Access and permissions to one or more managed nodes, which are systems you want to configure with the crypto_policies System Role.
- Access and permissions to a control node, which is a system from which Red Hat Ansible Core configures other systems.
  On the control node:
  - The ansible-core and rhel-system-roles packages are installed.
RHEL 8.0-8.5 provided access to a separate Ansible repository that contains Ansible Engine 2.9 for automation based on Ansible. Ansible Engine contains command-line utilities such as ansible, ansible-playbook, connectors such as docker and podman, and many plugins and modules. For information about how to obtain and install Ansible Engine, see the How to download and install Red Hat Ansible Engine Knowledgebase article.
RHEL 8.6 and 9.0 have introduced Ansible Core (provided as the ansible-core package), which contains the Ansible command-line utilities, commands, and a small set of built-in Ansible plugins. RHEL provides this package through the AppStream repository, and it has a limited scope of support. For more information, see the Scope of support for the Ansible Core package included in the RHEL 9 and RHEL 8.6 and later AppStream repositories Knowledgebase article.
- An inventory file which lists the managed nodes.
Procedure
Create a new playbook.yml file with the following content:
You can replace the FUTURE value with your preferred crypto policy, for example:
DEFAULT, LEGACY, or FIPS:OSPP.
The crypto_policies_reboot_ok: true variable causes the system to reboot after the System Role changes the cryptographic policy.
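A minimal sketch of such a playbook, using the two variables described above:

---
- hosts: all
  tasks:
    - name: Configure crypto policies
      include_role:
        name: rhel-system-roles.crypto_policies
      vars:
        crypto_policies_policy: FUTURE
        crypto_policies_reboot_ok: true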
Optional: Verify playbook syntax.
# ansible-playbook --syntax-check playbook.yml
Run the playbook on your inventory file:
# ansible-playbook -i inventory_file playbook.yml
Verification
On the control node, create another playbook named, for example, verify_playbook.yml:
This playbook does not change any configurations on the system, only reports the active policy on the managed nodes.
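The verification playbook can look like the following sketch, which only gathers and prints the crypto_policies_active fact:

---
- hosts: all
  tasks:
    - name: Verify active crypto policy
      include_role:
        name: rhel-system-roles.crypto_policies

    - debug:
        var: crypto_policies_active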
Run the playbook on the same inventory file:
# ansible-playbook -i inventory_file verify_playbook.yml
The crypto_policies_active variable shows the policy active on the managed node.
Chapter 17. Configuring NBDE by using RHEL System Roles
17.1. Introduction to the nbde_client and nbde_server System Roles (Clevis and Tang)
RHEL System Roles is a collection of Ansible roles and modules that provide a consistent configuration interface to remotely manage multiple RHEL systems.
You can use Ansible roles for automated deployments of Policy-Based Decryption (PBD) solutions using Clevis and Tang. The rhel-system-roles package contains these system roles, the related examples, and also the reference documentation.
The nbde_client System Role enables you to deploy multiple Clevis clients in an automated way. Note that the nbde_client role supports only Tang bindings, and you cannot use it for TPM2 bindings at the moment.
The nbde_client role requires volumes that are already encrypted using LUKS. This role supports binding a LUKS-encrypted volume to one or more Network-Bound Disk Encryption (NBDE) servers - Tang servers. You can either preserve the existing volume encryption with a passphrase or remove it. After removing the passphrase, you can unlock the volume only by using NBDE. This is useful when a volume is initially encrypted using a temporary key or password that you should remove after you provision the system.
If you provide both a passphrase and a key file, the role uses what you have provided first. If it does not find any of these valid, it attempts to retrieve a passphrase from an existing binding.
PBD defines a binding as a mapping of a device to a slot. This means that you can have multiple bindings for the same device. The default slot is slot 1.
The nbde_client role also provides the state variable. Use the present value for either creating a new binding or updating an existing one. In contrast to the clevis luks bind command, you can also use state: present to overwrite an existing binding in its device slot. The absent value removes a specified binding.
Using the nbde_server System Role, you can deploy and manage a Tang server as part of an automated disk encryption solution. This role supports the following features:
- Rotating Tang keys
- Deploying and backing up Tang keys
17.2. Using the nbde_server System Role for setting up multiple Tang servers
Follow the steps to prepare and apply an Ansible playbook containing your Tang server settings.
Prerequisites
- Access and permissions to one or more managed nodes, which are systems you want to configure with the nbde_server System Role.
- Access and permissions to a control node, which is a system from which Red Hat Ansible Core configures other systems.
  On the control node:
  - The ansible-core and rhel-system-roles packages are installed.
RHEL 8.0-8.5 provided access to a separate Ansible repository that contains Ansible Engine 2.9 for automation based on Ansible. Ansible Engine contains command-line utilities such as ansible, ansible-playbook, connectors such as docker and podman, and many plugins and modules. For information about how to obtain and install Ansible Engine, see the How to download and install Red Hat Ansible Engine Knowledgebase article.
RHEL 8.6 and 9.0 have introduced Ansible Core (provided as the ansible-core package), which contains the Ansible command-line utilities, commands, and a small set of built-in Ansible plugins. RHEL provides this package through the AppStream repository, and it has a limited scope of support. For more information, see the Scope of support for the Ansible Core package included in the RHEL 9 and RHEL 8.6 and later AppStream repositories Knowledgebase article.
- An inventory file which lists the managed nodes.
Procedure
Prepare your playbook containing settings for Tang servers. You can either start from scratch or use one of the example playbooks from the /usr/share/ansible/roles/rhel-system-roles.nbde_server/examples/ directory:
# cp /usr/share/ansible/roles/rhel-system-roles.nbde_server/examples/simple_deploy.yml ./my-tang-playbook.yml
Edit the playbook in a text editor of your choice, for example:
# vi my-tang-playbook.yml
Add the required parameters. The following example playbook ensures deploying of your Tang server and a key rotation:
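The edited playbook can look like the following sketch, which enables key rotation and lets the role manage the firewall and SELinux settings:

---
- hosts: all
  vars:
    nbde_server_rotate_keys: yes
    nbde_server_manage_firewall: true
    nbde_server_manage_selinux: true
  roles:
    - rhel-system-roles.nbde_server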
Note: Because nbde_server_manage_firewall and nbde_server_manage_selinux are both set to true, the nbde_server role uses the firewall and selinux roles to manage the ports used by the nbde_server role.
Apply the finished playbook:
# ansible-playbook -i inventory-file my-tang-playbook.yml
Where:
- inventory-file is the inventory file.
- my-tang-playbook.yml is the playbook you use.
To ensure that networking for a Tang pin is available during early boot, use the grubby tool on the systems where Clevis is installed:
# grubby --update-kernel=ALL --args="rd.neednet=1"
17.3. Using the nbde_client System Role for setting up multiple Clevis clients
Follow the steps to prepare and apply an Ansible playbook containing your Clevis client settings.
The nbde_client System Role supports only Tang bindings. This means that you cannot use it for TPM2 bindings at the moment.
Prerequisites
-
Access and permissions to one or more managed nodes, which are systems you want to configure with the
nbde_clientSystem Role. - Access and permissions to a control node, which is a system from which Red Hat Ansible Core configures other systems.
- The Ansible Core package is installed on the control machine.
-
The
rhel-system-rolespackage is installed on the system from which you want to run the playbook.
Procedure
Prepare your playbook containing settings for Clevis clients. You can either start from scratch or use one of the example playbooks from the /usr/share/ansible/roles/rhel-system-roles.nbde_client/examples/ directory:
# cp /usr/share/ansible/roles/rhel-system-roles.nbde_client/examples/high_availability.yml ./my-clevis-playbook.yml
Edit the playbook in a text editor of your choice, for example:
# vi my-clevis-playbook.yml
Add the required parameters. The following example playbook configures Clevis clients for automated unlocking of two LUKS-encrypted volumes when at least one of two Tang servers is available:
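The edited playbook can look like the following sketch; the device paths, key file location, and Tang server URLs are illustrative assumptions:

---
- hosts: all
  vars:
    nbde_client_bindings:
      - device: /dev/rhel/root
        encryption_key_src: /etc/luks/keyfile
        servers:
          - http://server1.example.com
          - http://server2.example.com
      - device: /dev/rhel/swap
        encryption_key_src: /etc/luks/keyfile
        servers:
          - http://server1.example.com
          - http://server2.example.com
  roles:
    - rhel-system-roles.nbde_client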
Apply the finished playbook:
# ansible-playbook -i host1,host2,host3 my-clevis-playbook.yml
To ensure that networking for a Tang pin is available during early boot, use the grubby tool on the system where Clevis is installed:
# grubby --update-kernel=ALL --args="rd.neednet=1"
Chapter 18. Requesting certificates using RHEL System Roles
With the certificate System Role, you can use Red Hat Ansible Core to issue and manage certificates.
This chapter covers the following topics:
18.1. The certificate System Role
Using the certificate System Role, you can manage issuing and renewing TLS and SSL certificates using Ansible Core.
The role uses certmonger as the certificate provider, and currently supports issuing and renewing self-signed certificates and using the IdM integrated certificate authority (CA).
You can use the following variables in your Ansible playbook with the certificate System Role:
certificate_wait- to specify if the task should wait for the certificate to be issued.
certificate_requests- to represent each certificate to be issued and its parameters.
18.2. Requesting a new self-signed certificate using the certificate System Role
With the certificate System Role, you can use Ansible Core to issue self-signed certificates.
This process uses the certmonger provider and requests the certificate through the getcert command.
By default, certmonger automatically tries to renew the certificate before it expires. You can disable this by setting the auto_renew parameter in the Ansible playbook to no.
Prerequisites
- The Ansible Core package is installed on the control machine.
-
You have the
rhel-system-rolespackage installed on the system from which you want to run the playbook.
Procedure
Optional: Create an inventory file, for example inventory.file:
$ touch inventory.file
Open your inventory file and define the hosts on which you want to request the certificate, for example:
[webserver]
server.idm.example.com
Create a playbook file, for example
request-certificate.yml:-
Set
hoststo include the hosts on which you want to request the certificate, such aswebserver. Set the
certificate_requestsvariable to include the following:-
Set the
nameparameter to the desired name of the certificate, such asmycert. -
Set the
dnsparameter to the domain to be included in the certificate, such as*.example.com. -
Set the
caparameter toself-sign.
-
Set the
Set the
rhel-system-roles.certificaterole underroles.This is the playbook file for this example:
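Based on the parameters described above, the playbook can look like the following sketch:

---
- hosts: webserver
  vars:
    certificate_requests:
      - name: mycert
        dns: "*.example.com"
        ca: self-sign
  roles:
    - rhel-system-roles.certificate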
-
Set
- Save the file.
Run the playbook:
$ ansible-playbook -i inventory.file request-certificate.yml
18.3. Requesting a new certificate from IdM CA using the certificate System Role
With the certificate System Role, you can use Ansible Core to issue certificates while using an IdM server with an integrated certificate authority (CA). Therefore, you can efficiently and consistently manage the certificate trust chain for multiple systems when using IdM as the CA.
This process uses the certmonger provider and requests the certificate through the getcert command.
By default, certmonger automatically tries to renew the certificate before it expires. You can disable this by setting the auto_renew parameter in the Ansible playbook to no.
Prerequisites
- The Ansible Core package is installed on the control machine.
-
You have the
rhel-system-rolespackage installed on the system from which you want to run the playbook.
Procedure
Optional: Create an inventory file, for example inventory.file:
$ touch inventory.file
Open your inventory file and define the hosts on which you want to request the certificate, for example:
[webserver]
server.idm.example.com
Create a playbook file, for example
request-certificate.yml:-
Set
hoststo include the hosts on which you want to request the certificate, such aswebserver. Set the
certificate_requestsvariable to include the following:-
Set the
nameparameter to the desired name of the certificate, such asmycert. -
Set the
dnsparameter to the domain to be included in the certificate, such aswww.example.com. -
Set the
principalparameter to specify the Kerberos principal, such asHTTP/www.example.com@EXAMPLE.COM. -
Set the
caparameter toipa.
-
Set the
Set the
rhel-system-roles.certificaterole underroles.This is the playbook file for this example:
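Based on the parameters described above, the playbook can look like the following sketch:

---
- hosts: webserver
  vars:
    certificate_requests:
      - name: mycert
        dns: www.example.com
        principal: HTTP/www.example.com@EXAMPLE.COM
        ca: ipa
  roles:
    - rhel-system-roles.certificate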
-
Set
- Save the file.
Run the playbook:
$ ansible-playbook -i inventory.file request-certificate.yml
18.4. Specifying commands to run before or after certificate issuance using the certificate System Role Link kopierenLink in die Zwischenablage kopiert!
With the certificate System Role, you can use Ansible Core to execute a command before and after a certificate is issued or renewed.
In the following example, the administrator ensures that the httpd service is stopped before a self-signed certificate for www.example.com is issued or renewed, and started again afterwards.
By default, certmonger automatically tries to renew the certificate before it expires. You can disable this by setting the auto_renew parameter in the Ansible playbook to no.
Prerequisites
- The Ansible Core package is installed on the control machine.
-
You have the
rhel-system-rolespackage installed on the system from which you want to run the playbook.
Procedure
Optional: Create an inventory file, for example inventory.file:
$ touch inventory.file
Open your inventory file and define the hosts on which you want to request the certificate, for example:
[webserver]
server.idm.example.com
Create a playbook file, for example request-certificate.yml:
- Set hosts to include the hosts on which you want to request the certificate, such as webserver.
- Set the certificate_requests variable to include the following:
  - Set the name parameter to the desired name of the certificate, such as mycert.
  - Set the dns parameter to the domain to be included in the certificate, such as www.example.com.
  - Set the ca parameter to the CA you want to use to issue the certificate, such as self-sign.
  - Set the run_before parameter to the command you want to execute before this certificate is issued or renewed, such as systemctl stop httpd.service.
  - Set the run_after parameter to the command you want to execute after this certificate is issued or renewed, such as systemctl start httpd.service.
- Set the rhel-system-roles.certificate role under roles.
This is the playbook file for this example:
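A minimal sketch of such a playbook, assuming the webserver group defined in the inventory above and the parameter values described in the preceding steps:

---
- hosts: webserver
  vars:
    certificate_requests:
      - name: mycert
        dns: www.example.com
        ca: self-sign
        run_before: systemctl stop httpd.service
        run_after: systemctl start httpd.service
  roles:
    - rhel-system-roles.certificate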
- Save the file.
Run the playbook:
$ ansible-playbook -i inventory.file request-certificate.yml
Chapter 19. Configuring automatic crash dumps by using the kdump RHEL System Role Link kopierenLink in die Zwischenablage kopiert!
To manage kdump using Ansible, you can use the kdump role, which is one of the RHEL System Roles available in RHEL 7.9.
Using the kdump role enables you to specify where to save the contents of the system’s memory for later analysis.
19.1. The kdump RHEL System Role Link kopierenLink in die Zwischenablage kopiert!
The kdump System Role enables you to set basic kernel dump parameters on multiple systems.
19.2. kdump role parameters Link kopierenLink in die Zwischenablage kopiert!
The parameters used for the kdump RHEL System Role are:
| Role Variable | Description |
|---|---|
| kdump_path | The path to which vmcore is written. |
19.3. Configuring kdump using RHEL System Roles Link kopierenLink in die Zwischenablage kopiert!
You can set basic kernel dump parameters on multiple systems using the kdump System Role by running an Ansible playbook.
The kdump role replaces the kdump configuration of the managed hosts entirely by replacing the /etc/kdump.conf and /etc/sysconfig/kdump files. All previous kdump settings are lost when the role is applied, even if they are not specified by the role variables.
Prerequisites
- The Ansible Core package is installed on the control machine.
-
You have the
rhel-system-rolespackage installed on the system from which you want to run the playbook. -
You have an inventory file which lists the systems on which you want to deploy
kdump.
Procedure
Create a new playbook.yml file with the following content:
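A minimal sketch of such a playbook, assuming you only want to set the dump location with the kdump_path variable described in the previous section; the host group and path are placeholder values:

---
- hosts: kdump-test
  vars:
    kdump_path: /var/crash    # placeholder: directory where the crash dump is saved
  roles:
    - rhel-system-roles.kdump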
Optional: Verify the playbook syntax:
# ansible-playbook --syntax-check playbook.yml
Run the playbook on your inventory file:
# ansible-playbook -i inventory_file /path/to/file/playbook.yml
Chapter 20. Managing local storage using RHEL System Roles Link kopierenLink in die Zwischenablage kopiert!
To manage LVM and local file systems (FS) using Ansible, you can use the storage role, which is one of the RHEL System Roles available in RHEL 8.
Using the storage role enables you to automate administration of file systems on disks and logical volumes on multiple machines and across all versions of RHEL starting with RHEL 7.7.
For more information about RHEL System Roles and how to apply them, see Introduction to RHEL System Roles.
20.1. Introduction to the storage RHEL System Role Link kopierenLink in die Zwischenablage kopiert!
The storage role can manage:
- File systems on disks which have not been partitioned
- Complete LVM volume groups including their logical volumes and file systems
- MD RAID volumes and their file systems
With the storage role, you can perform the following tasks:
- Create a file system
- Remove a file system
- Mount a file system
- Unmount a file system
- Create LVM volume groups
- Remove LVM volume groups
- Create logical volumes
- Remove logical volumes
- Create RAID volumes
- Remove RAID volumes
- Create LVM volume groups with RAID
- Remove LVM volume groups with RAID
- Create encrypted LVM volume groups
- Create LVM logical volumes with RAID
20.2. Parameters that identify a storage device in the storage RHEL System Role Link kopierenLink in die Zwischenablage kopiert!
Your storage role configuration affects only the file systems, volumes, and pools that you list in the following variables.
storage_volumesList of file systems on all unpartitioned disks to be managed.
storage_volumescan also includeraidvolumes.Partitions are currently unsupported.
storage_poolsList of pools to be managed.
Currently the only supported pool type is LVM. With LVM, pools represent volume groups (VGs). Under each pool there is a list of volumes to be managed by the role. With LVM, each volume corresponds to a logical volume (LV) with a file system.
20.3. Example Ansible playbook to create an XFS file system on a block device Link kopierenLink in die Zwischenablage kopiert!
This section provides an example Ansible playbook. This playbook applies the storage role to create an XFS file system on a block device using the default parameters.
The storage role can create a file system only on an unpartitioned, whole disk or a logical volume (LV). It cannot create the file system on a partition.
Example 20.1. A playbook that creates XFS on /dev/sdb
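A minimal sketch of such a playbook; hosts: all is an assumption and should match the managed nodes in your inventory:

---
- hosts: all
  vars:
    storage_volumes:
      - name: barefs
        type: disk
        disks:
          - sdb
        fs_type: xfs
  roles:
    - rhel-system-roles.storage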
-
The volume name (
barefsin the example) is currently arbitrary. Thestoragerole identifies the volume by the disk device listed under thedisks:attribute. -
You can omit the
fs_type: xfsline because XFS is the default file system in RHEL 8. To create the file system on an LV, provide the LVM setup under the
disks:attribute, including the enclosing volume group. For details, see Example Ansible playbook to manage logical volumes.Do not provide the path to the LV device.
20.4. Example Ansible playbook to persistently mount a file system Link kopierenLink in die Zwischenablage kopiert!
This section provides an example Ansible playbook. This playbook applies the storage role to immediately and persistently mount an XFS file system.
Example 20.2. A playbook that mounts a file system on /dev/sdb to /mnt/data
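A minimal sketch of such a playbook; hosts: all is an assumption and should match the managed nodes in your inventory:

---
- hosts: all
  vars:
    storage_volumes:
      - name: barefs
        type: disk
        disks:
          - sdb
        fs_type: xfs
        mount_point: /mnt/data    # the file system is mounted here and added to /etc/fstab
  roles:
    - rhel-system-roles.storage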
-
This playbook adds the file system to the
/etc/fstabfile, and mounts the file system immediately. -
If the file system on the
/dev/sdbdevice or the mount point directory do not exist, the playbook creates them.
20.5. Example Ansible playbook to manage logical volumes Link kopierenLink in die Zwischenablage kopiert!
This section provides an example Ansible playbook. This playbook applies the storage role to create an LVM logical volume in a volume group.
Example 20.3. A playbook that creates a mylv logical volume in the myvg volume group
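A minimal sketch of such a playbook; the host group and the logical volume size are assumptions, the disks, volume group, file system, and mount point follow the description below:

---
- hosts: all
  vars:
    storage_pools:
      - name: myvg
        disks:
          - sda
          - sdb
          - sdc
        volumes:
          - name: mylv
            size: 2G              # placeholder size for the logical volume
            fs_type: ext4
            mount_point: /mnt
  roles:
    - rhel-system-roles.storage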
The
myvgvolume group consists of the following disks:-
/dev/sda -
/dev/sdb -
/dev/sdc
-
-
If the
myvgvolume group already exists, the playbook adds the logical volume to the volume group. -
If the
myvgvolume group does not exist, the playbook creates it. -
The playbook creates an Ext4 file system on the
mylvlogical volume, and persistently mounts the file system at/mnt.
20.6. Example Ansible playbook to enable online block discard Link kopierenLink in die Zwischenablage kopiert!
This section provides an example Ansible playbook. This playbook applies the storage role to mount an XFS file system with online block discard enabled.
Example 20.4. A playbook that enables online block discard on /mnt/data/
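A minimal sketch of such a playbook, enabling online discard through the discard mount option; the host group and disk are assumptions:

---
- hosts: all
  vars:
    storage_volumes:
      - name: barefs
        type: disk
        disks:
          - sdb
        fs_type: xfs
        mount_point: /mnt/data
        mount_options: discard    # enables online block discard for the mounted file system
  roles:
    - rhel-system-roles.storage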
20.7. Example Ansible playbook to create and mount an Ext4 file system Link kopierenLink in die Zwischenablage kopiert!
This section provides an example Ansible playbook. This playbook applies the storage role to create and mount an Ext4 file system.
Example 20.5. A playbook that creates Ext4 on /dev/sdb and mounts it at /mnt/data
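A minimal sketch of such a playbook; the host group is an assumption, the disk, label, and mount point follow the description below:

---
- hosts: all
  vars:
    storage_volumes:
      - name: barefs
        type: disk
        disks:
          - sdb
        fs_type: ext4
        fs_label: label-name
        mount_point: /mnt/data
  roles:
    - rhel-system-roles.storage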
-
The playbook creates the file system on the
/dev/sdbdisk. -
The playbook persistently mounts the file system at the
/mnt/datadirectory. -
The label of the file system is
label-name.
20.8. Example Ansible playbook to create and mount an ext3 file system Link kopierenLink in die Zwischenablage kopiert!
This section provides an example Ansible playbook. This playbook applies the storage role to create and mount an Ext3 file system.
Example 20.6. A playbook that creates Ext3 on /dev/sdb and mounts it at /mnt/data
-
The playbook creates the file system on the
/dev/sdbdisk. -
The playbook persistently mounts the file system at the
/mnt/datadirectory. -
The label of the file system is
label-name.
20.9. Example Ansible playbook to resize an existing Ext4 or Ext3 file system using the storage RHEL System Role Link kopierenLink in die Zwischenablage kopiert!
This section provides an example Ansible playbook. This playbook applies the storage role to resize an existing Ext4 or Ext3 file system on a block device.
Example 20.7. A playbook that sets up a single volume on a disk
- If the volume in the previous example already exists, run the same playbook with a different value for the size parameter to resize the volume. For example:
Example 20.8. A playbook that resizes ext4 on /dev/sdb
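A minimal sketch of such a playbook; the host group, mount point, and the new size value are placeholders you adapt to your setup:

---
- hosts: all
  vars:
    storage_volumes:
      - name: barefs
        type: disk
        disks:
          - sdb
        size: 12 GiB              # placeholder: the new size requested for the volume
        fs_type: ext4
        mount_point: /opt/barefs  # placeholder mount point
  roles:
    - rhel-system-roles.storage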
- The volume name (barefs in the example) is currently arbitrary. The Storage role identifies the volume by the disk device listed under the disks: attribute.
Using the Resizing action in other file systems can destroy the data on the device you are working on.
20.10. Example Ansible playbook to resize an existing file system on LVM using the storage RHEL System Role Link kopierenLink in die Zwischenablage kopiert!
This section provides an example Ansible playbook. This playbook applies the storage RHEL System Role to resize an LVM logical volume with a file system.
Using the Resizing action in other file systems can destroy the data on the device you are working on.
Example 20.9. A playbook that resizes the existing mylv1 and mylv2 logical volumes in the myvg volume group
This playbook resizes the following existing file systems:
-
The Ext4 file system on the
mylv1volume, which is mounted at/opt/mount1, resizes to 10 GiB. -
The Ext4 file system on the
mylv2volume, which is mounted at/opt/mount2, resizes to 50 GiB.
-
The Ext4 file system on the
20.11. Example Ansible playbook to create a swap volume using the storage RHEL System Role Link kopierenLink in die Zwischenablage kopiert!
This section provides an example Ansible playbook. This playbook applies the storage role to create a swap volume, if it does not exist, or to modify the swap volume, if it already exists, on a block device using the default parameters.
Example 20.10. A playbook that creates or modifies a swap volume on /dev/sdb
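A minimal sketch of such a playbook; the host group and size are placeholders:

---
- hosts: all
  vars:
    storage_volumes:
      - name: swap_fs
        type: disk
        disks:
          - sdb
        size: 15 GiB              # placeholder size for the swap volume
        fs_type: swap
  roles:
    - rhel-system-roles.storage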
-
The volume name (
swap_fsin the example) is currently arbitrary. Thestoragerole identifies the volume by the disk device listed under thedisks:attribute.
20.12. Configuring a RAID volume using the storage System Role Link kopierenLink in die Zwischenablage kopiert!
With the storage System Role, you can configure a RAID volume on RHEL using Red Hat Ansible Automation Platform and Ansible-Core. Create an Ansible playbook with the parameters to configure a RAID volume to suit your requirements.
Prerequisites
- The Ansible Core package is installed on the control machine.
-
You have the
rhel-system-rolespackage installed on the system from which you want to run the playbook. -
You have an inventory file detailing the systems on which you want to deploy a RAID volume using the
storageSystem Role.
Procedure
Create a new playbook.yml file with the following content:
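A minimal sketch of such a playbook; the host group, volume name, disk list, and RAID level are placeholders you adapt to your environment:

---
- hosts: all
  vars:
    storage_safe_mode: false            # allow the role to reformat the listed disks
    storage_volumes:
      - name: data
        type: raid
        disks: [sdd, sde, sdf, sdg]     # placeholders; see the warning about disk names below
        raid_level: raid1
        mount_point: /mnt/data
        state: present
  roles:
    - rhel-system-roles.storage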
Warning: Device names might change in certain circumstances, for example, when you add a new disk to a system. Therefore, to prevent data loss, do not use specific disk names in the playbook.
Optional: Verify the playbook syntax:
ansible-playbook --syntax-check playbook.yml
# ansible-playbook --syntax-check playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow Run the playbook:
ansible-playbook -i inventory.file /path/to/file/playbook.yml
# ansible-playbook -i inventory.file /path/to/file/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow
20.13. Configuring an LVM pool with RAID using the storage RHEL System Role Link kopierenLink in die Zwischenablage kopiert!
With the storage System Role, you can configure an LVM pool with RAID on RHEL using Red Hat Ansible Automation Platform. In this section you will learn how to set up an Ansible playbook with the available parameters to configure an LVM pool with RAID.
Prerequisites
- The Ansible Core package is installed on the control machine.
-
You have the
rhel-system-rolespackage installed on the system from which you want to run the playbook. -
You have an inventory file detailing the systems on which you want to configure an LVM pool with RAID using the
storageSystem Role.
Procedure
Create a new
playbook.ymlfile with the following content:Copy to Clipboard Copied! Toggle word wrap Toggle overflow NoteTo create an LVM pool with RAID, you must specify the RAID type using the
raid_levelparameter.Optional. Verify playbook syntax.
ansible-playbook --syntax-check playbook.yml
# ansible-playbook --syntax-check playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow Run the playbook on your inventory file:
ansible-playbook -i inventory.file /path/to/file/playbook.yml
# ansible-playbook -i inventory.file /path/to/file/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow
20.14. Example Ansible playbook to compress and deduplicate a VDO volume on LVM using the storage RHEL System Role Link kopierenLink in die Zwischenablage kopiert!
This section provides an example Ansible playbook. This playbook applies the storage RHEL System Role to enable compression and deduplication of Logical Volumes (LVM) using Virtual Data Optimizer (VDO).
Example 20.11. A playbook that creates a mylv1 LVM VDO volume in the myvg volume group
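A minimal sketch of such a playbook; the host group, sizes, and mount point are placeholders, while the compression, deduplication, vdo_pool_size, and size parameters are explained below:

---
- hosts: all
  vars:
    storage_pools:
      - name: myvg
        disks:
          - /dev/sdb
        volumes:
          - name: mylv1
            compression: true
            deduplication: true
            vdo_pool_size: 10 GiB     # actual space used on the device (placeholder)
            size: 30 GiB              # virtual size of the VDO volume (placeholder)
            mount_point: /mnt/app     # placeholder mount point
  roles:
    - rhel-system-roles.storage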
In this example, the compression and deduplication parameters are set to true, which specifies that VDO is used. The following describes the usage of these parameters:
- The deduplication parameter deduplicates the redundant data stored on the storage volume.
- The compression parameter compresses the data stored on the storage volume, which results in more storage capacity.
- The vdo_pool_size parameter specifies the actual size the volume takes on the device. The virtual size of the VDO volume is set by the size parameter.
Note: Because the storage role uses LVM VDO, only one volume per pool can use compression and deduplication.
20.15. Creating a LUKS2 encrypted volume using the storage RHEL System Role Link kopierenLink in die Zwischenablage kopiert!
You can use the storage role to create and configure a volume encrypted with LUKS by running an Ansible playbook.
Prerequisites
-
Access and permissions to one or more managed nodes, which are systems you want to configure with the
crypto_policiesSystem Role. - An inventory file, which lists the managed nodes.
-
Access and permissions to a control node, which is a system from which Red Hat Ansible Core configures other systems. On the control node, the
ansible-coreandrhel-system-rolespackages are installed.
RHEL 8.0-8.5 provided access to a separate Ansible repository that contains Ansible Engine 2.9 for automation based on Ansible. Ansible Engine contains command-line utilities such as ansible, ansible-playbook, connectors such as docker and podman, and many plugins and modules. For information about how to obtain and install Ansible Engine, see the How to download and install Red Hat Ansible Engine Knowledgebase article.
RHEL 8.6 and 9.0 have introduced Ansible Core (provided as the ansible-core package), which contains the Ansible command-line utilities, commands, and a small set of built-in Ansible plugins. RHEL provides this package through the AppStream repository, and it has a limited scope of support. For more information, see the Scope of support for the Ansible Core package included in the RHEL 9 and RHEL 8.6 and later AppStream repositories Knowledgebase article.
Procedure
Create a new
playbook.ymlfile with the following content:Copy to Clipboard Copied! Toggle word wrap Toggle overflow You can also add the other encryption parameters such as
encryption_key,encryption_cipher,encryption_key_size, andencryption_luksversion in the playbook.yml file.Optional: Verify playbook syntax:
ansible-playbook --syntax-check playbook.yml
# ansible-playbook --syntax-check playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow Run the playbook on your inventory file:
ansible-playbook -i inventory.file /path/to/file/playbook.yml
# ansible-playbook -i inventory.file /path/to/file/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow
Verification
View the encryption status:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Verify the created LUKS encrypted volume:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow View the
cryptsetupparameters in theplaybook.ymlfile, which thestoragerole supports:Copy to Clipboard Copied! Toggle word wrap Toggle overflow
20.16. Example Ansible playbook to express pool volume sizes as percentage using the storage RHEL System Role Link kopierenLink in die Zwischenablage kopiert!
This section provides an example Ansible playbook. This playbook applies the storage System Role to enable you to express Logical Volume Manager (LVM) volume sizes as a percentage of the pool’s total size.
Example 20.12. A playbook that expresses volume sizes as a percentage of the pool’s total size
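A minimal sketch of such a playbook; the host group, volume names, and mount points are placeholders:

---
- hosts: all
  vars:
    storage_pools:
      - name: myvg
        disks:
          - /dev/sdb
        volumes:
          - name: data
            size: 60%                 # percentage of the pool's total size
            mount_point: /opt/mount/data
          - name: web
            size: 30%
            mount_point: /opt/mount/web
          - name: cache
            size: 50 GiB              # sizes can also be given in human-readable units
            mount_point: /opt/cache/mount
  roles:
    - rhel-system-roles.storage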
This example specifies the size of LVM volumes as a percentage of the pool size, for example, "60%". Alternatively, you can also specify the size of LVM volumes in human-readable units, for example, "10g" or "50 GiB".
Chapter 21. Configuring time synchronization by using the timesync RHEL System Role Link kopierenLink in die Zwischenablage kopiert!
With the timesync RHEL System Role, you can manage time synchronization on multiple target machines on RHEL using Red Hat Ansible Automation Platform.
21.1. The timesync RHEL System Role Link kopierenLink in die Zwischenablage kopiert!
You can manage time synchronization on multiple target machines using the timesync RHEL System Role.
The timesync role installs and configures an NTP or PTP implementation to operate as an NTP client or PTP replica in order to synchronize the system clock with NTP servers or grandmasters in PTP domains.
Note that using the timesync role also facilitates the migration to chrony, because you can use the same playbook on all versions of Red Hat Enterprise Linux starting with RHEL 6 regardless of whether the system uses ntp or chrony to implement the NTP protocol.
21.2. Applying the timesync System Role for a single pool of servers Link kopierenLink in die Zwischenablage kopiert!
The following example shows how to apply the timesync role in a situation with just one pool of servers.
The timesync role replaces the configuration of the given or detected provider service on the managed host. Previous settings are lost, even if they are not specified in the role variables. The only preserved setting is the choice of provider if the timesync_ntp_provider variable is not defined.
Prerequisites
- The Ansible Core package is installed on the control machine.
-
You have the
rhel-system-rolespackage installed on the system from which you want to run the playbook. -
You have an inventory file which lists the systems on which you want to deploy
timesyncSystem Role.
Procedure
Create a new playbook.yml file with the following content:
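A minimal sketch of such a playbook, using a single NTP pool; the host group and pool address are placeholders, and the pool and iburst flags are described in the timesync variables section below:

---
- hosts: timesync-test
  vars:
    timesync_ntp_servers:
      - hostname: 2.rhel.pool.ntp.org    # placeholder pool address
        pool: yes
        iburst: yes
  roles:
    - rhel-system-roles.timesync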
Optional: Verify the playbook syntax:
# ansible-playbook --syntax-check playbook.yml
Run the playbook on your inventory file:
# ansible-playbook -i inventory_file /path/to/file/playbook.yml
21.3. Applying the timesync System Role on client servers Link kopierenLink in die Zwischenablage kopiert!
You can use the timesync role to enable Network Time Security (NTS) on NTP clients. Network Time Security (NTS) is an authentication mechanism specified for Network Time Protocol (NTP). It verifies that NTP packets exchanged between the server and client are not altered.
The timesync role replaces the configuration of the given or detected provider service on the managed host. Previous settings are lost even if they are not specified in the role variables. The only preserved setting is the choice of provider if the timesync_ntp_provider variable is not defined.
Prerequisites
-
You do not have to have Red Hat Ansible Automation Platform installed on the systems on which you want to deploy the
timesyncsolution. -
You have the
rhel-system-rolespackage installed on the system from which you want to run the playbook. -
You have an inventory file which lists the systems on which you want to deploy the
timesyncSystem Role. -
The
chronyNTP provider version is 4.0 or later.
Procedure
Create a playbook.yml file with the following content. ptbtime1.ptb.de, used in the example, is a public server; you may want to use a different public server or your own server.
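A minimal sketch of such a playbook, enabling NTS for the server mentioned above; the host group is a placeholder, and the nts and iburst flags are described in the timesync variables section below:

---
- hosts: timesync-test
  vars:
    timesync_ntp_servers:
      - hostname: ptbtime1.ptb.de    # example NTS-enabled public server
        iburst: yes
        nts: yes
  roles:
    - rhel-system-roles.timesync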
Optional: Verify the playbook syntax:
# ansible-playbook --syntax-check playbook.yml
Run the playbook on your inventory file:
# ansible-playbook -i inventory_file /path/to/file/playbook.yml
Verification
Perform a test on the client machine:
chronyc -N authdata
# chronyc -N authdata Name/IP address Mode KeyID Type KLen Last Atmp NAK Cook CLen ===================================================================== ptbtime1.ptb.de NTS 1 15 256 157 0 0 8 100Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Check that the number of reported cookies is larger than zero.
21.4. timesync System Roles variables Link kopierenLink in die Zwischenablage kopiert!
You can pass the following variable to the timesync role:
-
timesync_ntp_servers:
| Role variable settings | Description |
|---|---|
| hostname: host.example.com | Hostname or address of the server |
| minpoll: number | Minimum polling interval. Default: 6 |
| maxpoll: number | Maximum polling interval. Default: 10 |
| iburst: yes | Flag enabling fast initial synchronization. Default: no |
| pool: yes | Flag indicating that each resolved address of the hostname is a separate NTP server. Default: no |
| nts: yes | Flag to enable Network Time Security (NTS). Default: no. Supported only with chrony >= 4.0. |
Chapter 22. Monitoring performance by using the metrics RHEL System Role Link kopierenLink in die Zwischenablage kopiert!
As a system administrator, you can use the metrics RHEL System Role with any Ansible Automation Platform control node to monitor the performance of a system.
22.1. Introduction to the metrics System Role Link kopierenLink in die Zwischenablage kopiert!
RHEL System Roles is a collection of Ansible roles and modules that provide a consistent configuration interface to remotely manage multiple RHEL systems. The metrics System Role configures performance analysis services for the local system and, optionally, includes a list of remote systems to be monitored by the local system. The metrics System Role enables you to use pcp to monitor your systems performance without having to configure pcp separately, as the set-up and deployment of pcp is handled by the playbook.
| Role variable | Description | Example usage |
|---|---|---|
| metrics_monitored_hosts |
List of remote hosts to be analyzed by the target host. These hosts will have metrics recorded on the target host, so ensure enough disk space exists below |
|
| metrics_retention_days | Configures the number of days for performance data retention before deletion. |
|
| metrics_graph_service |
A boolean flag that enables the host to be set up with services for performance data visualization via |
|
| metrics_query_service |
A boolean flag that enables the host to be set up with time series query services for querying recorded |
|
| metrics_provider |
Specifies which metrics collector to use to provide metrics. Currently, |
|
| metrics_manage_firewall |
Uses the |
|
| metrics_manage_selinux |
Uses the |
|
For details about the parameters used in metrics_connections and additional information about the metrics System Role, see the /usr/share/ansible/roles/rhel-system-roles.metrics/README.md file.
22.2. Using the metrics System Role to monitor your local system with visualization Link kopierenLink in die Zwischenablage kopiert!
This procedure describes how to use the metrics RHEL System Role to monitor your local system while simultaneously provisioning data visualization via Grafana.
Prerequisites
- The Ansible Core package is installed on the control machine.
-
You have the
rhel-system-rolespackage installed on the machine you want to monitor.
Procedure
Configure
localhostin the/etc/ansible/hostsAnsible inventory by adding the following content to the inventory:localhost ansible_connection=local
localhost ansible_connection=localCopy to Clipboard Copied! Toggle word wrap Toggle overflow Create an Ansible playbook with the following content:
Run the Ansible playbook:
ansible-playbook name_of_your_playbook.yml
# ansible-playbook name_of_your_playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow NoteSince the
metrics_graph_serviceboolean is set to value="yes",Grafanais automatically installed and provisioned withpcpadded as a data source. Since metrics_manage_firewall and metrics_manage_selinux are both set to true, the metrics role will use the firewall and selinux system roles to manage the ports used by the metrics role.-
To view visualization of the metrics being collected on your machine, access the
grafanaweb interface as described in Accessing the Grafana web UI.
22.3. Using the metrics System Role to setup a fleet of individual systems to monitor themselves Link kopierenLink in die Zwischenablage kopiert!
This procedure describes how to use the metrics System Role to set up a fleet of machines to monitor themselves.
Prerequisites
- The Ansible Core package is installed on the control machine.
-
You have the
rhel-system-rolespackage installed on the machine you want to use to run the playbook. - You have the SSH connection established.
Procedure
Add the name or IP of the machines you want to monitor via the playbook to the
/etc/ansible/hostsAnsible inventory file under an identifying group name enclosed in brackets:[remotes] webserver.example.com database.example.com
[remotes] webserver.example.com database.example.comCopy to Clipboard Copied! Toggle word wrap Toggle overflow Create an Ansible playbook with the following content:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow NoteSince
metrics_manage_firewallandmetrics_manage_selinuxare both set to true, the metrics role will use thefirewallandselinuxroles to manage the ports used by themetricsrole.Run the Ansible playbook:
ansible-playbook name_of_your_playbook.yml -k
# ansible-playbook name_of_your_playbook.yml -kCopy to Clipboard Copied! Toggle word wrap Toggle overflow
The -k option prompts for a password to connect to the remote system.
22.4. Using the metrics System Role to monitor a fleet of machines centrally via your local machine Link kopierenLink in die Zwischenablage kopiert!
This procedure describes how to use the metrics System Role to set up your local machine to centrally monitor a fleet of machines while also provisioning visualization of the data via grafana and querying of the data via redis.
Prerequisites
- The Ansible Core package is installed on the control machine.
-
You have the
rhel-system-rolespackage installed on the machine you want to use to run the playbook.
Procedure
Create an Ansible playbook with the following content:
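A minimal sketch of such a playbook, using the variables discussed in this section; the monitored host names and retention period are placeholders:

---
- name: Monitor a fleet of machines centrally
  hosts: localhost
  vars:
    metrics_graph_service: yes
    metrics_query_service: yes
    metrics_retention_days: 10                                                    # placeholder retention period
    metrics_monitored_hosts: ["database.example.com", "webserver.example.com"]    # placeholder host names
    metrics_manage_firewall: true
    metrics_manage_selinux: true
  roles:
    - rhel-system-roles.metrics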
Run the Ansible playbook:
ansible-playbook name_of_your_playbook.yml
# ansible-playbook name_of_your_playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow NoteSince the
metrics_graph_serviceandmetrics_query_servicebooleans are set to value="yes",grafanais automatically installed and provisioned withpcpadded as a data source with thepcpdata recording indexed intoredis, allowing thepcpquerying language to be used for complex querying of the data. Sincemetrics_manage_firewallandmetrics_manage_selinuxare both set to true, themetricsrole will use thefirewallandselinuxroles to manage the ports used by themetricsrole.-
To view graphical representation of the metrics being collected centrally by your machine and to query the data, access the
grafanaweb interface as described in Accessing the Grafana web UI.
22.5. Setting up authentication while monitoring a system using the metrics System Role Link kopierenLink in die Zwischenablage kopiert!
PCP supports the scram-sha-256 authentication mechanism through the Simple Authentication and Security Layer (SASL) framework. The metrics RHEL System Role automates the steps to set up authentication using the scram-sha-256 authentication mechanism. This procedure describes how to set up authentication using the metrics RHEL System Role.
Prerequisites
- The Ansible Core package is installed on the control machine.
-
You have the
rhel-system-rolespackage installed on the machine you want to use to run the playbook.
Procedure
Include the following variables in the Ansible playbook you want to setup authentication for:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow NoteSince
metrics_manage_firewallandmetrics_manage_selinuxare both set to true, themetricsrole will use thefirewallandselinuxroles to manage the ports used by themetricsrole.Run the Ansible playbook:
ansible-playbook name_of_your_playbook.yml
# ansible-playbook name_of_your_playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow
Verification steps
Verify the
saslconfiguration:pminfo -f -h "pcp://ip_adress?username=your_username" disk.dev.read
# pminfo -f -h "pcp://ip_adress?username=your_username" disk.dev.read Password: disk.dev.read inst [0 or "sda"] value 19540Copy to Clipboard Copied! Toggle word wrap Toggle overflow ip_adress should be replaced by the IP address of the host.
22.6. Using the metrics System Role to configure and enable metrics collection for SQL Server Link kopierenLink in die Zwischenablage kopiert!
This procedure describes how to use the metrics RHEL System Role to automate the configuration and enabling of metrics collection for Microsoft SQL Server via pcp on your local system.
Prerequisites
- The Ansible Core package is installed on the control machine.
-
You have the
rhel-system-rolespackage installed on the machine you want to monitor. - You have installed Microsoft SQL Server for Red Hat Enterprise Linux and established a 'trusted' connection to an SQL server. See Install SQL Server and create a database on Red Hat.
- You have installed the Microsoft ODBC driver for SQL Server for Red Hat Enterprise Linux. See Red Hat Enterprise Server and Oracle Linux.
Procedure
Configure
localhostin the/etc/ansible/hostsAnsible inventory by adding the following content to the inventory:localhost ansible_connection=local
localhost ansible_connection=localCopy to Clipboard Copied! Toggle word wrap Toggle overflow Create an Ansible playbook that contains the following content:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow NoteSince
metrics_manage_firewallandmetrics_manage_selinuxare both set to true, themetricsrole will use thefirewallandselinuxroles to manage the ports used by themetricsrole.Run the Ansible playbook:
ansible-playbook name_of_your_playbook.yml
# ansible-playbook name_of_your_playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow
Verification steps
Use the
pcpcommand to verify that SQL Server PMDA agent (mssql) is loaded and running:Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Chapter 23. Configuring Microsoft SQL Server using the microsoft.sql.server Ansible role Link kopierenLink in die Zwischenablage kopiert!
As an administrator, you can use the microsoft.sql.server Ansible role to install, configure, and start Microsoft SQL Server (SQL Server). The microsoft.sql.server Ansible role optimizes your operating system to improve performance and throughput for the SQL Server. The role simplifies and automates the configuration of your RHEL host with recommended settings to run the SQL Server workloads.
23.1. Prerequisites Link kopierenLink in die Zwischenablage kopiert!
- 2 GB of RAM
-
rootaccess to the managed node where you want to configure SQL Server Pre-configured firewall
You can set the
mssql_manage_firewallvariable totrueso that the role can manage firewall automatically.Alternatively, enable the connection on the SQL Server TCP port set with the
mssql_tcp_portvariable. If you do not define this variable, the role defaults to the TCP port number1433.To add a new port, use:
firewall-cmd --add-port=xxxx/tcp --permanent firewall-cmd --reload
# firewall-cmd --add-port=xxxx/tcp --permanent # firewall-cmd --reloadCopy to Clipboard Copied! Toggle word wrap Toggle overflow Replace xxxx with the TCP port number then reload the firewall rules.
-
Optional: Create a file with the
.sqlextension containing the SQL statements and procedures to input them to SQL Server.
23.2. Installing the microsoft.sql.server Ansible role Link kopierenLink in die Zwischenablage kopiert!
The microsoft.sql.server Ansible role is part of the ansible-collection-microsoft-sql package.
Prerequisites
-
rootaccess
Procedure
Install Ansible Core which is available in the RHEL 7.9 AppStream repository:
*yum install ansible-core*
# *yum install ansible-core*Copy to Clipboard Copied! Toggle word wrap Toggle overflow Install the
microsoft.sql.serverAnsible role:*yum install ansible-collection-microsoft-sql*
# *yum install ansible-collection-microsoft-sql*Copy to Clipboard Copied! Toggle word wrap Toggle overflow
23.3. Installing and configuring SQL server using the microsoft.sql.server Ansible role Link kopierenLink in die Zwischenablage kopiert!
You can use the microsoft.sql.server Ansible role to install and configure SQL server.
Prerequisites
- The Ansible inventory is created
Procedure
-
Create a file with the
.yml extension. For example, mssql-server.yml.
Add the following content to your .yml file, and replace <password> with your SQL Server password:
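A minimal sketch of such a playbook; the EULA-acceptance variables, mssql_password, and mssql_tcp_port are described elsewhere in this chapter, and the edition value is a placeholder:

---
- hosts: all
  vars:
    mssql_accept_microsoft_sql_server_standard_eula: true
    mssql_accept_microsoft_odbc_driver_17_for_sql_server_eula: true
    mssql_accept_microsoft_cli_utilities_for_sql_server_eula: true
    mssql_password: "<password>"
    mssql_edition: Developer        # placeholder; use the edition you are licensed for
    mssql_tcp_port: 1433
  roles:
    - microsoft.sql.server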
Run the
mssql-server.ymlansible playbook:*ansible-playbook mssql-server.yml*
# *ansible-playbook mssql-server.yml*Copy to Clipboard Copied! Toggle word wrap Toggle overflow
23.4. TLS variables Link kopierenLink in die Zwischenablage kopiert!
You can use the following variables to configure the Transport Level Security (TLS) protocol.
| Role variable | Description |
|---|---|
| mssql_tls_enable | This variable enables or disables TLS encryption.
The
When set to |
| mssql_tls_cert | To define this variable, enter the path to the TLS certificate file. |
| mssql_tls_private_key | To define this variable, enter the path to the private key file. |
| mssql_tls_remote_src |
Defines if the role searches for
When set to the default
When set to |
| mssql_tls_version | Define this variable to select which TLS version to use.
The default is |
| mssql_tls_force |
Set this variable to
The default is |
23.5. Accepting EULA for MLServices Link kopierenLink in die Zwischenablage kopiert!
You must accept all the EULAs for the open-source distributions of Python and R packages to install the required SQL Server Machine Learning Services (MLServices).
See /usr/share/doc/mssql-server for the license terms.
| Role variable | Description |
|---|---|
| mssql_accept_microsoft_sql_server_standard_eula |
This variable determines whether to accept the terms and conditions for installing the
To accept the terms and conditions set this variable to
The default is |
23.6. Accepting EULAs for Microsoft ODBC 17 Link kopierenLink in die Zwischenablage kopiert!
You must accept all the EULAs to install the Microsoft Open Database Connectivity (ODBC) driver.
See /usr/share/doc/msodbcsql17/LICENSE.txt and /usr/share/doc/mssql-tools/LICENSE.txt for the license terms.
| Role variable | Description |
|---|---|
| mssql_accept_microsoft_odbc_driver_17_for_sql_server_eula |
This variable determines whether to accept the terms and conditions for installing the
To accept the terms and conditions set this variable to
The default is |
| mssql_accept_microsoft_cli_utilities_for_sql_server_eula |
This variable determines whether to accept the terms and conditions for installing the
To accept the terms and conditions set this variable to
The default is |
23.7. High availability variables Link kopierenLink in die Zwischenablage kopiert!
You can configure high availability for Microsoft SQL Server with the variables from the table below.
| Variable | Description |
|---|---|
|
|
The default value is
When it is set to
|
|
|
This variable specifies which type of replica you can configure on the host. You can set this variable to |
|
|
The default port is The role uses this TCP port to replicate data for an Always On availability group. |
|
| You must define the name of the certificate to secure transactions between members of an Always On availability group. |
|
| You must set the password for the master key to use with the certificate. |
|
| You must set the password for the private key to use with the certificate. |
|
|
The default value is
If it is set to |
|
| You must define the name of the endpoint to configure. |
|
| You must define the name of the availability group to configure. |
|
| You can define a list of the databases to replicate, otherwise the role creates a cluster without replicating databases. |
|
| The SQL Server Pacemaker resource agent utilizes this user to perform database health checks and manage state transitions from replica to primary server. |
|
|
The password for the |
|
|
The default value is
This variable defines if this role runs the
Note that the
To work around this limitation, the
If you want the |
Note, this role backs up the database to the /var/opt/mssql/data/ directory.
For examples on how to use high availability variables for Microsoft SQL Server:
-
If you install the role from Automation Hub, see the
~/.ansible/collections/ansible_collections/microsoft/sql/roles/server/README.mdfile on your server. -
If you install the role from a package, open the
/usr/share/microsoft/sql-server/README.htmlfile in your browser.
Chapter 24. Configuring a system for session recording using the tlog RHEL System Role Link kopierenLink in die Zwischenablage kopiert!
With the tlog RHEL System Role, you can configure a system for terminal session recording on RHEL using Red Hat Ansible Automation Platform.
24.1. The tlog System Role Link kopierenLink in die Zwischenablage kopiert!
You can configure a RHEL system for terminal session recording on RHEL using the tlog RHEL System Role.
You can configure the recording to take place per user or user group by means of the SSSD service.
24.2. Components and parameters of the tlog System Role Link kopierenLink in die Zwischenablage kopiert!
The Session Recording solution has the following components:
-
The
tlogutility - System Security Services Daemon (SSSD)
- Optional: The web console interface
The parameters used for the tlog RHEL System Role are:
| Role Variable | Description |
|---|---|
| tlog_use_sssd (default: yes) | Configure session recording with SSSD, the preferred way of managing recorded users or groups |
| tlog_scope_sssd (default: none) | Configure SSSD recording scope - all / some / none |
| tlog_users_sssd (default: []) | YAML list of users to be recorded |
| tlog_groups_sssd (default: []) | YAML list of groups to be recorded |
-
For details about the parameters used in
tlogand additional information about thetlogSystem Role, see the/usr/share/ansible/roles/rhel-system-roles.tlog/README.mdfile.
24.3. Deploying the tlog RHEL System Role Link kopierenLink in die Zwischenablage kopiert!
Follow these steps to prepare and apply an Ansible playbook to configure a RHEL system to log session recording data to the systemd journal.
Prerequisites
-
You have set SSH keys for access from the control node to the target system where the
tlogSystem Role will be configured. -
You have at least one system that you want to configure the
tlogSystem Role. - The Ansible Core package is installed on the control machine.
-
The
rhel-system-rolespackage is installed on the control machine.
Procedure
Create a new
playbook.ymlfile with the following content:Copy to Clipboard Copied! Toggle word wrap Toggle overflow Where,
tlog_scope_sssd:-
somespecifies you want to record only certain users and groups, notallornone.
-
tlog_users_sssd:-
recorded-userspecifies the user you want to record a session from. Note that this does not add the user for you. You must set the user by yourself.
-
Optionally, verify the playbook syntax.
ansible-playbook --syntax-check playbook.yml
# ansible-playbook --syntax-check playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow Run the playbook on your inventory file:
ansible-playbook -i IP_Address /path/to/file/playbook.yml -v
# ansible-playbook -i IP_Address /path/to/file/playbook.yml -vCopy to Clipboard Copied! Toggle word wrap Toggle overflow
As a result, the playbook installs the tlog RHEL System Role on the system you specified. The role includes tlog-rec-session, a terminal session I/O logging program, that acts as the login shell for a user. It also creates an SSSD configuration drop file that can be used by the users and groups that you define. SSSD parses and reads these users and groups, and replaces their user shell with tlog-rec-session. Additionally, if the cockpit package is installed on the system, the playbook also installs the cockpit-session-recording package, which is a Cockpit module that allows you to view and play recordings in the web console interface.
Verification steps
To verify that the SSSD configuration drop file is created in the system, perform the following steps:
Navigate to the folder where the SSSD configuration drop file is created:
cd /etc/sssd/conf.d
# cd /etc/sssd/conf.dCopy to Clipboard Copied! Toggle word wrap Toggle overflow Check the file content:
cat /etc/sssd/conf.d/sssd-session-recording.conf
# cat /etc/sssd/conf.d/sssd-session-recording.confCopy to Clipboard Copied! Toggle word wrap Toggle overflow
You can see that the file contains the parameters you set in the playbook.
24.4. Deploying the tlog RHEL System Role for excluding lists of groups or users Link kopierenLink in die Zwischenablage kopiert!
You can use the tlog System Role to support the SSSD session recording configuration options exclude_users and exclude_groups. Follow these steps to prepare and apply an Ansible playbook to configure a RHEL system to exclude users or groups from having their sessions recorded and logged in the systemd journal.
Prerequisites
-
You have set SSH keys for access from the control node to the target system on which you want to configure the
tlogSystem Role. -
You have at least one system on which you want to configure the
tlogSystem Role. - The Ansible Core package is installed on the control machine.
-
The
rhel-system-rolespackage is installed on the control machine.
Procedure
Create a new
playbook.ymlfile with the following content:Copy to Clipboard Copied! Toggle word wrap Toggle overflow Where,
tlog_scope_sssd:-
all: specifies that you want to record all users and groups.
-
tlog_exclude_users_sssd:- user names: specifies the user names of the users you want to exclude from the session recording.
tlog_exclude_groups_sssd:-
adminsspecifies the group you want to exclude from the session recording.
-
Optionally, verify the playbook syntax;
ansible-playbook --syntax-check playbook.yml
# ansible-playbook --syntax-check playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow Run the playbook on your inventory file:
ansible-playbook -i IP_Address /path/to/file/playbook.yml -v
# ansible-playbook -i IP_Address /path/to/file/playbook.yml -vCopy to Clipboard Copied! Toggle word wrap Toggle overflow
As a result, the playbook installs the tlog RHEL System Role on the system you specified. The role includes tlog-rec-session, a terminal session I/O logging program, that acts as the login shell for a user. It also creates an /etc/sssd/conf.d/sssd-session-recording.conf SSSD configuration drop file that can be used by users and groups except those that you defined as excluded. SSSD parses and reads these users and groups, and replaces their user shell with tlog-rec-session. Additionally, if the cockpit package is installed on the system, the playbook also installs the cockpit-session-recording package, which is a Cockpit module that allows you to view and play recordings in the web console interface.
Verification steps
To verify that the SSSD configuration drop file is created in the system, perform the following steps:
Navigate to the folder where the SSSD configuration drop file is created:
cd /etc/sssd/conf.d
# cd /etc/sssd/conf.dCopy to Clipboard Copied! Toggle word wrap Toggle overflow Check the file content:
cat sssd-session-recording.conf
# cat sssd-session-recording.confCopy to Clipboard Copied! Toggle word wrap Toggle overflow
You can see that the file contains the parameters you set in the playbook.
24.5. Recording a session using the deployed tlog System Role in the CLI Link kopierenLink in die Zwischenablage kopiert!
After you have deployed the tlog System Role in the system you have specified, you are able to record a user terminal session using the command-line interface (CLI).
Prerequisites
-
You have deployed the
tlogSystem Role in the target system. -
The SSSD configuration drop file was created in the
/etc/sssd/conf.ddirectory. See Deploying the Terminal Session Recording RHEL System Role.
Procedure
Create a user and assign a password for this user:
useradd recorded-user passwd recorded-user
# useradd recorded-user # passwd recorded-userCopy to Clipboard Copied! Toggle word wrap Toggle overflow Log in to the system as the user you just created:
ssh recorded-user@localhost
# ssh recorded-user@localhostCopy to Clipboard Copied! Toggle word wrap Toggle overflow - Type "yes" when the system prompts you to type yes or no to authenticate.
Insert the recorded-user’s password.
The system displays a message about your session being recorded.
ATTENTION! Your session is being recorded!
ATTENTION! Your session is being recorded!Copy to Clipboard Copied! Toggle word wrap Toggle overflow After you have finished recording the session, type:
exit
# exitCopy to Clipboard Copied! Toggle word wrap Toggle overflow The system logs out from the user and closes the connection with the localhost.
As a result, the user session is recorded, stored and you can play it using a journal.
Verification steps
To view your recorded session in the journal, do the following steps:
Run the command below:
journalctl -o verbose -r
# journalctl -o verbose -rCopy to Clipboard Copied! Toggle word wrap Toggle overflow Search for the
MESSAGEfield of thetlog-recrecorded journal entry.journalctl -xel _EXE=/usr/bin/tlog-rec-session
# journalctl -xel _EXE=/usr/bin/tlog-rec-sessionCopy to Clipboard Copied! Toggle word wrap Toggle overflow
24.6. Watching a recorded session using the CLI Link kopierenLink in die Zwischenablage kopiert!
You can play a user session recording from a journal using the command-line interface (CLI).
Prerequisites
- You have recorded a user session. See Recording a session using the deployed tlog System Role in the CLI .
Procedure
On the CLI terminal, play the user session recording:
journalctl -o verbose -r
# journalctl -o verbose -rCopy to Clipboard Copied! Toggle word wrap Toggle overflow Search for the
tlogrecording:/tlog-rec
$ /tlog-recCopy to Clipboard Copied! Toggle word wrap Toggle overflow You can see details such as:
- The username for the user session recording
-
The
out_txtfield, a raw output encode of the recorded session - The identifier number TLOG_REC=ID_number
- Copy the identifier number TLOG_REC=ID_number.
Playback the recording using the identifier number TLOG_REC=ID_number.
tlog-play -r journal -M TLOG_REC=ID_number
# tlog-play -r journal -M TLOG_REC=ID_numberCopy to Clipboard Copied! Toggle word wrap Toggle overflow
As a result, you can see the user session recording terminal output being played back.
Chapter 25. Configuring a high-availability cluster by using the ha_cluster RHEL System Role Link kopierenLink in die Zwischenablage kopiert!
With the ha_cluster System Role, you can configure and manage a high-availability cluster that uses the Pacemaker high availability cluster resource manager.
25.1. ha_cluster System Role variables Link kopierenLink in die Zwischenablage kopiert!
In an ha_cluster System Role playbook, you define the variables for a high availability cluster according to the requirements of your cluster deployment.
The variables you can set for an ha_cluster System Role are as follows.
ha_cluster_enable_repos-
A boolean flag that enables the repositories containing the packages that are needed by the
ha_cluster System Role. When this variable is set to true, the default value, you must have active subscription coverage for RHEL and the RHEL High Availability Add-On on the systems that you will use as your cluster members, or the System Role will fail.
ha_clusterSystem Role manages the firewall. Whenha_cluster_manage_firewallis set totrue, the firewall high availability service and thefence-virtport are enabled. Whenha_cluster_manage_firewallis set tofalse, theha_clusterSystem Role does not manage the firewall. If your system is running thefirewalldservice, you must set the parameter totruein your playbook.You can use the
ha_cluster_manage_firewallparameter to add ports, but you cannot use the parameter to remove ports. To remove ports, use thefirewallSystem Role directly.As of RHEL 7.9, the firewall is no longer configured by default, because it is configured only when
ha_cluster_manage_firewallis set totrue.ha_cluster_manage_selinuxA boolean flag that determines whether the
ha_clusterSystem Role manages the ports belonging to the firewall high availability service using theselinuxSystem Role. Whenha_cluster_manage_selinuxis set totrue, the ports belonging to the firewall high availability service are associated with the SELinux port typecluster_port_t. Whenha_cluster_manage_selinuxis set tofalse, theha_clusterSystem Role does not manage SELinux.If your system is running the
selinuxservice, you must set this parameter totruein your playbook. Firewall configuration is a prerequisite for managing SELinux. If the firewall is not installed, the managing SELinux policy is skipped.You can use the
ha_cluster_manage_selinuxparameter to add policy, but you cannot use the parameter to remove policy. To remove policy, use theselinuxSystem Role directly.ha_cluster_cluster_presentA boolean flag which, if set to
true, determines that HA cluster will be configured on the hosts according to the variables passed to the role. Any cluster configuration not specified in the role and not supported by the role will be lost.If
ha_cluster_cluster_presentis set tofalse, all HA cluster configuration will be removed from the target hosts.The default value of this variable is
true.The following example playbook removes all cluster configuration on
node1andnode2Copy to Clipboard Copied! Toggle word wrap Toggle overflow ha_cluster_start_on_boot-
A boolean flag that determines whether cluster services will be configured to start on boot. The default value of this variable is
true. ha_cluster_fence_agent_packages-
List of fence agent packages to install. The default value of this variable is
fence-agents-all,fence-virt. ha_cluster_extra_packagesList of additional packages to be installed. The default value of this variable is no packages.
This variable can be used to install additional packages not installed automatically by the role, for example custom resource agents.
It is possible to specify fence agents as members of this list. However,
ha_cluster_fence_agent_packagesis the recommended role variable to use for specifying fence agents, so that its default value is overridden.ha_cluster_hacluster_password-
A string value that specifies the password of the
haclusteruser. Thehaclusteruser has full access to a cluster. It is recommended that you vault encrypt the password, as described in Encrypting content with Ansible Vault. There is no default password value, and this variable must be specified. ha_cluster_corosync_key_srcThe path to Corosync
authkeyfile, which is the authentication and encryption key for Corosync communication. It is highly recommended that you have a uniqueauthkeyvalue for each cluster. The key should be 256 bytes of random data.If you specify a key for this variable, it is recommended that you vault encrypt the key, as described in Encrypting content with Ansible Vault.
If no key is specified, a key already present on the nodes will be used. If nodes do not have the same key, a key from one node will be distributed to other nodes so that all nodes have the same key. If no node has a key, a new key will be generated and distributed to the nodes.
If this variable is set,
ha_cluster_regenerate_keysis ignored for this key.The default value of this variable is null.
ha_cluster_pacemaker_key_srcThe path to the Pacemaker
authkeyfile, which is the authentication and encryption key for Pacemaker communication. It is highly recommended that you have a uniqueauthkeyvalue for each cluster. The key should be 256 bytes of random data.If you specify a key for this variable, it is recommended that you vault encrypt the key, as described in Encrypting content with Ansible Vault.
If no key is specified, a key already present on the nodes will be used. If nodes do not have the same key, a key from one node will be distributed to other nodes so that all nodes have the same key. If no node has a key, a new key will be generated and distributed to the nodes.
If this variable is set,
ha_cluster_regenerate_keysis ignored for this key.The default value of this variable is null.
ha_cluster_fence_virt_key_srcThe path to the
fence-virtorfence-xvmpre-shared key file, which is the location of the authentication key for thefence-virtorfence-xvmfence agent.If you specify a key for this variable, it is recommended that you vault encrypt the key, as described in Encrypting content with Ansible Vault.
If no key is specified, a key already present on the nodes will be used. If nodes do not have the same key, a key from one node will be distributed to other nodes so that all nodes have the same key. If no node has a key, a new key will be generated and distributed to the nodes. If the
ha_clusterSystem Role generates a new key in this fashion, you should copy the key to your nodes' hypervisor to ensure that fencing works.If this variable is set,
ha_cluster_regenerate_keysis ignored for this key.The default value of this variable is null.
ha_cluster_pcsd_public_key_src, ha_cluster_pcsd_private_key_src
The path to the
pcsdTLS certificate and private key. If this is not specified, a certificate-key pair already present on the nodes will be used. If a certificate-key pair is not present, a random new one will be generated.If you specify a private key value for this variable, it is recommended that you vault encrypt the key, as described in Encrypting content with Ansible Vault.
If these variables are set,
ha_cluster_regenerate_keysis ignored for this certificate-key pair.The default value of these variables is null.
ha_cluster_pcsd_certificatesCreates a
pcsdprivate key and certificate using thecertificateSystem Role.If your system is not configured with a
pcsdprivate key and certificate, you can create them in one of two ways:-
Set the
ha_cluster_pcsd_certificatesvariable. When you set theha_cluster_pcsd_certificatesvariable, thecertificateSystem Role is used internally and it creates the private key and certificate forpcsdas defined. -
Do not set the
ha_cluster_pcsd_public_key_src,ha_cluster_pcsd_private_key_src, or theha_cluster_pcsd_certificatesvariables. If you do not set any of these variables, theha_clusterSystem Role will createpcsdcertificates by means ofpcsditself. The value ofha_cluster_pcsd_certificatesis set to the value of the variablecertificate_requestsas specified in thecertificateSystem Role. For more information about thecertificateSystem Role, see Requesting certificates using RHEL System Roles.
The following operational considerations apply to the use of the ha_cluster_pcsd_certificates variable:
Unless you are using IPA and joining the systems to an IPA domain, the
certificateSystem Role creates self-signed certificates. In this case, you must explicitly configure trust settings outside of the context of RHEL System Roles. System Roles do not support configuring trust settings. -
When you set the
ha_cluster_pcsd_certificatesvariable, do not set theha_cluster_pcsd_public_key_srcandha_cluster_pcsd_private_key_srcvariables. -
When you set the
ha_cluster_pcsd_certificates variable, ha_cluster_regenerate_keys is ignored for this certificate-key pair.
The default value of this variable is
[].For an example
ha_clusterSystem Role playbook that creates TLS certificates and key files in a high availability cluster, see Creating pcsd TLS certificates and key files for a high availability cluster.ha_cluster_regenerate_keys-
A boolean flag which, when set to
true, determines that pre-shared keys and TLS certificates will be regenerated. For more information about when keys and certificates will be regenerated, see the descriptions of theha_cluster_corosync_key_src,ha_cluster_pacemaker_key_src,ha_cluster_fence_virt_key_src,ha_cluster_pcsd_public_key_src, andha_cluster_pcsd_private_key_srcvariables. -
The default value of this variable is
false. ha_cluster_pcs_permission_listConfigures permissions to manage a cluster using
pcsd. The items you configure with this variable are as follows:-
type-userorgroup -
name- user or group name allow_list- Allowed actions for the specified user or group:-
read- View cluster status and settings -
write- Modify cluster settings except permissions and ACLs -
grant- Modify cluster permissions and ACLs -
full- Unrestricted access to a cluster including adding and removing nodes and access to keys and certificates
-
-
The structure of the ha_cluster_pcs_permission_list variable and its default values are as follows:
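The sketch below shows the documented default, which grants the haclient group read, write, and grant permissions; verify the exact defaults against the role installed on your system:
ha_cluster_pcs_permission_list:
  - type: group
    name: haclient
    allow_list:
      - grant
      - read
      - write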
ha_cluster_cluster_name
The name of the cluster. This is a string value with a default of my-cluster.
ha_cluster_transport
Sets the cluster transport method. The items you configure with this variable are as follows:
-
type(optional) - Transport type:knet,udp, orudpu. Theudpandudputransport types support only one link. Encryption is always disabled forudpandudpu. Defaults toknetif not specified. -
options(optional) - List of name-value dictionaries with transport options. -
links(optional) - List of list of name-value dictionaries. Each list of name-value dictionaries holds options for one Corosync link. It is recommended that you set thelinknumbervalue for each link. Otherwise, the first list of dictionaries is assigned by default to the first link, the second one to the second link, and so on. -
compression(optional) - List of name-value dictionaries configuring transport compression. Supported only with theknettransport type. crypto(optional) - List of name-value dictionaries configuring transport encryption. By default, encryption is enabled. Supported only with theknettransport type.For a list of allowed options, see the
pcs -h cluster setup help page or the setup description in the cluster section of the pcs(8) man page. For more detailed descriptions, see the corosync.conf(5) man page.
The structure of the ha_cluster_transport variable is as follows:
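The sketch below uses placeholder option names and values; confirm the exact keys against the role documentation on your system:
ha_cluster_transport:
  type: knet
  options:
    - name: transport_option_name
      value: transport_option_value
  links:
    - - name: link_option_name
        value: link_option_value
  compression:
    - name: compression_option_name
      value: compression_option_value
  crypto:
    - name: crypto_option_name
      value: crypto_option_value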
For an example ha_cluster System Role playbook that configures a transport method, see Configuring Corosync values in a high availability cluster.
-
ha_cluster_totemConfigures Corosync totem. For a list of allowed options, see the
pcs -h cluster setuphelp page or thesetupdescription in theclustersection of thepcs(8) man page. For a more detailed description, see thecorosync.conf(5) man page.The structure of the
ha_cluster_totemvariable is as follows:Copy to Clipboard Copied! Toggle word wrap Toggle overflow For an example
ha_clusterSystem Role playbook that configures a Corosync totem, see Configuring Corosync values in a high availability cluster.ha_cluster_quorumConfigures cluster quorum. You can configure the following items for cluster quorum:
-
options(optional) - List of name-value dictionaries configuring quorum. Allowed options are:auto_tie_breaker,last_man_standing,last_man_standing_window, andwait_for_all. For information about quorum options, see thevotequorum(5) man page. -
device (optional) - Configures the cluster to use a quorum device. By default, no quorum device is used.
- model (mandatory) - Specifies a quorum device model. Only net is supported.
- model_options (optional) - List of name-value dictionaries configuring the specified quorum device model. For model net, you must specify host and algorithm options.
  Use the pcs-address option to set a custom pcsd address and port to connect to the qnetd host. If you do not specify this option, the role connects to the default pcsd port on the host.
- generic_options (optional) - List of name-value dictionaries setting quorum device options that are not model specific.
- heuristics_options (optional) - List of name-value dictionaries configuring quorum device heuristics.
For information about quorum device options, see the corosync-qdevice(8) man page. The generic options are sync_timeout and timeout. For model net options, see the quorum.device.net section. For heuristics options, see the quorum.device.heuristics section.
To regenerate a quorum device TLS certificate, set the ha_cluster_regenerate_keys variable to true.
The structure of the ha_cluster_quorum variable is as follows:
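The sketch below uses placeholder names and values; the host and algorithm entries are required for the net model, and the algorithm value shown (lms) is one of the corosync-qnetd algorithms:
ha_cluster_quorum:
  options:
    - name: quorum_option_name
      value: quorum_option_value
  device:
    model: net
    model_options:
      - name: host
        value: qnetd-host-address
      - name: algorithm
        value: lms
    generic_options:
      - name: generic_option_name
        value: generic_option_value
    heuristics_options:
      - name: heuristics_option_name
        value: heuristics_option_value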
For an example ha_cluster System Role playbook that configures cluster quorum, see Configuring Corosync values in a high availability cluster. For an example ha_cluster System Role playbook that configures a cluster using a quorum device, see Configuring a high availability cluster using a quorum device.
ha_cluster_sbd_enabledA boolean flag which determines whether the cluster can use the SBD node fencing mechanism. The default value of this variable is
false.For an example
ha_clusterSystem Role playbook that enables SBD, see Configuring a high availability cluster with SBD node fencing.ha_cluster_sbd_optionsList of name-value dictionaries specifying SBD options. Supported options are:
-
delay-start- defaults tono -
startmode- defaults toalways -
timeout-action- defaults toflush,reboot watchdog-timeout- defaults to5For information about these options, see the
Configuration via environmentsection of thesbd(8) man page.
-
For an example
ha_clusterSystem Role playbook that configures SBD options, see Configuring a high availability cluster with SBD node fencing.When using SBD, you can optionally configure watchdog and SBD devices for each node in an inventory. For information about configuring watchdog and SBD devices in an inventory file, see Specifying an inventory for the ha_cluster System Role.
ha_cluster_cluster_propertiesList of sets of cluster properties for Pacemaker cluster-wide configuration. Only one set of cluster properties is supported.
The structure of a set of cluster properties is as follows:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow By default, no properties are set.
The following example playbook configures a cluster consisting of
node1andnode2and sets thestonith-enabledandno-quorum-policycluster properties.Copy to Clipboard Copied! Toggle word wrap Toggle overflow ha_cluster_resource_primitivesThis variable defines pacemaker resources configured by the System Role, including stonith resources, including stonith resources. You can configure the following items for each resource:
-
id(mandatory) - ID of a resource. -
agent(mandatory) - Name of a resource or stonith agent, for exampleocf:pacemaker:Dummyorstonith:fence_xvm. It is mandatory to specifystonith:for stonith agents. For resource agents, it is possible to use a short name, such asDummy, instead ofocf:pacemaker:Dummy. However, if several agents with the same short name are installed, the role will fail as it will be unable to decide which agent should be used. Therefore, it is recommended that you use full names when specifying a resource agent. -
instance_attrs(optional) - List of sets of the resource’s instance attributes. Currently, only one set is supported. The exact names and values of attributes, as well as whether they are mandatory or not, depend on the resource or stonith agent. -
meta_attrs(optional) - List of sets of the resource’s meta attributes. Currently, only one set is supported. operations(optional) - List of the resource’s operations.-
action(mandatory) - Operation action as defined by pacemaker and the resource or stonith agent. -
attrs(mandatory) - Operation options, at least one option must be specified.
-
-
By default, no resources are defined. The structure of the resource definition that you configure with the ha_cluster System Role is as follows:
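The sketch below uses placeholder IDs, agent names, and attribute names:
ha_cluster_resource_primitives:
  - id: resource1-id
    agent: resource1-agent
    instance_attrs:
      - attrs:
          - name: attribute1_name
            value: attribute1_value
    meta_attrs:
      - attrs:
          - name: meta_attribute1_name
            value: meta_attribute1_value
    operations:
      - action: operation1-action
        attrs:
          - name: operation1_attribute1_name
            value: operation1_attribute1_value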
For an example
ha_clusterSystem Role playbook that includes resource configuration, see Configuring a high availability cluster with fencing and resources.ha_cluster_resource_groupsThis variable defines pacemaker resource groups configured by the System Role. You can configure the following items for each resource group:
-
id(mandatory) - ID of a group. -
resources(mandatory) - List of the group’s resources. Each resource is referenced by its ID and the resources must be defined in theha_cluster_resource_primitivesvariable. At least one resource must be listed. -
meta_attrs(optional) - List of sets of the group’s meta attributes. Currently, only one set is supported.
-
The structure of the resource group definition that you configure with the
ha_clusterSystem Role is as follows.Copy to Clipboard Copied! Toggle word wrap Toggle overflow By default, no resource groups are defined.
For an example
ha_clusterSystem Role playbook that includes resource group configuration, see Configuring a high availability cluster with fencing and resources.ha_cluster_resource_clonesThis variable defines pacemaker resource clones configured by the System Role. You can configure the following items for a resource clone:
-
resource_id(mandatory) - Resource to be cloned. The resource must be defined in theha_cluster_resource_primitivesvariable or theha_cluster_resource_groupsvariable. -
promotable(optional) - Indicates whether the resource clone to be created is a promotable clone, indicated astrueorfalse. -
id(optional) - Custom ID of the clone. If no ID is specified, it will be generated. A warning will be displayed if this option is not supported by the cluster. -
meta_attrs(optional) - List of sets of the clone’s meta attributes. Currently, only one set is supported.
-
By default, no resource clones are defined. The structure of the resource clone definition that you configure with the ha_cluster System Role is as follows:
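The sketch below uses placeholder IDs and attribute names:
ha_cluster_resource_clones:
  - resource_id: resource-to-be-cloned
    promotable: true
    id: custom-clone-id
    meta_attrs:
      - attrs:
          - name: clone_meta_attribute1_name
            value: clone_meta_attribute1_value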
For an example
ha_cluster System Role playbook that includes resource clone configuration, see Configuring a high availability cluster with fencing and resources.
ha_cluster_constraints_location
This variable defines resource location constraints. Resource location constraints indicate which nodes a resource can run on. You can specify a resource by a resource ID or by a pattern, which can match more than one resource. You can specify a node by a node name or by a rule.
You can configure the following items for a resource location constraint:
-
resource(mandatory) - Specification of a resource the constraint applies to. -
node(mandatory) - Name of a node the resource should prefer or avoid. -
id(optional) - ID of the constraint. If not specified, it will be autogenerated. options(optional) - List of name-value dictionaries.score- Sets the weight of the constraint.-
A positive
scorevalue means the resource prefers running on the node. -
A negative
scorevalue means the resource should avoid running on the node. -
A
scorevalue of-INFINITYmeans the resource must avoid running on the node. -
If
scoreis not specified, the score value defaults toINFINITY.
-
A positive
-
By default no resource location constraints are defined.
The structure of a resource location constraint specifying a resource ID and node name is as follows:
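The sketch below uses placeholder IDs and values:
ha_cluster_constraints_location:
  - resource:
      id: resource-id
    node: node-name
    id: constraint-id
    options:
      - name: score
        value: score-value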
The items that you configure for a resource location constraint that specifies a resource pattern are the same items that you configure for a resource location constraint that specifies a resource ID, with the exception of the resource specification itself. The item that you specify for the resource specification is as follows:
-
pattern(mandatory) - POSIX extended regular expression resource IDs are matched against.
-
The structure of a resource location constraint specifying a resource pattern and node name is as follows:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow You can configure the following items for a resource location constraint that specifies a resource ID and a rule:
resource(mandatory) - Specification of a resource the constraint applies to.-
id(mandatory) - Resource ID. -
role(optional) - The resource role to which the constraint is limited:Started,Unpromoted,Promoted.
-
-
rule(mandatory) - Constraint rule written usingpcssyntax. For further information, see theconstraint locationsection of thepcs(8) man page. - Other items to specify have the same meaning as for a resource constraint that does not specify a rule.
The structure of a resource location constraint that specifies a resource ID and a rule is as follows:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow The items that you configure for a resource location constraint that specifies a resource pattern and a rule are the same items that you configure for a resource location constraint that specifies a resource ID and a rule, with the exception of the resource specification itself. The item that you specify for the resource specification is as follows:
-
pattern(mandatory) - POSIX extended regular expression resource IDs are matched against.
-
The structure of a resource location constraint that specifies a resource pattern and a rule is as follows:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow For an example
ha_clustersystem role playbook that creates a cluster with resource constraints, see Configuring a high availability cluster with resource constraints.ha_cluster_constraints_colocationThis variable defines resource colocation constraints. Resource colocation constraints indicate that the location of one resource depends on the location of another one. There are two types of colocation constraints: a simple colocation constraint for two resources, and a set colocation constraint for multiple resources.
You can configure the following items for a simple resource colocation constraint:
resource_follower(mandatory) - A resource that should be located relative toresource_leader.-
id(mandatory) - Resource ID. -
role(optional) - The resource role to which the constraint is limited:Started,Unpromoted,Promoted.
-
resource_leader(mandatory) - The cluster will decide where to put this resource first and then decide where to putresource_follower.-
id(mandatory) - Resource ID. -
role(optional) - The resource role to which the constraint is limited:Started,Unpromoted,Promoted.
-
-
id(optional) - ID of the constraint. If not specified, it will be autogenerated. options(optional) - List of name-value dictionaries.score- Sets the weight of the constraint.-
Positive
scorevalues indicate the resources should run on the same node. -
Negative
scorevalues indicate the resources should run on different nodes. -
A
scorevalue of+INFINITYindicates the resources must run on the same node. -
A
scorevalue of-INFINITYindicates the resources must run on different nodes. -
If
scoreis not specified, the score value defaults toINFINITY.
-
Positive
By default no resource colocation constraints are defined.
The structure of a simple resource colocation constraint is as follows:
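The sketch below uses placeholder IDs, roles, and values:
ha_cluster_constraints_colocation:
  - resource_follower:
      id: resource-id1
      role: resource-role1
    resource_leader:
      id: resource-id2
      role: resource-role2
    id: constraint-id
    options:
      - name: score
        value: score-value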
You can configure the following items for a resource set colocation constraint:
resource_sets(mandatory) - List of resource sets.-
resource_ids(mandatory) - List of resources in a set. -
options(optional) - List of name-value dictionaries fine-tuning how resources in the sets are treated by the constraint.
-
-
id(optional) - Same values as for a simple colocation constraint. -
options(optional) - Same values as for a simple colocation constraint.
The structure of a resource set colocation constraint is as follows:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow For an example
ha_clustersystem role playbook that creates a cluster with resource constraints, see Configuring a high availability cluster with resource constraints.ha_cluster_constraints_orderThis variable defines resource order constraints. Resource order constraints indicate the order in which certain resource actions should occur. There are two types of resource order constraints: a simple order constraint for two resources, and a set order constraint for multiple resources.
You can configure the following items for a simple resource order constraint:
resource_first(mandatory) - Resource that theresource_thenresource depends on.-
id(mandatory) - Resource ID. -
action(optional) - The action that must complete before an action can be initiated for theresource_thenresource. Allowed values:start,stop,promote,demote.
-
resource_then(mandatory) - The dependent resource.-
id(mandatory) - Resource ID. -
action(optional) - The action that the resource can execute only after the action on theresource_firstresource has completed. Allowed values:start,stop,promote,demote.
-
-
id(optional) - ID of the constraint. If not specified, it will be autogenerated. -
options(optional) - List of name-value dictionaries.
By default no resource order constraints are defined.
The structure of a simple resource order constraint is as follows:
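The sketch below uses placeholder IDs, actions, and values:
ha_cluster_constraints_order:
  - resource_first:
      id: resource-id1
      action: resource-action1
    resource_then:
      id: resource-id2
      action: resource-action2
    id: constraint-id
    options:
      - name: order_option_name
        value: order_option_value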
You can configure the following items for a resource set order constraint:
resource_sets(mandatory) - List of resource sets.-
resource_ids(mandatory) - List of resources in a set. -
options(optional) - List of name-value dictionaries fine-tuning how resources in the sets are treated by the constraint.
-
-
id(optional) - Same values as for a simple order constraint. -
options(optional) - Same values as for a simple order constraint.
The structure of a resource set order constraint is as follows:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow For an example
ha_clustersystem role playbook that creates a cluster with resource constraints, see Configuring a high availability cluster with resource constraints.ha_cluster_constraints_ticketThis variable defines resource ticket constraints. Resource ticket constraints indicate the resources that depend on a certain ticket. There are two types of resource ticket constraints: a simple ticket constraint for one resource, and a ticket order constraint for multiple resources.
You can configure the following items for a simple resource ticket constraint:
resource(mandatory) - Specification of a resource the constraint applies to.-
id(mandatory) - Resource ID. -
role(optional) - The resource role to which the constraint is limited:Started,Unpromoted,Promoted.
-
-
ticket(mandatory) - Name of a ticket the resource depends on. -
id(optional) - ID of the constraint. If not specified, it will be autogenerated. options(optional) - List of name-value dictionaries.-
loss-policy(optional) - Action to perform on the resource if the ticket is revoked.
-
By default no resource ticket constraints are defined.
The structure of a simple resource ticket constraint is as follows:
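The sketch below uses placeholder IDs and values; the loss-policy value shown is illustrative:
ha_cluster_constraints_ticket:
  - resource:
      id: resource-id
    ticket: ticket-name
    id: constraint-id
    options:
      - name: loss-policy
        value: fence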
You can configure the following items for a resource set ticket constraint:
resource_sets(mandatory) - List of resource sets.-
resource_ids(mandatory) - List of resources in a set. -
options(optional) - List of name-value dictionaries fine-tuning how resources in the sets are treated by the constraint.
-
-
ticket(mandatory) - Same value as for a simple ticket constraint. -
id(optional) - Same value as for a simple ticket constraint. -
options(optional) - Same values as for a simple ticket constraint.
The structure of a resource set ticket constraint is as follows:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow For an example
ha_clustersystem role playbook that creates a cluster with resource constraints, see Configuring a high availability cluster with resource constraints.ha_cluster_qnetd(RHEL 8.8 and later) This variable configures a
qnetdhost which can then serve as an external quorum device for clusters.You can configure the following items for a
qnetdhost:-
present(optional) - Iftrue, configure aqnetdinstance on the host. Iffalse, removeqnetdconfiguration from the host. The default value isfalse. If you set thistrue, you must setha_cluster_cluster_presenttofalse. -
start_on_boot(optional) - Configures whether theqnetdinstance should start automatically on boot. The default value istrue. -
regenerate_keys(optional) - Set this variable totrueto regenerate theqnetdTLS certificate. If you regenerate the certificate, you must either re-run the role for each cluster to connect it to theqnetdhost again or runpcsmanually.
-
You cannot run
qnetdon a cluster node because fencing would disruptqnetdoperation.For an example
ha_clusterSystem Role playbook that configures a cluster using a quorum device, see Configuring a cluster using a quorum device.
25.2. Specifying an inventory for the ha_cluster System Role Link kopierenLink in die Zwischenablage kopiert!
When configuring an HA cluster using the ha_cluster System Role playbook, you configure the names and addresses of the nodes for the cluster in an inventory.
25.2.1. Configuring node names and addresses in an inventory Link kopierenLink in die Zwischenablage kopiert!
For each node in an inventory, you can optionally specify the following items:
-
node_name- the name of a node in a cluster. -
pcs_address- an address used bypcsto communicate with the node. It can be a name, FQDN or an IP address and it can include a port number. -
corosync_addresses- list of addresses used by Corosync. All nodes which form a particular cluster must have the same number of addresses and the order of the addresses matters.
The following example shows an inventory with targets node1 and node2. node1 and node2 must be either fully qualified domain names or names that otherwise resolve to the nodes, for example through entries in the /etc/hosts file.
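A sketch of such an inventory follows; the node names and addresses are illustrative:
all:
  hosts:
    node1:
      ha_cluster:
        node_name: node-A
        pcs_address: node1-address
        corosync_addresses:
          - 192.168.1.11
          - 192.168.2.11
    node2:
      ha_cluster:
        node_name: node-B
        pcs_address: "node2-address:2224"
        corosync_addresses:
          - 192.168.1.12
          - 192.168.2.12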
25.2.2. Configuring watchdog and SBD devices in an inventory Link kopierenLink in die Zwischenablage kopiert!
When using SBD, you can optionally configure watchdog and SBD devices for each node in an inventory. Even though all SBD devices must be shared to and accessible from all nodes, each node can use different names for the devices. Watchdog devices can be different for each node as well. For information on the SBD variables you can set in a system role playbook, see the entries for ha_cluster_sbd_enabled and ha_cluster_sbd_options in ha_cluster System Role variables.
For each node in an inventory, you can optionally specify the following items:
-
sbd_watchdog- Watchdog device to be used by SBD. Defaults to/dev/watchdogif not set. -
sbd_devices- Devices to use for exchanging SBD messages and for monitoring. Defaults to empty list if not set.
The following example shows an inventory that configures watchdog and SBD devices for targets node1 and node2.
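A sketch of such an inventory follows; the device paths are illustrative and must exist on the respective nodes:
all:
  hosts:
    node1:
      ha_cluster:
        sbd_watchdog: /dev/watchdog2
        sbd_devices:
          - /dev/disk/by-id/sbd-device-1
    node2:
      ha_cluster:
        sbd_watchdog: /dev/watchdog1
        sbd_devices:
          - /dev/disk/by-id/sbd-device-1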
25.3. Creating pcsd TLS certificates and key files for a high availability cluster Link kopierenLink in die Zwischenablage kopiert!
You can use the ha_cluster System Role to create TLS certificates and key files in a high availability cluster. When you run this playbook, the ha_cluster System Role uses the certificate System Role internally to manage TLS certificates.
Prerequisites
The
ansible-coreand therhel-system-rolespackages are installed on the node from which you want to run the playbook.NoteYou do not need to have
ansible-coreinstalled on the cluster member nodes.- The systems that you will use as your cluster members have active subscription coverage for RHEL and the RHEL High Availability Add-On.
The ha_cluster System Role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the role will be lost.
Procedure
- Create an inventory file specifying the nodes in the cluster, as described in Specifying an inventory for the ha_cluster System Role.
Create a playbook file, for example
new-cluster.yml.NoteWhen creating your playbook file for production, vault encrypt the password, as described in Encrypting content with Ansible Vault.
The following example playbook file configures a cluster running the firewalld and selinux services and creates a self-signed pcsd certificate and private key files in /var/lib/pcsd. The pcsd certificate has the file name FILENAME.crt and the key file is named FILENAME.key.
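The sketch below illustrates one possible playbook. The host pattern, cluster name, and password placeholder are illustrative, and the ha_cluster_manage_firewall and ha_cluster_manage_selinux variables are an assumption based on newer versions of the role; omit them if your role version does not provide them:
- hosts: node1,node2
  vars:
    ha_cluster_cluster_name: my-new-cluster
    ha_cluster_hacluster_password: <password>
    # Assumed variables; available only in newer versions of the role.
    ha_cluster_manage_firewall: true
    ha_cluster_manage_selinux: true
    # Create a self-signed pcsd certificate and key named FILENAME.
    ha_cluster_pcsd_certificates:
      - name: FILENAME
        common_name: "{{ ansible_hostname }}"
        ca: self-sign
  roles:
    - rhel-system-roles.ha_cluster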
- Save the file.
Run the playbook, specifying the path to the inventory file inventory you created in Step 1.
ansible-playbook -i inventory new-cluster.yml
# ansible-playbook -i inventory new-cluster.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow
Additional resources
25.4. Configuring a high availability cluster running no resources Link kopierenLink in die Zwischenablage kopiert!
The following procedure uses the ha_cluster System Role to create a high availability cluster with no fencing configured and which runs no resources.
Prerequisites
You have
ansible-coreinstalled on the node from which you want to run the playbook.NoteYou do not need to have
ansible-coreinstalled on the cluster member nodes.-
You have the
rhel-system-rolespackage installed on the system from which you want to run the playbook. - The systems that you will use as your cluster members have active subscription coverage for RHEL and the RHEL High Availability Add-On.
The ha_cluster System Role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the role will be lost.
Procedure
- Create an inventory file specifying the nodes in the cluster, as described in Specifying an inventory for the ha_cluster System Role.
Create a playbook file, for example
new-cluster.yml.NoteWhen creating your playbook file for production, vault encrypt the password, as described in Encrypting content with Ansible Vault.
The following example playbook file configures a cluster running the firewalld and selinux services, with no fencing configured and no resources.
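The sketch below illustrates one possible playbook; the host pattern, cluster name, and password placeholder are illustrative, and the firewall and SELinux management variables are an assumption based on newer versions of the role:
- hosts: node1,node2
  vars:
    ha_cluster_cluster_name: my-new-cluster
    ha_cluster_hacluster_password: <password>
    # Assumed variables; available only in newer versions of the role.
    ha_cluster_manage_firewall: true
    ha_cluster_manage_selinux: true
  roles:
    - rhel-system-roles.ha_cluster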
- Save the file.
Run the playbook, specifying the path to the inventory file inventory you created in Step 1.
ansible-playbook -i inventory new-cluster.yml
# ansible-playbook -i inventory new-cluster.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow
25.5. Configuring a high availability cluster with fencing and resources Link kopierenLink in die Zwischenablage kopiert!
The following procedure uses the ha_cluster System Role to create a high availability cluster that includes a fencing device, cluster resources, resource groups, and a cloned resource.
Prerequisites
You have
ansible-coreinstalled on the node from which you want to run the playbook.NoteYou do not need to have
ansible-coreinstalled on the cluster member nodes.-
You have the
rhel-system-rolespackage installed on the system from which you want to run the playbook. - The systems that you will use as your cluster members have active subscription coverage for RHEL and the RHEL High Availability Add-On.
The ha_cluster System Role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the role will be lost.
Procedure
- Create an inventory file specifying the nodes in the cluster, as described in Specifying an inventory for the ha_cluster System Role.
Create a playbook file, for example
new-cluster.yml.NoteWhen creating your playbook file for production, vault encrypt the password, as described in Encrypting content with Ansible Vault.
The following example playbook file configures a cluster running the
firewalldandselinuxservices. The cluster includes fencing, several resources, and a resource group. It also includes a resource clone for the resource group.Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Save the file.
Run the playbook, specifying the path to the inventory file inventory you created in Step 1.
ansible-playbook -i inventory new-cluster.yml
# ansible-playbook -i inventory new-cluster.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow
25.6. Configuring a high availability cluster with resource constraints Link kopierenLink in die Zwischenablage kopiert!
The following procedure uses the ha_cluster system role to create a high availability cluster that includes resource location constraints, resource colocation constraints, resource order constraints, and resource ticket constraints.
Prerequisites
You have
ansible-coreinstalled on the node from which you want to run the playbook.NoteYou do not need to have
ansible-coreinstalled on the cluster member nodes.-
You have the
rhel-system-rolespackage installed on the system from which you want to run the playbook. - The systems that you will use as your cluster members have active subscription coverage for RHEL and the RHEL High Availability Add-On.
The ha_cluster system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the role will be lost.
Procedure
- Create an inventory file specifying the nodes in the cluster, as described in Specifying an inventory for the ha_cluster System Role.
Create a playbook file, for example
new-cluster.yml.NoteWhen creating your playbook file for production, vault encrypt the password, as described in Encrypting content with Ansible Vault.
The following example playbook file configures a cluster running the
firewalldandselinuxservices. The cluster includes resource location constraints, resource colocation constraints, resource order constraints, and resource ticket constraints.Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Save the file.
Run the playbook, specifying the path to the inventory file inventory you created in Step 1.
ansible-playbook -i inventory new-cluster.yml
# ansible-playbook -i inventory new-cluster.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow
25.7. Configuring Corosync values in a high availability cluster Link kopierenLink in die Zwischenablage kopiert!
The following procedure uses the ha_cluster System Role to create a high availability cluster that configures Corosync values.
Prerequisites
You have
ansible-coreinstalled on the node from which you want to run the playbook.NoteYou do not need to have
ansible-coreinstalled on the cluster member nodes.-
You have the
rhel-system-rolespackage installed on the system from which you want to run the playbook. - The systems that you will use as your cluster members have active subscription coverage for RHEL and the RHEL High Availability Add-On.
The ha_cluster System Role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the role will be lost.
Procedure
- Create an inventory file specifying the nodes in the cluster, as described in Specifying an inventory for the ha_cluster System Role.
Create a playbook file, for example
new-cluster.yml.NoteWhen creating your playbook file for production, Vault encrypt the password, as described in Encrypting content with Ansible Vault.
The following example playbook file configures a cluster running the
firewalldandselinuxservices that configures Corosync properties.Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Save the file.
Run the playbook, specifying the path to the inventory file inventory you created in Step 1.
ansible-playbook -i inventory new-cluster.yml
# ansible-playbook -i inventory new-cluster.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow
25.8. Configuring a high availability cluster with SBD node fencing Link kopierenLink in die Zwischenablage kopiert!
The following procedure uses the ha_cluster System Role to create a high availability cluster that uses SBD node fencing.
Prerequisites
You have
ansible-coreinstalled on the node from which you want to run the playbook.NoteYou do not need to have
ansible-coreinstalled on the cluster member nodes.-
You have the
rhel-system-rolespackage installed on the system from which you want to run the playbook. - The systems that you will use as your cluster members have active subscription coverage for RHEL and the RHEL High Availability Add-On.
The ha_cluster System Role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the role will be lost.
Procedure
- Create an inventory file specifying the nodes in the cluster, as described in Specifying an inventory for the ha_cluster System Role. You can optionally configure watchdog and SBD devices for each node in the cluster in an inventory file.
Create a playbook file, for example
new-cluster.yml.NoteWhen creating your playbook file for production, vault encrypt the password, as described in Encrypting content with Ansible Vault.
The following example playbook file configures a cluster running the
firewalldandselinuxservices that uses SBD fencing.Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Save the file.
Run the playbook, specifying the path to the inventory file inventory you created in Step 1.
ansible-playbook -i inventory new-cluster.yml
# ansible-playbook -i inventory new-cluster.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow
25.9. Configuring a high availability cluster using a quorum device Link kopierenLink in die Zwischenablage kopiert!
To configure a high availability cluster with a separate quorum device by using the ha_cluster System Role, first set up the quorum device. After setting up the quorum device, you can use the device in any number of clusters.
25.9.1. Configuring a quorum device Link kopierenLink in die Zwischenablage kopiert!
To configure a quorum device using the ha_cluster System Role, follow these steps. Note that you cannot run a quorum device on a cluster node.
Prerequisites
The
ansible-coreand therhel-system-rolespackages are installed on the node from which you want to run the playbook.NoteYou do not need to have
ansible-coreinstalled on the cluster member nodes.- The system that you will use to run the quorum device has active subscription coverage for RHEL and the RHEL High Availability Add-On.
The ha_cluster System Role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the role will be lost.
Procedure
Create a playbook file, for example
qdev-playbook.yml.NoteWhen creating your playbook file for production, vault encrypt the password, as described in Encrypting content with Ansible Vault.
The following example playbook file configures a quorum device on a system running the
firewalldandselinuxservices.Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Save the file.
Run the playbook, specifying the host node for the quorum device.
ansible-playbook -i nodeQ, qdev-playbook.yml
# ansible-playbook -i nodeQ, qdev-playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow
25.9.2. Configuring a cluster to use a quorum device Link kopierenLink in die Zwischenablage kopiert!
To configure a cluster to use a quorum device, follow these steps.
Prerequisites
You have
ansible-coreinstalled on the node from which you want to run the playbook.NoteYou do not need to have
ansible-coreinstalled on the cluster member nodes.-
You have the
rhel-system-rolespackage installed on the system from which you want to run the playbook. - The systems that you will use as your cluster members have active subscription coverage for RHEL and the RHEL High Availability Add-On.
- You have configured a quorum device.
The ha_cluster system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the role will be lost.
Procedure
- Create an inventory file specifying the nodes in the cluster, as described in Specifying an inventory for the ha_cluster System Role.
Create a playbook file, for example
new-cluster.yml.NoteWhen creating your playbook file for production, vault encrypt the password, as described in Encrypting content with Ansible Vault.
The following example playbook file configures a cluster running the
firewalldandselinuxservices that uses a quorum device.Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Save the file.
Run the playbook, specifying the path to the inventory file inventory you created in Step 1.
ansible-playbook -i inventory new-cluster.yml
# ansible-playbook -i inventory new-cluster.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow
25.10. Configuring an Apache HTTP server in a high availability cluster with the ha_cluster System Role Link kopierenLink in die Zwischenablage kopiert!
This procedure configures an active/passive Apache HTTP server in a two-node Red Hat Enterprise Linux High Availability Add-On cluster using the ha_cluster System Role.
Prerequisites
You have
ansible-coreinstalled on the node from which you want to run the playbook.NoteYou do not need to have
ansible-coreinstalled on the cluster member nodes.-
You have the
rhel-system-rolespackage installed on the system from which you want to run the playbook. - The systems that you will use as your cluster members have active subscription coverage for RHEL and the RHEL High Availability Add-On.
- Your system includes a public virtual IP address, required for Apache.
- Your system includes shared storage for the nodes in the cluster, using iSCSI, Fibre Channel, or other shared network block device.
- You have configured an LVM logical volume with an XFS file system, as described in Configuring an LVM volume with an XFS file system in a Pacemaker cluster.
- You have configured an Apache HTTP server, as described in Configuring an Apache HTTP Server.
- Your system includes an APC power switch that will be used to fence the cluster nodes.
The ha_cluster System Role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the role will be lost.
Procedure
- Create an inventory file specifying the nodes in the cluster, as described in Specifying an inventory for the ha_cluster System Role.
Create a playbook file, for example
http-cluster.yml.NoteWhen creating your playbook file for production, vault encrypt the password, as described in Encrypting content with Ansible Vault.
The following example playbook file configures a previously-created Apache HTTP server in an active/passive two-node HA cluster running the
firewalldandselinuxservices.This example uses an APC power switch with a host name of
zapc.example.com. If the cluster does not use any other fence agents, you can optionally list only the fence agents your cluster requires when defining theha_cluster_fence_agent_packagesvariable, as in this example.Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Save the file.
Run the playbook, specifying the path to the inventory file inventory you created in Step 1.
ansible-playbook -i inventory http-cluster.yml
# ansible-playbook -i inventory http-cluster.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow When you use the
apacheresource agent to manage Apache, it does not usesystemd. Because of this, you must edit thelogrotatescript supplied with Apache so that it does not usesystemctlto reload Apache.Remove the following line in the
/etc/logrotate.d/httpdfile on each node in the cluster./bin/systemctl reload httpd.service > /dev/null 2>/dev/null || true
/bin/systemctl reload httpd.service > /dev/null 2>/dev/null || trueCopy to Clipboard Copied! Toggle word wrap Toggle overflow Replace the line you removed with the following three lines, specifying
/var/run/httpd-website.pidas the PID file path where website is the name of the Apache resource. In this example, the Apache resource name isWebsite./usr/bin/test -f /var/run/httpd-Website.pid >/dev/null 2>/dev/null && /usr/bin/ps -q $(/usr/bin/cat /var/run/httpd-Website.pid) >/dev/null 2>/dev/null && /usr/sbin/httpd -f /etc/httpd/conf/httpd.conf -c "PidFile /var/run/httpd-Website.pid" -k graceful > /dev/null 2>/dev/null || true
/usr/bin/test -f /var/run/httpd-Website.pid >/dev/null 2>/dev/null && /usr/bin/ps -q $(/usr/bin/cat /var/run/httpd-Website.pid) >/dev/null 2>/dev/null && /usr/sbin/httpd -f /etc/httpd/conf/httpd.conf -c "PidFile /var/run/httpd-Website.pid" -k graceful > /dev/null 2>/dev/null || trueCopy to Clipboard Copied! Toggle word wrap Toggle overflow
Verification steps
From one of the nodes in the cluster, check the status of the cluster. Note that all four resources are running on the same node,
z1.example.com.If you find that the resources you configured are not running, you can run the
pcs resource debug-start resourcecommand to test the resource configuration.Copy to Clipboard Copied! Toggle word wrap Toggle overflow Once the cluster is up and running, you can point a browser to the IP address you defined as the
IPaddr2resource to view the sample display, consisting of the simple word "Hello".Hello
HelloCopy to Clipboard Copied! Toggle word wrap Toggle overflow To test whether the resource group running on
z1.example.comfails over to nodez2.example.com, put nodez1.example.cominstandbymode, after which the node will no longer be able to host resources.pcs node standby z1.example.com
[root@z1 ~]# pcs node standby z1.example.comCopy to Clipboard Copied! Toggle word wrap Toggle overflow After putting node
z1instandbymode, check the cluster status from one of the nodes in the cluster. Note that the resources should now all be running onz2.Copy to Clipboard Copied! Toggle word wrap Toggle overflow The web site at the defined IP address should still display, without interruption.
To remove
z1fromstandbymode, enter the following command.pcs node unstandby z1.example.com
[root@z1 ~]# pcs node unstandby z1.example.comCopy to Clipboard Copied! Toggle word wrap Toggle overflow NoteRemoving a node from
standbymode does not in itself cause the resources to fail back over to that node. This will depend on theresource-stickinessvalue for the resources. For information about theresource-stickinessmeta attribute, see Configuring a resource to prefer its current node.
Chapter 26. Installing and configuring web console with the cockpit RHEL System Role Link kopierenLink in die Zwischenablage kopiert!
With the cockpit RHEL System Role, you can install and configure the web console in your system.
26.1. The cockpit System Role Link kopierenLink in die Zwischenablage kopiert!
You can use the cockpit System Role to automatically deploy and enable the web console and thus be able to manage your RHEL systems from a web browser.
26.2. Variables for the cockpit RHEL System Role Link kopierenLink in die Zwischenablage kopiert!
The parameters used for the cockpit RHEL System Role are:
| Role Variable | Description |
|---|---|
| cockpit_packages: (default: default) | Sets one of the predefined package sets: default, minimal, or full. * default - most common pages and on-demand install UI * minimal - just the Overview, Terminal, Logs, Accounts, and Metrics pages; minimal dependencies * full - all available pages Optionally, you can specify your own selection of cockpit packages you want to install. |
| cockpit_enabled: (default:true) | Configures if the web console web server is enabled to start automatically at boot |
| cockpit_started: (default:true) | Configures if the web console should be started |
| cockpit_config: (default: nothing) |
You can apply settings in the |
| cockpit_port: (default: 9090) | The web console runs on port 9090 by default. You can change the port using this option. |
| cockpit_manage_firewall: (default: false) |
Allows the |
| cockpit_manage_selinux: (default: false) |
Allows the |
| cockpit_certificates: (default: nothing) |
Allows the |
26.3. Installing the web console by using the cockpit RHEL System Role Link kopierenLink in die Zwischenablage kopiert!
You can use the cockpit System Role to install and enable the RHEL web console.
By default, the RHEL web console uses a self-signed certificate. For security reasons, you can specify a certificate that was issued by a trusted certificate authority instead.
In this example, you use the cockpit System Role to:
- Install the RHEL web console.
-
Allow the web console to manage
firewalld. -
Set the web console to use a certificate from the
ipatrusted certificate authority instead of using a self-signed certificate. - Set the web console to use a custom port 9050.
You do not have to call the firewall or certificate System Roles in the playbook to manage the Firewall or create the certificate. The cockpit System Role calls them automatically as needed.
Prerequisites
- Access and permissions to one or more managed nodes.
Access and permissions to a control node.
On the control node:
- Red Hat Ansible Engine is installed.
-
The
rhel-system-rolespackage is installed. - An inventory file exists that lists the managed nodes.
Procedure
Create a new
playbook.yml file with the following content:
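The sketch below illustrates one possible playbook; the certificate path, DNS names, and group are illustrative assumptions and must match your environment:
- hosts: all
  tasks:
    - name: Install and configure the RHEL web console
      include_role:
        name: rhel-system-roles.cockpit
      vars:
        cockpit_packages: default
        # Run the web console on a custom port.
        cockpit_port: 9050
        # Allow the role to open the port in firewalld and set the
        # SELinux port label for the non-default port.
        cockpit_manage_firewall: true
        cockpit_manage_selinux: true
        # Request a certificate from the ipa certificate authority.
        cockpit_certificates:
          - name: /etc/cockpit/ws-certs.d/01-certificate
            dns: ['localhost', 'www.example.com']
            ca: ipa
            group: cockpit-ws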
Optional: Verify the playbook syntax:
ansible-playbook --syntax-check -i inventory_file playbook.yml
# ansible-playbook --syntax-check -i inventory_file playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow Run the playbook on your inventory file:
ansible-playbook -i inventory_file /path/to/file/playbook.yml
# ansible-playbook -i inventory_file /path/to/file/playbook.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow
Chapter 27. Managing containers by using the podman RHEL System Role Link kopierenLink in die Zwischenablage kopiert!
With the podman RHEL System Role, you can manage Podman configuration, containers, and systemd services which run Podman containers.
27.1. The podman RHEL System Role Link kopierenLink in die Zwischenablage kopiert!
You can use the podman RHEL System Role to manage Podman configuration, containers, and systemd services which run Podman containers.
27.2. Variables for the podman RHEL System Role Link kopierenLink in die Zwischenablage kopiert!
The parameters used for the podman RHEL System Role are:
| Variable | Description |
|---|---|
|
| Describes a podman pod and corresponding systemd unit to manage.
|
|
|
If true, the role ensures host directories specified in host mounts in Note Directories must be specified as absolute paths (for root containers), or paths relative to the home directory (for non-root containers), in order for the role to manage them. Anything else is ignored.
The role applies its default ownership or permissions to the directories. If you need to set ownership or permissions, see |
|
|
It is a dict. If using |
|
| It is a list of dict. Specifies ports that you want the role to manage in the firewall. This uses the same format as used by the firewall RHEL System Role. |
|
| It is a list of dict. Specifies ports that you want the role to manage the SELinux policy for ports used by the role. This uses the same format as used by the selinux RHEL System Role. |
|
|
Specifies the name of the user to use for all rootless containers. You can also specify per-container username with Note The user must already exist. |
|
|
Specifies the name of the group to use for all rootless containers. You can also specify a per-container group name with Note The group must already exist. |
|
|
Defines the |
|
|
Defines the |
|
|
Defines the |
|
|
Defines the |
|
|
Defines the |
Chapter 28. Integrating RHEL systems directly with AD using RHEL System Roles Link kopierenLink in die Zwischenablage kopiert!
With the ad_integration System Role, you can automate a direct integration of a RHEL system with Active Directory (AD) using Red Hat Ansible Automation Platform.
This chapter covers the following topics:
28.1. The ad_integration System Role Link kopierenLink in die Zwischenablage kopiert!
Using the ad_integration System Role, you can directly connect a RHEL system to Active Directory (AD).
The role uses the following components:
- SSSD to interact with the central identity and authentication source
-
realmdto detect available AD domains and configure the underlying RHEL system services, in this case SSSD, to connect to the selected AD domain
The ad_integration role is for deployments using direct AD integration without an Identity Management (IdM) environment. For IdM environments, use the ansible-freeipa roles.
28.2. Variables for the ad_integration RHEL System Role Link kopierenLink in die Zwischenablage kopiert!
The ad_integration RHEL System Role uses the following parameters:
| Role Variable | Description |
|---|---|
| ad_integration_realm | Active Directory realm, or domain name to join. |
| ad_integration_password | The password of the user used to authenticate with when joining the machine to the realm. Do not use plain text. Instead, use Ansible Vault to encrypt the value. |
| ad_integration_manage_crypto_policies |
If
Default: |
| ad_integration_allow_rc4_crypto |
If
Providing this variable automatically sets
Default: |
| ad_integration_timesync_source |
Hostname or IP address of time source to synchronize the system clock with. Providing this variable automatically sets |
28.3. Connecting a RHEL system directly to AD using the ad_integration System Role Link kopierenLink in die Zwischenablage kopiert!
You can use the ad_integration System Role to configure a direct integration between a RHEL system and an AD domain by running an Ansible playbook.
Starting with RHEL 8, RHEL no longer supports RC4 encryption by default. If it is not possible to enable AES in the AD domain, you must enable the AD-SUPPORT crypto policy and allow RC4 encryption in the playbook.
Time between the RHEL server and AD must be synchronized. You can ensure this by using the timesync System Role in the playbook.
In this example, the RHEL system joins the domain.example.com AD domain, using the AD Administrator user and the password for this user stored in the Ansible vault. The playbook also sets the AD-SUPPORT crypto policy and allows RC4 encryption. To ensure time synchronization between the RHEL system and AD, the playbook sets the adserver.domain.example.com server as the timesync source.
Prerequisites
- Access and permissions to one or more managed nodes.
Access and permissions to a control node.
On the control node:
- Red Hat Ansible Engine is installed.
-
The
rhel-system-rolespackage is installed. - An inventory file which lists the managed nodes.
The following ports on the AD domain controllers are open and accessible from the RHEL server:
Expand Table 28.1. Ports Required for Direct Integration of Linux Systems into AD Using the ad_integration System Role Source Port Destination Port Protocol Service 1024:65535
53
UDP and TCP
DNS
1024:65535
389
UDP and TCP
LDAP
1024:65535
636
TCP
LDAPS
1024:65535
88
UDP and TCP
Kerberos
1024:65535
464
UDP and TCP
Kerberos change/set password (
kadmin)1024:65535
3268
TCP
LDAP Global Catalog
1024:65535
3269
TCP
LDAP Global Catalog SSL/TLS
1024:65535
123
UDP
NTP/Chrony (Optional)
1024:65535
323
UDP
NTP/Chrony (Optional)
Procedure
Create a new
ad_integration.yml file with the following content:
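The sketch below illustrates one possible playbook; the vaulted password variable name is an illustrative assumption:
- hosts: all
  vars:
    ad_integration_realm: "domain.example.com"
    # Store the join password in Ansible Vault, for example as vault_ad_join_password.
    ad_integration_password: "{{ vault_ad_join_password }}"
    ad_integration_manage_crypto_policies: true
    ad_integration_allow_rc4_crypto: true
    ad_integration_timesync_source: "adserver.domain.example.com"
  roles:
    - rhel-system-roles.ad_integration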
Optional: Verify playbook syntax.
ansible-playbook --syntax-check ad_integration.yml -i inventory_file
# ansible-playbook --syntax-check ad_integration.yml -i inventory_fileCopy to Clipboard Copied! Toggle word wrap Toggle overflow Run the playbook on your inventory file:
ansible-playbook -i inventory_file /path/to/file/ad_integration.yml
# ansible-playbook -i inventory_file /path/to/file/ad_integration.ymlCopy to Clipboard Copied! Toggle word wrap Toggle overflow
Verification
Display an AD user details, such as the
administratoruser:getent passwd administrator@ad.example.com administrator@ad.example.com:*:1450400500:1450400513:Administrator:/home/administrator@ad.example.com:/bin/bash
getent passwd administrator@ad.example.com administrator@ad.example.com:*:1450400500:1450400513:Administrator:/home/administrator@ad.example.com:/bin/bashCopy to Clipboard Copied! Toggle word wrap Toggle overflow
28.4. Additional resources Link kopierenLink in die Zwischenablage kopiert!
-
The
/usr/share/ansible/roles/rhel-system-roles.ad_integration/README.mdfile. -
man ansible-playbook(1)