Chapter 15. Configuring logging by using the RHEL system role
As a system administrator, you can use the logging RHEL system role to configure a Red Hat Enterprise Linux host as a logging server to collect logs from many client systems.
15.1. The logging RHEL system role
With the logging RHEL system role, you can deploy logging configurations on local and remote hosts.
Logging solutions provide multiple ways of reading logs and multiple logging outputs.
For example, a logging system can receive the following inputs:
- Local files
- systemd/journal
- Another logging system over the network
In addition, a logging system can have the following outputs:
- Logs stored in local files in the /var/log directory
- Logs sent to Elasticsearch
- Logs forwarded to another logging system
With the logging RHEL system role, you can combine the inputs and outputs to fit your scenario. For example, you can configure a logging solution that stores inputs from the journal in a local file, while inputs read from files are both forwarded to another logging system and stored in local log files.
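For illustration, the following is a minimal sketch of such a combined configuration, expressed as logging role variables only (the input, output, and flow names, the forwarding target, and the file path are placeholders and are not part of this chapter's examples):

```yaml
logging_inputs:
  - name: journal_input        # placeholder name; the "basics" input reads from systemd/journal
    type: basics
  - name: files_input          # placeholder name
    type: files
    input_log_path: /var/log/containers/*.log   # placeholder path
logging_outputs:
  - name: local_files          # stores messages in local log files
    type: files
  - name: forward_out          # forwards messages to another logging system
    type: forwards
    target: logging.example.com   # placeholder remote logging server
    tcp_port: 601
logging_flows:
  - name: journal_to_local
    inputs: [journal_input]
    outputs: [local_files]
  - name: files_to_both        # file input is both stored locally and forwarded
    inputs: [files_input]
    outputs: [local_files, forward_out]
```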
Additional resources
- /usr/share/ansible/roles/rhel-system-roles.logging/README.md file
- /usr/share/doc/rhel-system-roles/logging/ directory
- RHEL system roles
15.2. Applying a local logging RHEL system role
Prepare and apply an Ansible playbook to configure a logging solution on a set of separate machines. Each machine records logs locally.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
You do not have to have the rsyslog package installed, because the RHEL system role installs rsyslog when deployed.
Procedure
Create a playbook file, for example ~/playbook.yml, with the following content:

```yaml
---
- name: Deploying basics input and implicit files output
  hosts: managed-node-01.example.com
  roles:
    - rhel-system-roles.logging
  vars:
    logging_inputs:
      - name: system_input
        type: basics
    logging_outputs:
      - name: files_output
        type: files
    logging_flows:
      - name: flow1
        inputs: [system_input]
        outputs: [files_output]
```
Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
Test the syntax of the /etc/rsyslog.conf file:

```
# rsyslogd -N 1
rsyslogd: version 8.1911.0-6.el8, config validation run...
rsyslogd: End of config validation run. Bye.
```
Verify that the system sends messages to the log:
Send a test message:
# logger test
View the /var/log/messages log, for example:

```
# cat /var/log/messages
Aug 5 13:48:31 <hostname> root[6778]: test
```

Where <hostname> is the host name of the client system. Note that the log contains the user name of the user that entered the logger command, in this case root.
Additional resources
- /usr/share/ansible/roles/rhel-system-roles.logging/README.md file
- /usr/share/doc/rhel-system-roles/logging/ directory
15.3. Filtering logs in a local logging RHEL system role
You can deploy a logging solution that filters logs by using the rsyslog property-based filter.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
You do not have to have the rsyslog package installed, because the RHEL system role installs rsyslog when deployed.
Procedure
Create a playbook file, for example ~/playbook.yml, with the following content:

```yaml
---
- name: Deploying files input and configured files output
  hosts: managed-node-01.example.com
  roles:
    - rhel-system-roles.logging
  vars:
    logging_inputs:
      - name: files_input
        type: basics
    logging_outputs:
      - name: files_output0
        type: files
        property: msg
        property_op: contains
        property_value: error
        path: /var/log/errors.log
      - name: files_output1
        type: files
        property: msg
        property_op: "!contains"
        property_value: error
        path: /var/log/others.log
    logging_flows:
      - name: flow0
        inputs: [files_input]
        outputs: [files_output0, files_output1]
```
Using this configuration, all messages that contain the error string are logged in /var/log/errors.log, and all other messages are logged in /var/log/others.log. You can replace the error property value with any string that you want to filter on. You can modify the variables according to your preferences.
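As a further illustration of the same property-based filter, the following hedged sketch routes messages from a particular program to a dedicated file by testing the rsyslog programname property instead of msg (the output name, match value, and path are placeholders):

```yaml
logging_outputs:
  - name: cron_output                  # placeholder name
    type: files
    property: programname              # rsyslog message property to test
    property_op: contains
    property_value: cron               # placeholder string to match
    path: /var/log/cron-filtered.log   # placeholder destination file
```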
Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
Test the syntax of the /etc/rsyslog.conf file:

```
# rsyslogd -N 1
rsyslogd: version 8.1911.0-6.el8, config validation run...
rsyslogd: End of config validation run. Bye.
```
Verify that the system sends messages that contain the error string to the log:
Send a test message:
# logger error
View the /var/log/errors.log log, for example:

```
# cat /var/log/errors.log
Aug 5 13:48:31 hostname root[6778]: error
```

Where hostname is the host name of the client system. Note that the log contains the user name of the user that entered the logger command, in this case root.
Additional resources
- /usr/share/ansible/roles/rhel-system-roles.logging/README.md file
- /usr/share/doc/rhel-system-roles/logging/ directory
15.4. Applying a remote logging solution by using the logging RHEL system role
Follow these steps to prepare and apply a Red Hat Ansible Core playbook to configure a remote logging solution. In this playbook, one or more clients take logs from systemd-journal and forward them to a remote server. The server receives remote input from remote_rsyslog and remote_files and outputs the logs to local files in directories named by remote host names.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
You do not have to have the rsyslog package installed, because the RHEL system role installs rsyslog when deployed.
Procedure
Create a playbook file, for example ~/playbook.yml, with the following content:

```yaml
---
- name: Deploying remote input and remote_files output
  hosts: managed-node-01.example.com
  roles:
    - rhel-system-roles.logging
  vars:
    logging_inputs:
      - name: remote_udp_input
        type: remote
        udp_ports: [ 601 ]
      - name: remote_tcp_input
        type: remote
        tcp_ports: [ 601 ]
    logging_outputs:
      - name: remote_files_output
        type: remote_files
    logging_flows:
      - name: flow_0
        inputs: [remote_udp_input, remote_tcp_input]
        outputs: [remote_files_output]

- name: Deploying basics input and forwards output
  hosts: managed-node-02.example.com
  roles:
    - rhel-system-roles.logging
  vars:
    logging_inputs:
      - name: basic_input
        type: basics
    logging_outputs:
      - name: forward_output0
        type: forwards
        severity: info
        target: <host1.example.com>
        udp_port: 601
      - name: forward_output1
        type: forwards
        facility: mail
        target: <host1.example.com>
        tcp_port: 601
    logging_flows:
      - name: flows0
        inputs: [basic_input]
        outputs: [forward_output0, forward_output1]
```
Where <host1.example.com> is the logging server.
Note: You can modify the parameters in the playbook to fit your needs.
Warning: The logging solution works only with the ports defined in the SELinux policy of the server or client system and open in the firewall. The default SELinux policy includes ports 601, 514, 6514, 10514, and 20514. To use a different port, modify the SELinux policy on the client and server systems.
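If you must use a non-default port, one possible approach is to allow it in the SELinux policy before applying the logging configuration. The following is a minimal sketch that assumes you use the selinux RHEL system role and its selinux_ports variable; the port number 30514 and the host names are placeholders. Alternatively, you can run semanage port -a -t syslogd_port_t -p tcp <port> directly on the affected systems.

```yaml
---
- name: Allow a custom syslog port in the SELinux policy (sketch)
  hosts: managed-node-01.example.com,managed-node-02.example.com
  roles:
    - rhel-system-roles.selinux
  vars:
    selinux_ports:
      - ports: '30514'             # placeholder custom port
        proto: 'tcp'
        setype: 'syslogd_port_t'   # SELinux port type used by rsyslog
        state: 'present'
```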
Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
On both the client and the server system, test the syntax of the /etc/rsyslog.conf file:

```
# rsyslogd -N 1
rsyslogd: version 8.1911.0-6.el8, config validation run (level 1), master config /etc/rsyslog.conf
rsyslogd: End of config validation run. Bye.
```
Verify that the client system sends messages to the server:
On the client system, send a test message:
# logger test
On the server system, view the /var/log/<host2.example.com>/messages log, for example:

```
# cat /var/log/<host2.example.com>/messages
Aug 5 13:48:31 <host2.example.com> root[6778]: test
```

Where <host2.example.com> is the host name of the client system. Note that the log contains the user name of the user that entered the logger command, in this case root.
Additional resources
- /usr/share/ansible/roles/rhel-system-roles.logging/README.md file
- /usr/share/doc/rhel-system-roles/logging/ directory
15.5. Using the logging RHEL system role with TLS
Transport Layer Security (TLS) is a cryptographic protocol designed to allow secure communication over a computer network.
As an administrator, you can use the logging RHEL system role to configure secure transfer of logs by using Red Hat Ansible Automation Platform.
15.5.1. Configuring client logging with TLS
You can use an Ansible playbook with the logging RHEL system role to configure logging on RHEL clients and transfer logs to a remote logging system by using TLS encryption.
This procedure creates a private key and certificate, and configures TLS on all hosts in the clients group in the Ansible inventory. The TLS protocol encrypts the message transmission for secure transfer of logs over the network.
You do not have to call the certificate RHEL system role in the playbook to create the certificate. The logging RHEL system role calls it automatically.
For the CA to be able to sign the created certificate, the managed nodes must be enrolled in an IdM domain.
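If you keep your logging clients in a clients group as described above, a minimal inventory sketch might look like the following (the host names are placeholders); you can then target the group by setting hosts: clients in the playbook instead of a single host name:

```yaml
# inventory.yml - hypothetical inventory defining the "clients" group
clients:
  hosts:
    managed-node-01.example.com:
    managed-node-02.example.com:
```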
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- The managed nodes are enrolled in an IdM domain.
Procedure
Create a playbook file, for example ~/playbook.yml, with the following content:

```yaml
---
- name: Deploying files input and forwards output with certs
  hosts: managed-node-01.example.com
  roles:
    - rhel-system-roles.logging
  vars:
    logging_certificates:
      - name: logging_cert
        dns: ['localhost', 'www.example.com']
        ca: ipa
    logging_pki_files:
      - ca_cert: /local/path/to/ca_cert.pem
        cert: /local/path/to/logging_cert.pem
        private_key: /local/path/to/logging_cert.pem
    logging_inputs:
      - name: input_name
        type: files
        input_log_path: /var/log/containers/*.log
    logging_outputs:
      - name: output_name
        type: forwards
        target: your_target_host
        tcp_port: 514
        tls: true
        pki_authmode: x509/name
        permitted_server: 'server.example.com'
    logging_flows:
      - name: flow_name
        inputs: [input_name]
        outputs: [output_name]
```
The playbook uses the following parameters:
- logging_certificates: The value of this parameter is passed on to certificate_requests in the certificate RHEL system role and used to create a private key and certificate.
- logging_pki_files: Using this parameter, you can configure the paths and other settings that logging uses to find the CA, certificate, and key files used for TLS, specified with one or more of the following sub-parameters: ca_cert, ca_cert_src, cert, cert_src, private_key, private_key_src, and tls.
  Note: If you are using logging_certificates to create the files on the target node, do not use ca_cert_src, cert_src, and private_key_src, which are used to copy files not created by logging_certificates. See the sketch after this list.
- ca_cert: Represents the path to the CA certificate file on the target node. Default path is /etc/pki/tls/certs/ca.pem and the file name is set by the user.
- cert: Represents the path to the certificate file on the target node. Default path is /etc/pki/tls/certs/server-cert.pem and the file name is set by the user.
- private_key: Represents the path to the private key file on the target node. Default path is /etc/pki/tls/private/server-key.pem and the file name is set by the user.
- ca_cert_src: Represents the path to the CA certificate file on the control node which is copied to the target host to the location specified by ca_cert. Do not use this if using logging_certificates.
- cert_src: Represents the path to a certificate file on the control node which is copied to the target host to the location specified by cert. Do not use this if using logging_certificates.
- private_key_src: Represents the path to a private key file on the control node which is copied to the target host to the location specified by private_key. Do not use this if using logging_certificates.
- tls: Setting this parameter to true ensures secure transfer of logs over the network. If you do not want a secure wrapper, you can set tls: false.
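For contrast, the following is a minimal sketch of logging_pki_files for the case where the certificate files already exist on the control node and are copied to the managed node instead of being created through logging_certificates (all paths are placeholders):

```yaml
logging_pki_files:
  - ca_cert_src: /local/path/to/ca_cert.pem           # file on the control node
    cert_src: /local/path/to/logging_cert.pem         # file on the control node
    private_key_src: /local/path/to/logging_key.pem   # file on the control node
    ca_cert: /etc/pki/tls/certs/ca.pem                # destination on the managed node
    cert: /etc/pki/tls/certs/server-cert.pem          # destination on the managed node
    private_key: /etc/pki/tls/private/server-key.pem  # destination on the managed node
```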
Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Additional resources
- /usr/share/ansible/roles/rhel-system-roles.logging/README.md file
- /usr/share/doc/rhel-system-roles/logging/ directory
- Requesting certificates using RHEL system roles
15.5.2. Configuring server logging with TLS
You can use an Ansible playbook with the logging RHEL system role to configure logging on RHEL servers and set them to receive logs from a remote logging system by using TLS encryption.
This procedure creates a private key and certificate, and configures TLS on all hosts in the server group in the Ansible inventory.
You do not have to call the certificate RHEL system role in the playbook to create the certificate. The logging RHEL system role calls it automatically.
For the CA to be able to sign the created certificate, the managed nodes must be enrolled in an IdM domain.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- The managed nodes are enrolled in an IdM domain.
Procedure
Create a playbook file, for example ~/playbook.yml, with the following content:

```yaml
---
- name: Deploying remote input and remote_files output with certs
  hosts: managed-node-01.example.com
  roles:
    - rhel-system-roles.logging
  vars:
    logging_certificates:
      - name: logging_cert
        dns: ['localhost', 'www.example.com']
        ca: ipa
    logging_pki_files:
      - ca_cert: /local/path/to/ca_cert.pem
        cert: /local/path/to/logging_cert.pem
        private_key: /local/path/to/logging_cert.pem
    logging_inputs:
      - name: input_name
        type: remote
        tcp_ports: 514
        tls: true
        permitted_clients: ['clients.example.com']
    logging_outputs:
      - name: output_name
        type: remote_files
        remote_log_path: /var/log/remote/%FROMHOST%/%PROGRAMNAME:::secpath-replace%.log
        async_writing: true
        client_count: 20
        io_buffer_size: 8192
    logging_flows:
      - name: flow_name
        inputs: [input_name]
        outputs: [output_name]
```
The playbook uses the following parameters:
- logging_certificates: The value of this parameter is passed on to certificate_requests in the certificate RHEL system role and used to create a private key and certificate.
- logging_pki_files: Using this parameter, you can configure the paths and other settings that logging uses to find the CA, certificate, and key files used for TLS, specified with one or more of the following sub-parameters: ca_cert, ca_cert_src, cert, cert_src, private_key, private_key_src, and tls.
  Note: If you are using logging_certificates to create the files on the target node, do not use ca_cert_src, cert_src, and private_key_src, which are used to copy files not created by logging_certificates.
- ca_cert: Represents the path to the CA certificate file on the target node. Default path is /etc/pki/tls/certs/ca.pem and the file name is set by the user.
- cert: Represents the path to the certificate file on the target node. Default path is /etc/pki/tls/certs/server-cert.pem and the file name is set by the user.
- private_key: Represents the path to the private key file on the target node. Default path is /etc/pki/tls/private/server-key.pem and the file name is set by the user.
- ca_cert_src: Represents the path to the CA certificate file on the control node which is copied to the target host to the location specified by ca_cert. Do not use this if using logging_certificates.
- cert_src: Represents the path to a certificate file on the control node which is copied to the target host to the location specified by cert. Do not use this if using logging_certificates.
- private_key_src: Represents the path to a private key file on the control node which is copied to the target host to the location specified by private_key. Do not use this if using logging_certificates.
- tls: Setting this parameter to true ensures secure transfer of logs over the network. If you do not want a secure wrapper, you can set tls: false.
Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Additional resources
- /usr/share/ansible/roles/rhel-system-roles.logging/README.md file
- /usr/share/doc/rhel-system-roles/logging/ directory
- Requesting certificates using RHEL system roles
15.6. Using the logging RHEL system role with RELP
Reliable Event Logging Protocol (RELP) is a networking protocol for data and message logging over a TCP network. It ensures reliable delivery of event messages, and you can use it in environments that do not tolerate any message loss.
The RELP sender transfers log entries in the form of commands, and the receiver acknowledges them once they are processed. To ensure consistency, RELP assigns a transaction number to each transferred command for any kind of message recovery.
Consider a remote logging system between the RELP client and the RELP server: the RELP client transfers the logs to the remote logging system, and the RELP server receives all the logs sent by the remote logging system.
Administrators can use the logging RHEL system role to configure the logging system to reliably send and receive log entries.
15.6.1. Configuring client logging with RELP
You can use the logging RHEL system role to configure logging on RHEL systems that log on a local machine and to transfer those logs to a remote logging system with RELP by running an Ansible playbook.
This procedure configures RELP on all hosts in the clients group in the Ansible inventory. The RELP configuration uses Transport Layer Security (TLS) to encrypt the message transmission for secure transfer of logs over the network.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
Create a playbook file, for example ~/playbook.yml, with the following content:

```yaml
---
- name: Deploying basic input and relp output
  hosts: managed-node-01.example.com
  roles:
    - rhel-system-roles.logging
  vars:
    logging_inputs:
      - name: basic_input
        type: basics
    logging_outputs:
      - name: relp_client
        type: relp
        target: logging.server.com
        port: 20514
        tls: true
        ca_cert: /etc/pki/tls/certs/ca.pem
        cert: /etc/pki/tls/certs/client-cert.pem
        private_key: /etc/pki/tls/private/client-key.pem
        pki_authmode: name
        permitted_servers:
          - '*.server.example.com'
    logging_flows:
      - name: example_flow
        inputs: [basic_input]
        outputs: [relp_client]
```
The playbook uses the following settings:
- target: This is a required parameter that specifies the host name where the remote logging system is running.
- port: Port number on which the remote logging system is listening.
- tls: Ensures secure transfer of logs over the network. If you do not want a secure wrapper, you can set the tls variable to false. By default, the tls parameter is set to true while working with RELP and requires key and certificate files as the {ca_cert, cert, private_key} triplet, the {ca_cert_src, cert_src, private_key_src} triplet, or both:
  - If the {ca_cert_src, cert_src, private_key_src} triplet is set, the default locations /etc/pki/tls/certs and /etc/pki/tls/private are used as the destination on the managed node to transfer files from the control node. In this case, the file names are identical to the original ones in the triplet.
  - If the {ca_cert, cert, private_key} triplet is set, the files are expected to be on the default path before the logging configuration.
  - If both triplets are set, the files are transferred from the local path on the control node to the specific path on the managed node.
- ca_cert: Represents the path to the CA certificate. Default path is /etc/pki/tls/certs/ca.pem and the file name is set by the user.
- cert: Represents the path to the certificate. Default path is /etc/pki/tls/certs/server-cert.pem and the file name is set by the user.
- private_key: Represents the path to the private key. Default path is /etc/pki/tls/private/server-key.pem and the file name is set by the user.
- ca_cert_src: Represents the local CA certificate file path which is copied to the target host. If ca_cert is specified, it is copied to that location.
- cert_src: Represents the local certificate file path which is copied to the target host. If cert is specified, it is copied to that location.
- private_key_src: Represents the local key file path which is copied to the target host. If private_key is specified, it is copied to that location.
- pki_authmode: Accepts the authentication mode as name or fingerprint.
- permitted_servers: List of servers that the logging client will be allowed to connect to and send logs to over TLS.
- inputs: List of logging input dictionaries.
- outputs: List of logging output dictionaries.
Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Additional resources
- /usr/share/ansible/roles/rhel-system-roles.logging/README.md file
- /usr/share/doc/rhel-system-roles/logging/ directory
15.6.2. Configuring server logging with RELP
You can use the logging RHEL system role to configure logging on RHEL systems as a server that can receive logs from a remote logging system with RELP by running an Ansible playbook.
This procedure configures RELP on all hosts in the server group in the Ansible inventory. The RELP configuration uses TLS to encrypt the message transmission for secure transfer of logs over the network.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
Create a playbook file, for example ~/playbook.yml, with the following content:

```yaml
---
- name: Deploying remote input and remote_files output
  hosts: managed-node-01.example.com
  roles:
    - rhel-system-roles.logging
  vars:
    logging_inputs:
      - name: relp_server
        type: relp
        port: 20514
        tls: true
        ca_cert: /etc/pki/tls/certs/ca.pem
        cert: /etc/pki/tls/certs/server-cert.pem
        private_key: /etc/pki/tls/private/server-key.pem
        pki_authmode: name
        permitted_clients:
          - '*example.client.com'
    logging_outputs:
      - name: remote_files_output
        type: remote_files
    logging_flows:
      - name: example_flow
        inputs: relp_server
        outputs: remote_files_output
```
The playbook uses the following settings:
- port: Port number on which the remote logging system is listening.
- tls: Ensures secure transfer of logs over the network. If you do not want a secure wrapper, you can set the tls variable to false. By default, the tls parameter is set to true while working with RELP and requires key and certificate files as the {ca_cert, cert, private_key} triplet, the {ca_cert_src, cert_src, private_key_src} triplet, or both:
  - If the {ca_cert_src, cert_src, private_key_src} triplet is set, the default locations /etc/pki/tls/certs and /etc/pki/tls/private are used as the destination on the managed node to transfer files from the control node. In this case, the file names are identical to the original ones in the triplet.
  - If the {ca_cert, cert, private_key} triplet is set, the files are expected to be on the default path before the logging configuration.
  - If both triplets are set, the files are transferred from the local path on the control node to the specific path on the managed node.
- ca_cert: Represents the path to the CA certificate. Default path is /etc/pki/tls/certs/ca.pem and the file name is set by the user.
- cert: Represents the path to the certificate. Default path is /etc/pki/tls/certs/server-cert.pem and the file name is set by the user.
- private_key: Represents the path to the private key. Default path is /etc/pki/tls/private/server-key.pem and the file name is set by the user.
- ca_cert_src: Represents the local CA certificate file path which is copied to the target host. If ca_cert is specified, it is copied to that location.
- cert_src: Represents the local certificate file path which is copied to the target host. If cert is specified, it is copied to that location.
- private_key_src: Represents the local key file path which is copied to the target host. If private_key is specified, it is copied to that location.
- pki_authmode: Accepts the authentication mode as name or fingerprint.
- permitted_clients: List of clients that the logging server will allow to connect and send logs over TLS.
- inputs: List of logging input dictionaries.
- outputs: List of logging output dictionaries.
Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Additional resources
- /usr/share/ansible/roles/rhel-system-roles.logging/README.md file
- /usr/share/doc/rhel-system-roles/logging/ directory