Chapter 8. Configuring logging
Most services in Red Hat Enterprise Linux log status messages, warnings, and errors. You can use the rsyslogd service to log these entries to local files or to a remote logging server.
8.1. Configuring a remote logging solution
To ensure that logs from various machines in your environment are recorded centrally on a logging server, you can configure the Rsyslog application to record logs that fit specific criteria from the client system to the server.
8.1.1. The Rsyslog logging service
The Rsyslog application, in combination with the systemd-journald service, provides local and remote logging support in Red Hat Enterprise Linux. The rsyslogd daemon continuously reads syslog messages received by the systemd-journald service from the Journal. rsyslogd then filters and processes these syslog events and records them to rsyslog log files or forwards them to other services according to its configuration.
The rsyslogd daemon also provides extended filtering, encryption-protected relaying of messages, input and output modules, and support for transport over the TCP and UDP protocols.
In /etc/rsyslog.conf, the main configuration file for rsyslog, you can specify the rules according to which rsyslogd handles messages. Generally, you classify messages by their source and topic (facility) and by urgency (priority), and then assign an action to perform when a message matches these criteria.
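For example, a rule that matches a facility and priority and assigns an action can be written either in the traditional selector syntax or in RainerScript. This is an illustrative sketch; the mail log file path is an assumption, not taken from a default configuration:
# Traditional syntax: write all authpriv messages to a dedicated file
authpriv.*                                   /var/log/secure
# RainerScript syntax: write mail messages of priority err and higher to another file
mail.err    action(type="omfile" file="/var/log/mail-errors.log")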
In /etc/rsyslog.conf, you can also see a list of log files maintained by rsyslogd. Most log files are located in the /var/log/ directory. Some applications, such as httpd and samba, store their log files in a subdirectory within /var/log/.
Additional resources
- The rsyslogd(8) and rsyslog.conf(5) man pages.
- Documentation installed with the rsyslog-doc package in the /usr/share/doc/rsyslog/html/index.html file.
8.1.2. Installing Rsyslog documentation
The Rsyslog application has extensive online documentation that is available at https://www.rsyslog.com/doc/, but you can also install the rsyslog-doc documentation package locally.
Prerequisites
- You have activated the AppStream repository on your system.
- You are authorized to install new packages using sudo.
Procedure
Install the rsyslog-doc package:
# yum install rsyslog-doc
Verification
Open the /usr/share/doc/rsyslog/html/index.html file in a browser of your choice, for example:
$ firefox /usr/share/doc/rsyslog/html/index.html &
8.1.3. Configuring a server for remote logging over TCP
The Rsyslog application enables you to both run a logging server and configure individual systems to send their log files to the logging server. To use remote logging through TCP, configure both the server and the client. The server collects and analyzes the logs sent by one or more client systems.
With the Rsyslog application, you can maintain a centralized logging system where log messages are forwarded to a server over the network. To avoid message loss when the server is not available, you can configure an action queue for the forwarding action. This way, messages that failed to be sent are stored locally until the server is reachable again. Note that such queues cannot be configured for connections using the UDP protocol.
The omfwd plug-in provides forwarding over UDP or TCP. The default protocol is UDP. Because the plug-in is built in, it does not have to be loaded.
By default, rsyslog uses TCP on port 514.
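As a minimal illustration of the built-in omfwd forwarding action (the host name is an example and the catch-all *.* selector is an assumption; the full client-side procedure, including the action queue, follows in a later section):
*.* action(type="omfwd" target="logserver.example.com" port="514" protocol="tcp")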
Prerequisites
- Rsyslog is installed on the server system.
- You are logged in as root on the server.
- The policycoreutils-python-utils package is installed for the optional step using the semanage command.
- The firewalld service is running.
Procedure
Optional: To use a different port for rsyslog traffic, add the syslogd_port_t SELinux type to the port. For example, enable port 30514:
# semanage port -a -t syslogd_port_t -p tcp 30514
Optional: To use a different port for rsyslog traffic, configure firewalld to allow incoming rsyslog traffic on that port. For example, allow TCP traffic on port 30514:
# firewall-cmd --zone=<zone-name> --permanent --add-port=30514/tcp
success
# firewall-cmd --reload
Create a new file in the /etc/rsyslog.d/ directory named, for example, remotelog.conf, and insert the following content:
# Define templates before the rules that use them
# Per-Host templates for remote systems
template(name="TmplAuthpriv" type="list") {
    constant(value="/var/log/remote/auth/")
    property(name="hostname")
    constant(value="/")
    property(name="programname" SecurePath="replace")
    constant(value=".log")
    }
template(name="TmplMsg" type="list") {
    constant(value="/var/log/remote/msg/")
    property(name="hostname")
    constant(value="/")
    property(name="programname" SecurePath="replace")
    constant(value=".log")
    }

# Provides TCP syslog reception
module(load="imtcp")

# Adding this ruleset to process remote messages
ruleset(name="remote1"){
    authpriv.*   action(type="omfile" DynaFile="TmplAuthpriv")
    *.info;mail.none;authpriv.none;cron.none action(type="omfile" DynaFile="TmplMsg")
}

input(type="imtcp" port="30514" ruleset="remote1")
Save the changes to the /etc/rsyslog.d/remotelog.conf file.
Test the syntax of the /etc/rsyslog.conf file:
# rsyslogd -N 1
rsyslogd: version 8.1911.0-2.el8, config validation run...
rsyslogd: End of config validation run. Bye.
Make sure the rsyslog service is running and enabled on the logging server:
# systemctl status rsyslog
Restart the rsyslog service:
# systemctl restart rsyslog
Optional: If rsyslog is not enabled, ensure the rsyslog service starts automatically after reboot:
# systemctl enable rsyslog
Your log server is now configured to receive and store log files from the other systems in your environment.
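As a quick check that the listener is active, you can verify the port and send a test message from any client that can reach the server. This is a hedged sketch that assumes the example port 30514 from the steps above; the server name is a placeholder:
# ss -tlnp | grep 30514
# logger --server <server.example.com> --tcp --port 30514 "remote logging test"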
Additional resources
- The rsyslogd(8), rsyslog.conf(5), semanage(8), and firewall-cmd(1) man pages.
- Documentation installed with the rsyslog-doc package in the /usr/share/doc/rsyslog/html/index.html file.
8.1.4. Configuring remote logging to a server over TCP
You can configure a system for forwarding log messages to a server over the TCP protocol. The omfwd plug-in provides forwarding over UDP or TCP. The default protocol is UDP. Because the plug-in is built in, you do not have to load it.
Prerequisites
- The rsyslog package is installed on the client systems that should report to the server.
- You have configured the server for remote logging.
- The specified port is permitted in SELinux and open in the firewall.
- The system contains the policycoreutils-python-utils package, which provides the semanage command for adding a non-standard port to the SELinux configuration.
Procedure
Create a new file in the /etc/rsyslog.d/ directory named, for example, 10-remotelog.conf, and insert the following content:
*.* action(type="omfwd"
      queue.type="linkedlist"
      queue.filename="example_fwd"
      action.resumeRetryCount="-1"
      queue.saveOnShutdown="on"
      target="example.com" port="30514" protocol="tcp"
     )
Where:
- The queue.type="linkedlist" setting enables a LinkedList in-memory queue.
- The queue.filename setting defines a disk storage. The backup files are created with the example_fwd prefix in the working directory specified by the preceding global workDirectory directive (see the sketch after this list).
- The action.resumeRetryCount="-1" setting prevents rsyslog from dropping messages when retrying to connect if the server is not responding.
- The queue.saveOnShutdown="on" setting saves in-memory data if rsyslog shuts down.
The last line forwards all received messages to the logging server. Port specification is optional.
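The disk-assisted queue above relies on a global working directory where rsyslog can create the example_fwd spool files. The stock /etc/rsyslog.conf on RHEL typically already contains such a directive; a minimal sketch, assuming the conventional path, looks like this:
# Where rsyslog stores queue spool files (such as example_fwd*)
global(workDirectory="/var/lib/rsyslog")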
With this configuration, rsyslog sends messages to the server but keeps the messages in memory if the remote server is not reachable. A file on disk is created only if rsyslog runs out of the configured memory queue space or needs to shut down, which benefits system performance.
Note: Rsyslog processes configuration files in the /etc/rsyslog.d/ directory in lexical order.
Restart the rsyslog service:
# systemctl restart rsyslog
Verification
To verify that the client system sends messages to the server, follow these steps:
On the client system, send a test message:
# logger test
On the server system, view the /var/log/remote/msg/hostname/root.log log, for example:
# cat /var/log/remote/msg/hostname/root.log
Feb 25 03:53:17 hostname root[6064]: test
Where hostname is the host name of the client system. Note that the log contains the user name of the user that entered the logger command, in this case root.
Additional resources
- rsyslogd(8), rsyslog.conf(5), semanage(8), and firewall-cmd(1) man pages.
- Documentation installed with the rsyslog-doc package in the /usr/share/doc/rsyslog/html/index.html file.
8.1.5. Configuring TLS-encrypted remote logging
By default, Rsyslog sends remote-logging communication in plain text format. If your scenario requires securing this communication channel, you can encrypt it by using TLS.
To use encrypted transport through TLS, configure both the server and the client. The server collects and analyzes the logs sent by one or more client systems.
You can use either the ossl network stream driver (OpenSSL) or the gtls stream driver (GnuTLS).
If you have a separate system with higher security, for example, a system that is not connected to any network or has stricter authorizations, use the separate system as the certifying authority (CA).
You can customize your connection settings with stream drivers on the server side at the global, module, and input levels, and on the client side at the global and action levels. The more specific configuration overrides the more general configuration. This means, for example, that you can use ossl in the global settings for most connections and gtls in the input and action settings only for specific connections.
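As an illustrative sketch of that layering (not part of the procedures below; the port is an assumption), a server could set a global default driver and override it for one specific listener:
# Global default stream driver used unless a more specific setting overrides it
global(DefaultNetstreamDriver="ossl")

# One TCP listener that overrides the default and uses GnuTLS instead
module(load="imtcp" StreamDriver.Name="gtls" StreamDriver.Mode="1" StreamDriver.AuthMode="x509/name")
input(type="imtcp" port="6514")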
Prerequisites
- You have root access to both the client and server systems.
- The following packages are installed on the server and the client systems:
  - The rsyslog package.
  - For the ossl network stream driver, the rsyslog-openssl package.
  - For the gtls network stream driver, the rsyslog-gnutls package.
  - For generating certificates by using the certtool command, the gnutls-utils package.
- On your logging server, the following certificates are in the /etc/pki/ca-trust/source/anchors/ directory and your system configuration is updated by using the update-ca-trust command:
  - ca-cert.pem - a CA certificate that can verify keys and certificates on logging servers and clients.
  - server-cert.pem - a public key of the logging server.
  - server-key.pem - a private key of the logging server.
- On your logging clients, the following certificates are in the /etc/pki/ca-trust/source/anchors/ directory and your system configuration is updated by using update-ca-trust:
  - ca-cert.pem - a CA certificate that can verify keys and certificates on logging servers and clients.
  - client-cert.pem - a public key of a client.
  - client-key.pem - a private key of a client.
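The procedure below assumes that these certificate files already exist. If you still need to create them, the following is one possible sketch using the certtool utility from the gnutls-utils package; certtool prompts for certificate details interactively unless you supply a template file, and the file names match the ones listed above:
# On the CA system: create the CA key and a self-signed CA certificate
certtool --generate-privkey --outfile ca-key.pem
certtool --generate-self-signed --load-privkey ca-key.pem --outfile ca-cert.pem

# For the logging server: create a private key, a certificate request, and a CA-signed certificate
certtool --generate-privkey --outfile server-key.pem
certtool --generate-request --load-privkey server-key.pem --outfile server-request.pem
certtool --generate-certificate --load-request server-request.pem \
         --load-ca-certificate ca-cert.pem --load-ca-privkey ca-key.pem \
         --outfile server-cert.pem
Repeat the request and signing steps with client-key.pem and client-cert.pem for each logging client, copy the resulting files to /etc/pki/ca-trust/source/anchors/ on the respective systems, and run update-ca-trust.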
Procedure
Configure the server for receiving encrypted logs from your client systems:
Create a new file in the /etc/rsyslog.d/ directory named, for example, securelogser.conf. To encrypt the communication, the configuration file must contain paths to certificate files on your server, a selected authentication method, and a stream driver that supports TLS encryption.
Add the following lines to the /etc/rsyslog.d/securelogser.conf file:
# Set certificate files
global(
  DefaultNetstreamDriverCAFile="/etc/pki/ca-trust/source/anchors/ca-cert.pem"
  DefaultNetstreamDriverCertFile="/etc/pki/ca-trust/source/anchors/server-cert.pem"
  DefaultNetstreamDriverKeyFile="/etc/pki/ca-trust/source/anchors/server-key.pem"
)

# TCP listener
module(
  load="imtcp"
  PermittedPeer=["client1.example.com", "client2.example.com"]
  StreamDriver.AuthMode="x509/name"
  StreamDriver.Mode="1"
  StreamDriver.Name="ossl"
)

# Start up listener at port 514
input(
  type="imtcp"
  port="514"
)
Note: If you prefer the GnuTLS driver, use the StreamDriver.Name="gtls" configuration option. See the documentation installed with the rsyslog-doc package for more information about less strict authentication modes than x509/name.
Save the changes to the /etc/rsyslog.d/securelogser.conf file.
Verify the syntax of the /etc/rsyslog.conf file and any files in the /etc/rsyslog.d/ directory:
# rsyslogd -N 1
rsyslogd: version 8.1911.0-2.el8, config validation run (level 1)...
rsyslogd: End of config validation run. Bye.
Make sure the rsyslog service is running and enabled on the logging server:
# systemctl status rsyslog
Restart the rsyslog service:
# systemctl restart rsyslog
Optional: If Rsyslog is not enabled, ensure the rsyslog service starts automatically after reboot:
# systemctl enable rsyslog
Configure clients for sending encrypted logs to the server:
On a client system, create a new file in the /etc/rsyslog.d/ directory named, for example, securelogcli.conf.
Add the following lines to the /etc/rsyslog.d/securelogcli.conf file:
# Set certificate files
global(
  DefaultNetstreamDriverCAFile="/etc/pki/ca-trust/source/anchors/ca-cert.pem"
  DefaultNetstreamDriverCertFile="/etc/pki/ca-trust/source/anchors/client-cert.pem"
  DefaultNetstreamDriverKeyFile="/etc/pki/ca-trust/source/anchors/client-key.pem"
)

# Set up the action for all messages
*.* action(
  type="omfwd"
  StreamDriver="ossl"
  StreamDriverMode="1"
  StreamDriverPermittedPeers="server.example.com"
  StreamDriverAuthMode="x509/name"
  target="server.example.com" port="514" protocol="tcp"
)
Note: If you prefer the GnuTLS driver, use the StreamDriver="gtls" configuration option.
Save the changes to the /etc/rsyslog.d/securelogcli.conf file.
Verify the syntax of the /etc/rsyslog.conf file and other files in the /etc/rsyslog.d/ directory:
# rsyslogd -N 1
rsyslogd: version 8.1911.0-2.el8, config validation run (level 1)...
rsyslogd: End of config validation run. Bye.
Make sure the rsyslog service is running and enabled on the client system:
# systemctl status rsyslog
Restart the rsyslog service:
# systemctl restart rsyslog
Optional: If Rsyslog is not enabled, ensure the rsyslog service starts automatically after reboot:
# systemctl enable rsyslog
Verification
To verify that the client system sends messages to the server, follow these steps:
On the client system, send a test message:
# logger test
On the server system, view the log that stores messages from remote hosts, for example:
# cat /var/log/remote/msg/<hostname>/root.log
Feb 25 03:53:17 <hostname> root[6064]: test
Where <hostname> is the hostname of the client system. Note that the log contains the user name of the user that entered the logger command, in this case root.
Additional resources
- certtool(1), openssl(1), update-ca-trust(8), rsyslogd(8), and rsyslog.conf(5) man pages.
- Documentation installed with the rsyslog-doc package at /usr/share/doc/rsyslog/html/index.html.
- Using the logging system role with TLS.
8.1.6. Configuring a server for receiving remote logging information over UDP
The Rsyslog application enables you to configure a system to receive logging information from remote systems. To use remote logging through UDP, configure both the server and the client. The receiving server collects and analyzes the logs sent by one or more client systems. By default, rsyslog uses UDP on port 514 to receive log information from remote systems.
Follow this procedure to configure a server for collecting and analyzing logs sent by one or more client systems over the UDP protocol.
Prerequisites
- Rsyslog is installed on the server system.
- You are logged in as root on the server.
- The policycoreutils-python-utils package is installed for the optional step using the semanage command.
- The firewalld service is running.
Procedure
Optional: To use a different port for rsyslog traffic than the default port 514:
Add the syslogd_port_t SELinux type to the SELinux policy configuration, replacing portno with the port number you want rsyslog to use:
# semanage port -a -t syslogd_port_t -p udp portno
Configure firewalld to allow incoming rsyslog traffic, replacing portno with the port number and zone with the zone you want rsyslog to use:
# firewall-cmd --zone=zone --permanent --add-port=portno/udp
success
Reload the firewall rules:
# firewall-cmd --reload
Create a new .conf file in the /etc/rsyslog.d/ directory, for example, remotelogserv.conf, and insert the following content:
# Define templates before the rules that use them
# Per-Host templates for remote systems
template(name="TmplAuthpriv" type="list") {
    constant(value="/var/log/remote/auth/")
    property(name="hostname")
    constant(value="/")
    property(name="programname" SecurePath="replace")
    constant(value=".log")
    }
template(name="TmplMsg" type="list") {
    constant(value="/var/log/remote/msg/")
    property(name="hostname")
    constant(value="/")
    property(name="programname" SecurePath="replace")
    constant(value=".log")
    }

# Provides UDP syslog reception
module(load="imudp")

# This ruleset processes remote messages
ruleset(name="remote1"){
    authpriv.*   action(type="omfile" DynaFile="TmplAuthpriv")
    *.info;mail.none;authpriv.none;cron.none action(type="omfile" DynaFile="TmplMsg")
}

input(type="imudp" port="514" ruleset="remote1")
Where 514 is the port number rsyslog uses by default. You can specify a different port instead.
Verify the syntax of the /etc/rsyslog.conf file and all .conf files in the /etc/rsyslog.d/ directory:
# rsyslogd -N 1
rsyslogd: version 8.1911.0-2.el8, config validation run...
Restart the rsyslog service:
# systemctl restart rsyslog
Optional: If rsyslog is not enabled, ensure the rsyslog service starts automatically after reboot:
# systemctl enable rsyslog
Additional resources
- rsyslogd(8), rsyslog.conf(5), semanage(8), and firewall-cmd(1) man pages.
- Documentation installed with the rsyslog-doc package in the /usr/share/doc/rsyslog/html/index.html file.
8.1.7. Configuring remote logging to a server over UDP
You can configure a system for forwarding log messages to a server over the UDP protocol. The omfwd plug-in provides forwarding over UDP or TCP. The default protocol is UDP. Because the plug-in is built in, you do not have to load it.
Prerequisites
- The rsyslog package is installed on the client systems that should report to the server.
- You have configured the server for remote logging as described in Configuring a server for receiving remote logging information over UDP.
Procedure
Create a new .conf file in the /etc/rsyslog.d/ directory, for example, 10-remotelogcli.conf, and insert the following content:
*.* action(type="omfwd"
      queue.type="linkedlist"
      queue.filename="example_fwd"
      action.resumeRetryCount="-1"
      queue.saveOnShutdown="on"
      target="example.com" port="portno" protocol="udp"
     )
Where:
- The queue.type="linkedlist" setting enables a LinkedList in-memory queue.
- The queue.filename setting defines a disk storage. The backup files are created with the example_fwd prefix in the working directory specified by the preceding global workDirectory directive.
- The action.resumeRetryCount="-1" setting prevents rsyslog from dropping messages when retrying to connect if the server is not responding.
- The queue.saveOnShutdown="on" setting saves in-memory data if rsyslog shuts down.
- The portno value is the port number you want rsyslog to use. The default value is 514.
The last line forwards all received messages to the logging server. Port specification is optional.
With this configuration, rsyslog sends messages to the server but keeps the messages in memory if the remote server is not reachable. A file on disk is created only if rsyslog runs out of the configured memory queue space or needs to shut down, which benefits system performance.
Note: Rsyslog processes configuration files in the /etc/rsyslog.d/ directory in lexical order.
Restart the rsyslog service:
# systemctl restart rsyslog
Optional: If rsyslog is not enabled, ensure the rsyslog service starts automatically after reboot:
# systemctl enable rsyslog
Verification
To verify that the client system sends messages to the server, follow these steps:
On the client system, send a test message:
# logger test
On the server system, view the /var/log/remote/msg/hostname/root.log log, for example:
# cat /var/log/remote/msg/hostname/root.log
Feb 25 03:53:17 hostname root[6064]: test
Where hostname is the host name of the client system. Note that the log contains the user name of the user that entered the logger command, in this case root.
Additional resources
- rsyslogd(8) and rsyslog.conf(5) man pages.
- Documentation installed with the rsyslog-doc package at /usr/share/doc/rsyslog/html/index.html.
8.1.8. Load balancing helper in Rsyslog
The RebindInterval setting specifies an interval at which the current connection is broken and re-established. This setting applies to TCP, UDP, and RELP traffic. The load balancers perceive it as a new connection and forward the messages to another physical target system.
The RebindInterval setting is helpful in scenarios where a target system has changed its IP address. The Rsyslog application caches the IP address when the connection is established, and therefore the messages keep going to the same server. If the IP address changes, the UDP packets will be lost until the Rsyslog service restarts. Re-establishing the connection ensures that the IP address is resolved by DNS again.
action(type="omfwd" protocol="tcp" RebindInterval="250" target="example.com" port="514" ...)
action(type="omfwd" protocol="udp" RebindInterval="250" target="example.com" port="514" ...)
action(type="omrelp" RebindInterval="250" target="example.com" port="6514" ...)
8.1.9. Configuring reliable remote logging
With the Reliable Event Logging Protocol (RELP), you can send and receive syslog messages over TCP with a much reduced risk of message loss. RELP provides reliable delivery of event messages, which makes it useful in environments where message loss is not acceptable. To use RELP, configure the imrelp input module, which runs on the server and receives the logs, and the omrelp output module, which runs on the client and sends logs to the logging server.
Prerequisites
- You have installed the rsyslog, librelp, and rsyslog-relp packages on the server and the client systems.
- The specified port is permitted in SELinux and open in the firewall (see the sketch after this list).
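If you still need to permit the RELP port, a hedged sketch of the SELinux and firewall steps follows; replace <port> with the port you plan to use, and skip the semanage step if the port is already defined for syslogd_port_t in your policy:
# semanage port -a -t syslogd_port_t -p tcp <port>
# firewall-cmd --permanent --add-port=<port>/tcp
# firewall-cmd --reload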
Procedure
Configure the client system for reliable remote logging:
On the client system, create a new .conf file in the /etc/rsyslog.d/ directory named, for example, relpclient.conf, and insert the following content:
module(load="omrelp")
*.* action(type="omrelp" target="_target_IP_" port="_target_port_")
Where:
- target_IP is the IP address of the logging server.
- target_port is the port of the logging server.
Save the changes to the /etc/rsyslog.d/relpclient.conf file.
Restart the rsyslog service:
# systemctl restart rsyslog
Optional: If rsyslog is not enabled, ensure the rsyslog service starts automatically after reboot:
# systemctl enable rsyslog
Configure the server system for reliable remote logging:
On the server system, create a new .conf file in the /etc/rsyslog.d/ directory named, for example, relpserv.conf, and insert the following content:
ruleset(name="relp"){
    *.* action(type="omfile" file="_log_path_")
}

module(load="imrelp")
input(type="imrelp" port="_target_port_" ruleset="relp")
Where:
- log_path specifies the path for storing messages.
- target_port is the port of the logging server. Use the same value as in the client configuration file.
Save the changes to the /etc/rsyslog.d/relpserv.conf file.
Restart the rsyslog service:
# systemctl restart rsyslog
Optional: If rsyslog is not enabled, ensure the rsyslog service starts automatically after reboot:
# systemctl enable rsyslog
Verification
To verify that the client system sends messages to the server, follow these steps:
On the client system, send a test message:
# logger test
On the server system, view the log at the specified log_path, for example:
# cat /var/log/remote/msg/hostname/root.log
Feb 25 03:53:17 hostname root[6064]: test
Where hostname is the host name of the client system. Note that the log contains the user name of the user that entered the logger command, in this case root.
Additional resources
- rsyslogd(8) and rsyslog.conf(5) man pages.
- Documentation installed with the rsyslog-doc package in the /usr/share/doc/rsyslog/html/index.html file.
8.1.10. Supported Rsyslog modules
To expand the functionality of the Rsyslog application, you can use specific modules. Modules provide additional inputs (Input Modules), outputs (Output Modules), and other functionalities. A module can also provide additional configuration directives that become available after you load the module.
You can list the input and output modules installed on your system by entering the following command:
# ls /usr/lib64/rsyslog/{i,o}m*
You can view the list of all available rsyslog modules in the /usr/share/doc/rsyslog/html/configuration/modules/idx_output.html file after you install the rsyslog-doc package.
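For example, loading an input module makes its configuration directives available in the same configuration file. This sketch uses the imfile module; the watched file path and tag are assumptions for illustration:
# Load the file input module, then use its directives to follow a text log file
module(load="imfile")
input(type="imfile" File="/var/log/myapp/app.log" Tag="myapp:")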
8.1.11. Configuring the netconsole service to log kernel messages to a remote host
When logging to disk or using a serial console is not possible, you can use the netconsole kernel module and the same-named service to log kernel messages over a network to a remote rsyslog service.
Prerequisites
- A system log service, such as rsyslog, is installed on the remote host.
- The remote system log service is configured to receive incoming log entries from this host (a minimal receiver sketch follows this list).
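A minimal receiver sketch for the remote host, assuming netconsole sends to the default UDP syslog port 514: a drop-in such as /etc/rsyslog.d/netconsole-receive.conf (the file name is an assumption) could contain the following. Also make sure UDP port 514 is open in the remote host's firewall.
# Accept UDP syslog messages, including those sent by netconsole clients
module(load="imudp")
input(type="imudp" port="514")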
Procedure
Install the netconsole-service package:
# yum install netconsole-service
Edit the /etc/sysconfig/netconsole file and set the SYSLOGADDR parameter to the IP address of the remote host:
SYSLOGADDR=192.0.2.1
Enable and start the netconsole service:
# systemctl enable --now netconsole
Verification
- Display the /var/log/messages file on the remote system log server.
8.1.12. Additional resources
- Documentation installed with the rsyslog-doc package in the /usr/share/doc/rsyslog/html/index.html file
- rsyslog.conf(5) and rsyslogd(8) man pages on your system
man pages on your system - Configuring system logging without journald or with minimized journald usage Knowledgebase article
- Negative effects of the RHEL default logging setup on performance and their mitigations Knowledgebase article
- The Using the Logging system role chapter
8.2. Using the logging system role
As a system administrator, you can use the logging system role to configure a Red Hat Enterprise Linux host as a logging server to collect logs from many client systems.
8.2.1. Filtering local log messages by using the logging RHEL system role
You can use the property-based filter of the logging RHEL system role to filter your local log messages based on various conditions. As a result, you can achieve, for example:
- Log clarity: In a high-traffic environment, logs can grow rapidly. The focus on specific messages, like errors, can help to identify problems faster.
- Optimized system performance: An excessive amount of logs is usually connected with system performance degradation. Selective logging for only the important events can prevent resource depletion, which enables your systems to run more efficiently.
- Enhanced security: Efficient filtering through security messages, like system errors and failed logins, helps to capture only the relevant logs. This is important for detecting breaches and meeting compliance standards.
Prerequisites
- You have prepared the control node and the managed nodes
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Deploy the logging solution
  hosts: managed-node-01.example.com
  tasks:
    - name: Filter logs based on a specific value they contain
      ansible.builtin.include_role:
        name: rhel-system-roles.logging
      vars:
        logging_inputs:
          - name: files_input
            type: basics
        logging_outputs:
          - name: files_output0
            type: files
            property: msg
            property_op: contains
            property_value: error
            path: /var/log/errors.log
          - name: files_output1
            type: files
            property: msg
            property_op: "!contains"
            property_value: error
            path: /var/log/others.log
        logging_flows:
          - name: flow0
            inputs: [files_input]
            outputs: [files_output0, files_output1]
The settings specified in the example playbook include the following:
logging_inputs
Defines a list of logging input dictionaries. The type: basics option covers inputs from the systemd journal or Unix socket.
logging_outputs
Defines a list of logging output dictionaries. The type: files option supports storing logs in local files, usually in the /var/log/ directory. The property: msg, property_op: contains, and property_value: error options specify that all logs that contain the error string are stored in the /var/log/errors.log file. The property: msg, property_op: "!contains", and property_value: error options specify that all other logs are put in the /var/log/others.log file. You can replace the error value with the string by which you want to filter.
logging_flows
Defines a list of logging flow dictionaries to specify relationships between logging_inputs and logging_outputs. The inputs: [files_input] option specifies a list of inputs, from which processing of logs starts. The outputs: [files_output0, files_output1] option specifies a list of outputs, to which the logs are sent.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.logging/README.md file on the control node.
Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
On the managed node, test the syntax of the /etc/rsyslog.conf file:
# rsyslogd -N 1
rsyslogd: version 8.1911.0-6.el8, config validation run...
rsyslogd: End of config validation run. Bye.
On the managed node, verify that the system sends messages that contain the error string to the log:
Send a test message:
# logger error
View the /var/log/errors.log log, for example:
# cat /var/log/errors.log
Aug 5 13:48:31 hostname root[6778]: error
Where hostname is the host name of the client system. Note that the log contains the user name of the user that entered the logger command, in this case root.
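Optionally, you can also check the complementary output. Assuming the example playbook above was applied unchanged, a message that does not contain the error string should be written to /var/log/others.log instead:
# logger "routine status message"
# tail -n 1 /var/log/others.log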
Additional resources
- /usr/share/ansible/roles/rhel-system-roles.logging/README.md file
- /usr/share/doc/rhel-system-roles/logging/ directory
- rsyslog.conf(5) and syslog(3) manual pages
8.2.2. Applying a remote logging solution by using the logging RHEL system role
You can use the logging RHEL system role to configure a remote logging solution, where one or more clients take logs from the systemd-journal service and forward them to a remote server. The server receives remote input from the remote_rsyslog and remote_files configurations, and outputs the logs to local files in directories named by remote host names.
As a result, you can cover use cases where you need, for example:
As a result, you can cover use cases where you need for example:
- Centralized log management: Collecting, accessing, and managing log messages of multiple machines from a single storage point simplifies day-to-day monitoring and troubleshooting tasks. Also, this use case reduces the need to log into individual machines to check the log messages.
- Enhanced security: Storing log messages in one central place increases chances they are in a secure and tamper-proof environment. Such an environment makes it easier to detect and respond to security incidents more effectively and to meet audit requirements.
- Improved efficiency in log analysis: Correlating log messages from multiple systems is important for fast troubleshooting of complex problems that span multiple machines or services. That way you can quickly analyze and cross-reference events from different sources.
Prerequisites
- You have prepared the control node and the managed nodes
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- Define the ports in the SELinux policy of the server or client system and open the firewall for those ports. The default SELinux policy includes ports 601, 514, 6514, 10514, and 20514. To use a different port, modify the SELinux policy on the client and server systems.
Procedure
Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Deploy the logging solution
  hosts: managed-node-01.example.com
  tasks:
    - name: Configure the server to receive remote input
      ansible.builtin.include_role:
        name: rhel-system-roles.logging
      vars:
        logging_inputs:
          - name: remote_udp_input
            type: remote
            udp_ports: [ 601 ]
          - name: remote_tcp_input
            type: remote
            tcp_ports: [ 601 ]
        logging_outputs:
          - name: remote_files_output
            type: remote_files
        logging_flows:
          - name: flow_0
            inputs: [remote_udp_input, remote_tcp_input]
            outputs: [remote_files_output]

- name: Deploy the logging solution
  hosts: managed-node-02.example.com
  tasks:
    - name: Configure the server to output the logs to local files in directories named by remote host names
      ansible.builtin.include_role:
        name: rhel-system-roles.logging
      vars:
        logging_inputs:
          - name: basic_input
            type: basics
        logging_outputs:
          - name: forward_output0
            type: forwards
            severity: info
            target: <host1.example.com>
            udp_port: 601
          - name: forward_output1
            type: forwards
            facility: mail
            target: <host1.example.com>
            tcp_port: 601
        logging_flows:
          - name: flows0
            inputs: [basic_input]
            outputs: [forward_output0, forward_output1]
The settings specified in the first play of the example playbook include the following:
logging_inputs
Defines a list of logging input dictionaries. The type: remote option covers remote inputs from the other logging system over the network. The udp_ports: [ 601 ] option defines a list of UDP port numbers to monitor. The tcp_ports: [ 601 ] option defines a list of TCP port numbers to monitor. If both udp_ports and tcp_ports are set, udp_ports is used and tcp_ports is dropped.
logging_outputs
Defines a list of logging output dictionaries. The type: remote_files option makes the output store logs in local files per remote host and program name that originated the logs.
logging_flows
Defines a list of logging flow dictionaries to specify relationships between logging_inputs and logging_outputs. The inputs: [remote_udp_input, remote_tcp_input] option specifies a list of inputs, from which processing of logs starts. The outputs: [remote_files_output] option specifies a list of outputs, to which the logs are sent.
The settings specified in the second play of the example playbook include the following:
logging_inputs
Defines a list of logging input dictionaries. The type: basics option covers inputs from the systemd journal or Unix socket.
logging_outputs
Defines a list of logging output dictionaries. The type: forwards option supports sending logs to the remote logging server over the network. The severity: info option refers to log messages of informative importance. The facility: mail option refers to the type of system program that generates the log message. The target: <host1.example.com> option specifies the hostname of the remote logging server. The udp_port: 601 and tcp_port: 601 options define the UDP and TCP ports on which the remote logging server listens.
logging_flows
Defines a list of logging flow dictionaries to specify relationships between logging_inputs and logging_outputs. The inputs: [basic_input] option specifies a list of inputs, from which processing of logs starts. The outputs: [forward_output0, forward_output1] option specifies a list of outputs, to which the logs are sent.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.logging/README.md file on the control node.
Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
On both the client and the server system, test the syntax of the /etc/rsyslog.conf file:
# rsyslogd -N 1
rsyslogd: version 8.1911.0-6.el8, config validation run (level 1), master config /etc/rsyslog.conf
rsyslogd: End of config validation run. Bye.
Verify that the client system sends messages to the server:
On the client system, send a test message:
# logger test
On the server system, view the /var/log/<host2.example.com>/messages log, for example:
# cat /var/log/<host2.example.com>/messages
Aug 5 13:48:31 <host2.example.com> root[6778]: test
Where <host2.example.com> is the host name of the client system. Note that the log contains the user name of the user that entered the logger command, in this case root.
Additional resources
- /usr/share/ansible/roles/rhel-system-roles.logging/README.md file
- /usr/share/doc/rhel-system-roles/logging/ directory
- rsyslog.conf(5) and syslog(3) manual pages
8.2.3. Using the logging RHEL system role with TLS
Transport Layer Security (TLS) is a cryptographic protocol designed to allow secure communication over the computer network.
You can use the logging RHEL system role to configure a secure transfer of log messages, where one or more clients take logs from the systemd-journal service and transfer them to a remote server while using TLS.
Typically, TLS for transferring logs in a remote logging solution is used when sending sensitive data over less trusted or public networks, such as the Internet. Also, by using certificates in TLS you can ensure that the client is forwarding logs to the correct and trusted server. This prevents attacks like "man-in-the-middle".
8.2.3.1. Configuring client logging with TLS
You can use the logging RHEL system role to configure logging on RHEL clients and transfer logs to a remote logging system using TLS encryption.
This procedure creates a private key and a certificate. Next, it configures TLS on all hosts in the clients group in the Ansible inventory. The TLS protocol encrypts the message transmission for secure transfer of logs over the network.
You do not have to call the certificate RHEL system role in the playbook to create the certificate. The logging RHEL system role calls it automatically when the logging_certificates variable is set.
In order for the CA to be able to sign the created certificate, the managed nodes must be enrolled in an IdM domain.
Prerequisites
- You have prepared the control node and the managed nodes
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- The managed nodes are enrolled in an IdM domain.
Procedure
Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Configure remote logging solution using TLS for secure transfer of logs
  hosts: managed-node-01.example.com
  tasks:
    - name: Deploying files input and forwards output with certs
      ansible.builtin.include_role:
        name: rhel-system-roles.logging
      vars:
        logging_certificates:
          - name: logging_cert
            dns: ['localhost', 'www.example.com']
            ca: ipa
        logging_pki_files:
          - ca_cert: /local/path/to/ca_cert.pem
            cert: /local/path/to/logging_cert.pem
            private_key: /local/path/to/logging_cert.pem
        logging_inputs:
          - name: input_name
            type: files
            input_log_path: /var/log/containers/*.log
        logging_outputs:
          - name: output_name
            type: forwards
            target: your_target_host
            tcp_port: 514
            tls: true
            pki_authmode: x509/name
            permitted_server: 'server.example.com'
        logging_flows:
          - name: flow_name
            inputs: [input_name]
            outputs: [output_name]
The settings specified in the example playbook include the following:
logging_certificates
The value of this parameter is passed on to certificate_requests in the certificate RHEL system role and used to create a private key and certificate.
logging_pki_files
Using this parameter, you can configure the paths and other settings that logging uses to find the CA, certificate, and key files used for TLS, specified with one or more of the following sub-parameters: ca_cert, ca_cert_src, cert, cert_src, private_key, private_key_src, and tls.
Note: If you are using logging_certificates to create the files on the managed node, do not use ca_cert_src, cert_src, and private_key_src, which are used to copy files not created by logging_certificates.
ca_cert
Represents the path to the CA certificate file on the managed node. The default path is /etc/pki/tls/certs/ca.pem and the file name is set by the user.
cert
Represents the path to the certificate file on the managed node. The default path is /etc/pki/tls/certs/server-cert.pem and the file name is set by the user.
private_key
Represents the path to the private key file on the managed node. The default path is /etc/pki/tls/private/server-key.pem and the file name is set by the user.
ca_cert_src
Represents the path to the CA certificate file on the control node, which is copied to the location specified by ca_cert on the target host. Do not use this if you are using logging_certificates.
cert_src
Represents the path to a certificate file on the control node, which is copied to the location specified by cert on the target host. Do not use this if you are using logging_certificates.
private_key_src
Represents the path to a private key file on the control node, which is copied to the location specified by private_key on the target host. Do not use this if you are using logging_certificates.
tls
Setting this parameter to true ensures secure transfer of logs over the network. If you do not want a secure wrapper, you can set tls: false.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.logging/README.md file on the control node.
Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Additional resources
- /usr/share/ansible/roles/rhel-system-roles.logging/README.md file
- /usr/share/doc/rhel-system-roles/logging/ directory
- /usr/share/ansible/roles/rhel-system-roles.certificate/README.md file
- /usr/share/doc/rhel-system-roles/certificate/ directory
- Requesting certificates using RHEL system roles.
- rsyslog.conf(5) and syslog(3) manual pages
8.2.3.2. Configuring server logging with TLS
You can use the logging RHEL system role to configure logging on RHEL servers and set them to receive logs from a remote logging system using TLS encryption.
This procedure creates a private key and a certificate. Next, it configures TLS on all hosts in the server group in the Ansible inventory.
You do not have to call the certificate RHEL system role in the playbook to create the certificate. The logging RHEL system role calls it automatically.
In order for the CA to be able to sign the created certificate, the managed nodes must be enrolled in an IdM domain.
Prerequisites
- You have prepared the control node and the managed nodes
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- The managed nodes are enrolled in an IdM domain.
Procedure
Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Configure remote logging solution using TLS for secure transfer of logs
  hosts: managed-node-01.example.com
  tasks:
    - name: Deploying remote input and remote_files output with certs
      ansible.builtin.include_role:
        name: rhel-system-roles.logging
      vars:
        logging_certificates:
          - name: logging_cert
            dns: ['localhost', 'www.example.com']
            ca: ipa
        logging_pki_files:
          - ca_cert: /local/path/to/ca_cert.pem
            cert: /local/path/to/logging_cert.pem
            private_key: /local/path/to/logging_cert.pem
        logging_inputs:
          - name: input_name
            type: remote
            tcp_ports: 514
            tls: true
            permitted_clients: ['clients.example.com']
        logging_outputs:
          - name: output_name
            type: remote_files
            remote_log_path: /var/log/remote/%FROMHOST%/%PROGRAMNAME:::secpath-replace%.log
            async_writing: true
            client_count: 20
            io_buffer_size: 8192
        logging_flows:
          - name: flow_name
            inputs: [input_name]
            outputs: [output_name]
The settings specified in the example playbook include the following:
logging_certificates
The value of this parameter is passed on to certificate_requests in the certificate RHEL system role and used to create a private key and certificate.
logging_pki_files
Using this parameter, you can configure the paths and other settings that logging uses to find the CA, certificate, and key files used for TLS, specified with one or more of the following sub-parameters: ca_cert, ca_cert_src, cert, cert_src, private_key, private_key_src, and tls.
Note: If you are using logging_certificates to create the files on the managed node, do not use ca_cert_src, cert_src, and private_key_src, which are used to copy files not created by logging_certificates.
ca_cert
Represents the path to the CA certificate file on the managed node. The default path is /etc/pki/tls/certs/ca.pem and the file name is set by the user.
cert
Represents the path to the certificate file on the managed node. The default path is /etc/pki/tls/certs/server-cert.pem and the file name is set by the user.
private_key
Represents the path to the private key file on the managed node. The default path is /etc/pki/tls/private/server-key.pem and the file name is set by the user.
ca_cert_src
Represents the path to the CA certificate file on the control node, which is copied to the location specified by ca_cert on the target host. Do not use this if you are using logging_certificates.
cert_src
Represents the path to a certificate file on the control node, which is copied to the location specified by cert on the target host. Do not use this if you are using logging_certificates.
private_key_src
Represents the path to a private key file on the control node, which is copied to the location specified by private_key on the target host. Do not use this if you are using logging_certificates.
tls
Setting this parameter to true ensures secure transfer of logs over the network. If you do not want a secure wrapper, you can set tls: false.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.logging/README.md file on the control node.
Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Additional resources
- /usr/share/ansible/roles/rhel-system-roles.logging/README.md file
- /usr/share/doc/rhel-system-roles/logging/ directory
- Requesting certificates using RHEL system roles.
- rsyslog.conf(5) and syslog(3) manual pages
8.2.4. Using the logging RHEL system role with RELP
Reliable Event Logging Protocol (RELP) is a networking protocol for data and message logging over the TCP network. It ensures reliable delivery of event messages and you can use it in environments that do not tolerate any message loss.
The RELP sender transfers log entries in the form of commands and the receiver acknowledges them once they are processed. To ensure consistency, RELP assigns a transaction number to each transferred command for any kind of message recovery.
You can consider a remote logging system in between the RELP Client and RELP Server. The RELP Client transfers the logs to the remote logging system and the RELP Server receives all the logs sent by the remote logging system. To achieve that use case, you can use the logging RHEL system role to configure the logging system to reliably send and receive log entries.
8.2.4.1. Configuring client logging with RELP
You can use the logging RHEL system role to configure a transfer of log messages stored locally to the remote logging system with RELP.
This procedure configures RELP on all hosts in the clients group in the Ansible inventory. The RELP configuration uses Transport Layer Security (TLS) to encrypt the message transmission for secure transfer of logs over the network.
Prerequisites
- You have prepared the control node and the managed nodes
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Configure client-side of the remote logging solution using RELP
  hosts: managed-node-01.example.com
  tasks:
    - name: Deploy basic input and RELP output
      ansible.builtin.include_role:
        name: rhel-system-roles.logging
      vars:
        logging_inputs:
          - name: basic_input
            type: basics
        logging_outputs:
          - name: relp_client
            type: relp
            target: logging.server.com
            port: 20514
            tls: true
            ca_cert: /etc/pki/tls/certs/ca.pem
            cert: /etc/pki/tls/certs/client-cert.pem
            private_key: /etc/pki/tls/private/client-key.pem
            pki_authmode: name
            permitted_servers:
              - '*.server.example.com'
        logging_flows:
          - name: example_flow
            inputs: [basic_input]
            outputs: [relp_client]
The settings specified in the example playbook include the following:
target
This is a required parameter that specifies the host name where the remote logging system is running.
port
The port number on which the remote logging system is listening.
tls
Ensures secure transfer of logs over the network. If you do not want a secure wrapper, you can set the tls variable to false. By default, the tls parameter is set to true while working with RELP and requires key/certificates and the triplets {ca_cert, cert, private_key} and/or {ca_cert_src, cert_src, private_key_src}:
- If the {ca_cert_src, cert_src, private_key_src} triplet is set, the default locations /etc/pki/tls/certs and /etc/pki/tls/private are used as the destination on the managed node to transfer files from the control node. In this case, the file names are identical to the original ones in the triplet.
- If the {ca_cert, cert, private_key} triplet is set, files are expected to be on the default path before the logging configuration.
- If both triplets are set, files are transferred from the local path on the control node to the specific path on the managed node.
ca_cert
Represents the path to the CA certificate. The default path is /etc/pki/tls/certs/ca.pem and the file name is set by the user.
cert
Represents the path to the certificate. The default path is /etc/pki/tls/certs/server-cert.pem and the file name is set by the user.
private_key
Represents the path to the private key. The default path is /etc/pki/tls/private/server-key.pem and the file name is set by the user.
ca_cert_src
Represents the local CA certificate file path, which is copied to the managed node. If ca_cert is specified, it is copied to that location.
cert_src
Represents the local certificate file path, which is copied to the managed node. If cert is specified, it is copied to that location.
private_key_src
Represents the local key file path, which is copied to the managed node. If private_key is specified, it is copied to that location.
pki_authmode
Accepts the authentication mode as name or fingerprint.
permitted_servers
List of servers that will be allowed by the logging client to connect and send logs over TLS.
inputs
List of logging input dictionaries.
outputs
List of logging output dictionaries.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.logging/README.md file on the control node.
Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Additional resources
- /usr/share/ansible/roles/rhel-system-roles.logging/README.md file
- /usr/share/doc/rhel-system-roles/logging/ directory
- rsyslog.conf(5) and syslog(3) manual pages
8.2.4.2. Configuring server logging with RELP
You can use the logging RHEL system role to configure a server for receiving log messages from the remote logging system with RELP.
This procedure configures RELP on all hosts in the server group in the Ansible inventory. The RELP configuration uses TLS to encrypt the message transmission for secure transfer of logs over the network.
Prerequisites
- You have prepared the control node and the managed nodes
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Configure server-side of the remote logging solution using RELP
  hosts: managed-node-01.example.com
  tasks:
    - name: Deploying remote input and remote_files output
      ansible.builtin.include_role:
        name: rhel-system-roles.logging
      vars:
        logging_inputs:
          - name: relp_server
            type: relp
            port: 20514
            tls: true
            ca_cert: /etc/pki/tls/certs/ca.pem
            cert: /etc/pki/tls/certs/server-cert.pem
            private_key: /etc/pki/tls/private/server-key.pem
            pki_authmode: name
            permitted_clients:
              - '*example.client.com'
        logging_outputs:
          - name: remote_files_output
            type: remote_files
        logging_flows:
          - name: example_flow
            inputs: relp_server
            outputs: remote_files_output
The settings specified in the example playbook include the following:
port
The port number on which the remote logging system is listening.
tls
Ensures secure transfer of logs over the network. If you do not want a secure wrapper, you can set the tls variable to false. By default, the tls parameter is set to true while working with RELP and requires key/certificates and the triplets {ca_cert, cert, private_key} and/or {ca_cert_src, cert_src, private_key_src}:
- If the {ca_cert_src, cert_src, private_key_src} triplet is set, the default locations /etc/pki/tls/certs and /etc/pki/tls/private are used as the destination on the managed node to transfer files from the control node. In this case, the file names are identical to the original ones in the triplet.
- If the {ca_cert, cert, private_key} triplet is set, files are expected to be on the default path before the logging configuration.
- If both triplets are set, files are transferred from the local path on the control node to the specific path on the managed node.
ca_cert
Represents the path to the CA certificate. The default path is /etc/pki/tls/certs/ca.pem and the file name is set by the user.
cert
Represents the path to the certificate. The default path is /etc/pki/tls/certs/server-cert.pem and the file name is set by the user.
private_key
Represents the path to the private key. The default path is /etc/pki/tls/private/server-key.pem and the file name is set by the user.
ca_cert_src
Represents the local CA certificate file path, which is copied to the managed node. If ca_cert is specified, it is copied to that location.
cert_src
Represents the local certificate file path, which is copied to the managed node. If cert is specified, it is copied to that location.
private_key_src
Represents the local key file path, which is copied to the managed node. If private_key is specified, it is copied to that location.
pki_authmode
Accepts the authentication mode as name or fingerprint.
permitted_clients
List of clients that will be allowed by the logging server to connect and send logs over TLS.
inputs
List of logging input dictionaries.
outputs
List of logging output dictionaries.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.logging/README.md file on the control node.
Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Additional resources
- /usr/share/ansible/roles/rhel-system-roles.logging/README.md file
- /usr/share/doc/rhel-system-roles/logging/ directory
- rsyslog.conf(5) and syslog(3) manual pages
8.2.5. Additional resources
- Preparing a control node and managed nodes to use RHEL system roles
- Documentation installed with the rhel-system-roles package in /usr/share/ansible/roles/rhel-system-roles.logging/README.html.
. - RHEL system roles
- ansible-playbook(1) man page on your system