Deploying different types of servers
Setting up and configuring web servers and reverse proxies, network file services, database servers, mail transport agents, and printers
Abstract
Providing feedback on Red Hat documentation
We appreciate your feedback on our documentation. Let us know how we can improve it.
Submitting feedback through Jira (account required)
- Log in to the Jira website.
- Click Create in the top navigation bar.
- Enter a descriptive title in the Summary field.
- Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation.
- Click Create at the bottom of the dialogue.
Chapter 1. Setting up the Apache HTTP web server
1.1. Introduction to the Apache HTTP web server
A web server is a network service that serves content to a client over the web. This typically means web pages, but any other documents can be served as well. Web servers are also known as HTTP servers, as they use the Hypertext Transfer Protocol (HTTP).
The Apache HTTP Server, httpd, is an open source web server developed by the Apache Software Foundation.
If you are upgrading from a previous release of Red Hat Enterprise Linux, you have to update the httpd service configuration accordingly. This section reviews some of the newly added features and guides you through the update of prior configuration files.
1.2. Notable changes in the Apache HTTP Server
The Apache HTTP Server has been updated from version 2.4.6 in RHEL 7 to version 2.4.37 in RHEL 8. This updated version includes several new features, but maintains backwards compatibility with the RHEL 7 version at the level of configuration and Application Binary Interface (ABI) of external modules.
New features include:
- HTTP/2 support is now provided by the mod_http2 package, which is a part of the httpd module.
- systemd socket activation is supported. See the httpd.socket(8) man page for more details.
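  For example, assuming the httpd.socket unit shipped with the httpd package, you can start the server on demand through socket activation (a minimal sketch; see the man page for details):
  # systemctl enable --now httpd.socket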
- Multiple new modules have been added:
  - mod_proxy_hcheck - a proxy health-check module
  - mod_proxy_uwsgi - a Web Server Gateway Interface (WSGI) proxy
  - mod_proxy_fdpass - provides support for passing the socket of the client to another process
  - mod_cache_socache - an HTTP cache using, for example, a memcache back end
  - mod_md - an ACME protocol SSL/TLS certificate service
- The following modules now load by default:
  - mod_request
  - mod_macro
  - mod_watchdog
- A new subpackage, httpd-filesystem, has been added, which contains the basic directory layout for the Apache HTTP Server, including the correct permissions for the directories.
- Instantiated service support, httpd@.service, has been introduced. See the httpd.service man page for more information.
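  For example, a second server instance named example could be managed with the standard systemd template syntax (the instance name is an illustrative assumption; the httpd.service man page describes how instance-specific configuration files are located):
  # systemctl start httpd@example.service
  # systemctl status httpd@example.service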
- A new httpd-init.service replaces the %post script to create a self-signed mod_ssl key pair.
- Automated TLS certificate provisioning and renewal using the Automatic Certificate Management Environment (ACME) protocol is now supported with the mod_md package (for use with certificate providers such as Let’s Encrypt).
- The Apache HTTP Server now supports loading TLS certificates and private keys from hardware security tokens directly from PKCS#11 modules. As a result, a mod_ssl configuration can now use PKCS#11 URLs to identify the TLS private key, and, optionally, the TLS certificate in the SSLCertificateKeyFile and SSLCertificateFile directives.
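  For example, a mod_ssl virtual host might reference a private key stored on a hardware token through a PKCS#11 URL instead of a file path (the token and object names below are illustrative placeholders):
  SSLCertificateFile "/etc/pki/tls/certs/example.com.crt"
  SSLCertificateKeyFile "pkcs11:token=example-token;object=example-key"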
- A new ListenFree directive in the /etc/httpd/conf/httpd.conf file is now supported.
  Similarly to the Listen directive, ListenFree provides information about IP addresses, ports, or IP address-and-port combinations that the server listens to. However, with ListenFree, the IP_FREEBIND socket option is enabled by default. Hence, httpd is allowed to bind to a nonlocal IP address or to an IP address that does not exist yet. This allows httpd to listen on a socket without requiring the underlying network interface or the specified dynamic IP address to be up at the time when httpd is trying to bind to it.
  Note that the ListenFree directive is currently available only in RHEL 8.
  For more details on ListenFree, see the following table:
  Table 1.1. ListenFree directive’s syntax, status, and modules
  | Syntax | Status | Modules |
  |---|---|---|
  | ListenFree [IP-address:]portnumber [protocol] | MPM | event, worker, prefork, mpm_winnt, mpm_netware, mpmt_os2 |
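  For example, to let httpd bind to an address that might not yet be configured on any interface (the address is a documentation placeholder):
  ListenFree 192.0.2.1:80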
Other notable changes include:
- The following modules have been removed:
  - mod_file_cache
  - mod_nss
    Use mod_ssl as a replacement. For details about migrating from mod_nss, see Section 1.14, “Exporting a private key and certificates from an NSS database to use them in an Apache web server configuration”.
  - mod_perl
- The default type of the DBM authentication database used by the Apache HTTP Server in RHEL 8 has been changed from SDBM to db5.
. -
The
mod_wsgi
module for the Apache HTTP Server has been updated to Python 3. WSGI applications are now supported only with Python 3, and must be migrated from Python 2. The multi-processing module (MPM) configured by default with the Apache HTTP Server has changed from a multi-process, forked model (known as
prefork
) to a high-performance multi-threaded model,event
.Any third-party modules that are not thread-safe need to be replaced or removed. To change the configured MPM, edit the
/etc/httpd/conf.modules.d/00-mpm.conf
file. See thehttpd.service(8)
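  For example, to switch from the default event MPM back to prefork for third-party modules that are not thread-safe, only the prefork line would remain uncommented in the 00-mpm.conf file (a sketch of the relevant lines, where # marks a commented-out directive):
  #LoadModule mpm_event_module modules/mod_mpm_event.so
  LoadModule mpm_prefork_module modules/mod_mpm_prefork.so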
- The minimum UID and GID allowed for users by suEXEC are now 1000 and 500, respectively (previously 100 and 100).
- The /etc/sysconfig/httpd file is no longer a supported interface for setting environment variables for the httpd service. The httpd.service(8) man page has been added for the systemd service.
- Stopping the httpd service now uses a “graceful stop” by default.
- The mod_auth_kerb module has been replaced by the mod_auth_gssapi module.
1.3. Updating the configuration
To update the configuration files from the Apache HTTP Server version used in Red Hat Enterprise Linux 7, choose one of the following options:
- If /etc/sysconfig/httpd is used to set environment variables, create a systemd drop-in file instead, as shown in the example after this list.
- If any third-party modules are used, ensure they are compatible with a threaded MPM.
- If suexec is used, ensure user and group IDs meet the new minimums.
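A minimal sketch of such a drop-in file, assuming the unit's ExecStart honors the OPTIONS variable as in the default httpd.service, and using a hypothetical -DMY_DEFINE value:
# mkdir -p /etc/systemd/system/httpd.service.d/
# cat > /etc/systemd/system/httpd.service.d/environment.conf << EOF
[Service]
Environment=OPTIONS=-DMY_DEFINE
EOF
# systemctl daemon-reload
# systemctl restart httpd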
You can check the configuration for possible errors by using the following command:
# apachectl configtest
Syntax OK
1.4. The Apache configuration files
By default, the httpd service reads the configuration files after start. The following table lists the locations of the configuration files.
| Path | Description |
|---|---|
| /etc/httpd/conf/httpd.conf | The main configuration file. |
| /etc/httpd/conf.d/ | An auxiliary directory for configuration files that are included in the main configuration file. |
| /etc/httpd/conf.modules.d/ | An auxiliary directory for configuration files which load installed dynamic modules packaged in Red Hat Enterprise Linux. In the default configuration, these configuration files are processed first. |
Although the default configuration is suitable for most situations, you can also use other configuration options. For any changes to take effect, restart the web server.
To check the configuration for possible errors, type the following at a shell prompt:
# apachectl configtest
Syntax OK
To make the recovery from mistakes easier, make a copy of the original file before editing it.
1.5. Managing the httpd service
This section describes how to start, stop, and restart the httpd service.
Prerequisites
- The Apache HTTP Server is installed.
Procedure
To start the httpd service, enter:
# systemctl start httpd
To stop the httpd service, enter:
# systemctl stop httpd
To restart the httpd service, enter:
# systemctl restart httpd
1.6. Setting up a single-instance Apache HTTP Server
You can set up a single-instance Apache HTTP Server to serve static HTML content.
Follow the procedure if the web server should provide the same content for all domains associated with the server. If you want to provide different content for different domains, set up name-based virtual hosts. For details, see Configuring Apache name-based virtual hosts.
Procedure
Install the httpd package:
# yum install httpd
If you use firewalld, open TCP port 80 in the local firewall:
# firewall-cmd --permanent --add-port=80/tcp
# firewall-cmd --reload
Enable and start the httpd service:
# systemctl enable --now httpd
Optional: Add HTML files to the /var/www/html/ directory.
Note
When adding content to /var/www/html/, files and directories must be readable by the user under which httpd runs by default. The content owner can be either the root user and root user group, or another user or group of the administrator’s choice. If the content owner is the root user and root user group, the files must be readable by other users. The SELinux context for all the files and directories must be httpd_sys_content_t, which is applied by default to all content within the /var/www directory.
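For example, to add a simple test page and restore the default SELinux context on the directory (the page content is arbitrary):
# echo "Hello from Apache" > /var/www/html/index.html
# restorecon -Rv /var/www/html/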
Verification
Connect with a web browser to http://server_IP_or_host_name/.
If the /var/www/html/ directory is empty or does not contain an index.html or index.htm file, Apache displays the Red Hat Enterprise Linux Test Page. If /var/www/html/ contains HTML files with a different name, you can load them by entering the URL to that file, such as http://server_IP_or_host_name/example.html.
Additional resources
- Apache manual: Installing the Apache HTTP server manual.
- See the httpd.service(8) man page on your system.
1.7. Configuring Apache name-based virtual hosts
Name-based virtual hosts enable Apache to serve different content for different domains that resolve to the IP address of the server.
You can set up a virtual host for both the example.com and example.net domains with separate document root directories. Both virtual hosts serve static HTML content.
Prerequisites
Clients and the web server resolve the example.com and example.net domains to the IP address of the web server.
Note that you must manually add these entries to your DNS server.
Procedure
Install the httpd package:
# yum install httpd
Edit the /etc/httpd/conf/httpd.conf file:
Append the following virtual host configuration for the example.com domain:
<VirtualHost *:80>
    DocumentRoot "/var/www/example.com/"
    ServerName example.com
    CustomLog /var/log/httpd/example.com_access.log combined
    ErrorLog /var/log/httpd/example.com_error.log
</VirtualHost>
These settings configure the following:
- All settings in the <VirtualHost *:80> directive are specific for this virtual host.
- DocumentRoot sets the path to the web content of the virtual host.
- ServerName sets the domains for which this virtual host serves content.
  To set multiple domains, add the ServerAlias parameter to the configuration and specify the additional domains separated with a space in this parameter.
- CustomLog sets the path to the access log of the virtual host.
- ErrorLog sets the path to the error log of the virtual host.
Note
Apache uses the first virtual host found in the configuration also for requests that do not match any domain set in the ServerName and ServerAlias parameters. This also includes requests sent to the IP address of the server.
Append a similar virtual host configuration for the example.net domain:
<VirtualHost *:80>
    DocumentRoot "/var/www/example.net/"
    ServerName example.net
    CustomLog /var/log/httpd/example.net_access.log combined
    ErrorLog /var/log/httpd/example.net_error.log
</VirtualHost>
Create the document roots for both virtual hosts:
# mkdir /var/www/example.com/
# mkdir /var/www/example.net/
If you set paths in the DocumentRoot parameters that are not within /var/www/, set the httpd_sys_content_t context on both document roots:
# semanage fcontext -a -t httpd_sys_content_t "/srv/example.com(/.*)?"
# restorecon -Rv /srv/example.com/
# semanage fcontext -a -t httpd_sys_content_t "/srv/example.net(/.*)?"
# restorecon -Rv /srv/example.net/
These commands set the httpd_sys_content_t context on the /srv/example.com/ and /srv/example.net/ directories.
Note that you must install the policycoreutils-python-utils package to run the semanage command.
If you use firewalld, open port 80 in the local firewall:
# firewall-cmd --permanent --add-port=80/tcp
# firewall-cmd --reload
Enable and start the httpd service:
# systemctl enable --now httpd
Verification
Create a different example file in each virtual host’s document root:
# echo "vHost example.com" > /var/www/example.com/index.html
# echo "vHost example.net" > /var/www/example.net/index.html
- Use a browser and connect to http://example.com. The web server shows the example file from the example.com virtual host.
- Use a browser and connect to http://example.net. The web server shows the example file from the example.net virtual host.
Additional resources
1.8. Configuring Kerberos authentication for the Apache HTTP web server
To perform Kerberos authentication in the Apache HTTP web server, RHEL 8 uses the mod_auth_gssapi Apache module. The Generic Security Services API (GSSAPI) is an interface for applications that make requests to use security libraries, such as Kerberos. The gssproxy service allows you to implement privilege separation for the httpd server, which optimizes this process from the security point of view.
The mod_auth_gssapi module replaces the removed mod_auth_kerb module.
Prerequisites
- The httpd and gssproxy packages are installed.
- The Apache web server is set up and the httpd service is running.
1.8.1. Setting up GSS-Proxy in an IdM environment
This procedure describes how to set up GSS-Proxy to perform Kerberos authentication in the Apache HTTP web server.
Procedure
Enable access to the keytab file of the HTTP/<SERVER_NAME>@realm principal by creating the service principal:
# ipa service-add HTTP/<SERVER_NAME>
Retrieve the keytab for the principal and store it in the /etc/gssproxy/http.keytab file:
# ipa-getkeytab -s $(awk '/^server =/ {print $3}' /etc/ipa/default.conf) -k /etc/gssproxy/http.keytab -p HTTP/$(hostname -f)
This step sets permissions to 400, thus only the root user has access to the keytab file. The apache user does not.
Create the /etc/gssproxy/80-httpd.conf file with the following content:
[service/HTTP]
  mechs = krb5
  cred_store = keytab:/etc/gssproxy/http.keytab
  cred_store = ccache:/var/lib/gssproxy/clients/krb5cc_%U
  euid = apache
Restart and enable the gssproxy service:
# systemctl restart gssproxy.service
# systemctl enable gssproxy.service
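With GSS-Proxy running, you can then require Kerberos authentication for a path in the Apache configuration. The following is only an illustrative sketch using basic mod_auth_gssapi directives; the /secured location is a hypothetical example, and your environment might need additional options:
<Location /secured>
    AuthType GSSAPI
    AuthName "Kerberos Login"
    Require valid-user
</Location>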
Additional resources
- gssproxy(8) man page on your system
- gssproxy-mech(8) man page on your system
- gssproxy.conf(5) man page on your system
1.9. Configuring TLS encryption on an Apache HTTP Server
By default, Apache provides content to clients using an unencrypted HTTP connection. This section describes how to enable TLS encryption and configure frequently used encryption-related settings on an Apache HTTP Server.
Prerequisites
- The Apache HTTP Server is installed and running.
1.9.1. Adding TLS encryption to an Apache HTTP Server
You can enable TLS encryption on an Apache HTTP Server for the example.com domain.
Prerequisites
- The Apache HTTP Server is installed and running.
- The private key is stored in the /etc/pki/tls/private/example.com.key file.
  For details about creating a private key and certificate signing request (CSR), as well as how to request a certificate from a certificate authority (CA), see your CA’s documentation. Alternatively, if your CA supports the ACME protocol, you can use the mod_md module to automate retrieving and provisioning TLS certificates.
- The TLS certificate is stored in the /etc/pki/tls/certs/example.com.crt file. If you use a different path, adapt the corresponding steps of the procedure.
- The CA certificate is stored in the /etc/pki/tls/certs/ca.crt file. If you use a different path, adapt the corresponding steps of the procedure.
- Clients and the web server resolve the host name of the server to the IP address of the web server.
Procedure
Install the mod_ssl package:
# yum install mod_ssl
Edit the /etc/httpd/conf.d/ssl.conf file and add the following settings to the <VirtualHost _default_:443> directive:
Set the server name:
ServerName example.com
Important
The server name must match the entry set in the Common Name field of the certificate.
Optional: If the certificate contains additional host names in the Subject Alt Names (SAN) field, you can configure mod_ssl to provide TLS encryption also for these host names. To configure this, add the ServerAlias parameter with the corresponding names:
ServerAlias www.example.com server.example.com
Set the paths to the private key, the server certificate, and the CA certificate:
SSLCertificateKeyFile "/etc/pki/tls/private/example.com.key"
SSLCertificateFile "/etc/pki/tls/certs/example.com.crt"
SSLCACertificateFile "/etc/pki/tls/certs/ca.crt"
For security reasons, ensure that only the root user can access the private key file:
# chown root:root /etc/pki/tls/private/example.com.key
# chmod 600 /etc/pki/tls/private/example.com.key
Warning
If the private key was accessed by unauthorized users, revoke the certificate, create a new private key, and request a new certificate. Otherwise, the TLS connection is no longer secure.
If you use firewalld, open port 443 in the local firewall:
# firewall-cmd --permanent --add-port=443/tcp
# firewall-cmd --reload
Restart the httpd service:
# systemctl restart httpd
Note
If you protected the private key file with a password, you must enter this password each time the httpd service starts.
Verification
- Use a browser and connect to https://example.com.
Additional resources
1.9.2. Setting the supported TLS protocol versions on an Apache HTTP Server
By default, the Apache HTTP Server on RHEL uses the system-wide crypto policy that defines safe default values, which are also compatible with recent browsers. For example, the DEFAULT policy defines that only the TLSv1.2 and TLSv1.3 protocol versions are enabled in Apache.
You can manually configure which TLS protocol versions your Apache HTTP Server supports. Follow the procedure if your environment requires enabling only specific TLS protocol versions, for example:
- If your environment requires that clients can also use the weak TLS1 (TLSv1.0) or TLS1.1 protocol.
- If you want Apache to support only the TLSv1.2 or TLSv1.3 protocol.
Prerequisites
- TLS encryption is enabled on the server as described in Adding TLS encryption to an Apache HTTP server.
Procedure
Edit the /etc/httpd/conf/httpd.conf file, and add the following setting to the <VirtualHost> directive for which you want to set the TLS protocol version. For example, to enable only the TLSv1.3 protocol:
SSLProtocol -All TLSv1.3
Restart the httpd service:
# systemctl restart httpd
Verification
Use the following command to verify that the server supports TLSv1.3:
# openssl s_client -connect example.com:443 -tls1_3
Use the following command to verify that the server does not support TLSv1.2:
# openssl s_client -connect example.com:443 -tls1_2
If the server does not support the protocol, the command returns an error:
140111600609088:error:1409442E:SSL routines:ssl3_read_bytes:tlsv1 alert protocol version:ssl/record/rec_layer_s3.c:1543:SSL alert number 70
- Optional: Repeat the command for other TLS protocol versions.
Additional resources
- update-crypto-policies(8) man page on your system
- Using system-wide cryptographic policies.
- For further details about the SSLProtocol parameter, refer to the mod_ssl documentation in the Apache manual: Installing the Apache HTTP server manual.
1.9.3. Setting the supported ciphers on an Apache HTTP Server
By default, the Apache HTTP Server uses the system-wide crypto policy that defines safe default values, which are also compatible with recent browsers. For the list of ciphers the system-wide crypto policy allows, see the /etc/crypto-policies/back-ends/openssl.config file.
You can manually configure which ciphers your Apache HTTP Server supports. Follow the procedure if your environment requires specific ciphers.
Prerequisites
- TLS encryption is enabled on the server as described in Adding TLS encryption to an Apache HTTP server.
Procedure
Edit the /etc/httpd/conf/httpd.conf file, and add the SSLCipherSuite parameter to the <VirtualHost> directive for which you want to set the TLS ciphers:
SSLCipherSuite "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH:!SHA1:!SHA256"
This example enables only the EECDH+AESGCM, EDH+AESGCM, AES256+EECDH, and AES256+EDH ciphers and disables all ciphers which use the SHA1 and SHA256 message authentication code (MAC).
Restart the httpd service:
# systemctl restart httpd
Verification
To display the list of ciphers the Apache HTTP Server supports:
Install the nmap package:
# yum install nmap
Use the nmap utility to display the supported ciphers:
# nmap --script ssl-enum-ciphers -p 443 example.com
...
PORT    STATE SERVICE
443/tcp open  https
| ssl-enum-ciphers:
|   TLSv1.2:
|     ciphers:
|       TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (ecdh_x25519) - A
|       TLS_DHE_RSA_WITH_AES_256_GCM_SHA384 (dh 2048) - A
|       TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 (ecdh_x25519) - A
...
Additional resources
- update-crypto-policies(8) man page on your system
- Using system-wide cryptographic policies.
- SSLCipherSuite
1.10. Configuring TLS client certificate authentication
Client certificate authentication enables administrators to allow only users who authenticate using a certificate to access resources on the web server. You can configure client certificate authentication for the /var/www/html/Example/ directory.
If the Apache HTTP Server uses the TLS 1.3 protocol, certain clients require additional configuration. For example, in Firefox, set the security.tls.enable_post_handshake_auth parameter in the about:config menu to true. For further details, see Transport Layer Security version 1.3 in Red Hat Enterprise Linux 8.
Prerequisites
- TLS encryption is enabled on the server as described in Adding TLS encryption to an Apache HTTP server.
Procedure
Edit the /etc/httpd/conf/httpd.conf file and add the following settings to the <VirtualHost> directive for which you want to configure client authentication:
<Directory "/var/www/html/Example/">
  SSLVerifyClient require
</Directory>
The SSLVerifyClient require setting defines that the server must successfully validate the client certificate before the client can access the content in the /var/www/html/Example/ directory.
Restart the httpd service:
# systemctl restart httpd
Verification
Use the curl utility to access the https://example.com/Example/ URL without client authentication:
$ curl https://example.com/Example/
curl: (56) OpenSSL SSL_read: error:1409445C:SSL routines:ssl3_read_bytes:tlsv13 alert certificate required, errno 0
The error indicates that the web server requires client certificate authentication.
Pass the client private key and certificate, as well as the CA certificate, to curl to access the same URL with client authentication:
$ curl --cacert ca.crt --key client.key --cert client.crt https://example.com/Example/
If the request succeeds, curl displays the index.html file stored in the /var/www/html/Example/ directory.
Additional resources
1.11. Securing web applications on a web server using ModSecurity
ModSecurity is an open source web application firewall (WAF) supported by various web servers such as Apache, Nginx, and IIS, which reduces security risks in web applications. ModSecurity provides customizable rule sets for configuring your server.
The mod_security_crs package contains the core rule set (CRS) with rules against cross-site scripting, bad user agents, SQL injection, Trojans, session hijacking, and other exploits.
1.11.1. Deploying the ModSecurity web-based application firewall for Apache
To reduce risks related to running web-based applications on your web server by deploying ModSecurity, install the mod_security and mod_security_crs packages for the Apache HTTP Server. The mod_security_crs package provides the core rule set (CRS) for the ModSecurity web-based application firewall (WAF) module.
Procedure
Install the mod_security, mod_security_crs, and httpd packages:
# yum install -y mod_security mod_security_crs httpd
Start the httpd server:
# systemctl restart httpd
Verification
Verify that the ModSecurity web-based application firewall is enabled on your Apache HTTP server:
# httpd -M | grep security
  security2_module (shared)
Check that the /etc/httpd/modsecurity.d/activated_rules/ directory contains rules provided by mod_security_crs:
# ls /etc/httpd/modsecurity.d/activated_rules/
...
REQUEST-921-PROTOCOL-ATTACK.conf
REQUEST-930-APPLICATION-ATTACK-LFI.conf
...
1.11.2. Adding a custom rule to ModSecurity
If the rules contained in the ModSecurity core rule set (CRS) do not fit your scenario and if you want to prevent additional possible attacks, you can add your custom rules to the rule set used by the ModSecurity web-based application firewall. The following example demonstrates the addition of a simple rule. For creating more complex rules, see the reference manual on the ModSecurity Wiki website.
Prerequisites
- ModSecurity for Apache is installed and enabled.
Procedure
Open the /etc/httpd/conf.d/mod_security.conf file in a text editor of your choice, for example:
# vi /etc/httpd/conf.d/mod_security.conf
Add the following example rule after the line starting with SecRuleEngine On:
SecRule ARGS:data "@contains evil" "deny,status:403,msg:'param data contains evil data',id:1"
The previous rule denies access to resources to the user if the data parameter contains the evil string.
- Save the changes, and quit the editor.
Restart the httpd server:
# systemctl restart httpd
Verification
Create a test.html page:
# echo "mod_security test" > /var/www/html/test.html
Restart the httpd server:
# systemctl restart httpd
Request test.html without malicious data in the GET variable of the HTTP request:
$ curl http://localhost/test.html?data=good
mod_security test
Request test.html with malicious data in the GET variable of the HTTP request:
$ curl localhost/test.html?data=xxxevilxxx
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>403 Forbidden</title>
</head><body>
<h1>Forbidden</h1>
<p>You do not have permission to access this resource.</p>
</body></html>
Check the /var/log/httpd/error_log file, and locate the log entry about denying access with the param data contains evil data message:
[Wed May 25 08:01:31.036297 2022] [:error] [pid 5839:tid 139874434791168] [client ::1:45658] [client ::1] ModSecurity: Access denied with code 403 (phase 2). String match "evil" at ARGS:data. [file "/etc/httpd/conf.d/mod_security.conf"] [line "4"] [id "1"] [msg "param data contains evil data"] [hostname "localhost"] [uri "/test.html"] [unique_id "Yo4amwIdsBG3yZqSzh2GuwAAAIY"]
Additional resources
1.12. Installing the Apache HTTP Server manual
You can install the Apache HTTP Server manual. This manual provides detailed documentation of, for example:
- Configuration parameters and directives
- Performance tuning
- Authentication settings
- Modules
- Content caching
- Security tips
- Configuring TLS encryption
After installing the manual, you can display it using a web browser.
Prerequisites
- The Apache HTTP Server is installed and running.
Procedure
Install the httpd-manual package:
# yum install httpd-manual
Optional: By default, all clients connecting to the Apache HTTP Server can display the manual. To restrict access to a specific IP range, such as the 192.0.2.0/24 subnet, edit the /etc/httpd/conf.d/manual.conf file and add the Require ip 192.0.2.0/24 setting to the <Directory "/usr/share/httpd/manual"> directive:
<Directory "/usr/share/httpd/manual">
...
    Require ip 192.0.2.0/24
...
</Directory>
Restart the httpd service:
# systemctl restart httpd
Verification
- To display the Apache HTTP Server manual, connect with a web browser to http://host_name_or_IP_address/manual/
1.13. Working with Apache modules
The httpd service is a modular application, and you can extend it with a number of Dynamic Shared Objects (DSOs). Dynamic Shared Objects are modules that you can dynamically load or unload at runtime as necessary. You can find these modules in the /usr/lib64/httpd/modules/ directory.
1.13.1. Loading a DSO module
As an administrator, you can choose the functionality to include in the server by configuring which modules the server should load. To load a particular DSO module, use the LoadModule directive. Note that modules provided by a separate package often have their own configuration file in the /etc/httpd/conf.modules.d/ directory.
Prerequisites
- You have installed the httpd package.
Procedure
Search for the module name in the configuration files in the /etc/httpd/conf.modules.d/ directory:
# grep mod_ssl.so /etc/httpd/conf.modules.d/*
Edit the configuration file in which the module name was found, and uncomment the LoadModule directive of the module:
LoadModule ssl_module modules/mod_ssl.so
If the module was not found, for example, because a RHEL package does not provide the module, create a configuration file, such as /etc/httpd/conf.modules.d/30-example.conf, with the following directive:
LoadModule <custom_module>_module modules/<custom_module>.so
Restart the httpd service:
# systemctl restart httpd
1.13.2. Compiling a custom Apache module
You can create your own module and build it with the help of the httpd-devel package, which contains the include files, the header files, and the APache eXtenSion (apxs) utility required to compile a module.
Prerequisites
- You have the httpd-devel package installed.
Procedure
Build a custom module with the following command:
# apxs -i -a -c module_name.c
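For illustration, a minimal "hello world" handler module that apxs can compile might look like the following; the module and handler names are arbitrary examples and not part of the Red Hat documentation:
/* module_name.c - minimal example handler module (illustrative sketch) */
#include <string.h>
#include "httpd.h"
#include "http_config.h"
#include "http_protocol.h"
#include "ap_config.h"

/* Respond with a plain-text greeting when the "example-handler" handler is configured. */
static int example_handler(request_rec *r)
{
    if (!r->handler || strcmp(r->handler, "example-handler") != 0)
        return DECLINED;
    r->content_type = "text/plain";
    if (!r->header_only)
        ap_rputs("Hello from a custom Apache module!\n", r);
    return OK;
}

/* Register the handler hook with the server core. */
static void register_hooks(apr_pool_t *pool)
{
    ap_hook_handler(example_handler, NULL, NULL, APR_HOOK_MIDDLE);
}

module AP_MODULE_DECLARE_DATA example_module = {
    STANDARD20_MODULE_STUFF,
    NULL, NULL, NULL, NULL, NULL,
    register_hooks
};
After installing the module, you can map it to a path, for example with SetHandler example-handler inside a Location block.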
Verification
- Load the module the same way as described in Loading a DSO module.
1.14. Exporting a private key and certificates from an NSS database to use them in an Apache web server configuration
RHEL 8 no longer provides the mod_nss module for the Apache web server, and Red Hat recommends using the mod_ssl module. If you store your private key and certificates in a Network Security Services (NSS) database, for example, because you migrated the web server from RHEL 7 to RHEL 8, follow this procedure to extract the key and certificates in Privacy Enhanced Mail (PEM) format. You can then use the files in the mod_ssl configuration as described in Configuring TLS encryption on an Apache HTTP server.
This procedure assumes that the NSS database is stored in /etc/httpd/alias/ and that you store the exported private key and certificates in the /etc/pki/tls/ directory.
Prerequisites
- The private key, the certificate, and the certificate authority (CA) certificate are stored in an NSS database.
Procedure
List the certificates in the NSS database:
# certutil -d /etc/httpd/alias/ -L
Certificate Nickname                       Trust Attributes
                                           SSL,S/MIME,JAR/XPI
Example CA                                 C,,
Example Server Certificate                 u,u,u
You need the nicknames of the certificates in the next steps.
To extract the private key, you must temporarily export the key to a PKCS #12 file:
Use the nickname of the certificate associated with the private key to export the key to a PKCS #12 file:
# pk12util -o /etc/pki/tls/private/export.p12 -d /etc/httpd/alias/ -n "Example Server Certificate"
Enter password for PKCS12 file: password
Re-enter password: password
pk12util: PKCS12 EXPORT SUCCESSFUL
Note that you must set a password on the PKCS #12 file. You need this password in the next step.
Export the private key from the PKCS #12 file:
# openssl pkcs12 -in /etc/pki/tls/private/export.p12 -out /etc/pki/tls/private/server.key -nocerts -nodes
Enter Import Password: password
MAC verified OK
Delete the temporary PKCS #12 file:
# rm /etc/pki/tls/private/export.p12
Set the permissions on /etc/pki/tls/private/server.key to ensure that only the root user can access this file:
# chown root:root /etc/pki/tls/private/server.key
# chmod 0600 /etc/pki/tls/private/server.key
Use the nickname of the server certificate in the NSS database to export the server certificate:
# certutil -d /etc/httpd/alias/ -L -n "Example Server Certificate" -a -o /etc/pki/tls/certs/server.crt
Set the permissions on /etc/pki/tls/certs/server.crt to ensure that only the root user can access this file:
# chown root:root /etc/pki/tls/certs/server.crt
# chmod 0600 /etc/pki/tls/certs/server.crt
Use the nickname of the CA certificate in the NSS database to export the CA certificate:
# certutil -d /etc/httpd/alias/ -L -n "Example CA" -a -o /etc/pki/tls/certs/ca.crt
Follow Configuring TLS encryption on an Apache HTTP server to configure the Apache web server, and:
- Set the SSLCertificateKeyFile parameter to /etc/pki/tls/private/server.key.
- Set the SSLCertificateFile parameter to /etc/pki/tls/certs/server.crt.
- Set the SSLCACertificateFile parameter to /etc/pki/tls/certs/ca.crt.
Additional resources
- certutil(1), pk12util(1), and pkcs12(1ssl) man pages on your system
1.15. Additional resources
- httpd(8)
- httpd.service(8)
- httpd.conf(5)
- apachectl(8)
- Using GSS-Proxy for Apache httpd operation.
- Configuring applications to use cryptographic hardware through PKCS #11.
Chapter 2. Setting up and configuring NGINX
NGINX is a high performance and modular server that you can use, for example, as a:
- Web server
- Reverse proxy
- Load balancer
This section describes how to use NGINX in these scenarios.
2.1. Installing and preparing NGINX
Red Hat uses Application Streams to provide different versions of NGINX. You can do the following:
- Select a stream and install NGINX
- Open the required ports in the firewall
- Enable and start the nginx service
Using the default configuration, NGINX runs as a web server on port 80 and provides content from the /usr/share/nginx/html/ directory.
Prerequisites
- RHEL 8 is installed.
- The host is subscribed to the Red Hat Customer Portal.
- The firewalld service is enabled and started.
Procedure
Display the available NGINX module streams:
# yum module list nginx
Red Hat Enterprise Linux 8 for x86_64 - AppStream (RPMs)
Name     Stream     Profiles     Summary
nginx    1.14 [d]   common [d]   nginx webserver
nginx    1.16       common [d]   nginx webserver
...
Hint: [d]efault, [e]nabled, [x]disabled, [i]nstalled
If you want to install a different stream than the default, select the stream:
# yum module enable nginx:stream_version
Install the nginx package:
# yum install nginx
Open the ports on which NGINX should provide its service in the firewall. For example, to open the default ports for HTTP (port 80) and HTTPS (port 443) in firewalld, enter:
# firewall-cmd --permanent --add-port={80/tcp,443/tcp}
# firewall-cmd --reload
Enable the nginx service to start automatically when the system boots:
# systemctl enable nginx
Optional: Start the nginx service:
# systemctl start nginx
If you do not want to use the default configuration, skip this step, and configure NGINX accordingly before you start the service.
The PHP module requires a specific NGINX version. Using an incompatible version can cause conflicts when upgrading to a newer NGINX stream. When using the PHP 7.2 stream and the NGINX 1.24 stream, you can resolve this issue by enabling the newer PHP 7.4 stream before installing NGINX.
Verification
Use the yum utility to verify that the nginx package is installed:
# yum list installed nginx
Installed Packages
nginx.x86_64    1:1.14.1-9.module+el8.0.0+4108+af250afe    @rhel-8-for-x86_64-appstream-rpms
Ensure that the ports on which NGINX should provide its service are opened in firewalld:
# firewall-cmd --list-ports
80/tcp 443/tcp
Verify that the nginx service is enabled:
# systemctl is-enabled nginx
enabled
Additional resources
- For details about Subscription Manager, see the Subscription Manager documentation.
- For further details about Application Streams, modules, and installing packages, see the Installing, managing, and removing user-space components guide.
- For details about configuring firewalls, see the Securing networks guide.
2.2. Configuring NGINX as a web server that provides different content for different domains
By default, NGINX acts as a web server that provides the same content to clients for all domain names associated with the IP addresses of the server. This procedure explains how to configure NGINX:
- To serve requests to the example.com domain with content from the /var/www/example.com/ directory
- To serve requests to the example.net domain with content from the /var/www/example.net/ directory
- To serve all other requests, for example, to the IP address of the server or to other domains associated with the IP address of the server, with content from the /usr/share/nginx/html/ directory
Prerequisites
- NGINX is installed
Clients and the web server resolve the example.com and example.net domains to the IP address of the web server.
Note that you must manually add these entries to your DNS server.
Procedure
Edit the /etc/nginx/nginx.conf file:
By default, the /etc/nginx/nginx.conf file already contains a catch-all configuration. If you have deleted this part from the configuration, re-add the following server block to the http block in the /etc/nginx/nginx.conf file:
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;
    root /usr/share/nginx/html;
}
These settings configure the following:
- The listen directive defines which IP addresses and ports the service listens on. In this case, NGINX listens on port 80 on all IPv4 and IPv6 addresses. The default_server parameter indicates that NGINX uses this server block as the default for requests matching the IP addresses and ports.
- The server_name parameter defines the host names for which this server block is responsible. Setting server_name to _ configures NGINX to accept any host name for this server block.
- The root directive sets the path to the web content for this server block.
Append a similar server block for the example.com domain to the http block:
server {
    server_name example.com;
    root /var/www/example.com/;
    access_log /var/log/nginx/example.com/access.log;
    error_log /var/log/nginx/example.com/error.log;
}
- The access_log directive defines a separate access log file for this domain.
- The error_log directive defines a separate error log file for this domain.
Append a similar server block for the example.net domain to the http block:
server {
    server_name example.net;
    root /var/www/example.net/;
    access_log /var/log/nginx/example.net/access.log;
    error_log /var/log/nginx/example.net/error.log;
}
Create the root directories for both domains:
# mkdir -p /var/www/example.com/
# mkdir -p /var/www/example.net/
Set the httpd_sys_content_t context on both root directories:
# semanage fcontext -a -t httpd_sys_content_t "/var/www/example.com(/.*)?"
# restorecon -Rv /var/www/example.com/
# semanage fcontext -a -t httpd_sys_content_t "/var/www/example.net(/.*)?"
# restorecon -Rv /var/www/example.net/
These commands set the httpd_sys_content_t context on the /var/www/example.com/ and /var/www/example.net/ directories.
Note that you must install the policycoreutils-python-utils package to run the semanage commands.
Create the log directories for both domains:
# mkdir /var/log/nginx/example.com/
# mkdir /var/log/nginx/example.net/
Restart the nginx service:
# systemctl restart nginx
Verification
Create a different example file in each virtual host’s document root:
# echo "Content for example.com" > /var/www/example.com/index.html
# echo "Content for example.net" > /var/www/example.net/index.html
# echo "Catch All content" > /usr/share/nginx/html/index.html
- Use a browser and connect to http://example.com. The web server shows the example content from the /var/www/example.com/index.html file.
- Use a browser and connect to http://example.net. The web server shows the example content from the /var/www/example.net/index.html file.
- Use a browser and connect to http://IP_address_of_the_server. The web server shows the example content from the /usr/share/nginx/html/index.html file.
2.3. Adding TLS encryption to an NGINX web server
You can enable TLS encryption on an NGINX web server for the example.com domain.
Prerequisites
- NGINX is installed.
- The private key is stored in the /etc/pki/tls/private/example.com.key file.
  For details about creating a private key and certificate signing request (CSR), as well as how to request a certificate from a certificate authority (CA), see your CA’s documentation.
- The TLS certificate is stored in the /etc/pki/tls/certs/example.com.crt file. If you use a different path, adapt the corresponding steps of the procedure.
- The CA certificate has been appended to the TLS certificate file of the server.
- Clients and the web server resolve the host name of the server to the IP address of the web server.
- Port 443 is open in the local firewall.
Procedure
Edit the /etc/nginx/nginx.conf file, and add the following server block to the http block in the configuration:
server {
    listen 443 ssl;
    server_name example.com;
    root /usr/share/nginx/html;
    ssl_certificate /etc/pki/tls/certs/example.com.crt;
    ssl_certificate_key /etc/pki/tls/private/example.com.key;
}
For security reasons, ensure that only the root user can access the private key file:
# chown root:root /etc/pki/tls/private/example.com.key
# chmod 600 /etc/pki/tls/private/example.com.key
Warning
If the private key was accessed by unauthorized users, revoke the certificate, create a new private key, and request a new certificate. Otherwise, the TLS connection is no longer secure.
Restart the nginx service:
# systemctl restart nginx
Verification
- Use a browser and connect to https://example.com
Additional resources
2.4. Configuring NGINX as a reverse proxy for the HTTP traffic
You can configure the NGINX web server to act as a reverse proxy for HTTP traffic. For example, you can use this functionality to forward requests to a specific subdirectory on a remote server. From the client perspective, the client loads the content from the host it accesses. However, NGINX loads the actual content from the remote server and forwards it to the client.
This procedure explains how to forward requests for the /example directory on the web server to the URL https://example.com.
Prerequisites
- NGINX is installed as described in Installing and preparing NGINX.
- Optional: TLS encryption is enabled on the reverse proxy.
Procedure
Edit the /etc/nginx/nginx.conf file and add the following settings to the server block that should provide the reverse proxy:
location /example {
    proxy_pass https://example.com;
}
The location block defines that NGINX passes all requests in the /example directory to https://example.com.
Set the httpd_can_network_connect SELinux boolean parameter to 1 to configure that SELinux allows NGINX to forward traffic:
# setsebool -P httpd_can_network_connect 1
Restart the nginx service:
# systemctl restart nginx
Verification
- Use a browser and connect to http://host_name/example, and the content of https://example.com is shown.
2.5. Configuring NGINX as an HTTP load balancer
You can use the NGINX reverse proxy feature to load-balance traffic. This procedure describes how to configure NGINX as an HTTP load balancer that sends requests to different servers, based on which of them has the least number of active connections. If neither server is available, the procedure also defines a third host as a fallback.
Prerequisites
- NGINX is installed as described in Installing and preparing NGINX.
Procedure
Edit the /etc/nginx/nginx.conf file and add the following settings:
http {
    upstream backend {
        least_conn;
        server server1.example.com;
        server server2.example.com;
        server server3.example.com backup;
    }

    server {
        location / {
            proxy_pass http://backend;
        }
    }
}
The least_conn directive in the host group named backend defines that NGINX sends requests to server1.example.com or server2.example.com, depending on which host has the least number of active connections. NGINX uses server3.example.com only as a backup in case the other two hosts are not available.
With the proxy_pass directive set to http://backend, NGINX acts as a reverse proxy and uses the backend host group to distribute requests based on the settings of this group.
Instead of the least_conn load balancing method, you can specify:
- No method, to use round robin and distribute requests evenly across servers.
- ip_hash, to send requests from one client address to the same server based on a hash calculated from the first three octets of the IPv4 address or the whole IPv6 address of the client.
- hash, to determine the server based on a user-defined key, which can be a string, a variable, or a combination of both. The consistent parameter configures that NGINX distributes requests across all servers based on the user-defined hashed key value.
- random, to send requests to a randomly selected server.
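For example, a hypothetical upstream block that distributes requests by hashing the request URI with consistent hashing might look as follows (illustrative only, not part of the procedure above):
upstream backend {
    hash $request_uri consistent;
    server server1.example.com;
    server server2.example.com;
}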
Restart the nginx service:
# systemctl restart nginx
2.6. Additional resources
- For the official NGINX documentation see https://nginx.org/en/docs/. Note that Red Hat does not maintain this documentation and that it might not work with the NGINX version you have installed.
- Configuring applications to use cryptographic hardware through PKCS #11.
Chapter 3. Using Samba as a server
Samba implements the Server Message Block (SMB) protocol in Red Hat Enterprise Linux. The SMB protocol is used to access resources on a server, such as file shares and shared printers. Additionally, Samba implements the Distributed Computing Environment Remote Procedure Call (DCE RPC) protocol used by Microsoft Windows.
You can run Samba as:
- An Active Directory (AD) or NT4 domain member
- A standalone server
An NT4 Primary Domain Controller (PDC) or Backup Domain Controller (BDC)
Note
Red Hat supports the PDC and BDC modes only in existing installations with Windows versions which support NT4 domains. Red Hat recommends not setting up a new Samba NT4 domain, because Microsoft operating systems later than Windows 7 and Windows Server 2008 R2 do not support NT4 domains.
Red Hat does not support running Samba as an AD domain controller (DC).
Independently of the installation mode, you can optionally share directories and printers. This enables Samba to act as a file and print server.
3.1. Understanding the different Samba services and modes
The samba package provides multiple services. Depending on your environment and the scenario you want to configure, you require one or more of these services and configure Samba in different modes.
3.1.1. The Samba services
Samba provides the following services:
smbd
This service provides file sharing and printing services using the SMB protocol. Additionally, the service is responsible for resource locking and for authenticating connecting users. For authenticating domain members, smbd requires winbindd. The smb systemd service starts and stops the smbd daemon.
To use the smbd service, install the samba package.
nmbd
This service provides host name and IP resolution using the NetBIOS over IPv4 protocol. In addition to the name resolution, the nmbd service enables browsing the SMB network to locate domains, work groups, hosts, file shares, and printers. For this, the service either reports this information directly to the broadcasting client or forwards it to a local or master browser. The nmb systemd service starts and stops the nmbd daemon.
Note that modern SMB networks use DNS to resolve clients and IP addresses. For Kerberos, a working DNS setup is required.
To use the nmbd service, install the samba package.
winbindd
This service provides an interface for the Name Service Switch (NSS) to use AD or NT4 domain users and groups on the local system. This enables, for example, domain users to authenticate to services hosted on a Samba server or to other local services. The winbind systemd service starts and stops the winbindd daemon.
If you set up Samba as a domain member, winbindd must be started before the smbd service. Otherwise, domain users and groups are not available to the local system.
To use the winbindd service, install the samba-winbind package.
Important
Red Hat only supports running Samba as a server with the winbindd service to provide domain users and groups to the local system. Due to certain limitations, such as missing Windows access control list (ACL) support and NT LAN Manager (NTLM) fallback, SSSD is not supported.
3.1.2. The Samba security services
The security parameter in the [global] section in the /etc/samba/smb.conf file manages how Samba authenticates users that are connecting to the service. Depending on the mode you install Samba in, the parameter must be set to different values:
- On an AD domain member, set security = ads.
  In this mode, Samba uses Kerberos to authenticate AD users.
  For details about setting up Samba as a domain member, see Setting up Samba as an AD domain member server.
- On a standalone server, set security = user.
  In this mode, Samba uses a local database to authenticate connecting users.
  For details about setting up Samba as a standalone server, see Setting up Samba as a standalone server.
- On an NT4 PDC or BDC, set security = user.
  In this mode, Samba authenticates users to a local or LDAP database.
- On an NT4 domain member, set security = domain.
  In this mode, Samba authenticates connecting users to an NT4 PDC or BDC. You cannot use this mode on AD domain members.
  For details about setting up Samba as a domain member, see Setting up Samba as an AD domain member server.
Additional resources
- security parameter in the smb.conf(5) man page on your system
3.1.3. Scenarios when Samba services and Samba client utilities load and reload their configuration
The following describes when Samba services and utilities load and reload their configuration:
Samba services reload their configuration:
- Automatically every 3 minutes
- On manual request, for example, when you run the smbcontrol all reload-config command.
- Samba client utilities read their configuration only when you start them.
Note that certain parameters, such as security, require a restart of the smb service to take effect; a reload is not sufficient.
Additional resources
- The How configuration changes are applied section in the smb.conf(5) man page on your system
- smbd(8), nmbd(8), and winbindd(8) man pages on your system
3.1.4. Editing the Samba configuration in a safe way
Samba services automatically reload their configuration every 3 minutes. To prevent the services from reloading changes before you have verified the configuration using the testparm utility, you can edit the Samba configuration in a safe way.
Prerequisites
- Samba is installed.
Procedure
Create a copy of the /etc/samba/smb.conf file:
# cp /etc/samba/smb.conf /etc/samba/samba.conf.copy
- Edit the copied file and make the required changes.
Verify the configuration in the /etc/samba/samba.conf.copy file:
# testparm -s /etc/samba/samba.conf.copy
If testparm reports errors, fix them and run the command again.
Override the /etc/samba/smb.conf file with the new configuration:
# mv /etc/samba/samba.conf.copy /etc/samba/smb.conf
Wait until the Samba services automatically reload their configuration or manually reload the configuration:
# smbcontrol all reload-config
3.2. Verifying the smb.conf file by using the testparm utility
The testparm utility verifies that the Samba configuration in the /etc/samba/smb.conf file is correct. The utility detects invalid parameters and values, as well as incorrect settings, such as for ID mapping. If testparm reports no problem, the Samba services will successfully load the /etc/samba/smb.conf file. Note that testparm cannot verify that the configured services will be available or work as expected.
Red Hat recommends that you verify the /etc/samba/smb.conf file by using testparm after each modification of this file.
Prerequisites
- You installed Samba.
- The /etc/samba/smb.conf file exists.
Procedure
Run the testparm utility as the root user:
# testparm
Load smb config files from /etc/samba/smb.conf
rlimit_max: increasing rlimit_max (1024) to minimum Windows limit (16384)
Unknown parameter encountered: "log levell"
Processing section "[example_share]"
Loaded services file OK.
ERROR: The idmap range for the domain * (tdb) overlaps with the range of DOMAIN (ad)!

Server role: ROLE_DOMAIN_MEMBER

Press enter to see a dump of your service definitions

# Global parameters
[global]
...

[example_share]
...
The previous example output reports a non-existent parameter and an incorrect ID mapping configuration.
- If testparm reports incorrect parameters, values, or other errors in the configuration, fix the problem and run the utility again.
3.3. Setting up Samba as a standalone server
You can set up Samba as a server that is not a member of a domain. In this installation mode, Samba authenticates users to a local database instead of to a central DC. Additionally, you can enable guest access to allow users to connect to one or multiple services without authentication.
3.3.1. Setting up the server configuration for the standalone server
You can set up the server configuration for a Samba standalone server.
Procedure
Install the samba package:
# yum install samba
Edit the /etc/samba/smb.conf file and set the following parameters:
[global]
    workgroup = Example-WG
    netbios name = Server
    security = user

    log file = /var/log/samba/%m.log
    log level = 1
This configuration defines a standalone server named Server within the Example-WG work group. Additionally, this configuration enables logging on a minimal level (1) and log files will be stored in the /var/log/samba/ directory. Samba will expand the %m macro in the log file parameter to the NetBIOS name of connecting clients. This enables individual log files for each client.
Optional: Configure file or printer sharing. See:
Verify the /etc/samba/smb.conf file:
# testparm
If you set up shares that require authentication, create the user accounts.
For details, see Creating and enabling local user accounts.
Open the required ports and reload the firewall configuration by using the firewall-cmd utility:
# firewall-cmd --permanent --add-service=samba
# firewall-cmd --reload
Enable and start the smb service:
# systemctl enable --now smb
Additional resources
- smb.conf(5) man page on your system
3.3.2. Creating and enabling local user accounts
To enable users to authenticate when they connect to a share, you must create the accounts on the Samba host both in the operating system and in the Samba database. Samba requires the operating system account to validate the Access Control Lists (ACL) on file system objects and the Samba account to authenticate connecting users.
If you use the passdb backend = tdbsam default setting, Samba stores user accounts in the /var/lib/samba/private/passdb.tdb database.
You can create a local Samba user named example.
Prerequisites
- Samba is installed and configured as a standalone server.
Procedure
Create the operating system account:
# useradd -M -s /sbin/nologin example
This command adds the example account without creating a home directory. If the account is only used to authenticate to Samba, assign the /sbin/nologin command as the shell to prevent the account from logging in locally.
Set a password to the operating system account to enable it:
# passwd example
Enter new UNIX password: password
Retype new UNIX password: password
passwd: password updated successfully
Samba does not use the password set on the operating system account to authenticate. However, you need to set a password to enable the account. If an account is disabled, Samba denies access if this user connects.
Add the user to the Samba database and set a password to the account:
# smbpasswd -a example
New SMB password: password
Retype new SMB password: password
Added user example.
Use this password to authenticate when using this account to connect to a Samba share.
Enable the Samba account:
# smbpasswd -e example
Enabled user example.
3.4. Understanding and configuring Samba ID mapping
Windows domains distinguish users and groups by unique Security Identifiers (SID). However, Linux requires unique UIDs and GIDs for each user and group. If you run Samba as a domain member, the winbindd service is responsible for providing information about domain users and groups to the operating system.
To enable the winbindd service to provide unique IDs for users and groups to Linux, you must configure ID mapping in the /etc/samba/smb.conf file for:
- The local database (default domain)
- The AD or NT4 domain the Samba server is a member of
- Each trusted domain from which users must be able to access resources on this Samba server
Samba provides different ID mapping back ends for specific configurations. The most frequently used back ends are:
| Back end | Use case |
|---|---|
| tdb | The * default domain only |
| ad | AD domains only |
| rid | AD and NT4 domains |
| autorid | AD, NT4, and the * default domain |
3.4.1. Planning Samba ID ranges
Regardless of whether you store the Linux UIDs and GIDs in AD or configure Samba to generate them, each domain configuration requires a unique ID range that must not overlap with any of the other domains.
If you set overlapping ID ranges, Samba fails to work correctly.
Example 3.1. Unique ID Ranges
The following shows non-overlapping ID mapping ranges for the default (*), AD-DOM, and TRUST-DOM domains.
[global]
...
idmap config * : backend = tdb
idmap config * : range = 10000-999999
idmap config AD-DOM:backend = rid
idmap config AD-DOM:range = 2000000-2999999
idmap config TRUST-DOM:backend = rid
idmap config TRUST-DOM:range = 4000000-4999999
You can only assign one range per domain. Therefore, leave enough space between the ranges of the domains. This enables you to extend a range later if your domain grows.
If you later assign a different range to a domain, the ownership of files and directories previously created by these users and groups will be lost.
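To review which ID mapping ranges are currently configured, you can print the loaded configuration with the testparm utility and filter for the idmap parameters. This is only a convenience check and not required:
# testparm -s | grep "idmap config"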
3.4.2. The * default domain
In a domain environment, you add one ID mapping configuration for each of the following:
- The domain the Samba server is a member of
- Each trusted domain that should be able to access the Samba server
However, for all other objects, Samba assigns IDs from the default domain. This includes:
- Local Samba users and groups
-
Samba built-in accounts and groups, such as
BUILTIN\Administrators
You must configure the default domain as described to enable Samba to operate correctly.
The default domain back end must be writable to permanently store the assigned IDs.
For the default domain, you can use one of the following back ends:
tdb
When you configure the default domain to use the
tdb
back end, set an ID range that is big enough to include objects that will be created in the future and that are not part of a defined domain ID mapping configuration.For example, set the following in the
[global]
section in the/etc/samba/smb.conf
file:idmap config * : backend = tdb idmap config * : range = 10000-999999
For further details, see Using the TDB ID mapping back end.
autorid
When you configure the default domain to use the
autorid
back end, adding additional ID mapping configurations for domains is optional.For example, set the following in the
[global]
section in the/etc/samba/smb.conf
file:idmap config * : backend = autorid idmap config * : range = 10000-999999
For further details, see Using the autorid ID mapping back end.
3.4.3. Using the tdb ID mapping back end
The winbindd
service uses the writable tdb
ID mapping back end by default to store Security Identifier (SID), UID, and GID mapping tables. This includes local users, groups, and built-in principals.
Use this back end only for the *
default domain. For example:
idmap config * : backend = tdb idmap config * : range = 10000-999999
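If you want to see which ID the tdb back end has allocated for a particular SID, you can query the winbindd service with the wbinfo utility, assuming the winbind service is running and the samba-winbind-clients package is installed. The SID in this example is only a placeholder:
# wbinfo --sid-to-uid S-1-5-21-1762709870-351891212-3141221786-1000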
Additional resources
3.4.4. Using the ad ID mapping back end
You can configure a Samba AD member to use the ad
ID mapping back end.
The ad
ID mapping back end implements a read-only API to read account and group information from AD. This provides the following benefits:
- All user and group settings are stored centrally in AD.
- User and group IDs are consistent on all Samba servers that use this back end.
- The IDs are not stored in a local database that can become corrupted, and therefore file ownership cannot be lost.
The ad
ID mapping back end does not support Active Directory domains with one-way trusts. If you configure a domain member in an Active Directory with one-way trusts, use instead one of the following ID mapping back ends: tdb
, rid
, or autorid
.
The ad back end reads the following attributes from AD:
AD attribute name | Object type | Mapped to |
---|---|---|
sAMAccountName | User and group | User or group name, depending on the object |
uidNumber | User | User ID (UID) |
gidNumber | Group | Group ID (GID) |
loginShell [a] | User | Path to the shell of the user |
unixHomeDirectory [a] | User | Path to the home directory of the user |
primaryGroupID [b] | User | Primary group ID |
[a] Samba only reads this attribute if you set idmap config DOMAIN:unix_nss_info = yes.
[b] Samba only reads this attribute if you set idmap config DOMAIN:unix_primary_group = yes.
Prerequisites
-
Both users and groups must have unique IDs set in AD, and the IDs must be within the range configured in the
/etc/samba/smb.conf
file. Objects whose IDs are outside of the range will not be available on the Samba server. - Users and groups must have all required attributes set in AD. If required attributes are missing, the user or group will not be available on the Samba server. The required attributes depend on your configuration. .Prerequisites
- You installed Samba.
-
The Samba configuration, except ID mapping, exists in the
/etc/samba/smb.conf
file.
Procedure
Edit the
[global]
section in the/etc/samba/smb.conf
file:Add an ID mapping configuration for the default domain (
*
) if it does not exist. For example:idmap config * : backend = tdb idmap config * : range = 10000-999999
Enable the
ad
ID mapping back end for the AD domain:idmap config DOMAIN : backend = ad
Set the range of IDs that is assigned to users and groups in the AD domain. For example:
idmap config DOMAIN : range = 2000000-2999999
Important: The range must not overlap with any other domain configuration on this server. Additionally, the range must be set big enough to include all IDs assigned in the future. For further details, see Planning Samba ID ranges.
Set that Samba uses the RFC 2307 schema when reading attributes from AD:
idmap config DOMAIN : schema_mode = rfc2307
To enable Samba to read the login shell and the path to the user's home directory from the corresponding AD attributes, set:
idmap config DOMAIN : unix_nss_info = yes
Alternatively, you can set a uniform domain-wide home directory path and login shell that is applied to all users. For example:
template shell = /bin/bash template homedir = /home/%U
By default, Samba uses the
primaryGroupID
attribute of a user object as the user’s primary group on Linux. Alternatively, you can configure Samba to use the value set in thegidNumber
attribute instead:idmap config DOMAIN : unix_primary_group = yes
Verify the
/etc/samba/smb.conf
file:# testparm
Reload the Samba configuration:
# smbcontrol all reload-config
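As an optional check that is not part of the procedure, you can confirm that a domain user now receives the UID and GID stored in AD, assuming the winbind service is running. DOMAIN and user_name are placeholders; replace them with values from your environment:
# id "DOMAIN\user_name"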
Additional resources
- The * default domain
-
smb.conf(5)
andidmap_ad(8)
man pages on your system -
VARIABLE SUBSTITUTIONS
section in thesmb.conf(5)
man page on your system
3.4.5. Using the rid ID mapping back end
You can configure a Samba domain member to use the rid
ID mapping back end.
Samba can use the relative identifier (RID) of a Windows SID to generate an ID on Red Hat Enterprise Linux.
The RID is the last part of a SID. For example, if the SID of a user is S-1-5-21-5421822485-1151247151-421485315-30014
, then 30014
is the corresponding RID.
The rid
ID mapping back end implements a read-only API to calculate account and group information based on an algorithmic mapping scheme for AD and NT4 domains. When you configure the back end, you must set the lowest and highest RID in the idmap config DOMAIN : range
parameter. Samba will not map users or groups with a lower or higher RID than set in this parameter.
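As a rough sketch of the calculation (the authoritative description is in the idmap_rid(8) man page), with the default base RID of 0 the local ID is derived as follows:
ID = RID - BASE_RID + LOW_RANGE_ID
For example, with idmap config DOMAIN : range = 2000000-2999999, the user with RID 30014 from the SID above would receive the local ID 2030014 (30014 + 2000000).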
As a read-only back end, rid
cannot assign new IDs, such as for BUILTIN
groups. Therefore, do not use this back end for the *
default domain.
Benefits of using the rid back end
- All domain users and groups that have an RID within the configured range are automatically available on the domain member.
- You do not need to manually assign IDs, home directories, and login shells.
Drawbacks of using the rid back end
- All domain users get the same login shell and home directory assigned. However, you can use variables.
-
User and group IDs are only the same across Samba domain members if all use the
rid
back end with the same ID range settings. - You cannot exclude individual users or groups from being available on the domain member. Only users and groups outside of the configured range are excluded.
-
Based on the formulas the
winbindd
service uses to calculate the IDs, duplicate IDs can occur in multi-domain environments if objects in different domains have the same RID.
Prerequisites
- You installed Samba.
-
The Samba configuration, except ID mapping, exists in the
/etc/samba/smb.conf
file.
Procedure
Edit the
[global]
section in the/etc/samba/smb.conf
file:Add an ID mapping configuration for the default domain (
*
) if it does not exist. For example:idmap config * : backend = tdb idmap config * : range = 10000-999999
Enable the
rid
ID mapping back end for the domain:idmap config DOMAIN : backend = rid
Set a range that is big enough to include all RIDs that will be assigned in the future. For example:
idmap config DOMAIN : range = 2000000-2999999
Samba ignores users and groups whose RIDs in this domain are not within the range.
Important: The range must not overlap with any other domain configuration on this server. Additionally, the range must be set big enough to include all IDs assigned in the future. For further details, see Planning Samba ID ranges.
Set a shell and home directory path that will be assigned to all mapped users. For example:
template shell = /bin/bash template homedir = /home/%U
Verify the
/etc/samba/smb.conf
file:# testparm
Reload the Samba configuration:
# smbcontrol all reload-config
Additional resources
- The * default domain
-
VARIABLE SUBSTITUTIONS
section in thesmb.conf(5)
man page on your system -
Calculation of the local ID from a RID, see the
idmap_rid(8)
man page on your system
3.4.6. Using the autorid ID mapping back end
You can configure a Samba domain member to use the autorid
ID mapping back end.
The autorid
back end works similarly to the
ID mapping back end, but can automatically assign IDs for different domains. This enables you to use the autorid
back end in the following situations:
-
Only for the
*
default domain -
For the
*
default domain and additional domains, without the need to create ID mapping configurations for each of the additional domains - Only for specific domains
If you use autorid
for the default domain, adding additional ID mapping configuration for domains is optional.
Parts of this section were adapted from the idmap config autorid documentation published in the Samba Wiki. License: CC BY 4.0. Authors and contributors: See the history tab on the Wiki page.
Benefits of using the autorid back end
- All domain users and groups whose calculated UID and GID is within the configured range are automatically available on the domain member.
- You do not need to manually assign IDs, home directories, and login shells.
- No duplicate IDs, even if multiple objects in a multi-domain environment have the same RID.
Drawbacks
- User and group IDs are not the same across Samba domain members.
- All domain users get the same login shell and home directory assigned. However, you can use variables.
- You cannot exclude individual users or groups from being available on the domain member. Only users and groups whose calculated UID or GID is outside of the configured range are excluded.
Prerequisites
- You installed Samba.
-
The Samba configuration, except ID mapping, exists in the
/etc/samba/smb.conf
file.
Procedure
Edit the
[global]
section in the/etc/samba/smb.conf
file:Enable the
autorid
ID mapping back end for the*
default domain:idmap config * : backend = autorid
Set a range that is big enough to assign IDs for all existing and future objects. For example:
idmap config * : range = 10000-999999
Samba ignores users and groups whose calculated IDs in this domain are not within the range.
Warning: After you set the range and Samba starts using it, you can only increase the upper limit of the range. Any other change to the range can result in new ID assignments, and thus in losing file ownership.
Optional: Set a range size. For example:
idmap config * : rangesize = 200000
Samba assigns this number of continuous IDs for each domain’s object until all IDs from the range set in the
idmap config * : range
parameter are taken.
Note: If you set a rangesize, you need to adapt the range accordingly. The range must be a multiple of the rangesize. A worked example of the slot calculation follows this procedure.
Set a shell and home directory path that will be assigned to all mapped users. For example:
template shell = /bin/bash template homedir = /home/%U
Optional: Add additional ID mapping configuration for domains. If no configuration for an individual domain is available, Samba calculates the ID using the
autorid
back end settings in the previously configured*
default domain.
Important: The range must not overlap with any other domain configuration on this server. Additionally, the range must be set big enough to include all IDs assigned in the future. For further details, see Planning Samba ID ranges.
Verify the
/etc/samba/smb.conf
file:# testparm
Reload the Samba configuration:
# smbcontrol all reload-config
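The following is a worked example of the slot calculation for the rangesize setting mentioned in the procedure above. The values are hypothetical and chosen so that the range is a multiple of the rangesize:
number_of_slots = (range_high - range_low + 1) / rangesize
With idmap config * : range = 10000-1009999 and idmap config * : rangesize = 200000, the range contains 1,000,000 IDs and therefore provides five slots of 200,000 IDs each. Samba assigns one slot to each domain, and a domain that contains RIDs higher than the rangesize consumes additional slots, as described in the idmap_autorid(8) man page.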
Additional resources
-
THE MAPPING FORMULAS
section in theidmap_autorid(8)
man page on your system -
rangesize
parameter description in theidmap_autorid(8)
man page on your system -
VARIABLE SUBSTITUTIONS
section in thesmb.conf(5)
man page on your system
3.5. Setting up Samba as an AD domain member server
If you are running an AD or NT4 domain, use Samba to add your Red Hat Enterprise Linux server as a member to the domain to gain the following:
- Access domain resources on other domain members
-
Authenticate domain users to local services, such as
sshd
- Share directories and printers hosted on the server to act as a file and print server
3.5.1. Joining a RHEL system to an AD domain
Samba Winbind is an alternative to the System Security Services Daemon (SSSD) for connecting a Red Hat Enterprise Linux (RHEL) system with Active Directory (AD). You can join a RHEL system to an AD domain by using realmd
to configure Samba Winbind.
Procedure
If your AD requires the deprecated RC4 encryption type for Kerberos authentication, enable support for these ciphers in RHEL:
# update-crypto-policies --set DEFAULT:AD-SUPPORT
Install the following packages:
# yum install realmd oddjob-mkhomedir oddjob samba-winbind-clients \ samba-winbind samba-common-tools samba-winbind-krb5-locator krb5-workstation
To share directories or printers on the domain member, install the
samba
package:# yum install samba
Back up the existing
/etc/samba/smb.conf
Samba configuration file:# mv /etc/samba/smb.conf /etc/samba/smb.conf.bak
Join the domain. For example, to join a domain named
ad.example.com
:# realm join --membership-software=samba --client-software=winbind ad.example.com
Using the previous command, the
realm
utility automatically:-
Creates a
/etc/samba/smb.conf
file for a membership in thead.example.com
domain -
Adds the
winbind
module for user and group lookups to the/etc/nsswitch.conf
file -
Updates the Pluggable Authentication Module (PAM) configuration files in the
/etc/pam.d/
directory -
Starts the
winbind
service and enables the service to start when the system boots
-
Creates a
-
Optional: Set an alternative ID mapping back end or customized ID mapping settings in the
/etc/samba/smb.conf
file. For details, see Understanding and configuring Samba ID mapping. Verify that the
winbind
service is running:# systemctl status winbind ... Active: active (running) since Tue 2018-11-06 19:10:40 CET; 15s ago
Important: To enable Samba to query domain user and group information, the
winbind
service must be running before you startsmb
.If you installed the
samba
package to share directories and printers, enable and start thesmb
service:# systemctl enable --now smb
-
If you are authenticating local logins to Active Directory, enable the
winbind_krb5_localauth
plug-in. See Using the local authorization plug-in for MIT Kerberos.
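In addition to the verification steps below, you can display the properties of the join, such as the configured domain and the login format, by using the realm utility:
# realm list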
Verification
Display an AD user’s details, such as the AD administrator account in the AD domain:
# getent passwd "AD\administrator" AD\administrator:*:10000:10000::/home/administrator@AD:/bin/bash
Query the members of the domain users group in the AD domain:
# getent group "AD\Domain Users" AD\domain users:x:10000:user1,user2
Optional: Verify that you can use domain users and groups when you set permissions on files and directories. For example, to set the owner of the
/srv/samba/example.txt
file toAD\administrator
and the group toAD\Domain Users
:# chown "AD\administrator":"AD\Domain Users" /srv/samba/example.txt
Verify that Kerberos authentication works as expected:
On the AD domain member, obtain a ticket for the
administrator@AD.EXAMPLE.COM
principal:# kinit administrator@AD.EXAMPLE.COM
Display the cached Kerberos ticket:
# klist Ticket cache: KCM:0 Default principal: administrator@AD.EXAMPLE.COM Valid starting Expires Service principal 01.11.2018 10:00:00 01.11.2018 20:00:00 krbtgt/AD.EXAMPLE.COM@AD.EXAMPLE.COM renew until 08.11.2018 05:00:00
Display the available domains:
# wbinfo --all-domains BUILTIN SAMBA-SERVER AD
Additional resources
- If you do not want to use the deprecated RC4 ciphers, you can enable the AES encryption type in AD. See
- Enabling the AES encryption type in Active Directory using a GPO
-
realm(8)
man page on your system
3.5.2. Using the local authorization plug-in for MIT Kerberos
The winbind
service provides Active Directory users to the domain member. In certain situations, administrators want to enable domain users to authenticate to local services, such as an SSH server, which are running on the domain member. When using Kerberos to authenticate the domain users, enable the winbind_krb5_localauth
plug-in to correctly map Kerberos principals to Active Directory accounts through the winbind
service.
For example, if the sAMAccountName
attribute of an Active Directory user is set to EXAMPLE
and the user tries to log in with the user name in lowercase, Kerberos returns the user name in uppercase. As a consequence, the entries do not match and authentication fails.
Using the winbind_krb5_localauth
plug-in, the account names are mapped correctly. Note that this only applies to GSSAPI authentication and not for getting the initial ticket granting ticket (TGT).
Prerequisites
- Samba is configured as a member of an Active Directory.
- Red Hat Enterprise Linux authenticates log in attempts against Active Directory.
-
The
winbind
service is running.
Procedure
Edit the /etc/krb5.conf
file and add the following section:
[plugins] localauth = { module = winbind:/usr/lib64/samba/krb5/winbind_krb5_localauth.so enable_only = winbind }
Additional resources
-
winbind_krb5_localauth(8)
man page on your system
3.6. Setting up Samba on an IdM domain member
You can set up Samba on a host that is joined to a Red Hat Identity Management (IdM) domain. Users from IdM and also, if available, from trusted Active Directory (AD) domains, can access shares and printer services provided by Samba.
Using Samba on an IdM domain member is an unsupported Technology Preview feature and contains certain limitations. For example, IdM trust controllers do not support the Active Directory Global Catalog service, and they do not support resolving IdM groups using the Distributed Computing Environment / Remote Procedure Calls (DCE/RPC) protocols. As a consequence, AD users can only access Samba shares and printers hosted on IdM clients when logged in to other IdM clients; AD users logged into a Windows machine can not access Samba shares hosted on an IdM domain member.
Customers deploying Samba on IdM domain members are encouraged to provide feedback to Red Hat.
If users from AD domains need to access shares and printer services provided by Samba, ensure the AES encryption type is enabled in AD. For more information, see Enabling the AES encryption type in Active Directory using a GPO.
Prerequisites
- The host is joined as a client to the IdM domain.
- Both the IdM servers and the client must run on RHEL 8.1 or later.
3.6.1. Preparing the IdM domain for installing Samba on domain members
Before you can set up Samba on an IdM client, you must prepare the IdM domain using the ipa-adtrust-install
utility on an IdM server.
Any system where you run the ipa-adtrust-install
command automatically becomes an AD trust controller. However, you must run ipa-adtrust-install
only once on an IdM server.
Prerequisites
- IdM server is installed.
- You need root privileges to install packages and restart IdM services.
Procedure
Install the required packages:
[root@ipaserver ~]# yum install ipa-server-trust-ad samba-client
Authenticate as the IdM administrative user:
[root@ipaserver ~]# kinit admin
Run the
ipa-adtrust-install
utility:[root@ipaserver ~]# ipa-adtrust-install
The DNS service records are created automatically if IdM was installed with an integrated DNS server.
If you installed IdM without an integrated DNS server,
ipa-adtrust-install
prints a list of service records that you must manually add to DNS before you can continue.The script prompts you that the
/etc/samba/smb.conf
already exists and will be rewritten:WARNING: The smb.conf already exists. Running ipa-adtrust-install will break your existing Samba configuration. Do you wish to continue? [no]:
yes
The script prompts you to configure the
slapi-nis
plug-in, a compatibility plug-in that allows older Linux clients to work with trusted users:Do you want to enable support for trusted domains in Schema Compatibility plugin? This will allow clients older than SSSD 1.9 and non-Linux clients to work with trusted users. Enable trusted domains support in slapi-nis? [no]:
yes
When prompted, enter the NetBIOS name for the IdM domain or press Enter to accept the name suggested:
Trust is configured but no NetBIOS domain name found, setting it now. Enter the NetBIOS name for the IPA domain. Only up to 15 uppercase ASCII letters, digits and dashes are allowed. Example: EXAMPLE. NetBIOS domain name [IDM]:
You are prompted to run the SID generation task to create a SID for any existing users:
Do you want to run the ipa-sidgen task? [no]:
yes
This is a resource-intensive task, so if you have a high number of users, you can run this at another time.
Optional: By default, the Dynamic RPC port range is defined as
49152-65535
for Windows Server 2008 and later. If you need to define a different Dynamic RPC port range for your environment, configure Samba to use different ports and open those ports in your firewall settings. The following example sets the port range to55000-65000
.[root@ipaserver ~]# net conf setparm global 'rpc server dynamic port range' 55000-65000 [root@ipaserver ~]# firewall-cmd --add-port=55000-65000/tcp [root@ipaserver ~]# firewall-cmd --runtime-to-permanent
Restart the
ipa
service:[root@ipaserver ~]# ipactl restart
Use the
smbclient
utility to verify that Samba responds to Kerberos authentication from the IdM side:[root@ipaserver ~]#
smbclient -L ipaserver.idm.example.com -U user_name --use-kerberos=required
lp_load_ex: changing to config backend registry Sharename Type Comment --------- ---- ------- IPC$ IPC IPC Service (Samba 4.15.2) ...
3.6.2. Installing and configuring a Samba server on an IdM client
You can install and configure Samba on a client enrolled in an IdM domain.
Prerequisites
- Both the IdM servers and the client must run on RHEL 8.1 or later.
- The IdM domain is prepared as described in Preparing the IdM domain for installing Samba on domain members.
- If IdM has a trust configured with AD, enable the AES encryption type for Kerberos. For example, use a group policy object (GPO) to enable the AES encryption type. For details, see Enabling AES encryption in Active Directory using a GPO.
Procedure
Install the
ipa-client-samba
package:[root@idm_client]# yum install ipa-client-samba
Use the
ipa-client-samba
utility to prepare the client and create an initial Samba configuration:[root@idm_client]# ipa-client-samba Searching for IPA server... IPA server: DNS discovery Chosen IPA master: idm_server.idm.example.com SMB principal to be created: cifs/idm_client.idm.example.com@IDM.EXAMPLE.COM NetBIOS name to be used: IDM_CLIENT Discovered domains to use: Domain name: idm.example.com NetBIOS name: IDM SID: S-1-5-21-525930803-952335037-206501584 ID range: 212000000 - 212199999 Domain name: ad.example.com NetBIOS name: AD SID: None ID range: 1918400000 - 1918599999 Continue to configure the system with these values? [no]: yes Samba domain member is configured. Please check configuration at /etc/samba/smb.conf and start smb and winbind services
By default,
ipa-client-samba
automatically adds the[homes]
section to the/etc/samba/smb.conf
file that dynamically shares a user’s home directory when the user connects. If users do not have home directories on this server, or if you do not want to share them, remove the following lines from/etc/samba/smb.conf
:[homes] read only = no
Share directories and printers. For details, see:
Open the ports required for a Samba client in the local firewall:
[root@idm_client]# firewall-cmd --permanent --add-service=samba-client [root@idm_client]# firewall-cmd --reload
Enable and start the
smb
andwinbind
services:[root@idm_client]# systemctl enable --now smb winbind
Verification
Run the following verification step on a different IdM domain member that has the samba-client
package installed:
List the shares on the Samba server using Kerberos authentication:
$
smbclient -L idm_client.idm.example.com -U user_name --use-kerberos=required
lp_load_ex: changing to config backend registry Sharename Type Comment --------- ---- ------- example Disk IPC$ IPC IPC Service (Samba 4.15.2) ...
Additional resources
-
ipa-client-samba(1)
man page on your system
3.6.3. Manually adding an ID mapping configuration if IdM trusts a new domain
Samba requires an ID mapping configuration for each domain from which users access resources. On an existing Samba server running on an IdM client, you must manually add an ID mapping configuration after the administrator added a new trust to an Active Directory (AD) domain.
Prerequisites
- You configured Samba on an IdM client. Afterward, a new trust was added to IdM.
- The DES and RC4 encryption types for Kerberos must be disabled in the trusted AD domain. For security reasons, RHEL 8 does not support these weak encryption types.
Procedure
Authenticate using the host’s keytab:
[root@idm_client]# kinit -k
Use the
ipa idrange-find
command to display both the base ID and the ID range size of the new domain. For example, the following command displays the values for thead.example.com
domain:[root@idm_client]# ipa idrange-find --name="AD.EXAMPLE.COM_id_range" --raw --------------- 1 range matched --------------- cn: AD.EXAMPLE.COM_id_range ipabaseid: 1918400000 ipaidrangesize: 200000 ipabaserid: 0 ipanttrusteddomainsid: S-1-5-21-968346183-862388825-1738313271 iparangetype: ipa-ad-trust ---------------------------- Number of entries returned 1 ----------------------------
You need the values from the
ipabaseid
andipaidrangesize
attributes in the next steps.To calculate the highest usable ID, use the following formula:
maximum_range = ipabaseid + ipaidrangesize - 1
With the values from the previous step, the highest usable ID for the
ad.example.com
domain is1918599999
(1918400000 + 200000 - 1).Edit the
/etc/samba/smb.conf
file, and add the ID mapping configuration for the domain to the[global]
section:idmap config AD : range = 1918400000 - 1918599999 idmap config AD : backend = sss
Specify the value from
ipabaseid
attribute as the lowest and the computed value from the previous step as the highest value of the range.Restart the
smb
andwinbind
services:[root@idm_client]# systemctl restart smb winbind
Verification
List the shares on the Samba server using Kerberos authentication:
$
smbclient -L idm_client.idm.example.com -U user_name --use-kerberos=required
lp_load_ex: changing to config backend registry Sharename Type Comment --------- ---- ------- example Disk IPC$ IPC IPC Service (Samba 4.15.2) ...
3.6.4. Additional resources
3.13. Configuring Samba for macOS clients
The fruit
virtual file system (VFS) Samba module provides enhanced compatibility with Apple server message block (SMB) clients.
3.15. Setting up Samba as a print server
If you set up Samba as a print server, clients in your network can use Samba to print. Additionally, Windows clients can, if configured, download the driver from the Samba server.
Parts of this section were adapted from the Setting up Samba as a Print Server documentation published in the Samba Wiki. License: CC BY 4.0. Authors and contributors: See the history tab on the Wiki page.
Prerequisites
Samba has been set up in one of the following modes:
3.15.1. Enabling print server support in Samba
By default, print server support is not enabled in Samba. To use Samba as a print server, you must configure Samba accordingly.
Print jobs and printer operations require remote procedure calls (RPCs). By default, Samba starts the rpcd_spoolss
service on demand to manage RPCs. During the first RPC call, or when you update the printer list in CUPS, Samba retrieves the printer information from CUPS. This can require approximately 1 second per printer. Therefore, if you have more than 50 printers, tune the rpcd_spoolss
settings.
Prerequisites
The printers are configured in a CUPS server.
For details about configuring printers in CUPS, see the documentation provided in the CUPS web console (https://printserver:631/help) on the print server.
Procedure
Edit the
/etc/samba/smb.conf
file:Add the
[printers]
section to enable the printing backend in Samba:[printers] comment = All Printers path = /var/tmp/ printable = yes create mask = 0600
Important: The
[printers]
share name is hard-coded and cannot be changed.
If the CUPS server runs on a different host or port, specify the setting in the
[printers]
section:cups server = printserver.example.com:631
If you have many printers, set the number of idle seconds to a higher value than the number of printers connected to CUPS. For example, if you have 100 printers, set the following in the
[global]
section:rpcd_spoolss:idle_seconds = 200
If this setting does not scale in your environment, also increase the number of
rpcd_spoolss
workers in the[global]
section:rpcd_spoolss:num_workers = 10
By default,
rpcd_spoolss
starts 5 workers.
Verify the
/etc/samba/smb.conf
file:# testparm
Open the required ports and reload the firewall configuration using the
firewall-cmd
utility:# firewall-cmd --permanent --add-service=samba # firewall-cmd --reload
Restart the
smb
service:# systemctl restart smb
After restarting the service, Samba automatically shares all printers that are configured in the CUPS back end. If you want to manually share only specific printers, see Manually sharing specific printers.
Verification
Submit a print job. For example, to print a PDF file, enter:
# smbclient -Uuser //sambaserver.example.com/printer_name -c "print example.pdf"
3.15.2. Manually sharing specific printers
If you configured Samba as a print server, by default, Samba shares all printers that are configured in the CUPS back end. The following procedure explains how to share only specific printers.
Prerequisites
- Samba is set up as a print server
Procedure
Edit the
/etc/samba/smb.conf
file:In the
[global]
section, disable automatic printer sharing by setting:load printers = no
Add a section for each printer you want to share. For example, to share the printer named
example
in the CUPS back end asExample-Printer
in Samba, add the following section:[Example-Printer] path = /var/tmp/ printable = yes printer name = example
You do not need individual spool directories for each printer. You can set the same spool directory in the
path
parameter for the printer as you set in the[printers]
section.
Verify the
/etc/samba/smb.conf
file:# testparm
Reload the Samba configuration:
# smbcontrol all reload-config
3.16. Setting up automatic printer driver downloads for Windows clients on Samba print servers
If you are running a Samba print server for Windows clients, you can upload drivers and preconfigure printers. If a user connects to a printer, Windows automatically downloads and installs the driver locally on the client. The user does not require local administrator permissions for the installation. Additionally, Windows applies preconfigured driver settings, such as the number of trays.
Parts of this section were adapted from the Setting up Automatic Printer Driver Downloads for Windows Clients documentation published in the Samba Wiki. License: CC BY 4.0. Authors and contributors: See the history tab on the Wiki page.
Prerequisites
- Samba is set up as a print server
3.16.1. Basic information about printer drivers
This section provides general information about printer drivers.
Supported driver model version
Samba only supports the printer driver model version 3 which is supported in Windows 2000 and later, and Windows Server 2000 and later. Samba does not support the driver model version 4, introduced in Windows 8 and Windows Server 2012. However, these and later Windows versions also support version 3 drivers.
Package-aware drivers
Samba does not support package-aware drivers.
Preparing a printer driver for being uploaded
Before you can upload a driver to a Samba print server:
- Unpack the driver if it is provided in a compressed format.
Some drivers require you to start a setup application that installs the driver locally on a Windows host. In certain situations, the installer extracts the individual files into the operating system's temporary folder while the setup runs. To use the driver files for uploading:
- Start the installer.
- Copy the files from the temporary folder to a new location.
- Cancel the installation.
Ask your printer manufacturer for drivers that support uploading to a print server.
Providing 32-bit and 64-bit drivers for a printer to a client
To provide the driver for a printer for both 32-bit and 64-bit Windows clients, you must upload a driver with exactly the same name for both architectures. For example, if you are uploading the 32-bit driver named Example PostScript
and the 64-bit driver named Example PostScript (v1.0)
, the names do not match. Consequently, you can only assign one of the drivers to a printer and the driver will not be available for both architectures.
3.16.2. Enabling users to upload and preconfigure drivers
To be able to upload and preconfigure printer drivers, a user or a group needs to have the SePrintOperatorPrivilege
privilege granted. A user must be added to the printadmin
group. Red Hat Enterprise Linux automatically creates this group when you install the samba
package. The printadmin
group is assigned the lowest available dynamic system GID that is lower than 1000.
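For example, to add an existing operating system user to the printadmin group, you can use the usermod utility. The user name example_user is only a placeholder:
# usermod -aG printadmin example_user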
Procedure
For example, to grant the
SePrintOperatorPrivilege
privilege to theprintadmin
group:# net rpc rights grant "printadmin" SePrintOperatorPrivilege -U "DOMAIN\administrator" Enter DOMAIN\administrator's password: Successfully granted rights.
Note: In a domain environment, grant
SePrintOperatorPrivilege
to a domain group. This enables you to centrally manage the privilege by updating a user's group membership.
To list all users and groups having
SePrintOperatorPrivilege
granted:# net rpc rights list privileges SePrintOperatorPrivilege -U "DOMAIN\administrator" Enter administrator's password: SePrintOperatorPrivilege: BUILTIN\Administrators DOMAIN\printadmin
3.16.4. Creating a GPO to enable clients to trust the Samba print server
For security reasons, recent Windows operating systems prevent clients from downloading non-package-aware printer drivers from an untrusted server. If your print server is a member in an AD, you can create a Group Policy Object (GPO) in your domain to trust the Samba server.
Prerequisites
- The Samba print server is a member of an AD domain.
- The Windows computer you are using to create the GPO must have the Windows Remote Server Administration Tools (RSAT) installed. For details, see the Windows documentation.
Procedure
-
Log into a Windows computer using an account that is allowed to edit group policies, such as the AD domain
Administrator
user. -
Open the
Group Policy Management Console
. Right-click to your AD domain and select
Create a GPO in this domain, and Link it here
.-
Enter a name for the GPO, such as
Legacy Printer Driver Policy
and clickOK
. The new GPO will be displayed under the domain entry. -
Right-click to the newly-created GPO and select
Edit
to open theGroup Policy Management Editor
. Navigate to
→ → → .On the right side of the window, double-click
Point and Print Restriction
to edit the policy:Enable the policy and set the following options:
-
Select
Users can only point and print to these servers
and enter the fully-qualified domain name (FQDN) of the Samba print server to the field next to this option. In both check boxes under
Security Prompts
, selectDo not show warning or elevation prompt
.
-
Select
- Click OK.
Double-click
Package Point and Print - Approved servers
to edit the policy:-
Enable the policy and click the
Show
button. Enter the FQDN of the Samba print server.
-
Close both the
Show Contents
and the policy’s properties window by clickingOK
.
-
Enable the policy and click the
-
Close the
Group Policy Management Editor
. -
Close the
Group Policy Management Console
.
After the Windows domain members have applied the group policy, printer drivers are automatically downloaded from the Samba server when a user connects to a printer.
Additional resources
- For using group policies, see the Windows documentation.
3.16.5. Uploading drivers and preconfiguring printers
Use the Print Management
application on a Windows client to upload drivers and preconfigure printers hosted on the Samba print server. For further details, see the Windows documentation.
3.17. Running Samba on a server with FIPS mode enabled
This section provides an overview of the limitations of running Samba with FIPS mode enabled. It also provides the procedure for enabling FIPS mode on a Red Hat Enterprise Linux host running Samba.
3.17.1. Limitations of using Samba in FIPS mode
The following Samba modes and features work in FIPS mode under the indicated conditions:
- Samba as a domain member only in Active Directory (AD) or Red Hat Identity Management (IdM) environments with Kerberos authentication that uses AES ciphers.
- Samba as a file server on an Active Directory domain member. However, this requires that clients use Kerberos to authenticate to the server.
Due to the increased security of FIPS, the following Samba features and modes do not work if FIPS mode is enabled:
- NT LAN Manager (NTLM) authentication because RC4 ciphers are blocked
- The server message block version 1 (SMB1) protocol
- The stand-alone file server mode because it uses NTLM authentication
- NT4-style domain controllers
- NT4-style domain members. Note that Red Hat continues supporting the primary domain controller (PDC) functionality IdM uses in the background.
- Password changes against the Samba server. You can only perform password changes using Kerberos against an Active Directory domain controller.
The following feature is not tested in FIPS mode and, therefore, is not supported by Red Hat:
- Running Samba as a print server
3.17.2. Using Samba in FIPS mode
You can enable the FIPS mode on a RHEL host that runs Samba.
Prerequisites
- Samba is configured on the Red Hat Enterprise Linux host.
- Samba runs in a mode that is supported in FIPS mode.
Procedure
Enable the FIPS mode on RHEL:
# fips-mode-setup --enable
Reboot the server:
# reboot
Use the
testparm
utility to verify the configuration:# testparm -s
If the command displays any errors or incompatibilities, fix them to ensure that Samba works correctly.
Additional resources
3.18. Tuning the performance of a Samba server
Learn what settings can improve the performance of Samba in certain situations, and which settings can have a negative performance impact.
Parts of this section were adapted from the Performance Tuning documentation published in the Samba Wiki. License: CC BY 4.0. Authors and contributors: See the history tab on the Wiki page.
Prerequisites
- Samba is set up as a file or print server
3.18.1. Setting the SMB protocol version
Each new SMB version adds features and improves the performance of the protocol. Recent Windows and Windows Server operating systems always support the latest protocol version. If Samba also uses the latest protocol version, Windows clients connecting to Samba benefit from the performance improvements. In Samba, the default value of the server max protocol parameter is set to the latest supported stable SMB protocol version.
To always have the latest stable SMB protocol version enabled, do not set the server max protocol
parameter. If you set the parameter manually, you will need to modify the setting with each new version of the SMB protocol, to have the latest protocol version enabled.
The following procedure explains how to use the default value in the server max protocol
parameter.
Procedure
-
Remove the
server max protocol
parameter from the[global]
section in the/etc/samba/smb.conf
file. Reload the Samba configuration
# smbcontrol all reload-config
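As an optional check, you can display the SMB protocol version the server currently uses as its maximum, including built-in defaults, with the testparm utility:
# testparm -s -v | grep -i "server max protocol"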
3.18.3. Settings that can have a negative performance impact
By default, the kernel in Red Hat Enterprise Linux is tuned for high network performance. For example, the kernel uses an auto-tuning mechanism for buffer sizes. Setting the socket options
parameter in the /etc/samba/smb.conf
file overrides these kernel settings. As a result, setting this parameter decreases the Samba network performance in most cases.
To use the optimized settings from the kernel, remove the socket options
parameter from the [global]
section in the /etc/samba/smb.conf
file.
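To check whether the parameter is currently set, you can filter the output of testparm, which lists only explicitly set parameters when run without the -v option. If the command prints nothing, the parameter is not set:
# testparm -s | grep "socket options"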
3.19. Configuring Samba to be compatible with clients that require an SMB version lower than the default
Samba uses a reasonable and secure default value for the minimum server message block (SMB) version it supports. However, if you have clients that require an older SMB version, you can configure Samba to support it.
3.19.1. Setting the minimum SMB protocol version supported by a Samba server
In Samba, the server min protocol
parameter in the /etc/samba/smb.conf
file defines the minimum server message block (SMB) protocol version the Samba server supports. You can change the minimum SMB protocol version.
By default, Samba on RHEL 8.2 and later supports only SMB2 and newer protocol versions. Red Hat recommends not using the deprecated SMB1 protocol. However, if your environment requires SMB1, you can manually set the server min protocol
parameter to NT1
to re-enable SMB1.
Prerequisites
- Samba is installed and configured.
Procedure
Edit the
/etc/samba/smb.conf
file, add theserver min protocol
parameter, and set the parameter to the minimum SMB protocol version the server should support. For example, to set the minimum SMB protocol version toSMB3
, add:server min protocol = SMB3
Restart the
smb
service:# systemctl restart smb
Additional resources
-
smb.conf(5)
man page on your system
3.20. Frequently used Samba command-line utilities
This section describes frequently used commands for working with a Samba server.
3.20.1. Using the net ads join and net rpc join commands
Using the join
subcommand of the net
utility, you can join Samba to an AD or NT4 domain. To join the domain, you must create the /etc/samba/smb.conf
file manually, and optionally update additional configurations, such as PAM.
Red Hat recommends using the realm
utility to join a domain. The realm
utility automatically updates all involved configuration files.
Procedure
Manually create the
/etc/samba/smb.conf
file with the following settings:For an AD domain member:
[global] workgroup = domain_name security = ads passdb backend = tdbsam realm = AD_REALM
For an NT4 domain member:
[global] workgroup = domain_name security = user passdb backend = tdbsam
-
Add an ID mapping configuration for the
*
default domain and for the domain you want to join to the[global
] section in the/etc/samba/smb.conf
file. Verify the
/etc/samba/smb.conf
file:# testparm
Join the domain as the domain administrator:
To join an AD domain:
# net ads join -U "DOMAIN\administrator"
To join an NT4 domain:
# net rpc join -U "DOMAIN\administrator"
Append the
winbind
source to thepasswd
andgroup
database entry in the/etc/nsswitch.conf
file:passwd: files
winbind
group: fileswinbind
Enable and start the
winbind
service:# systemctl enable --now winbind
Optional: Configure PAM using the
authselect
utility.For details, see the
authselect(8)
man page on your system.Optional: For AD environments, configure the Kerberos client.
For details, see the documentation of your Kerberos client.
Additional resources
3.20.2. Using the net rpc rights command
In Windows, you can assign privileges to accounts and groups to perform special operations, such as setting ACLs on a share or upload printer drivers. On a Samba server, you can use the net rpc rights
command to manage privileges.
Listing privileges you can set
To list all available privileges and their owners, use the net rpc rights list
command. For example:
# net rpc rights list -U "DOMAIN\administrator" Enter DOMAIN\administrator's password: SeMachineAccountPrivilege Add machines to domain SeTakeOwnershipPrivilege Take ownership of files or other objects SeBackupPrivilege Back up files and directories SeRestorePrivilege Restore files and directories SeRemoteShutdownPrivilege Force shutdown from a remote system SePrintOperatorPrivilege Manage printers SeAddUsersPrivilege Add users and groups to the domain SeDiskOperatorPrivilege Manage disk shares SeSecurityPrivilege System security
Granting privileges
To grant a privilege to an account or group, use the net rpc rights grant
command.
For example, grant the SePrintOperatorPrivilege
privilege to the DOMAIN\printadmin
group:
# net rpc rights grant "DOMAIN\printadmin" SePrintOperatorPrivilege -U "DOMAIN\administrator" Enter DOMAIN\administrator's password: Successfully granted rights.
Revoking privileges
To revoke a privilege from an account or group, use the net rpc rights revoke
command.
For example, to revoke the SePrintOperatorPrivilege
privilege from the DOMAIN\printadmin
group:
# net rpc rights revoke "DOMAIN\printadmin" SePrintOperatorPrivilege -U "DOMAIN\administrator" Enter DOMAIN\administrator's password: Successfully revoked rights.
3.20.4. Using the net user command
The net user
command enables you to perform the following actions on an AD DC or NT4 PDC:
- List all user accounts
- Add users
- Remove Users
Specifying a connection method, such as ads
for AD domains or rpc
for NT4 domains, is only required when you list domain user accounts. Other user-related subcommands can auto-detect the connection method.
Pass the -U user_name
parameter to the command to specify a user that is allowed to perform the requested action.
Listing domain user accounts
To list all users in an AD domain:
# net ads user -U "DOMAIN\administrator"
To list all users in an NT4 domain:
# net rpc user -U "DOMAIN\administrator"
Adding a user account to the domain
On a Samba domain member, you can use the net user add
command to add a user account to the domain.
For example, add the user
account to the domain:
Add the account:
# net user add user password -U "DOMAIN\administrator" User user added
Optional: Use the remote procedure call (RPC) shell to enable the account on the AD DC or NT4 PDC. For example:
# net rpc shell -U DOMAIN\administrator -S DC_or_PDC_name Talking to domain DOMAIN (S-1-5-21-1424831554-512457234-5642315751) net rpc>
user edit disabled user: no
Set user's disabled flag from [yes] to [no] net rpc>exit
Deleting a user account from the domain
On a Samba domain member, you can use the net user delete
command to remove a user account from the domain.
For example, to remove the user
account from the domain:
# net user delete user -U "DOMAIN\administrator" User user deleted
3.20.5. Using the rpcclient utility
The rpcclient
utility enables you to manually execute client-side Microsoft Remote Procedure Call (MS-RPC) functions on a local or remote SMB server. However, most of the features are integrated into separate utilities provided by Samba. Use rpcclient
only for testing MS-RPC functions.
Prerequisites
-
The
samba-client
package is installed.
Examples
For example, you can use the rpcclient
utility to:
Manage the printer Spool Subsystem (SPOOLSS).
Example 3.7. Assigning a Driver to a Printer
# rpcclient server_name -U "DOMAIN\administrator" -c 'setdriver "printer_name" "driver_name"' Enter DOMAIN\administrators password: Successfully set printer_name to driver driver_name.
Retrieve information about an SMB server.
Example 3.8. Listing all File Shares and Shared Printers
# rpcclient server_name -U "DOMAIN\administrator" -c 'netshareenum' Enter DOMAIN\administrators password: netname: Example_Share remark: path: C:\srv\samba\example_share\ password: netname: Example_Printer remark: path: C:\var\spool\samba\ password:
Perform actions using the Security Account Manager Remote (SAMR) protocol.
Example 3.9. Listing Users on an SMB Server
# rpcclient server_name -U "DOMAIN\administrator" -c 'enumdomusers' Enter DOMAIN\administrators password: user:[user1] rid:[0x3e8] user:[user2] rid:[0x3e9]
If you run the command against a standalone server or a domain member, it lists the users in the local database. Running the command against an AD DC or NT4 PDC lists the domain users.
Additional resources
-
rpcclient(1)
man page on your system
3.20.6. Using the samba-regedit application
Certain settings, such as printer configurations, are stored in the registry on the Samba server. You can use the ncurses-based samba-regedit
application to edit the registry of a Samba server.
Prerequisites
-
The
samba-client
package is installed.
Procedure
To start the application, enter:
# samba-regedit
Use the following keys:
- Cursor up and cursor down: Navigate through the registry tree and the values.
- Enter: Opens a key or edits a value.
-
Tab: Switches between the
Key
andValue
pane. - Ctrl+C: Closes the application.
3.20.7. Using the smbcontrol utility
The smbcontrol
utility enables you to send command messages to the smbd
, nmbd
, winbindd
, or all of these services. These control messages instruct the service, for example, to reload its configuration.
Prerequisites
-
The
samba-common-tools
package is installed.
Procedure
-
Reload the configuration of the
smbd
,nmbd
,winbindd
services by sending thereload-config
message type to theall
destination:
# smbcontrol all reload-config
Additional resources
-
smbcontrol(1)
man page on your system
3.20.8. Using the smbpasswd utility
The smbpasswd
utility manages user accounts and passwords in the local Samba database.
Prerequisites
-
The
samba-common-tools
package is installed.
Procedure
If you run the command as a user,
smbpasswd
changes the Samba password of the user who runs the command. For example:[user@server ~]$ smbpasswd New SMB password: password Retype new SMB password: password
If you run
smbpasswd
as theroot
user, you can use the utility, for example, to:Create a new user:
[root@server ~]# smbpasswd -a user_name New SMB password:
password
Retype new SMB password:password
Added user user_name.
Note: Before you can add a user to the Samba database, you must create the account in the local operating system. See the Adding a new user from the command line section in the Configuring basic system settings guide.
Enable a Samba user:
[root@server ~]# smbpasswd -e user_name Enabled user user_name.
Disable a Samba user:
[root@server ~]# smbpasswd -d user_name Disabled user user_name
Delete a user:
[root@server ~]# smbpasswd -x user_name Deleted user user_name.
Additional resources
-
smbpasswd(8)
man page on your system
3.20.9. Using the smbstatus utility
The smbstatus
utility reports on:
-
Connections per PID of each
smbd
daemon to the Samba server. This report includes the user name, primary group, SMB protocol version, encryption, and signing information. -
Connections per Samba share. This report includes the PID of the
smbd
daemon, the IP of the connecting machine, the time stamp when the connection was established, encryption, and signing information. - A list of locked files. The report entries include further details, such as opportunistic lock (oplock) types
Prerequisites
-
The
samba
package is installed. -
The
smbd
service is running.
Procedure
# smbstatus Samba version 4.15.2 PID Username Group Machine Protocol Version Encryption Signing ....------------------------------------------------------------------------------------------------------------------------- 963 DOMAIN\administrator DOMAIN\domain users client-pc (ipv4:192.0.2.1:57786) SMB3_02 - AES-128-CMAC Service pid Machine Connected at Encryption Signing: ....--------------------------------------------------------------------------- example 969 192.0.2.1 Thu Nov 1 10:00:00 2018 CEST - AES-128-CMAC Locked files: Pid Uid DenyMode Access R/W Oplock SharePath Name Time ....-------------------------------------------------------------------------------------------------------- 969 10000 DENY_WRITE 0x120089 RDONLY LEASE(RWH) /srv/samba/example file.txt Thu Nov 1 10:00:00 2018
Additional resources
-
smbstatus(1)
man page on your system
3.20.10. Using the smbtar utility
The smbtar
utility backs up the content of an SMB share or a subdirectory of it and stores the content in a tar
archive. Alternatively, you can write the content to a tape device.
Prerequisites
-
The
samba-client
package is installed.
Procedure
Use the following command to back up the content of the
demo
directory on the//server/example/
share and store the content in the/root/example.tar
archive:# smbtar -s server -x example -u user_name -p password -t /root/example.tar
Additional resources
-
smbtar(1)
man page on your system
3.20.11. Using the wbinfo utility
The wbinfo
utility queries and returns information created and used by the winbindd
service.
Prerequisites
-
The
samba-winbind-clients
package is installed.
Procedure
You can use wbinfo
, for example, to:
List domain users:
# wbinfo -u AD\administrator AD\guest ...
List domain groups:
# wbinfo -g AD\domain computers AD\domain admins AD\domain users ...
Display the SID of a user:
# wbinfo --name-to-sid="AD\administrator" S-1-5-21-1762709870-351891212-3141221786-500 SID_USER (1)
Display information about domains and trusts:
# wbinfo --trusted-domains --verbose Domain Name DNS Domain Trust Type Transitive In Out BUILTIN None Yes Yes Yes server None Yes Yes Yes DOMAIN1 domain1.example.com None Yes Yes Yes DOMAIN2 domain2.example.com External No Yes Yes
Additional resources
-
wbinfo(1)
man page on your system
3.21. Additional resources
-
smb.conf(5)
man page on your system -
/usr/share/docs/samba-version/
directory contains general documentation, example scripts, and LDAP schema files, provided by the Samba project - Setting up Samba and the Clustered Trivial Database (CDTB) to share directories stored on an GlusterFS volume
- Mounting an SMB Share on Red Hat Enterprise Linux
Chapter 4. Setting up and configuring a BIND DNS server
BIND is a feature-rich DNS server that is fully compliant with the Internet Engineering Task Force (IETF) DNS standards and draft standards. For example, administrators frequently use BIND as:
- Caching DNS server in the local network
- Authoritative DNS server for zones
- Secondary server to provide high availability for zones
4.1. Considerations about protecting BIND with SELinux or running it in a change-root environment
To secure a BIND installation, you can:
Run the
named
service without a change-root environment. In this case, SELinux inenforcing
mode prevents exploitation of known BIND security vulnerabilities. By default, Red Hat Enterprise Linux uses SELinux inenforcing
mode.ImportantRunning BIND on RHEL with SELinux in
enforcing
mode is more secure than running BIND in a change-root environment.Run the
named-chroot
service in a change-root environment.Using the change-root feature, administrators can define that the root directory of a process and its sub-processes is different to the
/
directory. When you start thenamed-chroot
service, BIND switches its root directory to/var/named/chroot/
. As a consequence, the service usesmount --bind
commands to make the files and directories listed in/etc/named-chroot.files
available in/var/named/chroot/
, and the process has no access to files outside of/var/named/chroot/
.
If you decide to use BIND:
-
In normal mode, use the
named
service. -
In a change-root environment, use the
named-chroot
service. This requires that you install, additionally, thenamed-chroot
package.
Additional resources
-
The
Red Hat SELinux BIND security profile
section in thenamed(8)
man page on your system
4.2. The BIND Administrator Reference Manual
The comprehensive BIND Administrator Reference Manual
, that is included in the bind
package, provides:
- Configuration examples
- Documentation on advanced features
- A configuration reference
- Security considerations
To display the BIND Administrator Reference Manual
on a host that has the bind
package installed, open the /usr/share/doc/bind/Bv9ARM.html
file in a browser.
4.3. Configuring BIND as a caching DNS server
By default, the BIND DNS server resolves and caches successful and failed lookups. The service then answers requests to the same records from its cache. This significantly improves the speed of DNS lookups.
Prerequisites
- The IP address of the server is static.
Procedure
Install the
bind
andbind-utils
packages:# yum install bind bind-utils
These packages provide BIND 9.11. If you require BIND 9.16, install the
bind9.16
andbind9.16-utils
packages.If you want to run BIND in a change-root environment install the
bind-chroot
package:# yum install bind-chroot
Note that running BIND on a host with SELinux in
enforcing
mode, which is the default, is more secure.Edit the
/etc/named.conf
file, and make the following changes in theoptions
statement:Update the
listen-on
andlisten-on-v6
statements to specify on which IPv4 and IPv6 interfaces BIND should listen:listen-on port 53 { 127.0.0.1; 192.0.2.1; }; listen-on-v6 port 53 { ::1; 2001:db8:1::1; };
Update the
allow-query
statement to configure from which IP addresses and ranges clients can query this DNS server:allow-query { localhost; 192.0.2.0/24; 2001:db8:1::/64; };
Add an
allow-recursion
statement to define from which IP addresses and ranges BIND accepts recursive queries:allow-recursion { localhost; 192.0.2.0/24; 2001:db8:1::/64; };
WarningDo not allow recursion on public IP addresses of the server. Otherwise, the server can become part of large-scale DNS amplification attacks.
By default, BIND resolves queries by recursively querying from the root servers to an authoritative DNS server. Alternatively, you can configure BIND to forward queries to other DNS servers, such as those of your provider. In this case, add a
forwarders
statement with the list of IP addresses of the DNS servers that BIND should forward queries to:forwarders { 198.51.100.1; 203.0.113.5; };
As a fall-back behavior, BIND resolves queries recursively if the forwarder servers do not respond. To disable this behavior, add a
forward only;
statement.
Verify the syntax of the
/etc/named.conf
file:# named-checkconf
If the command displays no output, the syntax is correct.
Update the
firewalld
rules to allow incoming DNS traffic:# firewall-cmd --permanent --add-service=dns # firewall-cmd --reload
Start and enable BIND:
# systemctl enable --now named
If you want to run BIND in a change-root environment, use the
systemctl enable --now named-chroot
command to enable and start the service.
Verification
Use the newly set up DNS server to resolve a domain:
# dig @localhost www.example.org ... www.example.org. 86400 IN A 198.51.100.34 ;; Query time: 917 msec ...
This example assumes that BIND runs on the same host and responds to queries on the
localhost
interface.After querying a record for the first time, BIND adds the entry to its cache.
Repeat the previous query:
# dig @localhost www.example.org ... www.example.org. 85332 IN A 198.51.100.34 ;; Query time: 1 msec ...
Because of the cached entry, further requests for the same record are significantly faster until the entry expires.
Next steps
- Configure the clients in your network to use this DNS server. If a DHCP server provides the DNS server setting to the clients, update the DHCP server’s configuration accordingly.
Additional resources
- Considerations about protecting BIND with SELinux or running it in a change-root environment
-
named.conf(5)
man page on your system -
/usr/share/doc/bind/sample/etc/named.conf
- The BIND Administrator Reference Manual
4.4. Configuring logging on a BIND DNS server
The configuration in the default /etc/named.conf
file, as provided by the bind
package, uses the default_debug
channel and logs messages to the /var/named/data/named.run
file. The default_debug
channel only logs entries when the server’s debug level is non-zero.
Using different channels and categories, you can configure BIND to write different events with a defined severity to separate files.
Prerequisites
- BIND is already configured, for example, as a caching name server.
-
The
named
ornamed-chroot
service is running.
Procedure
Edit the
/etc/named.conf
file, and addcategory
andchannel
phrases to thelogging
statement, for example:logging { ... category notify { zone_transfer_log; }; category xfer-in { zone_transfer_log; }; category xfer-out { zone_transfer_log; }; channel zone_transfer_log { file "/var/named/log/transfer.log" versions 10 size 50m; print-time yes; print-category yes; print-severity yes; severity info; }; ... };
With this example configuration, BIND logs messages related to zone transfers to
/var/named/log/transfer.log
. BIND creates up to10
versions of the log file and rotates them if they reach a maximum size of50
MB.The
category
phrase defines to which channels BIND sends messages of a category.The
channel
phrase defines the destination of log messages, including the number of versions, the maximum file size, and the minimum severity level of messages that BIND logs to the channel. Additional settings, such as logging the time stamp, category, and severity of an event, are optional but useful for debugging purposes.Create the log directory if it does not exist, and grant write permissions to the
named
user on this directory:# mkdir /var/named/log/ # chown named:named /var/named/log/ # chmod 700 /var/named/log/
Verify the syntax of the
/etc/named.conf
file:# named-checkconf
If the command displays no output, the syntax is correct.
Restart BIND:
# systemctl restart named
If you run BIND in a change-root environment, use the
systemctl restart named-chroot
command to restart the service.
Verification
Display the content of the log file:
# cat /var/named/log/transfer.log ... 06-Jul-2022 15:08:51.261 xfer-out: info: client @0x7fecbc0b0700 192.0.2.2#36121/key example-transfer-key (example.com): transfer of 'example.com/IN': AXFR started: TSIG example-transfer-key (serial 2022070603) 06-Jul-2022 15:08:51.261 xfer-out: info: client @0x7fecbc0b0700 192.0.2.2#36121/key example-transfer-key (example.com): transfer of 'example.com/IN': AXFR ended
Additional resources
-
named.conf(5)
man page on your system - The BIND Administrator Reference Manual
4.5. Writing BIND ACLs
Controlling access to certain features of BIND can prevent unauthorized access and attacks, such as denial of service (DoS). BIND access control list (acl
) statements are lists of IP addresses and ranges. Each ACL has a nickname that you can use in several statements, such as allow-query
, to refer to the specified IP addresses and ranges.
BIND uses only the first matching entry in an ACL. For example, if you define an ACL { 192.0.2/24; !192.0.2.1; }
and the host with IP address 192.0.2.1
connects, access is granted even if the second entry excludes this address.
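If you want the exclusion to take effect, place the negated entry before the range. The following sketch uses the same example addresses to deny 192.0.2.1 while permitting the rest of the subnet; the ACL name example-acl is only an illustrative placeholder:
acl example-acl { !192.0.2.1; 192.0.2.0/24; };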
BIND has the following built-in ACLs:
-
none
: Matches no hosts. -
any
: Matches all hosts. -
localhost
: Matches the loopback addresses127.0.0.1
and::1
, as well as the IP addresses of all interfaces on the server that runs BIND. -
localnets
: Matches the loopback addresses127.0.0.1
and::1
, as well as all subnets the server that runs BIND is directly connected to.
Prerequisites
- BIND is already configured, for example, as a caching name server.
-
The
named
ornamed-chroot
service is running.
Procedure
Edit the
/etc/named.conf
file and make the following changes:Add
acl
statements to the file. For example, to create an ACL namedinternal-networks
for127.0.0.1
,192.0.2.0/24
, and2001:db8:1::/64
, enter:acl internal-networks { 127.0.0.1; 192.0.2.0/24; 2001:db8:1::/64; }; acl dmz-networks { 198.51.100.0/24; 2001:db8:2::/64; };
Use the ACL’s nickname in statements that support them, for example:
allow-query { internal-networks; dmz-networks; }; allow-recursion { internal-networks; };
Verify the syntax of the
/etc/named.conf
file:# named-checkconf
If the command displays no output, the syntax is correct.
Reload BIND:
# systemctl reload named
If you run BIND in a change-root environment, use the
systemctl reload named-chroot
command to reload the service.
Verification
Execute an action that triggers a feature which uses the configured ACL. For example, the ACL in this procedure allows only recursive queries from the defined IP addresses. In this case, enter the following command on a host that is not within the ACL’s definition to attempt resolving an external domain:
# dig +short @192.0.2.1 www.example.com
If the command returns no output, BIND denied access, and the ACL works. For a verbose output on the client, use the command without
+short
option:# dig @192.0.2.1 www.example.com ... ;; WARNING: recursion requested but not available ...
Additional resources
-
The
Access control lists
section in the BIND Administrator Reference Manual.
4.6. Configuring zones on a BIND DNS server
A DNS zone is a database with resource records for a specific sub-tree in the domain space. For example, if you are responsible for the example.com
domain, you can set up a zone for it in BIND. As a result, clients can resolve www.example.com
to the IP address configured in this zone.
4.6.1. The SOA record in zone files
The start of authority (SOA) record is a required record in a DNS zone. This record is important, for example, when multiple DNS servers are authoritative for a zone, and it also provides information, such as the negative caching time, to DNS resolvers.
A SOA record in BIND has the following syntax:
name class type mname rname serial refresh retry expire minimum
For better readability, administrators typically split the record in zone files into multiple lines with comments that start with a semicolon (;
). Note that, if you split a SOA record, parentheses keep the record together:
@ IN SOA ns1.example.com. hostmaster.example.com. ( 2022070601 ; serial number 1d ; refresh period 3h ; retry period 3d ; expire time 3h ) ; minimum TTL
Note the trailing dot at the end of the fully-qualified domain names (FQDNs). FQDNs consist of multiple domain labels, separated by dots. Because the DNS root has an empty label, FQDNs end with a dot. Therefore, BIND appends the zone name to names without a trailing dot. A hostname without a trailing dot, for example, ns1.example.com
would be expanded to ns1.example.com.example.com.
, which is not the correct address of the primary name server.
These are the fields in a SOA record:
-
name
: The name of the zone, the so-calledorigin
. If you set this field to@
, BIND expands it to the zone name defined in/etc/named.conf
. -
class
: In SOA records, you must always set this field to Internet (IN
). -
type
: In SOA records, you must always set this field to SOA
. -
mname
(master name): The hostname of the primary name server of this zone. -
rname
(responsible name): The email address of the person responsible for this zone. Note that the format differs from a regular email address: you must replace the at sign (@
) with a dot (.
). serial
: The version number of this zone file. Secondary name servers update their copies of the zone only if the serial number on the primary server is higher.The serial can be any numeric value. A commonly used format is
<year><month><day><two-digit-number>
. With this format, you can, theoretically, change the zone file up to a hundred times per day. For example, the serial 2022070601 denotes the first change made on 6 July 2022.-
refresh
: The amount of time secondary servers should wait before checking the primary server for zone updates. -
retry
: The amount of time after which a secondary server retries querying the primary server after a failed attempt. -
expire
: The amount of time after which a secondary server stops querying the primary server if all previous attempts failed. -
minimum
: RFC 2308 changed the meaning of this field to the negative caching time. Compliant resolvers use it to determine how long to cacheNXDOMAIN
name errors.
A numeric value in the refresh
, retry
, expire
, and minimum
fields defines a time in seconds. However, for better readability, use time suffixes, such as m
for minutes, h
for hours, and d
for days. For example, 3h
stands for 3 hours.
4.6.2. Setting up a forward zone on a BIND primary server
Forward zones map names to IP addresses and other information. For example, if you are responsible for the domain example.com
, you can set up a forward zone in BIND to resolve names, such as www.example.com
.
Prerequisites
- BIND is already configured, for example, as a caching name server.
-
The
named
ornamed-chroot
service is running.
Procedure
Add a zone definition to the
/etc/named.conf
file:zone "example.com" { type master; file "example.com.zone"; allow-query { any; }; allow-transfer { none; }; };
These settings define:
-
This server as the primary server (
type master
) for theexample.com
zone. -
The
/var/named/example.com.zone
file is the zone file. If you set a relative path, as in this example, this path is relative to the directory you set indirectory
in theoptions
statement. - Any host can query this zone. Alternatively, specify IP ranges or BIND access control list (ACL) nicknames to limit the access.
- No host can transfer the zone. Allow zone transfers only when you set up secondary servers and only for the IP addresses of the secondary servers.
-
This server as the primary server (
Verify the syntax of the
/etc/named.conf
file:# named-checkconf
If the command displays no output, the syntax is correct.
Create the
/var/named/example.com.zone
file, for example, with the following content:$TTL 8h @ IN SOA ns1.example.com. hostmaster.example.com. ( 2022070601 ; serial number 1d ; refresh period 3h ; retry period 3d ; expire time 3h ) ; minimum TTL IN NS ns1.example.com. IN MX 10 mail.example.com. www IN A 192.0.2.30 www IN AAAA 2001:db8:1::30 ns1 IN A 192.0.2.1 ns1 IN AAAA 2001:db8:1::1 mail IN A 192.0.2.20 mail IN AAAA 2001:db8:1::20
This zone file:
-
Sets the default time-to-live (TTL) value for resource records to 8 hours. Without a time suffix, such as
h
for hour, BIND interprets the value as seconds. - Contains the required SOA resource record with details about the zone.
-
Sets
ns1.example.com
as an authoritative DNS server for this zone. To be functional, a zone requires at least one name server (NS
) record. However, to be compliant with RFC 1912, at least two name servers are required. -
Sets
mail.example.com
as the mail exchanger (MX
) of theexample.com
domain. The numeric value in front of the host name is the priority of the record. Entries with a lower value have a higher priority. -
Sets the IPv4 and IPv6 addresses of
www.example.com
,mail.example.com
, andns1.example.com
.
-
Sets the default time-to-live (TTL) value for resource records to 8 hours. Without a time suffix, such as
Set secure permissions on the zone file that allow only the
named
group to read it:# chown root:named /var/named/example.com.zone # chmod 640 /var/named/example.com.zone
Verify the syntax of the
/var/named/example.com.zone
file:# named-checkzone example.com /var/named/example.com.zone zone example.com/IN: loaded serial 2022070601 OK
Reload BIND:
# systemctl reload named
If you run BIND in a change-root environment, use the
systemctl reload named-chroot
command to reload the service.
Verification
Query different records from the
example.com
zone, and verify that the output matches the records you have configured in the zone file:# dig +short @localhost AAAA www.example.com 2001:db8:1::30 # dig +short @localhost NS example.com ns1.example.com. # dig +short @localhost A ns1.example.com 192.0.2.1
This example assumes that BIND runs on the same host and responds to queries on the
localhost
interface.
4.6.3. Setting up a reverse zone on a BIND primary server
Reverse zones map IP addresses to names. For example, if you are responsible for IP range 192.0.2.0/24
, you can set up a reverse zone in BIND to resolve IP addresses from this range to hostnames.
If you create a reverse zone for whole classful networks, name the zone accordingly. For example, for the class C network 192.0.2.0/24
, the name of the zone is 2.0.192.in-addr.arpa
. If you want to create a reverse zone for a different network size, for example 192.0.2.0/28
, the name of the zone is 28-2.0.192.in-addr.arpa
.
Prerequisites
- BIND is already configured, for example, as a caching name server.
-
The
named
ornamed-chroot
service is running.
Procedure
Add a zone definition to the
/etc/named.conf
file:zone "2.0.192.in-addr.arpa" { type master; file "2.0.192.in-addr.arpa.zone"; allow-query { any; }; allow-transfer { none; }; };
These settings define:
-
This server as the primary server (
type master
) for the2.0.192.in-addr.arpa
reverse zone. -
The
/var/named/2.0.192.in-addr.arpa.zone
file is the zone file. If you set a relative path, as in this example, this path is relative to the directory you set indirectory
in theoptions
statement. - Any host can query this zone. Alternatively, specify IP ranges or BIND access control list (ACL) nicknames to limit the access.
- No host can transfer the zone. Allow zone transfers only when you set up secondary servers and only for the IP addresses of the secondary servers.
-
This server as the primary server (
Verify the syntax of the
/etc/named.conf
file:# named-checkconf
If the command displays no output, the syntax is correct.
Create the
/var/named/2.0.192.in-addr.arpa.zone
file, for example, with the following content:$TTL 8h @ IN SOA ns1.example.com. hostmaster.example.com. ( 2022070601 ; serial number 1d ; refresh period 3h ; retry period 3d ; expire time 3h ) ; minimum TTL IN NS ns1.example.com. 1 IN PTR ns1.example.com. 30 IN PTR www.example.com.
This zone file:
-
Sets the default time-to-live (TTL) value for resource records to 8 hours. Without a time suffix, such as
h
for hour, BIND interprets the value as seconds. - Contains the required SOA resource record with details about the zone.
-
Sets
ns1.example.com
as an authoritative DNS server for this reverse zone. To be functional, a zone requires at least one name server (NS
) record. However, to be compliant with RFC 1912, at least two name servers are required. -
Sets the pointer (
PTR
) record for the192.0.2.1
and192.0.2.30
addresses.
-
Sets the default time-to-live (TTL) value for resource records to 8 hours. Without a time suffix, such as
Set secure permissions on the zone file that only allow the
named
group to read it:# chown root:named /var/named/2.0.192.in-addr.arpa.zone # chmod 640 /var/named/2.0.192.in-addr.arpa.zone
Verify the syntax of the
/var/named/2.0.192.in-addr.arpa.zone
file:# named-checkzone 2.0.192.in-addr.arpa /var/named/2.0.192.in-addr.arpa.zone zone 2.0.192.in-addr.arpa/IN: loaded serial 2022070601 OK
Reload BIND:
# systemctl reload named
If you run BIND in a change-root environment, use the
systemctl reload named-chroot
command to reload the service.
Verification
Query different records from the reverse zone, and verify that the output matches the records you have configured in the zone file:
# dig +short @localhost -x 192.0.2.1 ns1.example.com. # dig +short @localhost -x 192.0.2.30 www.example.com.
This example assumes that BIND runs on the same host and responds to queries on the
localhost
interface.
4.6.4. Updating a BIND zone file
In certain situations, for example if an IP address of a server changes, you must update a zone file. If multiple DNS servers are responsible for a zone, perform this procedure only on the primary server. Other DNS servers that store a copy of the zone will receive the update through a zone transfer.
Prerequisites
- The zone is configured.
-
The
named
ornamed-chroot
service is running.
Procedure
Optional: Identify the path to the zone file in the
/etc/named.conf
file:options { ... directory "/var/named"; } zone "example.com" { ... file "example.com.zone"; };
You find the path to the zone file in the
file
statement in the zone’s definition. A relative path is relative to the directory set indirectory
in theoptions
statement.Edit the zone file:
- Make the required changes.
Increment the serial number in the start of authority (SOA) record.
ImportantIf the serial number is equal to or lower than the previous value, secondary servers will not update their copy of the zone.
Verify the syntax of the zone file:
# named-checkzone example.com /var/named/example.com.zone zone example.com/IN: loaded serial 2022062802 OK
Reload BIND:
# systemctl reload named
If you run BIND in a change-root environment, use the
systemctl reload named-chroot
command to reload the service.
Verification
Query the record you have added, modified, or removed, for example:
# dig +short @localhost A ns2.example.com 192.0.2.2
This example assumes that BIND runs on the same host and responds to queries on the
localhost
interface.
4.6.5. DNSSEC zone signing using the automated key generation and zone maintenance features
You can sign zones with domain name system security extensions (DNSSEC) to ensure authentication and data integrity. Such zones contain additional resource records. Clients can use them to verify the authenticity of the zone information.
If you enable the DNSSEC policy feature for a zone, BIND performs the following actions automatically:
- Creates the keys
- Signs the zone
- Maintains the zone, including re-signing and periodically replacing the keys.
To enable external DNS servers to verify the authenticity of a zone, you must add the public key of the zone to the parent zone. Contact your domain provider or registry for further details on how to accomplish this.
This procedure uses the built-in default
DNSSEC policy in BIND. This policy uses a single ECDSAP256SHA256
key signatures. Alternatively, create your own policy to use custom keys, algorithms, and timings.
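The following is a minimal sketch of such a custom policy in /etc/named.conf. The policy name, algorithm, and lifetimes are illustrative assumptions, not recommendations:
dnssec-policy "custom-policy" {
    keys {
        ksk lifetime unlimited algorithm ecdsap256sha256;
        zsk lifetime P90D algorithm ecdsap256sha256;
    };
};

zone "example.com" {
    ...
    dnssec-policy "custom-policy";
};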
Prerequisites
-
BIND 9.16 or later is installed. To meet this requirement, install the
bind9.16
package instead ofbind
. - The zone for which you want to enable DNSSEC is configured.
-
The
named
ornamed-chroot
service is running. - The server synchronizes the time with a time server. An accurate system time is important for DNSSEC validation.
Procedure
Edit the
/etc/named.conf
file, and adddnssec-policy default;
to the zone for which you want to enable DNSSEC:zone "example.com" { ... dnssec-policy default; };
Reload BIND:
# systemctl reload named
If you run BIND in a change-root environment, use the
systemctl reload named-chroot
command to reload the service.BIND stores the public key in the
/var/named/K<zone_name>.+<algorithm>+<key_ID>.key
file. Use this file to display the public key of the zone in the format that the parent zone requires:DS record format:
# dnssec-dsfromkey /var/named/Kexample.com.+013+61141.key example.com. IN DS 61141 13 2 3E184188CF6D2521EDFDC3F07CFEE8D0195AACBD85E68BAE0620F638B4B1B027
DNSKEY format:
# grep DNSKEY /var/named/Kexample.com.+013+61141.key example.com. 3600 IN DNSKEY 257 3 13 sjzT3jNEp120aSO4mPEHHSkReHUf7AABNnT8hNRTzD5cKMQSjDJin2I3 5CaKVcWO1pm+HltxUEt+X9dfp8OZkg==
- Request to add the public key of the zone to the parent zone. Contact your domain provider or registry for further details on how to accomplish this.
Verification
Query your own DNS server for a record from the zone for which you enabled DNSSEC signing:
# dig +dnssec +short @localhost A www.example.com 192.0.2.30 A 13 3 28800 20220718081258 20220705120353 61141 example.com. e7Cfh6GuOBMAWsgsHSVTPh+JJSOI/Y6zctzIuqIU1JqEgOOAfL/Qz474 M0sgi54m1Kmnr2ANBKJN9uvOs5eXYw==
This example assumes that BIND runs on the same host and responds to queries on the
localhost
interface.After the public key has been added to the parent zone and propagated to other servers, verify that the server sets the authenticated data (
ad
) flag on queries to the signed zone:# dig @localhost example.com +dnssec ... ;; flags: qr rd ra ad; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1 ...
4.7. Configuring zone transfers among BIND DNS servers
Zone transfers ensure that all DNS servers that have a copy of the zone use up-to-date data.
Prerequisites
- On the future primary server, the zone for which you want to set up zone transfers is already configured.
- On the future secondary server, BIND is already configured, for example, as a caching name server.
-
On both servers, the
named
ornamed-chroot
service is running.
Procedure
On the existing primary server:
Create a shared key, and append it to the
/etc/named.conf
file:# tsig-keygen example-transfer-key | tee -a /etc/named.conf key "example-transfer-key" { algorithm hmac-sha256; secret "q7ANbnyliDMuvWgnKOxMLi313JGcTZB5ydMW5CyUGXQ="; };
This command displays the output of the
tsig-keygen
command and automatically appends it to/etc/named.conf
.You will require the output of the command later on the secondary server as well.
Edit the zone definition in the
/etc/named.conf
file:In the
allow-transfer
statement, define that servers must provide the key specified in theexample-transfer-key
statement to transfer a zone:zone "example.com" { ... allow-transfer { key example-transfer-key; }; };
Alternatively, use BIND access control list (ACL) nicknames in the
allow-transfer
statement.By default, after a zone has been updated, BIND notifies all name servers which have a name server (
NS
) record in this zone. If you do not plan to add anNS
record for the secondary server to the zone, you can configure BIND to notify this server anyway. For that, add the also-notify
statement with the IP addresses of this secondary server to the zone:zone "example.com" { ... also-notify { 192.0.2.2; 2001:db8:1::2; }; };
Verify the syntax of the
/etc/named.conf
file:# named-checkconf
If the command displays no output, the syntax is correct.
Reload BIND:
# systemctl reload named
If you run BIND in a change-root environment, use the
systemctl reload named-chroot
command to reload the service.
On the future secondary server:
Edit the
/etc/named.conf
file as follows:Add the same key definition as on the primary server:
key "example-transfer-key" { algorithm hmac-sha256; secret "q7ANbnyliDMuvWgnKOxMLi313JGcTZB5ydMW5CyUGXQ="; };
Add the zone definition to the
/etc/named.conf
file:zone "example.com" { type slave; file "slaves/example.com.zone"; allow-query { any; }; allow-transfer { none; }; masters { 192.0.2.1 key example-transfer-key; 2001:db8:1::1 key example-transfer-key; }; };
These settings state:
-
This server is a secondary server (
type slave
) for theexample.com
zone. -
The
/var/named/slaves/example.com.zone
file is the zone file. If you set a relative path, as in this example, this path is relative to the directory you set indirectory
in theoptions
statement. To separate zone files for which this server is secondary from primary ones, you can store them, for example, in the/var/named/slaves/
directory. - Any host can query this zone. Alternatively, specify IP ranges or ACL nicknames to limit the access.
- No host can transfer the zone from this server.
-
The IP addresses of the primary server of this zone are
192.0.2.1
and 2001:db8:1::1
. Alternatively, you can specify ACL nicknames. This secondary server will use the key namedexample-transfer-key
to authenticate to the primary server.
-
This server is a secondary server (
Verify the syntax of the
/etc/named.conf
file:# named-checkconf
Reload BIND:
# systemctl reload named
If you run BIND in a change-root environment, use the
systemctl reload named-chroot
command to reload the service.
-
Optional: Modify the zone file on the primary server and add an
NS
record for the new secondary server.
Verification
On the secondary server:
Display the
systemd
journal entries of thenamed
service:# journalctl -u named ... Jul 06 15:08:51 ns2.example.com named[2024]: zone example.com/IN: Transfer started. Jul 06 15:08:51 ns2.example.com named[2024]: transfer of 'example.com/IN' from 192.0.2.1#53: connected using 192.0.2.2#45803 Jul 06 15:08:51 ns2.example.com named[2024]: zone example.com/IN: transferred serial 2022070101 Jul 06 15:08:51 ns2.example.com named[2024]: transfer of 'example.com/IN' from 192.0.2.1#53: Transfer status: success Jul 06 15:08:51 ns2.example.com named[2024]: transfer of 'example.com/IN' from 192.0.2.1#53: Transfer completed: 1 messages, 29 records, 2002 bytes, 0.003 secs (667333 bytes/sec)
If you run BIND in a change-root environment, use the
journalctl -u named-chroot
command to display the journal entries.Verify that BIND created the zone file:
# ls -l /var/named/slaves/ total 4 -rw-r--r--. 1 named named 2736 Jul 6 15:08 example.com.zone
Note that, by default, secondary servers store zone files in a binary raw format.
Query a record of the transferred zone from the secondary server:
# dig +short @192.0.2.2 AAAA www.example.com 2001:db8:1::30
This example assumes that the secondary server you set up in this procedure listens on IP address
192.0.2.2
.
4.8. Configuring response policy zones in BIND to override DNS records
Using DNS blocking and filtering, administrators can rewrite a DNS response to block access to certain domains or hosts. In BIND, response policy zones (RPZs) provide this feature. You can configure different actions for blocked entries, such as returning an NXDOMAIN
error or not responding to the query.
If you have multiple DNS servers in your environment, use this procedure to configure the RPZ on the primary server, and later configure zone transfers to make the RPZ available on your secondary servers.
Prerequisites
- BIND is already configured, for example, as a caching name server.
-
The
named
ornamed-chroot
service is running.
Procedure
Edit the
/etc/named.conf
file, and make the following changes:Add a
response-policy
definition to theoptions
statement:options { ... response-policy { zone "rpz.local"; }; ... }
You can set a custom name for the RPZ in the
zone
statement inresponse-policy
. However, you must use the same name in the zone definition in the next step.Add a
zone
definition for the RPZ you set in the previous step:zone "rpz.local" { type master; file "rpz.local"; allow-query { localhost; 192.0.2.0/24; 2001:db8:1::/64; }; allow-transfer { none; }; };
These settings state:
-
This server is the primary server (
type master
) for the RPZ namedrpz.local
. -
The
/var/named/rpz.local
file is the zone file. If you set a relative path, as in this example, this path is relative to the directory you set indirectory
in theoptions
statement. -
Any hosts defined in
allow-query
can query this RPZ. Alternatively, specify IP ranges or BIND access control list (ACL) nicknames to limit the access. - No host can transfer the zone. Allow zone transfers only when you set up secondary servers and only for the IP addresses of the secondary servers.
-
This server is the primary server (
Verify the syntax of the
/etc/named.conf
file:# named-checkconf
If the command displays no output, the syntax is correct.
Create the
/var/named/rpz.local
file, for example, with the following content:$TTL 10m @ IN SOA ns1.example.com. hostmaster.example.com. ( 2022070601 ; serial number 1h ; refresh period 1m ; retry period 3d ; expire time 1m ) ; minimum TTL IN NS ns1.example.com. example.org IN CNAME . *.example.org IN CNAME . example.net IN CNAME rpz-drop. *.example.net IN CNAME rpz-drop.
This zone file:
-
Sets the default time-to-live (TTL) value for resource records to 10 minutes. Without a time suffix, such as
h
for hour, BIND interprets the value as seconds. - Contains the required start of authority (SOA) resource record with details about the zone.
-
Sets
ns1.example.com
as an authoritative DNS server for this zone. To be functional, a zone requires at least one name server (NS
) record. However, to be compliant with RFC 1912, at least two name servers are required. -
Returns an
NXDOMAIN
error for queries toexample.org
and hosts in this domain. -
Drops queries to
example.net
and hosts in this domain.
For a full list of actions and examples, see IETF draft: DNS Response Policy Zones (RPZ).
-
Sets the default time-to-live (TTL) value for resource records to 10 minutes. Without a time suffix, such as
Verify the syntax of the
/var/named/rpz.local
file:# named-checkzone rpz.local /var/named/rpz.local zone rpz.local/IN: loaded serial 2022070601 OK
Reload BIND:
# systemctl reload named
If you run BIND in a change-root environment, use the
systemctl reload named-chroot
command to reload the service.
Verification
Attempt to resolve a host in
example.org
, which is configured in the RPZ to return an NXDOMAIN
error:# dig @localhost www.example.org ... ;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 30286 ...
This example assumes that BIND runs on the same host and responds to queries on the
localhost
interface.Attempt to resolve a host in the
example.net
domain, which is configured in the RPZ to drop queries:# dig @localhost www.example.net ... ;; connection timed out; no servers could be reached ...
4.9. BIND migration from RHEL 7 to RHEL 8
To migrate BIND
from RHEL 7 to RHEL 8, adjust the BIND configuration in the following ways:
-
Remove the
dnssec-lookaside auto
configuration option. -
BIND
will listen on any configured IPv6 addresses by default because the default value for thelisten-on-v6
configuration option has been changed from none
to any
. -
Multiple zones cannot share the same zone file when updates to the zone are allowed. If you need to use the same file in multiple zone definitions, ensure that allow-update is set to none, that you do not use a non-empty update-policy, and that you do not enable inline-signing. Otherwise, use the in-view clause to share the zone, as shown in the sketch after this list.
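The following minimal sketch shows how the in-view clause can reference a zone from another view instead of reusing its zone file. The view names and the internal-networks ACL are illustrative assumptions:
view "internal" {
    match-clients { internal-networks; };
    zone "example.com" {
        type master;
        file "example.com.zone";
    };
};

view "external" {
    match-clients { any; };
    zone "example.com" {
        in-view "internal";
    };
};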
Updated command-line options, default behavior, and output formats:
-
The number of UDP listeners employed per interface has been changed to be a function of the number of processors. You can override it by using the
-U
argument toBIND
. -
The XML format used in the
statistics-channel
has been changed. -
The
rndc flushtree
option now flushesDNSSEC
validation failures as well as specific name records. -
You must use the
/etc/named.root.key
file instead of the/etc/named.iscdlv.key
file. The/etc/named.iscdlv.key
file is not available anymore. - The querylog format has been changed to include a memory address of the client object. It can be helpful in debugging.
-
The
named
anddig
utilities now send a DNS COOKIE
(RFC 7873) by default, which might not work with restrictive firewalls or intrusion detection systems. You can change this behavior by using the send-cookie
configuration option. -
The
dig
utility can display theExtended DNS Errors
(EDE, RFC 8914) in a text format.
4.10. Recording DNS queries by using dnstap
As a network administrator, you can record Domain Name System (DNS) details to analyze DNS traffic patterns, monitor DNS server performance, and troubleshoot DNS issues. If you want an advanced way to monitor and log details of incoming name queries, use the dnstap
interface, which records messages sent by the named
service. You can capture and record DNS queries to collect information about websites or IP addresses.
Prerequisites
-
The
bind-9.11.26-2
package or a later version is installed.
If you already have a BIND
version installed and running, adding a new version of BIND
will overwrite the existing version.
Procedure
Enable
dnstap
and the target file by editing the/etc/named.conf
file in theoptions
block:options { # ... dnstap { all; }; # Configure filter dnstap-output file "/var/named/data/dnstap.bin"; # ... }; # end of options
To specify which types of DNS traffic you want to log, add
dnstap
filters to thednstap
block in the/etc/named.conf
file. You can use the following filters:-
auth
- Authoritative zone response or answer. -
client
- Internal client query or answer. -
forwarder
- Forwarded query or response from it. -
resolver
- Iterative resolution query or response. -
update
- Dynamic zone update requests. -
all
- Any from the above options. query
orresponse
- If you do not specify aquery
or aresponse
keyword,dnstap
records both.NoteThe
dnstap
filter contains multiple definitions delimited by a;
in thednstap {}
block with the following syntax:dnstap { ( all | auth | client | forwarder | resolver | update ) [ ( query | response ) ]; … };
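For example, to record the responses that named sends to clients together with all authoritative traffic, you might use the following filters. This is only a sketch; adjust it to the traffic you want to capture:
dnstap { client response; auth; };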
-
To apply your changes, restart the
named
service:# systemctl restart named.service
Configure a periodic rollout for active logs
In the following example, the
cron
scheduler runs the content of the user-edited script once a day. Theroll
option with the value3
specifies thatdnstap
can create up to three backup log files. The value3
overrides theversion
parameter of thednstap-output
variable, and limits the number of backup log files to three. Additionally, the binary log file is moved to another directory and renamed, and it never reaches the.2
suffix, even if three backup log files already exist. You can skip this step if automatic rolling of binary logs based on size limit is sufficient.Example: sudoedit /etc/cron.daily/dnstap #!/bin/sh rndc dnstap -roll 3 mv /var/named/data/dnstap.bin.1 /var/log/named/dnstap/dnstap-$(date -I).bin # use dnstap-read to analyze saved logs sudo chmod a+x /etc/cron.daily/dnstap
Handle and analyze logs in a human-readable format by using the
dnstap-read
utility:In the following example, the
dnstap-read
utility prints the output in theYAML
file format.Example: dnstap-read -y [file-name]
Chapter 5. Deploying an NFS server
By using the Network File System (NFS) protocol, remote users can mount shared directories over a network and use them as if they were mounted locally. This enables you to consolidate resources onto centralized servers on the network.
5.1. Key features of minor NFSv4 versions
Each minor NFSv4 version brings enhancements aimed at improving performance and security. Use these improvements to utilize the full potential of NFSv4, ensuring efficient and reliable file sharing across networks.
Key features of NFSv4.2
- Server-side copy
- Server-side copy is a capability of the NFS server to copy files on the server without transferring the data back and forth over the network.
- Sparse files
- Enables files to have one or more empty spaces, or gaps, which are unallocated or uninitialized data blocks consisting only of zeros. This enables applications to map out the location of holes in the sparse file.
- Space reservation
- Clients can reserve or allocate space on the storage server before writing data. This prevents the server from running out of space.
- Labeled NFS
- Enforces data access rights and enables SELinux labels between a client and a server for individual files on an NFS file system.
- Layout enhancements
- Provides functionality to enable Parallel NFS (pNFS) servers to collect better performance statistics.
Key features of NFSv4.1
- Client-side support for pNFS
- Support for high-speed I/O to clustered servers enables you to store data on multiple machines, provides direct access to data, and synchronizes updates to metadata.
- Sessions
- Sessions maintain the server’s state relative to the connections belonging to a client. These sessions provide improved performance and efficiency by reducing the overhead associated with establishing and terminating connections for each Remote Procedure Call (RPC) operation.
Key features of NFSv4.0
- RPC and security
-
The
RPCSEC_GSS
framework enhances RPC security. The NFSv4 protocol introduces a new operation for in-band security negotiation. This enables clients to query server policies for accessing file system resources securely. - Procedure and operation structure
-
NFS 4.0 introduces the
COMPOUND
procedure, which enables clients to merge multiple operations into a single request to reduce RPCs. - File system model
NFS 4.0 retains the hierarchical file system model, treating files as byte streams and encoding names with UTF-8 for internationalization.
File handle types
With volatile file handles, servers can adjust to file system changes and enable clients to adapt as needed without requiring permanent file handles.
Attribute types
The file attribute structure includes required, recommended, and named attributes, each serving distinct purposes. Required attributes, derived from NFSv3, are essential for distinguishing file types, while recommended attributes, such as ACLs, provide enhanced access control.
Multi-server namespace
Namespaces span multiple servers, simplify file system transfers based on attributes, and support referrals, redundancy, and seamless server migration.
- OPEN and CLOSE operations
- These operations can combine file lookup, creation, and semantic sharing at a single point and make the file access management more efficient.
- File locking
- File locking is part of the protocol, eliminating the need for RPC callbacks. File lock state is managed by the server under a lease-based model, where failure to renew the lease may result in state release by the server.
- Client caching and delegation
- Caching resembles previous versions, with client-determined timeouts for attribute and directory caching. Delegations in NFS 4.0 allow the server to assign certain responsibilities to the client, guaranteeing specific file sharing semantics and enabling local file operations without immediate server interaction.
5.2. The AUTH_SYS authentication method
The AUTH_SYS
method, which is also known as AUTH_UNIX
, is a client authentication mechanism. With AUTH_SYS
, the client sends the User ID (UID) and Group ID (GID) of the user to the server to verify its identity and permissions when accessing files. It is considered less secure as it relies on the client-provided information, making it susceptible to unauthorized access if misconfigured.
Mapping mechanisms ensure that NFS clients can access files with the appropriate permissions on the server, even if the UID and GID assignments differ between systems. UIDs and GIDs are mapped between NFS client and server by the following mechanisms:
- Direct mapping
UIDs and GIDs are directly mapped by NFS servers and clients between local and remote systems. This requires consistent UID and GID assignments across all systems participating in NFS file sharing. For example, a user with UID 1000 on a client can only access the files on a share that a user with UID 1000 on the server has access to.
For a simplified ID management in an NFS environment, administrators often rely on centralized services, such as LDAP or Network Information Service (NIS) to manage UID and GID mappings across multiple systems.
- User and Group ID mapping
-
NFS servers and clients can use the
idmapd
service to translate UIDs and GIDs between different systems for consistent identification and permission assignment.
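For the mapping to work consistently, the NFS server and clients typically have to agree on the NFSv4 domain in the /etc/idmapd.conf file. A minimal sketch, assuming the example.com domain:
[General]
Domain = example.com
After changing the file, restart the nfs-idmapd service on the server:
# systemctl restart nfs-idmapd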
5.3. The AUTH_GSS authentication method
Kerberos is a network authentication protocol that allows secure authentication for clients and servers over a non-secure network. It uses symmetric key cryptography and requires a trusted Key Distribution Center (KDC) to authenticate users and services.
Unlike AUTH_SYS
, with the RPCSEC_GSS
Kerberos mechanism, the server does not depend on the client to correctly represent which user is accessing the file. Instead, cryptography is used to authenticate users to the server, which prevents a malicious client from impersonating a user without having that user’s Kerberos credentials.
In the /etc/exports
file, the sec
option defines one or multiple methods of Kerberos security that the share should provide, and clients can mount the share with one of these methods. The sec
option supports the following values:
-
sys
: no cryptographic protection (default) -
krb5
: authentication only -
krb5i
: authentication and integrity protection -
krb5p
: authentication, integrity checking, and traffic encryption
Note that the more cryptographic functionality a method provides, the lower the performance.
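For example, a hypothetical /etc/exports entry that offers authentication and integrity protection to a placeholder network could look as follows. Clients then negotiate one of the offered methods when mounting the share:
/nfs/projects 192.0.2.0/24(rw,sec=krb5i)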
5.4. File permissions on exported file systems
File permissions on exported file systems determine access rights to files and directories for clients accessing them over NFS.
Once the NFS file system is mounted by a remote host, the only protection each shared file has is its file system permissions. If two users that share the same User ID (UID) value mount the same NFS file system on different client systems, they can modify each other’s files.
NFS treats the root
user on the client as equivalent to the root
user on the server. However, by default, the NFS server maps root
to the nobody
account when accessing an NFS share. The root_squash
option controls this behavior.
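If a specific trusted client requires full root access, you can disable this mapping for that client by using the no_root_squash option. This is a sketch with placeholder values; disabling root squashing weakens security, so use it only where required:
/nfs/projects client.example.com(rw,no_root_squash)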
Additional resources
-
exports(5)
man page on your system
5.5. Services required on an NFS server
Red Hat Enterprise Linux (RHEL) uses a combination of a kernel module and user-space processes to provide NFS file shares:
Service name | NFS versions | Description |
---|---|---|
nfsd | 3, 4 | The NFS kernel module that services requests for shared NFS file systems. |
rpcbind | 3 | This service accepts port reservations from local remote procedure call (RPC) services and makes them available (advertises them) so that the corresponding remote RPC services can access them. The rpcbind service is not used with NFSv4. |
rpc.mountd | 3, 4 | This service processes MOUNT requests from NFSv3 clients. It checks that the requested NFS share is currently exported by the NFS server and that the client is allowed to access it. |
rpc.nfsd | 3, 4 | This process advertises the explicit NFS versions and protocols the server defines. It works with the kernel to meet the dynamic demands of NFS clients, such as providing server threads each time an NFS client connects. The rpc.nfsd process corresponds to the nfs-server service. |
lockd | 3 | This kernel module implements the Network Lock Manager (NLM) protocol, which enables clients to lock files on the server. RHEL loads the module automatically when the NFS server runs. |
rpc.rquotad | 3, 4 | This service provides user quota information for remote users. |
rpc.idmapd | 4 | This process provides NFSv4 client and server upcalls, which map between NFSv4 names (strings in the form of `user@domain`) and local user and group IDs. |
gssproxy | 3, 4 | This service handles GSS-API (Kerberos) operations on behalf of kernel RPC services. |
nfsdcld | 4 | This service provides an NFSv4 client tracking daemon that prevents the server from granting lock reclaims when other clients have taken conflicting locks during a network partition combined with a server reboot. |
rpc.statd | 3 | This service provides notification to other NFSv3 clients when the local host reboots, and to the kernel when a remote NFSv3 host reboots. |
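As an informal check, you can list the RPC services that are currently registered with the local rpcbind service:
# rpcinfo -p
Note that this works only while rpcbind is running; on an NFSv4-only server where rpcbind is masked, the command is not available.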
Additional resources
-
rpcbind(8)
,rpc.mountd(8)
,rpc.nfsd(8)
,rpc.statd(8)
,rpc.rquotad(8)
,rpc.idmapd(8)
,gssproxy(8)
,nfsdcld(8)
,rpc.statd(8)
man pages on your system
5.6. The /etc/exports configuration file
The /etc/exports
file controls which directories the server exports. Each line contains an export point, a whitespace-separated list of clients that are allowed to mount the directory, and options for each of the clients:
<directory> <host_or_network_1>(<options_1>) <host_or_network_n>(<options_n>)...
The following are the individual parts of an /etc/exports
entry:
- <export>
- The directory that is being exported.
- <host_or_network>
- The host or network to which the export is being shared. For example, you can specify a hostname, an IP address, or an IP network.
- <options>
- The options for the host or network.
Adding a space between a client and its options changes the behavior. For example, the following lines do not have the same meaning:
/projects client.example.com(rw) /projects client.example.com (rw)
In the first line, the server allows only client.example.com
to mount the /projects
directory in read-write mode, and no other hosts can mount the share. However, due to the space between client.example.com
and (rw)
in the second line, the server exports the directory to client.example.com
in read-only mode (default setting), but all other hosts can mount the share in read-write mode.
The NFS server uses the following default settings for each exported directory:
Default setting | Description |
---|---|
ro | Exports the directory in read-only mode. |
sync | The NFS server does not reply to requests before changes made by previous requests are written to disk. |
wdelay | The server delays writing to the disk if it suspects another write request is pending. |
root_squash | Prevents the root user on a client from having root privileges on the server. Instead, the NFS server maps the root user to the nobody account. |
5.7. Configuring an NFSv4-only server
If you do not have any NFSv3 clients in your network, you can configure the NFS server to support only NFSv4 or specific minor protocol versions of it. Using only NFSv4 on the server reduces the number of ports that are open to the network.
Procedure
Install the
nfs-utils
package:# dnf install nfs-utils
Edit the
/etc/nfs.conf
file, and make the following changes:Disable the
vers3
parameter in the[nfsd]
section to disable NFSv3:[nfsd] vers3=n
Optional: If you require only specific NFSv4 minor versions, uncomment all
vers4.<minor_version>
parameters and set them accordingly, for example:[nfsd] vers3=n # vers4=y vers4.0=n vers4.1=n vers4.2=y
With this configuration, the server provides only NFS version 4.2.
ImportantIf you require only a specific NFSv4 minor version, set only the parameters for the minor versions. Do not uncomment the
vers4
parameter to avoid an unpredictable activation or deactivation of minor versions. By default, thevers4
parameter enables or disables all NFSv4 minor versions. However, this behavior changes if you setvers4
in conjunction with othervers
parameters.
Disable all NFSv3-related services:
# systemctl mask --now rpc-statd.service rpcbind.service rpcbind.socket
Configure the
rpc.mountd
daemon to not listen for NFSv3 mount requests. Create a/etc/systemd/system/nfs-mountd.service.d/v4only.conf
file with the following content:[Service] ExecStart= ExecStart=/usr/sbin/rpc.mountd --no-tcp --no-udp
Reload the
systemd
manager configuration and restart thenfs-mountd
service:# systemctl daemon-reload # systemctl restart nfs-mountd
Optional: Create a directory that you want to share, for example:
# mkdir -p /nfs/projects/
If you want to share an existing directory, skip this step.
Set the permissions you require on the
/nfs/projects/
directory:# chmod 2770 /nfs/projects/ # chgrp users /nfs/projects/
These commands set write permissions for the
users
group on the/nfs/projects/
directory and ensure that the same group is automatically set on new entries created in this directory.Add an export point to the
/etc/exports
file for each directory that you want to share:/nfs/projects/ 192.0.2.0/24(rw) 2001:db8::/32(rw)
This entry shares the
/nfs/projects/
directory to be accessible with read and write access to clients in the192.0.2.0/24
and2001:db8::/32
subnets.Open the relevant ports in
firewalld
:# firewall-cmd --permanent --add-service nfs # firewall-cmd --reload
Enable and start the NFS server:
# systemctl enable --now nfs-server
Verification
On the server, verify that the server provides only the NFS versions that you have configured:
# cat /proc/fs/nfsd/versions -3 +4 -4.0 -4.1 +4.2
On a client, perform the following steps:
Install the
nfs-utils
package:# dnf install nfs-utils
Mount an exported NFS share:
# mount server.example.com:/nfs/projects/ /mnt/
As a user which is a member of the
users
group, create a file in/mnt/
:# touch /mnt/file
List the directory to verify that the file was created:
# ls -l /mnt/ total 0 -rw-r--r--. 1 demo users 0 Jan 16 14:18 file
5.8. Configuring an NFSv3 server with optional NFSv4 support
In a network which still uses NFSv3 clients, configure the server to provide shares by using the NFSv3 protocol. If you also have newer clients in your network, you can, additionally, enable NFSv4. By default, Red Hat Enterprise Linux NFS clients use the latest NFS version that the server provides.
Procedure
Install the
nfs-utils
package:# dnf install nfs-utils
Optional: By default, NFSv3 and NFSv4 are enabled. If you do not require NFSv4, or require only specific minor versions, edit the /etc/nfs.conf file and uncomment all
vers4.<minor_version>
parameters and set them accordingly:[nfsd] # vers3=y # vers4=y vers4.0=n vers4.1=n vers4.2=y
With this configuration, the server provides only NFS versions 3 and 4.2.
ImportantIf you require only a specific NFSv4 minor version, set only the parameters for the minor versions. Do not uncomment the
vers4
parameter to avoid an unpredictable activation or deactivation of minor versions. By default, thevers4
parameter enables or disables all NFSv4 minor versions. However, this behavior changes if you setvers4
in conjunction with othervers
parameters.By default, NFSv3 RPC services use random ports. To enable a firewall configuration, configure fixed port numbers in the
/etc/nfs.conf
file:In the
[lockd]
section, set a fixed port number for thenlockmgr
RPC service, for example:[lockd] port=5555
With this setting, the service automatically uses this port number for both the UDP and TCP protocol.
In the
[statd]
section, set a fixed port number for therpc.statd
service, for example:[statd] port=6666
With this setting, the service automatically uses this port number for both the UDP and TCP protocol.
Optional: Create a directory that you want to share, for example:
# mkdir -p /nfs/projects/
If you want to share an existing directory, skip this step.
Set the permissions you require on the
/nfs/projects/
directory:# chmod 2770 /nfs/projects/ # chgrp users /nfs/projects/
These commands set write permissions for the
users
group on the/nfs/projects/
directory and ensure that the same group is automatically set on new entries created in this directory.Add an export point to the
/etc/exports
file for each directory that you want to share:/nfs/projects/ 192.0.2.0/24(rw) 2001:db8::/32(rw)
This entry shares the
/nfs/projects/
directory to be accessible with read and write access to clients in the192.0.2.0/24
and2001:db8::/32
subnets.Open the relevant ports in
firewalld
:# firewall-cmd --permanent --add-service={nfs,rpc-bind,mountd} # firewall-cmd --permanent --add-port={5555/tcp,5555/udp,6666/tcp,6666/udp} # firewall-cmd --reload
Enable and start the NFS server:
# systemctl enable --now rpc-statd nfs-server
Verification
On the server, verify that the server provides only the NFS versions that you have configured:
# cat /proc/fs/nfsd/versions +3 +4 -4.0 -4.1 +4.2
On a client, perform the following steps:
Install the
nfs-utils
package:# dnf install nfs-utils
Mount an exported NFS share:
# mount -o vers=<version> server.example.com:/nfs/projects/ /mnt/
Verify that the share was mounted with the specified NFS version:
# mount | grep "/mnt" server.example.com:/nfs/projects/ on /mnt type nfs (rw,relatime,vers=3,...
As a user which is a member of the
users
group, create a file in/mnt/
:# touch /mnt/file
List the directory to verify that the file was created:
# ls -l /mnt/ total 0 -rw-r--r--. 1 demo users 0 Jan 16 14:18 file
5.9. Enabling quota support on an NFS server
If you want to restrict the amount of data a user or a group can store, you can configure quotas on the file system. On an NFS server, the rpc-rquotad
service ensures that the quota is also applied to users on NFS clients.
Procedure
Verify that quotas are enabled on the directories that you export:
For an ext file system, enter:
# quotaon -p /nfs/projects/ group quota on /nfs/projects (/dev/sdb1) is on user quota on /nfs/projects (/dev/sdb1) is on project quota on /nfs/projects (/dev/sdb1) is off
For an XFS file system, enter:
# findmnt /nfs/projects TARGET SOURCE FSTYPE OPTIONS /nfs/projects /dev/sdb1 xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,usrquota,grpquota
Install the
quota-rpc
package:# dnf install quota-rpc
Optional: By default, the quota RPC service runs on port 875. If you want to run the service on a different port, append
-p <port_number>
to theRPCRQUOTADOPTS
variable in the/etc/sysconfig/rpc-rquotad
file:RPCRQUOTADOPTS="-p <port_number>"
Optional: By default, remote hosts can only read quotas. To allow clients to set quotas, append the
-S
option to theRPCRQUOTADOPTS
variable in the/etc/sysconfig/rpc-rquotad
file:RPCRQUOTADOPTS="-S"
Open the port in
firewalld
:# firewall-cmd --permanent --add-port=875/udp # firewall-cmd --reload
Enable and start the
rpc-rquotad
service:# systemctl enable --now rpc-rquotad
Verification
On the client:
Mount the exported share:
# mount server.example.com:/nfs/projects/ /mnt/
Display the quota. The command depends on the file system of the exported directory. For example:
To display the quota of a specific user on all mounted ext file systems, enter:
# quota -u <user_name> Disk quotas for user demo (uid 1000): Filesystem space quota limit grace files quota limit grace server.example.com:/nfs/projects 0K 100M 200M 0 0 0
To display the user and group quota on an XFS file system, enter:
# xfs_quota -x -c "report -h" /mnt/ User quota on /nfs/projects (/dev/vdb1) Blocks User ID Used Soft Hard Warn/Grace ---------- --------------------------------- root 0 0 0 00 [------] demo 0 100M 200M 00 [------]
Additional resources
-
quota(1)
andxfs_quota(8)
man pages on your system
5.10. Enabling NFS over RDMA on an NFS server
Remote Direct Memory Access (RDMA) is a protocol that enables a client system to directly transfer data from the memory of a storage server into its own memory. This enhances storage throughput, decreases latency in data transfer between the server and client, and reduces CPU load on both ends. If both the NFS server and clients are connected over RDMA, clients can use NFSoRDMA to mount an exported directory.
Prerequisites
- The NFS service is running and configured.
- An InfiniBand or RDMA over Converged Ethernet (RoCE) device is installed on the server.
- IP over InfiniBand (IPoIB) is configured on the server, and the InfiniBand device has an IP address assigned.
Procedure
Install the
rdma-core
package:# dnf install rdma-core
If the package was already installed, verify that the
xprtrdma
andsvcrdma
modules in the/etc/rdma/modules/rdma.conf
file are uncommented:# NFS over RDMA client support xprtrdma # NFS over RDMA server support svcrdma
Optional: By default, NFS over RDMA uses port 20049. If you want to use a different port, set the
rdma-port
setting in the[nfsd]
section of the/etc/nfs.conf
file:rdma-port=<port>
Open the NFSoRDMA port in
firewalld
:# firewall-cmd --permanent --add-port={20049/tcp,20049/udp} # firewall-cmd --reload
Adjust the port numbers if you set a different port than 20049.
Restart the
nfs-server
service:# systemctl restart nfs-server
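Optionally, you can check on the server that the NFS service registered the RDMA transport. A sketch, assuming the default port; the exact output depends on your configuration:
# cat /proc/fs/nfsd/portlist
rdma 20049
tcp 2049
...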
Verification
On a client with InfiniBand hardware, perform the following steps:
Install the following packages:
# dnf install nfs-utils rdma-core
Mount an exported NFS share over RDMA:
# mount -o rdma server.example.com:/nfs/projects/ /mnt/
If you set a port number other than the default (20049), pass
port=<port_number>
to the command:# mount -o rdma,port=<port_number> server.example.com:/nfs/projects/ /mnt/
Verify that the share was mounted with the
rdma
option:# mount | grep "/mnt" server.example.com:/nfs/projects/ on /mnt type nfs (...,proto=rdma,...)
Additional resources
5.11. Setting up an NFS server with Kerberos in a Red Hat Identity Management domain
If you use Red Hat Identity Management (IdM), you can join your NFS server to the IdM domain. This enables you to centrally manage users and groups and to use Kerberos for authentication, integrity protection, and traffic encryption.
Prerequisites
- The NFS server is enrolled in a Red Hat Identity Management (IdM) domain.
- The NFS server is running and configured.
Procedure
Obtain a Kerberos ticket as an IdM administrator:
# kinit admin
Create a
nfs/<FQDN>
service principal:# ipa service-add nfs/nfs_server.idm.example.com
Retrieve the
nfs
service principal from IdM, and store it in the/etc/krb5.keytab
file:# ipa-getkeytab -s idm_server.idm.example.com -p nfs/nfs_server.idm.example.com -k /etc/krb5.keytab
Optional: Display the principals in the
/etc/krb5.keytab
file:# klist -k /etc/krb5.keytab Keytab name: FILE:/etc/krb5.keytab KVNO Principal ---- -------------------------------------------------------------------------- 1 nfs/nfs_server.idm.example.com@IDM.EXAMPLE.COM 1 nfs/nfs_server.idm.example.com@IDM.EXAMPLE.COM 1 nfs/nfs_server.idm.example.com@IDM.EXAMPLE.COM 1 nfs/nfs_server.idm.example.com@IDM.EXAMPLE.COM 7 host/nfs_server.idm.example.com@IDM.EXAMPLE.COM 7 host/nfs_server.idm.example.com@IDM.EXAMPLE.COM 7 host/nfs_server.idm.example.com@IDM.EXAMPLE.COM 7 host/nfs_server.idm.example.com@IDM.EXAMPLE.COM
By default, the IdM client adds the host principal to the
/etc/krb5.keytab
file when you join the host to the IdM domain. If the host principal is missing, use theipa-getkeytab -s idm_server.idm.example.com -p host/nfs_server.idm.example.com -k /etc/krb5.keytab
command to add it. Use the
ipa-client-automount
utility to configure mapping of IdM IDs:# ipa-client-automount Searching for IPA server... IPA server: DNS discovery Location: default Continue to configure the system with these values? [no]: yes Configured /etc/idmapd.conf Restarting sssd, waiting for it to become available. Started autofs
Update your
/etc/exports
file, and add the Kerberos security method to the client options. For example:/nfs/projects/ 192.0.2.0/24(rw,sec=krb5i)
If you want your clients to be able to select from multiple security methods, specify them separated by colons:
/nfs/projects/ 192.0.2.0/24(rw,sec=krb5:krb5i:krb5p)
Reload the exported file systems:
# exportfs -r
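As a sketch of a client-side check, assuming the client is also enrolled in the IdM domain, you can mount the share with one of the Kerberos security methods configured above:
# mount -o sec=krb5i server.example.com:/nfs/projects/ /mnt/
Users then need a valid Kerberos ticket, for example obtained with kinit, to access files on the mounted share.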
Chapter 6. Configuring the Squid caching proxy server
Squid is a proxy server that caches content to reduce bandwidth and load web pages more quickly. This chapter describes how to set up Squid as a proxy for the HTTP, HTTPS, and FTP protocols, as well as how to configure authentication and restrict access.
6.1. Setting up Squid as a caching proxy without authentication
You can configure Squid as a caching proxy without authentication. The procedure limits access to the proxy based on IP ranges.
Prerequisites
-
The procedure assumes that the
/etc/squid/squid.conf
file is as provided by thesquid
package. If you edited this file before, remove the file and reinstall the package.
Procedure
Install the
squid
package:# yum install squid
Edit the
/etc/squid/squid.conf
file:Adapt the
localnet
access control lists (ACL) to match the IP ranges that should be allowed to use the proxy:acl localnet src 192.0.2.0/24 acl localnet src 2001:db8:1::/64
By default, the
/etc/squid/squid.conf
file contains thehttp_access allow localnet
rule that allows using the proxy from all IP ranges specified inlocalnet
ACLs. Note that you must specify alllocalnet
ACLs before thehttp_access allow localnet
rule.ImportantRemove all existing
acl localnet
entries that do not match your environment.The following ACL exists in the default configuration and defines
443
as a port that uses the HTTPS protocol:acl SSL_ports port 443
If users should be able to use the HTTPS protocol on other ports as well, add an ACL for each of these ports:
acl SSL_ports port port_number
Update the list of
acl Safe_ports
rules to configure the ports to which Squid can establish a connection. For example, to allow clients using the proxy to access resources only on ports 21 (FTP), 80 (HTTP), and 443 (HTTPS), keep only the following acl Safe_ports
statements in the configuration:acl Safe_ports port 21 acl Safe_ports port 80 acl Safe_ports port 443
By default, the configuration contains the
http_access deny !Safe_ports
rule that denies access to ports that are not defined in the Safe_ports
ACLs. Configure the cache type, the path to the cache directory, the cache size, and further cache type-specific settings in the
cache_dir
parameter:cache_dir ufs /var/spool/squid 10000 16 256
With these settings:
-
Squid uses the
ufs
cache type. -
Squid stores its cache in the
/var/spool/squid/
directory. -
The cache grows up to
10000
MB. -
Squid creates
16
level-1 sub-directories in the/var/spool/squid/
directory. Squid creates
256
sub-directories in each level-1 directory.If you do not set a
cache_dir
directive, Squid stores the cache in memory.
If you set a different cache directory than
/var/spool/squid/
in thecache_dir
parameter:Create the cache directory:
# mkdir -p path_to_cache_directory
Configure the permissions for the cache directory:
# chown squid:squid path_to_cache_directory
If you run SELinux in
enforcing
mode, set thesquid_cache_t
context for the cache directory:# semanage fcontext -a -t squid_cache_t "path_to_cache_directory(/.*)?" # restorecon -Rv path_to_cache_directory
If the
semanage
utility is not available on your system, install thepolicycoreutils-python-utils
package.
Open the
3128
port in the firewall:# firewall-cmd --permanent --add-port=3128/tcp # firewall-cmd --reload
Enable and start the
squid
service:# systemctl enable --now squid
Verification
To verify that the proxy works correctly, download a web page using the curl
utility:
# curl -O -L "https://www.redhat.com/index.html" -x "proxy.example.com:3128"
If curl
does not display any error and the index.html
file was downloaded to the current directory, the proxy works.
6.2. Setting up Squid as a caching proxy with LDAP authentication
You can configure Squid as a caching proxy that uses LDAP to authenticate users. The procedure configures the proxy so that only authenticated users can use it.
Prerequisites
-
The procedure assumes that the
/etc/squid/squid.conf
file is as provided by thesquid
package. If you edited this file before, remove the file and reinstall the package. -
A service user, such as
uid=proxy_user,cn=users,cn=accounts,dc=example,dc=com
exists in the LDAP directory. Squid uses this account only to search for the authenticating user. If the authenticating user exists, Squid binds as this user to the directory to verify the authentication.
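To confirm that the service user exists before you configure Squid, you can, for example, query the directory with the OpenLDAP client tools. This is only a sketch; adjust the server URL and base DN to your environment, and bind with -D and -W if your directory does not allow anonymous searches:
# ldapsearch -x -H ldap://ldap_server.example.com -b "cn=users,cn=accounts,dc=example,dc=com" "(uid=proxy_user)" dn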
Procedure
Install the
squid
package:# yum install squid
Edit the
/etc/squid/squid.conf
file:To configure the
basic_ldap_auth
helper utility, add the following configuration entry to the top of/etc/squid/squid.conf
:auth_param basic program /usr/lib64/squid/basic_ldap_auth -b "cn=users,cn=accounts,dc=example,dc=com" -D "uid=proxy_user,cn=users,cn=accounts,dc=example,dc=com" -W /etc/squid/ldap_password -f "(&(objectClass=person)(uid=%s))" -ZZ -H ldap://ldap_server.example.com:389
The following describes the parameters passed to the
basic_ldap_auth
helper utility in the example above:-
-b base_DN
sets the LDAP search base. -
-D proxy_service_user_DN
sets the distinguished name (DN) of the account Squid uses to search for the authenticating user in the directory. -
-W path_to_password_file
sets the path to the file that contains the password of the proxy service user. Using a password file prevents the password from being visible in the operating system’s process list. -f LDAP_filter
specifies the LDAP search filter. Squid replaces the%s
variable with the user name provided by the authenticating user.The
(&(objectClass=person)(uid=%s))
filter in the example defines that the user name must match the value set in theuid
attribute and that the directory entry contains theperson
object class.-ZZ
enforces a TLS-encrypted connection over the LDAP protocol using theSTARTTLS
command. Omit the-ZZ
in the following situations:- The LDAP server does not support encrypted connections.
- The port specified in the URL uses the LDAPS protocol.
- The -H LDAP_URL parameter specifies the protocol, the host name or IP address, and the port of the LDAP server in URL format.
-
Add the following ACL and rule to configure Squid to allow only authenticated users to use the proxy:
acl ldap-auth proxy_auth REQUIRED http_access allow ldap-auth
ImportantSpecify these settings before the
http_access deny all
rule. Remove the following rule to disable bypassing the proxy authentication from IP ranges specified in
localnet
ACLs:http_access allow localnet
The following ACL exists in the default configuration and defines
443
as a port that uses the HTTPS protocol:acl SSL_ports port 443
If users should be able to use the HTTPS protocol on other ports as well, add an ACL for each of these ports:
acl SSL_ports port port_number
Update the list of
acl Safe_ports
rules to configure the ports to which Squid can establish a connection. For example, to allow clients using the proxy to access resources only on ports 21 (FTP), 80 (HTTP), and 443 (HTTPS), keep only the following acl Safe_ports
statements in the configuration:acl Safe_ports port 21 acl Safe_ports port 80 acl Safe_ports port 443
By default, the configuration contains the
http_access deny !Safe_ports
rule that denies access to ports that are not defined in the Safe_ports ACLs.
Configure the cache type, the path to the cache directory, the cache size, and further cache type-specific settings in the
cache_dir
parameter:cache_dir ufs /var/spool/squid 10000 16 256
With these settings:
-
Squid uses the
ufs
cache type. -
Squid stores its cache in the
/var/spool/squid/
directory. -
The cache grows up to
10000
MB. -
Squid creates
16
level-1 sub-directories in the/var/spool/squid/
directory. Squid creates
256
sub-directories in each level-1 directory.If you do not set a
cache_dir
directive, Squid stores the cache in memory.
If you set a different cache directory than
/var/spool/squid/
in thecache_dir
parameter:Create the cache directory:
# mkdir -p path_to_cache_directory
Configure the permissions for the cache directory:
# chown squid:squid path_to_cache_directory
If you run SELinux in
enforcing
mode, set thesquid_cache_t
context for the cache directory:# semanage fcontext -a -t squid_cache_t "path_to_cache_directory(/.*)?" # restorecon -Rv path_to_cache_directory
If the
semanage
utility is not available on your system, install thepolicycoreutils-python-utils
package.
Store the password of the LDAP service user in the
/etc/squid/ldap_password
file, and set appropriate permissions for the file:# echo "password" > /etc/squid/ldap_password # chown root:squid /etc/squid/ldap_password # chmod 640 /etc/squid/ldap_password
Open the
3128
port in the firewall:# firewall-cmd --permanent --add-port=3128/tcp # firewall-cmd --reload
Enable and start the
squid
service:# systemctl enable --now squid
Verification
To verify that the proxy works correctly, download a web page using the curl
utility:
# curl -O -L "https://www.redhat.com/index.html" -x "user_name:password@proxy.example.com:3128"
If curl does not display any error and the index.html
file was downloaded to the current directory, the proxy works.
Troubleshooting steps
To verify that the helper utility works correctly:
Manually start the helper utility with the same settings you used in the
auth_param
parameter:# /usr/lib64/squid/basic_ldap_auth -b "cn=users,cn=accounts,dc=example,dc=com" -D "uid=proxy_user,cn=users,cn=accounts,dc=example,dc=com" -W /etc/squid/ldap_password -f "(&(objectClass=person)(uid=%s))" -ZZ -H ldap://ldap_server.example.com:389
Enter a valid user name and password, and press Enter:
user_name password
If the helper utility returns
OK
, authentication succeeded.
6.3. Setting up Squid as a caching proxy with Kerberos authentication
You can configure Squid as a caching proxy that authenticates users to an Active Directory (AD) using Kerberos. The procedure configures the proxy so that only authenticated users can use it.
Prerequisites
-
The procedure assumes that the
/etc/squid/squid.conf
file is as provided by thesquid
package. If you edited this file before, remove the file and reinstall the package.
Procedure
Install the following packages:
# yum install squid krb5-workstation
Authenticate as the AD domain administrator:
# kinit administrator@AD.EXAMPLE.COM
Create a keytab for Squid and store it in the
/etc/squid/HTTP.keytab
file:# export KRB5_KTNAME=FILE:/etc/squid/HTTP.keytab # net ads keytab CREATE -U administrator
Add the
HTTP
service principal to the keytab:# net ads keytab ADD HTTP -U administrator
Set the owner of the keytab file to the
squid
user:# chown squid /etc/squid/HTTP.keytab
Optional: Verify that the keytab file contains the
HTTP
service principal for the fully-qualified domain name (FQDN) of the proxy server:# klist -k /etc/squid/HTTP.keytab Keytab name: FILE:/etc/squid/HTTP.keytab KVNO Principal ---- --------------------------------------------------- ... 2 HTTP/proxy.ad.example.com@AD.EXAMPLE.COM ...
Edit the
/etc/squid/squid.conf
file:To configure the
negotiate_kerberos_auth
helper utility, add the following configuration entry to the top of/etc/squid/squid.conf
:auth_param negotiate program /usr/lib64/squid/negotiate_kerberos_auth -k /etc/squid/HTTP.keytab -s HTTP/proxy.ad.example.com@AD.EXAMPLE.COM
The following describes the parameters passed to the
negotiate_kerberos_auth
helper utility in the example above:-
-k file
sets the path to the keytab file. Note that the squid user must have read permissions on this file. -s HTTP/host_name@kerberos_realm
sets the Kerberos principal that Squid uses.Optionally, you can enable logging by passing one or both of the following parameters to the helper utility:
-
-i
logs informational messages, such as the authenticating user. -d
enables debug logging.Squid logs the debugging information from the helper utility to the
/var/log/squid/cache.log
file.
-
Add the following ACL and rule to configure Squid to allow only authenticated users to use the proxy:
acl kerb-auth proxy_auth REQUIRED http_access allow kerb-auth
ImportantSpecify these settings before the
http_access deny all
rule.Remove the following rule to disable bypassing the proxy authentication from IP ranges specified in
localnet
ACLs:http_access allow localnet
The following ACL exists in the default configuration and defines
443
as a port that uses the HTTPS protocol:acl SSL_ports port 443
If users should be able to use the HTTPS protocol on other ports as well, add an ACL for each of these ports:
acl SSL_ports port port_number
Update the list of
acl Safe_ports
rules to configure the ports to which Squid can establish a connection. For example, to allow clients using the proxy to access resources only on ports 21 (FTP), 80 (HTTP), and 443 (HTTPS), keep only the following acl Safe_ports
statements in the configuration:acl Safe_ports port 21 acl Safe_ports port 80 acl Safe_ports port 443
By default, the configuration contains the
http_access deny !Safe_ports
rule that denies access to ports that are not defined in the Safe_ports
ACLs. Configure the cache type, the path to the cache directory, the cache size, and further cache type-specific settings in the
cache_dir
parameter:cache_dir ufs /var/spool/squid 10000 16 256
With these settings:
-
Squid uses the
ufs
cache type. -
Squid stores its cache in the
/var/spool/squid/
directory. -
The cache grows up to
10000
MB. -
Squid creates
16
level-1 sub-directories in the/var/spool/squid/
directory. Squid creates
256
sub-directories in each level-1 directory.If you do not set a
cache_dir
directive, Squid stores the cache in memory.
If you set a different cache directory than
/var/spool/squid/
in thecache_dir
parameter:Create the cache directory:
# mkdir -p path_to_cache_directory
Configure the permissions for the cache directory:
# chown squid:squid path_to_cache_directory
If you run SELinux in
enforcing
mode, set thesquid_cache_t
context for the cache directory:# semanage fcontext -a -t squid_cache_t "path_to_cache_directory(/.*)?" # restorecon -Rv path_to_cache_directory
If the
semanage
utility is not available on your system, install thepolicycoreutils-python-utils
package.
Open the
3128
port in the firewall:# firewall-cmd --permanent --add-port=3128/tcp # firewall-cmd --reload
Enable and start the
squid
service:# systemctl enable --now squid
Verification
To verify that the proxy works correctly, download a web page using the curl
utility:
# curl -O -L "https://www.redhat.com/index.html" --proxy-negotiate -u : -x "proxy.ad.example.com:3128"
If curl
does not display any error and the index.html
file exists in the current directory, the proxy works.
Troubleshooting steps
To manually test Kerberos authentication:
Obtain a Kerberos ticket for the AD account:
# kinit user@AD.EXAMPLE.COM
Optional: Display the ticket:
# klist
Use the
negotiate_kerberos_auth_test
utility to test the authentication:# /usr/lib64/squid/negotiate_kerberos_auth_test proxy.ad.example.com
If the helper utility returns a token, the authentication succeeded:
Token: YIIFtAYGKwYBBQUCoIIFqDC...
6.4. Configuring a domain deny list in Squid
Frequently, administrators want to block access to specific domains. This section describes how to configure a domain deny list in Squid.
Prerequisites
- Squid is configured, and users can use the proxy.
Procedure
Edit the
/etc/squid/squid.conf
file and add the following settings:acl domain_deny_list dstdomain "/etc/squid/domain_deny_list.txt" http_access deny all domain_deny_list
ImportantAdd these entries before the first
http_access allow
statement that allows access to users or clients.Create the
/etc/squid/domain_deny_list.txt
file and add the domains you want to block. For example, to block access toexample.com
including subdomains and to blockexample.net
, add:.example.com example.net
ImportantIf you referred to the
/etc/squid/domain_deny_list.txt
file in the squid configuration, this file must not be empty. If the file is empty, Squid fails to start.Restart the
squid
service:# systemctl restart squid
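To check that the deny list works, you can, for example, request a blocked domain through the proxy. Squid is expected to reject the request with an access-denied error similar to the following:
# curl -I -x "proxy.example.com:3128" "http://www.example.com/"
HTTP/1.1 403 Forbidden
...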
6.5. Configuring the Squid service to listen on a specific port or IP address
By default, the Squid proxy service listens on the 3128
port on all network interfaces. You can change the port and configure Squid to listen on a specific IP address.
Prerequisites
-
The
squid
package is installed.
Procedure
Edit the
/etc/squid/squid.conf
file:To set the port on which the Squid service listens, set the port number in the
http_port
parameter. For example, to set the port to8080
, set:http_port 8080
To configure on which IP address the Squid service listens, set the IP address and port number in the
http_port
parameter. For example, to configure that Squid listens only on the192.0.2.1
IP address on port3128
, set:http_port 192.0.2.1:3128
Add multiple
http_port
parameters to the configuration file to configure Squid to listen on multiple ports and IP addresses:http_port 192.0.2.1:3128 http_port 192.0.2.1:8080
If you configured Squid to use a port other than the default (
3128
):Open the port in the firewall:
# firewall-cmd --permanent --add-port=port_number/tcp # firewall-cmd --reload
If you run SELinux in enforcing mode, assign the port to the
squid_port_t
port type definition:# semanage port -a -t squid_port_t -p tcp port_number
If the
semanage
utility is not available on your system, install thepolicycoreutils-python-utils
package.
Restart the
squid
service:# systemctl restart squid
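To verify the new listener, you can, for example, download a web page through the proxy by using the configured IP address and port. A sketch based on the values from the examples above:
# curl -O -L "https://www.redhat.com/index.html" -x "192.0.2.1:8080"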
6.6. Additional resources
-
Configuration parameters
/usr/share/doc/squid-<version>/squid.conf.documented
Chapter 7. Database servers
7.1. Introduction to database servers
A database server is a service that provides features of a database management system (DBMS). DBMS provides utilities for database administration and interacts with end users, applications, and databases.
Red Hat Enterprise Linux 8 provides the following database management systems:
- MariaDB 10.3
- MariaDB 10.5 - available since RHEL 8.4
- MariaDB 10.11 - available since RHEL 8.10
- MySQL 8.0
- PostgreSQL 10
- PostgreSQL 9.6
- PostgreSQL 12 - available since RHEL 8.1.1
- PostgreSQL 13 - available since RHEL 8.4
- PostgreSQL 15 - available since RHEL 8.8
- PostgreSQL 16 - available since RHEL 8.10
7.2. Using MariaDB
The MariaDB server is an open source, fast, and robust database server that is based on the MySQL technology. MariaDB is a relational database that converts data into structured information and provides an SQL interface for accessing data. It includes multiple storage engines and plugins, as well as geographic information system (GIS) and JavaScript Object Notation (JSON) features.
Learn how to install and configure MariaDB on a RHEL system, how to back up MariaDB data, how to migrate from an earlier MariaDB version, and how to replicate a database using the MariaDB Galera Cluster.
7.2.1. Installing MariaDB
In RHEL 8, the MariaDB server is available in the following versions, each provided by a separate stream:
- MariaDB 10.3
- MariaDB 10.5 - available since RHEL 8.4
- MariaDB 10.11 - available since RHEL 8.10
By design, it is impossible to install more than one version (stream) of the same module in parallel. Therefore, you must choose only one of the available streams from the mariadb
module. You can use different versions of the MariaDB database server in containers; see Running multiple MariaDB versions in containers.
The MariaDB and MySQL database servers cannot be installed in parallel in RHEL 8 due to conflicting RPM packages. You can use the MariaDB and MySQL database servers in parallel in containers; see Running multiple MySQL and MariaDB versions in containers.
To install MariaDB, use the following procedure.
Procedure
Install MariaDB server packages by selecting a stream (version) from the
mariadb
module and specifying theserver
profile. For example:# yum module install mariadb:10.3/server
Start the
mariadb
service:# systemctl start mariadb.service
Enable the
mariadb
service to start at boot:# systemctl enable mariadb.service
Recommended for MariaDB 10.3: To improve security when installing MariaDB, run the following command:
$ mysql_secure_installation
The command launches a fully interactive script, which prompts for each step in the process. The script enables you to improve security in the following ways:
- Setting a password for root accounts
- Removing anonymous users
Disallowing remote root logins (outside the local host)
NoteThe mysql_secure_installation script is no longer necessary in MariaDB 10.5 or later because the security enhancements are part of the default behavior since MariaDB 10.5.
If you want to upgrade from an earlier mariadb
stream within RHEL 8, follow both procedures described in Switching to a later stream and in Upgrading from MariaDB 10.3 to MariaDB 10.5 or in Upgrading from MariaDB 10.5 to MariaDB 10.11.
7.2.1.1. Running multiple MariaDB versions in containers
To run different versions of MariaDB on the same host, run them in containers because you cannot install multiple versions (streams) of the same module in parallel.
Prerequisites
-
The
container-tools
module is installed.
Procedure
Use your Red Hat Customer Portal account to authenticate to the
registry.redhat.io
registry:# podman login registry.redhat.io
Skip this step if you are already logged in to the container registry.
Run MariaDB 10.3 in a container:
$ podman run -d --name <container_name> -e MYSQL_ROOT_PASSWORD=<mariadb_root_password> -p <host_port_1>:3306 rhel8/mariadb-103
For more information about the usage of this container image, see the Red Hat Ecosystem Catalog.
Run MariaDB 10.5 in a container:
$ podman run -d --name <container_name> -e MYSQL_ROOT_PASSWORD=<mariadb_root_password> -p <host_port_2>:3306 rhel8/mariadb-105
For more information about the usage of this container image, see the Red Hat Ecosystem Catalog.
Run MariaDB 10.11 in a container:
$ podman run -d --name <container_name> -e MYSQL_ROOT_PASSWORD=<mariadb_root_password> -p <host_port_3>:3306 rhel8/mariadb-1011
For more information about the usage of this container image, see the Red Hat Ecosystem Catalog.
NoteThe container names and host ports of the database servers must differ.
To ensure that clients can access the database server on the network, open the host ports in the firewall:
# firewall-cmd --permanent --add-port={<host_port_1>/tcp,<host_port_2>/tcp,<host_port_3>/tcp...} # firewall-cmd --reload
Verification
Display information about running containers:
$ podman ps
Connect to the database server and log in as root:
# mysql -u root -p -h localhost -P <host_port> --protocol tcp
7.2.2. Configuring MariaDB
To configure the MariaDB server for networking, use the following procedure.
Procedure
Edit the
[mysqld]
section of the/etc/my.cnf.d/mariadb-server.cnf
file. You can set the following configuration directives:bind-address
- is the address on which the server listens. Possible options are:- a host name
- an IPv4 address
- an IPv6 address
skip-networking
- controls whether the server listens for TCP/IP connections. Possible values are:- 0 - to listen for all clients
- 1 - to listen for local clients only
-
port
- the port on which MariaDB listens for TCP/IP connections.
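For example, a minimal sketch of the [mysqld] section that makes the server listen on a single IPv4 address and a non-default port (hypothetical values):
[mysqld]
bind-address=192.0.2.10
port=3307
skip-networking=0
If you use a non-default port, remember to also open it in the firewall and, with SELinux in enforcing mode, assign it to the mysqld_port_t port type.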
Restart the
mariadb
service:# systemctl restart mariadb.service
7.2.3. Setting up TLS encryption on a MariaDB server
By default, MariaDB uses unencrypted connections. For secure connections, enable TLS support on the MariaDB server and configure your clients to establish encrypted connections.
7.2.3.1. Placing the CA certificate, server certificate, and private key on the MariaDB server
Before you can enable TLS encryption in the MariaDB server, store the certificate authority (CA) certificate, the server certificate, and the private key on the MariaDB server.
Prerequisites
The following files in Privacy Enhanced Mail (PEM) format have been copied to the server:
-
The private key of the server:
server.example.com.key.pem
-
The server certificate:
server.example.com.crt.pem
-
The Certificate Authority (CA) certificate:
ca.crt.pem
For details about creating a private key and certificate signing request (CSR), as well as about requesting a certificate from a CA, see your CA’s documentation.
Procedure
Store the CA and server certificates in the
/etc/pki/tls/certs/
directory:# mv <path>/server.example.com.crt.pem /etc/pki/tls/certs/ # mv <path>/ca.crt.pem /etc/pki/tls/certs/
Set permissions on the CA and server certificate that enable the MariaDB server to read the files:
# chmod 644 /etc/pki/tls/certs/server.example.com.crt.pem /etc/pki/tls/certs/ca.crt.pem
Because certificates are part of the communication before a secure connection is established, any client can retrieve them without authentication. Therefore, you do not need to set strict permissions on the CA and server certificate files.
Store the server’s private key in the
/etc/pki/tls/private/
directory:# mv <path>/server.example.com.key.pem /etc/pki/tls/private/
Set secure permissions on the server’s private key:
# chmod 640 /etc/pki/tls/private/server.example.com.key.pem # chgrp mysql /etc/pki/tls/private/server.example.com.key.pem
If unauthorized users have access to the private key, connections to the MariaDB server are no longer secure.
Restore the SELinux context:
# restorecon -Rv /etc/pki/tls/
7.2.3.2. Configuring TLS on a MariaDB server
To improve security, enable TLS support on the MariaDB server. As a result, clients can exchange data with the server using TLS encryption.
Prerequisites
- You installed the MariaDB server.
-
The
mariadb
service is running. The following files in Privacy Enhanced Mail (PEM) format exist on the server and are readable by the
mysql
user:-
The private key of the server:
/etc/pki/tls/private/server.example.com.key.pem
-
The server certificate:
/etc/pki/tls/certs/server.example.com.crt.pem
-
The Certificate Authority (CA) certificate
/etc/pki/tls/certs/ca.crt.pem
- The subject distinguished name (DN) or the subject alternative name (SAN) field in the server certificate matches the server’s hostname.
Procedure
Create the
/etc/my.cnf.d/mariadb-server-tls.cnf
file:Add the following content to configure the paths to the private key, server and CA certificate:
[mariadb] ssl_key = /etc/pki/tls/private/server.example.com.key.pem ssl_cert = /etc/pki/tls/certs/server.example.com.crt.pem ssl_ca = /etc/pki/tls/certs/ca.crt.pem
If you have a Certificate Revocation List (CRL), configure the MariaDB server to use it:
ssl_crl = /etc/pki/tls/certs/example.crl.pem
Optional: If you run MariaDB 10.5.2 or later, you can reject connection attempts without encryption. To enable this feature, append:
require_secure_transport = on
Optional: If you run MariaDB 10.4.6 or later, you can set the TLS versions the server should support. For example, to support TLS 1.2 and TLS 1.3, append:
tls_version = TLSv1.2,TLSv1.3
By default, the server supports TLS 1.1, TLS 1.2, and TLS 1.3.
Restart the
mariadb
service:# systemctl restart mariadb
Verification
To simplify troubleshooting, perform the following steps on the MariaDB server before you configure the local client to use TLS encryption:
Verify that MariaDB now has TLS encryption enabled:
# mysql -u root -p -e "SHOW GLOBAL VARIABLES LIKE 'have_ssl';" +---------------+-----------------+ | Variable_name | Value | +---------------+-----------------+ | have_ssl | YES | +---------------+-----------------+
If the
have_ssl
variable is set toyes
, TLS encryption is enabled.If you configured the MariaDB service to only support specific TLS versions, display the
tls_version
variable:# mysql -u root -p -e "SHOW GLOBAL VARIABLES LIKE 'tls_version';" +---------------+-----------------+ | Variable_name | Value | +---------------+-----------------+ | tls_version | TLSv1.2,TLSv1.3 | +---------------+-----------------+
7.2.3.3. Requiring TLS encrypted connections for specific user accounts
Users that have access to sensitive data should always use a TLS-encrypted connection to avoid sending data unencrypted over the network.
If you cannot require a secure transport for all connections on the server (
), configure individual user accounts to require TLS encryption.
Prerequisites
- The MariaDB server has TLS support enabled.
- The user you configure to require secure transport exists.
- The client trusts the CA certificate that issued the server’s certificate.
Procedure
Connect as an administrative user to the MariaDB server:
# mysql -u root -p -h server.example.com
If your administrative user has no permissions to access the server remotely, perform the command on the MariaDB server and connect to
localhost
.Use the
REQUIRE SSL
clause to enforce that a user must connect using a TLS-encrypted connection:MariaDB [(none)]> ALTER USER 'example'@'%' REQUIRE SSL;
Verification
Connect to the server as the
example
user using TLS encryption:# mysql -u example -p -h server.example.com --ssl ... MariaDB [(none)]>
If no error is shown and you have access to the interactive MariaDB console, the connection with TLS succeeds.
Attempt to connect as the
example
user with TLS disabled:# mysql -u example -p -h server.example.com --skip-ssl ERROR 1045 (28000): Access denied for user 'example'@'server.example.com' (using password: YES)
The server rejected the login attempt because TLS is required for this user but disabled (
--skip-ssl
).
Additional resources
7.2.4. Globally enabling TLS encryption in MariaDB clients
If your MariaDB server supports TLS encryption, configure your clients to establish only secure connections and to verify the server certificate. This procedure describes how to enable TLS support for all users on the server.
7.2.4.1. Configuring the MariaDB client to use TLS encryption by default
On RHEL, you can globally configure the MariaDB client to use TLS encryption and to verify that the Common Name (CN) in the server certificate matches the hostname the user connects to. This prevents man-in-the-middle attacks.
Prerequisites
- The MariaDB server has TLS support enabled.
- If the certificate authority (CA) that issued the server’s certificate is not trusted by RHEL, the CA certificate has been copied to the client.
Procedure
If RHEL does not trust the CA that issued the server’s certificate:
Copy the CA certificate to the
/etc/pki/ca-trust/source/anchors/
directory:# cp <path>/ca.crt.pem /etc/pki/ca-trust/source/anchors/
Set permissions that enable all users to read the CA certificate file:
# chmod 644 /etc/pki/ca-trust/source/anchors/ca.crt.pem
Rebuild the CA trust database:
# update-ca-trust
Create the
/etc/my.cnf.d/mariadb-client-tls.cnf
file with the following content:[client-mariadb] ssl ssl-verify-server-cert
These settings define that the MariaDB client uses TLS encryption (
ssl
) and that the client compares the hostname with the CN in the server certificate (ssl-verify-server-cert
).
Verification
Connect to the server using the hostname, and display the server status:
# mysql -u root -p -h server.example.com -e status ... SSL: Cipher in use is TLS_AES_256_GCM_SHA384
If the
SSL
entry containsCipher in use is…
, the connection is encrypted. Note that the user you use in this command must have permissions to authenticate remotely.
If the hostname you connect to does not match the hostname in the TLS certificate of the server, the
ssl-verify-server-cert
parameter causes the connection to fail. For example, if you connect tolocalhost
:# mysql -u root -p -h localhost -e status ERROR 2026 (HY000): SSL connection error: Validation of SSL server certificate failed
Additional resources
-
The
--ssl*
parameter descriptions in themysql(1)
man page on your system
7.2.5. Backing up MariaDB data
There are two main ways to back up data from a MariaDB database in Red Hat Enterprise Linux 8:
- Logical backup
- Physical backup
Logical backup consists of the SQL statements necessary to restore the data. This type of backup exports information and records in plain text files.
The main advantage of logical backup over physical backup is portability and flexibility. The data can be restored on other hardware configurations, MariaDB versions, or database management systems (DBMS), which is not possible with physical backups.
Note that logical backup can be performed if the mariadb.service
is running. Logical backup does not include log and configuration files.
Physical backup consists of copies of files and directories that store the content.
Physical backup has the following advantages compared to logical backup:
- Output is more compact.
- Backup is smaller in size.
- Backup and restore are faster.
- Backup includes log and configuration files.
Note that physical backup must be performed when the mariadb.service
is not running or all tables in the database are locked to prevent changes during the backup.
You can use one of the following MariaDB backup approaches to back up data from a MariaDB database:
-
Logical backup with
mysqldump
-
Physical online backup using the
Mariabackup
utility - File system backup
- Replication as a backup solution
7.2.5.1. Performing logical backup with mysqldump
The mysqldump client is a backup utility, which can be used to dump a database or a collection of databases for the purpose of a backup or transfer to another database server. The output of mysqldump typically consists of SQL statements to re-create the server table structure, populate it with data, or both. mysqldump can also generate files in other formats, including XML and delimited text formats, such as CSV.
To perform the mysqldump backup, you can use one of the following options:
- Back up one or more selected databases
- Back up all databases
- Back up a subset of tables from one database
Procedure
To dump a single database, run:
# mysqldump [options] --databases db_name > backup-file.sql
To dump multiple databases at once, run:
# mysqldump [options] --databases db_name1 [db_name2 …] > backup-file.sql
To dump all databases, run:
# mysqldump [options] --all-databases > backup-file.sql
To load one or more dumped full databases back into a server, run:
# mysql < backup-file.sql
To load a database to a remote MariaDB server, run:
# mysql --host=remote_host < backup-file.sql
To dump a subset of tables from one database, add a list of the chosen tables at the end of the
mysqldump
command:# mysqldump [options] db_name [tbl_name …] > backup-file.sql
To load a subset of tables dumped from one database, run:
# mysql db_name < backup-file.sql
NoteThe db_name database must exist at this point.
To see a list of the options that mysqldump supports, run:
$ mysqldump --help
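For example, a minimal sketch of backing up a hypothetical database named mydb and loading it on another server (hypothetical host and paths):
# mysqldump --databases mydb > /backup/mydb.sql
# mysql --host=db2.example.com < /backup/mydb.sql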
Additional resources
- For more information about logical backup with mysqldump, see the MariaDB Documentation.
7.2.5.2. Performing physical online backup using the Mariabackup utility
Mariabackup is a utility based on the Percona XtraBackup technology, which enables performing physical online backups of InnoDB, Aria, and MyISAM tables. This utility is provided by the mariadb-backup
package from the AppStream repository.
Mariabackup supports full backup capability for MariaDB server, which includes encrypted and compressed data.
Prerequisites
The
mariadb-backup
package is installed on the system:# yum install mariadb-backup
- You must provide Mariabackup with credentials for the user under which the backup will be run. You can provide the credentials either on the command line or in a configuration file.
-
Users of Mariabackup must have the
RELOAD
,LOCK TABLES
, andREPLICATION CLIENT
privileges.
To create a backup of a database using Mariabackup, use the following procedure.
Procedure
To create a backup while providing credentials on the command line, run:
$ mariabackup --backup --target-dir <backup_directory> --user <backup_user> --password <backup_passwd>
The
target-dir
option defines the directory where the backup files are stored. If you want to perform a full backup, the target directory must be empty or not exist.The
user
andpassword
options allow you to configure the user name and the password.To create a backup with credentials set in a configuration file:
-
Create a configuration file in the
/etc/my.cnf.d/
directory, for example,/etc/my.cnf.d/mariabackup.cnf
. Add the following lines into the
[xtrabackup]
or[mysqld]
section of the new file:[xtrabackup] user=myuser password=mypassword
Perform the backup:
$ mariabackup --backup --target-dir <backup_directory>
Additional resources
7.2.5.3. Restoring data using the Mariabackup utility
When the backup is complete, you can restore the data from the backup by using the mariabackup
command with one of the following options:
-
--copy-back
allows you to keep the original backup files. -
--move-back
moves the backup files to the data directory and removes the original backup files.
To restore data using the Mariabackup utility, use the following procedure.
Prerequisites
Verify that the
mariadb
service is not running:# systemctl stop mariadb.service
- Verify that the data directory is empty.
-
Users of Mariabackup must have the
RELOAD
,LOCK TABLES
, andREPLICATION CLIENT
privileges.
Procedure
Run the
mariabackup
command:To restore data and keep the original backup files, use the
--copy-back
option:$ mariabackup --copy-back --target-dir=/var/mariadb/backup/
To restore data and remove the original backup files, use the
--move-back
option:$ mariabackup --move-back --target-dir=/var/mariadb/backup/
Fix the file permissions.
When restoring a database, Mariabackup preserves the file and directory privileges of the backup. However, Mariabackup writes the files to disk as the user and group restoring the database. After restoring a backup, you may need to adjust the owner of the data directory to match the user and group for the MariaDB server, typically
mysql
for both.For example, to recursively change ownership of the files to the
mysql
user and group:# chown -R mysql:mysql /var/lib/mysql/
Start the
mariadb
service:# systemctl start mariadb.service
Additional resources
7.2.5.4. Performing file system backup
To create a file system backup of MariaDB data files, copy the content of the MariaDB data directory to your backup location.
To back up also your current configuration or the log files, use the optional steps of the following procedure.
Procedure
Stop the
mariadb
service:# systemctl stop mariadb.service
Copy the data files to the required location:
# cp -r /var/lib/mysql /backup-location
Optional: Copy the configuration files to the required location:
# cp -r /etc/my.cnf /etc/my.cnf.d /backup-location/configuration
Optional: Copy the log files to the required location:
# cp /var/log/mariadb/* /backup-location/logs
Start the
mariadb
service:# systemctl start mariadb.service
When loading the backed up data from the backup location to the
/var/lib/mysql
directory, ensure thatmysql:mysql
is an owner of all data in/var/lib/mysql
:# chown -R mysql:mysql /var/lib/mysql
7.2.5.5. Replication as a backup solution
Replication is an alternative backup solution for source servers. If a source server replicates to a replica server, backups can be run on the replica without any impact on the source. The source can still run while you shut down the replica and back the data up from the replica.
Replication itself is not a sufficient backup solution. Replication protects source servers against hardware failures, but it does not ensure protection against data loss. It is recommended that you use another backup solution on the replica together with this method.
Additional resources
7.2.6. Migrating to MariaDB 10.3
RHEL 7 contains MariaDB 5.5 as the default implementation of a server from the MySQL databases family. Later versions of the MariaDB database server are available as Software Collections for RHEL 7. RHEL 8 provides MariaDB 10.3, MariaDB 10.5, MariaDB 10.11, and MySQL 8.0.
This part describes migration to MariaDB 10.3 from a RHEL 7 or Red Hat Software Collections version of MariaDB.
If you want to migrate from MariaDB 10.3 to MariaDB 10.5 within RHEL 8, see Upgrading from MariaDB 10.3 to MariaDB 10.5 instead.
If you want to migrate from MariaDB 10.5 to MariaDB 10.11 within RHEL 8, see Upgrading from MariaDB 10.5 to MariaDB 10.11.
7.2.6.1. Notable differences between the RHEL 7 and RHEL 8 versions of MariaDB
The most important changes between MariaDB 5.5 and MariaDB 10.3 are:
- MariaDB Galera Cluster, a synchronous multi-source cluster, is a standard part of MariaDB since 10.1.
- The ARCHIVE storage engine is no longer enabled by default, and the plugin needs to be specifically enabled.
- The BLACKHOLE storage engine is no longer enabled by default, and the plugin needs to be specifically enabled.
InnoDB is used as the default storage engine instead of XtraDB, which was used in MariaDB 10.1 and earlier versions.
For more details, see Why does MariaDB 10.2 use InnoDB instead of XtraDB?.
-
The new
mariadb-connector-c
packages provide a common client library for MySQL and MariaDB. This library is usable with any version of the MySQL and MariaDB database servers. As a result, the user is able to connect one build of an application to any of the MySQL and MariaDB servers distributed with Red Hat Enterprise Linux 8.
To migrate from MariaDB 5.5 to MariaDB 10.3, you need to perform multiple configuration changes.
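If your applications rely on the ARCHIVE or BLACKHOLE storage engine mentioned above, you must enable the corresponding plugin after the migration. A minimal sketch, run as a database administrator; alternatively, the plugins can be loaded through the plugin_load_add option in the server configuration:
MariaDB [(none)]> INSTALL SONAME 'ha_archive';
MariaDB [(none)]> INSTALL SONAME 'ha_blackhole';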
7.2.6.2. Configuration changes
The recommended migration path from MariaDB 5.5 to MariaDB 10.3 is to upgrade to MariaDB 10.0 first, and then upgrade by one version successively.
The main advantage of upgrading one minor version at a time is better adaptation of the database, including both data and configuration, to the changes. The upgrade ends on the same major version as is available in RHEL 8 (MariaDB 10.3), which significantly reduces configuration changes or other issues.
For more information about configuration changes when migrating from MariaDB 5.5 to MariaDB 10.0, see Migrating to MariaDB 10.0 in Red Hat Software Collections documentation.
The migration to the following successive versions of MariaDB and the required configuration changes are described in these documents:
- Migrating to MariaDB 10.1 in Red Hat Software Collections documentation.
- Migrating to MariaDB 10.2 in Red Hat Software Collections documentation.
- Migrating to MariaDB 10.3 in Red Hat Software Collections documentation.
Migration directly from MariaDB 5.5 to MariaDB 10.3 is also possible, but you must perform all configuration changes that are required by differences described in the migration documents above.
7.2.6.3. In-place upgrade using the mysql_upgrade utility
To migrate the database files to RHEL 8, users of MariaDB on RHEL 7 must perform the in-place upgrade using the mysql_upgrade
utility. The mysql_upgrade
utility is provided by the mariadb-server-utils
subpackage, which is installed as a dependency of the mariadb-server
package.
To perform an in-place upgrade, you must copy binary data files to the /var/lib/mysql/
data directory on the RHEL 8 system and use the mysql_upgrade
utility.
You can use this method for migrating data from:
- The Red Hat Enterprise Linux 7 version of MariaDB 5.5
The Red Hat Software Collections versions of:
- MariaDB 5.5 (no longer supported)
- MariaDB 10.0 (no longer supported)
- MariaDB 10.1 (no longer supported)
- MariaDB 10.2 (no longer supported)
MariaDB 10.3 (no longer supported)
Note that it is recommended to upgrade to MariaDB 10.3 by one version successively. See the respective Migration chapters in the Release Notes for Red Hat Software Collections.
If you are upgrading from the RHEL 7 version of MariaDB, the source data is stored in the /var/lib/mysql/
directory. In case of Red Hat Software Collections versions of MariaDB, the source data directory is /var/opt/rh/<collection_name>/lib/mysql/
(with the exception of the mariadb55
, which uses the /opt/rh/mariadb55/root/var/lib/mysql/
data directory).
To perform an upgrade using the mysql_upgrade utility, use the following procedure.
Prerequisites
- Before performing the upgrade, back up all your data stored in the MariaDB databases.
Procedure
Ensure that the
mariadb-server
package is installed on the RHEL 8 system:# yum install mariadb-server
Ensure that the
mariadb
service is not running on either of the source and target systems at the time of copying data:# systemctl stop mariadb.service
-
Copy the data from the source location to the
/var/lib/mysql/
directory on the RHEL 8 target system. Set the appropriate permissions and SELinux context for copied files on the target system:
# restorecon -vr /var/lib/mysql
Start the MariaDB server on the target system:
# systemctl start mariadb.service
Run the
mysql_upgrade
command to check and repair internal tables:$ mysql_upgrade
-
When the upgrade is complete, verify that all configuration files within the
/etc/my.cnf.d/
directory include only options valid for MariaDB 10.3.
There are certain risks and known problems related to an in-place upgrade. For example, some queries might not work, or they might run in a different order than before the upgrade. For more information about these risks and problems, and for general information about an in-place upgrade, see MariaDB 10.3 Release Notes.
7.2.7. Upgrading from MariaDB 10.3 to MariaDB 10.5
This part describes migration from MariaDB 10.3 to MariaDB 10.5 within RHEL 8.
7.2.7.1. Notable differences between MariaDB 10.3 and MariaDB 10.5
Significant changes between MariaDB 10.3 and MariaDB 10.5 include:
-
MariaDB now uses the
unix_socket
authentication plugin by default. The plugin enables users to use operating system credentials when connecting to MariaDB through the local UNIX socket file. -
MariaDB
addsmariadb-*
named binaries andmysql*
symbolic links pointing to themariadb-*
binaries. For example, themysqladmin
,mysqlaccess
, andmysqlshow
symlinks point to themariadb-admin
,mariadb-access
, andmariadb-show
binaries, respectively. -
The
SUPER
privilege has been split into several privileges to better align with each user role. As a result, certain statements have changed required privileges. -
In parallel replication, the
slave_parallel_mode
now defaults tooptimistic
. -
In the InnoDB storage engine, defaults of the following variables have been changed:
innodb_adaptive_hash_index
toOFF
andinnodb_checksum_algorithm
tofull_crc32
. MariaDB now uses the
libedit
implementation of the underlying software managing the MariaDB command history (the.mysql_history
file) instead of the previously usedreadline
library. This change impacts users working directly with the.mysql_history
file. Note that.mysql_history
is a file managed by the MariaDB or MySQL applications, and users should not work with the file directly. The human-readable appearance is coincidental.NoteTo increase security, you can consider not maintaining a history file. To disable the command history recording:
-
Remove the
.mysql_history
file if it exists. Use either of the following approaches:
-
Set the
MYSQL_HISTFILE
variable to/dev/null
and include this setting in any of your shell’s startup files. Change the
.mysql_history
file to a symbolic link to/dev/null
:$ ln -s /dev/null $HOME/.mysql_history
MariaDB Galera Cluster has been upgraded to version 4 with the following notable changes:
- Galera adds a new streaming replication feature, which supports replicating transactions of unlimited size. During an execution of streaming replication, a cluster replicates a transaction in small fragments.
- Galera now fully supports Global Transaction ID (GTID).
-
The default value for the
wsrep_on
option in the/etc/my.cnf.d/galera.cnf
file has changed from1
to0
to prevent end users from startingwsrep
replication without configuring required additional options.
Changes to the PAM plugin in MariaDB 10.5 include:
-
MariaDB 10.5 adds a new version of the Pluggable Authentication Modules (PAM) plugin. The PAM plugin version 2.0 performs PAM authentication using a separate
setuid root
helper binary, which enables MariaDB to use additional PAM modules. -
The helper binary can be executed only by users in the
mysql
group. By default, the group contains only themysql
user. Red Hat recommends that administrators do not add more users to themysql
group to prevent password-guessing attacks without throttling or logging through this helper utility. -
In MariaDB 10.5, the Pluggable Authentication Modules (PAM) plugin and its related files have been moved to a new package,
mariadb-pam
. As a result, no newsetuid root
binary is introduced on systems that do not use PAM authentication forMariaDB
. -
The
mariadb-pam
package contains both PAM plugin versions: version 2.0 is the default, and version 1.0 is available as theauth_pam_v1
shared object library. -
The
mariadb-pam
package is not installed by default with the MariaDB server. To make the PAM authentication plugin available in MariaDB 10.5, install themariadb-pam
package manually.
7.2.7.2. Upgrading from a RHEL 8 version of MariaDB 10.3 to MariaDB 10.5
This procedure describes upgrading from the mariadb:10.3
module stream to the mariadb:10.5
module stream using the yum
and mariadb-upgrade
utilities.
The mariadb-upgrade
utility is provided by the mariadb-server-utils
subpackage, which is installed as a dependency of the mariadb-server
package.
Prerequisites
- Before performing the upgrade, back up all your data stored in the MariaDB databases.
Procedure
Stop the MariaDB server:
# systemctl stop mariadb.service
Execute the following command to determine if your system is prepared for switching to a later stream:
# yum distro-sync
This command must finish with the message Nothing to do. Complete! For more information, see Switching to a later stream.
Reset the
mariadb
module on your system:# yum module reset mariadb
Enable the new
mariadb:10.5
module stream:# yum module enable mariadb:10.5
Synchronize installed packages to perform the change between streams:
# yum distro-sync
This will update all installed MariaDB packages.
-
Adjust the configuration so that option files located in
/etc/my.cnf.d/
include only options valid for MariaDB 10.5. For details, see upstream documentation for MariaDB 10.4 and MariaDB 10.5. Start the MariaDB server.
When upgrading a database running standalone:
# systemctl start mariadb.service
When upgrading a Galera cluster node:
# galera_new_cluster
The
mariadb
service will be started automatically.
Execute the mariadb-upgrade utility to check and repair internal tables.
When upgrading a database running standalone:
# mariadb-upgrade
When upgrading a Galera cluster node:
# mariadb-upgrade --skip-write-binlog
There are certain risks and known problems related to an in-place upgrade. For example, some queries might not work, or they might run in a different order than before the upgrade. For more information about these risks and problems, and for general information about an in-place upgrade, see MariaDB 10.5 Release Notes.
7.2.8. Upgrading from MariaDB 10.5 to MariaDB 10.11
This part describes migration from MariaDB 10.5 to MariaDB 10.11 within RHEL 8.
7.2.8.1. Notable differences between MariaDB 10.5 and MariaDB 10.11
Significant changes between MariaDB 10.5 and MariaDB 10.11 include:
-
A new
sys_schema
feature is a collection of views, functions, and procedures to provide information about database usage. -
The
CREATE TABLE
,ALTER TABLE
,RENAME TABLE
,DROP TABLE
,DROP DATABASE
, and related Data Definition Language (DDL) statements are now atomic. The statement must be fully completed, otherwise the changes are reverted. Note that when deleting multiple tables withDROP TABLE
, only each individual drop is atomic, not the full list of tables. -
A new
GRANT … TO PUBLIC
privilege is available. -
The
SUPER
andREAD ONLY ADMIN
privileges are now separate. -
You can now store universally unique identifiers in the new
UUID
database data type. - MariaDB now supports the Secure Socket Layer (SSL) protocol version 3.
- The MariaDB server now requires correctly configured SSL to start. Previously, MariaDB silently disabled SSL and used insecure connections in case of misconfigured SSL.
-
MariaDB now supports the natural sort order through the
natural_sort_key()
function. -
A new
SFORMAT
function is now available for arbitrary text formatting. -
The
utf8
character set (and related collations) is now by default an alias forutf8mb3
. - MariaDB supports the Unicode Collation Algorithm (UCA) 14 collations.
-
systemd
socket activation files for MariaDB are now available in the/usr/share/
directory. Note that they are not a part of the default configuration in RHEL as opposed to upstream. -
Error messages now contain the
MariaDB
string instead ofMySQL
. - Error messages are now available in the Chinese language.
- The default logrotate file has changed significantly. Review your configuration before migrating to MariaDB 10.11.
-
For MariaDB and MySQL clients, the connection property specified on the command line (for example,
--port=3306
), now forces the protocol type of communication between the client and the server, such astcp
,socket
,pipe
, ormemory
. Previously, for example, the specified port was ignored if a MariaDB client connected through a UNIX socket.
7.2.8.2. Upgrading from a RHEL 8 version of MariaDB 10.5 to MariaDB 10.11
This procedure describes upgrading from the mariadb:10.5
module stream to the mariadb:10.11
module stream using the yum
and mariadb-upgrade
utilities.
The mariadb-upgrade
utility is provided by the mariadb-server-utils
subpackage, which is installed as a dependency of the mariadb-server
package.
Prerequisites
- Before performing the upgrade, back up all your data stored in the MariaDB databases.
Procedure
Stop the MariaDB server:
# systemctl stop mariadb.service
Execute the following command to determine if your system is prepared for switching to a later stream:
# yum distro-sync
This command must finish with the message `Nothing to do. Complete!` For more information, see Switching to a later stream.
Reset the `mariadb` module on your system:
# yum module reset mariadb
Enable the new `mariadb:10.11` module stream:
# yum module enable mariadb:10.11
Synchronize installed packages to perform the change between streams:
# yum distro-sync
This will update all installed MariaDB packages.
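Optionally, to confirm that the packages now come from the new stream, you can list the enabled `mariadb` module stream. This check is an illustrative addition, not part of the documented procedure:
# yum module list --enabled mariadb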
- Adjust the configuration so that option files located in `/etc/my.cnf.d/` include only options valid for MariaDB 10.11. For details, see upstream documentation for MariaDB 10.6 and MariaDB 10.11.
Start the MariaDB server.
When upgrading a database running standalone:
# systemctl start mariadb.service
When upgrading a Galera cluster node:
# galera_new_cluster
The `mariadb` service will be started automatically.
Execute the mariadb-upgrade utility to check and repair internal tables.
When upgrading a database running standalone:
# mariadb-upgrade
When upgrading a Galera cluster node:
# mariadb-upgrade --skip-write-binlog
There are certain risks and known problems related to an in-place upgrade. For example, some queries might not work, or they might run in a different order than before the upgrade. For more information about these risks and problems, and for general information about an in-place upgrade, see MariaDB 10.11 Release Notes.
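Optionally, you can verify that the server now reports the upgraded version. This check is an illustrative addition, not part of the documented procedure:
# mariadb -e "SELECT VERSION();"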
7.2.9. Replicating MariaDB with Galera
This section describes how to replicate a MariaDB database using the Galera solution on Red Hat Enterprise Linux 8.
7.2.9.1. Introduction to MariaDB Galera Cluster
Galera replication is based on the creation of a synchronous multi-source MariaDB Galera Cluster consisting of multiple MariaDB servers. Unlike the traditional primary/replica setup where replicas are usually read-only, nodes in the MariaDB Galera Cluster can be all writable.
The interface between Galera replication and a MariaDB database is defined by the write set replication API (wsrep API).
The main features of MariaDB Galera Cluster are:
- Synchronous replication
- Active-active multi-source topology
- Read and write to any cluster node
- Automatic membership control, failed nodes drop from the cluster
- Automatic node joining
- Parallel replication on row level
- Direct client connections: users can log on to the cluster nodes, and work with the nodes directly while the replication runs
Synchronous replication means that a server replicates a transaction at commit time by broadcasting the write set associated with the transaction to every node in the cluster. The client (user application) connects directly to the Database Management System (DBMS), and experiences behavior that is similar to native MariaDB.
Synchronous replication guarantees that a change that happened on one node in the cluster happens on other nodes in the cluster at the same time.
Therefore, synchronous replication has the following advantages over asynchronous replication:
- No delay in propagation of the changes between particular cluster nodes
- All cluster nodes are always consistent
- The latest changes are not lost if one of the cluster nodes crashes
- Transactions on all cluster nodes are executed in parallel
- Causality across the whole cluster
7.2.9.2. Components to build MariaDB Galera Cluster
To build MariaDB Galera Cluster, you must install the following packages on your system:
- `mariadb-server-galera` - contains support files and scripts for MariaDB Galera Cluster.
- `mariadb-server` - is patched by MariaDB upstream to include the write set replication API (wsrep API). This API provides the interface between Galera replication and MariaDB.
- `galera` - is patched by MariaDB upstream to add full support for MariaDB. The `galera` package contains the following:
  - Galera Replication Library provides the whole replication functionality.
  - The Galera Arbitrator utility can be used as a cluster member that participates in voting in split-brain scenarios. However, Galera Arbitrator cannot participate in the actual replication.
  - The Galera systemd service and the Galera wrapper script, which are used for deploying the Galera Arbitrator utility. MariaDB 10.3, MariaDB 10.5, and MariaDB 10.11 in RHEL 8 include a Red Hat version of the `garbd` systemd service and a wrapper script for the `galera` package in the `/usr/lib/systemd/system/garbd.service` and `/usr/sbin/garbd-wrapper` files, respectively. Since RHEL 8.6, MariaDB distributed with RHEL also provides an upstream version of these files, located at `/usr/share/doc/galera/garb-systemd` and `/usr/share/doc/galera/garbd.service`.
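For example, to verify that all three components are present on a system, you can query the RPM database for the packages listed above:
# rpm -q mariadb-server-galera mariadb-server galera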
Additional resources
7.2.9.3. Deploying MariaDB Galera Cluster
Prerequisites
- All of the nodes in the cluster have TLS set up.
All certificates on all nodes must have the `Extended Key Usage` field set to:
TLS Web Server Authentication, TLS Web Client Authentication
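For example, to check the `Extended Key Usage` field of a certificate, you can use the `openssl x509` utility. The certificate path below is illustrative:
# openssl x509 -in /etc/pki/tls/certs/source.crt -noout -text | grep -A 1 "Extended Key Usage"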
Procedure
Install MariaDB Galera Cluster packages by selecting a stream (version) from the `mariadb` module and specifying the `galera` profile. For example:
# yum module install mariadb:10.3/galera
As a result, the following packages are installed:
- `mariadb-server-galera`
- `mariadb-server`
- `galera`
The `mariadb-server-galera` package pulls the `mariadb-server` and `galera` packages as its dependencies.
For more information about which packages you need to install to build MariaDB Galera Cluster, see Components to build MariaDB Galera Cluster.
- Update the MariaDB server replication configuration before the system is added to a cluster for the first time. The default configuration is distributed in the `/etc/my.cnf.d/galera.cnf` file. Before deploying MariaDB Galera Cluster, set the `wsrep_cluster_address` option in the `/etc/my.cnf.d/galera.cnf` file on all nodes to start with the following string:
gcomm://
For the initial node, it is possible to set `wsrep_cluster_address` as an empty list:
wsrep_cluster_address="gcomm://"
For all other nodes, set `wsrep_cluster_address` to include an address of any node that is already a part of the running cluster. For example:
wsrep_cluster_address="gcomm://10.0.0.10"
For more information about how to set Galera Cluster address, see Galera Cluster Address.
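For example, to allow a node to join through any of its peers, you can list all cluster members. The IP addresses below are illustrative:
wsrep_cluster_address="gcomm://10.0.0.10,10.0.0.11,10.0.0.12"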
- Enable the `wsrep` API on every node by setting the `wsrep_on=1` option in the `/etc/my.cnf.d/galera.cnf` configuration file. Add the `wsrep_provider_options` variable to the Galera configuration file with the TLS keys and certificates. For example:
wsrep_provider_options="socket.ssl_cert=/etc/pki/tls/certs/source.crt;socket.ssl_key=/etc/pki/tls/private/source.key;socket.ssl_ca=/etc/pki/tls/certs/ca.crt"
Bootstrap a first node of a new cluster by running the following wrapper on that node:
# galera_new_cluster
This wrapper ensures that the MariaDB server daemon (`mysqld`) runs with the `--wsrep-new-cluster` option. This option provides the information that there is no existing cluster to connect to. Therefore, the node creates a new UUID to identify the new cluster.
Note
The `mariadb` service supports a systemd method for interacting with multiple MariaDB server processes. Therefore, in cases with multiple running MariaDB servers, you can bootstrap a specific instance by specifying the instance name as a suffix:
# galera_new_cluster mariadb@node1
Connect other nodes to the cluster by running the following command on each of the nodes:
# systemctl start mariadb
As a result, the node connects to the cluster, and synchronizes itself with the state of the cluster.
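Optionally, you can confirm on any node that all members have joined by querying the Galera status variables. This check is an illustrative addition, not part of the documented procedure:
# mariadb -e "SHOW STATUS LIKE 'wsrep_cluster_size';"
The reported value should match the number of nodes in the cluster.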
Additional resources
7.2.9.4. Adding a new node to MariaDB Galera Cluster
To add a new node to MariaDB Galera Cluster, use the following procedure.
Note that you can also use this procedure to reconnect an already existing node.
Procedure
On the particular node, provide an address to one or more existing cluster members in the `wsrep_cluster_address` option within the `[mariadb]` section of the `/etc/my.cnf.d/galera.cnf` configuration file:
[mariadb]
wsrep_cluster_address="gcomm://192.168.0.1"
When a new node connects to one of the existing cluster nodes, it is able to see all nodes in the cluster.
However, it is preferable to list all nodes of the cluster in `wsrep_cluster_address`.
As a result, any node can join a cluster by connecting to any other cluster node, even if one or more cluster nodes are down. When all members agree on the membership, the cluster’s state is changed. If the new node’s state is different from the state of the cluster, the new node requests either an Incremental State Transfer (IST) or a State Snapshot Transfer (SST) to ensure consistency with the other nodes.
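For example, to check whether a newly added node has completed the state transfer, you can query its local Galera state. This check is an illustrative addition, not part of the documented procedure:
# mariadb -e "SHOW STATUS LIKE 'wsrep_local_state_comment';"
A value of `Synced` indicates that the node is consistent with the rest of the cluster.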
7.2.9.5. Restarting MariaDB Galera Cluster
If you shut down all nodes at the same time, you stop the cluster, and the running cluster no longer exists. However, the cluster’s data still exist.
To restart the cluster, bootstrap a first node as described in Deploying MariaDB Galera Cluster.
If the cluster is not bootstrapped, and `mariadb` on the first node is started with only the `systemctl start mariadb` command, the node tries to connect to at least one of the nodes listed in the `wsrep_cluster_address` option in the `/etc/my.cnf.d/galera.cnf` file. If no nodes are currently running, the restart fails.
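When deciding which node to bootstrap after a full shutdown, you can, for example, inspect the Galera state file on each node. The data directory path below assumes the default location:
# cat /var/lib/mysql/grastate.dat
The node whose `grastate.dat` file contains `safe_to_bootstrap: 1` holds the most recent committed state and is the safest node to bootstrap first.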
Additional resources
7.2.10. Developing MariaDB client applications
Red Hat recommends developing your MariaDB client applications against the MariaDB client library.
The development files and programs necessary to build applications against the MariaDB client library are provided by the `mariadb-connector-c-devel` package.
Instead of using a direct library name, use the `mariadb_config` program, which is distributed in the `mariadb-connector-c-devel` package. This program ensures that the correct build flags are returned.
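For example, to compile a hypothetical `app.c` client program against the MariaDB client library, you can pass the output of `mariadb_config` to the compiler instead of hard-coding include paths and library names:
$ gcc app.c -o app $(mariadb_config --cflags --libs)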
7.3. Using MySQL
The MySQL server is a fast and robust open source database server. MySQL is a relational database that converts data into structured information and provides an SQL interface for accessing data. It includes multiple storage engines and plugins, as well as geographic information system (GIS) and JavaScript Object Notation (JSON) features.
Learn how to install and configure MySQL on a RHEL system, how to back up MySQL data, how to migrate from an earlier MySQL version, and how to replicate a MySQL database.
7.3.1. Installing MySQL
In RHEL 8, the MySQL 8.0 server is available as the `mysql:8.0` module stream.
The MySQL and MariaDB database servers cannot be installed in parallel in RHEL 8 due to conflicting RPM packages. You can use the MySQL and MariaDB database servers in parallel in containers; see Running multiple MySQL and MariaDB versions in containers.
To install MySQL, use the following procedure.
Procedure
Install MySQL server packages by selecting the `8.0` stream (version) from the `mysql` module and specifying the `server` profile:
# yum module install mysql:8.0/server
Start the `mysqld` service:
# systemctl start mysqld.service
Enable the `mysqld` service to start at boot:
# systemctl enable mysqld.service
Recommended: To improve security when installing MySQL, run the following command:
$ mysql_secure_installation
The command launches a fully interactive script, which prompts for each step in the process. The script enables you to improve security in the following ways:
- Setting a password for root accounts
- Removing anonymous users
- Disallowing remote root logins (outside the local host)