Chapter 2. Setting up and configuring NGINX
NGINX is a high-performance and modular server that you can use, for example, as a:
- Web server
- Reverse proxy
- Load balancer
2.1. Installing and preparing NGINX
Red Hat uses Application Streams to provide different versions of NGINX. You can do the following:
- Select a stream and install NGINX
- Open the required ports in the firewall
- Enable and start the nginx service
Using the default configuration, NGINX runs as a web server on port 80 and provides content from the /usr/share/nginx/html/ directory.
Prerequisites
- The host is subscribed to the Red Hat Customer Portal.
- The firewalld service is enabled and started.
Procedure
- Install the nginx package:
  # dnf install nginx
- Open the ports on which NGINX should provide its service in the firewall. For example, to open the default ports for HTTP (port 80) and HTTPS (port 443) in firewalld, enter:
  # firewall-cmd --permanent --add-port={80/tcp,443/tcp}
  # firewall-cmd --reload
- Enable the nginx service to start automatically when the system boots:
  # systemctl enable nginx
- Optional: Start the nginx service:
  # systemctl start nginx
  If you do not want to use the default configuration, skip this step and configure NGINX accordingly before you start the service.
Verification
- Use the dnf utility to verify that the nginx package is installed:
  # dnf list installed nginx
  Installed Packages
  nginx.x86_64    1:1.14.1-9.module+el8.0.0+4108+af250afe    @rhel-8-for-x86_64-appstream-rpms
- Ensure that the ports on which NGINX should provide its service are open in firewalld:
  # firewall-cmd --list-ports
  80/tcp 443/tcp
- Verify that the nginx service is enabled:
  # systemctl is-enabled nginx
  enabled
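- Optionally, if the nginx service is running, check that the default page is served on port 80, for example with curl. The command returns the HTML of the default page from the /usr/share/nginx/html/ directory; replace the placeholder with your server address:
  # curl http://IP_address_of_the_server/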
2.2. Configuring NGINX as a web server that provides different content for different domains
By default, NGINX acts as a web server that provides the same content to clients for all domain names associated with the IP addresses of the server. This procedure explains how to configure NGINX:
- To serve requests to the example.com domain with content from the /var/www/example.com/ directory
- To serve requests to the example.net domain with content from the /var/www/example.net/ directory
- To serve all other requests, for example, to the IP address of the server or to other domains associated with the IP address of the server, with content from the /usr/share/nginx/html/ directory
Prerequisites
- NGINX is installed.
- Clients and the web server resolve the example.com and example.net domains to the IP address of the web server. Note that you must manually add these entries to your DNS server.
Procedure
- Edit the /etc/nginx/nginx.conf file:
  By default, the /etc/nginx/nginx.conf file already contains a catch-all configuration. If you have deleted this part from the configuration, re-add the following server block to the http block in the /etc/nginx/nginx.conf file:
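  For example, a minimal sketch of the catch-all block; the exact block shipped in the default /etc/nginx/nginx.conf can differ between versions:
  server {
      listen       80 default_server;
      listen       [::]:80 default_server;
      server_name  _;
      root         /usr/share/nginx/html;
  }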
  These settings configure the following:
  - The listen directive defines the IP addresses and ports on which the service listens. In this case, NGINX listens on port 80 on all IPv4 and IPv6 addresses. The default_server parameter indicates that NGINX uses this server block as the default for requests matching the IP addresses and ports.
  - The server_name parameter defines the host names for which this server block is responsible. Setting server_name to _ configures NGINX to accept any host name for this server block.
  - The root directive sets the path to the web content for this server block.
- Append a similar server block for the example.com domain to the http block:
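  For example, a sketch that uses the directories created later in this procedure; the log file names access.log and error.log are illustrative:
  server {
      server_name  example.com;
      root         /var/www/example.com/;
      access_log   /var/log/nginx/example.com/access.log;   # illustrative file name
      error_log    /var/log/nginx/example.com/error.log;    # illustrative file name
  }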
  - The access_log directive defines a separate access log file for this domain.
  - The error_log directive defines a separate error log file for this domain.
- Append a similar server block for the example.net domain to the http block:
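  For example, again with illustrative log file names:
  server {
      server_name  example.net;
      root         /var/www/example.net/;
      access_log   /var/log/nginx/example.net/access.log;   # illustrative file name
      error_log    /var/log/nginx/example.net/error.log;    # illustrative file name
  }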
- Create the root directories for both domains:
  # mkdir -p /var/www/example.com/
  # mkdir -p /var/www/example.net/
- Set the httpd_sys_content_t context on both root directories:
  # semanage fcontext -a -t httpd_sys_content_t "/var/www/example.com(/.*)?"
  # restorecon -Rv /var/www/example.com/
  # semanage fcontext -a -t httpd_sys_content_t "/var/www/example.net(/.*)?"
  # restorecon -Rv /var/www/example.net/
  These commands set the httpd_sys_content_t context on the /var/www/example.com/ and /var/www/example.net/ directories.
  Note that you must install the policycoreutils-python-utils package to run the semanage commands.
- Create the log directories for both domains:
  # mkdir /var/log/nginx/example.com/
  # mkdir /var/log/nginx/example.net/
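- Optional: Check the configuration for syntax errors before you restart the service, for example:
  # nginx -t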
- Restart the nginx service:
  # systemctl restart nginx
Verification
- Create a different example file in each virtual host’s document root:
  # echo "Content for example.com" > /var/www/example.com/index.html
  # echo "Content for example.net" > /var/www/example.net/index.html
  # echo "Catch All content" > /usr/share/nginx/html/index.html
- Use a browser and connect to http://example.com. The web server shows the example content from the /var/www/example.com/index.html file.
- Use a browser and connect to http://example.net. The web server shows the example content from the /var/www/example.net/index.html file.
- Use a browser and connect to http://IP_address_of_the_server. The web server shows the example content from the /usr/share/nginx/html/index.html file.
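- If no graphical browser is available, you can run the same checks with curl, for example:
  # curl http://example.com
  # curl http://example.net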
2.3. Adding TLS encryption to an NGINX web server
You can enable TLS encryption on an NGINX web server for the example.com domain.
Prerequisites
- NGINX is installed. For more details, see Installing and preparing NGINX.
- The private key is stored in the /etc/pki/tls/private/example.com.key file.
  For details about creating a private key and certificate signing request (CSR), as well as how to request a certificate from a certificate authority (CA), see your CA’s documentation.
- The TLS certificate is stored in the /etc/pki/tls/certs/example.com.crt file. If you use a different path, adapt the corresponding steps of the procedure.
- The CA certificate has been appended to the TLS certificate file of the server.
- Clients and the web server resolve the host name of the server to the IP address of the web server.
- Port 443 is open in the local firewall.
- If the server runs RHEL 9.2 or later and FIPS mode is enabled, clients must either support the Extended Master Secret (EMS) extension or use TLS 1.3. TLS 1.2 connections without EMS fail. For more information, see the Red Hat Knowledgebase solution TLS extension "Extended Master Secret" enforced.
Procedure
- Edit the /etc/nginx/nginx.conf file, and add the following server block to the http block in the configuration:
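  A minimal sketch of such a block, assuming the certificate and key paths from the prerequisites; the root directory shown is the default content path and should be adapted to your site:
  server {
      listen              443 ssl;
      server_name         example.com;
      root                /usr/share/nginx/html;              # adapt to your content directory
      ssl_certificate     /etc/pki/tls/certs/example.com.crt;
      ssl_certificate_key /etc/pki/tls/private/example.com.key;
  }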
- Optional: Starting with RHEL 9.3, you can use the ssl_pass_phrase_dialog directive to configure an external program that is called at nginx start for each encrypted private key. Add one of the following lines to the /etc/nginx/nginx.conf file:
  - To call an external program for each encrypted private key file, enter:
    ssl_pass_phrase_dialog exec:<path_to_program>;
    NGINX calls this program with the following two arguments:
    - The server name specified in the server_name setting.
    - One of the following algorithms: RSA, DSA, EC, DH, or UNK if a cryptographic algorithm cannot be recognized.
  - If you want to manually enter a passphrase for each encrypted private key file, enter:
    ssl_pass_phrase_dialog builtin;
    This is the default behavior if ssl_pass_phrase_dialog is not configured.
    Note: The nginx service fails to start if you use this method but have at least one private key protected by a passphrase. In this case, use one of the other methods.
  - If you want systemd to prompt for the passphrase for each encrypted private key when you start the nginx service by using the systemctl utility, enter:
    ssl_pass_phrase_dialog exec:/usr/libexec/nginx-ssl-pass-dialog;
- For security reasons, ensure that only the root user can access the private key file:
  # chown root:root /etc/pki/tls/private/example.com.key
  # chmod 600 /etc/pki/tls/private/example.com.key
  Warning: If the private key was accessed by unauthorized users, revoke the certificate, create a new private key, and request a new certificate. Otherwise, the TLS connection is no longer secure.
- Restart the nginx service:
  # systemctl restart nginx
Verification
- Use a browser and connect to https://example.com.
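- Optionally, inspect the certificate that the server presents from the command line, for example:
  # openssl s_client -connect example.com:443 -servername example.com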
2.4. Configuring NGINX as a reverse proxy for HTTP traffic
You can configure the NGINX web server to act as a reverse proxy for HTTP traffic. For example, you can use this functionality to forward requests to a specific subdirectory on a remote server. From the client's perspective, the content appears to come from the host it accesses. However, NGINX loads the actual content from the remote server and forwards it to the client.
This procedure explains how to forward requests for the /example directory on the web server to the URL https://example.com.
Prerequisites
- NGINX is installed.
- Optional: TLS encryption is enabled on the reverse proxy.
Procedure
- Edit the /etc/nginx/nginx.conf file and add the following settings to the server block that should provide the reverse proxy:
  location /example {
      proxy_pass https://example.com;
  }
  The location block defines that NGINX passes all requests in the /example directory to https://example.com.
- Set the httpd_can_network_connect SELinux boolean parameter to 1 to configure that SELinux allows NGINX to forward traffic:
  # setsebool -P httpd_can_network_connect 1
- Restart the nginx service:
  # systemctl restart nginx
Verification
- Use a browser and connect to http://host_name/example. The content of https://example.com is shown.
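- Alternatively, check from the command line, for example:
  # curl http://host_name/example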
2.5. Configuring NGINX as an HTTP load balancer
You can use the NGINX reverse proxy feature to load-balance traffic. This procedure describes how to configure NGINX as an HTTP load balancer that sends requests to different servers, based on which of them has the least number of active connections. If neither of these servers is available, the procedure also defines a third host as a fallback.
Prerequisites
- NGINX is installed.
Procedure
- Edit the /etc/nginx/nginx.conf file and add the following settings:
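  For example, a sketch of the settings inside the http block, consistent with the description below; adapt the host names and the server block to your environment:
  upstream backend {
      least_conn;
      server server1.example.com;
      server server2.example.com;
      server server3.example.com backup;
  }

  server {
      location / {
          proxy_pass http://backend;
      }
  }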
  The least_conn directive in the host group named backend defines that NGINX sends requests to server1.example.com or server2.example.com, depending on which host has the least number of active connections. NGINX uses server3.example.com only as a backup in case the other two hosts are not available.
  With the proxy_pass directive set to http://backend, NGINX acts as a reverse proxy and uses the backend host group to distribute requests based on the settings of this group.
  Instead of the least_conn load balancing method, you can specify:
  - No method to use round robin and distribute requests evenly across servers.
  - ip_hash to send requests from one client address to the same server based on a hash calculated from the first three octets of the IPv4 address or the whole IPv6 address of the client.
  - hash to determine the server based on a user-defined key, which can be a string, a variable, or a combination of both. The consistent parameter configures that NGINX distributes requests across all servers based on the user-defined hashed key value. See the example after this list.
  - random to send requests to a randomly selected server.
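  For example, a hash method keyed on the request URI with consistent hashing could look like this; the $request_uri key is only an illustration:
  upstream backend {
      hash $request_uri consistent;
      server server1.example.com;
      server server2.example.com;
  }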
- Restart the nginx service:
  # systemctl restart nginx