Chapter 5. Setting up and configuring NGINX
NGINX is a high-performance and modular server that you can use as a web server, a reverse proxy, or an HTTP load balancer.
5.1. Installing and preparing NGINX
Red Hat uses Application Streams to provide different versions of NGINX. With Application Streams, you can select a stream, install NGINX, open the required ports in the firewall, and enable and start the `nginx` service.

By default, NGINX runs as a web server on port 80 and serves content from the `/usr/share/nginx/html/` directory.
Prerequisites
- You have a Red Hat subscription.
- The `firewalld` service is enabled and running.
Procedure
1. Install the `nginx` package:

   ```console
   # dnf install nginx
   ```

2. Open the ports on which NGINX should run its service in the firewall. For example, to open the default ports for HTTP (port 80) and HTTPS (port 443) in `firewalld`, enter:

   ```console
   # firewall-cmd --permanent --add-port={80/tcp,443/tcp}
   # firewall-cmd --reload
   ```

3. Enable the `nginx` service to start automatically when the system boots:

   ```console
   # systemctl enable nginx
   ```

4. Optional: Start the `nginx` service:

   ```console
   # systemctl start nginx
   ```

   If you do not want to use the default configuration, skip this step, and configure NGINX before you start the service.
Verification
- Verify the installation of the `nginx` package:

  ```console
  # dnf list installed nginx
  Installed Packages
  nginx.x86_64    1:1.14.1-9.module+el8.0.0+4108+af250afe    @rhel-8-for-x86_64-appstream-rpms
  ```

- Verify the ports allowed through the firewall on which NGINX should run its service:

  ```console
  # firewall-cmd --list-ports
  80/tcp 443/tcp
  ```

- Verify that the `nginx` service is enabled:

  ```console
  # systemctl is-enabled nginx
  enabled
  ```
5.2. Configuring NGINX as a web server to distribute different content for different domains
To optimize resource usage and management, you can configure the NGINX web server to distribute different content for different domains. By default, NGINX distributes the same content to clients for all domain names associated with the IP addresses of the server.
For example, you can configure NGINX to serve:

- Requests to the `example.com` domain with content from the `/var/www/example.com/` directory
- Requests to the `example.net` domain with content from the `/var/www/example.net/` directory
- All other requests with content from the `/usr/share/nginx/html/` directory
Prerequisites
- NGINX is installed.
- Clients and the web server resolve the `example.com` and `example.net` domains to the IP address of the web server. Note that you must manually add these entries to your DNS server.
Procedure
1. Edit the `/etc/nginx/nginx.conf` file:

   By default, the `/etc/nginx/nginx.conf` file already contains a catch-all configuration. If you have deleted this part from the configuration, re-add the following `server` block to the `http` block in the `/etc/nginx/nginx.conf` file:

   ```nginx
   server {
       listen       80 default_server;
       listen       [::]:80 default_server;
       server_name  _;
       root         /usr/share/nginx/html;
   }
   ```

   - The `listen` directive defines which IP addresses and ports the service listens on. In this case, NGINX listens on port `80` on all IPv4 and IPv6 addresses. The `default_server` parameter indicates that NGINX uses this `server` block as the default for requests that match the IP addresses and ports.
   - The `server_name` parameter defines the host names for which this `server` block is responsible. Setting `server_name` to `_` configures NGINX to accept any host name for this `server` block.
   - The `root` directive sets the path to the web content for this `server` block.

2. Append a similar `server` block for the `example.com` domain to the `http` block:

   ```nginx
   server {
       server_name  example.com;
       root         /var/www/example.com/;
       access_log   /var/log/nginx/example.com/access.log;
       error_log    /var/log/nginx/example.com/error.log;
   }
   ```

   - The `access_log` directive defines a separate access log file for this domain.
   - The `error_log` directive defines a separate error log file for this domain.

3. Append a similar `server` block for the `example.net` domain to the `http` block:

   ```nginx
   server {
       server_name  example.net;
       root         /var/www/example.net/;
       access_log   /var/log/nginx/example.net/access.log;
       error_log    /var/log/nginx/example.net/error.log;
   }
   ```
4. Create the main directories for both domains:

   ```console
   # mkdir -p /var/www/example.com/
   # mkdir -p /var/www/example.net/
   ```

5. Set the `httpd_sys_content_t` context on both main directories:

   ```console
   # semanage fcontext -a -t httpd_sys_content_t "/var/www/example.com(/.*)?"
   # restorecon -Rv /var/www/example.com/
   # semanage fcontext -a -t httpd_sys_content_t "/var/www/example.net(/.*)?"
   # restorecon -Rv /var/www/example.net/
   ```

   These commands set the `httpd_sys_content_t` context on the `/var/www/example.com/` and `/var/www/example.net/` directories.

   Note that you must install the `policycoreutils-python-utils` package to run the `restorecon` commands.

6. Create the log directories for both domains:

   ```console
   # mkdir /var/log/nginx/example.com/
   # mkdir /var/log/nginx/example.net/
   ```

7. Restart the `nginx` service:

   ```console
   # systemctl restart nginx
   ```
Verification
1. Create a different example file in each virtual host's document root:

   ```console
   # echo "Content for example.com" > /var/www/example.com/index.html
   # echo "Content for example.net" > /var/www/example.net/index.html
   # echo "Catch All content" > /usr/share/nginx/html/index.html
   ```

2. Use a browser and connect to `http://example.com`. The web server shows the example content from the `/var/www/example.com/index.html` file.
3. Use a browser and connect to `http://example.net`. The web server shows the example content from the `/var/www/example.net/index.html` file.
4. Use a browser and connect to `http://IP_address_of_the_server`. The web server shows the example content from the `/usr/share/nginx/html/index.html` file.
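Conceptually, the three `server` blocks from the procedure above sit side by side in the `http` block. The following sketch shows how they fit together; it assumes the same example domains and paths as above and omits the log directives for brevity:

```nginx
http {
    # Catch-all default: answers requests that match no other server_name
    server {
        listen       80 default_server;
        listen       [::]:80 default_server;
        server_name  _;
        root         /usr/share/nginx/html;
    }

    # Name-based virtual host for example.com
    server {
        server_name  example.com;
        root         /var/www/example.com/;
    }

    # Name-based virtual host for example.net
    server {
        server_name  example.net;
        root         /var/www/example.net/;
    }
}
```

NGINX compares the `Host` header of each request against the `server_name` values and falls back to the `default_server` block when none matches.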
5.3. Adding TLS encryption to an NGINX web server
To protect against eavesdropping and man-in-the-middle attacks, you can enable Transport Layer Security (TLS) protocol encryption on an NGINX web server.
Prerequisites
- You have installed NGINX. For details, see Installing and preparing NGINX.
- The private key is stored in the `/etc/pki/tls/private/example.com.key` file.

  For details about creating a private key and certificate signing request (CSR), and how to request a certificate from a certificate authority (CA), see the documentation of your CA.

- The TLS certificate is stored in the `/etc/pki/tls/certs/example.com.crt` file. If you use a different path, adapt the corresponding steps of the procedure.
- The CA certificate has been appended to the TLS certificate file of the server.
- Clients and the web server resolve the host name of the server to the IP address of the web server.
- Port `443` is open in the local firewall.
- If the server runs Red Hat Enterprise Linux 10 and the Federal Information Processing Standards (FIPS) mode is enabled, clients must either support the Extended Master Secret (EMS) extension or use Transport Layer Security (TLS) 1.3. TLS 1.2 connections without EMS fail. For details, see the Red Hat Knowledgebase solution "TLS extension Extended Master Secret enforced".
Procedure
1. Edit the `/etc/nginx/nginx.conf` file, and add the following `server` block to the `http` block in the configuration:

   ```nginx
   server {
       listen              443 ssl;
       server_name         example.com;
       root                /usr/share/nginx/html;
       ssl_certificate     /etc/pki/tls/certs/example.com.crt;
       ssl_certificate_key /etc/pki/tls/private/example.com.key;
   }
   ```

2. Optional: Starting with RHEL 9.3, you can use the `ssl_pass_phrase_dialog` directive to configure an external program that NGINX calls at startup for each encrypted private key. Add one of the following lines to the `/etc/nginx/nginx.conf` file:

   - To call an external program for each encrypted private key file, enter:

     ```nginx
     ssl_pass_phrase_dialog exec:<path_to_program>;
     ```

     NGINX calls this program with the following two arguments:

     - The server name specified in the `server_name` setting.
     - One of the following algorithms: `RSA`, `DSA`, `EC`, `DH`, or `UNK` if NGINX cannot recognize the cryptographic algorithm.

   - If you want to manually enter a passphrase for each encrypted private key file, enter:

     ```nginx
     ssl_pass_phrase_dialog builtin;
     ```

     This is the default behavior if `ssl_pass_phrase_dialog` is not configured.

     Note that the `nginx` service fails to start if you use this method but have at least one private key protected by a passphrase. In this case, use one of the other methods.

   - If you want `systemd` to prompt for the passphrase for each encrypted private key when you start the `nginx` service by using the `systemctl` utility, enter:

     ```nginx
     ssl_pass_phrase_dialog exec:/usr/libexec/nginx-ssl-pass-dialog;
     ```
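As an illustration of the `exec:` option, the following is a minimal sketch of such a passphrase program. The function name, the `/etc/nginx/passphrases` directory, and the `PASSDIR` override are assumptions for this example, not part of NGINX; the only contract is that the program receives the server name and key algorithm as arguments and prints the passphrase on stdout. It is written as a shell function here for clarity; in practice you would save the same logic as an executable script and reference that path in `ssl_pass_phrase_dialog`.

```shell
# Hypothetical passphrase helper for "ssl_pass_phrase_dialog exec:...".
# NGINX invokes the program with two arguments:
#   $1 = the server_name of the server block (for example, example.com)
#   $2 = the key algorithm (RSA, DSA, EC, DH, or UNK)
# The program must print the passphrase for the matching key on stdout.
nginx_key_passphrase() {
    # PASSDIR is an assumed override hook for this sketch; a real helper
    # could hard-code a root-only directory such as /etc/nginx/passphrases.
    passdir="${PASSDIR:-/etc/nginx/passphrases}"
    cat "${passdir}/$1"
}
```

If you store per-server passphrase files this way, restrict them to the `root` user with `chmod 600`, the same as the private keys themselves.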
3. For security reasons, configure that only the `root` user can access the private key file:

   ```console
   # chown root:root /etc/pki/tls/private/example.com.key
   # chmod 600 /etc/pki/tls/private/example.com.key
   ```

   Warning: If unauthorized users have access to the private key, revoke the certificate, create a new private key, and request a new certificate. Otherwise, the TLS connection is no longer secure.

4. Restart the `nginx` service:

   ```console
   # systemctl restart nginx
   ```
Verification
- Use a browser and connect to `https://example.com`.
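A common companion to the configuration above is redirecting plain HTTP requests to HTTPS, so that clients who enter the bare domain still reach the encrypted site. This is a minimal sketch, assuming the same `example.com` server name; it is optional and not required for TLS itself:

```nginx
server {
    listen      80;
    listen      [::]:80;
    server_name example.com;
    # 301 = permanent redirect; $request_uri preserves the requested path
    return      301 https://example.com$request_uri;
}
```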
5.4. Configuring NGINX as a reverse proxy for the HTTP traffic
To forward requests to a specific subdirectory on a remote server, you can configure the NGINX web server to act as a reverse proxy for HTTP traffic.
From the client perspective, the client loads the content from the host it accesses. However, NGINX loads the actual content from the remote server and forwards it to the client. You can configure NGINX to forward traffic from the /example directory on the web server to the URL https://example.com.
Prerequisites
- NGINX is installed.
- Optional: TLS encryption is enabled on the reverse proxy.
Procedure
1. Edit the `/etc/nginx/nginx.conf` file and add the following settings to the `server` block that should provide the reverse proxy:

   ```nginx
   location /example {
       proxy_pass https://example.com;
   }
   ```

   The `location` block defines that NGINX passes all requests in the `/example` directory to `https://example.com`.

2. Set the `httpd_can_network_connect` SELinux boolean parameter to `1` to configure that SELinux allows NGINX to forward traffic:

   ```console
   # setsebool -P httpd_can_network_connect 1
   ```

3. Restart the `nginx` service:

   ```console
   # systemctl restart nginx
   ```
Verification
- Use a browser and connect to `http://host_name/example`. The web server shows the content of `https://example.com`.
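By default, `proxy_pass` rewrites the `Host` header to the upstream name, and the backend only sees the proxy's IP address. If the remote server needs to know the original host or client address, you can extend the `location` block from the procedure with `proxy_set_header` directives. This is a sketch of commonly used headers; whether the backend honors `X-Forwarded-For` and `X-Forwarded-Proto` depends on that application:

```nginx
location /example {
    proxy_pass         https://example.com;
    # Pass the host name the client originally requested
    proxy_set_header   Host              $host;
    # Pass the client IP address to the backend
    proxy_set_header   X-Real-IP         $remote_addr;
    proxy_set_header   X-Forwarded-For   $proxy_add_x_forwarded_for;
    # Tell the backend whether the client used http or https
    proxy_set_header   X-Forwarded-Proto $scheme;
}
```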
5.5. Configuring NGINX as an HTTP load balancer
To distribute requests across multiple servers and set up a fallback host, you can use the NGINX reverse proxy feature for load balancing.

Configuring NGINX as an HTTP load balancer directs traffic to several servers. In this example, the load balancer selects the server with the least number of active connections. If the primary servers are unavailable, NGINX automatically sends requests to the fallback host.
Prerequisites
- You have installed NGINX.
Procedure
1. Edit the `/etc/nginx/nginx.conf` file and add the following settings:

   ```nginx
   http {
       upstream backend {
           least_conn;
           server server1.example.com;
           server server2.example.com;
           server server3.example.com backup;
       }

       server {
           location / {
               proxy_pass http://backend;
           }
       }
   }
   ```

   The `least_conn` directive in the host group named `backend` defines that NGINX sends requests to `server1.example.com` or `server2.example.com`, depending on which host has the least number of active connections. NGINX uses `server3.example.com` only as a backup in case the other two hosts are unavailable.

   With the `proxy_pass` directive set to `http://backend`, NGINX acts as a reverse proxy and uses the `backend` host group to distribute requests based on the settings of this group.

   Instead of the `least_conn` load balancing method, you can specify:

   - No method, to use round robin and distribute requests evenly across servers.
   - `ip_hash`: Sends requests from one client address to the same server, based on a hash calculated from the first three octets of the IPv4 address or the whole IPv6 address of the client.
   - `hash`: Determines the server based on a user-defined key, which can be a string, a variable, or a combination of both. The `consistent` parameter configures that NGINX distributes requests across all servers based on the user-defined hashed key value.
   - `random`: Sends requests to a randomly selected server.
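For illustration, the `upstream` block from the step above can be rewritten with each of the alternative methods. These sketches reuse the same example host names; only the first line inside the block changes:

```nginx
# Round robin: no method directive
upstream backend {
    server server1.example.com;
    server server2.example.com;
}

# ip_hash: requests from the same client address go to the same server
upstream backend {
    ip_hash;
    server server1.example.com;
    server server2.example.com;
}

# hash with a user-defined key; "consistent" enables consistent hashing
upstream backend {
    hash $request_uri consistent;
    server server1.example.com;
    server server2.example.com;
}

# random: pick a server at random
upstream backend {
    random;
    server server1.example.com;
    server server2.example.com;
}
```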
2. Restart the `nginx` service:

   ```console
   # systemctl restart nginx
   ```