Chapter 2. Setting up and configuring NGINX


NGINX is a high-performance and modular server that you can use, for example, as a:

  • Web server
  • Reverse proxy
  • Load balancer

2.1. Installing and preparing NGINX

Red Hat uses Application Streams to provide different versions of NGINX. You can do the following:

  • Select a stream and install NGINX
  • Open the required ports in the firewall
  • Enable and start the nginx service

Using the default configuration, NGINX runs as a web server on port 80 and provides content from the /usr/share/nginx/html/ directory.

Prerequisites

  • The host is subscribed to the Red Hat Customer Portal.
  • The firewalld service is enabled and started.

Procedure

  1. Install the nginx package:

    # dnf install nginx
  2. Open the ports on which NGINX should provide its service in the firewall. For example, to open the default ports for HTTP (port 80) and HTTPS (port 443) in firewalld, enter:

    # firewall-cmd --permanent --add-port={80/tcp,443/tcp}
    # firewall-cmd --reload
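
    The single --add-port argument in the command above relies on bash brace expansion rather than firewall-cmd syntax: the shell expands it into two separate --add-port options before firewall-cmd runs. You can see the expansion with echo:

```shell
# Bash expands the braces before the command executes, so the single
# argument becomes two --add-port options:
echo firewall-cmd --permanent --add-port={80/tcp,443/tcp}
# prints: firewall-cmd --permanent --add-port=80/tcp --add-port=443/tcp
```
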
  3. Enable the nginx service to start automatically when the system boots:

    # systemctl enable nginx
  4. Optional: Start the nginx service:

    # systemctl start nginx

    If you do not want to use the default configuration, skip this step, and configure NGINX accordingly before you start the service.

Verification

  1. Use the dnf utility to verify that the nginx package is installed:

    # dnf list installed nginx
    Installed Packages
    nginx.x86_64    1:1.14.1-9.module+el8.0.0+4108+af250afe    @rhel-8-for-x86_64-appstream-rpms
  2. Ensure that the ports on which NGINX should provide its service are open in firewalld:

    # firewall-cmd --list-ports
    80/tcp 443/tcp
  3. Verify that the nginx service is enabled:

    # systemctl is-enabled nginx
    enabled

2.2. Configuring NGINX as a web server that provides different content for different domains

By default, NGINX acts as a web server that provides the same content to clients for all domain names associated with the IP addresses of the server. This procedure explains how to configure NGINX:

  • To serve requests to the example.com domain with content from the /var/www/example.com/ directory
  • To serve requests to the example.net domain with content from the /var/www/example.net/ directory
  • To serve all other requests, for example, to the IP address of the server or to other domains associated with the IP address of the server, with content from the /usr/share/nginx/html/ directory

Prerequisites

  • NGINX is installed.
  • Clients and the web server resolve the example.com and example.net domain to the IP address of the web server.

    Note that you must manually add these entries to your DNS server.

Procedure

  1. Edit the /etc/nginx/nginx.conf file:

    1. By default, the /etc/nginx/nginx.conf file already contains a catch-all configuration. If you have deleted this part from the configuration, re-add the following server block to the http block in the /etc/nginx/nginx.conf file:

      server {
          listen       80 default_server;
          listen       [::]:80 default_server;
          server_name  _;
          root         /usr/share/nginx/html;
      }

      These settings configure the following:

      • The listen directives define on which IP addresses and ports the service listens. In this case, NGINX listens on port 80 on all IPv4 and IPv6 addresses. The default_server parameter indicates that NGINX uses this server block as the default for requests that match the listed IP addresses and ports.
      • The server_name directive defines the host names for which this server block is responsible. Setting server_name to _ configures NGINX to accept any host name for this server block.
      • The root directive sets the path to the web content for this server block.
    2. Append a similar server block for the example.com domain to the http block:

      server {
          server_name  example.com;
          root         /var/www/example.com/;
          access_log   /var/log/nginx/example.com/access.log;
          error_log    /var/log/nginx/example.com/error.log;
      }
      • The access_log directive defines a separate access log file for this domain.
      • The error_log directive defines a separate error log file for this domain.
    3. Append a similar server block for the example.net domain to the http block:

      server {
          server_name  example.net;
          root         /var/www/example.net/;
          access_log   /var/log/nginx/example.net/access.log;
          error_log    /var/log/nginx/example.net/error.log;
      }
  2. Create the root directories for both domains:

    # mkdir -p /var/www/example.com/
    # mkdir -p /var/www/example.net/
  3. Set the httpd_sys_content_t context on both root directories:

    # semanage fcontext -a -t httpd_sys_content_t "/var/www/example.com(/.*)?"
    # restorecon -Rv /var/www/example.com/
    # semanage fcontext -a -t httpd_sys_content_t "/var/www/example.net(/.*)?"
    # restorecon -Rv /var/www/example.net/

    These commands set the httpd_sys_content_t context on the /var/www/example.com/ and /var/www/example.net/ directories.

    Note that you must install the policycoreutils-python-utils package to run the semanage commands.

  4. Create the log directories for both domains:

    # mkdir /var/log/nginx/example.com/
    # mkdir /var/log/nginx/example.net/
  5. Restart the nginx service:

    # systemctl restart nginx

Verification

  1. Create a different example file in each virtual host’s document root:

    # echo "Content for example.com" > /var/www/example.com/index.html
    # echo "Content for example.net" > /var/www/example.net/index.html
    # echo "Catch All content" > /usr/share/nginx/html/index.html
  2. Use a browser and connect to http://example.com. The web server shows the example content from the /var/www/example.com/index.html file.
  3. Use a browser and connect to http://example.net. The web server shows the example content from the /var/www/example.net/index.html file.
  4. Use a browser and connect to http://IP_address_of_the_server. The web server shows the example content from the /usr/share/nginx/html/index.html file.

2.3. Adding TLS encryption to an NGINX web server

You can enable TLS encryption on an NGINX web server for the example.com domain.

Prerequisites

  • NGINX is installed. For more details, see Installing and preparing NGINX.
  • The private key is stored in the /etc/pki/tls/private/example.com.key file.

    For details about creating a private key and certificate signing request (CSR), as well as how to request a certificate from a certificate authority (CA), see your CA’s documentation.

  • The TLS certificate is stored in the /etc/pki/tls/certs/example.com.crt file. If you use a different path, adapt the corresponding steps of the procedure.
  • The CA certificate has been appended to the TLS certificate file of the server.
  • Clients and the web server resolve the host name of the server to the IP address of the web server.
  • Port 443 is open in the local firewall.
  • If the server runs RHEL 9.2 or later and the FIPS mode is enabled, clients must either support the Extended Master Secret (EMS) extension or use TLS 1.3. TLS 1.2 connections without EMS fail. For more information, see the Red Hat Knowledgebase solution TLS extension "Extended Master Secret" enforced.

Procedure

  1. Edit the /etc/nginx/nginx.conf file, and add the following server block to the http block in the configuration:

    server {
        listen              443 ssl;
        server_name         example.com;
        root                /usr/share/nginx/html;
        ssl_certificate     /etc/pki/tls/certs/example.com.crt;
        ssl_certificate_key /etc/pki/tls/private/example.com.key;
    }
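
    Optionally, you can also redirect plain-HTTP requests for the domain to HTTPS. The following companion server block is a sketch and not part of the original procedure:

```nginx
# Hypothetical addition: redirect HTTP requests for example.com to HTTPS
server {
    listen       80;
    listen       [::]:80;
    server_name  example.com;
    return       301 https://$host$request_uri;
}
```
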
  2. Optional: Starting with RHEL 9.3, you can use the ssl_pass_phrase_dialog directive to configure an external program that is called at nginx start for each encrypted private key. Add one of the following lines to the /etc/nginx/nginx.conf file:

    • To call an external program for each encrypted private key file, enter:

      ssl_pass_phrase_dialog exec:<path_to_program>;

      NGINX calls this program with the following two arguments:

      • The server name specified in the server_name setting.
      • One of the following algorithms: RSA, DSA, EC, DH, or UNK if a cryptographic algorithm cannot be recognized.
    • If you want to manually enter a passphrase for each encrypted private key file, enter:

      ssl_pass_phrase_dialog builtin;

      This is the default behavior if ssl_pass_phrase_dialog is not configured.

      Note

      The nginx service fails to start if you use this method but have at least one private key protected by a passphrase. In this case, use one of the other methods.

    • If you want systemd to prompt for the passphrase for each encrypted private key when you start the nginx service by using the systemctl utility, enter:

      ssl_pass_phrase_dialog exec:/usr/libexec/nginx-ssl-pass-dialog;
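
    As a sketch of an external program for the exec: variant described above, the following helper looks up a passphrase by server name. The function name and the passphrase are hypothetical:

```shell
# Sketch of a passphrase helper (hypothetical logic) for
# "ssl_pass_phrase_dialog exec:<path_to_program>;". NGINX invokes the
# program with the server name and the key algorithm as arguments and
# reads the passphrase from the program's standard output.
passphrase_helper() {
    server_name="$1"   # for example: example.com
    algorithm="$2"     # RSA, DSA, EC, DH, or UNK
    case "$server_name" in
        example.com) echo "placeholder-passphrase" ;;  # hypothetical secret
        *) return 1 ;;
    esac
}

passphrase_helper example.com RSA    # prints: placeholder-passphrase
```

    A real helper would be a standalone executable script, referenced for example as ssl_pass_phrase_dialog exec:/usr/local/bin/passphrase_helper; (the path is hypothetical), and should be readable only by root.
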
  3. For security reasons, ensure that only the root user can access the private key file:

    # chown root:root /etc/pki/tls/private/example.com.key
    # chmod 600 /etc/pki/tls/private/example.com.key
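
    You can verify the resulting mode with the stat utility. The following check runs on a scratch file, because inspecting the real key at /etc/pki/tls/private/example.com.key requires root:

```shell
# Demonstrate the expected owner-only mode on a temporary file
# (a stand-in for the real private key file):
keyfile=$(mktemp)
chmod 600 "$keyfile"
stat -c '%a' "$keyfile"    # prints: 600
rm -f "$keyfile"
```
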
    Warning

    If the private key was accessed by unauthorized users, revoke the certificate, create a new private key, and request a new certificate. Otherwise, the TLS connection is no longer secure.

  4. Restart the nginx service:

    # systemctl restart nginx

Verification

  • Use a browser and connect to https://example.com

2.4. Configuring NGINX as a reverse proxy for the HTTP traffic

You can configure the NGINX web server to act as a reverse proxy for HTTP traffic. For example, you can use this functionality to forward requests to a specific subdirectory on a remote server. From the client perspective, the client loads the content from the host it accesses. However, NGINX loads the actual content from the remote server and forwards it to the client.

This procedure explains how to forward requests for the /example directory on the web server to the URL https://example.com.

Prerequisites

  • NGINX is installed.
  • Optional: TLS encryption is enabled on the reverse proxy.

Procedure

  1. Edit the /etc/nginx/nginx.conf file and add the following settings to the server block that should provide the reverse proxy:

    location /example {
        proxy_pass https://example.com;
    }

    The location block defines that NGINX passes all requests in the /example directory to https://example.com.
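
    Because the proxy_pass value above contains no URI part, NGINX forwards the original request path, including the /example prefix, to the remote server. If the remote server expects a different prefix, you can append a URI to proxy_pass. In the following sketch (the /app/ prefix is a hypothetical example), NGINX replaces the matched /example/ prefix with /app/:

```nginx
location /example/ {
    # A request for /example/page.html is forwarded as
    # https://example.com/app/page.html
    proxy_pass https://example.com/app/;
}
```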

  2. Set the httpd_can_network_connect SELinux boolean parameter to 1 so that SELinux allows NGINX to forward traffic:

    # setsebool -P httpd_can_network_connect 1
  3. Restart the nginx service:

    # systemctl restart nginx

Verification

  • Use a browser to connect to http://host_name/example. The web server shows the content of https://example.com.

2.5. Configuring NGINX as an HTTP load balancer

You can use the NGINX reverse proxy feature to load-balance traffic. This procedure describes how to configure NGINX as an HTTP load balancer that sends requests to different servers, based on which of them has the least number of active connections. The procedure also defines a third host as a fallback in case neither of the first two servers is available.

Prerequisites

  • NGINX is installed.

Procedure

  1. Edit the /etc/nginx/nginx.conf file and add the following settings:

    http {
        upstream backend {
            least_conn;
            server server1.example.com;
            server server2.example.com;
            server server3.example.com backup;
        }
    
        server {
            location / {
                proxy_pass http://backend;
            }
        }
    }

    The least_conn directive in the host group named backend defines that NGINX sends requests to server1.example.com or server2.example.com, depending on which host has the least number of active connections. NGINX uses server3.example.com only as a backup if the other two hosts are unavailable.

    With the proxy_pass directive set to http://backend, NGINX acts as a reverse proxy and uses the backend host group to distribute requests based on the settings of this group.

    Instead of the least_conn load balancing method, you can specify:

    • No method to use round robin and distribute requests evenly across servers.
    • ip_hash to send requests from one client address to the same server based on a hash calculated from the first three octets of the IPv4 address or the whole IPv6 address of the client.
    • hash to determine the server based on a user-defined key, which can be a string, a variable, or a combination of both. The optional consistent parameter enables consistent hashing, which minimizes the redistribution of keys when servers are added to or removed from the group.
    • random to send requests to a randomly selected server.
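
    For example, a sketch of the backend host group using consistent hashing on the request URI instead of least_conn:

```nginx
upstream backend {
    # Select the server from a hash of the request URI; the consistent
    # parameter enables ketama consistent hashing. Note that the backup
    # parameter cannot be combined with the hash method.
    hash $request_uri consistent;
    server server1.example.com;
    server server2.example.com;
}
```
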
  2. Restart the nginx service:

    # systemctl restart nginx