
Chapter 3. Load balancing with the JBoss HTTP connector (mod_proxy_cluster)


The mod_proxy_cluster connector is a reduced-configuration, intelligent load-balancing solution that allows the Apache HTTP Server to connect to back-end JBoss Web Server or JBoss EAP hosts. The mod_proxy_cluster module is based on technology that the JBoss mod_cluster community project originally developed.

3.1. Mod_proxy_cluster key features and components

The mod_proxy_cluster module load-balances HTTP requests to JBoss EAP and JBoss Web Server worker nodes. The mod_proxy_cluster module uses the Apache HTTP Server as the proxy server.

Key features of mod_proxy_cluster

The mod_proxy_cluster connector has several advantages over the mod_jk connector:

  • When the mod_proxy_cluster module is enabled, the Mod-Cluster Management Protocol (MCMP) provides an additional connection between the Tomcat servers and the Apache HTTP Server. The Tomcat servers use MCMP to transmit server-side load figures and lifecycle events back to the Apache HTTP Server by using a custom set of HTTP methods.
  • Dynamic configuration of Apache HTTP Server with mod_proxy_cluster allows Tomcat servers that have mod_proxy_cluster listeners to join the load-balancing arrangement without the need for manual configuration.
  • Tomcat servers perform the load calculations, rather than relying on the Apache HTTP Server. This makes load-balancing metrics more accurate than those of other connectors.
  • The mod_proxy_cluster connector provides fine-grained application lifecycle control. Each Tomcat server forwards web application context lifecycle events to the Apache HTTP Server. These lifecycle events include informing the Apache HTTP Server to start or stop routing requests for a specific context. This prevents end users from seeing HTTP errors because of unavailable resources.
  • You can use Apache JServ Protocol (AJP), Hypertext Transfer Protocol (HTTP), or Hypertext Transfer Protocol Secure (HTTPS) transports with mod_proxy_cluster.

Mod_proxy_cluster components

On the proxy server, mod_proxy_cluster consists of four Apache modules:


mod_cluster_slotmem.so

The Shared Memory Manager module shares real-time worker node information with multiple Apache HTTP Server processes.

mod_manager.so

The Cluster Manager module receives and acknowledges messages from worker nodes, including node registrations, node load data, and node application life cycle events.

mod_proxy_cluster.so

The Proxy Balancer Module handles request routing to cluster nodes. The Proxy Balancer selects the appropriate destination node based on application location in the cluster, the current state of each of the cluster nodes, and the Session ID (if a request is part of an established session).

mod_advertise.so

The Proxy Advertisement Module broadcasts the existence of the proxy server via UDP multicast messages. The server advertisement messages contain the IP address and port number where the proxy server is listening for responses from worker nodes that want to join the load-balancing cluster.
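Taken together, these modules are loaded in the proxy server configuration. The following fragment is a sketch of the LoadModule directives; the module identifiers shown here are assumptions based on the shared object names, so check the mod_proxy_cluster.conf.sample file for the exact directives that your distribution provides:

```apacheconf
# Load the four mod_proxy_cluster modules (mod_proxy must also be loaded):
LoadModule proxy_module modules/mod_proxy.so
LoadModule cluster_slotmem_module modules/mod_cluster_slotmem.so
LoadModule manager_module modules/mod_manager.so
LoadModule proxy_cluster_module modules/mod_proxy_cluster.so
LoadModule advertise_module modules/mod_advertise.so
```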

3.2. Mod_proxy_cluster installation and upgrade

Red Hat JBoss Core Services (JBCS) and Red Hat Enterprise Linux (RHEL) provide separate distributions of the Apache HTTP Server. The Apache HTTP Server distribution that you install determines whether installation of the mod_proxy_cluster connector is automatic or requires a manual step. Depending on your installed distribution of the Apache HTTP Server, the installation path for the mod_proxy_cluster modules and configuration files also varies.

Note

The JBCS Apache HTTP Server supports the use of mod_proxy_cluster on all supported operating systems. The RHEL Apache HTTP Server supports the use of mod_proxy_cluster on RHEL 9 only.

3.2.1. Installation of mod_proxy_cluster when using the JBCS Apache HTTP Server

The Apache HTTP Server part of a JBCS installation automatically installs the mod_proxy_cluster module.

You can follow the procedures in the Red Hat JBoss Core Services Apache HTTP Server Installation Guide to install or upgrade to the latest JBCS Apache HTTP Server release for your operating system. For more information, see the Additional resources links.

Consider the following guidelines for a mod_proxy_cluster installation when using the JBCS Apache HTTP Server:

  • The mod_proxy_cluster.so, mod_cluster_slotmem.so, mod_manager.so, and mod_advertise.so modules are installed in the JBCS_HOME/httpd/modules directory.
  • The mod_proxy_cluster.conf.sample configuration file is located in the JBCS_HOME/httpd/conf.d directory.
  • The mod_proxy_cluster.conf.sample file includes a LoadModule directive for the mod_proxy_cluster module.
Note

JBCS_HOME represents the top-level directory for a JBCS installation, which is /opt/jbcs-httpd24-2.4.

3.2.2. Upgrade of mod_proxy_cluster from an earlier JBCS release

The mod_cluster-native package that JBCS provided in 2.4.37 and earlier releases is renamed mod_proxy_cluster in JBCS 2.4.51 or later. As part of this change, the mod_cluster.conf file that was available in 2.4.37 and earlier releases is also renamed mod_proxy_cluster.conf in JBCS 2.4.51 or later. JBCS handles the upgrade of your existing mod_proxy_cluster configuration in different ways depending on whether you installed JBCS from archive files or RPM packages.

Upgrades of mod_proxy_cluster configuration when installed from RPM packages

If you are upgrading an existing JBCS installation that you installed from RPM packages on RHEL 7 or RHEL 8, consider the following guidelines:

  • If you are upgrading from JBCS 2.4.37 or earlier, JBCS retains your existing mod_cluster.conf file during the upgrade. In this situation, the upgraded JBCS 2.4.57 deployment includes both your existing mod_cluster.conf file and a default mod_proxy_cluster.conf file. If you subsequently want to migrate to using mod_proxy_cluster.conf instead, you can manually update the default mod_proxy_cluster.conf file to suit your setup requirements.
  • If you are upgrading from JBCS 2.4.51, JBCS retains your existing mod_proxy_cluster.conf file during the upgrade. In this situation, the upgraded JBCS 2.4.57 deployment includes both your existing mod_proxy_cluster.conf file and a default mod_proxy_cluster.conf.rpmnew file.
Upgrades of mod_proxy_cluster configuration when installed from archive files

If you are upgrading an existing JBCS installation that you installed from archive files, consider the following guidelines:

  • If you are upgrading from JBCS 2.4.37 or earlier, you do not need to take any action apart from extracting the 2.4.57 archive files. JBCS 2.4.57 does not include a default mod_cluster.conf file, so your existing mod_cluster.conf file remains in place during the product upgrade. In this situation, the upgraded JBCS 2.4.57 deployment includes both your existing mod_cluster.conf file and a default mod_proxy_cluster.conf file. If you subsequently want to migrate to using mod_proxy_cluster.conf instead, you can manually update the default mod_proxy_cluster.conf file to suit your setup requirements.
  • If you are upgrading from JBCS 2.4.51 or an existing release of JBCS 2.4.57, you must first copy your existing mod_proxy_cluster.conf file to a temporary location. JBCS 2.4.57 includes a default mod_proxy_cluster.conf file, which automatically overwrites your existing mod_proxy_cluster.conf file during the product upgrade. After you extract the latest 2.4.57 archive files, you can then copy your backup of the existing mod_proxy_cluster.conf file to the correct location to overwrite the default file.

3.2.3. Installing mod_proxy_cluster by using RHEL Application Streams

If you install the RHEL 9 distribution of the Apache HTTP Server from an RPM package by using Application Streams, RHEL does not automatically install the mod_proxy_cluster package. In this situation, if you want to use the mod_proxy_cluster connector, you must install the mod_proxy_cluster package manually.

Prerequisites

  • You have installed the Apache HTTP Server on RHEL 9 by using Application Streams.

Procedure

  • Enter the following command as the root user:

    # dnf install mod_proxy_cluster

Verification

  • To check that the mod_proxy_cluster package is successfully installed, enter the following command:

    # rpm -q mod_proxy_cluster

    The preceding command outputs the full name of the installed package, which includes version and platform information.

Consider the following guidelines for a mod_proxy_cluster installation when using RHEL Application Streams:

  • The mod_proxy_cluster.so, mod_cluster_slotmem.so, mod_manager.so, and mod_advertise.so modules are installed in the /usr/lib64/httpd/modules directory.
  • The mod_proxy_cluster.conf.sample configuration file is located in the /etc/httpd/conf.d directory.
  • The mod_proxy_cluster.conf.sample file includes a LoadModule directive for the mod_proxy_cluster module.

3.3. Apache HTTP Server load-balancing configuration when using mod_proxy_cluster

In Apache HTTP Server 2.1 and later versions, mod_proxy_cluster is configured correctly by default. For more information about setting a custom configuration, see Configuring a basic proxy server.

Example configuration file for mod_proxy_cluster

Depending on whether you installed mod_proxy_cluster through Red Hat JBoss Core Services (JBCS) or by using Red Hat Enterprise Linux (RHEL) Application Streams, consider the following guidelines:

  • JBCS provides an example configuration file for mod_proxy_cluster in the JBCS_HOME/httpd/conf.d/ directory.
  • RHEL provides an example configuration file for mod_proxy_cluster in the /etc/httpd/conf.d/ directory.

The example configuration file for mod_proxy_cluster is named mod_proxy_cluster.conf.sample. To use this example instead of creating your own configuration file, you can remove the .sample extension, and modify the file content as needed.
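For example, the rename can be sketched as follows. The commands operate on a scratch directory for illustration; on a real system the file is in /etc/httpd/conf.d (RHEL) or JBCS_HOME/httpd/conf.d (JBCS):

```shell
# Illustration in a scratch directory; substitute the real conf.d path.
dir=$(mktemp -d)
touch "$dir/mod_proxy_cluster.conf.sample"

# Dropping the .sample extension makes the Apache HTTP Server read the file:
mv "$dir/mod_proxy_cluster.conf.sample" "$dir/mod_proxy_cluster.conf"
```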

Note

You can also use the Load Balancer Configuration tool on the Red Hat Customer Portal to generate optimal configuration templates quickly for mod_proxy_cluster and Tomcat worker nodes. When you use the Load Balancer Configuration tool for Apache HTTP Server 2.4.57, ensure that you select 2.4.x as the Apache version, and select Tomcat/JWS as the back-end configuration.

Guidelines for using mod_proxy_cluster

Consider the following guidelines for using the mod_proxy_cluster connector:

  • When you want to use the mod_proxy_cluster connector, you must enable the mod_proxy module and disable the mod_proxy_balancer module.
  • If you want mod_proxy_cluster to use the Apache JServ Protocol (AJP), you must enable the proxy_ajp_module.
  • Use AJPSecret your_secret to provide the secret for the AJP back end. If your_secret does not correspond to the value configured in the back end, the back end sends a 503 error response for any request that is sent through the proxy.
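For example, the first two guidelines and the AJP secret might be reflected in the configuration as follows. This is a sketch: the module paths and the your_secret value are placeholders, and the back-end Tomcat connector must be configured with the same secret:

```apacheconf
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
# mod_proxy_balancer must remain disabled when mod_proxy_cluster is in use:
# LoadModule proxy_balancer_module modules/mod_proxy_balancer.so

# Shared secret for the AJP back end; must match the secret on the worker node:
AJPSecret your_secret
```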
Note

Red Hat JBoss Core Services 2.4.57 does not support the tunneling of non-upgraded connections to a back-end websockets server. This means that when you are configuring the ProxyPass directive for the mod_proxy_wstunnel module, you must ensure that the upgrade parameter is not set to NONE. For more information about mod_proxy_wstunnel, see the Apache documentation.
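For example, a ProxyPass directive for tunneling WebSocket traffic might look like the following sketch, where the back-end host and path are placeholders. The key point is to omit upgrade=NONE from the directive:

```apacheconf
LoadModule proxy_wstunnel_module modules/mod_proxy_wstunnel.so

# Do not append upgrade=NONE to this directive in this release:
ProxyPass "/ws/" "ws://backend.example.com:8080/ws/"
```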

3.3.1. Configuring a basic proxy server

You can configure the Apache HTTP Server to function as a proxy server that forwards requests and responses between web clients and back-end web servers. You must configure a proxy server listener to receive connection requests and responses from the back-end worker nodes. When you want to configure a load-balancing proxy server that uses mod_proxy_cluster, you must also configure a virtual host for the management channel.

Procedure

  1. Go to the Apache HTTP Server configuration directory:

    • If you are using the JBCS Apache HTTP Server, go to the JBCS_HOME/httpd/conf.d directory.
    • If you are using the RHEL Apache HTTP Server, go to the /etc/httpd/conf.d directory.
  2. Open the mod_proxy_cluster.conf file.
  3. To create a Listen directive for the proxy server, enter the following line in the mod_proxy_cluster.conf file:

    Listen IP_ADDRESS:PORT_NUMBER
    Note

    In the preceding example, replace IP_ADDRESS with the address of the server network interface that the proxy server uses to communicate with the worker nodes, and replace PORT_NUMBER with the port that the proxy server listens on.

    Ensure that the port is open for incoming TCP connections.

  4. To create a virtual host, enter the following details in the mod_proxy_cluster.conf file:

    <VirtualHost IP_ADDRESS:PORT_NUMBER>
    
       <Directory />
          Require ip IP_ADDRESS
       </Directory>
    
       KeepAliveTimeout 60
       MaxKeepAliveRequests 0
    
       ManagerBalancerName mycluster
       AdvertiseFrequency 5
       EnableMCPMReceive On
    
    </VirtualHost>
    Note

    In the preceding example, replace IP_ADDRESS and PORT_NUMBER with the address of the server network interface and port number that you have specified for the Listen directive.

    This address and port combination is only used for mod_proxy_cluster management messages. This address and port combination is not used for general traffic.

For more information about starting the Apache HTTP Server service, see the Red Hat JBoss Core Services Apache HTTP Server Installation Guide.

3.3.1.1. Disabling server advertisement

The proxy server uses UDP multicast to advertise itself. By default, the server sends server advertisement messages every 10 seconds; you can change this interval by using the AdvertiseFrequency directive. Server advertisement messages contain the IP_ADDRESS and PORT_NUMBER that you specify in the VirtualHost definition. Worker nodes that are configured to respond to server advertisements use this information to register themselves with the proxy server. If you want to prevent worker nodes from registering with the proxy server, you can disable server advertisement.

Note

When UDP multicast is available between the proxy server and the worker nodes, server advertisement adds worker nodes without requiring further configuration on the proxy server. Server advertisement requires only minimal configuration on the worker nodes.

Procedure

  1. Go to the Apache HTTP Server configuration directory:

    • If you are using the JBCS Apache HTTP Server, go to the JBCS_HOME/httpd/conf.d directory.
    • If you are using the RHEL Apache HTTP Server, go to the /etc/httpd/conf.d directory.
  2. Open the mod_proxy_cluster.conf file.
  3. Add the following directive to the VirtualHost definition:

    ServerAdvertise Off
    Note

    If server advertisements are disabled, or UDP multicast is not available on the network between the proxy server and the worker nodes, you can configure worker nodes with a static list of proxy servers. In either case, you do not need to configure the proxy server with a list of worker nodes.

3.3.1.2. Logging worker node details

When you configure a load-balancing proxy server that uses mod_proxy_cluster, you can optionally configure the Apache HTTP Server to log details of each worker node that handles a request. Logging worker node details can be useful if you need to troubleshoot your load balancer.

Procedure

  1. Go to the Apache HTTP Server configuration directory:

    • If you are using the JBCS Apache HTTP Server, go to the JBCS_HOME/httpd/conf.d directory.
    • If you are using the RHEL Apache HTTP Server, go to the /etc/httpd/conf.d directory.
  2. Open the mod_proxy_cluster.conf file.
  3. Add the following details to your Apache HTTP Server LogFormat directive(s):

    %{BALANCER_NAME}e
        The name of the balancer that served the request.

    %{BALANCER_WORKER_NAME}e
        The name of the worker node that served the request.
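For example, a LogFormat directive that extends the common log format with both balancer fields might look like the following sketch; the format name and log file path are illustrative:

```apacheconf
LogFormat "%h %l %u %t \"%r\" %>s %b %{BALANCER_NAME}e %{BALANCER_WORKER_NAME}e" balancer
CustomLog logs/balancer_access_log balancer
```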

3.3.2. Configuring a JBoss Web Server worker node in mod_proxy_cluster

When you use mod_proxy_cluster, you can configure a back-end worker node as a JBoss Web Server Tomcat service that operates in non-clustered mode only. In this situation, mod_proxy_cluster can use only one load metric at any specific time when calculating the load-balance factor.

Note

JBoss Web Server worker nodes support only a subset of mod_proxy_cluster functionality. Full mod_proxy_cluster functionality is available with JBoss EAP.

Procedure

  1. To add a listener to JBoss Web Server, in the JWS_HOME/tomcat<VERSION>/conf/server.xml file, add the following Listener element under the other Listener elements:

    <Listener className="org.jboss.modcluster.container.catalina.standalone.ModClusterListener" advertise="true" stickySession="true" stickySessionForce="false" stickySessionRemove="true" />
  2. To give the worker node a unique identity, in the JWS_HOME/tomcat<VERSION>/conf/server.xml file, add the jvmRoute attribute and value to the Engine element:

    <Engine name="Catalina" defaultHost="localhost" jvmRoute="worker01">
  3. To configure STATUS MCMP message frequency, modify the org.jboss.modcluster.container.catalina.status-frequency Java system property.

    For example:

    -Dorg.jboss.modcluster.container.catalina.status-frequency=6
    Note

    JBoss Web Server worker nodes periodically send status messages that contain their current load status to the Apache HTTP Server balancer. The default frequency of these messages is 10 seconds. If you have hundreds of worker nodes, the STATUS MCMP messages can increase traffic congestion on your Apache HTTP Server network.

    You can configure the MCMP message frequency by modifying the org.jboss.modcluster.container.catalina.status-frequency Java system property. By default, the property accepts values that are specified in seconds multiplied by 10. For example, setting the property to 1 means 10 seconds. In the preceding example, the property is set to 6, which means 60 seconds.

  4. Optional: To configure the firewall for proxy server advertisements, complete either of the following steps to open port 23364 for UDP connections on the worker node’s firewall:

    • For RHEL:

      firewall-cmd --permanent --zone=public --add-port=23364/udp
    • For Windows Server using PowerShell:

      Start-Process "$psHome\powershell.exe" -Verb Runas -ArgumentList '-command "NetSh Advfirewall firewall add rule name="UDP Port 23364" dir=in  action=allow protocol=UDP localport=23364"'
      Start-Process "$psHome\powershell.exe" -Verb Runas -ArgumentList '-command "NetSh Advfirewall firewall add rule name="UDP Port 23364" dir=out action=allow protocol=UDP localport=23364"'
      Note

      When a proxy server uses mod_proxy_cluster, the proxy server can use UDP multicast to advertise itself. Most operating system firewalls block the server advertisement feature by default. To enable server advertisement and receive these multicast messages, you can open port 23364 for UDP connections on the worker node’s firewall, as shown in the preceding examples.
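To illustrate step 3 of the preceding procedure, one way to pass the property to the Tomcat JVM is through the CATALINA_OPTS variable, for example in a JWS_HOME/tomcat<VERSION>/bin/setenv.sh file. The use of setenv.sh is an assumption about your setup; any mechanism that sets Java system properties for Tomcat works:

```shell
# Append the status-frequency property to the Tomcat JVM options;
# a value of 6 means one STATUS MCMP message every 60 seconds.
CATALINA_OPTS="$CATALINA_OPTS -Dorg.jboss.modcluster.container.catalina.status-frequency=6"
export CATALINA_OPTS
```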

3.3.3. Configuring a worker node to operate with a static list of proxy servers

Server advertisement allows worker nodes to discover and register with proxy servers dynamically. If UDP multicast is not available or server advertisement is disabled, you must configure JBoss Web Server worker nodes with a static list of proxy server addresses and ports.

Procedure

  1. Open the JWS_HOME/tomcat<VERSION>/conf/server.xml file.
  2. To define a mod_proxy_cluster listener and disable dynamic proxy discovery, add or change the Listener element for ModClusterListener.

    For example:

    <Listener className="org.jboss.modcluster.container.catalina.standalone.ModClusterListener" advertise="false" stickySession="true" stickySessionForce="false" stickySessionRemove="true"/>
    Note

    Ensure that you set the advertise property to false.

  3. To create a static proxy server list, update the proxyList property by adding a comma-separated list of proxies in the following format: IP_ADDRESS:PORT,IP_ADDRESS:PORT

    For example:

    <Listener className="org.jboss.modcluster.container.catalina.standalone.ModClusterListener" advertise="false" stickySession="true" stickySessionForce="false" stickySessionRemove="true" proxyList="10.33.144.3:6666,10.33.144.1:6666"/>

3.4. Mod_proxy_cluster character limits

The mod_proxy_cluster module uses shared memory to store the node descriptions. The shared memory is created when the Apache HTTP Server starts, and the structure of each item is fixed.

When you define proxy server and worker node properties, ensure that you adhere to the following character limits:


Alias length

100 characters

Alias corresponds to the network name of the respective virtual host; the name is defined in the Host element.

Context length

40 characters

For example, if myapp.war is deployed in /myapp, /myapp is included in the context.

Balancer name length

40 characters

This is the balancer property in the <Listener> element.

JVMRoute string length

80 characters

This is the jvmRoute attribute in the <Engine> element.

Domain name length

20 characters

This is the loadBalancingGroup in the <Listener> element.

Hostname length for a node

64 characters

This is the hostname or IP address in the <Connector> element.

Port length for a node

7 characters

This is the port property in the <Connector> element. For example, 8009 is 4 characters.

Scheme length for a node

6 characters

This is the protocol of the connector. Possible values are http, https, and ajp.

Cookie name length

30 characters

This is the header cookie name for the session ID. The default value is JSESSIONID based on the org.apache.catalina.Globals.SESSION_COOKIE_NAME property.

Path name length

30 characters

This is the parameter name for the session ID. The default value is JSESSIONID based on the org.apache.catalina.Globals.SESSION_PARAMETER_NAME property.

Session ID length

120 characters

A session ID is in the following type of format: BE81FAA969BF64C8EC2B6600457EAAAA.node01
