Chapter 6. Clustering


When clustering Red Hat JBoss BPM Suite, consider which components need to be clustered. You can cluster the following:
  • GIT repository: virtual-file-system (VFS) repository that holds the business assets so that all cluster nodes use the same repository
  • Execution Server and Web applications: the runtime server that resides in the container (such as Red Hat JBoss EAP) along with the BRMS and BPM Suite web applications, so that nodes share the same runtime data.
    For instructions on clustering the application, refer to the container clustering documentation.
  • Back-end database: database with the state data, such as process instances, KIE sessions, and the history log, for failover purposes

Figure 6.1. Schema of Red Hat JBoss BPM Suite system with individual system components

GIT repository clustering mechanism

To cluster the GIT repository, the following components are used:
  • Apache ZooKeeper provides the coordination layer that keeps all parts in sync.
  • Apache Helix is the cluster management component that registers all cluster details (the cluster itself, nodes, and resources).
The runtime environment, that is, the Execution Server, uses the following to provide clustering capabilities:
  • The UberFire framework, which provides the backbone of the web applications

Figure 6.2. Clustering schema with Helix and Zookeeper

A typical clustering setup involves the following:
  • Setting up the cluster itself using ZooKeeper and Helix
  • Setting up the back-end database with the Quartz tables and configuration (a configuration sketch follows this list)
  • Configuring clustering on your container (this documentation provides clustering instructions only for Red Hat JBoss EAP 6)
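
The Quartz setup itself is part of the database and container configuration. As a rough sketch only, a clustered Quartz properties file typically enables JDBC job storage and clustering; the data source names below are assumptions and must match the data sources defined in your container:

  # Sketch of a clustered Quartz configuration (data source names are illustrative)
  org.quartz.scheduler.instanceName = jBPMClusteredScheduler
  org.quartz.scheduler.instanceId = AUTO
  org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreCMT
  org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.StdJDBCDelegate
  org.quartz.jobStore.isClustered = true
  org.quartz.jobStore.clusterCheckinInterval = 20000
  org.quartz.jobStore.dataSource = managedDS
  org.quartz.jobStore.nonManagedTXDataSource = notManagedDS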

Clustering Maven Repositories

Various operations within Business Central publish JARs to its internal Maven repository.
This repository exists on the application server's file system as regular files and is not cluster aware. The folder is not synchronized across the nodes in the cluster and must be synchronized with an external tool such as rsync.
An alternative to an external synchronization tool is to set the system property org.guvnor.m2repo.dir on each cluster node to point to shared storage (a SAN or NAS), as shown in the sketch below. In that case the Maven repository folder does not need to be synchronized.
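
Both approaches are sketched below; the paths, host names, and server profile are illustrative assumptions, not requirements:

  # Option 1: start every node with the repository directory on shared storage
  ./standalone.sh -c standalone-full-ha.xml -Dorg.guvnor.m2repo.dir=/mnt/shared/bpms/m2repo

  # Option 2: keep local folders and synchronize them periodically with rsync
  # (the source directory is an assumption; use the node's actual Maven repository folder)
  rsync -av /path/to/local/m2repo/ node2:/path/to/local/m2repo/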

6.1. Setting up a Cluster

To cluster your GIT (VFS) repository in Business Central, do the following (if you do not use Business Central, you can skip this section):
  1. Download the jboss-bpmsuite-brms-VERSION-supplementary-tools.zip archive, which contains Apache ZooKeeper, Apache Helix, and the Quartz DDL scripts. After downloading, unzip the archive: the ZooKeeper directory ($ZOOKEEPER_HOME) and the Helix directory ($HELIX_HOME) are created.
  2. Configure ZooKeeper:
    1. In the ZooKeeper directory, go to the conf directory and copy the sample configuration:
      cp zoo_sample.cfg zoo.cfg
    2. Open zoo.cfg for editing and adjust the settings, including the following:
      # the directory where the snapshot is stored.
      dataDir=$ZOOKEEPER_HOME/data/
      # the port at which the clients connect
      clientPort=2181
      server.1=server1:2888:3888
      server.2=server2:2888:3888
      server.3=server3:2888:3888
      
      Make sure the dataDir location exists and is accessible.
    3. Assign a node ID to each member that will run ZooKeeper, for example, "1", "2", and "3" for node 1, node 2, and node 3. ZooKeeper should have an odd number of instances, at least 3, in order to recover from failure.
      The node ID is specified in a file named myid under the ZooKeeper data directory on each node. For example, on node 1, run: $ echo "1" > /zookeeper/data/myid (adjust the path to match the dataDir configured above).
  3. Set up ZooKeeper, so you can use it when creating the cluster with Helix:
    1. Go to the $ZOOKEEPER_HOME/bin/ directory and start ZooKeeper:
      ./zkServer.sh start
      You can check the ZooKeeper log in the $ZOOKEEPER_HOME/bin/zookeeper.out file. Check this log to ensure that the ensemble (cluster) forms successfully: one of the nodes should be elected as the leader, with the other two nodes following it. A verification sketch follows this procedure.
  4. Once the ZooKeeper ensemble is started, configure and start Helix. Helix needs to be configured only once, from a single node. The configuration is then stored by the ZooKeeper ensemble and shared as appropriate.
    Set up the cluster with the ZooKeeper server as the master of the configuration:
    1. Create the cluster, providing the ZooKeeper hosts and ports as a comma-separated list:
      $HELIX_HOME/bin/helix-admin.sh --zkSvr ZOOKEEPER_HOST:ZOOKEEPER_PORT --addCluster CLUSTER_NAME
    2. Add your nodes to the cluster:
      $HELIX_HOME/bin/helix-admin.sh --zkSvr ZOOKEEPER_HOST:ZOOKEEPER_PORT --addNode CLUSTER_NAME NODE_NAME:UNIQUE_ID

      Example 6.1. Adding three cluster nodes

      ./helix-admin.sh --zkSvr server1:2181,server2:2181,server3:2181 --addNode bpms-cluster nodeOne:12345
      ./helix-admin.sh --zkSvr server1:2181,server2:2181,server3:2181 --addNode bpms-cluster nodeTwo:12346 
      ./helix-admin.sh --zkSvr server1:2181,server2:2181,server3:2181 --addNode bpms-cluster nodeThree:12347
  5. Add resources to the cluster.

    Example 6.2. Adding vfs-repo as a resource

    ./helix-admin.sh --zkSvr server1:2181,server2:2181,server3:2181 --addResource bpms-cluster vfs-repo 1 LeaderStandby AUTO_REBALANCE
  6. Rebalance the cluster across the three nodes.

    Example 6.3. Rebalancing the bpms-cluster

    ./helix-admin.sh --zkSvr server1:2181,server2:2181,server3:2181 --rebalance bpms-cluster vfs-repo 3
    
    In the above command, 3 stands for the number of nodes in the cluster, that is, the replica count for the vfs-repo resource.
  7. Start the Helix controller on all the nodes in the cluster.

    Example 6.4. Starting the Helix controller

    ./run-helix-controller.sh --zkSvr server1:2181,server2:2181,server3:2181 --cluster bpms-cluster > /tmp/controller.log 2>&1 &
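
After the Helix controller is running on every node, you can verify the setup. The following commands are a sketch only, assuming the same hosts, cluster name, and resource as in the examples above:

  # On each node, confirm its ZooKeeper role (one leader, the others followers)
  $ZOOKEEPER_HOME/bin/zkServer.sh status

  # List the cluster's nodes and resources as registered in ZooKeeper
  $HELIX_HOME/bin/helix-admin.sh --zkSvr server1:2181,server2:2181,server3:2181 --listClusterInfo bpms-cluster
  $HELIX_HOME/bin/helix-admin.sh --zkSvr server1:2181,server2:2181,server3:2181 --listResources bpms-cluster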

Note

ZooKeeper should have an odd number of instances, at least 3, in order to recover from failure. After a failure, the remaining nodes must still be able to form a majority. For example, a cluster of five ZooKeeper nodes can withstand the loss of two nodes and still fully recover. Running a single ZooKeeper instance is possible, and replication will work, but no recovery is possible if that instance fails.

Stopping Helix and Zookeeper

To stop Helix processes and the Zookeeper server, use the following procedure.

Procedure 6.1. Stopping Helix and Zookeeper

  1. Stop JBoss EAP server processes.
  2. Stop the Helix process that was started by run-helix-controller.sh, for example, kill -15 <PID of HelixControllerMain>; see the sketch after this procedure.
  3. Stop ZooKeeper server using the zkServer.sh stop command.
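
For example, on each node the shutdown might look like the following; the pgrep pattern is an assumption, so verify that it matches exactly one process before sending the signal:

  # Find and terminate the JVM running the Helix controller
  pgrep -f HelixControllerMain
  kill -15 $(pgrep -f HelixControllerMain)

  # Stop ZooKeeper
  $ZOOKEEPER_HOME/bin/zkServer.sh stop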