6.6. Generic Bundle Clustering
6.6.1. Setting a Cluster
If you do not use Business Central, skip this section.
To cluster your Git (VFS) repository in Business Central:
- Download the jboss-bpmsuite-brms-VERSION-supplementary-tools.zip archive, which contains Apache ZooKeeper, Apache Helix, and the Quartz DDL scripts.
- Unzip the archive: the ZooKeeper directory (ZOOKEEPER_HOME) and the Helix directory (HELIX_HOME) are created.
- Configure Apache ZooKeeper:
  - In the ZooKeeper directory, change to conf and execute:

      cp zoo_sample.cfg zoo.cfg

  - Edit zoo.cfg; a sketch of a possible configuration follows this step.

    Note: Multiple ZooKeeper nodes are not required for clustering.
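    The example zoo.cfg content is not reproduced here. The following is a minimal sketch for a three-node ensemble, assuming the hostnames server1, server2, and server3 used in the later examples and a data directory of /zookeeper/data; adjust the values to your environment, and omit the server.N lines if you run a single ZooKeeper instance:

      # zoo.cfg -- illustrative sketch only; hostnames, ports, and paths are assumptions
      tickTime=2000
      initLimit=10
      syncLimit=5
      # Must exist and be writable; the myid file is created in this directory
      dataDir=/zookeeper/data
      # The later examples assume client ports 2181, 2182, and 2183 on server1, server2, and server3
      clientPort=2181
      # Ensemble members: server.<id>=<host>:<peer-port>:<leader-election-port>
      server.1=server1:2888:3888
      server.2=server2:2888:3888
      server.3=server3:2888:3888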
  - Make sure the dataDir location exists and is accessible.
  - Assign a node ID to each member that will run ZooKeeper. For example, use 1, 2, and 3 for node 1, node 2, and node 3 respectively.

    The ZooKeeper node ID is specified in a file called myid under the data directory of ZooKeeper on each node. For example, on node 1, execute:

      echo "1" > /zookeeper/data/myid
- Provide further ZooKeeper configuration if necessary.
- Change to ZOOKEEPER_HOME/bin/ and start ZooKeeper:

    ./zkServer.sh start

  You can check the ZooKeeper log in the ZOOKEEPER_HOME/bin/zookeeper.out file. Check this log to ensure that the ensemble (cluster) is formed successfully: one of the nodes should be elected as leader with the other two nodes following it.
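  Besides reading zookeeper.out, you can ask each node for its role directly. The zkServer.sh status subcommand is part of the standard ZooKeeper distribution; the exact output format may differ between versions:

    # Run on each node; one node should report "Mode: leader",
    # the others "Mode: follower"
    ./zkServer.sh status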
- Once the ZooKeeper ensemble is started, configure and start Helix. Helix needs to be configured from a single node only. The configuration is then stored by the ZooKeeper ensemble and shared as appropriate.
  Configure the cluster with the ZooKeeper server as the master of the configuration:
  - Create the cluster by providing the ZooKeeper hosts and ports as a comma-separated list:

      $HELIX_HOME/bin/helix-admin.sh --zkSvr ZOOKEEPER_HOST:ZOOKEEPER_PORT --addCluster <clustername>

  - Add your nodes to the cluster:

      $HELIX_HOME/bin/helix-admin.sh --zkSvr ZOOKEEPER_HOST:ZOOKEEPER_PORT --addNode <clustername> <name_uniqueID>

    Example 6.7. Adding Three Cluster Nodes
      ./helix-admin.sh --zkSvr server1:2181,server2:2182,server3:2183 --addNode bpms-cluster nodeOne:12345
      ./helix-admin.sh --zkSvr server1:2181,server2:2182,server3:2183 --addNode bpms-cluster nodeTwo:12346
      ./helix-admin.sh --zkSvr server1:2181,server2:2182,server3:2183 --addNode bpms-cluster nodeThree:12347
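    To confirm that the three nodes were registered, you can list the cluster contents. The --listClusterInfo option exists in upstream Apache Helix's helix-admin.sh; verify that the bundled version supports it before relying on it:

      # Shows the instances and resources currently registered in bpms-cluster
      ./helix-admin.sh --zkSvr server1:2181,server2:2182,server3:2183 --listClusterInfo bpms-cluster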
  - Add resources to the cluster:

      helix-admin.sh --zkSvr ZOOKEEPER_HOST:ZOOKEEPER_PORT --addResource <clustername> <resourceName> <numPartitions> <stateModelName>

    Learn more about state machine configuration at Helix Tutorial: State Machine Configuration.
    Example 6.8. Adding vfs-repo as Resource

      ./helix-admin.sh --zkSvr server1:2181,server2:2182,server3:2183 --addResource bpms-cluster vfs-repo 1 LeaderStandby AUTO_REBALANCE

  - Rebalance the cluster with the three nodes:
      helix-admin.sh --zkSvr ZOOKEEPER_HOST:ZOOKEEPER_PORT --rebalance <clustername> <resourcename> <replicas>

    Learn more about rebalancing at Helix Tutorial: Rebalancing Algorithms.
    Example 6.9. Rebalancing bpms-cluster

      ./helix-admin.sh --zkSvr server1:2181,server2:2182,server3:2183 --rebalance bpms-cluster vfs-repo 3

    In the above command, 3 is the number of replicas, one for each of the three nodes added to the cluster.
  - Start the Helix controller on all the nodes in the cluster.
    Example 6.10. Starting Helix Controller

      ./run-helix-controller.sh --zkSvr server1:2181,server2:2182,server3:2183 --cluster bpms-cluster 2>&1 > ./controller.log &
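    With the redirection order shown above (2>&1 > ./controller.log), standard error is still printed to the console rather than captured in the file. If you prefer to collect both streams in controller.log, redirect in the conventional order:

      ./run-helix-controller.sh --zkSvr server1:2181,server2:2182,server3:2183 --cluster bpms-cluster > ./controller.log 2>&1 &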
If you decide to cluster ZooKeeper, add an odd number of instances so that the ensemble can recover from failures: the remaining nodes must still be able to form a majority. For example, a cluster of five ZooKeeper nodes can withstand the loss of two nodes and still recover fully. Running a single ZooKeeper instance is also possible, and replication will work, but there is no way to recover if that instance fails.
6.6.2. Starting and Stopping a Cluster
To start your cluster, see Section 6.5.2, “Starting a Cluster”. To stop your cluster, see Section 6.5.3, “Stopping a Cluster”.
6.6.3. Setting Quartz
If you are not using Quartz (timers) in your business processes, or if you are not using the Intelligent Process Server, skip this section. If you want to replicate timers in your business process, use the Quartz component.
Before you can configure the database on your application server, you must prepare it for Quartz: create the Quartz tables, which hold the timer data, and create the Quartz definition file.
To configure Quartz:
- Configure the database. Make sure to use one of the supported non-JTA data sources. Because Quartz needs a non-JTA data source, you cannot use the Business Central data source. In the example code, PostgreSQL with the user bpms and the password bpms is used. The database must be connected to your application server.
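  The exact data source definition depends on your environment. The following is a sketch of a non-JTA PostgreSQL data source for the JBoss EAP standalone.xml datasources subsystem; the JNDI name, pool name, host, and database name are assumptions, while the bpms/bpms credentials match the example above:

    <!-- Sketch only: jta="false" makes this a non-JTA data source suitable for Quartz -->
    <datasource jta="false" jndi-name="java:jboss/datasources/quartzNotManagedDS"
                pool-name="quartzNotManagedDS" enabled="true">
        <connection-url>jdbc:postgresql://localhost:5432/bpms</connection-url>
        <driver>postgresql</driver>
        <security>
            <user-name>bpms</user-name>
            <password>bpms</password>
        </security>
    </datasource>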
- Create the Quartz tables in your database to allow timer event synchronization. To do so, use the DDL script for your database, which is available in the extracted supplementary ZIP archive in QUARTZ_HOME/docs/dbTables; see the sketch after this step for one way to run it.
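  The DDL file name depends on the Quartz version; for PostgreSQL it is typically tables_postgres.sql. A run might look like the following, where the file name, host, database name, and user are assumptions:

    # Creates the QRTZ_* tables used by Quartz to store timer data
    psql -h localhost -U bpms -d bpms -f QUARTZ_HOME/docs/dbTables/tables_postgres.sql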
- Create the Quartz configuration file quartz-definition.properties in the JBOSS_HOME/MODE/configuration/ directory and define the Quartz properties.

  Example 6.11. Quartz Configuration File for PostgreSQL Database
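  The original example file is not reproduced here. The following is a minimal sketch of a clustered Quartz configuration for PostgreSQL; the data source aliases (managedDS, notManagedDS) and their JNDI URLs are assumptions and must be replaced with the JNDI names of your own data sources:

    # quartz-definition.properties -- illustrative sketch only
    org.quartz.scheduler.instanceName = jBPMClusteredScheduler
    org.quartz.scheduler.instanceId = AUTO

    # Thread pool
    org.quartz.threadPool.class = org.quartz.simpl.SimpleThreadPool
    org.quartz.threadPool.threadCount = 5

    # JDBC job store with container-managed transactions, clustered
    org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreCMT
    org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.PostgreSQLDelegate
    org.quartz.jobStore.tablePrefix = QRTZ_
    org.quartz.jobStore.isClustered = true
    org.quartz.jobStore.clusterCheckinInterval = 20000
    org.quartz.jobStore.misfireThreshold = 60000
    org.quartz.jobStore.dataSource = managedDS
    org.quartz.jobStore.nonManagedTXDataSource = notManagedDS

    # Data sources for the two Quartz schemes (JNDI URLs are assumptions)
    org.quartz.dataSource.managedDS.jndiURL = jboss/datasources/psbpmsDS
    org.quartz.dataSource.notManagedDS.jndiURL = jboss/datasources/quartzNotManagedDS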
  Note the configured data sources that will accommodate the two Quartz schemes at the very end of the file.
  Note: Cluster Node Check Interval

  The recommended interval for cluster discovery is 20 seconds and is set in the org.quartz.jobStore.clusterCheckinInterval property of the quartz-definition.properties file; the value is given in milliseconds, so 20 seconds corresponds to 20000. Depending on your setup, consider the performance impact and modify the setting as necessary.

  The org.quartz.jobStore.driverDelegateClass property defines the database dialect. If you use Oracle, set it to org.quartz.impl.jdbcjobstore.oracle.OracleDelegate.
- Provide the absolute path to your quartz-definition.properties file in the org.quartz.properties property. For further details, see _cluster_properties_BRMS.
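  How you set org.quartz.properties depends on how you start the server. One option, assuming JBoss EAP with JBOSS_HOME=/opt/jboss-eap and standalone mode, is a system property in standalone.xml; passing -Dorg.quartz.properties=... on the server command line works as well:

    <!-- Path is an assumption; use the absolute path to your own quartz-definition.properties -->
    <system-properties>
        <property name="org.quartz.properties"
                  value="/opt/jboss-eap/standalone/configuration/quartz-definition.properties"/>
    </system-properties>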