Chapter 28. Red Hat Process Automation Manager clusters in a development (authoring) environment
Developers can use Red Hat Process Automation Manager to author rules and processes that assist users with decision making.
You can configure Red Hat Process Automation Manager as a clustered development environment to benefit from high availability. With a clustered environment, if a developer is working on a node and that node fails, that developer’s work is preserved and visible on any other node of the cluster.
Most development environments consist of Business Central for creating rules and processes, and at least one KIE Server to test those rules and processes.
To create a Red Hat Process Automation Manager clustered development environment, you must perform the following tasks:
- Configure the following components:
  - Red Hat JBoss EAP 7.4 with Red Hat Data Grid 8.1.

    Red Hat Data Grid is built from the Infinispan open-source software project. It is a distributed in-memory key/value data store that has indexing capabilities that enable you to store, search, and analyze high volumes of data quickly and in near-real time. In a Red Hat Process Automation Manager clustered environment, it enables you to perform complex and efficient searches across cluster nodes.
  - AMQ Broker, a Java Message Service (JMS) broker.

    A JMS broker is a software component that receives messages, stores them locally, and forwards the messages to a recipient. AMQ Broker enables your applications to communicate with any messaging provider. It specifies how components such as message-driven beans, Enterprise JavaBeans, and servlets can send or receive messages.
  - An NFS file server.
- Download Red Hat JBoss EAP 7.4 and Red Hat Process Automation Manager 7.12, and then install them on each system that will be a cluster node.
- Configure and start Business Central on each node of the cluster.
28.1. Installing and configuring Red Hat Data Grid
For more efficient searching across cluster nodes, install Red Hat Data Grid and configure it for the Red Hat Process Automation Manager clustered environment.
For information about Red Hat Data Grid advanced installation and configuration options and Red Hat Data Grid modules for Red Hat JBoss EAP, see the Red Hat Data Grid Server Guide.
Do not install Red Hat Data Grid on the same node as Business Central.
Prerequisites
- A Java Virtual Machine (JVM) environment compatible with Java 8.0 or later is installed.
Procedure
- Navigate to the Software Downloads page in the Red Hat Customer Portal (login required), and select the product and version from the drop-down options:
  - Product: Data Grid
  - Version: 8.1
- Download and extract the Red Hat Data Grid 8.1.0 Server (redhat-datagrid-8.1.0-server.zip) installation file to the preferred location on your system. In the following examples, the extracted directory is referred to as JDG_HOME.
- Update Red Hat Data Grid to the latest version. For more information, see the Red Hat Data Grid User Guide.
- To add a Red Hat Data Grid user, navigate to JDG_HOME/bin and enter the following command:

  $ ./cli.sh user create <DATAGRID_USER_NAME> -p <DATA_GRID_PASSWORD> -r default
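  For example, the following command creates a user named kieuser in the default security realm (the user name and password shown here are illustrative only; choose your own):

  $ ./cli.sh user create kieuser -p "ChangeMe123!" -r default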
- To change Red Hat Data Grid server memory parameters, open the JDG_HOME/bin/server.conf file and locate the following line:

  -Xms64m -Xmx512m -XX:MetaspaceSize=64M

  Replace this line with the following content:

  -Xms256m -Xmx2048m -XX:MetaspaceSize=256M
- Open the JDG_HOME/server/conf/infinispan.xml file and locate the following line:

  <hotrod-connector name="hotrod"/>

  Replace this line with the following content:

  <hotrod-connector name="hotrod">
    <authentication>
      <sasl mechanisms="SCRAM-SHA-512 SCRAM-SHA-384 SCRAM-SHA-256 SCRAM-SHA-1 DIGEST-SHA-512 DIGEST-SHA-384 DIGEST-SHA-256 DIGEST-SHA DIGEST-MD5 PLAIN"
            server-name="infinispan"
            qop="auth"/>
    </authentication>
  </hotrod-connector>
- To run Red Hat Data Grid, navigate to JDG_HOME/bin and enter the following command:

  $ ./server.sh -b <HOST>

  Replace <HOST> with the IP address or host name of the server where you installed Red Hat Data Grid.
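  Optionally, you can confirm that the server is accepting connections before you continue. The following request to the Data Grid REST endpoint is an illustrative check, not part of the product procedure; it assumes the default REST port 11222 and the user that you created earlier, and it returns cache manager details in JSON when the server is healthy:

  $ curl -u <DATAGRID_USER_NAME>:<DATA_GRID_PASSWORD> http://<HOST>:11222/rest/v2/cache-managers/default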
28.2. Downloading and configuring AMQ Broker
Red Hat AMQ Broker enables your applications to communicate with any messaging provider. It specifies how components such as message-driven beans, Enterprise JavaBeans, and servlets can send or receive messages.
For information about advanced installations and configuration options, see Getting started with AMQ Broker.
Procedure
- Navigate to the Software Downloads page in the Red Hat Customer Portal (login required), and select the product and version from the drop-down options:
  - Product: AMQ Broker
  - Version: 7.8.2
- Click Download next to Red Hat AMQ Broker 7.8.2 (amq-broker-7.8.2-bin.zip).
- Extract the amq-broker-7.8.2-bin.zip file.
- Change directory to amq-broker-7.8.2-bin/amq-broker-7.8.2/bin.
- Enter the following command to create the broker and broker user:

  ./artemis create --host <HOST> --user <AMQ_USER> --password <AMQ_PASSWORD> --require-login <BROKER_NAME>
  In this example, replace the following placeholders:
  - <HOST> is the IP address or host name of the server where you installed AMQ Broker.
  - <AMQ_USER> and <AMQ_PASSWORD> are a user name and password combination of your choice.
  - <BROKER_NAME> is a name for the broker that you are creating.
- To run AMQ Broker, navigate to the amq-broker-7.8.2-bin/amq-broker-7.8.2/bin directory and enter the following command:

  <BROKER_NAME>/bin/artemis run
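  For example, with illustrative values (host 192.0.2.10, user amquser, and broker name kie-broker; substitute your own), the create and run sequence looks like the following:

  ./artemis create --host 192.0.2.10 --user amquser --password <AMQ_PASSWORD> --require-login kie-broker
  kie-broker/bin/artemis run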
28.3. Configuring an NFS version 4 server
An NFS version 4 server with a shared file system is required for a Business Central clustered environment, and each client node must have access to the shared file system.
Procedure
- Configure a server to export NFS version 4 shares. For instructions about exporting NFS shares on Red Hat Enterprise Linux, see Exporting NFS shares in Managing file systems. For more information about creating the NFS server, see How to configure NFS in RHEL 7.
- On the server, open the /etc/exports file in a text editor.
- Add the following line to the /etc/exports file, where <HOST_LIST> is a space-separated list of IP addresses and options of hosts that are authorized to connect to the server:

  /opt/kie/data <HOST_LIST>

  For example:

  /opt/kie/data 192.168.1.0/24(rw,sync) 192.168.1.1/24(no_root_squash)

  This creates an /opt/kie/data share with the rw, sync, and no_root_squash minimum options that are required for NFS.

  Note: You can use a different share name instead of /opt/kie/data. If you do, you must use the different name when configuring all nodes that run Business Central.
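After you edit /etc/exports, the NFS service must be running and the share exported before clients can mount it. The following commands are a minimal sketch for Red Hat Enterprise Linux; they assume that the nfs-utils package is installed and that you use the default /opt/kie/data share:

  # Create the directory that backs the share
  mkdir -p /opt/kie/data
  # Start the NFS server now and enable it at boot
  systemctl enable --now nfs-server
  # Export all entries listed in /etc/exports
  exportfs -ra
  # Confirm that the share is exported
  showmount -e localhost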
28.4. Downloading and extracting Red Hat JBoss EAP 7.4 and Red Hat Process Automation Manager
Download and install Red Hat JBoss EAP 7.4 and Red Hat Process Automation Manager 7.12 on each node of the cluster.
Procedure
Download Red Hat JBoss EAP 7.4 on each node of the cluster:
- Navigate to the Software Downloads page in the Red Hat Customer Portal (login required), and select the product and version from the drop-down options:
  - Product: Enterprise Application Platform
  - Version: 7.4
- Click Download next to Red Hat JBoss Enterprise Application Platform 7.4.1 (JBEAP-7.4.1/jboss-eap-7.4.1.zip).
- Extract the jboss-eap-7.4.1.zip file. In the following steps, EAP_HOME is the jboss-eap-7.4/jboss-eap-7.4 directory.
- Download and apply the latest Red Hat JBoss EAP patch, if available (see the example after this list).
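Red Hat JBoss EAP patches are applied with the management CLI, for example (the patch file name here is hypothetical; use the file that you downloaded from the Customer Portal):

  $ EAP_HOME/bin/jboss-cli.sh "patch apply /path/to/jboss-eap-7.4.x-patch.zip"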
Download Red Hat Process Automation Manager on each node of the cluster:
- Navigate to the Software Downloads page in the Red Hat Customer Portal, and select the product and version from the drop-down options:
  - Product: Process Automation Manager
  - Version: 7.12
- Download Red Hat Process Automation Manager 7.12.0 Business Central Deployable for Red Hat JBoss EAP 7 (rhpam-7.12.0-business-central-eap7-deployable.zip).
- Extract the rhpam-7.12.0-business-central-eap7-deployable.zip file to a temporary directory. In the following commands, this directory is called TEMP_DIR.
- Copy the contents of TEMP_DIR/rhpam-7.12.0-business-central-eap7-deployable/jboss-eap-7.4 to EAP_HOME.
- Download and apply the latest Red Hat Process Automation Manager patch, if available.
- Configure Red Hat Single Sign-On for your high availability environment. For more information, see Integrating Red Hat Process Automation Manager with Red Hat Single Sign-On and the Red Hat Single Sign-On Server Administration Guide.
28.5. Configuring and running Business Central in a cluster
After you install Red Hat JBoss EAP and Business Central, you can use Red Hat Data Grid and AMQ Broker to configure the cluster. Complete these steps on each node of the cluster.
These steps describe a basic cluster configuration. For more complex configurations, see the Red Hat JBoss EAP 7.4 Configuration Guide.
Do not connect KIE Server to Business Central in high availability (HA) on-premise environments.
Business Central instances cannot keep in sync with the status of each KIE Server. For example, if a KIE Server is up but not in sync, Business Central cannot deploy through that instance.
Prerequisites
- Red Hat Data Grid 8.1 is installed as described in Section 28.1, “Installing and configuring Red Hat Data Grid”.
- AMQ Broker is installed and configured, as described in Section 28.2, “Downloading and configuring AMQ Broker”.
- Red Hat JBoss EAP and Red Hat Process Automation Manager are installed on each node of the cluster as described in Section 28.4, “Downloading and extracting Red Hat JBoss EAP 7.4 and Red Hat Process Automation Manager”.
- An NFS server with a shared folder is available as described in Section 28.3, “Configuring an NFS version 4 server”.
Procedure
- To mount the directory shared over NFS as /data, enter the following commands as the root user:

  mkdir /data
  mount <NFS_SERVER_IP>:<DATA_SHARE> /data

  Replace <NFS_SERVER_IP> with the IP address or host name of the NFS server system. Replace <DATA_SHARE> with the share name that you configured (for example, /opt/kie/data).
- Create a kie-wb-playground directory in the /data NFS directory:

  mkdir /data/kie-wb-playground
- Create a kie-wb-playground directory in the EAP_HOME/bin directory and mount the shared directory over it:

  mkdir kie-wb-playground
  mount -o rw,sync,actimeo=1 <NFS_SERVER_IP>:<DATA_SHARE>/kie-wb-playground kie-wb-playground
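  These mount commands do not persist across reboots. If you want the mounts restored automatically at boot, you can add entries such as the following to /etc/fstab on each Business Central node. This is an illustrative sketch only: it assumes the default /opt/kie/data share, and you must replace <NFS_SERVER_IP> and the literal EAP_HOME path with real values for your system.

  <NFS_SERVER_IP>:/opt/kie/data                    /data                           nfs4  rw,sync            0 0
  <NFS_SERVER_IP>:/opt/kie/data/kie-wb-playground  EAP_HOME/bin/kie-wb-playground  nfs4  rw,sync,actimeo=1  0 0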
- Open the EAP_HOME/standalone/configuration/standalone-full.xml file in a text editor.
- Edit or add the properties under the <system-properties> element and replace the following placeholders:
  - <AMQ_USER> and <AMQ_PASSWORD> are the credentials that you defined when creating the AMQ Broker.
  - <AMQ_BROKER_IP_ADDRESS> is the IP address of the AMQ Broker.
  - <DATA_GRID_NODE_IP> is the IP address where Red Hat Data Grid is installed.
  - <SERVER_NAME> is the server name specified in your Red Hat Data Grid server configuration.
  - <SASL_QOP> is the combination of auth, auth-int, and auth-conf values for your Red Hat Data Grid server configuration.
  - <DATAGRID_USER_NAME> and <DATA_GRID_PASSWORD> are the credentials that you defined when creating the Red Hat Data Grid user.

  <system-properties>
    <property name="appformer-jms-connection-mode" value="REMOTE"/>
    <property name="appformer-jms-username" value="<AMQ_USER>"/>
    <property name="appformer-jms-password" value="<AMQ_PASSWORD>"/>
    <property name="appformer-jms-url"
      value="tcp://<AMQ_BROKER_IP_ADDRESS>:61616?ha=true&amp;retryInterval=1000&amp;retryIntervalMultiplier=1.0&amp;reconnectAttempts=-1"/>
    <property name="org.appformer.ext.metadata.index" value="infinispan"/>
    <property name="org.appformer.ext.metadata.infinispan.host" value="<DATA_GRID_NODE_IP>"/>
    <property name="org.appformer.ext.metadata.infinispan.port" value="11222"/>
    <property name="org.appformer.ext.metadata.infinispan.realm" value="default"/>
    <property name="org.appformer.ext.metadata.infinispan.cluster" value="kie-cluster"/>
    <property name="org.appformer.ext.metadata.infinispan.username" value="<DATAGRID_USER_NAME>"/>
    <property name="org.appformer.ext.metadata.infinispan.password" value="<DATA_GRID_PASSWORD>"/>
    <property name="org.appformer.ext.metadata.infinispan.server.name" value="<SERVER_NAME>"/>
    <property name="org.appformer.ext.metadata.infinispan.sasl.qop" value="<SASL_QOP>"/>
    <property name="org.uberfire.nio.git.dir" value="/data"/>
    <property name="es.set.netty.runtime.available.processors" value="false"/>
    <property name="org.appformer.concurrent.managed.thread.limit" value="1000"/>
    <property name="org.appformer.concurrent.unmanaged.thread.limit" value="1000"/>
    <property name="org.appformer.concurrent.indexing.thread.limit" value="0"/>
  </system-properties>
- Save the standalone-full.xml file.
- To start the cluster, navigate to EAP_HOME/bin and enter the following command:

  $ ./standalone.sh -c standalone-full.xml -b <HOST>

  Replace <HOST> with the IP address or host name of the server where you installed Red Hat Process Automation Manager.
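  When the node finishes starting, you can optionally confirm that Business Central is reachable with a simple HTTP check (an illustrative verification, not part of the product procedure). An HTTP 200 or 3xx response code indicates that the application is up:

  $ curl -s -o /dev/null -w "%{http_code}\n" http://<HOST>:8080/business-central/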
28.6. Testing your high availability (HA) on-premise infrastructure
When you create a production-ready high availability (HA) on-premise infrastructure for Business Central, you must ensure that it meets the minimum hardware and performance requirements for a viable HA environment. An HA on-premise infrastructure consists of the following four main components: Business Central, the message system (AMQ Broker), the indexing server (Red Hat Data Grid), and a shared file system (NFS, GlusterFS, or Ceph).
Prerequisites
- A network environment of at least three nodes is configured with the following layout:
  - Node 1: Business Central
  - Node 2: Business Central
  - Node 3: AMQ Broker, Red Hat Data Grid, and NFS
Procedure
- Test the network speed:
  - In the command terminal of each server node, install iPerf3:

    $ dnf install iperf3

  - In the command terminal of the NFS server node (server node 3), start iPerf3 in server mode:

    $ iperf3 -s

  - In the command terminal of each Business Central server node, enter the following command to start iPerf3 in client mode with the NFS server node set as the host:

    $ iperf3 -c <NFS_SERVER_IP>

    In this example, replace <NFS_SERVER_IP> with the IP address of the NFS server.
  - Compare the results from each server node with the following example of minimum values:

    $ iperf3 -c 172.31.47.103
    Connecting to host 172.31.47.103, port 5201
    [  5] local 172.31.39.4 port 44820 connected to 172.31.47.103 port 5201
    [ ID] Interval           Transfer     Bitrate         Retr  Cwnd
    [  5]   0.00-1.00   sec   143 MBytes  1.20 Gbits/sec    0    419 KBytes
    [  5]   1.00-2.00   sec   111 MBytes   928 Mbits/sec    6    848 KBytes
    [  5]   2.00-3.00   sec  53.8 MBytes   451 Mbits/sec    0   1.08 MBytes
    [  5]   3.00-4.00   sec  52.5 MBytes   440 Mbits/sec    1   1022 KBytes
    [  5]   4.00-5.00   sec  53.8 MBytes   451 Mbits/sec    1    935 KBytes
    [  5]   5.00-6.00   sec  53.8 MBytes   451 Mbits/sec    1    848 KBytes
    [  5]   6.00-7.00   sec  52.5 MBytes   440 Mbits/sec    0   1.08 MBytes
    [  5]   7.00-8.00   sec  53.8 MBytes   451 Mbits/sec    1   1.01 MBytes
    [  5]   8.00-9.00   sec  53.8 MBytes   451 Mbits/sec    1    953 KBytes
    [  5]   9.00-10.00  sec  52.5 MBytes   440 Mbits/sec    1    856 KBytes
    - - - - - - - - - - - - - - - - - - - - - - - - -
    [ ID] Interval           Transfer     Bitrate         Retr
    [  5]   0.00-10.00  sec   680 MBytes   570 Mbits/sec   12  sender
    [  5]   0.00-10.04  sec   677 MBytes   566 Mbits/sec       receiver

    iperf Done.
- Verify the NFS information:
  - In the command terminal of each Business Central server node, mount the NFS node:

    $ mount -o actimeo=1 <NFS_SERVER_IP>:/opt/nfs/kie /opt/kie/niogit

  - In the command terminal of each mounted node, enter nfsiostat:

    $ nfsiostat

  - Compare the results from each server node with the following example of minimum values:

    $ nfsiostat
           ops/s  rpc bklog
           6.415      0.000

    read:  ops/s   kB/s   kB/op  retrans   avg RTT (ms)  avg exe (ms)  avg queue (ms)  errors
           0.031  0.045   1.452  0 (0.0%)         0.129         0.166           0.019  0 (0.0%)
    write: ops/s   kB/s   kB/op  retrans   avg RTT (ms)  avg exe (ms)  avg queue (ms)  errors
           0.517  0.467   0.903  0 (0.0%)         1.235         1.269           0.018  0 (0.0%)
- Verify that the disk is an SSD:
  - In the command terminal of the NFS server, enter df -h to identify the disk, as shown in the following example:

    $ df -h
    Filesystem               Size  Used Avail Use% Mounted on
    devtmpfs                 3.8G     0  3.8G   0% /dev
    tmpfs                    3.9G     0  3.9G   0% /dev/shm
    tmpfs                    3.9G   33M  3.8G   1% /run
    tmpfs                    3.9G     0  3.9G   0% /sys/fs/cgroup
    /dev/xvda2                25G  3.2G   22G  13% /
    tmpfs                    781M     0  781M   0% /run/user/1000
    172.31.47.103:/root/nfs   25G  2.1G   23G   9% /root/nfs

  - Enter lsblk -d to verify that the disk is an SSD:

    $ lsblk -d
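    The default lsblk -d listing does not print the rotational flag. Optionally, you can request it explicitly with the -o option (an illustrative variant of the command above); a ROTA value of 0 indicates an SSD, for example:

    $ lsblk -d -o NAME,ROTA
    NAME  ROTA
    xvda     0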
  - Enter hdparm -Tt to test the disk:

    $ hdparm -Tt /dev/xvda2

  - Compare the results from each server node with the following example of minimum values:

    $ hdparm -Tt /dev/xvda2
    /dev/xvda2:
     Timing cached reads:   18670 MB in  1.99 seconds = 9389.01 MB/sec
     Timing buffered disk reads: 216 MB in  3.03 seconds =  71.40 MB/sec
28.7. Verifying the Red Hat Process Automation Manager cluster
After configuring the cluster for Red Hat Process Automation Manager, create an asset to verify that the installation is working.
Procedure
- In a web browser, enter <node-IP-address>:8080/business-central. Replace <node-IP-address> with the IP address of a particular node.
- Enter the admin user credentials that you created during installation. The Business Central home page appears.
- Select Menu → Design → Projects.
- Open the MySpace space.
- Click Try Samples → Mortgages → OK. The Assets window appears.
- Click Add Asset → Data Object.
- Enter MyDataObject in the Data Object field and click OK.
- Click Spaces → MySpace → Mortgages and confirm that MyDataObject is in the list of assets.
- Enter the following URL in a web browser, where <node_IP_address> is the IP address of a different node of the cluster:

  http://<node_IP_address>:8080/business-central
- Enter the same credentials that you used to log in to Business Central on the first node, where you created the MyDataObject asset.
- Select Menu → Design → Projects.
- Open the MySpace space.
- Select the Mortgages project.
- Verify that MyDataObject is in the asset list.
- Delete the Mortgages project.