Deploying Red Hat Process Automation Manager on Red Hat OpenShift Container Platform
Abstract
Preface
As a developer or system administrator, you can deploy a variety of Red Hat Process Automation Manager environments on Red Hat OpenShift Container Platform, such as an authoring environment, a managed server environment, an immutable server environment, and other supported environment options.
Making open source more inclusive
Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.
Part I. Deploying a Red Hat Process Automation Manager environment on Red Hat OpenShift Container Platform 4 using Operators
As a system engineer, you can deploy a Red Hat Process Automation Manager environment on Red Hat OpenShift Container Platform 4 to provide an infrastructure to develop or execute services, process applications, and other business assets. You can use OpenShift Operators to deploy the environment defined in a structured YAML file and to maintain and modify this environment as necessary.
Prerequisites
- A Red Hat OpenShift Container Platform 4 environment is available. For the exact versions of Red Hat OpenShift Container Platform that the current release supports, see Red Hat Process Automation Manager 7 Supported Configurations.
- The OpenShift project for the deployment is created.
- You are logged into the project using the OpenShift web console.
The following resources are available on the OpenShift cluster. Depending on the application load, higher resource allocation might be necessary for acceptable performance.
- For an authoring environment, 4 gigabytes of memory and 2 virtual CPU cores for the Business Central pod. In a high-availability deployment, these resources are required for each replica and two replicas are created by default.
- For a production or immutable environment, 2 gigabytes of memory and 1 virtual CPU core for each replica of the Business Central Monitoring pod.
- 2 gigabytes of memory and 1 virtual CPU core for each replica of each KIE Server pod.
- 1 gigabyte of memory and half a virtual CPU core for each replica of a Smart Router pod.
In a high-availability authoring deployment, additional resources according to the configured defaults are required for the MySQL, Red Hat AMQ, and Red Hat Data Grid pods.
Note: The default values for MaxMetaspaceSize are:
- Business Central images: 1024m
- KIE Server images: 512m
- Other images: 256m
Dynamic persistent volume (PV) provisioning is enabled. Alternatively, if dynamic PV provisioning is not enabled, enough persistent volumes must be available. By default, the deployed components require the following PV sizes:
- Each KIE Server deployment by default requires one 1Gi PV for the database. You can change the database PV size. You can deploy multiple KIE Servers; each requires a separate database PV. This requirement does not apply if you use an external database server.
- By default, Business Central requires one 1Gi PV. You can change the PV size for Business Central persistent storage.
- Business Central Monitoring requires one 64Mi PV.
- Smart Router requires one 64Mi PV.
- If you intend to deploy a high-availability authoring environment or any environment with Business Central Monitoring pods, your OpenShift environment supports persistent volumes with ReadWriteMany access mode. If your environment does not support this mode, you can use NFS to provision the volumes. For information about access mode support in OpenShift public and dedicated clouds, see Access Modes in the Red Hat OpenShift Container Platform documentation.
Chapter 1. Overview of Red Hat Process Automation Manager on Red Hat OpenShift Container Platform
You can deploy Red Hat Process Automation Manager into a Red Hat OpenShift Container Platform environment.
In this solution, components of Red Hat Process Automation Manager are deployed as separate OpenShift pods. You can scale each of the pods up and down individually to provide as few or as many containers as required for a particular component. You can use standard OpenShift methods to manage the pods and balance the load.
The following key components of Red Hat Process Automation Manager are available on OpenShift:
KIE Server, also known as Execution Server, is the infrastructure element that runs decision services, process applications, and other deployable assets (collectively referred to as services). All logic of the services runs on execution servers.
A database server is normally required for KIE Server. You can provide a database server in another OpenShift pod or configure an execution server on OpenShift to use any other database server. Alternatively, KIE Server can use an H2 database; in this case, you cannot scale the pod.
In some templates, you can scale up a KIE Server pod to provide as many copies as required, running on the same host or different hosts. As you scale a pod up or down, all of its copies use the same database server and run the same services. OpenShift provides load balancing and a request can be handled by any of the pods.
You can deploy a separate KIE Server pod to run a different group of services. That pod can also be scaled up or down. You can have as many separate replicated KIE Server pods as required.
Business Central is a web-based interactive environment used for authoring services. It also provides a management and monitoring console. You can use Business Central to develop services and deploy them to KIE Servers. You can also use Business Central to monitor the execution of processes.
Business Central is a centralized application. However, you can configure it for high availability, where multiple pods run and share the same data.
Business Central includes a Git repository that holds the source for the services that you develop on it. It also includes a built-in Maven repository. Depending on configuration, Business Central can place the compiled services (KJAR files) into the built-in Maven repository or (if configured) into an external Maven repository.
- Business Central Monitoring is a web-based management and monitoring console. It can manage the deployment of services to KIE Servers and provide monitoring information, but does not include authoring capabilities. You can use this component to manage staging and production environments.
- Smart Router is an optional layer between KIE Servers and other components that interact with them. When your environment includes many services running on different KIE Servers, Smart Router provides a single endpoint to all client applications. A client application can make a REST API call that requires any service. Smart Router automatically calls the KIE Server that can process a particular request.
You can arrange these and other components into various environment configurations within OpenShift.
1.1. Architecture of an authoring environment
In Red Hat Process Automation Manager, the Business Central component provides a web-based interactive user interface for authoring services. The KIE Server component runs the services.
KIE Server uses a database server to store the state of process services.
You can also use Business Central to deploy services onto a KIE Server. You can use several KIE Servers to run different services and control the servers from the same Business Central.
Single authoring environment
In a single authoring environment, only one instance of Business Central is running. Multiple users can access its web interface at the same time, however the performance can be limited and there is no failover capability.
Business Central includes a built-in Maven repository that stores the built versions of the services that you develop (KJAR files/artifacts). You can use your continuous integration and continuous deployment (CICD) tools to retrieve these artifacts from the repository and move them as necessary.
Business Central saves the source code in a built-in Git repository, stored in the .niogit directory. It uses a built-in indexing mechanism to index the assets in your services.
Business Central uses persistent storage for the Maven repository and for the Git repository.
A single authoring environment, by default, includes one KIE Server instance. This KIE Server instance uses a built-in H2 database engine to store the state of process services.
A single authoring environment can use the controller strategy. Business Central includes the Controller, a component that can manage KIE Servers. When you configure KIE Server to connect to Business Central, KIE Server uses a REST API to connect to the Controller. This connection opens a persistent WebSocket. In an OpenShift deployment that uses the controller strategy, each KIE Server instance is initially configured to connect to the Business Central Controller.
When you use the Business Central user interface to deploy or manage a service on KIE Server, KIE Server receives the request through the Controller connection WebSocket. To deploy a service, KIE Server requests the necessary artifact from the Maven repository that is a part of Business Central.
Client applications use a REST API to use services that run on KIE Server.
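For example, a client application can list the KIE containers (deployed services) that are available on a KIE Server through its REST API. The route host name and the credentials below are placeholders for the values configured in your deployment:

curl -u adminUser:password -H "Accept: application/json" \
  https://<kieserver-route>/services/rest/server/containers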
Figure 1.1. Architecture diagram for a single authoring environment
Clustering KIE Servers and using multiple KIE Servers
You can scale a KIE Server pod to run a clustered KIE Server environment. To scale a KIE Server, you must ensure that it uses a database server in a separate pod or an external database server, and not a built-in H2 database engine.
In a clustered deployment, several instances of KIE Server run the same services. These servers can connect to the Business Central Controller using the same server ID, so they can receive the same requests from the controller. Red Hat OpenShift Container Platform provides load-balancing between the servers. Decision services and Red Hat build of OptaPlanner services that run on a clustered KIE Server instance must be stateless, because requests from the same client might be processed by different instances.
You can also deploy several independent KIE Servers to run different services. In this case, the servers connect to the Business Central Controller with different server ID values. You can use the Business Central UI to deploy services to each of the servers.
Smart Router
The optional Smart Router component provides a layer between client applications and KIE Server instances. It can be useful if you are using several independent KIE Server instances.
The client application can use services running on different KIE Server instances, but always connects to the Smart Router. The Smart Router automatically passes the request to the KIE Server instance that runs the required service. The Smart Router also enables management of service versions and provides an additional load-balancing layer.
High-availability authoring environment
In a high-availability (HA) authoring environment, the Business Central pod is scaled, so several instances of Business Central are running. Red Hat OpenShift Container Platform provides load balancing for user requests. This environment provides optimal performance for multiple users and supports failover.
Each instance of Business Central includes the Maven repository for the built artifacts and uses the .niogit Git repository for source code. The instances use shared persistent storage for the repositories. A persistent volume with ReadWriteMany access mode is required for this storage.
An instance of Red Hat Data Grid provides indexing of all projects and assets developed in Business Central.
An instance of Red Hat AMQ propagates Java CDI messages between all instances of Business Central. For example, when a new project is created or when an asset is locked or modified on one of the instances, this information is immediately reflected in all other instances.
The controller strategy is not suitable for clustered deployment. In an OpenShift deployment, a high-availability Business Central must manage KIE Servers using the OpenShift startup strategy.
Each KIE Server deployment (which can be scaled) creates a ConfigMap that reflects its current state. Business Central discovers all KIE Servers by reading their ConfigMaps.
When the user requests a change in the KIE Server configuration (for example, deploys or undeploys a service), Business Central initiates a connection to KIE Server and sends a REST API request. KIE Server changes the ConfigMap to reflect the new configuration state and then triggers its redeployment, so that all instances are redeployed and reflect the new configuration. For more information about ConfigMaps, see KIE configuration and ConfigMaps.
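For example, you can view the ConfigMaps that the KIE Server deployments create by using the oc command. The exact ConfigMap names depend on your application name, so the name below is only a placeholder:

oc get configmaps
oc get configmap <kieserver-configmap-name> -o yaml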
You can deploy several independent KIE Servers in your OpenShift environment. Each of the KIE Servers has a separate ConfigMap with the necessary configuration. You can scale each of the KIE Servers separately.
You can include Smart Router in the OpenShift deployment.
Figure 1.2. Architecture diagram for a high-availability authoring environment
Chapter 2. Preparation for deploying Red Hat Process Automation Manager in your OpenShift environment
Before deploying Red Hat Process Automation Manager in your OpenShift environment, you must complete several procedures. You do not need to repeat these procedures if you want to deploy additional images, for example, for new versions of processes or for other processes.
If you are deploying a trial environment, complete the procedure described in Section 2.1, “Ensuring your environment is authenticated to the Red Hat registry” and do not complete any other preparation procedures.
2.1. Ensuring your environment is authenticated to the Red Hat registry
To deploy Red Hat Process Automation Manager components on Red Hat OpenShift Container Platform, you must ensure that OpenShift can download the correct images from the Red Hat registry.
OpenShift must be configured to authenticate with the Red Hat registry using your service account user name and password. This configuration is specific to a namespace, and if operators work, the configuration is already completed for the openshift namespace.
However, if the image streams for Red Hat Process Automation Manager are not found in the openshift namespace or if the operator is configured to update Red Hat Process Automation Manager to a new version automatically, the operator needs to download images into the namespace of your project. You must complete the authentication configuration for this namespace.
Procedure
- Ensure you are logged in to OpenShift with the oc command and that your project is active.
- Complete the steps documented in Registry Service Accounts for Shared Environments. You must log in to the Red Hat Customer Portal to access the document and to complete the steps to create a registry service account.
- Select the OpenShift Secret tab and click the link under Download secret to download the YAML secret file.
- View the downloaded file and note the name that is listed in the name: entry. Run the following commands:

  oc create -f <file_name>.yaml
  oc secrets link default <secret_name> --for=pull
  oc secrets link builder <secret_name> --for=pull

  Replace <file_name> with the name of the downloaded file and <secret_name> with the name that is listed in the name: entry of the file.
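To verify that the pull secret is linked, you can inspect the service accounts; the secret name from your downloaded file should appear under the image pull secrets. This is an optional verification step:

oc describe serviceaccount default
oc describe serviceaccount builder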
2.2. Creating the secrets for KIE Server
OpenShift uses objects called secrets to hold sensitive information such as passwords or keystores. For more information about OpenShift secrets, see What is a secret in the Red Hat OpenShift Container Platform documentation.
In order to provide HTTPS access, KIE Server uses an SSL certificate. The deployment can create a sample secret automatically. However, in production environments you must create an SSL certificate for KIE Server and provide it to your OpenShift environment as a secret.
Procedure
- Generate an SSL keystore named keystore.jks with a private and public key for SSL encryption for KIE Server. For more information about creating keystores and using certificates, see How to Configure Server Security. A sample keytool command is shown after this procedure.
  Note: In a production environment, generate a valid signed certificate that matches the expected URL for KIE Server.
- Record the name of the certificate. The default value for this name in the Red Hat Process Automation Manager configuration is jboss.
- Record the password of the keystore file. The default value for this password in the Red Hat Process Automation Manager configuration is mykeystorepass.
- Use the oc command to generate a secret named kieserver-app-secret from the new keystore file:

  $ oc create secret generic kieserver-app-secret --from-file=keystore.jks
2.3. Creating the secrets for Business Central
In order to provide HTTPS access, Business Central uses an SSL certificate. The deployment can create a sample secret automatically. However, in production environments you must create an SSL certificate for Business Central and provide it to your OpenShift environment as a secret.
Do not use the same certificate and keystore for Business Central and KIE Server.
Procedure
- Generate an SSL keystore named keystore.jks with a private and public key for SSL encryption for Business Central. For more information about creating keystores and using certificates, see How to Configure Server Security.
  Note: In a production environment, generate a valid signed certificate that matches the expected URL for Business Central.
- Record the name of the certificate. The default value for this name in the Red Hat Process Automation Manager configuration is jboss.
- Record the password of the keystore file. The default value for this password in the Red Hat Process Automation Manager configuration is mykeystorepass.
- Use the oc command to generate a secret named businesscentral-app-secret from the new keystore file:

  $ oc create secret generic businesscentral-app-secret --from-file=keystore.jks
2.4. Creating the secrets for the AMQ broker connection
If you want to connect any KIE Server to an AMQ broker and to use SSL for the AMQ broker connection, you must create an SSL certificate for the connection and provide it to your OpenShift environment as a secret.
Procedure
- Generate an SSL keystore named keystore.jks with a private and public key for SSL encryption for the AMQ broker connection. For more information about creating keystores and using certificates, see How to Configure Server Security.
  Note: In a production environment, generate a valid signed certificate that matches the expected URL for the AMQ broker connection.
- Record the name of the certificate. The default value for this name in the Red Hat Process Automation Manager configuration is jboss.
- Record the password of the keystore file. The default value for this password in the Red Hat Process Automation Manager configuration is mykeystorepass.
- Use the oc command to generate a secret named broker-app-secret from the new keystore file:

  $ oc create secret generic broker-app-secret --from-file=keystore.jks
2.5. Creating the secrets for Smart Router
In order to provide HTTPS access, Smart Router uses an SSL certificate. The deployment can create a sample secret automatically. However, in production environments you must create an SSL certificate for Smart Router and provide it to your OpenShift environment as a secret.
Do not use the same certificate and keystore for Smart Router as the ones used for KIE Server or Business Central.
Procedure
- Generate an SSL keystore named keystore.jks with a private and public key for SSL encryption for Smart Router. For more information about creating keystores and using certificates, see How to Configure Server Security.
  Note: In a production environment, generate a valid signed certificate that matches the expected URL for Smart Router.
- Record the name of the certificate. The default value for this name in the Red Hat Process Automation Manager configuration is jboss.
- Record the password of the keystore file. The default value for this password in the Red Hat Process Automation Manager configuration is mykeystorepass.
- Use the oc command to generate a secret named smartrouter-app-secret from the new keystore file:

  $ oc create secret generic smartrouter-app-secret --from-file=keystore.jks
2.6. Building a custom KIE Server extension image for an external database
If you want to use an external database server for a KIE Server and the database server is not a MySQL or PostgreSQL server, you must build a custom KIE Server extension image with drivers for this server before deploying your environment.
Complete the steps in this build procedure to provide drivers for any of the following database servers:
- Microsoft SQL Server
- IBM DB2
- Oracle Database
- Sybase
Optionally, you can use this procedure to build a new version of drivers for any of the following database servers:
- MySQL
- MariaDB
- PostgreSQL
For the supported versions of the database servers, see Red Hat Process Automation Manager 7 Supported Configurations.
The build procedure creates a custom extension image that extends the existing KIE Server image. You must import this custom extension image into your OpenShift environment and then reference it in the EXTENSIONS_IMAGE parameter.
Prerequisites
- You are logged in to your OpenShift environment using the oc command. Your OpenShift user must have the registry-viewer role. For more information about assigning the registry-viewer role, see the "Accessing the registry" section in the "Registry" chapter of the OpenShift Container Platform 4.10 Documentation.
- For Oracle Database, IBM DB2, or Sybase, you downloaded the JDBC driver from the database server vendor.
- You have installed the following required software:
  - Docker: For installation instructions, see Get Docker.
  - CEKit version 3.11.0 or higher: For installation instructions, see Installation.
  - The following libraries and extensions for CEKit. For more information, see Dependencies.
    - docker, provided by the python3-docker package or similar package
    - docker-squash, provided by the python3-docker-squash package or similar package
    - behave, provided by the python3-behave package or similar package
Procedure
- For IBM DB2, Oracle Database, or Sybase, provide the JDBC driver JAR file in a local directory.
- Download the rhpam-7.13.5-openshift-templates.zip product deliverable file from the Software Downloads page of the Red Hat Customer Portal.
- Unzip the file and, using the command line, change to the contrib/jdbc/cekit directory of the unzipped file. This directory contains the source code for the custom build.
- Enter one of the following commands, depending on the database server type:
For Microsoft SQL Server:
make mssql
For MySQL:
make mysql
For PostgreSQL:
make postgresql
For MariaDB:
make mariadb
For IBM DB2:
make db2 artifact=/tmp/db2jcc4.jar version=10.2
In this command, replace /tmp/db2jcc4.jar with the path name of the IBM DB2 driver and 10.2 with the version of the driver.

For Oracle Database:

make oracle artifact=/tmp/ojdbc7.jar version=7.0

In this command, replace /tmp/ojdbc7.jar with the path name of the Oracle Database driver and 7.0 with the version of the driver.

For Sybase:

make sybase artifact=/tmp/jconn4-16.0_PL05.jar version=16.0_PL05

In this command, replace /tmp/jconn4-16.0_PL05.jar with the path name of the downloaded Sybase driver and 16.0_PL05 with the version of the driver.

Alternatively, if you need to update the driver class or driver XA class for the Sybase driver, you can set the DRIVER_CLASS or DRIVER_XA_CLASS variable for this command, for example:

export DRIVER_CLASS=another.class.Sybase && make sybase artifact=/tmp/jconn4-16.0_PL05.jar version=16.0_PL05
Enter the following command to list the Docker images that are available locally:
docker images
Note the name of the image that was built, for example, jboss-kie-db2-extension-openshift-image, and the version tag of the image, for example, 11.1.4.4 (not the latest tag).
- Access the registry of your OpenShift environment directly and push the image to the registry. Depending on your user permissions, you can push the image into the openshift namespace or into a project namespace. For instructions about accessing the registry and pushing the images, see Accessing registry directly from the cluster in the Red Hat OpenShift Container Platform product documentation.
2.7. Preparing Git hooks
In an authoring environment you can use Git hooks to execute custom operations when the source code of a project in Business Central is changed. The typical use of Git hooks is for interaction with an upstream repository.
To enable Git hooks to interact with an upstream repository using SSH authentication, you must also provide a secret key and a known hosts file for authentication with the repository.
Skip this procedure if you do not want to configure Git hooks.
Procedure
- Create the Git hooks files. For instructions, see the Git hooks reference documentation. A sample post-commit script is shown after this procedure.
  Note: A pre-commit script is not supported in Business Central. Use a post-commit script.
- Create a configuration map (ConfigMap) or persistent volume with the files. For more information about ConfigMaps, see KIE configuration and ConfigMaps.
  If the Git hooks consist of one or several fixed script files, use the oc command to create a configuration map. For example:

  oc create configmap git-hooks --from-file=post-commit=post-commit
If the Git hooks consist of long files or depend on binaries, such as executable or JAR files, use a persistent volume. You must create a persistent volume, create a persistent volume claim and associate the volume with the claim, and transfer files to the volume.
For instructions about persistent volumes and persistent volume claims, see Storage in the Red Hat OpenShift Container Platform documentation. For instructions about copying files onto a persistent volume, see Transferring files in and out of containers.
- If the Git hooks scripts must interact with an upstream repository using SSH authentication, prepare a secret with the necessary files:
  - Prepare the id_rsa file with a private key that matches a public key stored in the repository.
  - Prepare the known_hosts file with the correct name, address, and public key for the repository.
  - Create a secret with the two files using the oc command, for example:

    oc create secret generic git-hooks-secret --from-file=id_rsa=id_rsa --from-file=known_hosts=known_hosts

  Note: When the deployment uses this secret, it mounts the id_rsa and known_hosts files into the /home/jboss/.ssh directory on Business Central pods.
2.8. Provisioning persistent volumes with ReadWriteMany access mode using NFS
If you want to deploy Business Central Monitoring or high-availability Business Central, your environment must provision persistent volumes with ReadWriteMany access mode.
If your configuration requires provisioning persistent volumes with ReadWriteMany access mode but your environment does not support such provisioning, use NFS to provision the volumes. Otherwise, skip this procedure.
Procedure
Deploy an NFS server and provision the persistent volumes using NFS. For information about provisioning persistent volumes using NFS, see the "Persistent storage using NFS" section of the OpenShift Container Platform Storage guide.
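The following persistent volume definition is a minimal sketch of an NFS-backed volume with ReadWriteMany access mode; the volume name, size, NFS server address, and export path are placeholders for the values in your environment:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: businesscentral-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: <nfs_server_address>
    path: /exports/businesscentral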
2.9. Extracting the source code from Business Central for use in an S2I build
If you are planning to create immutable KIE servers using the source-to-image (S2I) process, you must provide the source code for your services in a Git repository. If you are using Business Central for authoring services, you can extract the source code for your service and place it into a separate Git repository, such as GitHub or an on-premise installation of GitLab, for use in the S2I build.
Skip this procedure if you are not planning to use the S2I process or if you are not using Business Central for authoring services.
Procedure
Use the following command to extract the source code:
git clone https://<business-central-host>:443/git/<MySpace>/<MyProject>
In this command, replace the following variables:
- <business-central-host> with the host on which Business Central is running
- <MySpace> with the name of the Business Central space in which the project is located
- <MyProject> with the name of the project
Note: To view the full Git URL for a project in Business Central, click Menu → Design → <MyProject> → Settings.
Note: If you are using self-signed certificates for HTTPS communication, the command might fail with an SSL certificate problem error message. In this case, disable SSL certificate verification in git, for example, using the GIT_SSL_NO_VERIFY environment variable:

env GIT_SSL_NO_VERIFY=true git clone https://<business-central-host>:443/git/<MySpace>/<MyProject>
- Upload the source code to another Git repository, such as GitHub or GitLab, for the S2I build.
2.10. Preparing for deployment in a restricted network
You can deploy Red Hat Process Automation Manager in a restricted network that is not connected to the public Internet. For instructions about operator deployment in a restricted network, see Using Operator Lifecycle Manager on restricted networks in Red Hat OpenShift Container Platform documentation.
In Red Hat Process Automation Manager 7.13, deployment on restricted networks is for Technology Preview only. For more information on Red Hat Technology Preview features, see Technology Preview Features Scope.
In order to use a deployment that does not have outgoing access to the public Internet, you must also prepare a Maven repository with a mirror of all the necessary artifacts. For instructions about creating this repository, see Section 2.11, “Preparing a Maven mirror repository for offline use”.
2.11. Preparing a Maven mirror repository for offline use
If your Red Hat OpenShift Container Platform environment does not have outgoing access to the public Internet, you must prepare a Maven repository with a mirror of all the necessary artifacts and make this repository available to your environment.
You do not need to complete this procedure if your Red Hat OpenShift Container Platform environment is connected to the Internet.
Prerequisites
- A computer that has outgoing access to the public Internet is available.
Procedure
Configure a Maven release repository to which you have write access. The repository must allow read access without authentication and your OpenShift environment must have network access to this repository.
You can deploy a Nexus repository manager in the OpenShift environment. For instructions about setting up Nexus on OpenShift, see Setting up Nexus in the Red Hat OpenShift Container Platform 3.11 documentation. The documented procedure is applicable to Red Hat OpenShift Container Platform 4.
Use this repository as a mirror to host the publicly available Maven artifacts. You can also provide your own services in this repository in order to deploy these services on immutable servers or to deploy them on managed servers using Business Central monitoring.
- On the computer that has an outgoing connection to the public Internet, complete the following steps:
  - Navigate to the Software Downloads page in the Red Hat Customer Portal (login required), and select the product and version from the drop-down options:
    - Product: Process Automation Manager
    - Version: 7.13.5
  - Download the Red Hat Process Automation Manager 7.13.5 Offliner Content List (rhpam-7.13.5-offliner.zip) product deliverable file.
  - Extract the contents of the rhpam-7.13.5-offliner.zip file into any directory. Change to the directory and enter the following command:

    ./offline-repo-builder.sh offliner.txt

    This command creates the repository subdirectory and downloads the necessary artifacts into this subdirectory. This is the mirror repository.
    If a message reports that some downloads have failed, run the same command again. If downloads fail again, contact Red Hat support.
  - Upload all artifacts from the repository subdirectory to the Maven mirror repository that you prepared. You can use the Maven Repository Provisioner utility, available from the Maven repository tools Git repository, to upload the artifacts.
If you developed services outside of Business Central and they have additional dependencies, add the dependencies to the mirror repository. If you developed the services as Maven projects, you can use the following steps to prepare these dependencies automatically. Complete the steps on the computer that has an outgoing connection to the public Internet.
  - Create a backup of the local Maven cache directory (~/.m2/repository) and then clear the directory.
  - Build the source of your projects using the mvn clean install command.
  - For every project, enter the following command to ensure that Maven downloads all runtime dependencies for all the artifacts generated by the project:

    mvn -e -DskipTests dependency:go-offline -f /path/to/project/pom.xml --batch-mode -Djava.net.preferIPv4Stack=true

    Replace /path/to/project/pom.xml with the path of the pom.xml file of the project.
  - Upload all artifacts from the local Maven cache directory (~/.m2/repository) to the Maven mirror repository that you prepared. You can use the Maven Repository Provisioner utility, available from the Maven repository tools Git repository, to upload the artifacts.
Chapter 3. Deployment and management of a Red Hat Process Automation Manager environment using OpenShift operators
To deploy a Red Hat Process Automation Manager environment, the OpenShift operator uses a YAML source that describes the environment. Red Hat Process Automation Manager provides an installer that you can use to form the YAML source and deploy the environment.
When the Business Automation operator deploys the environment, it creates a YAML description of the environment, and then ensures that the environment is consistent with the description at all times. You can edit the description to modify the environment.
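For reference, a minimal KieApp custom resource for a trial environment might look like the following sketch. The resource name is a placeholder, and all other settings use the defaults that the operator applies for the selected environment type:

apiVersion: app.kiegroup.org/v2
kind: KieApp
metadata:
  name: rhpam-trial
spec:
  environment: rhpam-trial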
You can remove the environment by deleting the operator application in Red Hat OpenShift Container Platform.
When you remove an environment with a high-availability Business Central, the operator does not delete Persistent Volume Claims that were created as part of the JBoss Datagrid and JBoss AMQ StatefulSet creation. This behaviour is a part of Kubernetes design, as deletion of the Persistent Volume Claims could cause data loss. For more information about handling persistent volumes during deletion of a StatefulSet, see the Kubernetes documentation.
If you create a new environment using the same namespace and the same application name, the environment reuses the persistent volumes for increased performance.
To ensure that new deployments do not use any old data, you can delete the Persistent Volume Claims manually.
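For example, you can list the remaining claims in the project and delete them with the oc command; the claim name is a placeholder that depends on your application name:

oc get pvc
oc delete pvc <pvc_name>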
3.1. Subscribing to the Business Automation operator
To be able to deploy Red Hat Process Automation Manager using operators, you must subscribe to the Business Automation operator in OpenShift.
Procedure
- Enter your project in the OpenShift Web cluster console.
- In the OpenShift Web console navigation panel, select Catalog → OperatorHub or Operators → OperatorHub.
- Search for Business Automation, select it and click Install.
- On the Create Operator Subscription page, select your target namespace and approval strategy.
  Optional: Set Approval strategy to Automatic to enable automatic operator updates. An operator update does not immediately update the product, but is required before you update the product. Configure automatic or manual product updates using the settings in every particular product deployment.
- Click Subscribe to create a subscription.
3.2. Deploying a Red Hat Process Automation Manager environment using the operator
After you subscribe to the Business Automation operator, you can use the installer wizard to configure and deploy a Red Hat Process Automation Manager environment.
In Red Hat Process Automation Manager 7.13, the operator installer wizard is for Technology Preview only. For more information on Red Hat Technology Preview features, see Technology Preview Features Support Scope.
3.2.1. Starting the deployment of a Red Hat Process Automation Manager environment using the Business Automation operator
To start deploying a Red Hat Process Automation Manager environment using the Business Automation operator, access the installer wizard. The installer wizard is deployed when you subscribe to the operator.
Prerequisites
- You subscribed to the Business Automation operator. For instructions about subscribing to the operator, see Section 3.1, “Subscribing to the Business Automation operator”.
Procedure
- In the Red Hat OpenShift Container Platform web cluster console menu, select Catalog → Installed operators or Operators → Installed operators.
- Click the name of the operator that contains businessautomation. Information about this operator is displayed.
. Information about this operator is displayed. - Click the Installer link located on the right side of the window.
- If prompted, log in with your OpenShift credentials.
Result
The Installation tab of the wizard is displayed.
3.2.2. Setting the basic configuration of the environment
After you start to deploy a Red Hat Process Automation Manager environment using the Business Automation operator, you must select the type of the environment and set other basic configuration.
Prerequisites
- You started to deploy a Red Hat Process Automation Manager environment using the Business Automation operator and accessed the installer wizard according to the instructions in Section 3.2.1, “Starting the deployment of a Red Hat Process Automation Manager environment using the Business Automation operator”.
Procedure
- In the Application Name field, enter a name for the OpenShift application. This name is used in the default URLs for all components.
In the Environment list, select the type of environment. This type determines the default configuration; you can modify this configuration as necessary. The following types are available for Red Hat Process Automation Manager:
  - rhpam-trial: A trial environment that you can set up quickly and use to evaluate or demonstrate developing and running assets. Includes Business Central and a KIE Server. This environment does not use any persistent storage, and any work you do in the environment is not saved.
  - rhpam-authoring: An environment for creating and modifying services using Business Central. It consists of pods that provide Business Central for the authoring work and a KIE Server for test execution of the services.
  - rhpam-authoring-ha: An environment for creating and modifying services using Business Central. It consists of pods that provide Business Central for the authoring work and a KIE Server for test execution of the services. This version of the authoring environment supports scaling the Business Central pod to ensure high availability.
    Important: In Red Hat Process Automation Manager 7.13, high-availability Business Central deployment using the operator is for Technology Preview only. For more information about Red Hat Technology Preview features, see Technology Preview Features Support Scope.
  - rhpam-production: An environment for running existing services for staging and production purposes. This environment includes Business Central Monitoring, Smart Router, and two groups of KIE Server pods. You can deploy and undeploy services on every such group and also scale the group up or down as necessary. Use Business Central Monitoring to deploy, run, and stop the services and to monitor their execution.
  - rhpam-production-immutable: An alternate environment for running existing services for staging and production purposes. You can configure one or more KIE Server pods that build services from source or pull them from a Maven repository. You can then replicate each pod as necessary.
    You cannot remove any service from the pod or add any new service to the pod. If you want to use another version of a service or to modify the configuration in any other way, deploy a new server image to replace the old one. You can use any container-based integration workflows to manage the pods.
    When configuring this environment, in the KIE Servers tab you must customize KIE Server and either click the Set immutable server configuration button or set the KIE_SERVER_CONTAINER_DEPLOYMENT environment variable. For instructions about configuring KIE Server, see Section 3.2.5, “Setting custom KIE Server configuration of the environment”.
- Optionally, you can also use the Console tab to include Business Central Monitoring in this environment to monitor, stop, and restart the execution of process services. For instructions about configuring Business Central Monitoring, see Section 3.2.4, “Setting the Business Central configuration of the environment”.
- If you want to enable automatic upgrades to new versions, select the Enable Upgrades box. If this box is selected, when a new patch version of Red Hat Process Automation Manager 7.13 becomes available, the operator automatically upgrades your deployment to this version. All services are preserved and normally remain available throughout the upgrade process.
  If you also want to enable the same automatic upgrade process when a new minor version of Red Hat Process Automation Manager 7.x becomes available, select the Include minor version upgrades box.
  Note: Disable automatic updates if you want to use a custom image for any component of Red Hat Process Automation Manager.
- If you want to use image tags for downloading images, select the Use Image Tags box. This setting is useful if you use a custom registry or if you are directed by Red Hat support.
- If you want to disable SSL connections to your deployment, select the Disable SSL Routes box. In this case, all routes that are externally exposed use clear-text (HTTP) connections.
  Note: If this box is not selected, only secure (HTTPS) routes are exposed externally.
- If you want to use a custom image registry, under Custom registry, enter the URL of the registry in the Image registry field. If this registry does not have a properly signed and recognized SSL certificate, select the Insecure box.
  For instructions about configuring the image registry to use specific images, see Section 3.2.2.1, “Configuring the image registry to use specific images”.
- Under Admin user, enter the user name and password for the administrative user for Red Hat Process Automation Manager in the Username and Password fields.
  Important: If you use RH-SSO or LDAP authentication, the same user must be configured in your authentication system with the kie-server,rest-all,admin roles for Red Hat Process Automation Manager.
- Optional: Select the startup strategy. The OpenShiftStartupStrategy setting is enabled by default.
  In some authoring environments, you might need to ensure that several users can deploy services on the same KIE Server at the same time. By default, after deploying a service onto a KIE Server using Business Central, the user must wait a few seconds before more services can be deployed. The OpenShiftStartupStrategy setting is enabled by default and causes this limitation. To remove the limitation, select the ControllerBasedStartupStrategy setting from the Startup Strategy list.
  Note: Do not enable the controller-based startup strategy in an environment with a high-availability Business Central.
- Optional: If you want to use the OpenShift CA bundle as the trust store for HTTPS communication, select the Use OpenShift CA Bundle box.
- Optional: If you want to use a secret that contains the credentials for the Admin user, complete the following tasks (a sample command for creating such a secret is shown after this procedure):
  - From the Admin user configuration list, select Secret configuration.
  - Under OpenShift admin credentials secret, in the Secret field, enter the name of the secret. If the Secret field is left blank, the default secret kie-admin-credentials is used.
  - Under OpenShift admin credentials secret, in the Username field, enter the username for the admin user to use in the secret. If the Username field is left blank, the default username from kie-admin-credentials is used.
  - Under OpenShift admin credentials secret, in the Password field, enter the password for the admin user to use in the secret. If the Password field is left blank, the default password from kie-admin-credentials is used.
  Note: If kie-admin-credentials is missing, kie-admin-credentials is generated with a default username and password.
Next steps
If you want to deploy the environment with the default configuration, click Finish and then click Deploy to deploy the environment. Otherwise, continue to set other configuration parameters.
3.2.2.1. Configuring the image registry to use specific images
During the basic configuration of the environment you can configure the custom registry to use specific images. For more information about configuring the basic environment, see Section 3.2.2, “Setting the basic configuration of the environment”.
Prerequisites
- You started to configure a Red Hat Process Automation Manager environment using the Business Automation operator and accessed the installer wizard according to the instructions in Section 3.2.2, “Setting the basic configuration of the environment”.
- You have set the URL of the image registry in the Image registry field under the Custom registry of the Red Hat Process Automation Manager environment configuration according to the instructions in Section 3.2.2, “Setting the basic configuration of the environment”.
Procedure
During the Red Hat Process Automation Manager environment configuration, to use a specific image from the custom registry, complete one of the following steps:
- If you want to specify the image using the installer wizard, set the image context, image name, and image tag parameters in the Console and KIE Server tabs:
  Table 3.1. Parameters
  - Image context: The context of the image in the registry.
  - Image: The name of the image.
  - Image tag: The tag of the image. If you do not set this field, the installation uses the latest tag.
- If you want to specify the image by using a KieApp CR YAML file, add the image registry and image details to the file, for example:

  Example

  apiVersion: app.kiegroup.org/v2
  kind: KieApp
  spec:
    ...
    useImageTags: true
    imageRegistry:
      registry: registry.example.com:5000
    ...
    objects:
      ...
      servers:
        - id: ...
          ...
          image: YOUR_IMAGE_NAME
          imageContext: YOUR_IMAGE_CONTEXT
          imageTag: YOUR_IMAGE_TAG
          ...
If an ImageStream with the configured name exists in the namespace that you are using or in the Red Hat OpenShift Container Platform namespace, the operator uses this ImageStream and does not create a new ImageStream.
To configure an ImageStream to update automatically, the ImageStream property scheduledImportPolicy must be set to true in the KieApp CR YAML file. For example:

Example of scheduledImportPolicy

apiVersion: app.kiegroup.org/v2
kind: KieApp
spec:
  ...
  useImageTags: true
  scheduledImportPolicy: true
  imageRegistry:
    registry: registry.example.com:5000
  ...
  objects:
    ...
    servers:
      - id: ...
        ...
        image: YOUR_IMAGE_NAME
        imageContext: YOUR_IMAGE_CONTEXT
        imageTag: YOUR_IMAGE_TAG
        ...
3.2.3. Setting the security configuration of the environment
After you set the basic configuration of a Red Hat Process Automation Manager environment using the Business Automation operator, you can optionally configure authentication (security) settings for the environment.
Prerequisites
- You completed basic configuration of a Red Hat Process Automation Manager environment using the Business Automation operator in the installer wizard according to the instructions in Section 3.2.2, “Setting the basic configuration of the environment”.
- If you want to use RH-SSO or LDAP for authentication, you created users with the correct roles in your authentication system. You must create at least one administrative user (for example, adminUser) with the kie-server,rest-all,admin roles. This user must have the user name and password that you configured on the Installation tab.
- If you want to use RH-SSO authentication, you created the clients in your RH-SSO system for all components of your environment, specifying the correct URLs. This action ensures maximum control. Alternatively, the deployment can create the clients.
Procedure
- If the Installation tab is open, click Next to view the Security tab.
- In the Authentication mode list, select one of the following modes:
  - Internal: You configure the initial administration user when deploying the environment. You can create a post-configuration script to add users in the Elytron security subsystem. For instructions about creating a post-configuration script, see Section 3.4, “Providing Elytron user configuration or other post-configuration settings”.
  - RH-SSO: Red Hat Process Automation Manager uses Red Hat Single Sign-On for authentication.
  - LDAP: Red Hat Process Automation Manager uses LDAP for authentication.
- Complete the security configuration based on the Authentication mode that you selected:
  If you selected RH-SSO, configure RH-SSO authentication:
  - In the RH-SSO URL field, enter the RH-SSO URL.
  - In the Realm field, enter the RH-SSO realm name.
  - If you did not create RH-SSO clients for components of your environment, enter the credentials of an administrative user for your RH-SSO system in the SSO admin user and SSO admin password fields.
  - If your RH-SSO system does not have a properly signed SSL certificate, select the Disable SSL cert validation box.
  - If you want to change the RH-SSO principal attribute used for the user name, in the Principal attribute field enter the name of the new attribute.
  If you selected LDAP, configure LDAP authentication:
  Important: The baseFilter field is a legacy LDAP search filter used to locate the context of the user to authenticate. The input username or userDN obtained from the login module callback is substituted into the filter anywhere a {0} expression is used. A common example for the search filter is (uid={0}). For Elytron-based subsystems, this property should be configured only with the search filter parameter, without any search expression. For example, search for uid instead of (uid={0}).
  - In the LDAP URL field, enter the LDAP URL.
  - Set LDAP parameters. These parameters configure LDAP authentication using the Elytron subsystem of Red Hat JBoss EAP. For more information about using the Elytron subsystem of Red Hat JBoss EAP with LDAP, see Configure Authentication with an LDAP-Based Identity Store.
    Note: If you want to enable LDAP failover, you can set two or more LDAP server addresses in the AUTH_LDAP_URL parameter, separated by a space.
- If you selected RH-SSO or LDAP and your RH-SSO or LDAP system does not define all the roles required for your deployment, you can map authentication system roles to Red Hat Process Automation Manager roles.
  To enable role mapping, you must provide role mapping either as a single configuration string or as a role mapping configuration file. If you use a file for role mapping configuration, you must provide the file in an OpenShift configuration map or secret object in the project namespace (a sample command is shown after this procedure).
  The string must use the role=role1,role2;another-role=role2 pattern, for example admins=kie-server,rest-all,admin;developers=kie-server,rest-all.
  The file must contain entries in the following format:

  ldap_role=product_role1, product_role2...

  For example:

  admins=kie-server,rest-all,admin

  To enable the use of this string or file, make the following changes:
  - Under RoleMapper, in the Roles properties file field, enter the role configuration string or the fully qualified path name of the role mapping configuration file, for example /opt/eap/standalone/configuration/rolemapping/rolemapping.properties.
  - Optional: Select the Roles keep mapped box or the Roles keep non mapped box. If you define role mapping, by default only the roles that you define in the mapping are available. If you want to keep the original roles that are defined in the authentication system and that your mapping maps to other roles, select the Roles keep mapped box. If you want to keep the original roles that are defined in the authentication system and not mentioned in your mapping, select the Roles keep non mapped box.
  - If you are using a role configuration file, configure the fields under RoleMapper Configuration object:
    - Under the Kind label, select the kind of the object that provides the file (ConfigMap or Secret).
    - In the Name field, enter the name of the object. This object is automatically mounted on Business Central and KIE Server pods in the path that you specified for the role mapping configuration file.
Configure other passwords, if necessary:
- AMQ password and AMQ cluster password are passwords for interaction with ActiveMQ using the JMS API.
- Keystore password is the password for the keystore files used in secrets for HTTPS communication. Set this password if you created secrets according to instructions in Section 2.2, “Creating the secrets for KIE Server” or Section 2.3, “Creating the secrets for Business Central”.
- Database password is the password for database server pods that are a part of the environments.
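For example, if you use a role mapping file, you can provide it as a ConfigMap with the oc command; the ConfigMap name rolemapping-config is a placeholder:

oc create configmap rolemapping-config --from-file=rolemapping.properties=rolemapping.properties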
Next steps
If you want to deploy the environment with the default configuration of all components, click Finish and then click Deploy to deploy the environment. Otherwise, continue to set configuration parameters for Business Central, KIE Servers, and Smart Router.
3.2.4. Setting the Business Central configuration of the environment
After you set the basic and security configuration of a Red Hat Process Automation Manager environment using the Business Automation operator, you can optionally configure settings for the Business Central or Business Central Monitoring component of the environment.
All environment types except rhpam-production-immutable include this component.
By default, the rhpam-production-immutable environment does not include Business Central Monitoring. To include Business Central Monitoring in this environment, you must set the number of replicas for the Business Central Monitoring pod in the Replicas field or make any other change to the Business Central configuration fields.
Prerequisites
- You completed basic configuration of a Red Hat Process Automation Manager environment using the Business Automation operator in the installer wizard according to the instructions in Section 3.2.2, “Setting the basic configuration of the environment”.
- If you want to use RH-SSO or LDAP for authentication, you completed security configuration according to the instructions in Section 3.2.3, “Setting the security configuration of the environment”.
Procedure
- If the Installation or Security tab is open, click Next until you view the Console tab.
- If you created the secret for Business Central according to the instructions in Section 2.3, “Creating the secrets for Business Central”, enter the name of the secret in the Keystore secret field.
- Optional: If you want to use a custom image for the Business Central deployment, complete the following additional steps:
  - Set the custom registry in the Installation tab. If you do not set the custom registry, the installation uses the default Red Hat registry. For more information about setting the custom registry value, see Section 3.2.2, “Setting the basic configuration of the environment”.
  - In the Console tab, set the following fields:
    - Image context: The context of the image in the registry.
    - Image: The name of the image.
    - Image tag: The tag of the image. If you do not set this field, the installation uses the latest tag.
    For example, if the full address of the image is registry.example.com/mycontext/mycentral:1.0-SNAPSHOT, set the custom registry to registry.example.com, the Image context field to mycontext, the Image field to mycentral, and the Image tag field to 1.0-SNAPSHOT.
- Optional: To set a custom hostname for the external route, enter a domain in the Custom hostname to be used on the Business Central external Route field, formatted as in the following example:

  businesscentral.example.com

  Note: The custom hostname must be valid and resolvable.
  To change the custom hostname, you can modify the routeHostname property.
- Optional: To enable and set the Edge termination route, complete the following steps:
- Under Change route termination, select Enable Edge termination.
- Optional: In the Key field, enter the private key.
- Optional: In the Certificate field, enter the certificate.
- Optional: In the CaCertificate field, enter the CaCertificate.
Optional: Configure Git hooks.
In an authoring environment, you can use Git hooks to facilitate interaction between the internal Git repository of Business Central and an external Git repository. If you want to use Git hooks, you must prepare a Git hooks directory in an OpenShift configuration map, secret, or persistent volume claim object in the project namespace. You can also prepare a secret with the SSH key and known hosts files for Git SSH authentication. For instructions about preparing Git hooks, see Section 2.7, “Preparing Git hooks”.
To use a Git hooks directory, make the following changes (a YAML sketch follows these steps):
- Under GitHooks, in the Mount path field, enter a fully qualified path for the directory, for example, `/opt/kie/data/git/hooks`.
- In the fields under GitHooks Configuration object, select the Kind of the object that provides the file (`ConfigMap`, `Secret`, or `PersistentVolumeClaim`) and enter the Name of the object. This object is automatically mounted on the Business Central pods in the path that you specified for the Git hooks directory.
- Optional: In the SSH secret field, enter the name of the secret with the SSH key and known hosts files.
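For reference, the Git hooks settings collected by the wizard are stored on the Business Central (console) object of the KieApp custom resource. The following is a minimal sketch only; the field names (`gitHooks`, `mountPath`, `from`, `sshSecret`) and the object names are assumptions based on the wizard labels, so verify them against the KieApp CRD in your cluster before using them:
spec:
  objects:
    console:
      gitHooks:
        # Directory where the hook files are mounted (wizard: Mount path)
        mountPath: /opt/kie/data/git/hooks
        # Object that provides the hook files (wizard: Kind and Name)
        from:
          kind: ConfigMap
          name: my-githooks          # hypothetical ConfigMap name
        # Secret with the SSH key and known_hosts files (wizard: SSH secret)
        sshSecret: my-githooks-ssh   # hypothetical secret name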
- Optional: To configure KIE Server for decision management only capabilities so that jBPM and case management features are disabled, select Execute Kie Server only with Decisions capabilities.
- Optional: Enter the number of replicas for Business Central or Business Central Monitoring in the Replicas field. Do not change this number in a `rhpam-authoring` environment.
- Optional: To set the Business Central persistent volume size (`pvSize`), on the Console component page, enter the desired size in the Persistent Volume Size field. The default size is 1Gi for Business Central and 64Mi for Business Central Monitoring.
- Optional: Enter requested and maximum CPU and memory limits in the fields under Resource quotas.
- If you want to customize the configuration of the Java virtual machine on the Business Central pods, select the Enable JVM configuration box and then enter information in any of the fields under Enable JVM configuration. All fields are optional. For the JVM parameters that you can configure, see Section 3.5, “JVM configuration parameters”.
If you selected RH-SSO authentication, configure RH-SSO for Business Central:
- Enter the client name in the Client name field and the client secret in the Client secret field. If a client with this name does not exist, the deployment attempts to create a new client with this name and secret.
- If the deployment is to create a new client, enter the HTTP and HTTPS URLs that will be used for accessing Business Central into the SSO HTTP URL and SSO HTTPS URL fields. This information is recorded in the client.
- Optional: If you are configuring a high-availability environment, set the user name and password for the DataGrid component in the DataGrid username and DataGrid password fields. By default, the user name is `infinispan` and the password is generated automatically.
- Optional: Depending on your needs, set environment variables. To set an environment variable, click Add new Environment variable, then enter the name and value for the variable in the Name and Value fields.
In a `rhpam-production` or `rhpam-production-immutable` environment, if you want Business Central Monitoring to run in a simplified mode that does not use a file system, set the `ORG_APPFORMER_SIMPLIFIED_MONITORING_ENABLED` environment variable to `true`.
In the simplified mode, Business Central Monitoring does not require a persistent volume claim. You can use this mode in environments that do not support `ReadWriteMany` access to persistent storage. You cannot use Business Central Monitoring in the simplified mode to design custom dashboards.
Optional: If you want to configure the proxy settings, use the following environment variables (a YAML sketch of the resulting environment entries follows the list):
- `https_proxy`: The location of the https proxy. This takes precedence over `HTTPS_PROXY`, `http_proxy`, and `HTTP_PROXY`, and is used for both Maven builds and Java runtime. For example: `myuser:mypass@127.0.0.1:8080`.
- `HTTPS_PROXY`: The location of the https proxy. This takes precedence over `http_proxy` and `HTTP_PROXY`, and is used for both Maven builds and Java runtime. For example: `myuser@127.0.0.1:8080`.
- `http_proxy`: The location of the http proxy. This takes precedence over `HTTP_PROXY` and is used for both Maven builds and Java runtime. For example: `http://127.0.0.1:8080`.
- `HTTP_PROXY`: The location of the http proxy. This is used for both Maven builds and Java runtime. For example: `127.0.0.1:8080`.
- `no_proxy`: A comma-separated list of hosts, IP addresses, or domains that can be accessed directly. This takes precedence over `NO_PROXY` and is used for both Maven builds and Java runtime. For example: `*.example.com`.
- `NO_PROXY`: A comma-separated list of hosts, IP addresses, or domains that can be accessed directly. This is used for both Maven builds and Java runtime. For example: `foo.example.com,bar.example.com`.
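Each variable that you add with Add new Environment variable becomes a standard name/value entry on the Business Central (console) object of the KieApp custom resource. A minimal sketch, assuming an `env` list under `objects.console` (the values shown are examples only):
spec:
  objects:
    console:
      env:
        # Proxy used for Maven builds and the Java runtime
        - name: https_proxy
          value: myuser:mypass@127.0.0.1:8080
        # Hosts, IP addresses, or domains that bypass the proxy
        - name: no_proxy
          value: "*.example.com"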
- If you want to use an external Maven repository, set the following variables:
- `MAVEN_REPO_URL`: The URL for the Maven repository
- `MAVEN_REPO_ID`: An identifier for the Maven repository, for example, `repo-custom`
- `MAVEN_REPO_USERNAME`: The user name for the Maven repository
- `MAVEN_REPO_PASSWORD`: The password for the Maven repository
Important: In an authoring environment, if you want Business Central to push a project into an external Maven repository, you must configure this repository during deployment and also configure exporting to the repository in every project. For information about exporting Business Central projects to an external Maven repository, see Packaging and deploying a Red Hat Process Automation Manager project.
- If your OpenShift environment does not have a connection to the public Internet, configure access to a Maven mirror that you set up according to Section 2.11, “Preparing a Maven mirror repository for offline use”. Set the following variables:
- `MAVEN_MIRROR_URL`: The URL for the Maven mirror repository that you set up in Section 2.11, “Preparing a Maven mirror repository for offline use”. This URL must be accessible from a pod in your OpenShift environment.
- `MAVEN_MIRROR_OF`: The value that determines which artifacts are to be retrieved from the mirror. For instructions about setting the `mirrorOf` value, see Mirror Settings in the Apache Maven documentation. The default value is `external:*`. With this value, Maven retrieves every required artifact from the mirror and does not query any other repositories.
If you configure an external Maven repository (`MAVEN_REPO_URL`), change `MAVEN_MIRROR_OF` to exclude the artifacts in this repository from the mirror, for example, `external:*,!repo-custom`. Replace `repo-custom` with the ID that you configured in `MAVEN_REPO_ID`.
If your authoring environment uses a built-in Business Central Maven repository, change `MAVEN_MIRROR_OF` to exclude the artifacts in this repository from the mirror: `external:*,!repo-rhpamcentr`.
- In some cases, you might want to persist the Maven repository cache for Business Central. By default, the cache is not persisted, so when you restart or scale a Business Central pod, all Maven artifacts are downloaded again and all projects within Business Central must be built again. If you enable persistence for the cache, the download is not necessary and startup time can improve in some situations. However, significant additional space on the Business Central persistence volume is required.
To enable persistence for the Maven repository cache, set the `KIE_PERSIST_MAVEN_REPO` environment variable to `true`.
If you set `KIE_PERSIST_MAVEN_REPO` to `true`, you can optionally set a custom path for the cache using the `KIE_M2_REPO_DIR` variable. The default path is `/opt/kie/data/m2`. Files in the `/opt/kie/data` directory tree are persisted.
Next steps
If you want to deploy the environment with the default configuration of KIE Servers, without Smart Router, and without Process Instance Migration, click Finish and then click Deploy to deploy the environment. Otherwise, continue to set configuration parameters for KIE Servers and Smart Router.
3.2.5. Setting custom KIE Server configuration of the environment
Every environment type in the Business Automation operator includes one or several KIE Servers by default.
Optionally, you can set custom configuration for KIE Servers. In this case, default KIE Servers are not created and only the KIE Servers that you configure are deployed.
Prerequisites
- You completed basic configuration of a Red Hat Process Automation Manager environment using the Business Automation operator in the installer wizard according to the instructions in Section 3.2.2, “Setting the basic configuration of the environment”.
Procedure
- If the Installation, Security, or Console tab is open, click Next until you view the KIE Servers tab.
- Click Add new KIE Server to add a new KIE Server configuration.
- In the Id field, enter an identifier for this KIE Server instance. If the KIE Server instance connects to a Business Central or Business Central Monitoring instance, this identifier determines which server group the server joins.
- In the Name field, enter a name for the KIE Server.
- In the Deployments field, enter the number of similar KIE Servers that are to be deployed. The installer can deploy several KIE Servers with the same configuration. The identifiers and names of the KIE Servers are modified automatically and remain unique.
- If you created the secret for KIE Server according to the instructions in Section 2.2, “Creating the secrets for KIE Server”, enter the name of the secret in the Keystore secret field.
- Optional: To configure KIE Server for decision management only capabilities so that jBPM and case management features are disabled, select Execute Kie Server only with Decisions capabilities.
- Optional: Enter the number of replicas for the KIE Server deployment in the Replicas field.
Optional: To set a custom hostname for the external route, enter a domain in the Custom hostname to be used on the KIE Server external Route field, formatted as in the following example:
`kieserver.example.com`
Note: The custom hostname must be valid and resolvable. To change the custom hostname, you can modify the `routeHostname` property.
Optional: To enable and set the Edge termination route, complete the following steps:
- Under Change route termination, select Enable Edge termination.
- Optional: In the Key field, enter the private key.
- Optional: In the Certificate field, enter the certificate.
- Optional: In the CaCertificate field, enter the CaCertificate.
Optional: If you want to use a custom KIE Server image, complete the following additional steps:
- Click Set KIE Server image.
- From the Kind list, select `ImageStreamTag` if you want to pull the image from an OpenShift image stream or select `DockerImage` if you want to pull the image from any Docker registry. Set the image name by completing one of the following steps:
- If you selected the `ImageStreamTag` kind, enter the image stream tag name of the image in the Name field, for example, `my-custom-is-tag:1.0`. If a corresponding image stream does not exist in your environment, the operator creates this image stream using the default Red Hat registry and tags. If you configured a custom registry in the Installation tab, the operator uses this registry for creating the image stream.
- If you selected the `DockerImage` kind, enter the fully qualified image name for the image in the Name field, for example, `registry.io/test/testing:1.0`. You can configure an image from any registry that your environment can access.
- If you want to use an image stream that is not in the `openshift` namespace, enter the namespace in the Namespace field.
For instructions about creating custom images, see Section 3.7, “Creating custom images for KIE Server and Smart Router”. A YAML sketch of these image settings follows this step.
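In the KieApp custom resource, these image settings typically correspond to an image source on the server entry. The following is a sketch only; the `from` field with `kind`, `name`, and `namespace` is an assumption based on the wizard labels, so verify it against the KieApp CRD before relying on it:
spec:
  objects:
    servers:
      - name: custom-kieserver          # hypothetical server name
        # Pull the image from an image stream (wizard: Kind = ImageStreamTag)
        from:
          kind: ImageStreamTag
          name: my-custom-is-tag:1.0
          namespace: my-project         # only needed if the image stream is not in the openshift namespace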
Optional: If you want to configure an immutable KIE Server using a Source to Image (S2I) build, complete the following additional steps:
Important: If you want to configure an immutable KIE Server that pulls services from the Maven repository, do not click Set Immutable server configuration and do not complete these steps. Instead, set the `KIE_SERVER_CONTAINER_DEPLOYMENT` environment variable.
- Click Set Immutable server configuration.
- In the KIE Server container deployment field, enter the identifying information of the services (KJAR files) that the deployment must extract from the result of a Source to Image (S2I) build. The format is `<containerId>=<groupId>:<artifactId>:<version>` or, if you want to specify an alias name for the container, `<containerId>(<aliasId>)=<groupId>:<artifactId>:<version>`. You can provide two or more KJAR files using the `|` separator, as illustrated in the following example: `containerId=groupId:artifactId:version|c2(alias2)=g2:a2:v2`.
- If your OpenShift environment does not have a connection to the public Internet, enter the URL of the Maven mirror that you set up according to Section 2.11, “Preparing a Maven mirror repository for offline use” in the Maven mirror URL field.
- In the Artifact directory field, enter the path within the project that contains the required binary files (KJAR files and any other necessary files) after a successful Maven build. Normally this directory is the target directory of the build. However, you can provide prebuilt binaries in this directory in the Git repository.
- If you want to use a custom base KIE Server image for the S2I build, click Set Base build image and then enter the name of the image stream in the Name field. If the image stream is not in the `openshift` namespace, enter the namespace in the Namespace field. If you want to use a Docker image name and not an OpenShift image stream tag, change the Kind value to `DockerImage`.
- Click Set Git source and enter information in the following fields (a YAML sketch of the resulting configuration follows this step):
- S2I Git URI: The URI for the Git repository that contains the source for your services.
- Reference: The branch in the Git repository.
- Context directory: (Optional) The path to the source within the project downloaded from the Git repository. By default, the root directory of the downloaded project is the source directory.
Note: If you do not configure a Git source, the immutable KIE Server does not use an S2I build. Instead, it pulls the artifacts that you define in the KIE Server container deployment field from the configured Maven repository.
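The same immutable server settings can also be written directly into the KieApp custom resource, using the `build` and `gitSource` structure that Section 3.3, “Modifying an environment that is deployed using operators” shows for adding an S2I-built server. The repository and KJAR coordinates below are placeholders:
spec:
  objects:
    servers:
      - build:
          # KJAR built by the S2I process (wizard: KIE Server container deployment)
          kieServerContainerDeployment: hello=org.example:hello-kjar:1.0
          gitSource:
            # Git repository with the service source (wizard: S2I Git URI)
            uri: https://github.com/example/hello-kjar.git
            # Branch to build (wizard: Reference)
            reference: main
            # Path to the source within the repository (wizard: Context directory)
            contextDir: hello-kjar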
- If you are using S2I and want to set a Git Webhook so that changes in the Git repository cause an automatic rebuild of the KIE Server, click Add new Webhook. Then select the type of the Webhook in the Type field and enter the secret string for the Webhook in the Secret field.
- If you want to set a build environment variable for the S2I build, click Add new Build Config Environment variable and then enter the name and value for the variable in the Name and Value fields.
- Optional: Enter requested and maximum CPU and memory limits in the fields under Resource quotas. If you are configuring several KIE Servers, the limits apply to each server separately.
If you selected RH-SSO authentication, configure RH-SSO for the KIE Server:
- Enter the client name in the Client name field and the client secret in the Client secret field. If a client with this name does not exist, the deployment attempts to create a new client with this name and secret.
- If the deployment is to create a new client, enter the HTTP and HTTPS URLs that will be used for accessing this KIE Server instance into the SSO HTTP URL and SSO HTTPS URL fields. This information is recorded in the client.
If you want to interact with the KIE Server through JMS API using an external AMQ message broker, enable the Enable JMS Integration setting. Additional fields for configuring JMS Integration are displayed and you must enter the values as necessary:
- User name, Password: The user name and password of a standard broker user, if user authentication in the broker is required in your environment.
- Executor: Select this setting to disable the JMS executor. The executor is enabled by default.
- Executor transacted: Select this setting to enable JMS transactions on the executor queue.
- Enable signal: Select this setting to enable signal configuration through JMS.
- Enable audit: Select this setting to enable audit logging through JMS.
- Audit transacted: Select this setting to enable JMS transactions on the audit queue.
- Queue executor, Queue request, Queue response, Queue signal, Queue audit: Custom JNDI names of the queues to use. If you set any of these values, you must also set the AMQ queues parameter.
- AMQ Queues: AMQ queue names, separated by commas. These queues are automatically created when the broker starts and are accessible as JNDI resources in the JBoss EAP server. If you are using any custom queue names, you must enter the names of all the queues used by the server in this field.
- Enable SSL integration: Select this setting if you want to use an SSL connection to the AMQ broker. In this case you must also provide the name of the secret that you created in Section 2.4, “Creating the secrets for the AMQ broker connection” and the names and passwords of the key store and trust store that you used for the secret.
- If you want to customize the configuration of the Java virtual machine on the KIE Server pods, select the Enable JVM configuration box and then enter information in any of the fields under Enable JVM configuration. All fields are optional. For the JVM parameters that you can configure, see Section 3.5, “JVM configuration parameters”.
In the Database type field, select the database that the KIE Server must use. The following values are available:
- `mysql`: A MySQL server, created in a separate pod.
- `postgresql`: A PostgreSQL server, created in a separate pod. Use this setting unless you have a specific reason to use any other setting.
- `h2`: A built-in `h2` database engine that does not require a separate pod. Do not scale the KIE Server pod if you use this setting.
- `external`: An external database server.
If you selected any database except `external`, a Persistent Volume Claim is created to store the database. Optionally, set configuration parameters for the persistent volume:
- In the Size field, enter the size of the persistent volume.
- In the StorageClass name field, enter the storage class name for the persistent volume.
Optional: If you selected the `external` database, configure the KIE Server extension image. If you want to use any database server except PostgreSQL, MySQL, or MariaDB, you must provide a KIE Server extension image with the database server driver according to instructions in Section 2.6, “Building a custom KIE Server extension image for an external database”. To configure the KIE Server to use this extension image, make the following changes:
- Select the Enable extension image stream box.
- In the Extension image stream tag field, enter the ImageStreamTag definition for the image that you created, for example, `jboss-kie-db2-extension-openshift-image:11.1.4.4`.
- Optional: In the Extension image stream namespace field, enter the namespace into which you pushed the image. If you do not enter any value in this field, the operator expects the image to be in the `openshift` namespace.
- Optional: In the Extension image install directory field, enter the directory within the extensions image where the extensions are located. If you used the procedure in Section 2.6, “Building a custom KIE Server extension image for an external database” to build the image, do not enter any value for this field.
If you selected an external database server, provide the following information in additional fields:
- Driver: Enter the database server driver, depending on the server type:
- `mysql`
- `postgresql`
- `mariadb`
- `mssql`
- `db2`
- `oracle`
- `sybase`
- Dialect: Enter the Hibernate dialect for the server, depending on the server type. The common settings are:
- `org.hibernate.dialect.MySQL5InnoDBDialect`
- `org.hibernate.dialect.MySQL8Dialect`
- `org.hibernate.dialect.MariaDB102Dialect`
- `org.hibernate.dialect.PostgreSQL95Dialect`
- `org.hibernate.dialect.PostgresPlusDialect` (used for EnterpriseDB Postgres Advanced Server)
- `org.hibernate.dialect.SQLServer2012Dialect` (used for MS SQL)
- `org.hibernate.dialect.DB2Dialect`
- `org.hibernate.dialect.Oracle10gDialect`
- `org.hibernate.dialect.SybaseASE15Dialect`
For a complete list of supported dialects, see the Hibernate SQL Dialects table in Hibernate properties in the Red Hat JBoss EAP documentation.
- Host: Enter the host name of the external database server.
- Port: Enter the port number of the external database server.
- Jdbc URL: Enter the JDBC URL for the external database server.
Note: If you are using the EnterpriseDB Postgres database server, use a URL starting with `jdbc:postgresql://` and not with `jdbc:edb://`. Alternatively, do not set the URL and set the host and port parameters instead.
- NonXA: Select this box if you want to configure the data source in non-XA mode.
- JNDI name: Enter the JNDI name that the application uses for the data source.
- User name and Password: Enter the user name and password for the external database server.
- Background validation: Optionally, select this box to enable background SQL validation and enter the background validation interval.
- Optional: Set the minimum and maximum connection pool sizes, valid connection checker class, and exception sorter class for the database server.
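As an illustration, an external database configured through the fields above might be expressed in the KieApp custom resource roughly as follows. This is a sketch under stated assumptions: the `database.type` values come from this section, but the `externalConfig` field names are guesses based on the wizard labels and must be checked against the KieApp CRD, and the host, credentials, and dialect are placeholders:
spec:
  objects:
    servers:
      - database:
          type: external
          externalConfig:
            driver: postgresql
            dialect: org.hibernate.dialect.PostgreSQL95Dialect
            host: externaldb.example.com
            port: "5432"
            username: rhpam-user        # placeholder credentials
            password: rhpam-password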
If you use a MySQL version 8 external database server, enable the `mysql_native_password` plugin and use it for authentication. For instructions about this plugin, see Native Pluggable Authentication in the MySQL 8.0 Reference Manual.
If you use a MySQL version 8 image provided by Red Hat on Red Hat OpenShift Container Platform, to enable the plugin, set the `MYSQL_DEFAULT_AUTHENTICATION_PLUGIN` environment variable to `mysql_native_password`.
If you create users on the MySQL version 8 server before enabling the `mysql_native_password` plugin, you must update the `mysql-user` table after you enable the plugin.
Optional: If you want to configure the proxy settings, use the following environment variables:
- `https_proxy`: The location of the https proxy. This takes precedence over `HTTPS_PROXY`, `http_proxy`, and `HTTP_PROXY`, and is used for both Maven builds and Java runtime. For example: `myuser:mypass@127.0.0.1:8080`.
- `HTTPS_PROXY`: The location of the https proxy. This takes precedence over `http_proxy` and `HTTP_PROXY`, and is used for both Maven builds and Java runtime. For example: `myuser@127.0.0.1:8080`.
- `http_proxy`: The location of the http proxy. This takes precedence over `HTTP_PROXY` and is used for both Maven builds and Java runtime. For example: `http://127.0.0.1:8080`.
- `HTTP_PROXY`: The location of the http proxy. This is used for both Maven builds and Java runtime. For example: `127.0.0.1:8080`.
- `no_proxy`: A comma-separated list of hosts, IP addresses, or domains that can be accessed directly. This takes precedence over `NO_PROXY` and is used for both Maven builds and Java runtime. For example: `*.example.com`.
- `NO_PROXY`: A comma-separated list of hosts, IP addresses, or domains that can be accessed directly. This is used for both Maven builds and Java runtime. For example: `foo.example.com,bar.example.com`.
-
Optional: Depending on your needs, set environment variables. To set an environment variable, click Add new Environment variable, then enter the name and value for the variable in the Name and Value fields.
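As with Business Central, each variable that you add becomes a name/value entry on the server object in the KieApp custom resource. A minimal sketch, assuming an `env` list under the server entry; the variable shown is taken from the Prometheus step later in this procedure:
spec:
  objects:
    servers:
      - name: custom-kieserver          # hypothetical server name
        env:
          # Enable the Prometheus extension on this KIE Server
          - name: PROMETHEUS_SERVER_EXT_DISABLED
            value: "false"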
If you want to configure an immutable KIE Server that pulls services from the configured Maven repository, enter the following settings:
- Set the `KIE_SERVER_CONTAINER_DEPLOYMENT` environment variable. The variable must contain the identifying information of the services (KJAR files) that the deployment must pull from the Maven repository. The format is `<containerId>=<groupId>:<artifactId>:<version>` or, if you want to specify an alias name for the container, `<containerId>(<aliasId>)=<groupId>:<artifactId>:<version>`. You can provide two or more KJAR files using the `|` separator, as illustrated in the following example: `containerId=groupId:artifactId:version|c2(alias2)=g2:a2:v2`.
- Configure an external Maven repository.
If you want to configure an external Maven repository, set the following variables:
- `MAVEN_REPO_URL`: The URL for the Maven repository
- `MAVEN_REPO_ID`: An identifier for the Maven repository, for example, `repo-custom`
- `MAVEN_REPO_USERNAME`: The user name for the Maven repository
- `MAVEN_REPO_PASSWORD`: The password for the Maven repository
- If your OpenShift environment does not have a connection to the public Internet, configure access to a Maven mirror that you set up according to Section 2.11, “Preparing a Maven mirror repository for offline use”. Set the following variables:
- `MAVEN_MIRROR_URL`: The URL for the Maven mirror repository that you set up in Section 2.11, “Preparing a Maven mirror repository for offline use”. This URL must be accessible from a pod in your OpenShift environment. If you configured this KIE Server as S2I, you already entered this URL.
- `MAVEN_MIRROR_OF`: The value that determines which artifacts are to be retrieved from the mirror. If you configured this KIE Server as S2I, do not set this value. For instructions about setting the `mirrorOf` value, see Mirror Settings in the Apache Maven documentation. The default value is `external:*`. With this value, Maven retrieves every required artifact from the mirror and does not query any other repositories.
If you configure an external Maven repository (`MAVEN_REPO_URL`), change `MAVEN_MIRROR_OF` to exclude the artifacts in this repository from the mirror, for example, `external:*,!repo-custom`. Replace `repo-custom` with the ID that you configured in `MAVEN_REPO_ID`.
If your authoring environment uses a built-in Business Central Maven repository, change `MAVEN_MIRROR_OF` to exclude the artifacts in this repository from the mirror: `external:*,!repo-rhpamcentr`.
If you want to configure your KIE Server deployment to use Prometheus to collect and store metrics, set the `PROMETHEUS_SERVER_EXT_DISABLED` environment variable to `false`. For instructions about configuring Prometheus metrics collection, see Managing and monitoring KIE Server.
If you are using Red Hat Single Sign-On authentication and the interaction of your application with Red Hat Single Sign-On requires support for cross-origin resource sharing (CORS), configure CORS Filters configuration:
- To use CORS with the default configuration, ensure Default configuration is selected from the CORS Filters configuration list and select Enable CORS with Default values.
- To use CORS with a custom configuration, select Custom configuration from the CORS Filters configuration list and enter the relevant values for the CORS filters.
Next steps
To configure additional KIE Servers, click Add new KIE Server again and repeat the procedure for the new server configuration.
If you want to deploy the environment without Smart Router and without Process Instance Migration, click Finish and then click Deploy to deploy the environment. Otherwise, continue to set configuration parameters for Smart Router.
3.2.6. Setting Smart Router configuration for the environment
By default, the deployed environment does not include Smart Router. You can add a Smart Router to the environment. You can also set configuration options for the Smart Router.
Prerequisites
- You completed basic configuration of a Red Hat Process Automation Manager environment using the Business Automation operator in the installer wizard according to the instructions in Section 3.2.2, “Setting the basic configuration of the environment”.
Procedure
- If the Installation, Security, Console, or KIE Servers tab is open, click Next until you view the Smart Router tab.
- Click Set Smart Router to add Smart Router to the environment and to configure Smart Router.
If you have created a custom Smart Router image according to the instructions in Section 3.7.3, “Creating a custom Smart Router image with an additional JAR file to implement custom routing”, set the following values:
- Image context: The project name, for example, `rhpam-project`
- Image: The custom image name, for example, `rhpam-smartrouter-rhel8-custom`
- If you used a custom tag for the image, set the Image tag field to this tag.
- If you created the secret for Smart Router according to the instructions in Section 2.5, “Creating the secrets for Smart Router”, enter the name of the secret in the Secret field.
- Optional: To configure KIE Server for decision management only capabilities so that jBPM and case management features are disabled, select Execute Kie Server only with Decisions capabilities.
- Optional: Enter the number of replicas for the Smart Router in the Replicas field.
Optional: To set a custom hostname for the external route, enter a domain in the Custom hostname to be used on the Smart Router external Route field, formatted as in the following example:
`smartrouter.example.com`
Note: The custom hostname must be valid and resolvable. To change the custom hostname, you can modify the `routeHostname` property.
Optional: To enable and set the Edge termination route, complete the following steps:
- Under Change route termination, select Enable Edge termination.
- Optional: In the Key field, enter the private key.
- Optional: In the Certificate field, enter the certificate.
- Optional: In the CaCertificate field, enter the CaCertificate.
- Optional: Enter requested and maximum CPU and memory limits in the fields under Resource quotas.
Optional: Set the logging level using an environment variable:
- Click Add new Environment variable.
- In the Name field, enter `LOG_LEVEL`.
- In the Value field, enter a Java logging level. For a list of the available logging levels, see class Level.
Optional: Set different logging levels for components by package name:
- Click Add new Environment variable.
- In the Name field, enter `LOG_LEVEL`.
- In the Value field, enter packages and logging levels for them, formatted as in the following example:
`com.example.abc=FINEST,com.example.def=SEVERE,com.example.xyz=FINE`
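In the KieApp custom resource, this maps to an environment variable entry on the Smart Router object. A minimal sketch, assuming the object is named `smartRouter` and accepts an `env` list (verify the field names against the KieApp CRD in your cluster):
spec:
  objects:
    smartRouter:
      env:
        # Per-package logging levels for Smart Router
        - name: LOG_LEVEL
          value: com.example.abc=FINEST,com.example.def=SEVERE,com.example.xyz=FINE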
Next steps
If you want to deploy the Process Instance Migration service, continue to deploy the service. Otherwise, click Finish and then click Deploy to deploy the environment.
3.2.7. Setting Process Instance Migration configuration for the environment
You can use the operator to deploy the Process Instance Migration (PIM) service. You can use the PIM service to define the migration between two different process definitions, known as a migration plan. You can apply the migration plan to the running process instances in a specific KIE Server.
The PIM service uses a database server for its operation.
Prerequisites
- You completed basic configuration of a Red Hat Process Automation Manager environment using the Business Automation operator in the installer wizard according to the instructions in Section 3.2.2, “Setting the basic configuration of the environment”.
Procedure
- If the Installation, Security, Console, KIE Servers, or Smart Router tab is open, click Next until you view the Process Instance Migration tab.
- Click Set Process Instance Migration to add PIM to the environment and to configure PIM.
In the Database type field, select the database that the PIM service must use. The following values are available:
- `mysql`: A MySQL server, created in a separate pod.
- `postgresql`: A PostgreSQL server, created in a separate pod. Use this setting unless you have a specific reason to use any other setting.
- `h2`: A built-in `h2` database engine that does not require a separate pod.
- Optional: Set configuration parameters of the persistent volume for the database:
- In the Size field, enter the size of the persistent volume.
- In the StorageClass name field, enter the storage class name for the persistent volume.
Next steps
Click Finish and then click Deploy to deploy the environment.
For instructions about using the PIM service, see Process Instance Migration in Managing and monitoring business processes in Business Central.
3.3. Modifying an environment that is deployed using operators
If an environment is deployed using operators, you cannot modify it using typical OpenShift methods. For example, if you delete a deployment configuration or a service, it is re-created automatically with the same parameters.
To modify the environment, you must modify the YAML description of the environment. You can change common settings such as passwords, add new KIE Servers, and scale KIE Servers.
Procedure
- Open your project in the OpenShift web console.
- In the OpenShift Web console navigation panel, select Catalog → Installed operators or Operators → Installed operators.
- Find the Business Automation operator line in the table and click `KieApp` in the line. Information about the environments that you deployed using this operator is displayed.
- Click the name of a deployed environment.
- Select the YAML tab.
A YAML source is displayed. In this YAML source, you can edit the content under `spec:` to change the configuration of the environment.
If you want to change the deployed version of Red Hat Process Automation Manager, add the following line under `spec:`:
version: 7.13.5
You can replace `7.13.5` with another required version. Use this setting to upgrade Red Hat Process Automation Manager to a new version if automatic updates are disabled, for example, if you use a custom image.
If you want to change common settings, such as passwords, edit the values under
commonConfig:
. If you want to add new KIE Servers, add their descriptions at the end of the block under
servers:
, as shown in the following examples:To add two servers named
server-a
andserver-a-2
, add the following lines:- deployments: 2 name: server-a
To add an immutable KIE Server that includes services built from source in an S2I process, add the following lines:
- build: kieServerContainerDeployment: <deployment> gitSource: uri: <url> reference: <branch> contextDir: <directory>
Replace the following values:
- `<deployment>`: The identifying information of the decision service (KJAR file) that is built from your source. The format is `<containerId>=<groupId>:<artifactId>:<version>`. You can provide two or more KJAR files using the `|` separator, for example `containerId=groupId:artifactId:version|c2=g2:a2:v2`. The Maven build process must produce all these files from the source in the Git repository.
- `<url>`: The URL for the Git repository that contains the source for your decision service.
- `<branch>`: The branch in the Git repository.
- `<directory>`: The path to the source within the project downloaded from the Git repository.
- If you want to scale a KIE Server, find the description of the server in the block under `servers:` and add a `replicas:` setting under that description. For example, `replicas: 3` scales the server to three pods.
- If you want to make other changes, review the CRD source for the available settings. To view the CRD source, log in to the Red Hat OpenShift Container Platform environment with the `oc` command as an administrative user and then enter the following command:
oc get crd kieapps.app.kiegroup.org -o yaml
- Click Save and then wait for a "has been updated" pop-up message.
- Click Reload to view the new YAML description of the environment.
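If you prefer the command line, you can open the same YAML description for editing with the `oc` client. The resource kind is the `KieApp` custom resource described above; replace the name placeholder with the name of your deployed environment:
oc edit kieapp <environment-name>
Saving the file in the editor applies the change in the same way as clicking Save in the web console.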
3.4. Providing Elytron user configuration or other post-configuration settings
If you do not use LDAP or RH-SSO authentication, Red Hat Process Automation Manager relies on internal users in the Elytron subsystem of Red Hat JBoss EAP. By default, only the administrative user is created. You might need to add other users to the Elytron security subsystem of Red Hat JBoss EAP. To do so, you must run a Red Hat JBoss EAP post-configuration script.
You can configure this post-configuration script, or any other Red Hat JBoss EAP post-configuration script, in a deployment of Red Hat Process Automation Manager on Red Hat OpenShift Container Platform.
Procedure
- Download sample files from the GitHub repository.
Prepare the following files based on the sample files:
- `postconfigure.sh`: The post-configuration shell script that Red Hat JBoss EAP must run. In the example, this script uses the `add-users.cli` script to add Elytron users. If you want to complete post-configuration tasks outside of the CLI script, modify this script.
- `delayedpostconfigure.sh`: An empty file, required in Red Hat Process Automation Manager version 7.13.5.
- `add-users.cli`: The Red Hat JBoss EAP command line interface script for configuring Elytron users or for any other CLI tasks. Add your commands between the following lines:
embed-server --std-out=echo --server-config=standalone-openshift.xml
batch
  <your jboss-cli commands>
run-batch
quit
- Log in to your Red Hat OpenShift Container Platform cluster with the `oc` command and change to the namespace of your deployment.
- Create a ConfigMap with the files that you prepared by using the following command:
oc create configmap postconfigure \
  --from-file=add-users.cli=add-users.cli \
  --from-file=delayedpostconfigure.sh=delayedpostconfigure.sh \
  --from-file=postconfigure.sh=postconfigure.sh
- Enter the following command to edit the `kieconfigs-7.13.5` config map:
oc edit cm kieconfigs-7.13.5
In the file, modify the deployment configuration under the `console:` section to add the configuration to Business Central and modify all deployment configurations under the `servers:` section to add the configuration to KIE Server instances.
In each deployment configuration, make the following changes:
- Under `deploymentConfigs.metadata.spec.template.spec.containers.volumeMounts`, add the following lines:
- name: postconfigure-mount
  mountPath: /opt/eap/extensions
- Under `deploymentConfigs.metadata.spec.template.spec.volumes`, add the following lines:
- name: "postconfigure-mount"
  configMap:
    name: "postconfigure"
    defaultMode: 0555
- Save the file. After this point, new operator deployments contain the post-configuration settings.
In existing deployments, if the post-configuration settings are not added automatically, you can delete the Business Central and KIE Server pods. The operator automatically starts updated versions with the post-configuration settings.
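For example, you can list the pods and then delete the Business Central and KIE Server pods by name; the pod names below are placeholders that depend on your environment name:
oc get pods
oc delete pod <business-central-pod-name> <kie-server-pod-name>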
3.5. JVM configuration parameters
When deploying Red Hat Process Automation Manager using the operator, you can optionally set a number of JVM configuration parameters for Business Central and KIE Servers. These parameters set environment variables for the corresponding containers.
The following table lists all JVM configuration parameters that you can set when deploying Red Hat Process Automation Manager using the operator.
The default settings are optimal for most use cases. Make any changes only when they are required.
Configuration field | Environment variable | Description | Example |
---|---|---|---|
Java Opts append | JAVA_OPTS_APPEND | User specified Java options to be appended to generated options in JAVA_OPTS. | |
Java max memory ratio | JAVA_MAX_MEM_RATIO | The maximum percentage of container memory that can be used for the Java Virtual Machine. The remaining memory is used for the operating system. | |
Java initial memory ratio | JAVA_INITIAL_MEM_RATIO | The percentage of container memory that is initially used for the Java Virtual Machine. | |
Java max initial memory | JAVA_MAX_INITIAL_MEM | The maximum amount of memory, in megabytes, that can be initially used for the Java Virtual Machine. If the initial allocated memory, as set in the Java initial memory ratio parameter, would otherwise be greater than this value, the amount of memory set in this value is allocated using the -Xms option. | |
Java diagnostics | JAVA_DIAGNOSTICS | Enable this setting to enable output of additional JVM diagnostic information to the standard output. Disabled by default. | |
Java debug | JAVA_DEBUG | Enable this setting to switch on remote debugging. Disabled by default. Adds the -agentlib:jdwp remote debugging option to the JVM arguments. | |
Java debug port | JAVA_DEBUG_PORT | The port that is used for remote debugging. | |
GC min heap free ratio | GC_MIN_HEAP_FREE_RATIO | Minimum percentage of heap free after garbage collection (GC) to avoid expansion. Sets the -XX:MinHeapFreeRatio option. | |
GC max heap free ratio | GC_MAX_HEAP_FREE_RATIO | Maximum percentage of heap free after GC to avoid shrinking. Sets the -XX:MaxHeapFreeRatio option. | |
GC time ratio | GC_TIME_RATIO | Specifies the ratio of the time spent outside the garbage collection (for example, the time spent for application execution) to the time spent in the garbage collection. Sets the -XX:GCTimeRatio option. | |
GC adaptive size policy weight | GC_ADAPTIVE_SIZE_POLICY_WEIGHT | The weighting given to the current GC time versus previous GC times. Sets the -XX:AdaptiveSizePolicyWeight option. | |
GC max metaspace size | GC_MAX_METASPACE_SIZE | The maximum metaspace size. Sets the -XX:MaxMetaspaceSize option. | |
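In the KieApp custom resource, these wizard fields map to a `jvm` block on the console or server object. The following is a sketch that assumes camel-cased field names derived from the environment variables above; verify them against the KieApp CRD (`oc get crd kieapps.app.kiegroup.org -o yaml`) before using them:
spec:
  objects:
    console:
      jvm:
        # Appended to the generated JAVA_OPTS
        javaOptsAppend: "-Dsome.property=foo"
        # Allow the JVM to use up to 50% of the container memory
        javaMaxMemRatio: 50
        # Switch on remote debugging
        javaDebug: true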
3.6. KIE configuration and ConfigMaps
When installing the Red Hat Process Automation Manager operator, the operator creates ConfigMaps, prefixed with `kieconfigs-$VERSION`, for the current namespace. A ConfigMap is a YAML file containing the configuration for functionality such as `DeploymentConfigs`, secrets, routes, and services for the Red Hat Process Automation Manager components for that namespace.
Example ConfigMap:
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
  mysql.yaml: |
    ## KIE Databases BEGIN
    databases:
      ## RANGE BEGINS
      #[[ range $index, $Map := .Databases ]]
…
The operator uses ConfigMaps to configure and deploy the components. This includes all supported Red Hat Process Automation Manager components such as KIE Server, Smart Router, Business Central and service-related configuration such as for persistent volumes, build configuration, and routes. When ConfigMaps are edited manually, the operator uses the new values to create the deployments when the environment is reconciled.
The Red Hat Process Automation Manager operator can use the current version and the previous version of Red Hat Process Automation Manager components concurrently, with ConfigMaps for each version, for example, 7.13.0 and 7.12.1.
- kieconfigs-7.13.5
This contains the `common.yaml` configuration file. For more information, see `common.yaml` on GitHub. You can configure the following components using this configuration file:
- KIE Server, the server object, is identified by the `## KIE Servers BEGIN` placeholder.
- Business Central and Business Central Monitoring, the console object, is defined on the first line of the `common.yaml`.
- Smart Router, the smartrouter object, is identified by the `## KIE smartrouter BEGIN` placeholder.
`kieconfigs-7.13.5` also holds the routes and services relating to these three components.
- kieconfigs-7.13.5-dashbuilder
This contains the configuration YAML file for the Dashbuilder component. For more information, see `rhpam-standalone-dashbuilder.yaml` on GitHub.
- kieconfigs-7.13.5-dbs
This contains the base `DeploymentConfig` for the MySQL and PostgreSQL databases. For more information about the MySQL configuration, see `mysql.yaml` on GitHub. For more information about the PostgreSQL configuration, see `postgresql.yaml` on GitHub.
- kieconfigs-7.13.5-dbs-pim
This contains the snippet configuration for the Process Instance Migration (PIM) supported databases: external, MySQL, and PostgreSQL. For more information about the PIM external configuration, see `external.yaml` on GitHub. For more information about the PIM MySQL configuration, see `mysql.yaml` on GitHub. For more information about the PIM PostgreSQL configuration, see `postgresql.yaml` on GitHub.
Note: These YAML files only hold the specific configuration for the PIM database configured using `application.properties` in this ConfigMap.
- kieconfigs-7.13.5-dbs-servers
This contains the snippet configuration for the supported database configurations: external, h2, MySQL, and PostgreSQL. For more information about the external configuration, see `external.yaml` on GitHub. For more information about the h2 configuration, see `h2.yaml` on GitHub. Note that the h2 configuration is not supported in production environments. For more information about the MySQL configuration, see `mysql.yaml` on GitHub. For more information about the PostgreSQL configuration, see `postgresql.yaml` on GitHub.
- kieconfigs-7.13.5-envs
This contains the specific configurations for each Red Hat Process Automation Manager environment, such as the authoring or trial environments. This ConfigMap contains the following YAML files:
- rhdm-authoring-ha.yaml
- rhdm-authoring.yaml
- rhdm-production-immutable.yaml
- rhdm-trial.yaml
- rhpam-authoring-ha.yaml
- rhpam-authoring.yaml
- rhpam-production-immutable.yaml
- rhpam-production.yaml
- rhpam-standalone-dashbuilder.yaml
- rhpam-trial.yaml
For more information about specific configurations for each Red Hat Process Automation Manager environment, see ConfigMaps for Red Hat Process Automation Manager environments on GitHub.
- kieconfigs-7.13.5-jms
This contains the ActiveMQ configuration for KIE Server when the JMS Executor is enabled. For more information about the JMS Executor configuration, see `activemq-jms-config` on GitHub.
- kieconfigs-7.13.5-pim
This contains the Process Instance Migration (PIM) DeploymentConfig and related PIM configuration. If you are using MySQL or any other database with PIM, you must use the `kieconfigs-7.13.5-dbs-pim` ConfigMap and edit the `mysql.yaml` file.
3.6.1. Using ConfigMaps
You can use ConfigMaps to customize the Red Hat Process Automation Manager Operator and apply the related configuration. To make changes to ConfigMaps, you can use the oc
command tool or the Red Hat OpenShift Container Platform console.
Prerequisites
- The operator is installed in your current namespace.
- A KieApp is available.
- The `kieconfigs-$VERSION-*` ConfigMaps are available. To check if they are available, run the following command:
$ oc get cm | grep kieconfigs
Example output of `oc get cm | grep kieconfigs`:
kieconfigs-7.13.0                1      64m
kieconfigs-7.13.0-dashbuilder    1      64m
kieconfigs-7.13.0-dbs            2      64m
kieconfigs-7.13.0-dbs-pim        3      64m
kieconfigs-7.13.0-dbs-servers    4      64m
kieconfigs-7.13.0-envs           10     64m
kieconfigs-7.13.0-jms            1      64m
kieconfigs-7.13.0-pim            1      64m
kieconfigs-7.12.1                1      64m
kieconfigs-7.12.1-dashbuilder    1      64m
kieconfigs-7.12.1-dbs            2      64m
kieconfigs-7.12.1-dbs-pim        3      64m
kieconfigs-7.12.1-dbs-servers    4      64m
kieconfigs-7.12.1-envs           10     64m
kieconfigs-7.12.1-jms            1      64m
kieconfigs-7.12.1-pim            1      64m
Procedure
- Create an `rhpam-authoring` environment with one replica for Business Central and KIE Server.
Example `rhpam-authoring`:
apiVersion: app.kiegroup.org/v2
kind: KieApp
metadata:
  name: rhpam-authoring
  annotations:
    consoleName: rhpam-authoring
    consoleTitle: PAM Authoring
    consoleDesc: Deploys a PAM Authoring environment
spec:
  environment: rhpam-authoring
  objects:
    servers:
      - replicas: 1
    console:
      replicas: 1
Complete one of the following steps:
- To open a specific ConfigMap with the `oc` tool, run the following command:
$ oc edit cm/<CONFIGMAP_NAME>
Note: The `oc edit` command opens the ConfigMap in a `vim`-style text editor.
- To open a specific ConfigMap using the Red Hat OpenShift Container Platform console, navigate to `kieconfigs-7.13.5` on the ConfigMaps page and edit it by opening its YAML view.
To modify the YAML file, add `annotations` fields that contain your changes, for example:
Add the following to `Console.deploymentConfigs.metadata`:
annotations:
  my.custom.annotation/v1: v1-rhpam-app-console
To update KIE Server, add the following to the `## KIE Servers Start` placeholder identifier:
annotations:
  my.custom.annotation/v1: v1-rhpam-app-kieserver
Note: You can also edit the Smart Router configuration using this ConfigMap.
To save the changes, complete one of the following steps:
- In the `oc` tool editor, to save the changes and exit, enter `:wq!`.
- In the Red Hat OpenShift Container Platform console, click Save.
If an environment is running and the operator did not automatically start a redeployment for the component that was changed, you must manually delete the DeploymentConfig of the target component by using the `oc` command-line tool. Complete the following steps:
- To list the DeploymentConfigs, run the following command:
$ oc get dc
Example output of `oc get dc`:
NAME                         REVISION   DESIRED   CURRENT   TRIGGERED BY
rhpam-authoring-kieserver    1          1         1         config
rhpam-authoring-rhpamcentr   1          1         1         config
To delete the DeploymentConfig of the target components, run the following commands:
$ oc delete dc/rhpam-authoring-kieserver
$ oc delete dc/rhpam-authoring-rhpamcentr
The deployment is redeployed with the changes applied in the ConfigMap.
To verify that the change was applied, check the annotations for the KIE Server DeploymentConfig by running the following command:
$ oc describe dc/rhpam-authoring-kieserver
Example output of `oc describe dc/rhpam-authoring-kieserver`:
Name:         rhpam-authoring-kieserver
Namespace:    examplenamespace
Created:      15 minutes ago
Labels:       app=rhpam-authoring
              application=rhpam-authoring
              service=rhpam-authoring-kieserver
              services.server.kie.org/kie-server-id=rhpam-authoring-kieserver
Annotations:  my.custom.annotation/v1=v1-rhpam-app-kieserver
3.7. Creating custom images for KIE Server and Smart Router
You can create custom images to add files to KIE Server and Smart Router deployments. You must push the images to your own container registry. When deploying Red Hat Process Automation Manager, you can configure the operator to use the custom images.
If you use a custom image, you must disable automatic version updates. When you want to install a new version, build the image with the same name as before and the new version tag and push the image into your registry. You can then change the version and the operator automatically pulls the new image. For instructions about changing the product version in the operator, see Section 3.3, “Modifying an environment that is deployed using operators”.
In particular, you can create the following types of custom images:
- A custom image of KIE Server that includes an additional RPM package
- A custom image of KIE Server that includes an additional JAR class library
- A custom image of Smart Router that includes an additional JAR class library to implement custom routing
3.7.1. Creating a custom KIE Server image with an additional RPM package
You can create a custom KIE Server image where an additional RPM package is installed. You can push this image into your custom registry and then use it to deploy KIE Server.
You can install any package from the Red Hat Enterprise Linux 8 repository. This example installs the `procps-ng` package, which provides the `ps` utility, but you can modify it to install other packages.
Procedure
- Authenticate to the `registry.redhat.io` registry using the `podman login` command. For instructions about authenticating to the registry, see Red Hat Container Registry Authentication.
podman pull registry.redhat.io/rhpam-7/rhpam-kieserver-rhel8:7.13.5
Create a
Dockerfile
that defines a custom image based on the base image. The file must change the current user toroot
, install the RPM package using theyum
command, and then revert toUSER 185
, the Red Hat JBoss EAP user. The following example shows the content of theDockerfile
file:FROM registry.redhat.io/rhpam-7/rhpam-kieserver-rhel8:7.13.5 USER root RUN yum -y install procps-ng USER 185
Replace the name of the RPM file as necessary. The `yum` command automatically installs all dependencies from the Red Hat Enterprise Linux 8 repository. You might need to install several RPM files; in this case, use several `RUN` commands.
- Build the custom image using the `Dockerfile`. Supply the fully qualified name for the image, including the registry name. You must use the same version tag as the version of the base image. To build the image, enter the following command:
podman build . --tag registry_address/image_name:7.13.5
For example:
podman build . --tag registry.example.com/custom/rhpam-kieserver-rhel8:7.13.5
After the build completes, run the image, log in to it, and verify that the customization was successful. Enter the following command:
podman run -it --rm registry_address/image_name:7.13.5 /bin/bash
For example:
podman run -it --rm registry.example.com/custom/rhpam-kieserver-rhel8:7.13.5 /bin/bash
- In the shell prompt for the image, enter the command to test that the RPM is installed, then enter `exit`. For example, for `procps-ng`, run the `ps` command:
[jboss@c2fab36b778e ~]$ ps
  PID TTY          TIME CMD
    1 pts/0    00:00:00 bash
   13 pts/0    00:00:00 ps
[jboss@c2fab36b778e ~]$ exit
To push the custom image into your registry, enter the following command:
podman push registry_address/image_name:7.13.5 docker://registry_address/image_name:7.13.5
For example:
podman push registry.example.com/custom/rhpam-kieserver-rhel8:7.13.5 docker://registry.example.com/custom/rhpam-kieserver-rhel8:7.13.5
Next steps
When deploying the KIE Server, set the image name and namespace to specify the custom image in your registry. Click Set KIE Server image, change the Kind value to DockerImage, and then provide the image name including the registry name, but without the version tag, for example:
registry.example.com/custom/rhpam-kieserver-rhel8
For instructions about deploying KIE Server using the operator, see Section 3.2.5, “Setting custom KIE Server configuration of the environment”.
3.7.2. Creating a custom KIE Server image with an additional JAR file
You can create a custom KIE Server image where an additional JAR file (or several JAR files) is installed to extend the capabilities of the server. You can push this image into your custom registry and then use it to deploy KIE Server.
For example, you can create a custom class JAR to provide custom Prometheus metrics in KIE Server. For instructions about creating the custom class, see Extending Prometheus metrics monitoring in KIE Server with custom metrics in Managing and monitoring KIE Server.
Procedure
Develop a custom library that works with KIE Server. You can use the following documentation and examples to develop the library:
- Build the library using Maven, so that the JAR file is placed in the `target` directory. This example uses the `custom-kieserver-ext-1.0.0.Final.jar` file name.
Authenticate to the
registry.redhat.io
registry using thepodman login
command. For instructions about authenticating to the registry, see Red Hat Container Registry Authentication. To download the supported KIE Server base image, enter the following command:
podman pull registry.redhat.io/rhpam-7/rhpam-kieserver-rhel8:7.13.5
Create a
Dockerfile
that defines a custom image based on the base image. The file must copy the JAR file (or several JAR files) into the/opt/eap/standalone/deployments/ROOT.war/WEB-INF/lib/
directory. The following example shows the content of theDockerfile
file:FROM registry.redhat.io/rhpam-7/rhpam-kieserver-rhel8:7.13.5 COPY target/custom-kieserver-ext-1.0.0.Final.jar /opt/eap/standalone/deployments/ROOT.war/WEB-INF/lib/
- Build the custom image using the `Dockerfile`. Supply the fully qualified name for the image, including the registry name. You must use the same version tag as the version of the base image. To build the image, enter the following command:
podman build . --tag registry_address/image_name:7.13.5
For example:
podman build . --tag registry.example.com/custom/rhpam-kieserver-rhel8:7.13.5
To push the custom image into your registry, enter the following command:
podman push registry_address/image_name:7.13.5 docker://registry_address/image_name:7.13.5
For example:
podman push registry.example.com/custom/rhpam-kieserver-rhel8:7.13.5 docker://registry.example.com/custom/rhpam-kieserver-rhel8:7.13.5
Next steps
When deploying the KIE Server, set the image name and namespace to specify the custom image in your registry. Click Set KIE Server image, change the Kind value to DockerImage, and then provide the image name including the registry name, but without the version tag, for example:
registry.example.com/custom/rhpam-kieserver-rhel8
For instructions about deploying KIE Server using the operator, see Section 3.2.5, “Setting custom KIE Server configuration of the environment”.
3.7.3. Creating a custom Smart Router image with an additional JAR file to implement custom routing
By default, Smart Router routes requests based on the container alias. If several KIE Servers provide a service with the same container alias, Smart Router balances the load between them.
In some cases, custom routing functionality is required. You can create a custom class to implement the custom routing and then create a custom Smart Router image with the class. You can push this image into your custom registry and then use it to deploy Smart Router.
Prerequisites
- A JDK and Apache Maven are installed.
- The project for deploying Red Hat Process Automation Manager is created in your Red Hat OpenShift Container Platform environment.
- You know the route for the Red Hat OpenShift Container Platform image registry and have the permission to push images into the registry. For instructions about configuring the registry, see Registry in Red Hat OpenShift Container Platform product documentation.
Procedure
- Download the sample source of the router extension from the GitHub repository.
- Modify the sample source of the router extension as necessary. The existing code implements simple routing based on the version of the container.
Build the source code with Maven:
mvn clean package
The build process generates the following JAR file:
target/router-ext-0.0.1-SNAPSHOT.jar
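Optionally, you can confirm that the build packaged your routing classes into the JAR file. The following command is a sketch; adjust the grep pattern to the class or package names that you used:
# List the JAR contents and look for the custom routing classes.
jar tf target/router-ext-0.0.1-SNAPSHOT.jar | grep -i resolver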
Create a working directory for creating the custom image, copy the generated JAR file into the directory, and then change to the directory, for example:
mkdir /tmp/smartrouter
cp target/router-ext-0.0.1-SNAPSHOT.jar /tmp/smartrouter
cd /tmp/smartrouter
-
Authenticate to the
registry.redhat.io
registry using thepodman login
command. For instructions about authenticating to the registry, see Red Hat Container Registry Authentication. To download the supported Smart Router base image, enter the following command:
podman pull registry.redhat.io/rhpam-7/rhpam-smartrouter-rhel8:7.13.5
Extract the
openshift-launch.sh
file from the official Smart Router image:
podman run --rm registry.redhat.io/rhpam-7/rhpam-smartrouter-rhel8:7.13.5 \
  cat /opt/rhpam-smartrouter/openshift-launch.sh > openshift-launch.sh
Edit the
openshift-launch.sh
file. In the last line of the file, find the exec instruction that looks like the following text:
exec ${JAVA_HOME}/bin/java ${SHOW_JVM_SETTINGS} ${JAVA_OPTS} ${JAVA_OPTS_APPEND} ${JAVA_PROXY_OPTIONS} "${D_ARR[@]}" -jar /opt/${JBOSS_PRODUCT}/${KIE_ROUTER_DISTRIBUTION_JAR}
Change the instruction to the following text:
exec ${JAVA_HOME}/bin/java ${SHOW_JVM_SETTINGS} "${D_ARR[@]}" \
  -cp /opt/${JBOSS_PRODUCT}/router-ext-0.0.1-SNAPSHOT.jar:/opt/${JBOSS_PRODUCT}/${KIE_ROUTER_DISTRIBUTION_JAR} \
  org.kie.server.router.KieServerRouter
This change adds the extension JAR file to the Java class path.
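You can verify the edit before you build the custom image. The following commands are a sketch that checks that the launch script now references both the extension JAR file and the router main class:
# Confirm that the modified exec line contains the extension JAR and the router class.
grep -n "router-ext-0.0.1-SNAPSHOT.jar" openshift-launch.sh
grep -n "org.kie.server.router.KieServerRouter" openshift-launch.sh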
Create a
Dockerfile
file that defines a custom image based on the base image. The following example shows the content of theDockerfile
file:
FROM registry.redhat.io/rhpam-7/rhpam-smartrouter-rhel8:7.13.5
RUN rm -rfv /opt/rhpam-smartrouter/openshift-launch.sh
COPY openshift-launch.sh /opt/rhpam-smartrouter/openshift-launch.sh
COPY router-ext-0.0.1-SNAPSHOT.jar /opt/rhpam-smartrouter/router-ext-0.0.1-SNAPSHOT.jar
USER root
RUN chown jboss. /opt/rhpam-smartrouter/router-ext-0.0.1-SNAPSHOT.jar /opt/rhpam-smartrouter/openshift-launch.sh
RUN chmod +x /opt/rhpam-smartrouter/openshift-launch.sh
USER 185
This file includes the following actions:
- Add the JAR file and the new openshift-launch.sh file
- Change the current user to root
- Set the necessary permissions for the openshift-launch.sh file
- Revert to USER 185, the Red Hat JBoss EAP user
-
Log in to your Red Hat OpenShift Container Platform cluster with the
oc
command. -
Log in to the Red Hat OpenShift Container Platform cluster registry with the
podman login
command. Build the custom image using the
Dockerfile
. Tag the image for your Red Hat OpenShift Container Platform cluster registry and your project namespace. Use a custom name for the image and the same version tag as the version of the base image. To build the image, enter the following command:
podman build . --tag registry-route/project-name/image-name:7.13.5
For example:
podman build . --tag route-openshift-image-registry.openshift.example.com/rhpam-project/rhpam-smartrouter-rhel8-custom:7.13.5
After the build completes, run the image and verify that the customization was successful. Enter the following command:
podman run registry-route/project-name/image-name:7.13.5
For example:
podman run route-openshift-image-registry.openshift.example.com/rhpam-project/rhpam-smartrouter-rhel8-custom:7.13.5
Ensure that the output mentions the custom service, as in the following example:
INFO: Using 'LatestVersionContainerResolver' container resolver and restriction policy 'ByPassUserNotAllowedRestrictionPolicy'
Push the custom image into the registry:
podman push registry-route/project-name/image-name:7.13.5
For example:
podman push route-openshift-image-registry.openshift.example.com/rhpam-project/rhpam-smartrouter-rhel8-custom:7.13.5
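The earlier login to the cluster registry with the podman login command typically uses your OpenShift token. The following commands are a sketch; the registry route matches the example used in the push command:
# Log in to the OpenShift internal image registry with the current OpenShift user and token.
REGISTRY=route-openshift-image-registry.openshift.example.com
podman login -u "$(oc whoami)" -p "$(oc whoami -t)" "$REGISTRY"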
Next steps
When deploying Red Hat Process Automation Manager, set the following values in the Smart Router tab:
-
Image context: The project name, for example,
rhpam-project
-
Image: The custom image name, for example,
rhpam-smartrouter-rhel8-custom
For instructions about deploying the Smart Router using the operator, see Section 3.2.6, “Setting Smart Router configuration for the environment”.
You can also use a custom tag instead of the current version tag. However, if you use the current version tag, you can later create an image for a new version using the version tag for it. Then, when you upgrade the Red Hat Process Automation Manager version, the new image is included automatically. For instructions about upgrading the Red Hat Process Automation Manager version, see Section 3.3, “Modifying an environment that is deployed using operators”.
If you use a custom tag, when deploying Red Hat Process Automation Manager, in the Smart Router tab set the Image Tag value to the custom tag.
Chapter 4. Deploying Dashbuilder Standalone on Red Hat OpenShift Container Platform
You can use Dashbuilder Standalone to view dashboards in OpenShift that were created in and exported from Business Central. This is useful for reviewing business metrics in environments that do not have Business Central. Use the Dashbuilder Standalone operator to deploy Dashbuilder Standalone on Red Hat OpenShift Container Platform separately from other services.
Prerequisites
- Dashbuilder Standalone is available in the OpenShift registry.
- You have prepared your OpenShift environment as described in Chapter 3, Deployment and management of a Red Hat Process Automation Manager environment using OpenShift operators.
- You have created and exported a dashboard in Business Central.
Procedure
- On the Operator Installation page, enter a name for your application in the Application name field.
-
In the Environment field, enter a name for your environment, for example
rhpam-standalone-dashbuilder
. - Click Next.
- Optional: On the Security page, configure LDAP or Red Hat Single Sign-On.
- On the Components page, select Dashbuilder from the Components list.
To add a KIE Server data set, complete the following tasks:
NoteYou can add additional KIE Server data sets by repeating this step.
- Click Add new KIE Server DataSets.
-
In the DataSet name field, enter
kieserver-1
. -
In the Kie Server Location field, enter the location of your KIE Server, for example
https://my-kie-server:80/services/rest/server
. To set your credentials, complete one of the following tasks:
- If you do not have a token set, in the Username and Password fields, enter your username and password. Leave the Token field blank.
If you have a token, in the Token field, enter your token. Leave the Username and Password fields blank.
The following example shows the resulting custom resource:
apiVersion: app.kiegroup.org/v2
kind: KieApp
metadata:
  name: standalone-dashbuilder
spec:
  environment: rhpam-standalone-dashbuilder
  objects:
    dashbuilder:
      config:
        kieServerDataSets:
          - name: kieserver-1
            location: 'https://my-kie-server:80/services/rest/server'
            user: kieserverAdmin
            password: kieserverAdminPwd
            replaceQuery: true
To add a KIE Server template, complete the following tasks:
NoteYou can add additional KIE Server templates by repeating this step.
- Click Add new KIE Server Templates.
-
In the Template name field, enter a name for your template, for example
kieserver-template
. -
In the KIE Server Location field, enter the location of your KIE Server, for example
https://my-other-kie-server:80/services/rest/server
. To set your credentials, complete one of the following tasks:
- If you do not have a token set, in the Username and Password fields, enter your username and password. Leave the Token field blank.
If you have a token, in the Token field, enter your token. Leave the Username and Password fields blank.
The following example shows the resulting custom resource:
apiVersion: app.kiegroup.org/v2
kind: KieApp
metadata:
  name: standalone-dashbuilder
spec:
  environment: rhpam-standalone-dashbuilder
  objects:
    dashbuilder:
      config:
        kieServerDataSets:
          - name: kieserver-1
            location: 'https://my-kie-server:80/services/rest/server'
            user: kieserverAdmin
            password: kieserverAdminPwd
            replaceQuery: true
        kieServerTemplates:
          - name: kieserver-template
            location: 'https://my-other-kie-server:80/services/rest/server'
            user: user
            password: pwd
            replaceQuery: true
Optional: To set a custom hostname for the external route, enter a domain in the Custom hostname to be used on the Dashbuilder external Route field, formatted as in the following example:
`dashbuilder.example.com`
NoteThe custom hostname must be valid and resolvable.
To change the custom hostname, you can modify the
routeHostname
property.Optional: To enable and set the Edge termination route, complete the following steps:
- Under Change route termination, select Enable Edge termination.
- Optional: In the Key field, enter the private key.
- Optional: In the Certificate field, enter the certificate.
- Optional: In the CaCertificate field, enter the CaCertificate.
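If you manage the deployment from the command line instead of the web console, you can save the resulting KieApp custom resource, such as the examples shown earlier in this procedure, to a file and apply it with the oc client. The following command is a sketch; the file name and project name are placeholders:
# Apply the saved custom resource in the project that contains the operator.
oc apply -f standalone-dashbuilder.yaml -n my-project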
4.1. Dashbuilder Standalone environment variables
When you use the Dashbuilder container image with the operator, you can configure Dashbuilder by using environment variables or through the custom resource.
Parameter | Equivalent Environment Variable | Description | Example value |
---|---|---|---|
 | | Allows downloading of external (remote) files. Default value is false. | False |
 | | Enables external components. | True |
 | | Enables partitioning of components by the Runtime Model ID. Default value is true. | True |
 | | Allows the use of the properties file with Dashbuilder configurations. Unique properties are appended and if a property is set more than once, the one from the properties file is used. | True |
 | | Enables partitioning of Dataset IDs by the Runtime Model ID. Default value is true. | True |
 | — | Enables integration with Business Central by configuring Business Central and Dashbuilder automatically. Only available on operator. | True |
 | — | Enables integration with KIE Server by configuring KIE Server and Dashbuilder automatically. Only available on operator. | True |
 | | Sets the base directory where dashboard ZIP files are stored. If | — |
 | | Sets a static dashboard to run automatically. If this property is set, imports are not allowed. | — |
 | | Sets the base directory where dashboard ZIP files are stored. If | — |
 | | Defines the KIE Server data sets access configuration. | — |
 | | Defines the KIE Server Templates access configuration. | — |
 | | Enables automatic removal of model file from the file system. Default value is false. | False |
 | | Allows Runtime to check model last update in the file system to update the content. Default value is true. | True |
 | | Sets Dashbuilder as not ephemeral. If | True |
 | | Allows Runtime to allow imports (multi-tenancy). Default value is false. | False |
 | DASHBUILDER_UPLOAD_SIZE | Sets the size limit for dashboard uploads (in kb). Default value is 10485760 kb. | 10485760 |
env | — | Represents an environment variable present in a Container. | — |
You can use the operator to set environment variables by using the env
property. The following example sets the value of the DASHBUILDER_UPLOAD_SIZE
property to 1000
.
apiVersion: app.kiegroup.org/v2
kind: KieApp
metadata:
  name: standalone-dashbuilder
spec:
  environment: rhpam-standalone-dashbuilder
  objects:
    dashbuilder:
      env:
        - name: DASHBUILDER_UPLOAD_SIZE
          value: '1000'
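Optionally, you can read the value back from the deployed custom resource to confirm the setting. The following command is a sketch; it assumes that the KieApp custom resource definition is installed and that the resource uses the name from the example:
# Print the DASHBUILDER_UPLOAD_SIZE value that is set on the Dashbuilder object.
oc get kieapp standalone-dashbuilder \
  -o jsonpath='{.spec.objects.dashbuilder.env[?(@.name=="DASHBUILDER_UPLOAD_SIZE")].value}'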
Chapter 5. Migration of information from a deployment on Red Hat OpenShift Container Platform 3
If you previously used a Red Hat Process Automation Manager deployment on Red Hat OpenShift Container Platform 3, you can migrate the information from that deployment to a new deployment on Red Hat OpenShift Container Platform 4.
Before migrating information, you must deploy a new Red Hat Process Automation Manager infrastructure on Red Hat OpenShift Container Platform 4 using the operator. Include the same elements in the new infrastructure as those present in the old deployment. For example:
- For any existing authoring deployment, create a new authoring infrastructure, including Business Central and at least one KIE Server.
- For any existing immutable KIE Server, deploy a new immutable KIE Server with the same artifacts.
- For any existing KIE Server with a MySQL or PostgreSQL database pod, deploy a new KIE Server with the same type of database pod.
- For any existing KIE Server that uses an external database server, deploy a new KIE Server that uses the same external database server with the same credentials. The server connects to the same database and therefore can read the process context state.
If a KIE Server uses the H2 built-in database, migration of the process context state is not supported.
No migration is required for Smart Router. A new deployment of Smart Router automatically works with the services on the new KIE Servers.
5.1. Migrating information in Business Central
If you have an existing authoring environment in Red Hat OpenShift Container Platform 3, you can copy the .niogit
repository and the Maven repository from Business Central in this environment to Business Central in a new deployment on Red Hat OpenShift Container Platform 4. This action makes all the same projects and artifacts available in the new deployment.
Prerequisites
- You must have a machine that has network access to both the Red Hat OpenShift Container Platform 3 and Red Hat OpenShift Container Platform 4 infrastructures.
-
The
oc
command-line client from Red Hat OpenShift Container Platform 4 must be installed on the machine. For instructions about installing the command-line client, see CLI tools in Red Hat OpenShift Container Platform documentation.
Procedure
- Ensure that no web clients and no client applications are connected to any elements of the old and new deployment, including Business Central and KIE Servers.
- Create an empty temporary directory and change into it.
-
Using the
oc
command, log in to the Red Hat OpenShift Container Platform 3 infrastructure and switch to the project containing the old deployment. To view the pod names in the old deployment, run the following command:
oc get pods
Find the Business Central pod. The name of this pod includes
rhpamcentr
. In a high-availability deployment, you can use any of the Business Central pods.Use the
oc
command to copy the.niogit
repository and the Maven repository from the pod to the local machine, for example:
oc cp myapp-rhpamcentr-5-689mw:/opt/kie/data/.niogit .niogit
oc cp myapp-rhpamcentr-5-689mw:/opt/kie/data/maven-repository maven-repository
-
Using the
oc
command, log in to the Red Hat OpenShift Container Platform 4 infrastructure and switch to the project containing the new deployment. To view the pod names in the new deployment, run the following command:
oc get pods
Find the Business Central pod. The name of this pod includes
rhpamcentr
. In a high-availability deployment, you can use any of the Business Central pods.Use the
oc
command to copy the.niogit
repository and the Maven repository from the local machine to the pod, for example:
oc cp .niogit myappnew-rhpamcentr-abd24:/opt/kie/data/.niogit
oc cp maven-repository myappnew-rhpamcentr-abd24:/opt/kie/data/maven-repository
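Optionally, you can compare the size of the copied data on the local machine and in the new pod to confirm that the transfer completed. The following commands are a sketch that reuses the example pod name:
# Compare the local copies with the data inside the new Business Central pod.
du -sh .niogit maven-repository
oc exec myappnew-rhpamcentr-abd24 -- du -sh /opt/kie/data/.niogit /opt/kie/data/maven-repository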
5.2. Migrating a MySQL database for a KIE Server
If your environment in Red Hat OpenShift Container Platform 3 includes a KIE Server that uses a MySQL database pod, copy the MySQL database content from the old deployment to the new deployment. This action copies the existing process state to the new deployment.
Prerequisites
- You must have a machine that has network access to both the Red Hat OpenShift Container Platform 3 and Red Hat OpenShift Container Platform 4 infrastructures.
-
The
oc
command-line client from Red Hat OpenShift Container Platform 4 must be installed on the machine. For instructions about installing the command-line client, see CLI tools in Red Hat OpenShift Container Platform documentation. -
The
mysql
andmysqldump
client applications provided by MySQL version 8 or later or by MariaDB version 10 or later must be installed.
Procedure
- Ensure that no web clients and no client applications are connected to any elements of the old and new deployment, including Business Central and KIE Servers.
- Create an empty temporary directory and change into it.
-
Using the
oc
command, log in to the Red Hat OpenShift Container Platform 3 infrastructure and switch to the project containing the old deployment. To view the deployment configuration names in the old deployment, run the following command:
oc get dc
Find the
mysql
deployment configuration that corresponds to the required KIE Server instance.View the configuration YAML of the deployment configuration, for example:
oc edit dc/myapp-mysql
In this file, find the user name (normally
rhpam
) and password for the database server, for example:
- name: MYSQL_USER
  value: rhpam
- name: MYSQL_PASSWORD
  value: NDaJIV7!
Record the user name and password. Do not make any changes to the file.
NoteYou can also use the following commands to retrieve the user name and password:
oc get dc/myapp-mysql -ojsonpath='{.spec.template.spec.containers[0].env[?(@.name=="MYSQL_USER")].value}'
oc get dc/myapp-mysql -ojsonpath='{.spec.template.spec.containers[0].env[?(@.name=="MYSQL_PASSWORD")].value}'
To view the service names in the old deployment, run the following command:
oc get svc
Find the
mysql
service that corresponds to the required KIE Server instance.In a separate terminal window, start port forwarding from the local host to the
mysql
service, using the name and port number displayed for the service, for example:oc port-forward service/myapp-mysql 3306:3306
Create a full database dump using the recorded user name, for example:
mysqldump --all-databases -u rhpam -p -h 127.0.0.1 > mysqldump.sql
When prompted, enter the recorded password. The dump creation can take considerable time.
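Optionally, you can run a quick sanity check on the dump file before you restore it. The following commands are a sketch:
# Check that the dump file has a reasonable size and contains table definitions.
ls -lh mysqldump.sql
grep -c "CREATE TABLE" mysqldump.sql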
- Stop the port forwarding in the separate window using the Ctrl+C key combination.
-
Using the
oc
command, log in to the Red Hat OpenShift Container Platform 4 infrastructure and switch to the project containing the new deployment. To view the deployment configuration names in the new deployment, run the following command:
oc get dc
Find the
mysql
deployment configuration that corresponds to the required KIE Server instance.View the configuration YAML of the deployment configuration, for example:
oc edit dc/myappnew-mysql
In this file, find the user name (normally
rhpam
) and password for the database server. Record the user name and password. Do not make any changes to the file.NoteYou can also use the following commands to retrieve the user name and password:
oc get dc/myappnew-mysql -ojsonpath='{.spec.template.spec.containers[0].env[?(@.name=="MYSQL_USER")].value}'
oc get dc/myappnew-mysql -ojsonpath='{.spec.template.spec.containers[0].env[?(@.name=="MYSQL_PASSWORD")].value}'
To view the service names in the new deployment, run the following command:
oc get svc
Find the
mysql
service that corresponds to the required KIE Server instance.In a separate terminal window, start port forwarding from the local host to the
mysql
service, using the name and port number displayed for the service, for example:oc port-forward service/myappnew-mysql 3306:3306
Restore the database dump using the recorded user name, for example:
mysql -u rhpam -p -h 127.0.0.1 < mysqldump.sql
When prompted, enter the recorded password. The restoration can take considerable time.
- Stop the port forwarding in the separate window using the Ctrl+C key combination.
5.3. Migrating a PostgreSQL database for a KIE Server
If your environment in Red Hat OpenShift Container Platform 3 includes a KIE Server that uses a PostgreSQL database pod, copy the PostgreSQL database content from the old deployment to the new deployment. This action copies the existing process state to the new deployment.
Prerequisites
- You must have a machine that has network access to both the Red Hat OpenShift Container Platform 3 and Red Hat OpenShift Container Platform 4 infrastructures.
-
The
oc
command-line client from Red Hat OpenShift Container Platform 4 must be installed on the machine. For instructions about installing the command-line client, see CLI tools in Red Hat OpenShift Container Platform documentation. -
The
psql
andpg_dump
client applications provided by PostgreSQL version 10 or later must be installed.
Procedure
- Ensure that no web clients and no client applications are connected to any elements of the old and new deployment, including Business Central and KIE Servers.
- Create an empty temporary directory and change into it.
-
Using the
oc
command, log in to the Red Hat OpenShift Container Platform 3 infrastructure and switch to the project containing the old deployment. To view the deployment configuration names in the old deployment, run the following command:
oc get dc
Find the
postgresql
deployment configuration that corresponds to the required KIE Server instance.View the configuration YAML of the deployment configuration, for example:
oc edit dc/myapp-postgresql
In this file, find the user name (normally
rhpam
), password, and database name (normallyrhpam7
) for the database server, for example:
- name: POSTGRESQL_USER
  value: rhpam
- name: POSTGRESQL_PASSWORD
  value: NDaJIV7!
- name: POSTGRESQL_DATABASE
  value: rhpam7
Record the user name, password, and database name. Do not make any changes to the file.
NoteYou can also use the following commands to retrieve the user name, password, and database name:
oc get dc/myapp-postgresql -ojsonpath='{.spec.template.spec.containers[0].env[?(@.name=="POSTGRESQL_USER")].value}'
oc get dc/myapp-postgresql -ojsonpath='{.spec.template.spec.containers[0].env[?(@.name=="POSTGRESQL_PASSWORD")].value}'
oc get dc/myapp-postgresql -ojsonpath='{.spec.template.spec.containers[0].env[?(@.name=="POSTGRESQL_DATABASE")].value}'
To view the service names in the old deployment, run the following command:
oc get svc
Find the
postgresql
service that corresponds to the required KIE Server instance.In a separate terminal window, start port forwarding from the local host to the
postgresql
service, using the name and port number displayed for the service, for example:oc port-forward service/myapp-postgresql 5432:5432
Create a dump of the database using the recorded user name and database name, for example:
pg_dump rhpam7 -h 127.0.0.1 -U rhpam -W > pgdump.sql
When prompted, enter the recorded password. The dump creation can take considerable time.
- Stop the port forwarding in the separate window using the Ctrl+C key combination.
-
Using the
oc
command, log in to the Red Hat OpenShift Container Platform 4 infrastructure and switch to the project containing the new deployment. To view the deployment configuration names in the new deployment, run the following command:
oc get dc
Find the
postgresql
deployment configuration that corresponds to the required KIE Server instance.View the configuration YAML of the deployment configuration, for example:
oc edit dc/myappnew-postgresql
In this file, find the user name (normally
rhpam
), password, and database name (normally rhpam7
) for the database server. Record the user name, password, and database name. Do not make any changes to the file.NoteYou can also use the following commands to retrieve the user name, password, and database name:
oc get dc/myappnew-postgresql -ojsonpath='{.spec.template.spec.containers[0].env[?(@.name=="POSTGRESQL_USER")].value}'
oc get dc/myappnew-postgresql -ojsonpath='{.spec.template.spec.containers[0].env[?(@.name=="POSTGRESQL_PASSWORD")].value}'
oc get dc/myappnew-postgresql -ojsonpath='{.spec.template.spec.containers[0].env[?(@.name=="POSTGRESQL_DATABASE")].value}'
To view the service names in the new deployment, run the following command:
oc get svc
Find the
postgresql
service that corresponds to the required KIE Server instance.In a separate terminal window, start port forwarding from the local host to the
postgresql
service, using the name and port number displayed for the service, for example:oc port-forward service/myappnew-postgresql 5432:5432
Restore the database dump using the recorded user name and database name, for example:
psql -h 127.0.0.1 -d rhpam7 -U rhpam -W < pgdump.sql
When prompted, enter the recorded password. The restoration can take considerable time.
Review any displayed database error messages. Messages about objects that already exist are normal.
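To make the review easier, you can run the restore so that the error output is captured in a file and then filter out the expected messages. The following commands are a sketch; the file name is a placeholder:
# Capture restore errors in a file, then hide the expected "already exists" messages.
psql -h 127.0.0.1 -d rhpam7 -U rhpam -W < pgdump.sql 2> restore-errors.log
grep -v "already exists" restore-errors.log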
- Stop the port forwarding in the separate window using the Ctrl+C key combination.
Appendix A. Versioning information
Documentation last updated on Thursday, March 14th, 2024.
Appendix B. Contact information
Red Hat Process Automation Manager documentation team: brms-docs@redhat.com