Deploying a Red Hat Process Automation Manager authoring environment on Red Hat OpenShift Container Platform
Preface
As a system engineer, you can deploy a Red Hat Process Automation Manager authoring environment on Red Hat OpenShift Container Platform to provide a platform for development of services, process applications, and other business assets.
Prerequisites
- Red Hat OpenShift Container Platform version 3.11 is deployed.
- At least four gigabytes of memory are available in the OpenShift cluster/namespace.
For a high-availability deployment, the following resources are available on the OpenShift cluster:
- For the Business Central replicated pod, 8 gigabytes of memory and 2 CPU cores are required for each replica. Two replicas are created by default.
- For the Process Server replicated pod, 1 gigabyte of memory and 1 CPU core are required for each replica. Two replicas are created by default.
- For the Red Hat Data Grid replicated pod, 2 gigabytes of memory and 1 CPU core are required for each replica. Two replicas are created by default.
- The Red Hat AMQ replicated pod uses the default resource limits configured on your cluster.
- The MySQL replicated pod uses the default resource limits configured on your cluster.
For instructions about checking the capacity of your cluster, see Analyzing cluster capacity in the Red Hat OpenShift Container Platform 3.11 product documentation.
- The OpenShift project for the deployment is created.
- You are logged in to the project using the oc command. For more information about the oc command-line tool, see the OpenShift CLI Reference. If you want to use the OpenShift Web console to deploy templates, you must also be logged on using the Web console. For a hypothetical login example, see the commands after this list.
- Dynamic persistent volume (PV) provisioning is enabled. Alternatively, if dynamic PV provisioning is not enabled, enough persistent volumes must be available. By default, the deployed components require the following PV sizes:
- The replicated set of Process Server pods requires one 1Gi PV for the database by default. You can change the database PV size in the template parameters. This requirement does not apply if you use an external database server.
- Business Central requires one 1Gi PV by default. You can change the PV size for Business Central persistent storage in the template parameters.
- Your OpenShift environment supports persistent volumes with ReadWriteMany mode. If your environment does not support this mode, you can use NFS to provision the volumes. However, for best performance and reliability, use GlusterFS to provision persistent volumes for a high-availability authoring environment. For information about access mode support in OpenShift Online volume plug-ins, see Access Modes.
Important: ReadWriteMany mode is not supported on OpenShift Online and OpenShift Dedicated.
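The following commands are a hypothetical example of satisfying the project and login prerequisites; the cluster URL and project name are placeholder values:
$ oc login https://openshift.example.com:8443
$ oc new-project rhpam-authoring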
Since Red Hat Process Automation Manager version 7.5, support for Red Hat OpenShift Container Platform 3.x is deprecated, including using templates to install Red Hat Process Automation Manager. This functionality will be removed in a future release.
Do not use Red Hat Process Automation Manager templates with Red Hat OpenShift Container Platform 4.x. To deploy Red Hat Process Automation Manager on Red Hat OpenShift Container Platform 4.x, see the instructions in Deploying a Red Hat Process Automation Manager environment on Red Hat OpenShift Container Platform using Operators.
Chapter 1. Overview of Red Hat Process Automation Manager on Red Hat OpenShift Container Platform
You can deploy Red Hat Process Automation Manager into a Red Hat OpenShift Container Platform environment.
In this solution, components of Red Hat Process Automation Manager are deployed as separate OpenShift pods. You can scale each of the pods up and down individually to provide as few or as many containers as required for a particular component. You can use standard OpenShift methods to manage the pods and balance the load.
The following key components of Red Hat Process Automation Manager are available on OpenShift:
Process Server, also known as Execution Server or KIE Server, is the infrastructure element that runs decision services, process applications, and other deployable assets (collectively referred to as services). All logic of the services runs on execution servers.
A database server is normally required for Process Server. You can provide a database server in another OpenShift pod or configure an execution server on OpenShift to use any other database server. Alternatively, Process Server can use an H2 database; in this case, you cannot scale the pod.
You can scale up a Process Server pod to provide as many copies as required, running on the same host or different hosts. As you scale a pod up or down, all of its copies use the same database server and run the same services. OpenShift provides load balancing and a request can be handled by any of the pods.
You can deploy a separate Process Server pod to run a different group of services. That pod can also be scaled up or down. You can have as many separate replicated Process Server pods as required.
Business Central is a web-based interactive environment used for authoring services. It also provides a management and monitoring console. You can use Business Central to develop services and deploy them to Process Servers. You can also use Business Central to monitor the execution of processes.
Business Central is a centralized application. However, you can configure it for high availability, where multiple pods run and share the same data.
Business Central includes a Git repository that holds the source for the services that you develop on it. It also includes a built-in Maven repository. Depending on the configuration, Business Central can place the compiled services (KJAR files) into the built-in Maven repository or into an external Maven repository.
- Business Central Monitoring is a web-based management and monitoring console. It can manage the deployment of services to Process Servers and provide monitoring information, but does not include authoring capabilities. You can use this component to manage staging and production environments.
- Smart Router is an optional layer between Process Servers and other components that interact with them. When your environment includes many services running on different Process Servers, Smart Router provides a single endpoint to all client applications. A client application can make a REST API call that requires any service. Smart Router automatically calls the Process Server that can process a particular request.
You can arrange these and other components into various environment configurations within OpenShift.
The following environment types are typical:
- Authoring: An environment for creating and modifying services using Business Central. It consists of pods that provide Business Central for the authoring work and a Process Server for test execution of the services. For instructions about deploying this environment, see Deploying a Red Hat Process Automation Manager authoring environment on Red Hat OpenShift Container Platform.
- Managed deployment: An environment for running existing services for staging and production purposes. This environment includes several groups of Process Server pods; you can deploy and undeploy services on every such group and also scale the group up or down as necessary. Use Business Central Monitoring to deploy, run, and stop the services and to monitor their execution.
You can deploy two types of managed environment. In a freeform server environment, you initially deploy Business Central Monitoring and one Process Server. You can additionally deploy any number of Process Servers. Business Central Monitoring can connect to all servers in the same namespace. For instructions about deploying this environment, see Deploying a Red Hat Process Automation Manager freeform managed server environment on Red Hat OpenShift Container Platform.
Alternatively, you can deploy a fixed managed server environment. A single deployment includes Business Central Monitoring, Smart Router, and a preset number of Process Servers (by default, two servers, but you can modify the template to change the number). You cannot easily add or remove servers at a later time. For instructions about deploying this environment, see Deploying a Red Hat Process Automation Manager fixed managed server environment on Red Hat OpenShift Container Platform.
- Deployment with immutable servers: An alternate environment for running existing services for staging and production purposes. In this environment, when you deploy a Process Server pod, it builds an image that loads and starts a service or group of services. You cannot stop any service on the pod or add any new service to the pod. If you want to use another version of a service or modify the configuration in any other way, you deploy a new server image and displace the old one. In this system, the Process Server runs like any other pod on the OpenShift environment; you can use any container-based integration workflows and do not need to use any other tools to manage the pods. Optionally, you can use Business Central Monitoring to monitor the performance of the environment and to stop and restart some of the service instances, but not to deploy additional services to any Process Server or undeploy any existing ones (you cannot add or remove containers). For instructions about deploying this environment, see Deploying a Red Hat Process Automation Manager immutable server environment on Red Hat OpenShift Container Platform.
You can also deploy a trial or evaluation environment. This environment includes Business Central and a Process Server. You can set it up quickly and use it to evaluate or demonstrate developing and running assets. However, the environment does not use any persistent storage, and any work you do in the environment is not saved. For instructions about deploying this environment, see Deploying a Red Hat Process Automation Manager trial environment on Red Hat OpenShift Container Platform.
To deploy a Red Hat Process Automation Manager environment on OpenShift, you can use the templates that are provided with Red Hat Process Automation Manager. You can modify the templates to ensure that the configuration suits your environment.
Chapter 2. Preparing to deploy Red Hat Process Automation Manager in your OpenShift environment
Before deploying Red Hat Process Automation Manager in your OpenShift environment, you must complete several tasks. You do not need to repeat these tasks if you want to deploy additional images, for example, for new versions of processes or for other processes.
2.1. Ensuring the availability of image streams and the image registry
To deploy Red Hat Process Automation Manager components on Red Hat OpenShift Container Platform, you must ensure that OpenShift can download the correct images from the Red Hat registry. To download the images, OpenShift requires image streams, which contain the information about the location of images. OpenShift also must be configured to authenticate with the Red Hat registry using your service account user name and password.
Some versions of the OpenShift environment include the required image streams. You must check if they are available. If image streams are available in OpenShift by default, you can use them if the OpenShift infrastructure is configured for authentication with the Red Hat registry. The administrator must complete the registry authentication configuration when installing the OpenShift environment.
Otherwise, you can configure registry authentication in your own project and install the image streams in that project.
Procedure
- Determine whether Red Hat OpenShift Container Platform is configured with the user name and password for Red Hat registry access. For details about the required configuration, see Configuring a Registry Location. If you are using an OpenShift Online subscription, it is configured for Red Hat registry access.
If Red Hat OpenShift Container Platform is configured with the user name and password for Red Hat registry access, enter the following commands:
$ oc get imagestreamtag -n openshift | grep -F rhpam-businesscentral | grep -F 7.6
$ oc get imagestreamtag -n openshift | grep -F rhpam-kieserver | grep -F 7.6
If the outputs of both commands are not empty, the required image streams are available in the openshift namespace and no further action is required.
If the output of one or both of the commands is empty, or if OpenShift is not configured with the user name and password for Red Hat registry access, complete the following steps:
- Ensure you are logged in to OpenShift with the oc command and that your project is active.
- Complete the steps documented in Registry Service Accounts for Shared Environments. You must log in to the Red Hat Customer Portal to access the document and to complete the steps to create a registry service account.
- Select the OpenShift Secret tab and click the link under Download secret to download the YAML secret file.
- View the downloaded file and note the name that is listed in the name: entry.
- Enter the following commands:
oc create -f <file_name>.yaml
oc secrets link default <secret_name> --for=pull
oc secrets link builder <secret_name> --for=pull
Replace <file_name> with the name of the downloaded file and <secret_name> with the name that is listed in the name: entry of the file.
- Download the rhpam-7.6.0-openshift-templates.zip product deliverable file from the Software Downloads page and extract the rhpam76-image-streams.yaml file.
- Enter the following command:
$ oc apply -f rhpam76-image-streams.yaml
Note: If you complete these steps, you install the image streams into the namespace of your project. In this case, when you deploy the templates, you must set the IMAGE_STREAM_NAMESPACE parameter to the name of this project.
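If you installed the image streams into your own project, a check similar to the earlier commands confirms that they are present; my-project is a placeholder for your project name:
$ oc get imagestreamtag -n my-project | grep -F rhpam-businesscentral | grep -F 7.6
$ oc get imagestreamtag -n my-project | grep -F rhpam-kieserver | grep -F 7.6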
2.2. Creating the secrets for Process Server
OpenShift uses objects called secrets to hold sensitive information such as passwords or keystores. For more information about OpenShift secrets, see the Secrets chapter in the OpenShift documentation.
You must create an SSL certificate for HTTP access to Process Server and provide it to your OpenShift environment as a secret.
Procedure
- Generate an SSL keystore with a private and public key for SSL encryption for Process Server. For more information on how to create a keystore with self-signed or purchased SSL certificates, see Generate a SSL Encryption Key and Certificate.
Note: In a production environment, generate a valid signed certificate that matches the expected URL for Process Server.
- Save the keystore in a file named keystore.jks.
- Record the name of the certificate. The default value for this name in Red Hat Process Automation Manager configuration is jboss.
- Record the password of the keystore file. The default value for this password in Red Hat Process Automation Manager configuration is mykeystorepass.
- Use the oc command to generate a secret named kieserver-app-secret from the new keystore file:
$ oc create secret generic kieserver-app-secret --from-file=keystore.jks
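The following commands are a minimal sketch of generating a self-signed keystore with the default certificate name (jboss) and password (mykeystorepass) and creating the secret from it; the host name in the certificate is a placeholder, and a self-signed certificate is suitable only for testing:
$ keytool -genkeypair -alias jboss -keyalg RSA -keysize 2048 -validity 365 -keystore keystore.jks -storepass mykeystorepass -dname "CN=kieserver.example.com"
$ oc create secret generic kieserver-app-secret --from-file=keystore.jks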
2.3. Creating the secrets for Business Central
You must create an SSL certificate for HTTP access to Business Central and provide it to your OpenShift environment as a secret.
Do not use the same certificate and keystore for Business Central and Process Server.
Procedure
- Generate an SSL keystore with a private and public key for SSL encryption for Business Central. For more information on how to create a keystore with self-signed or purchased SSL certificates, see Generate a SSL Encryption Key and Certificate.
Note: In a production environment, generate a valid signed certificate that matches the expected URL for Business Central.
- Save the keystore in a file named keystore.jks.
- Record the name of the certificate. The default value for this name in Red Hat Process Automation Manager configuration is jboss.
- Record the password of the keystore file. The default value for this password in Red Hat Process Automation Manager configuration is mykeystorepass.
- Use the oc command to generate a secret named businesscentral-app-secret from the new keystore file:
$ oc create secret generic businesscentral-app-secret --from-file=keystore.jks
2.4. Changing GlusterFS configuration
You must check whether your OpenShift environment uses GlusterFS to provide permanent storage volumes. If it uses GlusterFS, to ensure optimal performance of Business Central, you must tune your GlusterFS storage by changing the storage class configuration.
Procedure
To check whether your environment uses GlusterFS, enter the following command:
oc get storageclass
In the results, check whether the (default) marker is on the storage class that lists glusterfs. For example, in the following output the default storage class is gluster-container, which does list glusterfs:
NAME                          PROVISIONER                AGE
gluster-block                 gluster.org/glusterblock   8d
gluster-container (default)   kubernetes.io/glusterfs    8d
If the result has a default storage class that does not list glusterfs or if the result is empty, you do not need to make any changes. In this case, skip the rest of this procedure.
To save the configuration of the default storage class into a YAML file, enter the following command:
oc get storageclass <class-name> -o yaml >storage_config.yaml
Replace <class-name> with the name of the default storage class. Example:
oc get storageclass gluster-container -o yaml >storage_config.yaml
Edit the storage_config.yaml file:
  - Remove the lines with the following keys:
    - creationTimestamp
    - resourceVersion
    - selfLink
    - uid
  - If you are planning to use Business Central only as a single pod, without high-availability configuration, on the line with the volumeoptions key, add the following options:
features.cache-invalidation on
performance.nl-cache on
For example:
volumeoptions: client.ssl off, server.ssl off, features.cache-invalidation on, performance.nl-cache on
  - If you are planning to use Business Central in a high-availability configuration, on the line with the volumeoptions key, add the following options:
features.cache-invalidation on
nfs.trusted-write on
nfs.trusted-sync on
performance.nl-cache on
performance.stat-prefetch off
performance.read-ahead off
performance.write-behind off
performance.readdir-ahead off
performance.io-cache off
performance.quick-read off
performance.open-behind off
locks.mandatory-locking off
performance.strict-o-direct on
For example:
volumeoptions: client.ssl off, server.ssl off, features.cache-invalidation on, nfs.trusted-write on, nfs.trusted-sync on, performance.nl-cache on, performance.stat-prefetch off, performance.read-ahead off, performance.write-behind off, performance.readdir-ahead off, performance.io-cache off, performance.quick-read off, performance.open-behind off, locks.mandatory-locking off, performance.strict-o-direct on
To remove the existing default storage class, enter the following command:
oc delete storageclass <class-name>
Replace <class-name> with the name of the default storage class. Example:
oc delete storageclass gluster-container
To re-create the storage class using the new configuration, enter the following command:
oc create -f storage_config.yaml
2.5. Provisioning persistent volumes with ReadWriteMany access mode using NFS
If you want to deploy high-availability Business Central or any Process Servers that use the H2 database, which is the default setting for a non-high-availability authoring environment, your environment must provision persistent volumes with ReadWriteMany access mode.
If you want to deploy a high-availability authoring environment, for optimal performance and reliability, provision persistent volumes using GlusterFS. Configure the GlusterFS storage class as described in Section 2.4, “Changing GlusterFS configuration”.
If your configuration requires provisioning persistent volumes with ReadWriteMany access mode but your environment does not support such provisioning, use NFS to provision the volumes. Otherwise, skip this procedure.
Procedure
Deploy an NFS server and provision the persistent volumes using NFS. For information about provisioning persistent volumes using NFS, see the "Persistent storage using NFS" section of the Configuring Clusters guide.
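The following command is a hypothetical sketch of creating one such NFS-backed volume; the NFS server address, export path, volume name, and capacity are placeholder values that you must adapt to your environment:
$ oc create -f - <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: rhpam-nfs-pv01
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: nfs.example.com
    path: /exports/rhpam
EOF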
2.6. Preparing a Maven mirror repository for offline use
If your Red Hat OpenShift Container Platform environment does not have outgoing access to the public Internet, you must prepare a Maven repository with a mirror of all the necessary artifacts and make this repository available to your environment.
You do not need to complete this procedure if your Red Hat OpenShift Container Platform environment is connected to the Internet.
Prerequisites
- A computer that has outgoing access to the public Internet is available.
Procedure
Prepare a Maven release repository to which you can write. The repository must allow read access without authentication. Your OpenShift environment must have access to this repository. You can deploy a Nexus repository manager in the OpenShift environment. For instructions about setting up Nexus on OpenShift, see Setting up Nexus. Use this repository as a separate mirror repository.
Alternatively, if you use a custom external repository (for example, Nexus) for your services, you can use the same repository as a mirror repository.
On the computer that has an outgoing connection to the public Internet, complete the following steps:
- Download the latest version of the Offliner tool.
- Download the rhpam-7.6.0-offliner.txt product deliverable file from the Software Downloads page of the Red Hat Customer Portal.
- Enter the following command to use the Offliner tool to download the required artifacts:
java -jar offliner-<version>.jar -r https://maven.repository.redhat.com/ga/ -r https://repo1.maven.org/maven2/ -d /home/user/temp rhpam-7.6.0-offliner.txt
Replace /home/user/temp with an empty temporary directory and <version> with the version of the Offliner tool that you downloaded. The download can take a significant amount of time.
- Upload all artifacts from the temporary directory to the Maven mirror repository that you prepared. You can use the Maven Repository Provisioner utility to upload the artifacts.
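As a hypothetical alternative to the Maven Repository Provisioner, the following commands upload the downloaded artifacts to a Nexus hosted repository with plain HTTP PUT requests; the repository URL and credentials are placeholder values:
$ cd /home/user/temp
$ find . -type f | while read -r artifact; do curl -u admin:admin123 --upload-file "$artifact" "http://nexus.example.com/repository/maven-mirror/${artifact#./}"; done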
If you developed services outside Business Central and they have additional dependencies, add the dependencies to the mirror repository. If you developed the services as Maven projects, you can use the following steps to prepare these dependencies automatically. Complete the steps on the computer that has an outgoing connection to the public Internet.
- Create a backup of the local Maven cache directory (~/.m2/repository) and then clear the directory.
- Build the source of your projects using the mvn clean install command.
- For every project, enter the following command to ensure that Maven downloads all runtime dependencies for all the artifacts generated by the project:
mvn -e -DskipTests dependency:go-offline -f /path/to/project/pom.xml --batch-mode -Djava.net.preferIPv4Stack=true
Replace /path/to/project/pom.xml with the correct path to the pom.xml file of the project.
- Upload all artifacts from the local Maven cache directory (~/.m2/repository) to the Maven mirror repository that you prepared. You can use the Maven Repository Provisioner utility to upload the artifacts.
2.7. Building a custom Process Server extension image for an external database
If you want to use an external database server for a Process Server and the database server is not a MySQL or PostgreSQL server, you must build a custom Process Server extension image with drivers for this server before deploying your environment.
Complete the steps in this build procedure to provide drivers for any of the following database servers:
- Microsoft SQL Server
- MariaDB
- IBM DB2
- Oracle Database
- Sybase
For the supported versions of the database servers, see Red Hat Process Automation Manager 7 Supported Configurations.
The build procedure creates a custom extension image that extends the existing Process Server image. You must import this custom extension image into your OpenShift environment and then reference it in the EXTENSIONS_IMAGE parameter.
Prerequisites
- You are logged in to your OpenShift environment using the oc command. Your OpenShift user must have the registry-editor role.
- For Oracle Database or Sybase, you downloaded the JDBC driver from the database server vendor.
You have installed the following required software:
- Docker
- Cekit version 3.2
- The following libraries and extensions for Cekit:
  - odcs-client, provided by the python3-odcs-client package or similar package
  - docker, provided by the python3-docker package or similar package
  - docker-squash, provided by the python3-docker-squash package or similar package
  - behave, provided by the python3-behave package or similar package
  - s2i, provided by the source-to-image package or similar package
Procedure
- For IBM DB2, Oracle Database, or Sybase, provide the JDBC driver JAR file in a local directory.
- Download the rhpam-7.6.0-openshift-templates.zip product deliverable file from the Software Downloads page of the Red Hat Customer Portal.
- Unzip the file and, using the command line, change to the templates/contrib/jdbc directory of the unzipped file. This directory contains the source code for the custom build.
- Run one of the following commands, depending on the database server type:
For Microsoft SQL Server:
make build mssql
For MariaDB:
make build mariadb
For IBM DB2:
make build db2
For Oracle Database:
make build oracle artifact=/tmp/ojdbc7.jar version=7.0
In this command, replace /tmp/ojdbc7.jar with the path name of the downloaded Oracle Database driver and 7.0 with the version of the driver.
For Sybase:
make build sybase artifact=/tmp/jconn4-16.0_PL05.jar version=16.0_PL05
In this command, replace /tmp/jconn4-16.0_PL05.jar with the path name of the downloaded Sybase driver and 16.0_PL05 with the version of the driver.
Run the following command to list the Docker images that are available locally:
docker images
Note the name of the image that was built, for example, jboss-kie-db2-extension-openshift-image, and the version tag of the image, for example, 11.1.4.4 (not the latest tag).
- Access the registry of your OpenShift environment directly and push the image to the registry. Depending on your user permissions, you can push the image into the openshift namespace or into a project namespace. For instructions about accessing the registry and pushing the images, see Accessing the Registry Directly. A hypothetical push example is shown after this procedure.
- When configuring your Process Server deployment with a template that supports an external database server, set the following parameters:
  - Drivers Extension Image (EXTENSIONS_IMAGE): The ImageStreamTag definition of the extension image, for example, jboss-kie-db2-extension-openshift-image:11.1.4.4
  - Drivers ImageStream Namespace (EXTENSIONS_IMAGE_NAMESPACE): The namespace to which you uploaded the extension image, for example, openshift or your project namespace.
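The following commands are a hypothetical example of tagging and pushing the extension image with Docker; the registry host name, target namespace, and image tag are placeholder values for your environment:
$ docker login -u "$(oc whoami)" -p "$(oc whoami -t)" docker-registry.example.com
$ docker tag jboss-kie-db2-extension-openshift-image:11.1.4.4 docker-registry.example.com/openshift/jboss-kie-db2-extension-openshift-image:11.1.4.4
$ docker push docker-registry.example.com/openshift/jboss-kie-db2-extension-openshift-image:11.1.4.4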
Chapter 3. Authoring environment
You can deploy an environment for creating and modifying processes using Business Central. It consists of Business Central for the authoring work and Process Server for test execution of the processes. If necessary, you can connect additional Process Servers to the Business Central.
Depending on your needs, you can deploy either a single authoring environment template or a high-availability (HA) authoring environment template.
A single authoring environment contains two pods. One of the pods runs Business Central, the other runs Process Server. The Process Server by default includes an embedded H2 database engine. This environment is most suitable for single-user authoring or when your OpenShift infrastructure has limited resources.
In an HA authoring environment, both Business Central and Process Server are provided in scalable pods. When pods are scaled, persistent storage is shared between the copies. The database is provided by a separate pod. To enable high-availability functionality in Business Central, additional pods with AMQ and Data Grid are required. These pods are configured and deployed by the high-availability authoring template. Use a high-availability authoring environment to provide maximum reliability and responsiveness, especially if several users are involved in authoring at the same time.
In the current version of Red Hat Process Automation Manager, an HA authoring environment is supported with certain limitations:
- If a Business Central pod crashes while a user works with it, the user might get an error message and then be redirected to another pod. Logging on again is not required.
- If a Business Central pod crashes during a user operation, data that was not committed (saved) might be lost.
- If a Business Central pod crashes during creation of a project, an unusable project might be created.
- If a Business Central pod crashes during creation of an asset, the asset might be created but not indexed, so it cannot be used. The user can open the asset in Business Central and save it again to make it indexed.
Another limitation also applies to all authoring environment deployments. If a user is deploying a service to the Process Server, no user is able to deploy another service to the same Process Server until the first deployment completes.
You can also deploy additional managed or immutable Process Servers, if required. Business Central can automatically discover any Process Servers in the same namespace, including immutable Process Servers and managed Process Servers. This feature requires the OpenShiftStartupStrategy setting, which is enabled for all Process Servers except those deployed in a fixed managed infrastructure. For instructions about deploying managed Process Servers with the OpenShiftStartupStrategy setting enabled, see Deploying a Red Hat Process Automation Manager freeform managed server environment on Red Hat OpenShift Container Platform. For instructions about deploying immutable Process Servers, see Deploying a Red Hat Process Automation Manager immutable server environment on Red Hat OpenShift Container Platform.
3.1. Deploying an authoring environment
You can use OpenShift templates to deploy a single or high-availability authoring environment. This environment consists of Business Central and a single Process Server.
3.1.1. Starting configuration of the template for an authoring environment
If you want to deploy a single authoring environment, use the rhpam76-authoring.yaml template file. By default, the single authoring template uses the H2 database with permanent storage. If you prefer to create a MySQL or PostgreSQL pod or to use an external database server (outside the OpenShift project), modify the template before deploying the environment. For instructions about modifying the template, see Section 3.4, “Modifying the template for the single authoring environment”.
If you want to deploy a high-availability authoring environment, use the rhpam76-authoring-ha.yaml template file. By default, the high-availability authoring template creates a MySQL pod to provide the database server for the Process Server. If you prefer to use PostgreSQL or to use an external server (outside the OpenShift project), you must modify the template before deploying the environment. You can also modify the template to change the number of replicas initially created for Business Central. For instructions about modifying the template, see Section 3.5, “Modifying the template for the High Availability authoring environment”.
Procedure
- Download the rhpam-7.6.0-openshift-templates.zip product deliverable file from the Software Downloads page of the Red Hat Customer Portal.
- Extract the required template file.
- Use one of the following methods to start deploying the template:
  - To use the OpenShift Web UI, in the OpenShift application console select Add to Project → Import YAML / JSON and then select or paste the <template-file-name>.yaml file. In the Add Template window, ensure Process the template is selected and click Continue.
  - To use the OpenShift command line console, prepare the following command line:
oc new-app -f <template-path>/<template-file-name>.yaml -p BUSINESS_CENTRAL_HTTPS_SECRET=businesscentral-app-secret -p KIE_SERVER_HTTPS_SECRET=kieserver-app-secret -p PARAMETER=value
In this command line, make the following changes:
    - Replace <template-path> with the path to the downloaded template file.
    - Replace <template-file-name> with the name of the template file.
    - Use as many -p PARAMETER=value pairs as needed to set the required parameters.
Next steps
Set the parameters for the template. Follow the steps in Section 3.1.2, “Setting required parameters for an authoring environment” to set common parameters. You can view the template file to see descriptions for all parameters.
3.1.2. Setting required parameters for an authoring environment
When configuring the template to deploy an authoring environment, you must set the following parameters in all cases.
Prerequisites
- You started the configuration of the template, as described in Section 3.1.1, “Starting configuration of the template for an authoring environment”.
Procedure
Set the following parameters:
- Business Central Server Keystore Secret Name (BUSINESS_CENTRAL_HTTPS_SECRET): The name of the secret for Business Central, as created in Section 2.3, “Creating the secrets for Business Central”.
- KIE Server Keystore Secret Name (KIE_SERVER_HTTPS_SECRET): The name of the secret for Process Server, as created in Section 2.2, “Creating the secrets for Process Server”.
- Business Central Server Certificate Name (BUSINESS_CENTRAL_HTTPS_NAME): The name of the certificate in the keystore that you created in Section 2.3, “Creating the secrets for Business Central”.
- Business Central Server Keystore Password (BUSINESS_CENTRAL_HTTPS_PASSWORD): The password for the keystore that you created in Section 2.3, “Creating the secrets for Business Central”.
- KIE Server Certificate Name (KIE_SERVER_HTTPS_NAME): The name of the certificate in the keystore that you created in Section 2.2, “Creating the secrets for Process Server”.
- KIE Server Keystore Password (KIE_SERVER_HTTPS_PASSWORD): The password for the keystore that you created in Section 2.2, “Creating the secrets for Process Server”.
- Application Name (APPLICATION_NAME): The name of the OpenShift application. It is used in the default URLs for Business Central and Process Server. OpenShift uses the application name to create a separate set of deployment configurations, services, routes, labels, and artifacts.
- Enable KIE server global discovery (KIE_SERVER_CONTROLLER_OPENSHIFT_GLOBAL_DISCOVERY_ENABLED): Set this parameter to true if you want Business Central to discover all Process Servers with the OpenShiftStartupStrategy in the same namespace. By default, Business Central discovers only Process Servers that are deployed with the same value of the APPLICATION_NAME parameter as Business Central itself.
- ImageStream Namespace (IMAGE_STREAM_NAMESPACE): The namespace where the image streams are available. If the image streams were already available in your OpenShift environment (see Section 2.1, “Ensuring the availability of image streams and the image registry”), the namespace is openshift. If you have installed the image streams file, the namespace is the name of the OpenShift project.
You can set the following user names and passwords. By default, the deployment automatically generates the passwords.
- KIE Admin User (KIE_ADMIN_USER) and KIE Admin Password (KIE_ADMIN_PWD): The user name and password for the administrative user. If you want to use Business Central to control or monitor any Process Servers other than the Process Server deployed by the same template, you must set and record the user name and password.
- KIE Server User (KIE_SERVER_USER) and KIE Server Password (KIE_SERVER_PWD): The user name and password that a client application can use to connect to any of the Process Servers.
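The following command line is a hypothetical complete example for the single authoring template that combines the required parameters described above; all values are examples, and the template file is assumed to be in the current directory:
$ oc new-app -f rhpam76-authoring.yaml -p APPLICATION_NAME=myapp -p IMAGE_STREAM_NAMESPACE=openshift -p BUSINESS_CENTRAL_HTTPS_SECRET=businesscentral-app-secret -p BUSINESS_CENTRAL_HTTPS_NAME=jboss -p BUSINESS_CENTRAL_HTTPS_PASSWORD=mykeystorepass -p KIE_SERVER_HTTPS_SECRET=kieserver-app-secret -p KIE_SERVER_HTTPS_NAME=jboss -p KIE_SERVER_HTTPS_PASSWORD=mykeystorepass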
Next steps
If necessary, set additional parameters.
To complete the deployment, follow the procedure in Section 3.1.13, “Completing deployment of the template for an authoring environment”.
3.1.3. Configuring the image stream namespace for an authoring environment
If you created image streams in a namespace that is not openshift, you must configure the namespace in the template.
If all image streams were already available in your Red Hat OpenShift Container Platform environment, you can skip this procedure.
Prerequisites
- You started the configuration of the template, as described in Section 3.1.1, “Starting configuration of the template for an authoring environment”.
Procedure
If you installed an image streams file according to instructions in Section 2.1, “Ensuring the availability of image streams and the image registry”, set the ImageStream Namespace (IMAGE_STREAM_NAMESPACE) parameter to the name of your OpenShift project.
3.1.4. Setting an optional Maven repository for an authoring environment
When configuring the template to deploy an authoring environment, if you want to place the built KJAR files into an external Maven repository, you must set parameters to access the repository.
Prerequisites
- You started the configuration of the template, as described in Section 3.1.1, “Starting configuration of the template for an authoring environment”.
Procedure
To configure access to a custom Maven repository, set the following parameters:
- Maven repository URL (MAVEN_REPO_URL): The URL for the Maven repository.
- Maven repository ID (MAVEN_REPO_ID): An identifier for the Maven repository. The default value is repo-custom.
- Maven repository username (MAVEN_REPO_USERNAME): The user name for the Maven repository.
- Maven repository password (MAVEN_REPO_PASSWORD): The password for the Maven repository.
Next steps
If necessary, set additional parameters.
To complete the deployment, follow the procedure in Section 3.1.13, “Completing deployment of the template for an authoring environment”.
To export or push Business Central projects as KJAR artifacts to the external Maven repository, you must also add the repository information in the pom.xml file for every project. For information about exporting Business Central projects to an external repository, see Packaging and deploying a Red Hat Process Automation Manager project.
3.1.5. Specifying credentials to access the built-in Maven repository for an authoring environment
When configuring the template to deploy an authoring environment, if you want to use the Maven repository that is built into Business Central and to connect additional Process Servers to the Business Central, you must configure credentials for accessing this Maven repository. You can then use these credentials to configure the Process Servers.
Also, if you are configuring RH-SSO or LDAP authentication, you must set the credentials for the built-in Maven repository to a user name and password configured in RH-SSO or LDAP. This setting is required so that the Process Server can access the Maven repository.
Prerequisites
- You started the configuration of the template, as described in Section 3.1.1, “Starting configuration of the template for an authoring environment”.
Procedure
To configure credentials for the built-in Maven repository, set the following parameters:
- Username for the Maven service hosted by Business Central (BUSINESS_CENTRAL_MAVEN_USERNAME): The user name for the built-in Maven repository.
- Password for the Maven service hosted by Business Central (BUSINESS_CENTRAL_MAVEN_PASSWORD): The password for the built-in Maven repository.
Next steps
If necessary, set additional parameters.
To complete the deployment, follow the procedure in Section 3.1.13, “Completing deployment of the template for an authoring environment”.
3.1.6. Configuring access to a Maven mirror in an environment without a connection to the public Internet for an authoring environment
When configuring the template to deploy an authoring environment, if your OpenShift environment does not have a connection to the public Internet, you must configure access to a Maven mirror that you set up according to Section 2.6, “Preparing a Maven mirror repository for offline use”.
Prerequisites
- You started the configuration of the template, as described in Section 3.1.1, “Starting configuration of the template for an authoring environment”.
Procedure
To configure access to the Maven mirror, set the following parameters:
- Maven mirror URL (MAVEN_MIRROR_URL): The URL for the Maven mirror repository that you set up in Section 2.6, “Preparing a Maven mirror repository for offline use”. This URL must be accessible from a pod in your OpenShift environment.
- Maven mirror of (MAVEN_MIRROR_OF): The value that determines which artifacts are to be retrieved from the mirror. For instructions about setting the mirrorOf value, see Mirror Settings in the Apache Maven documentation. The default value is external:*,!repo-rhpamcentr; with this value, Maven retrieves artifacts from the built-in Maven repository of Business Central directly and retrieves any other required artifacts from the mirror. If you configure an external Maven repository (MAVEN_REPO_URL), change MAVEN_MIRROR_OF to exclude the artifacts in this repository, for example, external:*,!repo-custom. Replace repo-custom with the ID that you configured in MAVEN_REPO_ID.
Next steps
If necessary, set additional parameters.
To complete the deployment, follow the procedure in Section 3.1.13, “Completing deployment of the template for an authoring environment”.
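For reference, the following hypothetical command line combines the mirror settings described in this section with the required secrets; the Nexus service URL is a placeholder value:
$ oc new-app -f rhpam76-authoring.yaml -p BUSINESS_CENTRAL_HTTPS_SECRET=businesscentral-app-secret -p KIE_SERVER_HTTPS_SECRET=kieserver-app-secret -p MAVEN_MIRROR_URL=http://nexus:8081/repository/maven-mirror/ -p MAVEN_MIRROR_OF='external:*,!repo-rhpamcentr'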
3.1.7. Specifying the Git hooks directory for an authoring environment
You can use Git hooks to facilitate interaction between the internal Git repository of Business Central and an external Git repository.
If you want to use Git hooks, you must configure a Git hooks directory.
Prerequisites
- You started the configuration of the template, as described in Section 3.1.1, “Starting configuration of the template for an authoring environment”.
Procedure
To configure a Git hooks directory, set the following parameter:
- Git hooks directory (GIT_HOOKS_DIR): The fully qualified path to a Git hooks directory, for example, /opt/kie/data/git/hooks. You must provide the content of this directory and mount it at the specified path. For instructions about providing and mounting the Git hooks directory using a configuration map or a persistent volume, see Section 3.3, “(Optional) Providing the Git hooks directory”.
Next steps
If necessary, set additional parameters.
To complete the deployment, follow the procedure in Section 3.1.13, “Completing deployment of the template for an authoring environment”.
3.1.8. Configuring resource usage for a high-availability deployment
If you are deploying the high-availability template (rhpam76-authoring-ha.yaml), you can optionally configure resource usage to optimize performance for your requirements.
If you are deploying the single authoring environment template (rhpam76-authoring.yaml), skip this procedure.
For more information about sizing resources, see the Red Hat OpenShift Container Platform 3.11 product documentation.
Prerequisites
- You started the configuration of the template, as described in Section 3.1.1, “Starting configuration of the template for an authoring environment”.
Procedure
Set the following parameters of the template as applicable:
- Business Central Container Memory Limit (BUSINESS_CENTRAL_MEMORY_LIMIT): The amount of memory requested in the OpenShift environment for the Business Central container. The default value is 8Gi.
- Business Central JVM Max Memory Ratio (BUSINESS_CENTRAL_JAVA_MAX_MEM_RATIO): The percentage of container memory that is used for the Java Virtual Machine for Business Central. The remaining memory is used for the operating system. The default value is 80, for a limit of 80%.
- Business Central Container CPU Limit (BUSINESS_CENTRAL_CPU_LIMIT): The maximum CPU usage for Business Central. The default value is 2000m.
- KIE Server Container Memory Limit (KIE_SERVER_MEMORY_LIMIT): The amount of memory requested in the OpenShift environment for the Process Server container. The default value is 1Gi.
- KIE Server Container CPU Limit (KIE_SERVER_CPU_LIMIT): The maximum CPU usage for Process Server. The default value is 1000m.
- DataGrid Container Memory Limit (DATAGRID_MEMORY_LIMIT): The amount of memory requested in the OpenShift environment for the Red Hat Data Grid container. The default value is 2Gi.
- DataGrid Container CPU Limit (DATAGRID_CPU_LIMIT): The maximum CPU usage for Red Hat Data Grid. The default value is 1000m.
3.1.9. Setting parameters for RH-SSO authentication for an authoring environment
If you want to use RH-SSO authentication, complete the following additional configuration when configuring the template to deploy an authoring environment.
Do not configure LDAP authentication and RH-SSO authentication in the same deployment.
Prerequisites
- A realm for Red Hat Process Automation Manager is created in the RH-SSO authentication system.
- User names and passwords for Red Hat Process Automation Manager are created in the RH-SSO authentication system. For a list of the available roles, see Chapter 4, Red Hat Process Automation Manager roles and users. The following users are required in order to set the parameters for the environment:
  - An administrative user with the kie-server,rest-all,admin roles. This user can administer and use the environment. Process Servers use this user to authenticate with Business Central.
  - A server user with the kie-server,rest-all,user roles. This user can make REST API calls to the Process Server. Business Central uses this user to authenticate with Process Servers.
- Clients are created in the RH-SSO authentication system for all components of the Red Hat Process Automation Manager environment that you are deploying. The client setup contains the URLs for the components. You can review and edit the URLs after deploying the environment. Alternatively, the Red Hat Process Automation Manager deployment can create the clients. However, this option provides less detailed control over the environment.
- You started the configuration of the template, as described in Section 3.1.1, “Starting configuration of the template for an authoring environment”.
Procedure
- Set the KIE_ADMIN_USER and KIE_ADMIN_PASSWORD parameters of the template to the user name and password of the administrative user that you created in the RH-SSO authentication system.
- Set the KIE_SERVER_USER and KIE_SERVER_PASSWORD parameters of the template to the user name and password of the server user that you created in the RH-SSO authentication system.
- Set the following parameters:
  - RH-SSO URL (SSO_URL): The URL for RH-SSO.
  - RH-SSO Realm name (SSO_REALM): The RH-SSO realm for Red Hat Process Automation Manager.
  - RH-SSO Disable SSL Certificate Validation (SSO_DISABLE_SSL_CERTIFICATE_VALIDATION): Set to true if your RH-SSO installation does not use a valid HTTPS certificate.
- Complete one of the following procedures:
  - If you created the clients for Red Hat Process Automation Manager within RH-SSO, set the following parameters in the template:
    - Business Central RH-SSO Client name (BUSINESS_CENTRAL_SSO_CLIENT): The RH-SSO client name for Business Central.
    - Business Central RH-SSO Client Secret (BUSINESS_CENTRAL_SSO_SECRET): The secret string that is set in RH-SSO for the client for Business Central.
    - KIE Server RH-SSO Client name (KIE_SERVER_SSO_CLIENT): The RH-SSO client name for Process Server.
    - KIE Server RH-SSO Client Secret (KIE_SERVER_SSO_SECRET): The secret string that is set in RH-SSO for the client for Process Server.
  - To create the clients for Red Hat Process Automation Manager within RH-SSO, set the following parameters in the template:
    - Business Central RH-SSO Client name (BUSINESS_CENTRAL_SSO_CLIENT): The name of the client to create in RH-SSO for Business Central.
    - Business Central RH-SSO Client Secret (BUSINESS_CENTRAL_SSO_SECRET): The secret string to set in RH-SSO for the client for Business Central.
    - KIE Server RH-SSO Client name (KIE_SERVER_SSO_CLIENT): The name of the client to create in RH-SSO for Process Server.
    - KIE Server RH-SSO Client Secret (KIE_SERVER_SSO_SECRET): The secret string to set in RH-SSO for the client for Process Server.
    - RH-SSO Realm Admin Username (SSO_USERNAME) and RH-SSO Realm Admin Password (SSO_PASSWORD): The user name and password for the realm administrator user for the RH-SSO realm for Red Hat Process Automation Manager. You must provide this user name and password in order to create the required clients.
Next steps
If necessary, set additional parameters.
To complete the deployment, follow the procedure in Section 3.1.13, “Completing deployment of the template for an authoring environment”.
After completing the deployment, review the URLs for components of Red Hat Process Automation Manager in the RH-SSO authentication system to ensure they are correct.
3.1.10. Setting parameters for LDAP authentication for an authoring environment
If you want to use LDAP authentication, complete the following additional configuration when configuring the template to deploy an authoring environment.
Do not configure LDAP authentication and RH-SSO authentication in the same deployment.
Prerequisites
- You created user names and passwords for Red Hat Process Automation Manager in the LDAP system. For a list of the available roles, see Chapter 4, Red Hat Process Automation Manager roles and users. As a minimum, in order to set the parameters for the environment, you created the following users:
  - An administrative user with the kie-server,rest-all,admin roles. This user can administer and use the environment.
  - A server user with the kie-server,rest-all,user roles. This user can make REST API calls to the Process Server.
- You started the configuration of the template, as described in Section 3.1.1, “Starting configuration of the template for an authoring environment”.
Procedure
- In the LDAP service, create all of the user names that appear in the deployment parameters. If you do not set any of the parameters, create users with the default user names. The created users must also be assigned to roles:
  - KIE_ADMIN_USER: default user name adminUser, roles: kie-server,rest-all,admin
  - KIE_SERVER_USER: default user name executionUser, roles: kie-server,rest-all,guest
For the user roles that you can configure in LDAP, see Roles and users.
- Set the AUTH_LDAP* parameters of the template. These parameters correspond to the settings of the LdapExtended login module of Red Hat JBoss EAP. For instructions about using these settings, see LdapExtended login module.
- If the LDAP server does not define all the roles required for your deployment, you can map LDAP groups to Red Hat Process Automation Manager roles. To enable LDAP role mapping, set the following parameters:
  - RoleMapping rolesProperties file path (AUTH_ROLE_MAPPER_ROLES_PROPERTIES): The fully qualified path name of a file that defines role mapping, for example, /opt/eap/standalone/configuration/rolemapping/rolemapping.properties. You must provide this file and mount it at this path in all applicable deployment configurations; for instructions, see Section 3.2, “(Optional) Providing the LDAP role mapping file”.
  - RoleMapping replaceRole property (AUTH_ROLE_MAPPER_REPLACE_ROLE): If set to true, mapped roles replace the roles defined on the LDAP server; if set to false, both mapped roles and roles defined on the LDAP server are set as user application roles. The default setting is false.
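The following command line is a hypothetical illustration of passing LDAP settings to the template; the AUTH_LDAP_* parameter names and directory values shown here are examples only, so verify them against the parameter descriptions in your template file before use:
$ oc new-app -f rhpam76-authoring.yaml -p BUSINESS_CENTRAL_HTTPS_SECRET=businesscentral-app-secret -p KIE_SERVER_HTTPS_SECRET=kieserver-app-secret -p AUTH_LDAP_URL=ldap://ldap.example.com:389 -p AUTH_LDAP_BIND_DN=cn=admin,dc=example,dc=com -p AUTH_LDAP_BIND_CREDENTIAL=secret -p AUTH_LDAP_BASE_CTX_DN=ou=users,dc=example,dc=com -p AUTH_LDAP_BASE_FILTER='(uid={0})' -p AUTH_LDAP_ROLES_CTX_DN=ou=groups,dc=example,dc=com -p AUTH_LDAP_ROLE_FILTER='(member={1})' -p AUTH_LDAP_ROLE_ATTRIBUTE_ID=cn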
Next steps
If necessary, set additional parameters.
To complete the deployment, follow the procedure in Section 3.1.13, “Completing deployment of the template for an authoring environment”.
3.1.11. Setting parameters for using an external database server for an authoring environment
If you modified the template to use an external database server for the Process Server, as described in Section 3.4, “Modifying the template for the single authoring environment” or Section 3.5, “Modifying the template for the High Availability authoring environment”, complete the following additional configuration when configuring the template to deploy an authoring environment.
Prerequisites
- You started the configuration of the template, as described in Section 3.1.1, “Starting configuration of the template for an authoring environment”.
Procedure
Set the following parameters:
- KIE Server External Database Driver (KIE_SERVER_EXTERNALDB_DRIVER): The driver for the server, depending on the server type:
  - mysql
  - postgresql
  - mariadb
  - mssql
  - db2
  - oracle
  - sybase
- KIE Server External Database User (KIE_SERVER_EXTERNALDB_USER) and KIE Server External Database Password (KIE_SERVER_EXTERNALDB_PWD): The user name and password for the external database server.
- KIE Server External Database URL (KIE_SERVER_EXTERNALDB_URL): The JDBC URL for the external database server.
- KIE Server External Database Dialect (KIE_SERVER_EXTERNALDB_DIALECT): The Hibernate dialect for the server, depending on the server type:
  - org.hibernate.dialect.MySQL5InnoDBDialect (used for MySQL and MariaDB)
  - org.hibernate.dialect.PostgreSQL82Dialect
  - org.hibernate.dialect.SQLServer2012Dialect (used for MS SQL)
  - org.hibernate.dialect.DB2Dialect
  - org.hibernate.dialect.Oracle10gDialect
  - org.hibernate.dialect.SybaseASE157Dialect
- KIE Server External Database Host (KIE_SERVER_EXTERNALDB_SERVICE_HOST): The host name of the external database server.
- KIE Server External Database Port (KIE_SERVER_EXTERNALDB_SERVICE_PORT): The port number of the external database server.
- KIE Server External Database name (KIE_SERVER_EXTERNALDB_DB): The database name to use on the external database server.
- JDBC Connection Checker class (KIE_SERVER_EXTERNALDB_CONNECTION_CHECKER): The name of the JDBC connection checker class for the database server. Without this information, a database server connection cannot be restored after it is lost, for example, if the database server is rebooted.
- JDBC Exception Sorter class (KIE_SERVER_EXTERNALDB_EXCEPTION_SORTER): The name of the JDBC exception sorter class for the database server. Without this information, a database server connection cannot be restored after it is lost, for example, if the database server is rebooted.
If you created a custom image for using an external database server other than MySQL or PostgreSQL, as described in Section 2.7, “Building a custom Process Server extension image for an external database”, set the following parameters:
- Drivers Extension Image (EXTENSIONS_IMAGE): The ImageStreamTag definition of the extension image, for example, jboss-kie-db2-extension-openshift-image:11.1.4.4
- Drivers ImageStream Namespace (EXTENSIONS_IMAGE_NAMESPACE): The namespace to which you uploaded the extension image, for example, openshift or your project namespace.
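As an illustration, the following hypothetical command line configures the Process Server for an external PostgreSQL server using the parameters described above, assuming you deploy the template that you modified for an external database; the host, port, database name, and credentials are placeholder values:
$ oc new-app -f rhpam76-authoring.yaml -p BUSINESS_CENTRAL_HTTPS_SECRET=businesscentral-app-secret -p KIE_SERVER_HTTPS_SECRET=kieserver-app-secret -p KIE_SERVER_EXTERNALDB_DRIVER=postgresql -p KIE_SERVER_EXTERNALDB_DIALECT=org.hibernate.dialect.PostgreSQL82Dialect -p KIE_SERVER_EXTERNALDB_SERVICE_HOST=postgresql.example.com -p KIE_SERVER_EXTERNALDB_SERVICE_PORT=5432 -p KIE_SERVER_EXTERNALDB_DB=rhpam7 -p KIE_SERVER_EXTERNALDB_USER=rhpam -p KIE_SERVER_EXTERNALDB_PWD=changeme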
Next steps
If necessary, set additional parameters.
To complete the deployment, follow the procedure in Section 3.1.13, “Completing deployment of the template for an authoring environment”.
3.1.12. Enabling Prometheus metric collection for an authoring environment
If you want to configure your Process Server deployment to use Prometheus to collect and store metrics, enable support for this feature in Process Server at deployment time.
Prerequisites
- You started the configuration of the template, as described in Section 3.1.1, “Starting configuration of the template for an authoring environment”.
Procedure
To enable support for Prometheus metric collection, set the Prometheus Server Extension Disabled (PROMETHEUS_SERVER_EXT_DISABLED) parameter to false.
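After the deployment completes, you can check that metrics are exposed with a request similar to the following hypothetical example; the route host name and credentials are placeholder values, and the exact endpoint path should be confirmed in Managing and monitoring Process Server:
$ curl -u adminUser:password https://myapp-kieserver-myproject.example.com/services/rest/metrics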
Next steps
If necessary, set additional parameters.
To complete the deployment, follow the procedure in Section 3.1.13, “Completing deployment of the template for an authoring environment”.
For instructions about configuring Prometheus metrics collection, see Managing and monitoring Process Server.
3.1.13. Completing deployment of the template for an authoring environment
After setting all the required parameters in the OpenShift Web UI or in the command line, complete deployment of the template.
Procedure
Depending on the method that you are using, complete the following steps:
In the OpenShift Web UI, click Create.
-
If the
This will create resources that may have security or project behavior implications
message appears, click Create Anyway.
- Complete the command line and press Enter.
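For example, a minimal command-line invocation might look as follows. The application name and secret names are illustrative, the keystore secret parameter names are assumed to be those defined in the authoring template, and you must add any other parameters that you set in the previous sections:
oc new-app -f rhpam76-authoring.yaml -p APPLICATION_NAME=myapp -p BUSINESS_CENTRAL_HTTPS_SECRET=businesscentral-app-secret -p KIE_SERVER_HTTPS_SECRET=kieserver-app-secret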
3.2. (Optional) Providing the LDAP role mapping file
If you configure the AUTH_ROLE_MAPPER_ROLES_PROPERTIES
parameter, you must provide a file that defines the role mapping. Mount this file on all affected deployment configurations.
Procedure
Create the role mapping properties file, for example,
my-role-map
. The file must contain entries in the following format: ldap_role = product_role1, product_role2...
For example:
admins = kie-server,rest-all,admin
Create an OpenShift configuration map from the file by entering the following command:
oc create configmap ldap-role-mapping --from-file=<new_name>=<existing_name>
Replace
<new_name>
with the name that the file is to have on the pods (it must be the same as the name specified in the AUTH_ROLE_MAPPER_ROLES_PROPERTIES
parameter) and <existing_name>
with the name of the file that you created. Example: oc create configmap ldap-role-mapping --from-file=rolemapping.properties=my-role-map
Mount the configuration map on every deployment configuration that is configured for role mapping.
The following deployment configurations can be affected in this environment:
-
myapp-rhpamcentr
: Business Central -
myapp-kieserver
: Process Server
Replace
myapp
with the application name. Sometimes, several Process Server deployments can be present under different application names.For every deployment configuration, run the command:
oc set volume dc/<deployment_config_name> --add --type configmap --configmap-name ldap-role-mapping --mount-path=<mapping_dir> --name=ldap-role-mapping
Replace
<mapping_dir>
with the directory name (without file name) set in the AUTH_ROLE_MAPPER_ROLES_PROPERTIES
parameter, for example, /opt/eap/standalone/configuration/rolemapping.
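For example, assuming the configuration map was created as shown above and the AUTH_ROLE_MAPPER_ROLES_PROPERTIES parameter points to /opt/eap/standalone/configuration/rolemapping/rolemapping.properties, you might run the command once for each affected deployment configuration (the application name myapp is illustrative):
oc set volume dc/myapp-rhpamcentr --add --type configmap --configmap-name ldap-role-mapping --mount-path=/opt/eap/standalone/configuration/rolemapping --name=ldap-role-mapping
oc set volume dc/myapp-kieserver --add --type configmap --configmap-name ldap-role-mapping --mount-path=/opt/eap/standalone/configuration/rolemapping --name=ldap-role-mapping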
3.3. (Optional) Providing the Git hooks directory
If you configure the GIT_HOOKS_DIR
parameter, you must provide a directory of Git hooks and must mount this directory on the Business Central deployment.
The typical use of Git hooks is interaction with an upstream repository. To enable Git hooks to push commits into an upstream repository, you must also provide a secret key that corresponds to a public key configured on the upstream repository.
Procedure
If interaction with an upstream repository using SSH authentication is required, complete the following steps to prepare and mount a secret with the necessary files:
-
Prepare the
id_rsa
file with a private key that matches a public key stored in the repository. -
Prepare the
known_hosts
file with the correct name, address, and public key for the repository. Create a secret with the two files using the
oc
command, for example:oc create secret git-hooks-secret --from-file=id_rsa=id_rsa --from-file=known_hosts=known_hosts
Mount the secret in the SSH key path of the Business Central deployment, for example:
oc set volume dc/<myapp>-rhpamcentr --add --type secret --secret-name git-hooks-secret --mount-path=/home/jboss/.ssh --name=ssh-key
Replace
<myapp>
with the application name that you set when configuring the template.
Create the Git hooks directory. For instructions, see the Git hooks reference documentation.
For example, a simple Git hooks directory can provide a post-commit hook that pushes the changes upstream. If the project was imported into Business Central from a repository, this repository remains configured as the upstream repository. Create a file named
post-commit
with permission values 755
and the following content: git push
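For example, you might prepare such a directory locally before creating the configuration map in the next step (the directory name git-hooks is illustrative):
mkdir git-hooks
echo 'git push' > git-hooks/post-commit
chmod 755 git-hooks/post-commit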
Supply the Git hooks directory to the Business Central deployment. You can use a configuration map or a persistent volume.
If the Git hooks consist of one or several fixed script files, use a configuration map. Complete the following steps:
- Change into the Git hooks directory that you have created.
Create an OpenShift configuration map from the files in the directory. Run the following command:
oc create configmap git-hooks --from-file=<file_1>=<file_1> --from-file=<file_2>=<file_2> ...
Replace
file_1
,file_2
, and so on with Git hook script file names. Example:oc create configmap git-hooks --from-file=post-commit=post-commit
Mount the configuration map on the Business Central deployment in the path that you have configured:
oc set volume dc/<myapp>-rhpamcentr --add --type configmap --configmap-name git-hooks --mount-path=<git_hooks_dir> --name=git-hooks
Replace
<myapp>
with the application name that was set when configuring the template and<git_hooks_dir>
is the value ofGIT_HOOKS_DIR
that was set when configuring the template.
-
If the Git hooks consist of long files or depend on binaries, such as executable or KJAR files, use a persistent volume. You must create a persistent volume, create a persistent volume claim and associate the volume with the claim, transfer files to the volume, and mount the volume in the
myapp-rhpamcentr
deployment configuration (replace myapp with the application name). For instructions about creating and mounting persistent volumes, see Using persistent volumes. For instructions about copying files onto a persistent volume, see Transferring files in and out of containers.
Wait a few minutes, then review the list and status of pods in your project. Because Business Central does not start until you provide the Git hooks directory, the Process Server might not start at all. To see if it has started, check the output of the following command:
oc get pods
If a working Process Server pod is not present, start it:
oc rollout latest dc/<myapp>-kieserver
Replace
<myapp>
with the application name that was set when configuring the template.
3.4. Modifying the template for the single authoring environment
By default, the single authoring template uses the H2 database with permanent storage. If you prefer to create a MySQL or PostgreSQL pod or to use an external database server (outside the OpenShift project), you need to modify the template before deploying the environment.
An OpenShift template defines a set of objects that can be created by OpenShift. To change an environment configuration, you need to modify, add, or delete these objects. To simplify this task, comments are provided in the Red Hat Process Automation Manager templates.
Some comments mark blocks within the template, starting with BEGIN
and ending with END
. For example, the following block is named Sample block
:
## Sample block BEGIN sample line 1 sample line 2 sample line 3 ## Sample block END
For some changes, you might need to replace a block in one template file with a block from another template file provided with Red Hat Process Automation Manager. In this case, delete the block, then paste the new block in its exact location.
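For example, before editing you can use grep to find the comment lines that mark a block and its boundaries (block and file names as used in the following procedure):
grep -n "H2 database parameters" rhpam76-authoring.yaml
grep -n "MySQL database parameters" rhpam76-kieserver-mysql.yaml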
Procedure
Edit the rhpam76-authoring.yaml
template file to make any of the following changes as necessary.
If you want to use MySQL instead of the H2 database, you need to replace several blocks of the file, marked with comments from
BEGIN
toEND
, with blocks from therhpam76-kieserver-mysql.yaml
file that are also marked with comments. You also need to remove several other blocks and to add blocks in designated locations:-
Replace the block named
H2 database parameters
with the block namedMySQL database parameters
. (Take this block and all subsequent replacement blocks from therhpam76-kieserver-mysql.yaml
file.) -
Replace the block named
H2 driver settings
with the block namedMySQL driver settings
. -
Replace the block named
H2 persistent volume claim
with the block namedMySQL persistent volume claim
. -
Remove the blocks named
H2 volume mount
andH2 volume settings
. -
Under the comment
Place to add database service
, add the block namedMySQL service
. -
Under the comment
Place to add database deployment config
, add the block namedMySQL deployment config
.
If you want to use PostgreSQL instead of the H2 database, you need to replace several blocks of the file, marked with comments from
BEGIN
toEND
, with blocks from therhpam76-kieserver-postgresql.yaml
file that are also marked with comments. You also need to remove several other blocks and to add blocks in designated locations:-
Replace the block named
H2 database parameters
with the block namedPostgreSQL database parameters
. (Take this block and all subsequent replacement blocks from therhpam76-kieserver-postgresql.yaml
file.) -
Replace the block named
H2 driver settings
with the block namedPostgreSQL driver settings
. -
Replace the block named
H2 persistent volume claim
with the block namedPostgreSQL persistent volume claim
. -
Remove the blocks named
H2 volume mount
andH2 volume settings
. -
Under the comment
Place to add database service
, add the block namedPostgreSQL service
. -
Under the comment
Place to add database deployment config
, add the block namedPostgreSQL deployment config
.
If you want to use an external database server, replace several blocks of the file, marked with comments from
BEGIN
toEND
, with blocks from therhpam76-kieserver-externaldb.yaml
file, and also remove some blocks:-
Replace the block named
H2 database parameters
with the block namedExternal database parameters
. (Take this block and all subsequent replacement blocks from therhpam76-kieserver-externaldb.yaml
file.) -
Replace the block named
H2 driver settings
with the block namedExternal database driver settings
. Remove the following blocks of the file, marked with comments from
BEGIN
toEND
:-
H2 persistent volume claim
-
H2 volume mount
-
H2 volume settings
-
The standard Process Server image includes drivers for MySQL and PostgreSQL external database servers. If you want to use another database server, you must build a custom Process Server image. For instructions, see Section 2.7, “Building a custom Process Server extension image for an external database”.
3.5. Modifying the template for the High Availability authoring environment
By default, the high-availability authoring template creates a MySQL pod to provide the database server for the Process Server. If you prefer to use PostgreSQL or to use an external database server (outside the OpenShift project), you need to modify the template before deploying the environment.
You can also modify the High Availability authoring template to change the number of replicas initially created for Business Central.
An OpenShift template defines a set of objects that can be created by OpenShift. To change an environment configuration, you need to modify, add, or delete these objects. To simplify this task, comments are provided in the Red Hat Process Automation Manager templates.
Some comments mark blocks within the template, starting with BEGIN
and ending with END
. For example, the following block is named Sample block
:
## Sample block BEGIN sample line 1 sample line 2 sample line 3 ## Sample block END
For some changes, you might need to replace a block in one template file with a block from another template file provided with Red Hat Process Automation Manager. In this case, delete the block, then paste the new block in its exact location.
Procedure
Edit the rhpam76-authoring-ha.yaml
template file to make any of the following changes as necessary.
If you want to use PostgreSQL instead of MySQL, replace several blocks of the file, marked with comments from
BEGIN
toEND
, with blocks from therhpam76-kieserver-postgresql.yaml
file:-
Replace the block named
MySQL database parameters
with the block namedPostgreSQL database parameters
. (Take this block and all subsequent replacement blocks from therhpam76-kieserver-postgresql.yaml
file.) -
Replace the block named
MySQL service
with the block namedPostgreSQL service
. -
Replace the block named
MySQL driver settings
with the block namedPostgreSQL driver settings
. -
Replace the block named
MySQL deployment config
with the block namedPostgreSQL deployment config
. -
Replace the block named
MySQL persistent volume claim
with the block namedPostgreSQL persistent volume claim
.
If you want to use an external database server, replace several blocks of the file, marked with comments from
BEGIN
toEND
, with blocks from therhpam76-kieserver-externaldb.yaml
file, and also remove some blocks:-
Replace the block named
MySQL database parameters
with the block namedExternal database parameters
. (Take this block and all subsequent replacement blocks from therhpam76-kieserver-externaldb.yaml
file.) -
Replace the block named
MySQL driver settings
with the block namedExternal database driver settings
. Remove the following blocks of the file, marked with comments from
BEGIN
toEND
:-
MySQL service
-
MySQL deployment config
-
MySQL persistent volume claim
-
The standard Process Server image includes drivers for MySQL and PostgreSQL external database servers. If you want to use another database server, you must build a custom Process Server image. For instructions, see Section 2.7, “Building a custom Process Server extension image for an external database”.
-
If you want to change the number of replicas initially created for Business Central, on the line below the comment
## Replicas for Business Central
, change the number of replicas to the desired value.
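For example, you can locate the setting before editing it (the value shown in the output is whatever the template currently defines):
grep -n -A1 "## Replicas for Business Central" rhpam76-authoring-ha.yaml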
Chapter 4. Red Hat Process Automation Manager roles and users
To access Business Central or Process Server, you must create users and assign them appropriate roles before the servers are started.
Business Central and Process Server use the Java Authentication and Authorization Service (JAAS) login module to authenticate users. If both Business Central and Process Server are running on a single instance, they share the same JAAS subject and security domain. Therefore, a user who is authenticated for Business Central can also access Process Server.
However, if Business Central and Process Server are running on different instances, the JAAS login module is triggered for each of them individually. Therefore, a user who is authenticated for Business Central needs to be authenticated separately to access the Process Server (for example, to view or manage process definitions in Business Central). If the user is not authenticated on the Process Server, a 401 error is logged in the log file and the Invalid credentials to load data from remote server. Contact your system administrator.
message is displayed in Business Central.
This section describes available Red Hat Process Automation Manager user roles.
The admin
, analyst
, developer
, manager
, process-admin
, user
, and rest-all
roles are reserved for Business Central. The kie-server
role is reserved for Process Server. For this reason, the available roles can differ depending on whether Business Central, Process Server, or both are installed.
-
admin
: Users with theadmin
role are the Business Central administrators. They can manage users and create, clone, and manage the repositories. They have full access to make required changes in the application. Users with theadmin
role have access to all areas within Red Hat Process Automation Manager. -
analyst
: Users with theanalyst
role have access to all high-level features. They can model and execute their projects. However, these users cannot add contributors to spaces or delete spaces in the Design → Projects view. Access to the Deploy → Execution Servers view, which is intended for administrators, is not available to users with theanalyst
role. However, the Deploy button is available to these users when they access the Library perspective. -
developer
: Users with thedeveloper
role have access to almost all features and can manage rules, models, process flows, forms, and dashboards. They can manage the asset repository, they can create, build, and deploy projects, and they can use Red Hat CodeReady Studio to view processes. Only certain administrative functions such as creating and cloning a new repository are hidden from users with thedeveloper
role. -
manager
: Users with themanager
role can view reports. These users are usually interested in statistics about the business processes and their performance, business indicators, and other business-related reporting. A user with this role has access only to process and task reports. -
process-admin
: Users with theprocess-admin
role are business process administrators. They have full access to business processes, business tasks, and execution errors. These users can also view business reports and have access to the Task Inbox list. -
user
: Users with theuser
role can work on the Task Inbox list, which contains business tasks that are part of currently running processes. Users with this role can view process and task reports and manage processes. -
rest-all
: Users with therest-all
role can access Business Central REST capabilities. -
kie-server
: Users with thekie-server
role can access Process Server (KIE Server) REST capabilities. This role is mandatory for users to have access to Manage and Track views in Business Central.
Chapter 5. OpenShift template reference information
Red Hat Process Automation Manager provides the following OpenShift templates. To access the templates, download and extract the rhpam-7.6.0-openshift-templates.zip
product deliverable file from the Software Downloads page of the Red Hat customer portal.
-
rhpam76-authoring.yaml
provides a Business Central and a Process Server connected to the Business Central. The Process Server uses an H2 database with persistent storage. You can use this environment to author processes, services, and other business assets. For details about this template, see Section 5.1, “rhpam76-authoring.yaml template”. -
rhpam76-authoring-ha.yaml
provides a high-availability Business Central, a Process Server connected to the Business Central, and a MySQL instance that the Process Server uses. You can use this environment to author processes, services, and other business assets. For details about this template, see Section 5.2, “rhpam76-authoring-ha.yaml template”.
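For example, you might extract the downloaded deliverable into a working directory before editing or instantiating a template (the destination directory name is illustrative):
unzip rhpam-7.6.0-openshift-templates.zip -d rhpam-7.6.0-openshift-templates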
5.1. rhpam76-authoring.yaml template
Application template for a non-HA persistent authoring environment, for Red Hat Process Automation Manager 7.6 - Deprecated
5.1.1. Parameters
Templates allow you to define parameters which take on a value. That value is then substituted wherever the parameter is referenced. References can be defined in any text field in the objects list field. Refer to the Openshift documentation for more information.
Variable name | Image Environment Variable | Description | Example value | Required |
---|---|---|---|---|
| — | The name for the application. | myapp | True |
|
| KIE administrator user name. | adminUser | False |
|
| KIE administrator password. | — | False |
|
| KIE server controller user name. (Sets the org.kie.server.controller.user system property) | controllerUser | False |
|
| KIE server controller password. (Sets the org.kie.server.controller.pwd system property) | — | False |
|
| KIE server controller token for bearer authentication. (Sets the org.kie.server.controller.token system property) | — | False |
|
| KIE server user name. (Sets the org.kie.server.user system property) | executionUser | False |
|
| KIE server password. (Sets the org.kie.server.pwd system property) | — | False |
|
| Allows the KIE server to bypass the authenticated user for task-related operations, for example, queries. (Sets the org.kie.server.bypass.auth.user system property) | false | False |
|
| KIE server persistence datasource. (Sets the org.kie.server.persistence.ds system property) | java:/jboss/datasources/rhpam | False |
KIE_SERVER_H2_USER |
| KIE server H2 database user name. | sa | False |
KIE_SERVER_H2_PWD |
| KIE server H2 database password. | — | False |
|
| The KIE Server mode. Valid values are 'DEVELOPMENT' or 'PRODUCTION'. In production mode, you cannot deploy SNAPSHOT versions of artifacts on the KIE server and cannot change the version of an artifact in an existing container. (Sets the org.kie.server.mode system property) |
| False |
|
| KIE server mbeans enabled/disabled. (Sets the kie.mbeans and kie.scanner.mbeans system properties) | enabled | False |
|
| KIE server class filtering. (Sets the org.drools.server.filter.classes system property) | true | False |
|
| If set to false, the prometheus server extension will be enabled. (Sets the org.kie.prometheus.server.ext.disabled system property) | false | False |
|
| Custom hostname for the http service route for Business Central. Leave blank for default hostname, e.g.: insecure-<application-name>-rhpamcentr-<project>.<default-domain-suffix> | — | False |
|
| Custom hostname for the https service route for Business Central. Leave blank for default hostname, e.g.: <application-name>-rhpamcentr-<project>.<default-domain-suffix> | — | False |
|
| Custom hostname for the http service route for KIE Server. Leave blank for default hostname, e.g.: insecure-<application-name>-kieserver-<project>.<default-domain-suffix> | — | False |
|
| Custom hostname for the https service route for KIE Server. Leave blank for default hostname, e.g.: <application-name>-kieserver-<project>.<default-domain-suffix> | — | False |
| — | The name of the secret containing the keystore file for Business Central. | businesscentral-app-secret | True |
|
| The name of the keystore file within the secret. | keystore.jks | False |
|
| The name associated with the server certificate. | jboss | False |
|
| The password for the keystore and certificate. | mykeystorepass | False |
| — | The name of the secret containing the keystore file for KIE server. | kieserver-app-secret | True |
|
| The name of the keystore file within the secret. | keystore.jks | False |
|
| The name associated with the server certificate. | jboss | False |
|
| The password for the keystore and certificate. | mykeystorepass | False |
| — | Size of persistent storage for the database volume. | 1Gi | True |
|
| If set to true, turns on KIE server global discovery feature (Sets the org.kie.server.controller.openshift.global.discovery.enabled system property) | false | False |
|
| If OpenShift integration of Business Central is turned on, setting this parameter to true enables connection to KIE Server via an OpenShift internal Service endpoint. (Sets the org.kie.server.controller.openshift.prefer.kieserver.service system property) | true | False |
|
| KIE ServerTemplate Cache TTL in milliseconds. (Sets the org.kie.server.controller.template.cache.ttl system property) | 60000 | False |
| — | Namespace in which the ImageStreams for Red Hat Process Automation Manager images are installed. These ImageStreams are normally installed in the openshift namespace. You should only need to modify this if you installed the ImageStreams in a different namespace/project. | openshift | True |
| — | The name of the image stream to use for KIE server. Default is "rhpam-kieserver-rhel8". | rhpam-kieserver-rhel8 | True |
| — | A named pointer to an image in an image stream. Default is "7.6.0". | 7.6.0 | True |
|
| Maven mirror that Business Central and KIE server must use. If you configure a mirror, this mirror must contain all artifacts that are required for building and deploying your services. | — | False |
|
| Maven mirror configuration for KIE server. | external:*,!repo-rhpamcentr | False |
|
| The id to use for the maven repository. If set, it can be excluded from the optionally configured mirror by adding it to MAVEN_MIRROR_OF. For example: external:*,!repo-rhpamcentr,!repo-custom. If MAVEN_MIRROR_URL is set but MAVEN_MIRROR_ID is not set, an id will be generated randomly, but won’t be usable in MAVEN_MIRROR_OF. | repo-custom | False |
|
| Fully qualified URL to a Maven repository or service. | http://nexus.nexus-project.svc.cluster.local:8081/nexus/content/groups/public/ | False |
|
| User name for accessing the Maven repository, if required. | — | False |
|
| Password to access the Maven repository, if required. | — | False |
|
| User name for accessing the Maven service hosted by Business Central inside EAP. | mavenUser | True |
|
| Password to access the Maven service hosted by Business Central inside EAP. | — | True |
|
| The directory to use for git hooks, if required. |
| False |
| — | Size of the persistent storage for Business Central runtime data. | 1Gi | True |
| — | Business Central Container memory limit | 2Gi | False |
| — | KIE server Container memory limit | 1Gi | False |
|
| RH-SSO URL. | False | |
|
| RH-SSO Realm name. | — | False |
|
| Business Central RH-SSO Client name. | — | False |
|
| Business Central RH-SSO Client Secret. | 252793ed-7118-4ca8-8dab-5622fa97d892 | False |
|
| KIE Server RH-SSO Client name. | — | False |
|
| KIE Server RH-SSO Client Secret. | 252793ed-7118-4ca8-8dab-5622fa97d892 | False |
|
| RH-SSO Realm admin user name for creating the Client if it doesn’t exist. | — | False |
|
| RH-SSO Realm Admin Password used to create the Client. | — | False |
|
| RH-SSO Disable SSL Certificate Validation. | false | False |
|
| RH-SSO Principal Attribute to use as user name. | preferred_username | False |
|
| LDAP Endpoint to connect for authentication | ldap://myldap.example.com | False |
|
| Bind DN used for authentication. | uid=admin,ou=users,ou=example,ou=com | False |
|
| LDAP Credentials used for authentication. | Password | False |
|
| The JMX ObjectName of the JaasSecurityDomain used to decrypt the password. | — | False |
|
| LDAP Base DN of the top-level context to begin the user search. | ou=users,ou=example,ou=com | False |
|
| LDAP search filter used to locate the context of the user to authenticate. The input username or userDN obtained from the login module callback is substituted into the filter anywhere a {0} expression is used. A common example for the search filter is (uid={0}). | (uid={0}) | False |
|
| The search scope to use. |
| False |
|
| The timeout in milliseconds for user or role searches. | 10000 | False |
|
| The name of the attribute in the user entry that contains the DN of the user. This may be necessary if the DN of the user itself contains special characters, backslash for example, that prevent correct user mapping. If the attribute does not exist, the entry’s DN is used. | distinguishedName | False |
|
| A flag indicating if the DN is to be parsed for the user name. If set to true, the DN is parsed for the user name. If set to false the DN is not parsed for the user name. This option is used together with usernameBeginString and usernameEndString. | true | False |
|
| Defines the String which is to be removed from the start of the DN to reveal the user name. This option is used together with usernameEndString and only taken into account if parseUsername is set to true. | — | False |
|
| Defines the String which is to be removed from the end of the DN to reveal the user name. This option is used together with usernameEndString and only taken into account if parseUsername is set to true. | — | False |
|
| Name of the attribute containing the user roles. | memberOf | False |
|
| The fixed DN of the context to search for user roles. This is not the DN where the actual roles are, but the DN where the objects containing the user roles are. For example, in a Microsoft Active Directory server, this is the DN where the user account is. | ou=groups,ou=example,ou=com | False |
|
| A search filter used to locate the roles associated with the authenticated user. The input username or userDN obtained from the login module callback is substituted into the filter anywhere a {0} expression is used. The authenticated userDN is substituted into the filter anywhere a {1} is used. An example search filter that matches on the input username is (member={0}). An alternative that matches on the authenticated userDN is (member={1}). | (memberOf={1}) | False |
|
| The number of levels of recursion the role search will go below a matching context. Disable recursion by setting this to 0. | 1 | False |
|
| A role included for all authenticated users | user | False |
|
| Name of the attribute within the roleCtxDN context which contains the role name. If the roleAttributeIsDN property is set to true, this property is used to find the role object’s name attribute. | name | False |
|
| A flag indicating if the DN returned by a query contains the roleNameAttributeID. If set to true, the DN is checked for the roleNameAttributeID. If set to false, the DN is not checked for the roleNameAttributeID. This flag can improve the performance of LDAP queries. | false | False |
|
| Whether or not the roleAttributeID contains the fully-qualified DN of a role object. If false, the role name is taken from the value of the roleNameAttributeId attribute of the context name. Certain directory schemas, such as Microsoft Active Directory, require this attribute to be set to true. | false | False |
|
| If you are not using referrals, you can ignore this option. When using referrals, this option denotes the attribute name which contains users defined for a certain role, for example member, if the role object is inside the referral. Users are checked against the content of this attribute name. If this option is not set, the check will always fail, so role objects cannot be stored in a referral tree. | — | False |
|
| When present, the RoleMapping Login Module will be configured to use the provided file. This parameter defines the fully-qualified file path and name of a properties file or resource which maps roles to replacement roles. The format is original_role=role1,role2,role3 | — | False |
|
| Whether to add to the current roles, or replace the current roles with the mapped ones. Replaces if set to true. | — | False |
5.1.2. Objects
The CLI supports various object types. A list of these object types as well as their abbreviations can be found in the Openshift documentation.
5.1.2.1. Services
A service is an abstraction which defines a logical set of pods and a policy by which to access them. Refer to the container-engine documentation for more information.
Service | Port | Name | Description |
---|---|---|---|
| 8080 | http | All the Business Central web server’s ports. |
8443 | https | ||
| 8080 | http | All the KIE server web server’s ports. |
8443 | https |
5.1.2.2. Routes
A route is a way to expose a service by giving it an externally-reachable hostname such as www.example.com
. A defined route and the endpoints identified by its service can be consumed by a router to provide named connectivity from external clients to your applications. Each route consists of a route name, service selector, and (optionally) security configuration. Refer to the Openshift documentation for more information.
Service | Security | Hostname |
---|---|---|
insecure-${APPLICATION_NAME}-rhpamcentr-http | none |
|
| TLS passthrough |
|
insecure-${APPLICATION_NAME}-kieserver-http | none |
|
| TLS passthrough |
|
5.1.2.3. Deployment Configurations
A deployment in OpenShift is a replication controller based on a user defined template called a deployment configuration. Deployments are created manually or in response to triggered events. Refer to the Openshift documentation for more information.
5.1.2.3.1. Triggers
A trigger drives the creation of new deployments in response to events, both inside and outside OpenShift. Refer to the Openshift documentation for more information.
Deployment | Triggers |
---|---|
| ImageChange |
| ImageChange |
5.1.2.3.2. Replicas
A replication controller ensures that a specified number of pod "replicas" are running at any one time. If there are too many, the replication controller kills some pods. If there are too few, it starts more. Refer to the container-engine documentation for more information.
Deployment | Replicas |
---|---|
| 1 |
| 1 |
5.1.2.3.3. Pod Template
5.1.2.3.3.1. Service Accounts
Service accounts are API objects that exist within each project. They can be created or deleted like any other API object. Refer to the Openshift documentation for more information.
Deployment | Service Account |
---|---|
|
|
|
|
5.1.2.3.3.2. Image
Deployment | Image |
---|---|
| rhpam-businesscentral-rhel8 |
|
|
5.1.2.3.3.3. Readiness Probe
${APPLICATION_NAME}-rhpamcentr
Http Get on http://localhost:8080/rest/ready
${APPLICATION_NAME}-kieserver
Http Get on http://localhost:8080/services/rest/server/readycheck
5.1.2.3.3.4. Liveness Probe
${APPLICATION_NAME}-rhpamcentr
Http Get on http://localhost:8080/rest/healthy
${APPLICATION_NAME}-kieserver
Http Get on http://localhost:8080/services/rest/server/healthcheck
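For example, you can run the same checks manually against a running pod, assuming curl is available in the image (replace <pod> with the Business Central or Process Server pod name and use the corresponding path):
oc exec <pod> -- curl -s http://localhost:8080/rest/ready
oc exec <pod> -- curl -s http://localhost:8080/services/rest/server/readycheck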
5.1.2.3.3.5. Exposed Ports
Deployments | Name | Port | Protocol |
---|---|---|---|
| jolokia | 8778 |
|
http | 8080 |
| |
https | 8443 |
| |
| jolokia | 8778 |
|
http | 8080 |
| |
https | 8443 |
|
5.1.2.3.3.6. Image Environment Variables
Deployment | Variable name | Description | Example value |
---|---|---|---|
|
| — |
|
| — |
| |
| KIE administrator user name. |
| |
| KIE administrator password. |
| |
| KIE server mbeans enabled/disabled. (Sets the kie.mbeans and kie.scanner.mbeans system properties) |
| |
| If set to true, turns on KIE server global discovery feature (Sets the org.kie.server.controller.openshift.global.discovery.enabled system property) |
| |
| If OpenShift integration of Business Central is turned on, setting this parameter to true enables connection to KIE Server via an OpenShift internal Service endpoint. (Sets the org.kie.server.controller.openshift.prefer.kieserver.service system property) |
| |
| KIE ServerTemplate Cache TTL in milliseconds. (Sets the org.kie.server.controller.template.cache.ttl system property) |
| |
| — | true | |
| KIE server controller user name. (Sets the org.kie.server.controller.user system property) |
| |
| KIE server controller password. (Sets the org.kie.server.controller.pwd system property) |
| |
| KIE server controller token for bearer authentication. (Sets the org.kie.server.controller.token system property) |
| |
| KIE server user name. (Sets the org.kie.server.user system property) |
| |
| KIE server password. (Sets the org.kie.server.pwd system property) |
| |
| Maven mirror that Business Central and KIE server must use. If you configure a mirror, this mirror must contain all artifacts that are required for building and deploying your services. |
| |
| The id to use for the maven repository. If set, it can be excluded from the optionally configured mirror by adding it to MAVEN_MIRROR_OF. For example: external:*,!repo-rhpamcentr,!repo-custom. If MAVEN_MIRROR_URL is set but MAVEN_MIRROR_ID is not set, an id will be generated randomly, but won’t be usable in MAVEN_MIRROR_OF. |
| |
| Fully qualified URL to a Maven repository or service. |
| |
| User name for accessing the Maven repository, if required. |
| |
| Password to access the Maven repository, if required. |
| |
| User name for accessing the Maven service hosted by Business Central inside EAP. |
| |
| Password to access the Maven service hosted by Business Central inside EAP. |
| |
| The directory to use for git hooks, if required. |
| |
| — |
| |
| The name of the keystore file within the secret. |
| |
| The name associated with the server certificate. |
| |
| The password for the keystore and certificate. |
| |
| — |
| |
| RH-SSO URL. |
| |
| — | ROOT.war | |
| RH-SSO Realm name. |
| |
| Business Central RH-SSO Client Secret. |
| |
| Business Central RH-SSO Client name. |
| |
| RH-SSO Realm admin user name for creating the Client if it doesn’t exist. |
| |
| RH-SSO Realm Admin Password used to create the Client. |
| |
| RH-SSO Disable SSL Certificate Validation. |
| |
| RH-SSO Principal Attribute to use as user name. |
| |
| Custom hostname for the http service route for Business Central. Leave blank for default hostname, e.g.: insecure-<application-name>-rhpamcentr-<project>.<default-domain-suffix> |
| |
| Custom hostname for the https service route for Business Central. Leave blank for default hostname, e.g.: <application-name>-rhpamcentr-<project>.<default-domain-suffix> |
| |
| LDAP Endpoint to connect for authentication |
| |
| Bind DN used for authentication. |
| |
| LDAP Credentials used for authentication. |
| |
| The JMX ObjectName of the JaasSecurityDomain used to decrypt the password. |
| |
| LDAP Base DN of the top-level context to begin the user search. |
| |
| LDAP search filter used to locate the context of the user to authenticate. The input username or userDN obtained from the login module callback is substituted into the filter anywhere a {0} expression is used. A common example for the search filter is (uid={0}). |
| |
| The search scope to use. |
| |
| The timeout in milliseconds for user or role searches. |
| |
| The name of the attribute in the user entry that contains the DN of the user. This may be necessary if the DN of the user itself contains special characters, backslash for example, that prevent correct user mapping. If the attribute does not exist, the entry’s DN is used. |
| |
| A flag indicating if the DN is to be parsed for the user name. If set to true, the DN is parsed for the user name. If set to false the DN is not parsed for the user name. This option is used together with usernameBeginString and usernameEndString. |
| |
| Defines the String which is to be removed from the start of the DN to reveal the user name. This option is used together with usernameEndString and only taken into account if parseUsername is set to true. |
| |
| Defines the String which is to be removed from the end of the DN to reveal the user name. This option is used together with usernameEndString and only taken into account if parseUsername is set to true. |
| |
| Name of the attribute containing the user roles. |
| |
| The fixed DN of the context to search for user roles. This is not the DN where the actual roles are, but the DN where the objects containing the user roles are. For example, in a Microsoft Active Directory server, this is the DN where the user account is. |
| |
| A search filter used to locate the roles associated with the authenticated user. The input username or userDN obtained from the login module callback is substituted into the filter anywhere a {0} expression is used. The authenticated userDN is substituted into the filter anywhere a {1} is used. An example search filter that matches on the input username is (member={0}). An alternative that matches on the authenticated userDN is (member={1}). |
| |
| The number of levels of recursion the role search will go below a matching context. Disable recursion by setting this to 0. |
| |
| A role included for all authenticated users |
| |
| Name of the attribute within the roleCtxDN context which contains the role name. If the roleAttributeIsDN property is set to true, this property is used to find the role object’s name attribute. |
| |
| A flag indicating if the DN returned by a query contains the roleNameAttributeID. If set to true, the DN is checked for the roleNameAttributeID. If set to false, the DN is not checked for the roleNameAttributeID. This flag can improve the performance of LDAP queries. |
| |
| Whether or not the roleAttributeID contains the fully-qualified DN of a role object. If false, the role name is taken from the value of the roleNameAttributeId attribute of the context name. Certain directory schemas, such as Microsoft Active Directory, require this attribute to be set to true. |
| |
| If you are not using referrals, you can ignore this option. When using referrals, this option denotes the attribute name which contains users defined for a certain role, for example member, if the role object is inside the referral. Users are checked against the content of this attribute name. If this option is not set, the check will always fail, so role objects cannot be stored in a referral tree. |
| |
| When present, the RoleMapping Login Module will be configured to use the provided file. This parameter defines the fully-qualified file path and name of a properties file or resource which maps roles to replacement roles. The format is original_role=role1,role2,role3 |
| |
| Whether to add to the current roles, or replace the current roles with the mapped ones. Replaces if set to true. |
| |
|
| — |
|
| — |
| |
| — | rhpam7 | |
| KIE server persistence datasource. (Sets the org.kie.server.persistence.ds system property) |
| |
| — | true | |
| — | h2 | |
| KIE server H2 database user name. |
| |
| KIE server H2 database password. |
| |
| — | false | |
| — | jdbc:h2:/opt/kie/data/h2/rhpam;AUTO_SERVER=TRUE | |
| — | org.hibernate.dialect.H2Dialect | |
| KIE administrator user name. |
| |
| KIE administrator password. |
| |
| The KIE Server mode. Valid values are 'DEVELOPMENT' or 'PRODUCTION'. In production mode, you cannot deploy SNAPSHOT versions of artifacts on the KIE server and cannot change the version of an artifact in an existing container. (Sets the org.kie.server.mode system property) |
| |
| KIE server mbeans enabled/disabled. (Sets the kie.mbeans and kie.scanner.mbeans system properties) |
| |
| KIE server class filtering. (Sets the org.drools.server.filter.classes system property) |
| |
| If set to false, the prometheus server extension will be enabled. (Sets the org.kie.prometheus.server.ext.disabled system property) |
| |
| Allows the KIE server to bypass the authenticated user for task-related operations, for example, queries. (Sets the org.kie.server.bypass.auth.user system property) |
| |
| — | — | |
| — |
| |
| KIE server persistence datasource. (Sets the org.kie.server.persistence.ds system property) |
| |
| — | OpenShiftStartupStrategy | |
| KIE server user name. (Sets the org.kie.server.user system property) |
| |
| KIE server password. (Sets the org.kie.server.pwd system property) |
| |
| Maven mirror that Business Central and KIE server must use. If you configure a mirror, this mirror must contain all artifacts that are required for building and deploying your services. |
| |
| Maven mirror configuration for KIE server. |
| |
| — | RHPAMCENTR,EXTERNAL | |
| — | repo-rhpamcentr | |
| — |
| |
| — |
| |
| User name for accessing the Maven service hosted by Business Central inside EAP. |
| |
| Password to access the Maven service hosted by Business Central inside EAP. |
| |
| The id to use for the maven repository. If set, it can be excluded from the optionally configured mirror by adding it to MAVEN_MIRROR_OF. For example: external:*,!repo-rhpamcentr,!repo-custom. If MAVEN_MIRROR_URL is set but MAVEN_MIRROR_ID is not set, an id will be generated randomly, but won’t be usable in MAVEN_MIRROR_OF. |
| |
| Fully qualified URL to a Maven repository or service. |
| |
| User name for accessing the Maven repository, if required. |
| |
| Password to access the Maven repository, if required. |
| |
| — |
| |
| The name of the keystore file within the secret. |
| |
| The name associated with the server certificate. |
| |
| The password for the keystore and certificate. |
| |
| RH-SSO URL. |
| |
| — | ROOT.war | |
| RH-SSO Realm name. |
| |
| KIE Server RH-SSO Client Secret. |
| |
| KIE Server RH-SSO Client name. |
| |
| RH-SSO Realm admin user name for creating the Client if it doesn’t exist. |
| |
| RH-SSO Realm Admin Password used to create the Client. |
| |
| RH-SSO Disable SSL Certificate Validation. |
| |
| RH-SSO Principal Attribute to use as user name. |
| |
| Custom hostname for the http service route for KIE Server. Leave blank for default hostname, e.g.: insecure-<application-name>-kieserver-<project>.<default-domain-suffix> |
| |
| Custom hostname for the https service route for KIE Server. Leave blank for default hostname, e.g.: <application-name>-kieserver-<project>.<default-domain-suffix> |
| |
| LDAP Endpoint to connect for authentication |
| |
| Bind DN used for authentication. |
| |
| LDAP Credentials used for authentication. |
| |
| The JMX ObjectName of the JaasSecurityDomain used to decrypt the password. |
| |
| LDAP Base DN of the top-level context to begin the user search. |
| |
| LDAP search filter used to locate the context of the user to authenticate. The input username or userDN obtained from the login module callback is substituted into the filter anywhere a {0} expression is used. A common example for the search filter is (uid={0}). |
| |
| The search scope to use. |
| |
| The timeout in milliseconds for user or role searches. |
| |
| The name of the attribute in the user entry that contains the DN of the user. This may be necessary if the DN of the user itself contains special characters, backslash for example, that prevent correct user mapping. If the attribute does not exist, the entry’s DN is used. |
| |
| A flag indicating if the DN is to be parsed for the user name. If set to true, the DN is parsed for the user name. If set to false the DN is not parsed for the user name. This option is used together with usernameBeginString and usernameEndString. |
| |
| Defines the String which is to be removed from the start of the DN to reveal the user name. This option is used together with usernameEndString and only taken into account if parseUsername is set to true. |
| |
| Defines the String which is to be removed from the end of the DN to reveal the user name. This option is used together with usernameEndString and only taken into account if parseUsername is set to true. |
| |
| Name of the attribute containing the user roles. |
| |
| The fixed DN of the context to search for user roles. This is not the DN where the actual roles are, but the DN where the objects containing the user roles are. For example, in a Microsoft Active Directory server, this is the DN where the user account is. |
| |
| A search filter used to locate the roles associated with the authenticated user. The input username or userDN obtained from the login module callback is substituted into the filter anywhere a {0} expression is used. The authenticated userDN is substituted into the filter anywhere a {1} is used. An example search filter that matches on the input username is (member={0}). An alternative that matches on the authenticated userDN is (member={1}). |
| |
| The number of levels of recursion the role search will go below a matching context. Disable recursion by setting this to 0. |
| |
| A role included for all authenticated users |
| |
| Name of the attribute within the roleCtxDN context which contains the role name. If the roleAttributeIsDN property is set to true, this property is used to find the role object’s name attribute. |
| |
| A flag indicating if the DN returned by a query contains the roleNameAttributeID. If set to true, the DN is checked for the roleNameAttributeID. If set to false, the DN is not checked for the roleNameAttributeID. This flag can improve the performance of LDAP queries. |
| |
| Whether or not the roleAttributeID contains the fully-qualified DN of a role object. If false, the role name is taken from the value of the roleNameAttributeId attribute of the context name. Certain directory schemas, such as Microsoft Active Directory, require this attribute to be set to true. |
| |
| If you are not using referrals, you can ignore this option. When using referrals, this option denotes the attribute name which contains users defined for a certain role, for example member, if the role object is inside the referral. Users are checked against the content of this attribute name. If this option is not set, the check will always fail, so role objects cannot be stored in a referral tree. |
| |
| When present, the RoleMapping Login Module will be configured to use the provided file. This parameter defines the fully-qualified file path and name of a properties file or resource which maps roles to replacement roles. The format is original_role=role1,role2,role3 |
| |
| Whether to add to the current roles, or replace the current roles with the mapped ones. Replaces if set to true. |
|
5.1.2.3.3.7. Volumes
Deployment | Name | mountPath | Purpose | readOnly |
---|---|---|---|---|
| businesscentral-keystore-volume |
| ssl certs | True |
| kieserver-keystore-volume |
| ssl certs | True |
5.1.2.4. External Dependencies
5.1.2.4.1. Volume Claims
A PersistentVolume
object is a storage resource in an OpenShift cluster. Storage is provisioned by an administrator by creating PersistentVolume
objects from sources such as GCE Persistent Disks, AWS Elastic Block Stores (EBS), and NFS mounts. Refer to the Openshift documentation for more information.
Name | Access Mode |
---|---|
| ReadWriteOnce |
| ReadWriteMany |
5.1.2.4.2. Secrets
This template requires the following secrets to be installed for the application to run.
businesscentral-app-secret kieserver-app-secret
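For example, assuming you already have a keystore file named keystore.jks, you might create the two secrets as follows:
oc create secret generic businesscentral-app-secret --from-file=keystore.jks=keystore.jks
oc create secret generic kieserver-app-secret --from-file=keystore.jks=keystore.jks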
5.2. rhpam76-authoring-ha.yaml template
Application template for a HA persistent authoring environment, for Red Hat Process Automation Manager 7.6 - Deprecated
5.2.1. Parameters
Templates allow you to define parameters which take on a value. That value is then substituted wherever the parameter is referenced. References can be defined in any text field in the objects list field. Refer to the Openshift documentation for more information.
Variable name | Image Environment Variable | Description | Example value | Required |
---|---|---|---|---|
| — | The name for the application. | myapp | True |
|
| KIE administrator user name. | adminUser | False |
|
| KIE administrator password. | — | False |
|
| KIE server controller user name. (Sets the org.kie.server.controller.user system property) | controllerUser | False |
|
| KIE server controller password. (Sets the org.kie.server.controller.pwd system property) | — | False |
|
| KIE server controller token for bearer authentication. (Sets the org.kie.server.controller.token system property) | — | False |
|
| KIE server user name. (Sets the org.kie.server.user system property) | executionUser | False |
|
| KIE server password. (Sets the org.kie.server.pwd system property) | — | False |
|
| Allows the KIE server to bypass the authenticated user for task-related operations, for example, queries. (Sets the org.kie.server.bypass.auth.user system property) | false | False |
|
| KIE server persistence datasource. (Sets the org.kie.server.persistence.ds system property) | java:/jboss/datasources/rhpam | False |
|
| MySQL database user name. | rhpam | False |
|
| MySQL database password. | — | False |
|
| MySQL database name. | rhpam7 | False |
| — | Size of persistent storage for the KIE server database volume. | 1Gi | True |
| — | Namespace in which the ImageStream for the MySQL image is installed. The ImageStream is already installed in the openshift namespace. You should only need to modify this if you installed the ImageStream in a different namespace/project. Default is "openshift". | openshift | False |
| — | The MySQL image version, which is intended to correspond to the MySQL version. Default is "5.7". | 5.7 | False |
|
| KIE server MySQL Hibernate dialect. | org.hibernate.dialect.MySQL57Dialect | True |
|
| The KIE Server mode. Valid values are 'DEVELOPMENT' or 'PRODUCTION'. In production mode, you cannot deploy SNAPSHOT versions of artifacts on the KIE server and cannot change the version of an artifact in an existing container. (Sets the org.kie.server.mode system property). |
| False |
|
| KIE server mbeans enabled/disabled. (Sets the kie.mbeans and kie.scanner.mbeans system properties) | enabled | False |
|
| KIE server class filtering. (Sets the org.drools.server.filter.classes system property) | true | False |
|
| If set to false, the prometheus server extension will be enabled. (Sets the org.kie.prometheus.server.ext.disabled system property) | false | False |
|
| Custom hostname for http service route for Business Central. Leave blank for default hostname, e.g.: insecure-<application-name>-rhpamcentr-<project>.<default-domain-suffix> | — | False |
|
| Custom hostname for https service route for Business Central. Leave blank for default hostname, e.g.: <application-name>-rhpamcentr-<project>.<default-domain-suffix> | — | False |
|
| Custom hostname for http service route for KIE Server. Leave blank for default hostname, e.g.: insecure-<application-name>-kieserver-<project>.<default-domain-suffix> | — | False |
|
| Custom hostname for https service route for KIE Server. Leave blank for default hostname, e.g.: <application-name>-kieserver-<project>.<default-domain-suffix> | — | False |
| — | The name of the secret containing the keystore file for Business Central. | businesscentral-app-secret | True |
|
| The name of the keystore file within the secret for Business Central. | keystore.jks | False |
|
| The name associated with the server certificate for Business Central. | jboss | False |
|
| The password for the keystore and certificate for Business Central. | mykeystorepass | False |
| — | The name of the secret containing the keystore file for KIE Server. | kieserver-app-secret | True |
|
| The name of the keystore file within the secret for KIE Server. | keystore.jks | False |
|
| The name associated with the server certificate for KIE Server. | jboss | False |
|
| The password for the keystore and certificate for KIE Server. | mykeystorepass | False |
|
| The user name for connecting to the JMS broker. | jmsBrokerUser | True |
|
| The password to connect to the JMS broker. | — | True |
| — | DataGrid image. | registry.redhat.io/jboss-datagrid-7/datagrid73-openshift:1.3 | True |
| — | DataGrid Container CPU limit. | 1000m | True |
| — | DataGrid Container memory limit | 2Gi | True |
| — | Size of the persistent storage for DataGrid’s runtime data. | 1Gi | True |
| — | AMQ Broker Image | registry.redhat.io/amq7/amq-broker:7.5 | True |
| — | User role for standard broker user. | admin | True |
| — | The name of the broker | broker | True |
| — | Specifies the maximum amount of memory that message data can consume. If no value is specified, half of the system’s memory is allocated. | 10 gb | False |
| — | Size of persistent storage for AMQ broker volume. | 1Gi | True |
| — | Number of broker replicas for a cluster | 2 | True |
|
| If set to true, turns on KIE server global discovery feature (Sets the org.kie.server.controller.openshift.global.discovery.enabled system property) | false | False |
|
| Enables connection to KIE Server via OpenShift internal Service endpoint (Sets the org.kie.server.controller.openshift.prefer.kieserver.service system property) | true | False |
|
| KIE ServerTemplate Cache TTL in milliseconds. (Sets the org.kie.server.controller.template.cache.ttl system property) | 60000 | False |
| — | Namespace in which the ImageStreams for Red Hat Process Automation Manager images are installed. These ImageStreams are normally installed in the openshift namespace. You should only need to modify this if you installed the ImageStreams in a different namespace/project. | openshift | True |
| — | The name of the image stream to use for Business Central. Default is "rhpam-businesscentral-rhel8". | rhpam-businesscentral-rhel8 | True |
| — | The name of the image stream to use for KIE server. Default is "rhpam-kieserver-rhel8". | rhpam-kieserver-rhel8 | True |
| — | A named pointer to an image in an image stream. Default is "7.6.0". | 7.6.0 | True |
|
| Maven mirror that Business Central and KIE server must use. If you configure a mirror, this mirror must contain all artifacts that are required for building and deploying your services. | — | False |
|
| Maven mirror configuration for KIE server. | external:*,!repo-rhpamcentr | False |
|
| The id to use for the maven repository. If set, it can be excluded from the optionally configured mirror by adding it to MAVEN_MIRROR_OF. For example: external:*,!repo-rhpamcentr,!repo-custom. If MAVEN_MIRROR_URL is set but MAVEN_MIRROR_ID is not set, an id will be generated randomly, but won’t be usable in MAVEN_MIRROR_OF. | repo-custom | False |
|
| Fully qualified URL to a Maven repository or service. | http://nexus.nexus-project.svc.cluster.local:8081/nexus/content/groups/public/ | False |
|
| User name for accessing the Maven repository, if required. | — | False |
|
| Password to access the Maven repository, if required. | — | False |
|
| User name for accessing the Maven service hosted by Business Central inside EAP. | mavenUser | True |
|
| Password to access the Maven service hosted by Business Central inside EAP. | — | True |
|
| The directory to use for git hooks, if required. |
| False |
|
| Sets refresh-interval for the EJB timer database data-store service. | 60000 | True |
| — | Size of the persistent storage for Business Central runtime data. | 1Gi | True |
| — | Business Central Container memory limit. | 8Gi | True |
|
| Business Central Container JVM maximum memory ratio. -Xmx is set to a ratio of the memory available on the container. The default is 80, which means the upper boundary is 80% of the available memory. To skip adding the -Xmx option, set this value to 0. | 80 | True |
| — | Business Central Container CPU limit. | 2000m | True |
| — | KIE server Container memory limit. | 1Gi | True |
| — | KIE server Container CPU limit. | 1000m | True |
| — | RH-SSO URL. | — | False |
| — | RH-SSO Realm name. | — | False |
| — | Business Central RH-SSO Client name. | — | False |
| — | Business Central RH-SSO Client Secret. | 252793ed-7118-4ca8-8dab-5622fa97d892 | False |
| — | KIE Server RH-SSO Client name. | — | False |
| — | KIE Server RH-SSO Client Secret. | 252793ed-7118-4ca8-8dab-5622fa97d892 | False |
| — | RH-SSO Realm admin user name for creating the Client if it doesn’t exist. | — | False |
| — | RH-SSO Realm Admin Password used to create the Client. | — | False |
| — | RH-SSO Disable SSL Certificate Validation. | false | False |
| — | RH-SSO Principal Attribute to use as user name. | preferred_username | False |
| — | LDAP Endpoint to connect for authentication. | ldap://myldap.example.com | False |
| — | Bind DN used for authentication. | uid=admin,ou=users,ou=example,ou=com | False |
| — | LDAP Credentials used for authentication. | Password | False |
| — | The JMX ObjectName of the JaasSecurityDomain used to decrypt the password. | — | False |
| — | LDAP Base DN of the top-level context to begin the user search. | ou=users,ou=example,ou=com | False |
| — | LDAP search filter used to locate the context of the user to authenticate. The input username or userDN obtained from the login module callback is substituted into the filter anywhere a {0} expression is used. A common example for the search filter is (uid={0}). | (uid={0}) | False |
| — | The search scope to use. | — | False |
| — | The timeout in milliseconds for user or role searches. | 10000 | False |
| — | The name of the attribute in the user entry that contains the DN of the user. This may be necessary if the DN of the user itself contains special characters, backslash for example, that prevent correct user mapping. If the attribute does not exist, the entry’s DN is used. | distinguishedName | False |
| — | A flag indicating if the DN is to be parsed for the user name. If set to true, the DN is parsed for the user name. If set to false the DN is not parsed for the user name. This option is used together with usernameBeginString and usernameEndString. | true | False |
| — | Defines the String which is to be removed from the start of the DN to reveal the user name. This option is used together with usernameEndString and only taken into account if parseUsername is set to true. | — | False |
| — | Defines the String which is to be removed from the end of the DN to reveal the user name. This option is used together with usernameEndString and only taken into account if parseUsername is set to true. | — | False |
| — | Name of the attribute containing the user roles. | memberOf | False |
| — | The fixed DN of the context to search for user roles. This is not the DN where the actual roles are, but the DN where the objects containing the user roles are. For example, in a Microsoft Active Directory server, this is the DN where the user account is. | ou=groups,ou=example,ou=com | False |
| — | A search filter used to locate the roles associated with the authenticated user. The input username or userDN obtained from the login module callback is substituted into the filter anywhere a {0} expression is used. The authenticated userDN is substituted into the filter anywhere a {1} is used. An example search filter that matches on the input username is (member={0}). An alternative that matches on the authenticated userDN is (member={1}). | (memberOf={1}) | False |
| — | The number of levels of recursion the role search will go below a matching context. Disable recursion by setting this to 0. | 1 | False |
| — | A role included for all authenticated users | user | False |
| — | Name of the attribute within the roleCtxDN context which contains the role name. If the roleAttributeIsDN property is set to true, this property is used to find the role object’s name attribute. | name | False |
| — | A flag indicating if the DN returned by a query contains the roleNameAttributeID. If set to true, the DN is checked for the roleNameAttributeID. If set to false, the DN is not checked for the roleNameAttributeID. This flag can improve the performance of LDAP queries. | false | False |
| — | Whether or not the roleAttributeID contains the fully-qualified DN of a role object. If false, the role name is taken from the value of the roleNameAttributeId attribute of the context name. Certain directory schemas, such as Microsoft Active Directory, require this attribute to be set to true. | false | False |
| — | If you are not using referrals, you can ignore this option. When using referrals, this option denotes the attribute name which contains users defined for a certain role, for example member, if the role object is inside the referral. Users are checked against the content of this attribute name. If this option is not set, the check will always fail, so role objects cannot be stored in a referral tree. | — | False |
| — | When present, the RoleMapping Login Module will be configured to use the provided file. This parameter defines the fully-qualified file path and name of a properties file or resource which maps roles to replacement roles. The format of every entry in the file is original_role=role1,role2,role3 | — | False |
| — | Whether to add to the current roles, or replace the current roles with the mapped ones. Replaces if set to true. | — | False |
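These parameters are supplied when the template is instantiated. As a minimal sketch of the general pattern, the following command overrides a few of them with oc new-app; the template file name and the parameter names shown here (APPLICATION_NAME, MAVEN_MIRROR_URL, BUSINESS_CENTRAL_MEMORY_LIMIT) are examples and must be checked against the parameter list of the template that you actually deploy.
$ oc new-app -f rhpam76-authoring-ha.yaml \
    -p APPLICATION_NAME=myapp \
    -p MAVEN_MIRROR_URL=http://nexus.nexus-project.svc.cluster.local:8081/nexus/content/groups/public/ \
    -p BUSINESS_CENTRAL_MEMORY_LIMIT=8Gi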
5.2.2. Objects
The CLI supports various object types. A list of these object types as well as their abbreviations can be found in the OpenShift documentation.
5.2.2.1. Services
A service is an abstraction which defines a logical set of pods and a policy by which to access them. Refer to the container-engine documentation for more information.
Service | Port | Name | Description |
---|---|---|---|
| — | 8080, 8443 | http, https | All the Business Central web server’s ports. |
| — | 8888 | ping | The JGroups ping port for rhpamcentr clustering. |
| — | 8888 | ping | The JGroups ping port for clustering. |
| — | 11222 | hotrod | Provides a service for accessing the application over Hot Rod protocol. |
| — | 8080, 8443 | http, https | All the KIE server web server’s ports. |
| — | 61616 | — | The broker’s OpenWire port. |
| — | 8888 | — | The JGroups ping port for amq clustering. |
| — | 3306 | — | The MySQL server’s port. |
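After deployment you can verify that these services and their ports were created. This is a generic check, not specific to this template; replace the service name with one listed in the table above.
$ oc get services
$ oc describe service <service-name>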
5.2.2.2. Routes
A route is a way to expose a service by giving it an externally-reachable hostname such as www.example.com. A defined route and the endpoints identified by its service can be consumed by a router to provide named connectivity from external clients to your applications. Each route consists of a route name, service selector, and (optionally) security configuration. Refer to the OpenShift documentation for more information.
Service | Security | Hostname |
---|---|---|
insecure-${APPLICATION_NAME}-rhpamcentr-http | none | — |
— | TLS passthrough | — |
insecure-${APPLICATION_NAME}-kieserver-http | none | — |
— | TLS passthrough | — |
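To find the hostnames that OpenShift generated for these routes, list them after deployment. The route name in the second command assumes that APPLICATION_NAME was set to myapp.
$ oc get routes
$ oc get route insecure-myapp-rhpamcentr-http -o jsonpath='{.spec.host}'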
5.2.2.3. Deployment Configurations
A deployment in OpenShift is a replication controller based on a user-defined template called a deployment configuration. Deployments are created manually or in response to triggered events. Refer to the OpenShift documentation for more information.
5.2.2.3.1. Triggers
A trigger drives the creation of new deployments in response to events, both inside and outside OpenShift. Refer to the OpenShift documentation for more information.
Deployment | Triggers |
---|---|
| — | ImageChange |
| — | ImageChange |
| — | ImageChange |
5.2.2.3.2. Replicas
A replication controller ensures that a specified number of pod "replicas" are running at any one time. If there are too many, the replication controller kills some pods. If there are too few, it starts more. Refer to the container-engine documentation for more information.
Deployment | Replicas |
---|---|
| — | 2 |
| — | 2 |
| — | 1 |
5.2.2.3.3. Pod Template
5.2.2.3.3.1. Service Accounts
Service accounts are API objects that exist within each project. They can be created or deleted like any other API object. Refer to the OpenShift documentation for more information.
Deployment | Service Account |
---|---|
| — | — |
| — | — |
5.2.2.3.3.2. Image
Deployment | Image |
---|---|
| — | — |
| — | — |
| — | mysql |
5.2.2.3.3.3. Readiness Probe
${APPLICATION_NAME}-rhpamcentr
Http Get on http://localhost:8080/rest/ready
${APPLICATION_NAME}-kieserver
Http Get on http://localhost:8080/services/rest/server/readycheck
${APPLICATION_NAME}-mysql
/bin/sh -i -c MYSQL_PWD="$MYSQL_PASSWORD" mysql -h 127.0.0.1 -u $MYSQL_USER -D $MYSQL_DATABASE -e 'SELECT 1'
5.2.2.3.3.4. Liveness Probe
${APPLICATION_NAME}-rhpamcentr
Http Get on http://localhost:8080/rest/healthy
${APPLICATION_NAME}-kieserver
Http Get on http://localhost:8080/services/rest/server/healthcheck
${APPLICATION_NAME}-mysql
tcpSocket on port 3306
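You can call the same endpoints that the probes use to confirm that a pod is serving requests. The pod names below are placeholders, and the commands assume that the curl binary is available in the container images.
$ oc exec <business-central-pod-name> -- curl -s http://localhost:8080/rest/ready
$ oc exec <kie-server-pod-name> -- curl -s http://localhost:8080/services/rest/server/readycheck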
5.2.2.3.3.5. Exposed Ports
Deployments | Name | Port | Protocol |
---|---|---|---|
| — | jolokia | 8778 | — |
| — | http | 8080 | — |
| — | https | 8443 | — |
| — | ping | 8888 | — |
| — | jolokia | 8778 | — |
| — | http | 8080 | — |
| — | https | 8443 | — |
| — | — | 3306 | — |
5.2.2.3.3.6. Image Environment Variables
Deployment | Variable name | Description | Example value |
---|---|---|---|
|
| — |
|
| — |
| |
| KIE administrator user name. |
| |
| KIE administrator password. |
| |
| KIE server mbeans enabled/disabled. (Sets the kie.mbeans and kie.scanner.mbeans system properties) |
| |
| If set to true, turns on KIE server global discovery feature (Sets the org.kie.server.controller.openshift.global.discovery.enabled system property) |
| |
| Enables connection to KIE Server via OpenShift internal Service endpoint (Sets the org.kie.server.controller.openshift.prefer.kieserver.service system property) |
| |
| KIE ServerTemplate Cache TTL in milliseconds. (Sets the org.kie.server.controller.template.cache.ttl system property) |
| |
| — | true | |
| KIE server controller user name. (Sets the org.kie.server.controller.user system property) |
| |
| KIE server controller password. (Sets the org.kie.server.controller.pwd system property) |
| |
| KIE server controller token for bearer authentication. (Sets the org.kie.server.controller.token system property) |
| |
| KIE server user name. (Sets the org.kie.server.user system property) |
| |
| KIE server password. (Sets the org.kie.server.pwd system property) |
| |
| — |
| |
| Maven mirror that Business Central and KIE server must use. If you configure a mirror, this mirror must contain all artifacts that are required for building and deploying your services. |
| |
| The id to use for the maven repository. If set, it can be excluded from the optionally configured mirror by adding it to MAVEN_MIRROR_OF. For example: external:*,!repo-rhpamcentr,!repo-custom. If MAVEN_MIRROR_URL is set but MAVEN_MIRROR_ID is not set, an id will be generated randomly, but won’t be usable in MAVEN_MIRROR_OF. |
| |
| Fully qualified URL to a Maven repository or service. |
| |
| User name for accessing the Maven repository, if required. |
| |
| Password to access the Maven repository, if required. |
| |
| User name for accessing the Maven service hosted by Business Central inside EAP. |
| |
| Password to access the Maven service hosted by Business Central inside EAP. |
| |
| The directory to use for git hooks, if required. |
| |
| — |
| |
| The name of the keystore file within the secret for Business Central. |
| |
| The name associated with the server certificate for Business Central. |
| |
| The password for the keystore and certificate for Business Central. |
| |
| — | openshift.DNS_PING | |
| — |
| |
| — | 8888 | |
| — |
| |
| — | 11222 | |
| — |
| |
| — | 61616 | |
| The user name for connecting to the JMS broker. |
| |
| The password to connect to the JMS broker. |
| |
| Business Central Container JVM maximum memory ratio. -Xmx is set to a ratio of the memory available on the container. The default is 80, which means the upper boundary is 80% of the available memory. To skip adding the -Xmx option, set this value to 0. |
| |
| RH-SSO URL. |
| |
| — | ROOT.war | |
| RH-SSO Realm name. |
| |
| Business Central RH-SSO Client Secret. |
| |
| Business Central RH-SSO Client name. |
| |
| RH-SSO Realm admin user name for creating the Client if it doesn’t exist. |
| |
| RH-SSO Realm Admin Password used to create the Client. |
| |
| RH-SSO Disable SSL Certificate Validation. |
| |
| RH-SSO Principal Attribute to use as user name. |
| |
| Custom hostname for http service route for Business Central. Leave blank for default hostname, e.g.: insecure-<application-name>-rhpamcentr-<project>.<default-domain-suffix> |
| |
| Custom hostname for https service route for Business Central. Leave blank for default hostname, e.g.: <application-name>-rhpamcentr-<project>.<default-domain-suffix> |
| |
| LDAP Endpoint to connect for authentication. |
| |
| Bind DN used for authentication. |
| |
| LDAP Credentials used for authentication. |
| |
| The JMX ObjectName of the JaasSecurityDomain used to decrypt the password. |
| |
| LDAP Base DN of the top-level context to begin the user search. |
| |
| LDAP search filter used to locate the context of the user to authenticate. The input username or userDN obtained from the login module callback is substituted into the filter anywhere a {0} expression is used. A common example for the search filter is (uid={0}). |
| |
| The search scope to use. |
| |
| The timeout in milliseconds for user or role searches. |
| |
| The name of the attribute in the user entry that contains the DN of the user. This may be necessary if the DN of the user itself contains special characters, backslash for example, that prevent correct user mapping. If the attribute does not exist, the entry’s DN is used. |
| |
| A flag indicating if the DN is to be parsed for the user name. If set to true, the DN is parsed for the user name. If set to false the DN is not parsed for the user name. This option is used together with usernameBeginString and usernameEndString. |
| |
| Defines the String which is to be removed from the start of the DN to reveal the user name. This option is used together with usernameEndString and only taken into account if parseUsername is set to true. |
| |
| Defines the String which is to be removed from the end of the DN to reveal the user name. This option is used together with usernameEndString and only taken into account if parseUsername is set to true. |
| |
| Name of the attribute containing the user roles. |
| |
| The fixed DN of the context to search for user roles. This is not the DN where the actual roles are, but the DN where the objects containing the user roles are. For example, in a Microsoft Active Directory server, this is the DN where the user account is. |
| |
| A search filter used to locate the roles associated with the authenticated user. The input username or userDN obtained from the login module callback is substituted into the filter anywhere a {0} expression is used. The authenticated userDN is substituted into the filter anywhere a {1} is used. An example search filter that matches on the input username is (member={0}). An alternative that matches on the authenticated userDN is (member={1}). |
| |
| The number of levels of recursion the role search will go below a matching context. Disable recursion by setting this to 0. |
| |
| A role included for all authenticated users |
| |
| Name of the attribute within the roleCtxDN context which contains the role name. If the roleAttributeIsDN property is set to true, this property is used to find the role object’s name attribute. |
| |
| A flag indicating if the DN returned by a query contains the roleNameAttributeID. If set to true, the DN is checked for the roleNameAttributeID. If set to false, the DN is not checked for the roleNameAttributeID. This flag can improve the performance of LDAP queries. |
| |
| Whether or not the roleAttributeID contains the fully-qualified DN of a role object. If false, the role name is taken from the value of the roleNameAttributeId attribute of the context name. Certain directory schemas, such as Microsoft Active Directory, require this attribute to be set to true. |
| |
| If you are not using referrals, you can ignore this option. When using referrals, this option denotes the attribute name which contains users defined for a certain role, for example member, if the role object is inside the referral. Users are checked against the content of this attribute name. If this option is not set, the check will always fail, so role objects cannot be stored in a referral tree. |
| |
| When present, the RoleMapping Login Module will be configured to use the provided file. This parameter defines the fully-qualified file path and name of a properties file or resource which maps roles to replacement roles. The format of every entry in the file is original_role=role1,role2,role3 |
| |
| Whether to add to the current roles, or replace the current roles with the mapped ones. Replaces if set to true. |
| |
|
| — |
|
| Sets refresh-interval for the EJB timer database data-store service. |
| |
| — |
| |
| MySQL database name. |
| |
| — | mariadb | |
| MySQL database user name. |
| |
| MySQL database password. |
| |
| — |
| |
| — | 3306 | |
| KIE server MySQL Hibernate dialect. |
| |
| KIE server persistence datasource. (Sets the org.kie.server.persistence.ds system property) |
| |
| KIE server persistence datasource. (Sets the org.kie.server.persistence.ds system property) |
| |
| — | true | |
| KIE administrator user name. |
| |
| KIE administrator password. |
| |
| KIE server mbeans enabled/disabled. (Sets the kie.mbeans and kie.scanner.mbeans system properties) |
| |
| The KIE Server mode. Valid values are 'DEVELOPMENT' or 'PRODUCTION'. In production mode, you can not deploy SNAPSHOT versions of artifacts on the KIE server and can not change the version of an artifact in an existing container. (Sets the org.kie.server.mode system property). |
| |
| KIE server class filtering. (Sets the org.drools.server.filter.classes system property) |
| |
| If set to false, the prometheus server extension will be enabled. (Sets the org.kie.prometheus.server.ext.disabled system property) |
| |
| Allows the KIE server to bypass the authenticated user for task-related operations, for example, queries. (Sets the org.kie.server.bypass.auth.user system property) |
| |
| — | — | |
| — |
| |
| — | OpenShiftStartupStrategy | |
| KIE server password. (Sets the org.kie.server.pwd system property) |
| |
| KIE server user name. (Sets the org.kie.server.user system property) |
| |
| Maven mirror that Business Central and KIE server must use. If you configure a mirror, this mirror must contain all artifacts that are required for building and deploying your services. |
| |
| Maven mirror configuration for KIE server. |
| |
| — | RHPAMCENTR,EXTERNAL | |
| — | repo-rhpamcentr | |
| — |
| |
| — |
| |
| User name for accessing the Maven service hosted by Business Central inside EAP. |
| |
| Password to access the Maven service hosted by Business Central inside EAP. |
| |
| The id to use for the maven repository. If set, it can be excluded from the optionally configured mirror by adding it to MAVEN_MIRROR_OF. For example: external:*,!repo-rhpamcentr,!repo-custom. If MAVEN_MIRROR_URL is set but MAVEN_MIRROR_ID is not set, an id will be generated randomly, but won’t be usable in MAVEN_MIRROR_OF. |
| |
| Fully qualified URL to a Maven repository or service. |
| |
| User name for accessing the Maven repository, if required. |
| |
| Password to access the Maven repository, if required. |
| |
| — |
| |
| The name of the keystore file within the secret for KIE Server. |
| |
| The name associated with the server certificate for KIE Server. |
| |
| The password for the keystore and certificate for KIE Server. |
| |
| RH-SSO URL. |
| |
| — | ROOT.war | |
| RH-SSO Realm name. |
| |
| KIE Server RH-SSO Client Secret. |
| |
| KIE Server RH-SSO Client name. |
| |
| RH-SSO Realm admin user name for creating the Client if it doesn’t exist. |
| |
| RH-SSO Realm Admin Password used to create the Client. |
| |
| RH-SSO Disable SSL Certificate Validation. |
| |
| RH-SSO Principal Attribute to use as user name. |
| |
| Custom hostname for http service route for KIE Server. Leave blank for default hostname, e.g.: insecure-<application-name>-kieserver-<project>.<default-domain-suffix> |
| |
| Custom hostname for https service route for KIE Server. Leave blank for default hostname, e.g.: <application-name>-kieserver-<project>.<default-domain-suffix> |
| |
| LDAP Endpoint to connect for authentication. |
| |
| Bind DN used for authentication. |
| |
| LDAP Credentials used for authentication. |
| |
| The JMX ObjectName of the JaasSecurityDomain used to decrypt the password. |
| |
| LDAP Base DN of the top-level context to begin the user search. |
| |
| LDAP search filter used to locate the context of the user to authenticate. The input username or userDN obtained from the login module callback is substituted into the filter anywhere a {0} expression is used. A common example for the search filter is (uid={0}). |
| |
| The search scope to use. |
| |
| The timeout in milliseconds for user or role searches. |
| |
| The name of the attribute in the user entry that contains the DN of the user. This may be necessary if the DN of the user itself contains special characters, backslash for example, that prevent correct user mapping. If the attribute does not exist, the entry’s DN is used. |
| |
| A flag indicating if the DN is to be parsed for the user name. If set to true, the DN is parsed for the user name. If set to false the DN is not parsed for the user name. This option is used together with usernameBeginString and usernameEndString. |
| |
| Defines the String which is to be removed from the start of the DN to reveal the user name. This option is used together with usernameEndString and only taken into account if parseUsername is set to true. |
| |
| Defines the String which is to be removed from the end of the DN to reveal the user name. This option is used together with usernameEndString and only taken into account if parseUsername is set to true. |
| |
| Name of the attribute containing the user roles. |
| |
| The fixed DN of the context to search for user roles. This is not the DN where the actual roles are, but the DN where the objects containing the user roles are. For example, in a Microsoft Active Directory server, this is the DN where the user account is. |
| |
| A search filter used to locate the roles associated with the authenticated user. The input username or userDN obtained from the login module callback is substituted into the filter anywhere a {0} expression is used. The authenticated userDN is substituted into the filter anywhere a {1} is used. An example search filter that matches on the input username is (member={0}). An alternative that matches on the authenticated userDN is (member={1}). |
| |
| The number of levels of recursion the role search will go below a matching context. Disable recursion by setting this to 0. |
| |
| A role included for all authenticated users |
| |
| Name of the attribute within the roleCtxDN context which contains the role name. If the roleAttributeIsDN property is set to true, this property is used to find the role object’s name attribute. |
| |
| A flag indicating if the DN returned by a query contains the roleNameAttributeID. If set to true, the DN is checked for the roleNameAttributeID. If set to false, the DN is not checked for the roleNameAttributeID. This flag can improve the performance of LDAP queries. |
| |
| Whether or not the roleAttributeID contains the fully-qualified DN of a role object. If false, the role name is taken from the value of the roleNameAttributeId attribute of the context name. Certain directory schemas, such as Microsoft Active Directory, require this attribute to be set to true. |
| |
| If you are not using referrals, you can ignore this option. When using referrals, this option denotes the attribute name which contains users defined for a certain role, for example member, if the role object is inside the referral. Users are checked against the content of this attribute name. If this option is not set, the check will always fail, so role objects cannot be stored in a referral tree. |
| |
| When present, the RoleMapping Login Module will be configured to use the provided file. This parameter defines the fully-qualified file path and name of a properties file or resource which maps roles to replacement roles. The format of every entry in the file is original_role=role1,role2,role3 |
| |
| Whether to add to the current roles, or replace the current roles with the mapped ones. Replaces if set to true. |
| |
|
| MySQL database user name. |
|
| MySQL database password. |
| |
| MySQL database name. |
|
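The template sets most of these variables for you, but you can inspect or override them on a running deployment configuration with oc set env. The deployment configuration name and the variable in the second command are placeholders used for illustration.
$ oc set env dc/<deployment-config-name> --list
$ oc set env dc/<deployment-config-name> KIE_SERVER_BYPASS_AUTH_USER=true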
5.2.2.3.3.7. Volumes
Deployment | Name | mountPath | Purpose | readOnly |
---|---|---|---|---|
| — | businesscentral-keystore-volume | — | ssl certs | True |
| — | kieserver-keystore-volume | — | ssl certs | True |
| — | — | — | mysql | false |
5.2.2.4. External Dependencies
5.2.2.4.1. Volume Claims
A PersistentVolume object is a storage resource in an OpenShift cluster. Storage is provisioned by an administrator by creating PersistentVolume objects from sources such as GCE Persistent Disks, AWS Elastic Block Stores (EBS), and NFS mounts. Refer to the OpenShift documentation for more information.
Name | Access Mode |
---|---|
| — | ReadWriteMany |
| — | ReadWriteOnce |
5.2.2.4.2. Secrets
This template requires the following secrets to be installed for the application to run.
- businesscentral-app-secret
- kieserver-app-secret
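If these secrets are not already present in the project, you can create them from a Java keystore file. The keystore file name used below is an example; it must match the keystore file name and password that you pass to the template parameters.
$ oc create secret generic businesscentral-app-secret --from-file=keystore.jks
$ oc create secret generic kieserver-app-secret --from-file=keystore.jks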
5.2.2.4.3. Clustering
Clustering in OpenShift EAP is achieved through one of two discovery mechanisms: Kubernetes or DNS. This is done by configuring the JGroups protocol stack in standalone-openshift.xml with either the <openshift.KUBE_PING/> or <openshift.DNS_PING/> elements. The templates are configured to use DNS_PING; however, KUBE_PING is the default used by the image.
The discovery mechanism used is specified by the JGROUPS_PING_PROTOCOL environment variable, which can be set to either openshift.DNS_PING or openshift.KUBE_PING. openshift.KUBE_PING is the default used by the image if no value is specified for JGROUPS_PING_PROTOCOL.
For DNS_PING to work, the following steps must be taken:
- The OPENSHIFT_DNS_PING_SERVICE_NAME environment variable must be set to the name of the ping service for the cluster (see table above). If not set, the server will act as if it is a single-node cluster (a "cluster of one").
- The OPENSHIFT_DNS_PING_SERVICE_PORT environment variable should be set to the port number on which the ping service is exposed (see table above). The DNS_PING protocol will attempt to discern the port from the SRV records, if it can, otherwise it will default to 8888.
- A ping service which exposes the ping port must be defined. This service should be "headless" (ClusterIP=None) and must have the following:
  - The port must be named for port discovery to work.
  - It must be annotated with service.alpha.kubernetes.io/tolerate-unready-endpoints set to "true". Omitting this annotation will result in each node forming their own "cluster of one" during startup, then merging their cluster into the other nodes' clusters after startup (as the other nodes are not detected until after they have started).
Example ping service for use with DNS_PING
kind: Service
apiVersion: v1
spec:
  clusterIP: None
  ports:
    - name: ping
      port: 8888
  selector:
    deploymentConfig: eap-app
metadata:
  name: eap-app-ping
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
    description: "The JGroups ping port for clustering."
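To use a definition like this one, save it to a file and create the service in your project; the file name below is arbitrary.
$ oc create -f eap-app-ping-service.yaml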
For KUBE_PING to work, the following steps must be taken:
- The OPENSHIFT_KUBE_PING_NAMESPACE environment variable must be set (see table above). If not set, the server will act as if it is a single-node cluster (a "cluster of one").
- The OPENSHIFT_KUBE_PING_LABELS environment variable should be set (see table above). If not set, pods outside of your application (albeit in your namespace) will try to join.
- Authorization must be granted to the service account the pod is running under to be allowed to access the Kubernetes REST API. This is done on the command line, as shown in the following policy commands.
Example 5.1. Policy commands
Using the default service account in the myproject namespace:
oc policy add-role-to-user view system:serviceaccount:myproject:default -n myproject
Using the eap-service-account in the myproject namespace:
oc policy add-role-to-user view system:serviceaccount:myproject:eap-service-account -n myproject
5.3. OpenShift usage quick reference
To deploy, monitor, manage, and undeploy Red Hat Process Automation Manager templates on Red Hat OpenShift Container Platform, you can use the OpenShift Web console or the oc command.
For instructions about using the Web console, see Create and build an image using the Web console.
For detailed instructions about using the oc command, see CLI Reference. The following commands are likely to be required:
To create a project, use the following command:
$ oc new-project <project-name>
For more information, see Creating a project using the CLI.
To deploy a template (create an application from a template), use the following command:
$ oc new-app -f <template-name> -p <parameter>=<value> -p <parameter>=<value> ...
For more information, see Creating an application using the CLI.
To view a list of the active pods in the project, use the following command:
$ oc get pods
To view the current status of a pod, including whether the pod deployment has completed and the pod is running, use the following command:
$ oc describe pod <pod-name>
You can also use the oc describe command to view the current status of other objects. For more information, see Application modification operations.
To view the logs for a pod, use the following command:
$ oc logs <pod-name>
To view deployment logs, look up a DeploymentConfig name in the template reference and enter the following command:
$ oc logs -f dc/<deployment-config-name>
For more information, see Viewing deployment logs.
To view build logs, look up a BuildConfig name in the template reference and enter the command:
$ oc logs -f bc/<build-config-name>
For more information, see Accessing build logs.
To scale a pod in the application, look up a DeploymentConfig name in the template reference and enter the command:
$ oc scale dc/<deployment-config-name> --replicas=<number>
For more information, see Manual scaling.
To undeploy the application, you can delete the project by using the command:
$ oc delete project <project-name>
Alternatively, you can use the oc delete command to remove any part of the application, such as a pod or replication controller. For details, see Application modification operations.
Appendix A. Versioning information
Documentation last updated on Friday, June 25, 2021.