Serverless Logic
Introduction to OpenShift Serverless Logic
Chapter 1. Getting started
1.1. Creating and running workflows with the Knative Workflow plugin
You can create and run OpenShift Serverless Logic workflows locally.
1.1.1. Creating a workflow
You can use the `create` command with `kn workflow` to set up a new OpenShift Serverless Logic project in your current directory.
Prerequisites
- You have installed the OpenShift Serverless Logic `kn-workflow` CLI plugin.
Procedure
Create a new OpenShift Serverless Logic workflow project by running the following command:
$ kn workflow create
By default, the generated project name is `new-project`. You can change the project name by using the `[-n|--name]` flag as follows:
Example command:
$ kn workflow create --name my-project
1.1.2. Running a workflow locally
You can use the `run` command with `kn workflow` to build and run your OpenShift Serverless Logic workflow project in your current directory.
Prerequisites
- You have installed Podman on your local machine.
- You have installed the OpenShift Serverless Logic `kn-workflow` CLI plugin.
- You have created an OpenShift Serverless Logic workflow project.
Procedure
Change to your project directory by running the following command:
$ cd ./<your-project-name>
Build and run your OpenShift Serverless Logic workflow project by running the following command:
$ kn workflow run
When the project is ready, the Development UI opens automatically in your browser at `localhost:8080/q/dev-ui`, where the Serverless Workflow Tools tile is available. Alternatively, you can access the tool directly at `http://localhost:8080/q/dev-ui/org.apache.kie.sonataflow.sonataflow-quarkus-devui/workflows`.
The workflow runs locally in a container on your machine. To stop the container, press Ctrl+C.
1.2. Deployment options and deploying workflows
You can deploy Serverless Logic workflows on the cluster by using one of three deployment profiles:
- Dev
- Preview
- GitOps
Each profile defines how the Operator builds and manages workflow deployments, including image lifecycle, live updates, and reconciliation behavior.
1.2.1. Deploying workflows using the Dev profile
You can deploy your local workflow on OpenShift Container Platform using the Dev profile. You can use this deployment to experiment and modify your workflow directly on the cluster, seeing changes almost immediately. The Dev profile is designed for development and testing purposes. Because it automatically reloads the workflow without restarting the container, it is ideal for initial development stages and for testing workflow changes in a live environment.
Prerequisites
- You have installed the OpenShift Serverless Logic Operator on your cluster.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have installed the OpenShift CLI (`oc`).
Procedure
Create the workflow configuration YAML file.
Example `workflow-dev.yaml` file:
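The original sample file is not reproduced in this extraction. The following is a minimal sketch of a `SonataFlow` CR that uses the Dev profile; the `greeting` flow name and its injected message are illustrative assumptions:

```yaml
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlow
metadata:
  name: greeting                        # hypothetical workflow name
  annotations:
    sonataflow.org/description: Greeting workflow sample
    sonataflow.org/version: "0.0.1"
    sonataflow.org/profile: dev         # selects the Dev deployment profile
spec:
  flow:
    start: Greet
    states:
      - name: Greet
        type: inject                    # injects static data and ends the workflow
        data:
          message: Hello from the Dev profile
        end: true
```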
To deploy the application, apply the YAML file by entering the following command:
$ oc apply -f <filename> -n <your_namespace>
Verify the deployment and check the status of the deployed workflow by entering the following command:
$ oc get workflow -n <your_namespace> -w
Ensure that your workflow is listed and the status is `Running` or `Completed`.
Edit the workflow directly in the cluster by entering the following command:
$ oc edit sonataflow <workflow_name> -n <your_namespace>
After editing, save the changes. The OpenShift Serverless Logic Operator detects the changes and updates the workflow accordingly.
Verification
To ensure the changes are applied correctly, verify the status and logs of the workflow by entering the following commands:
View the status of the workflow by running the following command:
$ oc get sonataflows -n <your_namespace>
View the workflow logs by running the following command:
$ oc logs <workflow_pod_name> -n <your_namespace>
Next steps
After completing the testing, delete the resources to avoid unnecessary usage by running the following command:
$ oc delete sonataflow <workflow_name> -n <your_namespace>
1.2.2. Deploying workflows using the Preview profile
You can deploy your local workflow on OpenShift Container Platform using the Preview profile. This profile allows you to validate and test workflows in a production-like environment directly on the cluster. The Preview profile is ideal for final testing and validation before moving workflows to production, as well as for quick iteration without directly managing the build pipeline. It also ensures that workflows run smoothly in a production-like setting.
Prerequisites
- You have installed the OpenShift Serverless Logic Operator on your cluster.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have installed the OpenShift CLI (`oc`).
To deploy a workflow in the Preview profile, the OpenShift Serverless Logic Operator uses the build system on OpenShift Container Platform, which automatically creates the image for deploying your workflow.
The following sections explain how to build and deploy your workflow on a cluster using the OpenShift Serverless Logic Operator with a `SonataFlow` custom resource.
1.2.2.1. Configuring workflows in the Preview profile
1.2.2.1.1. Configuring the workflow base builder image
By default, the OpenShift Serverless Logic Operator uses the image distributed in the official Red Hat registry to build workflows. If your scenario requires strict policies for image use, such as security or hardening constraints, you can replace the default image that the Operator uses to build the final workflow container image.
To change this image, edit the `SonataFlowPlatform` custom resource (CR) in the namespace where you deployed your workflows.
Prerequisites
- You have installed the OpenShift Serverless Logic Operator on your cluster.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have installed the OpenShift CLI (`oc`).
Procedure
List the `SonataFlowPlatform` resources in your namespace by running the following command:
$ oc get sonataflowplatform -n <your_namespace>
Replace `<your_namespace>` with the name of your namespace.
Patch the `SonataFlowPlatform` resource with the new builder image by running the following command:
$ oc patch sonataflowplatform <name> --patch 'spec:\n  build:\n    config:\n      baseImage: <your_new_image_full_name_with_tag>' -n <your_namespace>
Verification
Verify that the `SonataFlowPlatform` CR has been patched correctly by running the following command:
$ oc describe sonataflowplatform <name> -n <your_namespace>
Replace `<name>` with the name of your `SonataFlowPlatform` resource and `<your_namespace>` with the name of your namespace.
Ensure that the `baseImage` field under `spec.build.config` reflects the new image.
1.2.2.1.2. Customizing the base builder Dockerfile
The OpenShift Serverless Logic Operator uses the `logic-operator-rhel8-builder-config` config map in its installation namespace, `openshift-serverless-logic`, to configure and run the workflow build process. You can change the Dockerfile entry in this config map to adjust the Dockerfile to your needs.
Modifying the Dockerfile can break the build process.
This example is for reference only. The actual version might be slightly different. Do not use this example for your installation.
Example `logic-operator-rhel8-builder-config` config map:
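The config map content is not reproduced in this extraction. A trimmed sketch of its shape follows; the actual Dockerfile entry ships with the Operator, is much longer, and differs between versions:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: logic-operator-rhel8-builder-config
  namespace: openshift-serverless-logic
data:
  Dockerfile: |
    # The base builder image; the remaining build instructions are supplied by the Operator
    FROM registry.redhat.io/openshift-serverless-1/logic-swf-builder-rhel8:latest AS builder
```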
1.2.2.1.3. Changing resource requirements
You can specify resource requirements for the internal builder pods by creating or editing a `SonataFlowPlatform` resource in the workflow namespace.
Example `SonataFlowPlatform` resource:
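The original example is not preserved here. A minimal sketch, assuming the builder resource requirements are set under `spec.build.template.resources`:

```yaml
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: sonataflow-platform
spec:
  build:
    template:
      resources:            # resource requirements for the internal builder pods
        requests:
          memory: "64Mi"
          cpu: "250m"
        limits:
          memory: "128Mi"
          cpu: "500m"
```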
Only one `SonataFlowPlatform` resource is allowed per namespace. Instead of trying to create another resource, fetch and edit the resource that the OpenShift Serverless Logic Operator created for you.
You can fine-tune the resource requirements for a particular workflow. Each workflow instance has a `SonataFlowBuild` instance created with the same name as the workflow. You can edit the `SonataFlowBuild` custom resource (CR) and specify the parameters as follows:
Example `SonataFlowBuild` CR:
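A minimal sketch, assuming a workflow named `my-workflow` and per-build resource requirements under `spec.resources`:

```yaml
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowBuild
metadata:
  name: my-workflow         # same name as the workflow
spec:
  resources:                # overrides the platform-wide builder requirements
    requests:
      memory: "64Mi"
      cpu: "250m"
    limits:
      memory: "128Mi"
      cpu: "500m"
```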
These parameters apply only to new build instances.
1.2.2.1.4. Passing arguments to the internal builder
You can customize the build process by passing build arguments to the `SonataFlowBuild` instance or by setting default build arguments in the `SonataFlowPlatform` resource.
Prerequisites
- You have installed the OpenShift Serverless Logic Operator on your cluster.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have installed the OpenShift CLI (`oc`).
Procedure
Check for the existing `SonataFlowBuild` instance by running the following command:
$ oc get sonataflowbuild <name> -n <namespace>
Replace `<name>` with the name of your `SonataFlowBuild` instance and `<namespace>` with your namespace.
Add build arguments to the `SonataFlowBuild` instance by running the following command:
$ oc edit sonataflowbuild <name> -n <namespace>
Add the desired build arguments under the `.spec.buildArgs` field of the `SonataFlowBuild` instance:
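A minimal sketch, with a hypothetical build argument name and value:

```yaml
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowBuild
metadata:
  name: <name>               # the name of the existing SonataFlowBuild instance
spec:
  buildArgs:
    - name: MY_BUILD_ARG     # hypothetical argument consumed by the build
      value: my-value
```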
Save the file and exit.
A new build with the updated configuration starts.
Set the default build arguments in the `SonataFlowPlatform` resource by running the following command:
$ oc edit sonataflowplatform <name> -n <namespace>
Add the desired build arguments under the `.spec.buildArgs` field of the `SonataFlowPlatform` resource:
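A minimal sketch, assuming the default build arguments live under `spec.build.template.buildArgs`:

```yaml
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: <name>               # the name of the existing SonataFlowPlatform resource
spec:
  build:
    template:
      buildArgs:
        - name: MY_BUILD_ARG # hypothetical default argument for every new build
          value: my-value
```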
Save the file and exit.
1.2.2.1.5. Setting environment variables in the internal builder
You can set environment variables for the `SonataFlowBuild` internal builder pod. These variables are valid for the build context only and are not set in the final built workflow image.
Prerequisites
- You have installed the OpenShift Serverless Logic Operator on your cluster.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have installed the OpenShift CLI (`oc`).
Procedure
Check for an existing `SonataFlowBuild` instance by running the following command:
$ oc get sonataflowbuild <name> -n <namespace>
Replace `<name>` with the name of your `SonataFlowBuild` instance and `<namespace>` with your namespace.
Edit the `SonataFlowBuild` instance by running the following command:
$ oc edit sonataflowbuild <name> -n <namespace>
Example `SonataFlowBuild` instance:
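A minimal sketch, assuming the environment variables are listed under `spec.envs`:

```yaml
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowBuild
metadata:
  name: <name>
spec:
  envs:
    - name: MY_ENV_VAR       # hypothetical variable, visible only to the builder pod
      value: my-value
```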
Save the file and exit.
A new build with the updated configuration starts.
Alternatively, you can set the environment variables in the `SonataFlowPlatform` resource, so that every new build instance uses them as a template.
Example `SonataFlowPlatform` instance:
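A minimal sketch, assuming the template for new builds is defined under `spec.build.template`:

```yaml
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: sonataflow-platform
spec:
  build:
    template:
      envs:
        - name: MY_ENV_VAR   # hypothetical variable applied to every new build
          value: my-value
```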
1.2.2.1.6. Changing the base builder image
You can modify the default builder image used by the OpenShift Serverless Logic Operator by editing the `logic-operator-rhel8-builder-config` config map.
Prerequisites
- You have installed the OpenShift Serverless Logic Operator on your cluster.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have installed the OpenShift CLI (`oc`).
Procedure
Edit the `logic-operator-rhel8-builder-config` config map by running the following command:
$ oc edit cm/logic-operator-rhel8-builder-config -n openshift-serverless-logic
Modify the Dockerfile entry. In your editor, locate the Dockerfile entry and change the first line to the desired image.
Example:
data:
  Dockerfile: |
    FROM registry.redhat.io/openshift-serverless-1/logic-swf-builder-rhel8:1.33.0
Save the changes.
1.2.2.2. Building and deploying your workflow
You can create a `SonataFlow` custom resource (CR) on OpenShift Container Platform, and the OpenShift Serverless Logic Operator builds and deploys the workflow.
Prerequisites
- You have installed the OpenShift Serverless Logic Operator on your cluster.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have installed the OpenShift CLI (`oc`).
Procedure
Create a workflow YAML file similar to the following:
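The original example is not reproduced in this extraction. The following is a minimal sketch of a `greeting` workflow in the Preview profile, consistent with the `greetings-workflow.yaml` and `buildconfig/greeting` names used in the commands below; the flow body is an illustrative assumption:

```yaml
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlow
metadata:
  name: greeting
  annotations:
    sonataflow.org/description: Greeting workflow sample
    sonataflow.org/version: "0.0.1"
    sonataflow.org/profile: preview   # the Operator builds the image on the cluster
spec:
  flow:
    start: Greet
    states:
      - name: Greet
        type: inject
        data:
          message: Hello World
        end: true
```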
Apply the `SonataFlow` workflow definition to your OpenShift Container Platform namespace by running the following command:
$ oc apply -f <workflow-name>.yaml -n <your_namespace>
Example command for the `greetings-workflow.yaml` file:
$ oc apply -f greetings-workflow.yaml -n workflows
List all the build configurations by running the following command:
$ oc get buildconfigs -n workflows
Get the logs of the build process by running the following command:
$ oc logs buildconfig/<workflow-name> -n <your_namespace>
Example command for the `greetings-workflow.yaml` file:
$ oc logs buildconfig/greeting -n workflows
Verification
To verify the deployment, list all the pods by running the following command:
$ oc get pods -n <your_namespace>
Ensure that the pod corresponding to your workflow is running.
Check the running pods and their logs by running the following command:
$ oc logs pod/<pod-name> -n workflows
1.2.2.3. Verifying workflow deployment
You can verify that your OpenShift Serverless Logic workflow is running by performing a test HTTP call from the workflow pod.
Prerequisites
- You have installed the OpenShift Serverless Logic Operator on your cluster.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have installed the OpenShift CLI (`oc`).
Procedure
Create a workflow YAML file similar to the following:
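A minimal sketch, assuming the same kind of `greeting` workflow shown in the previous section:

```yaml
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlow
metadata:
  name: greeting
  annotations:
    sonataflow.org/profile: preview
spec:
  flow:
    start: Greet
    states:
      - name: Greet
        type: inject
        data:
          message: Hello World
        end: true
```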
Create a route for the workflow service by running the following command:
$ oc expose svc/<workflow-service-name> -n workflows
This command creates a public URL to access the workflow service.
Set an environment variable for the public URL by running the following command:
$ WORKFLOW_SVC=$(oc get route/<workflow-service-name> -n <namespace> --template='{{.spec.host}}')
Make an HTTP call to the workflow by sending a POST request to the service with the following command:
$ curl -X POST -H 'Content-Type: application/json' -H 'Accept: application/json' -d '{<"your": "json_payload">}' http://$WORKFLOW_SVC/<endpoint>
Example output:
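The original output is not preserved here. Assuming the `greeting` workflow sketched above, a response similar to the following is expected; the instance `id` is generated per request:

```json
{
  "id": "b5fbfb14-9d38-4171-99e6-dcdda88a3b1c",
  "workflowdata": {
    "message": "Hello World"
  }
}
```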
Copy to Clipboard Copied! Toggle word wrap Toggle overflow This output shows an example of the expected response if the workflow is running.
1.2.2.4. Restarting a build
To restart a build, you can add or edit the `sonataflow.org/restartBuild: true` annotation in the `SonataFlowBuild` instance. Restarting a build is necessary if there is a problem with your workflow or with the initial build revision.
Prerequisites
- You have installed the OpenShift Serverless Logic Operator on your cluster.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have installed the OpenShift CLI (`oc`).
Procedure
Check if the `SonataFlowBuild` instance exists by running the following command:
$ oc get sonataflowbuild <name> -n <namespace>
Edit the `SonataFlowBuild` instance by running the following command:
$ oc edit sonataflowbuild/<name> -n <namespace>
Replace `<name>` with the name of your `SonataFlowBuild` instance and `<namespace>` with the namespace where your workflow is deployed.
Add the `sonataflow.org/restartBuild: true` annotation to restart the build:
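A minimal sketch of the annotated instance:

```yaml
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowBuild
metadata:
  name: <name>
  annotations:
    sonataflow.org/restartBuild: "true"   # triggers a new build on the next reconciliation
```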
This action triggers the OpenShift Serverless Logic Operator to start a new build of the workflow.
To monitor the build process, check the build logs by running the following command:
$ oc logs buildconfig/<name> -n <namespace>
Replace `<name>` with the name of your `SonataFlowBuild` instance and `<namespace>` with the namespace where your workflow is deployed.
1.2.3. Deploying workflows using the GitOps profile
Use the GitOps profile only for production deployments. For development, rapid iteration, or testing, use the Dev or Preview profiles instead.
You can deploy your local workflow on OpenShift Container Platform using the GitOps profile. The GitOps profile provides full control over the workflow container image by allowing you to build and manage the image externally, typically through a CI/CD pipeline such as Argo CD or Tekton. When a container image is defined in the `SonataFlow` custom resource (CR), the Operator assumes the GitOps profile is being used and does not attempt to build or modify the image in any way.
Prerequisites
- You have installed the OpenShift Serverless Logic Operator on your cluster.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have installed the OpenShift CLI (`oc`).
Procedure
Specify your container image in your `SonataFlow` CR.
Example `SonataFlow` CR with the GitOps profile set:
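The original example is not reproduced here. A minimal sketch, assuming the externally built image is set under `spec.podTemplate.container.image`; the image reference is hypothetical:

```yaml
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlow
metadata:
  name: greeting
  annotations:
    sonataflow.org/profile: gitops
spec:
  podTemplate:
    container:
      image: quay.io/acme/greeting:1.0   # image built and pushed by your CI/CD pipeline
  flow:                                  # must match the definition baked into the image
    start: Greet
    states:
      - name: Greet
        type: inject
        data:
          message: Hello World
        end: true
```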
The `flow` definition must match the workflow definition used during the build process. When you deploy your workflow using the GitOps profile, the Operator compares this definition with the workflow files embedded in the container image. If the definition and files do not match, the deployment fails.
Apply your CR to deploy the workflow:
$ oc apply -f <filename>
1.2.4. Editing a workflow
When the OpenShift Serverless Logic Operator deploys a workflow service, it creates two config maps to store runtime properties:
- User properties: Defined in a `ConfigMap` named after the `SonataFlow` object with the suffix `-props`. For example, if your workflow name is `greeting`, then the `ConfigMap` name is `greeting-props`.
- Managed properties: Defined in a `ConfigMap` named after the `SonataFlow` object with the suffix `-managed-props`. For example, if your workflow name is `greeting`, then the `ConfigMap` name is `greeting-managed-props`.
Managed properties always override any user property with the same key name and cannot be edited by the user. Any change would be overwritten by the Operator at the next reconciliation cycle.
Prerequisites
- You have installed the OpenShift Serverless Logic Operator on your cluster.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have installed the OpenShift CLI (`oc`).
Procedure
Open and edit the `ConfigMap` by running the following command:
$ oc edit cm <workflow_name>-props -n <namespace>
Replace `<workflow_name>` with the name of your workflow and `<namespace>` with the namespace where your workflow is deployed.
Add the properties in the `application.properties` section.
Example of workflow properties stored within a `ConfigMap`:
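A minimal sketch, assuming a workflow named `greeting` and a hypothetical property key:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: greeting-props
  namespace: <namespace>
data:
  application.properties: |
    my.properties.key=any-value   # hypothetical user-defined property
```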
Ensure the properties are correctly formatted to prevent the Operator from replacing your configuration with the default one.
After making the necessary changes, save the file and exit the editor.
1.2.5. Testing a workflow
To verify that your OpenShift Serverless Logic workflow is running correctly, you can perform a test HTTP call from the relevant pod.
Prerequisites
- You have installed the OpenShift Serverless Logic Operator on your cluster.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have installed the OpenShift CLI (`oc`).
Procedure
Create a route for the specified service in your namespace by running the following command:
$ oc expose svc <service_name> -n <namespace>
Fetch the URL for the newly exposed service by running the following command:
$ WORKFLOW_SVC=$(oc get route/<service_name> --template='{{.spec.host}}')
Perform a test HTTP call by sending a `POST` request with the following command:
$ curl -X POST -H 'Content-Type:application/json' -H 'Accept:application/json' -d '<request_body>' http://$WORKFLOW_SVC/<endpoint>
Verify the response to ensure the workflow is functioning as expected.
1.2.6. Troubleshooting a workflow
The OpenShift Serverless Logic Operator deploys its pod with health check probes to ensure the workflow runs in a healthy state. If changes cause these health checks to fail, the pod stops responding.
Prerequisites
- You have installed the OpenShift Serverless Logic Operator on your cluster.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have installed the OpenShift CLI (`oc`).
Procedure
Check the workflow status by running the following command:
$ oc get workflow <name> -o jsonpath={.status.conditions} | jq .
Fetch and analyze the logs from the workflow deployment by running the following command:
$ oc logs deployment/<workflow_name> -f
1.2.7. Deleting a workflow
You can use the `oc delete` command to delete your OpenShift Serverless Logic workflow from the specified namespace.
Prerequisites
- You have installed the OpenShift Serverless Logic Operator on your cluster.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have installed the OpenShift CLI (`oc`).
Procedure
Verify that you have the correct file that defines the workflow you want to delete, for example, `workflow.yaml`.
Run the `oc delete` command to remove the workflow from your specified namespace:
$ oc delete -f <your_file> -n <your_namespace>
Replace `<your_file>` with the name of your workflow file and `<your_namespace>` with your namespace.
Chapter 2. Global configuration settings
You can set global configuration options for the OpenShift Serverless Logic Operator.
2.1. Prerequisites
- You have installed the OpenShift Serverless Logic Operator in the target cluster.
2.2. Customization of global configurations
After installing the OpenShift Serverless Logic Operator, you can access the `logic-operator-rhel8-controllers-config` config map in the `openshift-serverless-logic` namespace. This configuration defines how the Operator behaves when it creates new resources in the cluster. However, changes to this configuration do not affect resources that already exist.
You can modify any of the options within the `controllers_cfg.yaml` key in the config map.
The following table outlines all the available global configuration options:
| Configuration key | Default value | Description |
|---|---|---|
| | | The default size of the Kaniko persistent volume claim (PVC) when using the internal OpenShift Serverless Logic Operator builder manager. |
| | | How much time (in seconds) to wait for a developer mode workflow to start. The controller manager uses this information to create new developer mode containers and set up the health check probes. |
| | | The default image used internally by the Operator-managed Kaniko builder to create the warmup pods. |
| | | The default image used internally by the Operator-managed Kaniko builder to create the executor pods. |
| | | The Jobs service image for PostgreSQL to use. If empty, the OpenShift Serverless Logic Operator uses the default Apache community image based on the current Operator version. |
| | | The Jobs service image without persistence to use. If empty, the OpenShift Serverless Logic Operator uses the default Apache community image based on the current Operator version. |
| | | The Data Index service image for PostgreSQL to use. If empty, the OpenShift Serverless Logic Operator uses the default Apache community image based on the current Operator version. |
| | | The Data Index service image without persistence to use. If empty, the OpenShift Serverless Logic Operator uses the default Apache community image based on the current Operator version. |
| | | The OpenShift Serverless Logic base builder image used in the internal Dockerfile to build workflow applications in the Preview profile. If empty, the OpenShift Serverless Logic Operator uses the default Apache community image based on the current Operator version. |
| | | The image to use to deploy OpenShift Serverless Logic workflow images in the devmode profile. If empty, the OpenShift Serverless Logic Operator uses the default Apache community image based on the current Operator version. |
| | | The default name of the builder config map in the OpenShift Serverless Logic Operator namespace. |
| | | The Quarkus extensions required for workflow persistence. The OpenShift Serverless Logic Operator builder uses these extensions when the workflow being built has PostgreSQL persistence configured. |
| | | When set to |
| | | When set to |
| | | When set to |
You can edit this configuration by updating the `logic-operator-rhel8-controllers-config` config map by using the `oc` command-line tool.
2.2.1. Impact of global configuration changes
When you update the global configuration, the changes immediately affect only newly created resources. For example, if you change the `sonataFlowDevModeImageTag` property and already have a workflow deployed in dev mode, the OpenShift Serverless Logic Operator does not roll out a new deployment with the updated image configuration. Only new deployments reflect the changes.
2.2.2. Customizing the base builder image
You can directly change the base builder image in the Dockerfile used by the OpenShift Serverless Logic Operator.
Additionally, you can specify the base builder image in the `SonataFlowPlatform` configuration within the current namespace. This ensures that the specified base image is used exclusively in the given namespace.
Example of `SonataFlowPlatform` with a custom base builder image:
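A minimal sketch, with a hypothetical image reference:

```yaml
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: sonataflow-platform
spec:
  build:
    config:
      baseImage: quay.io/acme/custom-swf-builder:latest   # hypothetical custom builder image
```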
Alternatively, you can also modify the base builder image in the global configuration config map, as shown in the following example.
Example of `ConfigMap` with a custom base builder image:
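A minimal sketch; the `sonataFlowBaseBuilderImageTag` key name and the image reference are assumptions based on the property naming used elsewhere in this chapter:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: logic-operator-rhel8-controllers-config
  namespace: openshift-serverless-logic
data:
  controllers_cfg.yaml: |
    # assumed key for the base builder image in the global configuration
    sonataFlowBaseBuilderImageTag: quay.io/acme/custom-swf-builder:latest
```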
When customizing the base builder image, the following order of precedence applies:
- The `SonataFlowPlatform` configuration in the current context.
- The global configuration entry in the `ConfigMap` resource.
- The `FROM` clause in the Dockerfile within the OpenShift Serverless Logic Operator namespace, defined in the `logic-operator-rhel8-builder-config` config map.
The entry in the `SonataFlowPlatform` configuration always overrides any other value.
Chapter 3. Managing services
3.1. Configuring OpenAPI services
The OpenAPI Specification (OAS) defines a standard, programming language-agnostic interface for HTTP APIs. You can understand a service’s capabilities without access to the source code, additional documentation, or network traffic inspection. When you define a service by using OpenAPI, you can understand and interact with it using minimal implementation logic. Just as interface descriptions simplify lower-level programming, the OpenAPI Specification eliminates guesswork in calling a service.
3.1.1. OpenAPI function definition
OpenShift Serverless Logic allows workflows to interact with remote services by using an OpenAPI specification reference in a function.
Example OpenAPI function definition:
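The original snippet is not preserved here. A minimal sketch, assuming a multiplication service specification stored at `specs/multiplication.yaml` with an operation identifier of `doOperation`:

```json
{
  "functions": [
    {
      "name": "multiplication",
      "operation": "specs/multiplication.yaml#doOperation"
    }
  ]
}
```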
The `operation` attribute is a string composed of the following parameters:
- `URI`: The engine uses this to locate the specification file.
- Operation identifier: You can find this identifier in the OpenAPI specification file.
OpenShift Serverless Logic supports the following URI schemes:
- `file`: Use this for files located in the file system.
- `http` or `https`: Use these for remotely located files.
Ensure the OpenAPI specification files are available during build time. OpenShift Serverless Logic uses an internal code generation feature to send requests at runtime. After you build the application image, OpenShift Serverless Logic will not have access to these files.
If the OpenAPI service you want to add to the workflow does not have a specification file, you can either create one or update the service to generate and expose the file.
3.1.2. Sending REST requests based on the OpenAPI specification
To send REST requests that are based on the OpenAPI specification files, you must perform the following procedures:
- Define the function references
- Access the defined functions in the workflow states
Prerequisites
- You have installed the OpenShift Serverless Logic Operator on your cluster.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have access to the OpenAPI specification files.
Procedure
To define the OpenAPI functions:
- Identify and access the OpenAPI specification files for the services you intend to invoke.
Copy the OpenAPI specification files into your workflow service directory, such as `<project_application_dir>/specs`.
The following example shows the OpenAPI specification for the multiplication REST service.
Example multiplication REST service OpenAPI specification:
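A trimmed sketch of such a specification; the request property names are assumptions:

```yaml
openapi: 3.0.3
info:
  title: Multiplication service
  version: 1.0.0
paths:
  /:
    post:
      operationId: doOperation   # referenced from the workflow function definition
      requestBody:
        required: true
        content:
          application/json:
            schema:
              type: object
              properties:
                leftElement:
                  type: number
                rightElement:
                  type: number
      responses:
        "200":
          description: The multiplication result
```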
To define functions in the workflow, use the `operationId` from the OpenAPI specification to reference the desired operations in your function definitions.
Example function definitions in the temperature conversion application:
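A minimal sketch, assuming multiplication and subtraction specifications in the `specs` directory:

```json
{
  "functions": [
    {
      "name": "multiplication",
      "operation": "specs/multiplication.yaml#doOperation"
    },
    {
      "name": "subtraction",
      "operation": "specs/subtraction.yaml#doOperation"
    }
  ]
}
```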
Copy to Clipboard Copied! Toggle word wrap Toggle overflow -
Ensure that your function definitions reference the correct paths to the OpenAPI files stored in the
<project_application_dir>/specs
directory.
To access the defined functions in the workflow states:
- Define workflow actions to call the function definitions you added. Ensure each action references a function defined earlier.
Use the `functionRef` attribute to refer to the specific function by its name. Map the arguments in the `functionRef` by using the parameters defined in the OpenAPI specification.
The following example shows how to map function arguments in a workflow state.
Example for mapping function arguments in a workflow:
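A minimal sketch of a workflow state, assuming the `subtraction` function above and illustrative `jq` expressions over the workflow data:

```json
{
  "name": "SubtractValues",
  "type": "operation",
  "actions": [
    {
      "name": "subtract",
      "functionRef": {
        "refName": "subtraction",
        "arguments": {
          "leftElement": ".fahrenheit",
          "rightElement": ".subtractValue"
        }
      }
    }
  ],
  "end": true
}
```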
- Check the `Operation Object` section of the OpenAPI specification to understand how to structure parameters in the request.
- Use `jq` expressions to extract data from the payload and map it to the required parameters. Ensure the engine maps parameter names according to the OpenAPI specification. For operations requiring parameters in the request path instead of the body, refer to the parameter definitions in the OpenAPI specification.
For more information about mapping parameters in the request path instead of the request body, you can refer to the following PetStore API example.
Example for mapping path parameters:
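A trimmed excerpt of the PetStore specification, showing a parameter declared in the request path:

```yaml
/pet/{petId}:
  get:
    operationId: getPetById
    parameters:
      - name: petId
        in: path
        description: ID of pet to return
        required: true
        schema:
          type: integer
          format: int64
```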
The following is an example invocation of a function, in which only one parameter named `petId` is added in the request path.
Example of calling the PetStore function:
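A minimal sketch, assuming a function named `getPetById` defined against the PetStore specification:

```json
{
  "name": "GetPet",
  "type": "operation",
  "actions": [
    {
      "name": "getPetAction",
      "functionRef": {
        "refName": "getPetById",
        "arguments": {
          "petId": ".petId"
        }
      }
    }
  ],
  "end": true
}
```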
3.1.3. Configuring the endpoint URL of OpenAPI services
After accessing the function definitions in workflow states, you can configure the endpoint URL of OpenAPI services.
Prerequisites
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have created your OpenShift Serverless Logic project.
- You have access to the OpenAPI specification files.
- You have defined the function definitions in the workflow.
- You have access to the defined functions in the workflow states.
Procedure
- Locate the OpenAPI specification file that you want to configure, for example, `subtraction.yaml`.
- Convert the file name into a valid configuration key by replacing special characters, such as `.`, with underscores and converting letters to lowercase. For example, change `subtraction.yaml` to `subtraction_yaml`.
- To define the configuration key, use the converted file name as the REST client configuration key, and set it as shown in the following example:
quarkus.rest-client.subtraction_yaml.url=http://myserver.com
- To prevent hardcoding URLs in the `application.properties` file, use environment variable substitution, as shown in the following example:
quarkus.rest-client.subtraction_yaml.url=${SUBTRACTION_URL:http://myserver.com}
In this example:
- Configuration key: `quarkus.rest-client.subtraction_yaml.url`
- Environment variable: `SUBTRACTION_URL`
- Fallback URL: `http://myserver.com`
- Ensure that the `SUBTRACTION_URL` environment variable is set in your system or deployment environment. If the variable is not found, the application uses the fallback URL `http://myserver.com`.
- Add the configuration key and URL substitution to the `application.properties` file:
quarkus.rest-client.subtraction_yaml.url=${SUBTRACTION_URL:http://myserver.com}
- Deploy or restart your application to apply the new configuration settings.
3.2. Configuring OpenAPI services endpoints
OpenShift Serverless Logic uses the `kogito.sw.operationIdStrategy` property to generate the REST client for invoking services defined in OpenAPI documents. This property determines how the configuration key is derived for the REST client configuration.
The `kogito.sw.operationIdStrategy` property supports the following values: `FILE_NAME`, `FULL_URI`, `FUNCTION_NAME`, and `SPEC_TITLE`.
FILE_NAME
OpenShift Serverless Logic uses the OpenAPI document file name to create the configuration key. The key is based on the file name, where special characters are replaced with underscores.
Example configuration:
quarkus.rest-client.stock_portfolio_svc_yaml.url=http://localhost:8282/
The OpenAPI file path is `<project_application_dir>/specs/stock-portfolio-svc.yaml`. The generated key that configures the URL for the REST client is `stock_portfolio_svc_yaml`.
FULL_URI
OpenShift Serverless Logic uses the complete URI path of the OpenAPI document as the configuration key. The full URI is sanitized to form the key.
Example for Serverless Workflow:
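A minimal sketch of a function definition using the full URI; the operation identifier is an assumption:

```json
{
  "functions": [
    {
      "name": "myfunction",
      "operation": "https://my.remote.host/apicatalog/apis/123/document#myoperation"
    }
  ]
}
```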
Example configuration:
quarkus.rest-client.apicatalog_apis_123_document.url=http://localhost:8282/
The URI path is `https://my.remote.host/apicatalog/apis/123/document`. The generated key that configures the URL for the REST client is `apicatalog_apis_123_document`.
FUNCTION_NAME
OpenShift Serverless Logic combines the workflow ID and the function name referencing the OpenAPI document to generate the configuration key.
Example for Serverless Workflow:
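A minimal sketch matching the workflow ID and function name described below; the referenced specification path is an assumption:

```json
{
  "id": "myworkflow",
  "functions": [
    {
      "name": "myfunction",
      "operation": "specs/myopenapi.yaml#doOperation"
    }
  ]
}
```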
Example configuration:
quarkus.rest-client.myworkflow_myfunction.url=http://localhost:8282/
The workflow ID is `myworkflow`. The function name is `myfunction`. The generated key that configures the URL for the REST client is `myworkflow_myfunction`.
SPEC_TITLE
OpenShift Serverless Logic uses the `info.title` value from the OpenAPI document to create the configuration key. The title is sanitized to form the key.
Example for OpenAPI document:
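A trimmed excerpt, matching the title described below:

```yaml
info:
  title: stock-service API
  version: 1.0.0
```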
Example configuration:
quarkus.rest-client.stock-service_API.url=http://localhost:8282/
The OpenAPI document title is `stock-service API`. The generated key that configures the URL for the REST client is `stock-service_API`.
3.2.1. Using a URI alias
As an alternative to the `kogito.sw.operationIdStrategy` property, you can assign an alias to a URI by using the `workflow-uri-definitions` custom extension. This alias simplifies the configuration process and can be used as a configuration key in REST client settings and function definitions.
The `workflow-uri-definitions` extension allows you to map a URI to an alias, which you can reference throughout the workflow and in your configuration files. This approach provides a centralized way to manage URIs and their configurations.
Prerequisites
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have access to the OpenAPI specification files.
Procedure
Add the `workflow-uri-definitions` extension to your workflow. Within this extension, create aliases for your URIs.
Example workflow:
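A minimal sketch, assuming the extension syntax used by Serverless Workflow; the `operation1` and `operation2` identifiers are taken from the callouts below:

```json
{
  "id": "myworkflow",
  "extensions": [
    {
      "extensionid": "workflow-uri-definitions",
      "definitions": {
        "remoteCatalog": "https://my.remote.host/apicatalog/apis/123/document"
      }
    }
  ],
  "functions": [
    {
      "name": "operation1",
      "operation": "remoteCatalog#operation1"
    },
    {
      "name": "operation2",
      "operation": "remoteCatalog#operation2"
    }
  ]
}
```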
1. Set the extension ID to `workflow-uri-definitions`.
2. Set the alias definition by mapping the `remoteCatalog` alias to a URI, for example, the `https://my.remote.host/apicatalog/apis/123/document` URI.
3. Set the function operations by using the `remoteCatalog` alias with the operation identifiers, for example, the `operation1` and `operation2` operation identifiers.
In the `application.properties` file, configure the REST client by using the alias defined in the workflow.
Example property:
quarkus.rest-client.remoteCatalog.url=http://localhost:8282/
In the previous example, the configuration key is set to `quarkus.rest-client.remoteCatalog.url`, and the URL is set to `http://localhost:8282/`, which the REST clients use by referring to the `remoteCatalog` alias.
In your workflow, use the alias when defining functions that operate on the URI.
Example workflow (continued):
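A minimal sketch of the functions section reusing the alias:

```json
{
  "functions": [
    {
      "name": "operation1",
      "operation": "remoteCatalog#operation1"
    }
  ]
}
```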
3.3. Troubleshooting services
Efficient troubleshooting of HTTP-based function invocations, such as those using OpenAPI functions, is crucial for maintaining workflow orchestrations. To diagnose issues, you can trace HTTP requests and responses.
3.3.1. Tracing HTTP requests and responses
OpenShift Serverless Logic uses the Apache HTTP client to trace HTTP requests and responses.
Prerequisites
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have access to the OpenAPI specification files.
- You have access to the workflow definition and instance IDs for correlating HTTP requests and responses.
- You have access to the log configuration of the application where the HTTP service invocations are occurring.
Procedure
Add the following configuration to your application’s `application.properties` file to turn on debug logging for the Apache HTTP client:
# Turning HTTP tracing on
quarkus.log.category."org.apache.http".level=DEBUG
Restart your application to propagate the log configuration changes.
After restarting, check the logs for HTTP request traces.
Example logs of a traced HTTP request
Check the logs for HTTP response traces following the request logs.
Example logs of a traced HTTP response
Chapter 4. Managing security
4.1. Authentication for OpenAPI services
To secure an OpenAPI service operation, define a `Security Scheme` by using the OpenAPI specification. These schemes are in the `securitySchemes` section of the OpenAPI specification file. You must configure the operation by adding a `Security Requirement` that refers to that `Security Scheme`. When a workflow invokes such an operation, that information is used to determine the required authentication configuration.
This section outlines the supported authentication types and demonstrates how to configure them to access secured OpenAPI service operations within your workflows.
4.1.1. Overview of OpenAPI service authentication
In OpenShift Serverless Logic, you can secure OpenAPI service operations by using the `Security Schemes` defined in the OpenAPI specification file. These schemes help define the authentication requirements for operations invoked within a workflow.
The `Security Schemes` are declared in the `securitySchemes` section of the OpenAPI document. Each scheme specifies the type of authentication to apply, such as HTTP Basic or API key.
When a workflow calls a secured operation, it references these defined schemes to determine the required authentication configuration.
Example security scheme definitions:
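A minimal sketch of a `securitySchemes` section, using the scheme names referenced later in this chapter:

```json
{
  "components": {
    "securitySchemes": {
      "http-basic-example": {
        "type": "http",
        "scheme": "basic"
      },
      "api-key-example": {
        "type": "apiKey",
        "name": "api-key-name",
        "in": "header"
      }
    }
  }
}
```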
If the OpenAPI file defines `Security Schemes` but does not include `Security Requirements` for operations, the generator can be configured to create them by default. These defaults apply to operations without explicitly defined requirements.
To configure that scheme, you must use the `quarkus.openapi-generator.codegen.default-security-scheme` property. The `default-security-scheme` property is used only at code generation time and not during the runtime. The value must match any of the available schemes in the `securitySchemes` section, such as `http-basic-example` or `api-key-example`.
Example:
quarkus.openapi-generator.codegen.default-security-scheme=http-basic-example
4.1.2. Configuring authentication credentials for OpenAPI services
To invoke OpenAPI service operations secured by authentication schemes, you must configure the corresponding credentials and parameters in your application. OpenShift Serverless Logic uses these configurations to authenticate with the external services during workflow execution.
This section describes how to define and apply the necessary configuration properties for security schemes declared in the OpenAPI specification file. You can use either the `application.properties` file, the `ConfigMap` associated with your workflow, or environment variables in the `SonataFlow` CR to provide these credentials.
The security schemes defined in an OpenAPI specification file are global to all the operations that are available in the same file. This means that the configurations set for a particular security scheme also apply to the other secured operations.
Prerequisites
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- Your OpenAPI specification includes one or more security schemes.
- You have access to the OpenAPI specification files.
- You have identified the schemes you want to configure, such as `http-basic-example` or `api-key-example`.
- You have access to the `application.properties` file, the workflow `ConfigMap`, or the `SonataFlow` CR.
Procedure
Use the following format to compose your property keys:
quarkus.openapi-generator.[filename].auth.[security_scheme_name].[auth_property_name]
- `filename` is the sanitized name of the file containing the OpenAPI specification, such as `security_example_json`. To sanitize this name, you must replace all non-alphabetic characters with `_` underscores.
- `security_scheme_name` is the sanitized name of the security scheme object definition in the OpenAPI specification file, such as `http_basic_example` or `api_key_example`. To sanitize this name, you must replace all non-alphabetic characters with `_` underscores.
- `auth_property_name` is the name of the property to configure, such as `username`. This property depends on the defined security scheme type.
Note: When you are using environment variables to configure properties, follow the MicroProfile environment variable mapping rules. Replace all non-alphabetic characters in the property key with underscores `_`, and convert the entire key to uppercase.
The following examples show how to provide these configuration properties by using the `application.properties` file, the `ConfigMap` associated with your workflow, or environment variables defined in the `SonataFlow` CR.
Example of configuring the credentials by using the `application.properties` file:
quarkus.openapi-generator.security_example_json.auth.http_basic_example.username=myuser
quarkus.openapi-generator.security_example_json.auth.http_basic_example.password=mypassword
Example of configuring the credentials by using the workflow `ConfigMap`:
If the name of the workflow is `example-workflow`, the name of the `ConfigMap` with the user-defined properties must be `example-workflow-props`.
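A minimal sketch of that `ConfigMap`:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-workflow-props
data:
  application.properties: |
    quarkus.openapi-generator.security_example_json.auth.http_basic_example.username=myuser
    quarkus.openapi-generator.security_example_json.auth.http_basic_example.password=mypassword
```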
Example of configuring the credentials by using environment variables in the `SonataFlow` CR:
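A minimal sketch, assuming the variables are set on the workflow container through `spec.podTemplate.container.env` and follow the MicroProfile mapping rules described above:

```yaml
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlow
metadata:
  name: example-workflow
spec:
  podTemplate:
    container:
      env:
        - name: QUARKUS_OPENAPI_GENERATOR_SECURITY_EXAMPLE_JSON_AUTH_HTTP_BASIC_EXAMPLE_USERNAME
          value: myuser
        - name: QUARKUS_OPENAPI_GENERATOR_SECURITY_EXAMPLE_JSON_AUTH_HTTP_BASIC_EXAMPLE_PASSWORD
          value: mypassword
```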
4.1.3. Example of basic HTTP authentication
The following example shows how to secure a workflow operation by using the HTTP basic authentication scheme. The `security-example.json` file defines an OpenAPI service with a single operation, `sayHelloBasic`, which uses the `http-basic-example` security scheme. You can configure credentials by using application properties, the workflow `ConfigMap`, or environment variables.
Example OpenAPI specification with HTTP basic authentication:
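A trimmed sketch of `security-example.json`; the path and response details are assumptions:

```json
{
  "openapi": "3.0.3",
  "info": { "title": "Security example", "version": "1.0.0" },
  "paths": {
    "/hello-basic": {
      "get": {
        "operationId": "sayHelloBasic",
        "security": [ { "http-basic-example": [] } ],
        "responses": { "200": { "description": "A greeting" } }
      }
    }
  },
  "components": {
    "securitySchemes": {
      "http-basic-example": { "type": "http", "scheme": "basic" }
    }
  }
}
```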
In this example, the `sayHelloBasic` operation is secured by using the `http-basic-example` scheme defined in the `securitySchemes` section. When invoking this operation in a workflow, you must configure the appropriate credentials.
4.1.3.1. Supported configuration properties for basic HTTP authentication
You can use the following configuration keys to provide authentication credentials for the `http-basic-example` scheme:

| Description | Property key | Example |
|---|---|---|
| Username credentials | `quarkus.openapi-generator.[filename].auth.[security_scheme_name].username` | `quarkus.openapi-generator.security_example_json.auth.http_basic_example.username=myuser` |
| Password credentials | `quarkus.openapi-generator.[filename].auth.[security_scheme_name].password` | `quarkus.openapi-generator.security_example_json.auth.http_basic_example.password=mypassword` |

You can replace `[filename]` with the sanitized OpenAPI file name `security_example_json` and `[security_scheme_name]` with the sanitized scheme name `http_basic_example`.
4.1.4. Example of Bearer token authentication
The following example shows how to secure an OpenAPI operation by using the HTTP Bearer authentication scheme. The `security-example.json` file defines an OpenAPI service with the `sayHelloBearer` operation, which uses the `http-bearer-example` scheme for authentication. To access the secured operation during workflow execution, you must configure a Bearer token by using application properties, the workflow `ConfigMap`, or environment variables.
Example OpenAPI specification with Bearer token authentication:
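A trimmed sketch; the path and response details are assumptions:

```json
{
  "paths": {
    "/hello-bearer": {
      "get": {
        "operationId": "sayHelloBearer",
        "security": [ { "http-bearer-example": [] } ],
        "responses": { "200": { "description": "A greeting" } }
      }
    }
  },
  "components": {
    "securitySchemes": {
      "http-bearer-example": { "type": "http", "scheme": "bearer" }
    }
  }
}
```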
In this example, the `sayHelloBearer` operation is protected by the `http-bearer-example` scheme. You must define a valid Bearer token in your configuration to invoke this operation successfully.
4.1.4.1. Supported configuration properties for Bearer token authentication
You can use the following configuration property key to provide the Bearer token:

| Description | Property key | Example |
|---|---|---|
| Bearer token | `quarkus.openapi-generator.[filename].auth.[security_scheme_name].bearer-token` | `quarkus.openapi-generator.security_example_json.auth.http_bearer_example.bearer-token=mytoken` |

You can replace `[filename]` with the sanitized OpenAPI file name `security_example_json` and `[security_scheme_name]` with the sanitized scheme name `http_bearer_example`.
4.1.5. Example of API key authentication
The following example shows how to secure an OpenAPI service operation by using the `apiKey` authentication scheme. The `security-example.json` file defines the `sayHelloApiKey` operation, which uses the `api-key-example` security scheme. You can configure the API key by using application properties, the workflow `ConfigMap`, or environment variables.
Example OpenAPI specification with API key authentication:
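A trimmed sketch; the `api-key-name` header name matches the note later in this section, while the path is an assumption:

```json
{
  "paths": {
    "/hello-api-key": {
      "get": {
        "operationId": "sayHelloApiKey",
        "security": [ { "api-key-example": [] } ],
        "responses": { "200": { "description": "A greeting" } }
      }
    }
  },
  "components": {
    "securitySchemes": {
      "api-key-example": { "type": "apiKey", "name": "api-key-name", "in": "header" }
    }
  }
}
```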
In this example, the `sayHelloApiKey` operation is protected by the `api-key-example` security scheme, which uses an API key passed in the HTTP request header.
4.1.5.1. Supported configuration properties for API key authentication
You can use the following configuration property to configure the API key:

| Description | Property key | Example |
|---|---|---|
| API key | `quarkus.openapi-generator.[filename].auth.[security_scheme_name].api-key` | `quarkus.openapi-generator.security_example_json.auth.api_key_example.api-key=MY_KEY` |

You can replace `[filename]` with the sanitized OpenAPI file name `security_example_json` and `[security_scheme_name]` with the sanitized scheme name `api_key_example`.
The `apiKey` scheme type contains an additional `name` property that configures the key name to use when the OpenAPI service is invoked. Also, the format used to pass the key depends on the value of the `in` property:
- When the value is `header`, the key is passed as an HTTP request header.
- When the value is `cookie`, the key is passed as an HTTP cookie.
- When the value is `query`, the key is passed as an HTTP query parameter.
In the example, the key is passed in the HTTP header as `api-key-name: MY_KEY`.
OpenShift Serverless Logic manages this internally, so no additional configuration is required beyond setting the property value.
4.1.6. Example of clientCredentials OAuth 2.0 authentication
The following example shows how to secure an OpenAPI operation by using the OAuth 2.0 clientCredentials flow. The OpenAPI specification defines the sayHelloOauth2 operation, which uses the oauth-example security scheme. Unlike simpler authentication methods, such as HTTP Basic or API keys, OAuth 2.0 authentication requires additional integration with the Quarkus OpenID Connect (OIDC) Client.
Example OpenAPI specification with OAuth 2.0
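A minimal sketch of such a specification, assuming a hypothetical token URL:
{
  "openapi": "3.0.3",
  "info": { "title": "security-example", "version": "1.0" },
  "paths": {
    "/hello-oauth2": {
      "get": {
        "operationId": "sayHelloOauth2",
        "security": [{ "oauth-example": [] }],
        "responses": { "200": { "description": "A greeting" } }
      }
    }
  },
  "components": {
    "securitySchemes": {
      "oauth-example": {
        "type": "oauth2",
        "flows": {
          "clientCredentials": {
            "tokenUrl": "https://auth.example.com/oauth/token",
            "scopes": {}
          }
        }
      }
    }
  }
}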
In this example, the sayHelloOauth2 operation is protected by the oauth-example security scheme, which uses the clientCredentials flow for token-based authentication.
4.1.6.1. OAuth 2.0 support with the OIDC Client filter extension
OAuth 2.0 token management is handled by a Quarkus OidcClient. To enable this integration, you must add the Quarkus OIDC Client Filter and the Quarkus OpenAPI Generator OIDC extensions to your project, as shown in the following examples:
Example of adding extensions using Maven
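A sketch of the corresponding pom.xml dependencies, using the artifact coordinates from the QUARKUS_EXTENSIONS build argument shown in the gitops example:
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-oidc-client-filter</artifactId>
    <version>3.15.4.redhat-00001</version>
</dependency>
<dependency>
    <groupId>io.quarkiverse.openapi.generator</groupId>
    <artifactId>quarkus-openapi-generator-oidc</artifactId>
    <version>2.9.0-lts</version>
</dependency>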
Example of adding extensions using the gitops profile
Ensure that you configure the QUARKUS_EXTENSIONS build argument with the following value when building the workflow image:
--build-arg=QUARKUS_EXTENSIONS=io.quarkus:quarkus-oidc-client-filter:3.15.4.redhat-00001,io.quarkiverse.openapi.generator:quarkus-openapi-generator-oidc:2.9.0-lts
Example of adding extensions using the preview profile
The extensions that you add in the SonataFlowPlatform CR are included in all the workflows that you deploy in that namespace with the preview profile.
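A sketch of such a configuration, assuming the spec.build.template.buildArgs field of the SonataFlowPlatform CR:
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: sonataflow-platform
spec:
  build:
    template:
      buildArgs:
        - name: QUARKUS_EXTENSIONS
          value: io.quarkus:quarkus-oidc-client-filter:3.15.4.redhat-00001,io.quarkiverse.openapi.generator:quarkus-openapi-generator-oidc:2.9.0-lts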
4.1.6.2. OidcClient configuration
To access the secured operation, define an OidcClient configuration in your application.properties file. The configuration uses the sanitized security scheme name from the OpenAPI specification, in this case oauth_example, as follows:
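A minimal sketch of such an application.properties configuration, assuming a hypothetical OIDC server URL and client credentials:
# The client name matches the sanitized security scheme name oauth_example
quarkus.oidc-client.oauth_example.auth-server-url=https://keycloak.example.com/realms/example
quarkus.oidc-client.oauth_example.client-id=example-client
quarkus.oidc-client.oauth_example.credentials.secret=example-secret
quarkus.oidc-client.oauth_example.grant.type=client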
In this configuration:
- oauth_example matches the sanitized name of the oauth-example scheme in the OpenAPI file. The link between the sanitized scheme name and the corresponding OidcClient is established by this naming convention.
- The OidcClient handles token generation and renewal automatically during workflow execution.
4.1.7. Example of authorization token propagation
OpenShift Serverless Logic supports token propagation for OpenAPI operations that use the oauth2 or http bearer security scheme types. Token propagation enables your workflow to forward the authorization token it receives during workflow creation to downstream services. This feature is useful when your workflow needs to interact with third-party services on behalf of the client that initiated the request.
You must configure token propagation individually for each security scheme. After it is enabled, all OpenAPI operations secured with the same scheme use the propagated token unless explicitly overridden.
The following example defines the sayHelloOauth2 operation in the security-example.json file. This operation uses the oauth-example security scheme with the clientCredentials flow:
Example OpenAPI specification with token propagation
4.1.7.1. Supported configuration properties for authorization token propagation
You can use the following configuration keys to enable and customize token propagation:
The tokens are automatically passed to downstream services while the workflow is active. When the workflow enters a waiting state, such as a timer or event-based pause, the token propagation stops. After the workflow resumes, tokens are not re-propagated automatically. You must manage re-authentication if needed.
Property key | Example | Description |
---|---|---|
quarkus.openapi-generator.[filename].auth.[security_scheme_name].token-propagation | quarkus.openapi-generator.security_example_json.auth.oauth_example.token-propagation=true | Enables token propagation for all operations secured with the given scheme. Default is false. |
quarkus.openapi-generator.[filename].auth.[security_scheme_name].header-name | quarkus.openapi-generator.security_example_json.auth.oauth_example.header-name=MyHeaderName | (Optional) Overrides the default Authorization header with a custom header name to read the token from. |
You can replace [filename] with the sanitized OpenAPI file name security_example_json and [security_scheme_name] with the sanitized scheme name oauth_example.
Chapter 5. Supporting services
5.1. Job service
The Job service schedules and executes tasks in a cloud environment. Independent services implement these tasks, which can be initiated through any of the supported interaction modes, including HTTP calls or Knative Events delivery.
In OpenShift Serverless Logic, the Job service is responsible for controlling the execution of time-triggered actions. Therefore, all the time-based states that you can use in a workflow are handled by the interaction between the workflow and the Job service.
For example, every time the workflow execution reaches a state with a configured timeout, a corresponding job is created in the Job service, and when the timeout is met, an HTTP callback is executed to notify the workflow.
The main goal of the Job service is to manage active jobs, such as scheduled jobs that need to be executed. When a job reaches its final state, the Job service removes it. To retain job information in a permanent repository, the Job service produces status change events that can be recorded by an external service, such as the Data Index Service.
You do not need to manually install or configure the Job service if you are using the OpenShift Serverless Operator to deploy workflows. The Operator handles these tasks automatically and manages all necessary configurations for each workflow to connect with it.
5.1.1. Job service leader election process
The Job service operates as a singleton service, meaning only one active instance can schedule and execute jobs.
To prevent conflicts when the service is deployed in the cloud, where multiple instances might be running, the Job service supports a leader election process. Only the instance that is elected as the leader manages external communication to receive and schedule jobs.
Non-leader instances remain inactive in a standby state but continue attempting to become the leader through the election process. When a new instance starts, it does not immediately assume leadership. Instead, it enters the leader election process to determine if it can take over the leader role.
If the current leader becomes unresponsive or if it is shut down, another running instance takes over as the leader.
This leader election mechanism uses the underlying persistence backend, which is currently supported only in the PostgreSQL implementation.
5.2. Data Index service
The Data Index service is a dedicated supporting service that stores the data related to the workflow instances and their associated jobs. This service provides a GraphQL endpoint allowing users to query that data.
The Data Index service processes data received through events, which can originate from any workflow or directly from the Job service.
Data Index supports Apache Kafka or Knative Eventing to consume CloudEvents messages from workflows. It indexes and stores this event data in a database, making it accessible through GraphQL. These events provide detailed information about the workflow execution. The Data Index service is central to OpenShift Serverless Logic search, insights, and management capabilities.
The key features of the Data Index service are as follows:
- A flexible data structure
- A distributable, cloud-ready format
- Message-based communication with workflows via Apache Kafka, Knative, and CloudEvents
- A powerful GraphQL-based querying API
When you are using the OpenShift Serverless Operator to deploy workflows, you do not need to manually install or configure the Data Index service. The Operator automatically manages all the necessary configurations for each workflow to connect with it.
5.2.1. GraphQL queries for workflow instances and jobs
To retrieve data about workflow instances and jobs, you can use GraphQL queries.
5.2.1.1. Retrieve data from workflow instances
You can retrieve information about a specific workflow instance by using the following query example:
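A sketch of such a query, assuming the Data Index ProcessInstances schema; the instance ID and field selection are illustrative:
{
  ProcessInstances(where: { id: { equal: "a1e139d5-4e77-48c9-84ae-34578e904e5a" } }) {
    id
    processId
    processName
    state
    start
    end
    nodes {
      name
      type
      enter
      exit
    }
  }
}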
5.2.1.2. Retrieve data from jobs
You can retrieve data from a specific job instance by using the following query example:
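A sketch of such a query, assuming the Data Index Jobs schema; the job ID is a placeholder:
{
  Jobs(where: { id: { equal: "c2b0a2d8-0c7a-4f0a-9b2a-2f8e7c9d1a3b" } }) {
    id
    status
    expirationTime
    retries
    executionCounter
  }
}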
5.2.1.3. Filter query results by using the where parameter
You can filter query results by using the where parameter, which allows multiple combinations based on workflow attributes.
Example query to filter by state
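A sketch of such a query might look as follows:
{
  ProcessInstances(where: { state: { equal: ACTIVE } }) {
    id
    processName
    start
  }
}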
Example query to filter by ID
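A sketch of such a query, with a placeholder ID:
{
  ProcessInstances(where: { id: { equal: "a1e139d5-4e77-48c9-84ae-34578e904e5a" } }) {
    id
    processName
    state
  }
}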
By default, filters are combined by using the AND operator. You can modify this behavior by combining filters with the AND or OR operators.
Example query to combine filters with the OR Operator
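A sketch of such a query, assuming the generated or argument accepts a list of sub-filters:
{
  ProcessInstances(where: { or: [{ state: { equal: ACTIVE } }, { state: { equal: COMPLETED } }] }) {
    id
    processName
    state
  }
}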
Example query to combine filters with the AND and OR Operators
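A sketch of such a query, assuming the generated and and or arguments accept lists of sub-filters:
{
  ProcessInstances(where: { and: [{ processName: { equal: "greeting" } }, { or: [{ state: { equal: ACTIVE } }, { state: { equal: COMPLETED } }] }] }) {
    id
    processName
    state
  }
}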
Depending on the attribute type, you can use the following available operators:
Attribute type | Available operators |
---|---|
String array | contains, containsAll, containsAny, isNull |
String | in, like, isNull, equal |
ID | in, equal, isNull |
Boolean | isNull, equal |
Numeric | in, isNull, equal, greaterThan, greaterThanEqual, lessThan, lessThanEqual, between |
Date | isNull, equal, greaterThan, greaterThanEqual, lessThan, lessThanEqual, between |
5.2.1.4. Sort query results by using the orderBy parameter
You can sort query results based on workflow attributes by using the orderBy parameter. You can also specify the sorting direction in ascending (ASC) or descending (DESC) order. Multiple attributes are applied in the order you specified.
Example query to sort by the start time in an ASC order
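A sketch of such a query might look as follows:
{
  ProcessInstances(orderBy: { start: ASC }) {
    id
    processName
    start
  }
}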
5.2.1.5. Limit the number of results by using the pagination parameter
You can control the number of returned results and specify an offset by using the pagination parameter.
Example query to limit results to 10, starting from offset 0
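A sketch of such a query might look as follows:
{
  ProcessInstances(pagination: { limit: 10, offset: 0 }) {
    id
    processName
  }
}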
5.3. Managing supporting services
This section provides an overview of the supporting services essential for OpenShift Serverless Logic. It specifically focuses on configuring and deploying the Data Index service and Job Service supporting services using the OpenShift Serverless Logic Operator.
In a typical OpenShift Serverless Logic installation, you must deploy both services to ensure successful workflow execution. The Data Index service allows for efficient data management, while the Job Service ensures reliable job handling.
5.3.1. Supporting services and workflow integration
When you deploy a supporting service in a given namespace, you can choose between an enabled or disabled deployment. An enabled deployment signals the OpenShift Serverless Logic Operator to automatically intercept workflow deployments using the preview or gitops profile within the namespace and configure them to connect with the service.
For example, when the Data Index service is enabled, workflows are automatically configured to send status change events to it. Similarly, enabling the Job Service ensures that a job is created whenever a workflow requires a timeout. The OpenShift Serverless Logic Operator also configures the Job Service to send events to the Data Index service, facilitating seamless integration between the services.
The OpenShift Serverless Logic Operator not only deploys supporting services but also manages other necessary configurations to ensure successful workflow execution. All these configurations are handled automatically. You only need to provide the supporting services configuration in the SonataFlowPlatform CR.
Deploying only one of the supporting services or using a disabled deployment are advanced use cases. In a standard installation, you must enable both services to ensure smooth workflow execution.
5.3.2. Supporting services deployment with the SonataFlowPlatform CR
To deploy supporting services, configure the dataIndex and jobService subfields within the spec.services section of the SonataFlowPlatform custom resource (CR). This configuration instructs the OpenShift Serverless Logic Operator to deploy each service when the SonataFlowPlatform CR is applied.
Each configuration of a service is handled independently, allowing you to customize these settings alongside other configurations in the SonataFlowPlatform CR.
See the following scaffold example configuration for deploying supporting services:
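A minimal sketch of such a CR, with callout numbers matching the descriptions that follow; the metadata names are hypothetical:
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: sonataflow-platform-example
  namespace: example-namespace
spec:
  services:
    dataIndex:      # 1
      enabled: true # 2
    jobService:     # 3
      enabled: true # 4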
- 1
- Data Index service configuration field.
- 2
- Setting enabled: true deploys the Data Index service. If set to false or omitted, the deployment is disabled. The default value is false.
- 3
- Job Service configuration field.
- 4
- Setting enabled: true deploys the Job Service. If set to false or omitted, the deployment is disabled. The default value is false.
5.3.3. Supporting services scope
The SonataFlowPlatform custom resource (CR) enables the deployment of supporting services within a specific namespace. This means all automatically configured supporting services and workflow communications are restricted to the namespace of the deployed platform.
This feature is particularly useful when separate instances of supporting services are required for different sets of workflows. For example, you can deploy an application in isolation with its workflows and supporting services, ensuring they remain independent from other deployments.
5.3.4. Supporting services persistence configurations
The persistence configuration for supporting services in OpenShift Serverless Logic can be either ephemeral or PostgreSQL, depending on the needs of your environment. Ephemeral persistence is ideal for development and testing, while PostgreSQL persistence is recommended for production environments.
5.3.4.1. Ephemeral persistence configuration
The ephemeral persistence uses an embedded PostgreSQL database that is dedicated to each service. The OpenShift Serverless Logic Operator recreates this database with every service restart, making it suitable only for development and testing purposes. You do not need any additional configuration other than the following SonataFlowPlatform CR:
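A minimal sketch of such a CR, with hypothetical metadata names; omitting the persistence field selects the ephemeral, embedded database:
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: sonataflow-platform-example
  namespace: example-namespace
spec:
  services:
    dataIndex:
      enabled: true
    jobService:
      enabled: true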
5.3.4.2. Database migration configuration
Database migration refers to either initializing a given Data Index or Jobs Service database to its respective schema, or applying data or schema updates when new versions are released. You must configure the database migration strategy individually for each supporting service by using the dataIndex.persistence.dbMigrationStrategy and jobService.persistence.dbMigrationStrategy optional fields. If you do not configure a migration strategy, the system uses service as the default value.
Database migration is supported only when using the PostgreSQL persistence configuration.
You can configure any of the following database migration strategies:
5.3.4.2.1. Job-based database migration strategy
When you configure the job-based strategy, the OpenShift Serverless Logic Operator uses a dedicated Kubernetes Job to manage the entire migration process. This Job runs before the supporting services deployment, ensuring that a service starts only if the corresponding migration completes successfully. You typically use this strategy in production environments.
5.3.4.2.2. Service-based database migration strategy
When you configure the service-based strategy, the database migration is managed directly by each supporting service. The migration is executed as part of the service startup sequence. In worst-case scenarios, a service might start with failures if the migration is unsuccessful. Service-based database migration is the default strategy when you do not specify any configuration.
5.3.4.2.3. None migration strategy
When you configure the none strategy, neither the Operator nor the service attempts to perform the migration. You typically use this strategy in environments where a database administrator (DBA) manually executes all database migrations.
5.3.4.3. PostgreSQL persistence configuration
For PostgreSQL persistence, you must set up a PostgreSQL server instance on your cluster. The administration of this instance remains independent of the OpenShift Serverless Logic Operator control. To connect a supporting service with the PostgreSQL server, you must configure the appropriate database connection parameters.
You can configure PostgreSQL persistence in the SonataFlowPlatform CR by using the following example:
Example of PostgreSQL persistence configuration
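A sketch of such a configuration, with callout numbers matching the descriptions that follow; the service, database, schema, and secret names are hypothetical:
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: sonataflow-platform-example
  namespace: example-namespace
spec:
  services:
    dataIndex:
      enabled: true
      persistence:
        dbMigrationStrategy: service # 1
        postgresql:
          serviceRef:
            name: postgres-example # 2
            namespace: postgres-example-namespace # 3
            databaseName: example-database # 4
            databaseSchema: example-schema # 5
            port: 5432 # 6
          secretRef:
            name: postgres-secrets-example # 7
            userKey: POSTGRESQL_USER # 8
            passwordKey: POSTGRESQL_PASSWORD # 9
    jobService:
      enabled: true
      persistence:
        dbMigrationStrategy: service
        # configure the postgresql block in the same way as for the dataIndex service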
- 1
- Optional: Database migration strategy to use. Defaults to service.
- 2
- Name of the service to connect with the PostgreSQL database server.
- 3
- Optional: Defines the namespace of the PostgreSQL Service. Defaults to the SonataFlowPlatform namespace.
- 4
- Defines the name of the PostgreSQL database for storing supporting service data.
- 5
- Optional: Specifies the schema for storing supporting service data. The default value is the SonataFlowPlatform name, suffixed with -data-index-service or -jobs-service. For example, sonataflow-platform-example-data-index-service.
- 6
- Optional: Port number to connect with the PostgreSQL Service. Default value is 5432.
- 7
- Defines the name of the secret containing the username and password for database access.
- 8
- Defines the name of the key in the secret that contains the username to connect with the database.
- 9
- Defines the name of the key in the secret that contains the password to connect with the database.
You can configure each service’s persistence independently by using the respective persistence field.
Create the secrets to access PostgreSQL by running the following command:
$ oc create secret generic <postgresql_secret_name> \
  --from-literal=POSTGRESQL_USER=<user> \
  --from-literal=POSTGRESQL_PASSWORD=<password> \
  -n <namespace>
5.3.4.4. Common PostgreSQL persistence configuration
The OpenShift Serverless Logic Operator automatically connects supporting services to the common PostgreSQL server configured in the spec.persistence field.
The following precedence rules apply:
- If you configure a specific persistence for a supporting service, for example, services.dataIndex.persistence, the service uses that configuration.
- If you do not configure persistence for a service, the system uses the common persistence configuration from the current platform.
When using a common PostgreSQL configuration, each service schema is automatically set to the SonataFlowPlatform name, suffixed with -data-index-service or -jobs-service, for example, sonataflow-platform-example-data-index-service.
5.3.4.5. Platform-scoped PostgreSQL persistence configuration
You can configure a common PostgreSQL service and database for all supporting services by using the spec.persistence.postgresql field in the SonataFlowPlatform custom resource (CR). When this field is configured, the OpenShift Serverless Logic Operator connects the supporting services to the specified database. Any workflows deployed in the same namespace using the preview or gitops profiles, and that do not specify a custom persistence configuration, also connect to this database.
The following rules apply when configuring platform-scoped persistence:
- If a supporting service has its own persistence configuration, for example, if services.dataIndex.persistence.postgresql is set, that configuration takes precedence.
- If a supporting service does not have a custom persistence configuration, the configuration is inherited from the current platform.
- If a supporting service requires a specific database migration strategy, configure it by using the dataIndex.persistence.dbMigrationStrategy and jobService.persistence.dbMigrationStrategy fields.
The following SonataFlowPlatform CR fragment shows how to configure platform-scoped PostgreSQL persistence:
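A sketch of such a fragment, with callout numbers matching the descriptions that follow; the service, database, and secret names are hypothetical:
spec:
  persistence:
    postgresql:
      serviceRef:
        name: postgres-example # 1
        namespace: postgres-example-namespace # 2
        databaseName: example-database # 3
        port: 5432 # 4
      secretRef:
        name: postgres-secrets-example # 5
        userKey: POSTGRESQL_USER # 6
        passwordKey: POSTGRESQL_PASSWORD # 7
  services:
    dataIndex:
      enabled: true
      persistence:
        dbMigrationStrategy: service # 8
    jobService:
      enabled: true
      persistence:
        dbMigrationStrategy: service # 9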
- 1
- Name of the Kubernetes service to connect to the PostgreSQL database server.
- 2
- Optional: Namespace containing the PostgreSQL service. Defaults to the SonataFlowPlatform namespace.
- 3
- Name of the PostgreSQL database to store supporting services and workflows data.
- 4
- Optional: Port to connect to the PostgreSQL service. Defaults to 5432.
- 5
- Name of the Kubernetes Secret that contains database credentials.
- 6
- Secret key that stores the database username.
- 7
- Secret key that stores the database password.
- 8
- Optional: Database migration strategy for the Data Index. Defaults to service.
- 9
- Optional: Database migration strategy for the Jobs Service. Defaults to service. You can configure distinct strategies per service if needed.
5.3.5. Supporting services eventing system configurations
For an OpenShift Serverless Logic installation, the following types of events are generated:
- Outgoing and incoming events related to workflow business logic.
- Events sent from workflows to the Data Index and Job Service.
- Events sent from the Job Service to the Data Index Service.
The OpenShift Serverless Logic Operator leverages the Knative Eventing system to manage all event communication between these services, ensuring efficient and reliable event handling.
5.3.5.1. Platform-scoped eventing system configuration
To configure a platform-scoped eventing system, you can use the spec.eventing.broker.ref field in the SonataFlowPlatform CR to reference a Knative Eventing Broker. This configuration instructs the OpenShift Serverless Logic Operator to automatically link the supporting services to produce and consume events by using the specified broker.
A workflow deployed in the same namespace with the preview or gitops profile, and without a custom eventing system configuration, is automatically linked to the specified broker.
In production environments, use a production-ready broker, such as the Knative Kafka Broker, for enhanced scalability and reliability.
The following example displays how to configure the SonataFlowPlatform CR for a platform-scoped eventing system:
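A sketch of such a configuration, assuming a hypothetical broker named example-broker:
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: sonataflow-platform-example
  namespace: example-namespace
spec:
  eventing:
    broker:
      ref:
        name: example-broker
        namespace: example-broker-namespace
        apiVersion: eventing.knative.dev/v1
        kind: Broker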
5.3.5.2. Service-scoped eventing system configuration
A service-scoped eventing system configuration allows for fine-grained control over the eventing system, specifically for the Data Index or the Job Service.
For an OpenShift Serverless Logic installation, consider using a platform-scoped eventing system configuration. The service-scoped configuration is intended for advanced use cases only.
5.3.5.3. Data Index eventing system configuration
To configure a service-scoped eventing system for the Data Index, you must use the spec.services.dataIndex.source.ref field in the SonataFlowPlatform CR to refer to a specific Knative Eventing Broker. This configuration instructs the OpenShift Serverless Logic Operator to automatically link the Data Index to consume SonataFlow system events from that Broker.
In production environments, use a production-ready broker, such as the Knative Kafka Broker, for enhanced scalability and reliability.
The following example displays the Data Index eventing system configuration:
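A sketch of such a configuration, with callout numbers matching the descriptions that follow; the broker name is hypothetical:
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: sonataflow-platform-example
  namespace: example-namespace
spec:
  services:
    dataIndex:
      enabled: true
      source:
        ref:
          name: example-broker # 1
          namespace: example-broker-namespace # 2
          apiVersion: eventing.knative.dev/v1
          kind: Broker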
- 1
- Specifies the Knative Eventing Broker from which the Data Index consumes events.
- 2
- Optional: Defines the namespace of the Knative Eventing Broker. If you do not specify a value, the parameter defaults to the SonataFlowPlatform namespace. Consider creating the broker in the same namespace as SonataFlowPlatform.
5.3.5.4. Job Service eventing system configuration
To configure a service-scoped eventing system for the Job Service, you must use the spec.services.jobService.source.ref and spec.services.jobService.sink.ref fields in the SonataFlowPlatform CR. These fields instruct the OpenShift Serverless Logic Operator to automatically link the Job Service to consume and produce SonataFlow system events, based on the provided configuration.
In production environments, use a production-ready broker, such as the Knative Kafka Broker, for enhanced scalability and reliability.
The following example displays the Job Service eventing system configuration:
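A sketch of such a configuration, with callout numbers matching the descriptions that follow; the broker name is hypothetical:
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: sonataflow-platform-example
  namespace: example-namespace
spec:
  services:
    jobService:
      enabled: true
      source:
        ref:
          name: example-broker # 1
          namespace: example-broker-namespace # 2
          apiVersion: eventing.knative.dev/v1
          kind: Broker
      sink:
        ref:
          name: example-broker # 3
          namespace: example-broker-namespace # 4
          apiVersion: eventing.knative.dev/v1
          kind: Broker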
- 1
- Specifies the Knative Eventing Broker from which the Job Service consumes events.
- 2
- Optional: Defines the namespace of the Knative Eventing Broker. If you do not specify a value, the parameter defaults to the SonataFlowPlatform namespace. Consider creating the Broker in the same namespace as SonataFlowPlatform.
- 3
- Specifies the Knative Eventing Broker on which the Job Service produces events.
- 4
- Optional: Defines the namespace of the Knative Eventing Broker. If you do not specify a value, the parameter defaults to the SonataFlowPlatform namespace. Consider creating the Broker in the same namespace as SonataFlowPlatform.
5.3.5.5. Cluster-scoped eventing system configuration for supporting services
When you deploy cluster-scoped supporting services, the supporting services automatically link to the Broker specified in the SonataFlowPlatform CR, which is referenced by the SonataFlowClusterPlatform CR.
5.3.5.6. Eventing system configuration precedence rules for supporting services
The OpenShift Serverless Logic Operator follows a defined order of precedence to configure the eventing system for a supporting service.
Eventing system configuration precedence rules are as follows:
- If the supporting service has its own eventing system configuration, using either the Data Index eventing system or the Job Service eventing system configuration, the supporting service configuration takes precedence.
- If the SonataFlowPlatform CR enclosing the supporting service is configured with a platform-scoped eventing system, that configuration takes precedence.
- If the current cluster is configured with a cluster-scoped eventing system, that configuration takes precedence.
- If none of the previous configurations exist, the supporting service delivers events by direct HTTP calls.
5.3.5.7. Eventing system linking configuration
The OpenShift Serverless Logic Operator automatically creates Knative Eventing objects, such as SinkBindings and triggers, to link supporting services with the eventing system. These objects enable the production and consumption of events by the supporting services.
The following example displays the Knative Eventing objects created for the SonataFlowPlatform CR.
The following example displays how to configure a Knative Kafka Broker for use with the SonataFlowPlatform CR:
Example of a Knative Kafka Broker used by the SonataFlowPlatform CR
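A sketch of such a Broker, assuming the standard Knative Kafka Broker class annotation and a hypothetical config map:
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  name: example-broker
  namespace: example-namespace
  annotations:
    eventing.knative.dev/broker.class: Kafka # 1
spec:
  config:
    apiVersion: v1
    kind: ConfigMap
    name: kafka-broker-config
    namespace: knative-eventing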
- 1
- Use the Kafka class to create a Kafka Knative Broker.
The following command displays the list of triggers set up for the Data Index and Job Service events, showing which services are subscribed to the events:
$ oc get triggers -n example-namespace
Example output
To see the SinkBinding resource for the Job Service, use the following command:
$ oc get sources -n example-namespace
Example output
NAME TYPE RESOURCE SINK READY
sonataflow-platform-example-jobs-service-sb SinkBinding sinkbindings.sources.knative.dev broker:example-broker True
5.3.6. Advanced supporting services configurations
In scenarios where you must apply advanced configurations for supporting services, use the podTemplate field in the SonataFlowPlatform custom resource (CR). This field allows you to customize the service pod deployment by specifying configurations such as the number of replicas, environment variables, container images, and initialization options.
You can configure advanced settings for the service by using the following example:
Advanced configurations example for the Data Index service
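A sketch of such a configuration, with callout numbers matching the descriptions that follow; the environment variable and image names are hypothetical:
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: sonataflow-platform-example
spec:
  services:
    dataIndex:
      enabled: true
      podTemplate:
        replicas: 1 # 1
        container: # 2
          env: # 3
            - name: ANY_ADVANCED_PROPERTY
              value: example-value
          image: registry.example.com/example-data-index:latest # 4
        initContainers: [] # 5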
You can set the 'services' field to either 'dataIndex' or 'jobService' depending on your requirement. The rest of the configuration remains the same.
- 1
- Defines the number of replicas. The default value is 1. In the case of jobService, this value is always overridden to 1 because it operates as a singleton service.
- 2
- Holds specific configurations for the container running the service.
- 3
- Allows you to fine-tune service properties by specifying environment variables.
- 4
- Configures the container image for the service, which is useful if you need to update or customize the image.
- 5
- Configures init containers for the pod, which is useful for setting up prerequisites before the main container starts.
The podTemplate field provides flexibility for tailoring the deployment of each supporting service. It follows the standard PodSpec API, meaning the same API validation rules apply to these fields.
5.3.7. Cluster-scoped supporting services
You can define a cluster-wide set of supporting services that can be consumed by workflows across different namespaces by using the SonataFlowClusterPlatform custom resource (CR). By referencing an existing namespace-specific SonataFlowPlatform CR, you can extend the use of these services cluster-wide.
The following example shows a basic configuration that enables workflows deployed in any namespace to use supporting services deployed in a specific namespace, such as example-namespace:
Example of a SonataFlowClusterPlatform CR
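A minimal sketch of such a CR, assuming the spec.platformRef field:
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowClusterPlatform
metadata:
  name: cluster-platform
spec:
  platformRef:
    name: sonataflow-platform-example
    namespace: example-namespace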
You can override these cluster-wide services within any namespace by configuring that namespace in SonataFlowPlatform.spec.services.
Chapter 6. Configuring workflow services
This section describes how to configure a workflow service by using the OpenShift Serverless Logic Operator. The section outlines key concepts and configuration options that you can reference for customizing your workflow service according to your environment and use case. You can edit workflow configurations, manage specific properties, and define global managed properties to ensure consistent and efficient execution of your workflows.
6.1. Modifying workflow configuration
The OpenShift Serverless Logic Operator derives the workflow configuration from two ConfigMaps for each workflow: one for user-defined properties and one for Operator-managed properties:
- User-defined properties: If your workflow requires particular configurations, ensure that you create a ConfigMap named <workflow-name>-props that includes all the configurations before workflow deployment. For example, if your workflow name is greeting, the ConfigMap name is greeting-props. If such a ConfigMap does not exist, the Operator creates it with empty or default content.
- Managed properties: Automatically generated by the Operator and stored in a ConfigMap named <workflow-name>-managed-props. These properties typically relate to configurations that connect the workflow to supporting services, the eventing system, and so on.
Managed properties always override user-defined properties with the same key. These managed properties are read-only and reset by the Operator during each reconciliation cycle.
Prerequisites
- You have OpenShift Serverless Logic Operator installed on your cluster.
- You have created your OpenShift Serverless Logic project.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have installed the OpenShift CLI (oc).
- You have previously created the workflow user-defined properties ConfigMap, or the Operator has created it.
Procedure
Open your terminal and access the OpenShift Serverless Logic project. Ensure that you are working in the correct project (namespace) where your workflow service is deployed:
$ oc project <your-project-name>
Identify the name of the workflow you want to configure.
For example, if your workflow is named greeting, the user-defined properties are stored in a ConfigMap named greeting-props.
Edit the workflow ConfigMap by executing the following example command:
$ oc edit configmap greeting-props
Replace greeting with the actual name of your workflow.
Modify the application.properties section. Locate the data section and update the application.properties field with your desired configuration.
Example of ConfigMap
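A sketch of such a ConfigMap, with hypothetical property values:
apiVersion: v1
kind: ConfigMap
metadata:
  name: greeting-props
  namespace: example-namespace
data:
  application.properties: |
    quarkus.log.level=INFO
    my.custom.property=example-value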
- After updating the properties, save the file and exit the editor. The updated configuration is applied automatically.
The workflow runtime is based on Quarkus, so all the keys under application.properties
must follow Quarkus property syntax. If the format is invalid, the OpenShift Serverless Logic Operator might overwrite your changes with default values during the next reconciliation cycle.
Verification
To confirm that your changes are applied successfully, execute the following example command:
$ oc get configmap greeting-props -o yaml
6.2. Managed properties in workflow services
The OpenShift Serverless Logic Operator uses managed properties to control essential runtime behavior. These values are stored separately and override user-defined properties during each reconciliation cycle. You can also apply custom managed properties globally by updating the SonataFlowPlatform resource within the same namespace.
Some properties used by the OpenShift Serverless Logic Operator are managed properties and cannot be changed through the standard user configuration. These properties are stored in a dedicated ConfigMap, typically named <workflow-name>-managed-props. If you try to modify any managed property directly, the Operator automatically reverts it to its default value, but preserves your other user-defined changes.
You cannot override the default managed properties set by the Operator using global managed properties. These defaults are always enforced during reconciliation.
The following table lists some core managed properties as an example:
Property Key | Immutable Value | Profile |
---|---|---|
quarkus.http.host | 0.0.0.0 | all |
quarkus.devservices.enabled | false | all |
quarkus.kogito.devservices.enabled | false | all |
Other managed properties include Kubernetes service discovery settings, Data Index location properties, Job Service location properties, and Knative Eventing system configurations.
6.3. Defining global managed properties
You can define custom global managed properties for all workflows in a specific namespace by editing the SonataFlowPlatform resource. These properties are defined under the .spec.properties.flow attribute and are automatically applied to every workflow service in the same namespace.
Prerequisites
- You have OpenShift Serverless Logic Operator installed on your cluster.
- You have created your OpenShift Serverless Logic project.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have installed the OpenShift CLI (oc).
Procedure
Locate the SonataFlowPlatform resource in the same namespace as your workflow services. This is where you define global managed properties.
Open the SonataFlowPlatform resource in your default editor by executing the following command:
$ oc edit sonataflowplatform sonataflow-platform-example
Define custom global managed properties.
In the editor, navigate to the spec.properties.flow section and define your desired properties, as shown in the following example:
Example of a SonataFlowPlatform with flow properties
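A sketch of such a configuration, assuming the flow list accepts name and value pairs:
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: sonataflow-platform-example
spec:
  properties:
    flow:
      - name: quarkus.log.category
        value: INFO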
This configuration adds the quarkus.log.category=INFO property to the managed properties of every workflow service in the namespace.
Optional: Use external ConfigMaps or Secrets. You can also reference values from existing ConfigMap or Secret resources by using the valueFrom attribute, as shown in the following example:
Example of a SonataFlowPlatform properties from ConfigMap and Secret
- 1
- The valueFrom attribute is derived from the Kubernetes EnvVar API and works similarly to how environment variables reference external sources.
- 2
- valueFrom.secretKeyRef pulls the value from a key named AUTH_TOKEN in the petstore-credentials secret.
- 3
- valueFrom.configMapRef pulls the value from a key named PETSTORE_URL in the petstore-props ConfigMap.
Chapter 7. Managing workflow persistence
You can configure a SonataFlow instance to use persistence and store the workflow context in a relational database.
By design, Kubernetes pods are stateless. This behavior can pose challenges for workloads that need to maintain the application state across pod restarts. In the case of OpenShift Serverless Logic, the workflow context is lost when the pod restarts by default.
To ensure workflow recovery in such scenarios, you must configure workflow runtime persistence. Use the SonataFlowPlatform custom resource (CR) or the SonataFlow CR to provide this configuration. The scope of the configuration varies depending on which resource you use.
7.1. Configuring persistence using the SonataFlowPlatform CR
The SonataFlowPlatform custom resource (CR) enables persistence configuration at the namespace level. This approach applies the persistence settings automatically to all workflows deployed in the namespace. It simplifies resource configuration, especially when multiple workflows in the namespace belong to the same application. While this configuration is applied by default, individual workflows in the namespace can override it by using the SonataFlow CR.
The OpenShift Serverless Logic Operator also uses this configuration to set up persistence for supporting services.
The persistence configurations are applied only at the time of workflow deployment. Changes to the SonataFlowPlatform CR do not affect workflows that are already deployed.
Procedure
- Define the SonataFlowPlatform CR and specify the persistence settings in the persistence field under the SonataFlowPlatform CR spec, as shown in the following example:
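A sketch of such a configuration, with callout numbers matching the descriptions that follow; the service, database, and secret names are hypothetical:
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: sonataflow-platform-example
  namespace: example-namespace
spec:
  persistence:
    postgresql:
      serviceRef:
        name: postgres-example # 1
        namespace: postgres-example-namespace # 2
        databaseName: example-database # 3
        port: 5432 # 4
      secretRef:
        name: postgres-secrets-example # 5
        userKey: POSTGRESQL_USER # 6
        passwordKey: POSTGRESQL_PASSWORD # 7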
- 1
- Name of the Kubernetes Service connecting to the PostgreSQL database.
- 2
- Optional: Namespace of the PostgreSQL Service. Defaults to the namespace of the SonataFlowPlatform.
- 3
- Name of the PostgreSQL database for storing workflow data.
- 4
- Optional: Port number to connect to the PostgreSQL service. Defaults to 5432.
- 5
- Name of the Kubernetes Secret containing database credentials.
- 6
- Key in the Secret object that contains the database username.
- 7
- Key in the Secret object that contains the database password.
View the generated environment variables for the workflow.
The following example shows the generated environment variables for a workflow named example-workflow deployed with the earlier SonataFlowPlatform configuration. These configurations relate specifically to persistence and are managed by the OpenShift Serverless Logic Operator. You cannot modify these settings after you apply them.
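A sketch of the persistence-related variables such a workflow container can receive; the variable names assume the Quarkus data source configuration keys:
env:
  - name: QUARKUS_DATASOURCE_USERNAME
    valueFrom:
      secretKeyRef:
        name: postgres-secrets-example
        key: POSTGRESQL_USER
  - name: QUARKUS_DATASOURCE_PASSWORD
    valueFrom:
      secretKeyRef:
        name: postgres-secrets-example
        key: POSTGRESQL_PASSWORD
  - name: QUARKUS_DATASOURCE_DB_KIND
    value: postgresql
  - name: QUARKUS_DATASOURCE_JDBC_URL
    value: jdbc:postgresql://postgres-example.postgres-example-namespace:5432/example-database?currentSchema=example-workflow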
When you use the SonataFlowPlatform persistence, every workflow is configured to use a PostgreSQL schema name equal to the workflow name.
When this persistence configuration is in place, the OpenShift Serverless Logic Operator configures every workflow deployed in this namespace using the preview or gitops profile to connect with the PostgreSQL database, by injecting the relevant JDBC connection parameters as environment variables.
PostgreSQL is currently the only supported database for persistence.
For SonataFlow CR deployments using the preview profile, the OpenShift Serverless Logic build system automatically includes the specific Quarkus extensions required for enabling persistence. This ensures compatibility with persistence mechanisms, streamlining the workflow deployment process.
7.2. Configuring persistence using the SonataFlow CR
The SonataFlow custom resource (CR) enables workflow-specific persistence configuration. You can use this configuration independently, even if SonataFlowPlatform persistence is already set up in the current namespace.
Procedure
- Configure persistence by using the persistence field in the SonataFlow CR specification, as shown in the following example:
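A sketch of such a configuration, with callout numbers matching the descriptions that follow; the service, database, schema, and secret names are hypothetical:
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlow
metadata:
  name: example-workflow
  namespace: example-namespace
spec:
  persistence:
    postgresql:
      serviceRef:
        name: postgres-example # 1
        namespace: postgres-example-namespace # 2
        databaseName: example-database # 3
        databaseSchema: example-schema # 4
        port: 5432 # 5
      secretRef:
        name: postgres-secrets-example # 6
        userKey: POSTGRESQL_USER # 7
        passwordKey: POSTGRESQL_PASSWORD # 8
  # the flow definition of the workflow follows here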
- 1
- Name of the Kubernetes Service that connects to the PostgreSQL database server.
- 2
- Optional: Namespace containing the PostgreSQL Service. Defaults to the workflow namespace.
- 3
- Name of the PostgreSQL database where workflow data is stored.
- 4
- Optional: Name of the database schema for workflow data. Defaults to the workflow name.
- 5
- Optional: Port to connect to the PostgreSQL Service. Defaults to 5432.
- 6
- Name of the Kubernetes Secret containing database credentials.
- 7
- Key in the Secret object containing the database username.
- 8
- Key in the Secret object containing the database password.
This configuration informs the OpenShift Serverless Logic Operator that the workflow must connect to the specified PostgreSQL database server when deployed. The OpenShift Serverless Logic Operator adds the relevant JDBC connection parameters as environment variables to the workflow container.
PostgreSQL is currently the only supported database for persistence.
For SonataFlow CR deployments using the preview profile, the OpenShift Serverless Logic build system includes the required Quarkus extensions to enable persistence automatically.
7.3. Persistence configuration precedence rules
You can use SonataFlow custom resource (CR) persistence independently or alongside SonataFlowPlatform CR persistence. If a SonataFlowPlatform CR persistence configuration exists in the current namespace, the following rules determine which persistence configuration applies:
- If the SonataFlow CR includes a persistence configuration, that configuration takes precedence and applies to the workflow.
- If the SonataFlow CR does not include a persistence configuration and the spec.persistence field is absent, the OpenShift Serverless Logic Operator uses the persistence configuration from the current SonataFlowPlatform, if any.
- To disable persistence for the workflow, explicitly set spec.persistence: {} in the SonataFlow CR. This configuration ensures that the workflow does not inherit persistence settings from the SonataFlowPlatform CR.
7.4. Profile-specific persistence requirements
The persistence configurations provided for both the SonataFlowPlatform custom resource (CR) and the SonataFlow CR apply equally to the preview and gitops profiles. However, you must avoid using these configurations with the dev profile, because this profile ignores them entirely.
The primary difference between the preview and gitops profiles lies in the build process. When using the gitops profile, ensure that the following Quarkus extensions are included in the workflow image during the build process:
groupId | artifactId | version |
---|---|---|
io.quarkus | quarkus-agroal | 3.15.4.redhat-00001 |
io.quarkus | quarkus-jdbc-postgresql | 3.15.4.redhat-00001 |
org.kie | kie-addons-quarkus-persistence-jdbc | 9.103.0.redhat-00003 |
If you are using the registry.redhat.io/openshift-serverless-1/logic-swf-builder-rhel8:1.36.0 builder image to generate your images, you can pass the following build argument to include these extensions:
QUARKUS_EXTENSIONS=io.quarkus:quarkus-agroal:3.15.4.redhat-00001,io.quarkus:quarkus-jdbc-postgresql:3.15.4.redhat-00001,org.kie:kie-addons-quarkus-persistence-jdbc:9.103.0.redhat-00003
7.5. Database schema initialization
When you are using SonataFlow with PostgreSQL persistence, you can initialize the database schema either by enabling Flyway or by manually applying database schema updates by using Data Definition Language (DDL) scripts.
Flyway is managed by the kie-addons-quarkus-flyway runtime module and is disabled by default. To enable Flyway, you must configure it by using one of the following methods:
7.5.1. Flyway configuration in the workflow ConfigMap
To enable Flyway in the workflow ConfigMap, you can add the following property:
Example of enabling Flyway in the workflow ConfigMap
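A sketch of such a ConfigMap, assuming a workflow named example-workflow; the kie.flyway.enabled property name is taken from the manual initialization section later in this chapter:
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-workflow-props
data:
  application.properties: |
    kie.flyway.enabled=true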
7.5.2. Flyway configuration using environment variables in the workflow container
You can enable Flyway by adding an environment variable to the spec.podTemplate.container field in the SonataFlow CR, as shown in the following example:
Example of enabling Flyway by using the workflow container environment variable
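A sketch of such a configuration; the environment variable name assumes the standard MicroProfile Config mapping of kie.flyway.enabled:
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlow
metadata:
  name: example-workflow
spec:
  podTemplate:
    container:
      env:
        - name: KIE_FLYWAY_ENABLED # assumed mapping of kie.flyway.enabled
          value: "true"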
7.5.3. Flyway configuration using SonataFlowPlatform properties
To apply a common Flyway configuration to all workflows within a namespace, you can add the property to the spec.properties.flow field of the SonataFlowPlatform CR, as shown in the following example.
This configuration is applied during workflow deployment. Ensure that the Flyway property is set before deploying workflows.
Example of enabling Flyway by using the SonataFlowPlatform properties
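A sketch of such a configuration:
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: sonataflow-platform-example
spec:
  properties:
    flow:
      - name: kie.flyway.enabled
        value: "true"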
7.5.4. Initializing the database manually by using DDL scripts
If you prefer manual initialization, you must disable Flyway by ensuring that the kie.flyway.enabled property is either not configured or explicitly set to false.
- By default, each workflow uses a schema name equal to the workflow name. Ensure that you manually apply the schema initialization for each workflow.
- If you are using the SonataFlow custom resource (CR) persistence configuration, you can specify a custom schema name.
Procedure
- Download the DDL scripts from the kogito-ddl-9.103.0.redhat-00003-db-scripts.zip location.
- Extract the files.
Run the .sql files located in the root directory on the target PostgreSQL database. Ensure that the files are executed in the order of their version numbers. For example:
- V1.35.0__create_runtime_PostgreSQL.sql
- V10.0.0__add_business_key_PostgreSQL.sql
- V10.0.1__alter_correlation_PostgreSQL.sql
Note: The file version numbers are not associated with the OpenShift Serverless Logic Operator versioning.
Chapter 8. Workflow eventing system
You can set up the eventing system for a SonataFlow workflow.
In an OpenShift Serverless Logic installation, the following types of events are generated:
- Outgoing and incoming events related to workflow business logic.
- Events sent from workflows to the Data Index and Job Service.
- Events sent from the Job Service to the Data Index Service.
The OpenShift Serverless Logic Operator leverages the Knative Eventing system to manage all event communication between these services, ensuring efficient and reliable event handling.
8.1. Platform-scoped eventing system configuration
To configure a platform-scoped eventing system, you can use the spec.eventing.broker.ref field in the SonataFlowPlatform custom resource (CR) to reference a Knative Eventing broker.
This configuration instructs the OpenShift Serverless Logic Operator to automatically link every workflow deployed in the specified namespace, using the preview or gitops profile, to produce and consume events through the defined broker.
The supporting services deployed in the namespace without a custom eventing configuration are also linked to this broker.
In production environments, use a production-ready broker, such as the Knative Kafka Broker, for enhanced scalability and reliability.
The following example displays how to configure the SonataFlowPlatform CR for a platform-scoped eventing system:
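A sketch of such a configuration, assuming a hypothetical broker named example-broker:
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: sonataflow-platform-example
  namespace: example-namespace
spec:
  eventing:
    broker:
      ref:
        name: example-broker
        namespace: example-broker-namespace
        apiVersion: eventing.knative.dev/v1
        kind: Broker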
8.2. Workflow-scoped eventing system configuration
A workflow-scoped eventing system configuration allows for detailed customization of the events produced and consumed by a specific workflow. You can use the spec.sink.ref and spec.sources[] fields in the SonataFlow CR to configure outgoing and incoming events.
8.2.1. Outgoing eventing system configuration
To configure outgoing events, you can use the spec.sink.ref field in the SonataFlow CR. This configuration ensures that the workflow produces events by using the specified Knative Eventing Broker, including both system events and workflow business events.
The following example displays how to configure the SonataFlow CR for a workflow-scoped outgoing eventing system:
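A sketch of such a CR, with callout numbers matching the descriptions that follow; the broker name and event type are hypothetical:
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlow
metadata:
  name: example-workflow
  namespace: example-namespace
spec:
  sink:
    ref:
      name: example-broker # 1
      namespace: example-broker-namespace # 2
      apiVersion: eventing.knative.dev/v1
      kind: Broker
  flow: # 3
    events: # 4
      - name: outEvent1 # 5
        source: ''
        kind: produced
        type: out-event-type1 # 6
    # the remaining workflow definition (start, states, and so on) follows here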
- 1
- Name of the Knative Eventing Broker to use for all the events produced by the workflow, including the SonataFlow system events.
- 2
- Optional: Defines the namespace of the Knative Eventing Broker. If you do not specify a value, the parameter defaults to the SonataFlow namespace. Consider creating the broker in the same namespace as SonataFlow.
- 3
- Flow definition field in the SonataFlow CR.
- 4
- Events definition field in the SonataFlow CR.
- 5
- Example of an outgoing event outEvent1 definition.
- 6
- Event type for the outEvent1 outgoing event.
8.2.2. Incoming eventing system configuration
To configure incoming events, you can use the spec.sources[] field in the SonataFlow CR. You can add an entry for each event type that requires a specific configuration. This setup allows workflows to consume events from different brokers based on the event type.
If an incoming event type lacks a specific Broker configuration, the system applies eventing system configuration precedence rules.
The following example displays how to configure the SonataFlow CR for a workflow-scoped incoming eventing system. The link between a spec.sources[] entry and the corresponding workflow event is established by the event type:
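A sketch of such a CR, with callout numbers matching the descriptions that follow; the broker names are hypothetical:
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlow
metadata:
  name: example-workflow
  namespace: example-namespace
spec:
  sources:
    - eventType: in-event-type1 # 1
      ref:
        name: example-broker1 # 2
        namespace: example-broker1-namespace # 3
        apiVersion: eventing.knative.dev/v1
        kind: Broker
    - eventType: in-event-type2 # 4
      ref:
        name: example-broker2 # 5
        namespace: example-broker2-namespace # 6
        apiVersion: eventing.knative.dev/v1
        kind: Broker
  flow: # 7
    events: # 8
      - name: inEvent1 # 9
        source: ''
        kind: consumed
        type: in-event-type1 # 10
      - name: inEvent2 # 11
        source: ''
        kind: consumed
        type: in-event-type2 # 12
    # the remaining workflow definition (start, states, and so on) follows here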
- 1
- Configures the workflow to consume events of type in-event-type1 by using the specified Knative Eventing Broker.
- 2
- Name of the Knative Eventing Broker to use for the consumption of the events of type in-event-type1 sent to this workflow.
- 3
- Optional: If you do not specify the value, the parameter defaults to the SonataFlow namespace. Consider creating the broker in the same namespace as the SonataFlow workflow.
- 4
- Configures the workflow to consume events of type in-event-type2 by using the specified Knative Eventing Broker.
- 5
- Name of the Knative Eventing Broker to use for the consumption of the events of type in-event-type2 sent to this workflow.
- 6
- Optional: If you do not specify the value, the parameter defaults to the SonataFlow namespace. Consider creating the broker in the same namespace as the SonataFlow workflow.
- 7
- Flow definition field in the SonataFlow CR.
- 8
- Events definition field in the SonataFlow CR.
- 9
- Example of an incoming event inEvent1 definition.
- 10
- Event type for the incoming event inEvent1. The workflow event is linked to the corresponding spec.sources[] entry by using the event type name in-event-type1.
- 11
- Example of an incoming event inEvent2 definition.
- 12
- Event type for the incoming event inEvent2. The workflow event is linked to the corresponding spec.sources[] entry by using the event type name in-event-type2.
8.3. Cluster-scoped eventing system configuration
In a SonataFlowClusterPlatform setup, workflows are automatically linked to the Broker specified in the associated SonataFlowPlatform CR. This linkage follows the eventing system configuration precedence rules.
To ensure proper integration, you can configure the Broker in the SonataFlowPlatform CR that is referenced by the SonataFlowClusterPlatform CR.
The following example displays how to configure the SonataFlowClusterPlatform CR and its reference to the SonataFlowPlatform CR:
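A sketch of both CRs, assuming the spec.platformRef field and a hypothetical broker:
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowClusterPlatform
metadata:
  name: cluster-platform
spec:
  platformRef:
    name: sonataflow-platform-example
    namespace: example-namespace
---
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: sonataflow-platform-example
  namespace: example-namespace
spec:
  eventing:
    broker:
      ref:
        name: example-broker
        apiVersion: eventing.knative.dev/v1
        kind: Broker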
The SonataFlowClusterPlatform CR can refer to any SonataFlowPlatform CR that has already been deployed.
8.4. Eventing system configuration precedence rules
The OpenShift Serverless Logic Operator follows a defined order of precedence to determine the eventing system configuration for a workflow.
Eventing system configuration precedence rules are as follows:
- If the workflow has a defined eventing system, using either the workflow-scoped outgoing or incoming eventing system configuration, that configuration takes priority and applies to the workflow.
- If the SonataFlowPlatform CR enclosing the workflow has a platform-scoped eventing system configured, that configuration is applied to the workflow.
- If the current cluster is configured with a cluster-scoped eventing system, it is applied if no workflow-scoped or platform-scoped configuration exists.
If none of the above configurations are defined, the following behavior applies:
- The workflow uses direct HTTP calls to deliver SonataFlow system events to supporting services.
- The workflow consumes incoming events by HTTP POST calls at the workflow service root path /.
- No eventing system is configured to produce workflow business events. Any attempt to produce such events might result in a failure.
8.5. Linking workflows to the eventing system
The OpenShift Serverless Logic Operator links workflows with the eventing system by using Knative Eventing objects, such as SinkBindings and triggers. These objects are created automatically by the OpenShift Serverless Logic Operator and simplify the production and consumption of workflow events.
The following example shows the Knative Eventing objects created for an example-workflow workflow configured with a platform-scoped eventing system:
The example-broker object is a Kafka class Broker, and its configuration is defined in the kafka-broker-config config map.
The following example displays how to configure a Kafka Knative Broker for use with the SonataFlowPlatform:
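A sketch of such a Broker, assuming the standard Knative Kafka Broker class annotation:
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  name: example-broker
  namespace: example-namespace
  annotations:
    eventing.knative.dev/broker.class: Kafka # 1
spec:
  config:
    apiVersion: v1
    kind: ConfigMap
    name: kafka-broker-config
    namespace: knative-eventing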
- 1
- The Kafka class is used to create the example-broker object.
The following example displays how the example-workflow is automatically linked to the example-broker in the example-namespace namespace for event production and consumption:
- 1
- The example-workflow outgoing events are produced by using the SinkBinding named example-workflow-sb.
- 2
- Events of type in-event-type1 are consumed by using the example-workflow-inevent1-b40c067c-595b-4913-81a4-c8efa980bc11 trigger.
- 3
- Events of type in-event-type2 are consumed by using the example-workflow-inevent2-b40c067c-595b-4913-81a4-c8efa980bc11 trigger.
You can list the automatically created SinkBinding named example-workflow-sb by using the following command:
$ oc get sinkbindings -n example-namespace
Example output
NAME TYPE RESOURCE SINK READY
example-workflow-sb SinkBinding sinkbindings.sources.knative.dev broker:example-broker True
You can use the following command to list the automatically created triggers for event consumption:
$ oc get triggers -n <example-namespace>
Example output
NAME BROKER SINK AGE CONDITIONS READY REASON
example-workflow-inevent1-b40c067c-595b-4913-81a4-c8efa980bc11 example-broker service:example-workflow 16m 7 OK / 7 True
example-workflow-inevent2-b40c067c-595b-4913-81a4-c8efa980bc11 example-broker service:example-workflow 16m 7 OK / 7 True
Chapter 9. Configuring custom Maven mirrors
OpenShift Serverless Logic uses Maven Central by default to resolve Maven artifacts during workflow builds. The provided builder and development images include all required Java libraries to run workflows, but in certain scenarios, such as when you add a custom Quarkus extension, you must download the additional dependencies from Maven Central.
In environments with restricted or firewalled network access, direct access to Maven Central might not be available. In such cases, you can configure the workflow containers to use a custom Maven mirror, such as an internal company registry or repository manager.
You can configure a custom Maven mirror at different levels as follows:
- Per workflow build, by updating the SonataFlowBuild custom resource.
- At the platform level, by updating the SonataFlowPlatform custom resource.
- For development mode deployments, by editing the SonataFlow custom resource.
- When building custom images externally with the builder image.
9.1. Adding a Maven mirror when building workflows
You can configure a Maven mirror by setting the MAVEN_MIRROR_URL environment variable in the SonataFlowBuild or SonataFlowPlatform custom resources (CRs).
The recommended approach is to update the SonataFlowPlatform CR, because the mirror configuration is then propagated automatically to all workflow builds within the platform scope.
Prerequisites
- You have OpenShift Serverless Logic Operator installed on your cluster.
- You have created your OpenShift Serverless Logic project.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have access to a custom Maven mirror or internal repository.
Procedure
Edit the SonataFlowPlatform CR to configure a Maven mirror for all workflow builds in a namespace, as shown in the following example:
Example of Maven mirror configuration in a SonataFlowPlatform CR
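A minimal sketch follows; it assumes the spec.build.template.envs field of the SonataFlowPlatform CR and uses a hypothetical internal repository URL, so verify both against your Operator version and environment:

apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: sonataflow-platform
  namespace: example-namespace
spec:
  build:
    template:
      envs:
        - name: MAVEN_MIRROR_URL # propagated to every workflow build in this namespace
          value: http://nexus.example.com/repository/maven-public/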
This configuration applies to all workflow builds in the same namespace that use the preview profile. When a workflow builder instance runs, it updates the internal Maven settings file to use the specified mirror as the default for external locations such as Maven Central.
Optional: If you need a specific configuration for a single workflow build, create the SonataFlowBuild CR before creating the corresponding SonataFlow CR. The SonataFlowBuild and SonataFlow CRs must have the same name.
Example of Maven mirror configuration in a SonataFlowBuild CR
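The following sketch assumes the spec.envs field of the SonataFlowBuild CR and the same hypothetical mirror URL:

apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowBuild
metadata:
  name: my-workflow # must match the name of the corresponding SonataFlow CR
  namespace: example-namespace
spec:
  envs:
    - name: MAVEN_MIRROR_URL
      value: http://nexus.example.com/repository/maven-public/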
Note: You can use the SonataFlowBuild CR configuration only when you require workflow-specific behavior, for example, debugging. For general use, configure the SonataFlowPlatform CR instead.
9.2. Adding a Maven mirror when deploying in development mode
You can configure a Maven mirror for workflows that run in dev mode by adding the MAVEN_MIRROR_URL environment variable to the SonataFlow custom resource (CR).
Prerequisites
- You have created your OpenShift Serverless Logic project.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have a workflow deployed with the dev profile.
- You have access to a custom Maven mirror or internal repository.
Procedure
Edit the SonataFlow CR to include the Maven mirror configuration as shown in the following example:
Example of Maven mirror configuration on SonataFlow CR
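A minimal sketch follows, assuming the spec.podTemplate.container.env field of the SonataFlow CR and a hypothetical internal mirror URL:

apiVersion: sonataflow.org/v1alpha08
kind: SonataFlow
metadata:
  name: my-workflow
  annotations:
    sonataflow.org/profile: dev
spec:
  podTemplate:
    container:
      env:
        - name: MAVEN_MIRROR_URL # 1
          value: http://nexus.example.com/repository/maven-public/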
1. The MAVEN_MIRROR_URL variable specifies the custom Maven mirror.
Only workflows deployed with the dev profile can use Maven mirrors. Other deployment models run compiled code only, so they do not need to connect to a Maven registry.
9.3. Configuring a Maven mirror on a custom image
You can configure a Maven mirror for workflows that you build into a custom image by setting the MAVEN_MIRROR_URL environment variable in a Dockerfile that is based on the SonataFlow builder image.
Prerequisites
- You have created your OpenShift Serverless Logic project.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have access to a Dockerfile or container build context that uses the SonataFlow builder image.
- You have access to a custom Maven mirror or internal repository.
Procedure
Set the Maven mirror as an environment variable in the Dockerfile as shown in the following example:
Example of custom container file with Maven mirror set as an environment variable
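The following container file is a minimal sketch; the builder image reference and the resource path are illustrative assumptions, so substitute the image and project layout that you actually use:

FROM registry.redhat.io/openshift-serverless-1/logic-swf-builder-rhel8:latest

# Hardcode the mirror so that every image built from this Dockerfile uses it
ENV MAVEN_MIRROR_URL=http://nexus.example.com/repository/maven-public/

# Copy your workflow definition into the project resources (illustrative path)
COPY workflow.sw.json /home/kogito/serverless-workflow-project/src/main/resources/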
The ENV directive ensures that all builds with this Dockerfile automatically use the specified Maven mirror.
Alternatively, set the Maven mirror as a build-time argument in the Dockerfile as shown in the following example:
Example of custom container file with Maven mirror set as an argument
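This sketch assumes the same illustrative builder image; the mirror URL is supplied when the image is built:

FROM registry.redhat.io/openshift-serverless-1/logic-swf-builder-rhel8:latest

# Accept the mirror URL at build time and expose it to the Maven build
ARG MAVEN_MIRROR_URL
ENV MAVEN_MIRROR_URL=${MAVEN_MIRROR_URL}

COPY workflow.sw.json /home/kogito/serverless-workflow-project/src/main/resources/

For example, you can pass the value with Podman as follows:

$ podman build --build-arg MAVEN_MIRROR_URL=http://nexus.example.com/repository/maven-public/ -t my-workflow-image .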
The ARG directive allows you to pass the Maven mirror value dynamically at build time.
Chapter 10. Managing upgrades
10.1. Upgrading OpenShift Serverless Logic Operator from version 1.34.0 to 1.35.0
This section provides step-by-step instructions to upgrade the OpenShift Serverless Logic Operator from version 1.34.0 to 1.35.0. The upgrade process involves preparing the existing workflows and services, updating the Operator, and restoring the workflows after the upgrade.
Different workflow profiles require different upgrade steps. Carefully follow the instructions for each profile.
10.1.1. Preparing for the upgrade
Before starting the upgrade process, you need to prepare your OpenShift Serverless Logic environment. This section outlines the necessary steps to ensure a smooth upgrade from version 1.34.0 to 1.35.0.
The preparation process includes:
- Deleting or scaling workflows based on their profiles.
- Backing up all necessary databases and resources.
- Ensuring you have a record of all custom configurations.
- Running required database migration scripts for workflows using persistence.
Prerequisites
- You have access to the cluster with cluster-admin privileges.
- You have OpenShift Serverless Logic Operator installed on your cluster.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have access to the OpenShift Management Console for Operator upgrades.
- You have installed the OpenShift CLI (oc).
10.1.1.1. Deleting workflows with dev profile
Before upgrading the Operator, you must delete workflows running with the dev profile and redeploy them after the upgrade is completed.
Procedure
- Ensure you have a backup of all necessary Kubernetes resources, including SonataFlow custom resources (CRs), ConfigMaps, or any other related custom configurations.
- Delete the workflow by executing the following command:

$ oc delete -f <my-workflow.yaml> -n <target_namespace>
10.1.1.2. Deleting and migrating workflows with the preview profile
Before upgrading the Operator, you must delete workflows running with the preview profile, and migrate any persisted data. When the upgrade is completed, you must redeploy the workflows.
Procedure
- If you are using persistence, back up the workflow database and ensure the backup includes both database objects and table data.
- Ensure you have a backup of all necessary Kubernetes resources, including SonataFlow custom resources (CRs), ConfigMaps, or any other related custom configurations.
- Delete the workflow by executing the following command:

$ oc delete -f <my-workflow.yaml> -n <target_namespace>

- If you are using persistence, execute the database migration script provided with the 1.35.0 release.
10.1.1.3. Scaling down workflows with the gitops profile
Before upgrading the Operator, you must scale down workflows running with the gitops profile, and scale them up again after the upgrade is completed.
Procedure
Modify the my-workflow.yaml CR and scale down each workflow to 0 before upgrading, as shown in the following example:

spec:
  podTemplate:
    replicas: 0

Apply the updated CR by running the following command:

$ oc apply -f <my-workflow.yaml> -n <target_namespace>

Optional: Alternatively, scale the workflow to 0 by running the following command:

$ oc patch sonataflow <my-workflow> -n <target_namespace> --type=merge -p '{"spec": {"podTemplate": {"replicas": 0}}}'
10.1.1.4. Backing up the Data Index database
You must back up the Data Index database before upgrading to prevent data loss.
Procedure
Take a full backup of the Data Index database, ensuring:
- The backup includes all database objects and not just table data.
- The backup is stored in a secure location.
10.1.1.5. Backing up the Jobs Service database
You must back up the Jobs Service database before upgrading to maintain job scheduling data.
Procedure
Take a full backup of the Jobs Service database, ensuring:
- The backup includes all database objects and not just table data.
- The backup is stored in a secure location.
10.1.2. Upgrading the OpenShift Serverless Logic Operator
To transition OpenShift Serverless Logic Operator (OSL) from version 1.34.0 to 1.35.0, you must upgrade the Operator by using the OpenShift Container Platform web console. This upgrade ensures compatibility with newer features and proper functioning of your workflows.
Prerequisites
- You have access to the cluster with cluster-admin privileges.
- You have OpenShift Serverless Logic Operator installed on your cluster.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have access to the OpenShift Management Console for Operator upgrades.
- You have installed the OpenShift CLI (oc).
Procedure
- In the web console, navigate to the Operators → Installed Operators page.
- Select the openshift-serverless-logic namespace from the Installed Namespace list.
- In the list of installed Operators, find and click the OpenShift Serverless Logic Operator.
- On the Operator details page, click the Subscription tab, and then click Edit Subscription.
- In the Upgrade status section, click the Upgrade available link.
- Click Preview install plan, and then click Approve to start the update.
To monitor the upgrade process, run the following command:
$ oc get subscription logic-operator-rhel8 -n openshift-serverless-logic -o jsonpath='{.status.installedCSV}'

Expected output

logic-operator-rhel8.v1.35.0
Verification
To verify the new Operator version is installed, run the following command:
$ oc get clusterserviceversion logic-operator-rhel8.v1.35.0 -n openshift-serverless-logic

Expected output

NAME                           DISPLAY                               VERSION   REPLACES                       PHASE
logic-operator-rhel8.v1.35.0   OpenShift Serverless Logic Operator   1.35.0    logic-operator-rhel8.v1.34.0   Succeeded
10.1.3. Finalizing the upgrade
After upgrading the OpenShift Serverless Logic Operator to version 1.35.0, you must finalize the upgrade process by restoring or scaling workflows and cleaning up old services. This ensures that your system runs cleanly on the new version and that all dependent components are configured correctly.
Follow the appropriate steps below based on the profile of your workflows and services.
Prerequisites
- You have access to the cluster with cluster-admin privileges.
- You have OpenShift Serverless Logic Operator installed on your cluster.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have access to the OpenShift Management Console for Operator upgrades.
- You have installed the OpenShift CLI (oc).
10.1.3.1. Finalizing the Data Index upgrade
After the Operator upgrade, a new ReplicaSet is automatically created for Data Index 1.35.0. You must delete the old one manually.
Procedure
Verify that the new ReplicaSet exists by listing all ReplicaSets with the following command:

$ oc get replicasets -n <target_namespace> -o custom-columns=Name:metadata.name,Image:spec.template.spec.containers[*].image

Identify the old Data Index ReplicaSet (version 1.34.0) and delete it by running the following command:

$ oc delete replicaset <old_replicaset_name> -n <target_namespace>
10.1.3.2. Finalizing the Job Service upgrade
You must manually clean up the Jobs Service components from the older version to trigger deployment of version 1.35.0 components.
Procedure
Delete the old Jobs Service deployment by running the following command:

$ oc delete deployment <jobs-service-deployment-name> -n <target_namespace>

This triggers automatic cleanup of the older pods and ReplicaSets and initiates a fresh deployment using version 1.35.0.
10.1.3.3. Redeploying workflows with the dev profile
After the upgrade, you must redeploy workflows that use the dev profile and any associated Kubernetes resources.
Procedure
- Ensure all required resources are restored, including SonataFlow custom resources (CRs), ConfigMaps, or any other related custom configurations.
- Redeploy the workflow by running the following command:

$ oc apply -f <my-workflow.yaml> -n <target_namespace>
10.1.3.4. Restoring workflows with the preview profile
Workflows with the preview profile require an additional configuration step before being redeployed.
Procedure
If the workflow uses persistence, add the following property to the ConfigMap associated with the workflow:
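Based on the persistence migration introduced in this release, the property is assumed to be the Flyway flag shown below, set inside the application.properties field of the workflow ConfigMap; verify the exact key against the 1.35.0 release documentation:

kie.flyway.enabled = true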
- Ensure all required resources are recreated, including SonataFlow custom resources (CRs), ConfigMaps, or any other related custom configurations.
- Redeploy the workflow by running the following command:

$ oc apply -f <my-workflow.yaml> -n <target_namespace>
10.1.3.5. Scaling up workflows with the gitops profile
Workflows with the gitops profile that were previously scaled down must be scaled back up to continue operation.
Procedure
Modify the my-workflow.yaml CR and scale up each workflow to 1, as shown in the following example:

spec:
  podTemplate:
    replicas: 1

Apply the updated CR by running the following command:

$ oc apply -f <my-workflow.yaml> -n <target_namespace>

Optional: Alternatively, scale the workflow back to 1 by running the following command:

$ oc patch sonataflow <my-workflow> -n <target_namespace> --type=merge -p '{"spec": {"podTemplate": {"replicas": 1}}}'
10.1.3.6. Verifying the upgrade
After restoring workflows and services, it is essential to verify that the upgrade was successful and that all components are functioning as expected.
Procedure
Check if all workflows and services are running by entering the following command:
$ oc get pods -n <target_namespace>

Ensure that all pods related to workflows, Data Index, and Jobs Service are in a Running or Completed state.
Verify that the OpenShift Serverless Logic Operator is running correctly by entering the following command:

$ oc get clusterserviceversion logic-operator-rhel8.v1.35.0 -n openshift-serverless-logic

Expected output

NAME                           DISPLAY                               VERSION   REPLACES                       PHASE
logic-operator-rhel8.v1.35.0   OpenShift Serverless Logic Operator   1.35.0    logic-operator-rhel8.v1.34.0   Succeeded

Check the Operator logs for any errors by entering the following command:

$ oc logs -l control-plane=sonataflow-operator -n openshift-serverless-logic
10.2. Upgrading OpenShift Serverless Logic Operator from version 1.35.0 to 1.36.0
You can upgrade the OpenShift Serverless Logic Operator from version 1.35.0 to 1.36.0. The upgrade process involves preparing the existing workflows and services, updating the Operator, and restoring the workflows after the upgrade.
Different workflow profiles require different upgrade steps. Follow the instructions for each profile carefully.
10.2.1. Preparing for the upgrade
Before starting the upgrade process, you need to prepare your OpenShift Serverless Logic environment to upgrade from version 1.35.0 to 1.36.0.
The preparation process is as follows:
- Deleting or scaling workflows based on their profiles.
- Backing up all necessary databases and resources.
- Ensuring you have a record of all custom configurations.
Prerequisites
- You have access to the cluster with cluster-admin privileges.
- You have OpenShift Serverless Logic Operator installed on your cluster.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have access to the OpenShift Management Console for Operator upgrades.
- You have installed the OpenShift CLI (oc).
10.2.1.1. Deleting workflows with the dev profile
Before upgrading the Operator, you must delete workflows running with the dev profile and redeploy them after the upgrade is complete.
Procedure
- Ensure you have a backup of all necessary Kubernetes resources, including SonataFlow custom resources (CRs), ConfigMap resources, or any other related custom configurations.
- Delete the workflow by executing the following command:

$ oc delete workflow <workflow_name> -n <target_namespace>
10.2.1.2. Deleting workflows with the preview profile
Before upgrading the Operator, you must delete workflows running with the preview profile. When the upgrade is complete, you must redeploy the workflows.
Procedure
- If you are using persistence, back up the workflow database and ensure the backup includes both database objects and table data.
- Ensure you have a backup of all necessary Kubernetes resources, including SonataFlow custom resources (CRs), ConfigMap resources, or any other related custom configurations.
- Delete the workflow by executing the following command:

$ oc delete workflow <workflow_name> -n <target_namespace>
10.2.1.3. Scaling down workflows with the gitops profile
Before upgrading the Operator, you must scale down workflows running with the gitops profile, and scale them up again after the upgrade is complete.
Procedure
Modify the my-workflow.yaml custom resource (CR) and scale down each workflow to 0 before upgrading, as shown in the following example:

spec:
  podTemplate:
    replicas: 0
  # ...

Apply the updated my-workflow.yaml CR by running the following command:

$ oc apply -f my-workflow.yaml -n <target_namespace>

Optional: Alternatively, scale the workflow to 0 by running the following command:

$ oc patch workflow <workflow_name> -n <target_namespace> --type=merge -p '{"spec": {"podTemplate": {"replicas": 0}}}'
10.2.1.4. Backing up the Data Index database
You must back up the Data Index database before upgrading to prevent data loss.
Procedure
Take a full backup of the Data Index database, ensuring:
- The backup includes all database objects and not just table data.
- The backup is stored in a secure location.
10.2.1.5. Backing up the Jobs Service database
You must back up the Jobs Service database before upgrading to maintain job scheduling data.
Procedure
Take a full backup of the Jobs Service database, ensuring:
- The backup includes all database objects and not just table data.
- The backup is stored in a secure location.
10.2.2. Upgrading the OpenShift Serverless Logic Operator to 1.36.0
You can upgrade the OpenShift Serverless Logic Operator from version 1.35.0 to 1.36.0 by performing the following steps.
Prerequisites
- You have access to the cluster with cluster-admin privileges.
- You have OpenShift Serverless Logic Operator installed on your cluster.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have access to the OpenShift Management Console for Operator upgrades.
- You have installed the OpenShift CLI (oc).
- You have version 1.35.0 of OpenShift Serverless Logic Operator installed.
Procedure
Patch the ClusterServiceVersion (CSV) for the 1.35.0 OpenShift Serverless Logic Operator to update the deployment labels.
Delete the current Operator deployment by running the following command:
$ oc delete deployment logic-operator-rhel8-controller-manager -n openshift-serverless-logic

- In the web console, navigate to the Operators → Installed Operators page.
- In the list of installed Operators, find and click the Operator named OpenShift Serverless Logic Operator.
- Initiate the OpenShift Serverless Logic Operator upgrade to version 1.36.0.
Verification
After applying the upgrade, verify that the Operator is running and in the Succeeded phase by running the following command:

$ oc get clusterserviceversion logic-operator-rhel8.v1.36.0

Example output

NAME                           DISPLAY                               VERSION   REPLACES                       PHASE
logic-operator-rhel8.v1.36.0   OpenShift Serverless Logic Operator   1.36.0    logic-operator-rhel8.v1.35.0   Succeeded
10.2.3. Finalizing the upgrade
After upgrading the OpenShift Serverless Logic Operator to version 1.36.0, you must finalize the upgrade process by restoring or scaling workflows and cleaning up old services. This ensures that your system runs cleanly on the new version and that all dependent components are configured correctly.
Follow the appropriate steps below based on the profile of your workflows and services.
Prerequisites
- You have access to the cluster with cluster-admin privileges.
- You have OpenShift Serverless Logic Operator installed on your cluster.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have access to the OpenShift Management Console for Operator upgrades.
- You have installed the OpenShift CLI (oc).
10.2.3.1. Finalizing the Data Index upgrade
After the Operator upgrade, if your deployment is configured to use a Knative Eventing Kafka Broker, you must delete the old data-index-process-definition trigger that was created by OpenShift Serverless Logic 1.35.0. Optionally, you can also delete the old ReplicaSet resource.
Procedure
List all the triggers by running the following command:

$ oc get triggers -n <target_namespace>

In the output, identify the old data-index-process-definition trigger and delete it by running the following command:

$ oc delete trigger data-index-process-definition-473e1ddbb3ca1d62768187eb80de99bca -n <target_namespace>

After deletion, a new trigger compatible with OpenShift Serverless Logic 1.36.0 is automatically created.
Optional: Identify the old ReplicaSet resource by running the following command:

$ oc get replicasets -o custom-columns=Name:metadata.name,Image:spec.template.spec.containers[*].image -n <target_namespace>

Example output

Name                                                 Image
sonataflow-platform-data-index-service-1111111111    registry.redhat.io/openshift-serverless-1/logic-data-index-postgresql-rhel8:1.35.0
sonataflow-platform-data-index-service-2222222222    registry.redhat.io/openshift-serverless-1/logic-data-index-postgresql-rhel8:1.36.0
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Optional: Delete your old
ReplicaSet
resource by running the following command:oc delete replicaset <old_replicaset_name> -n <target_namespace>
$ oc delete replicaset <old_replicaset_name> -n <target_namespace>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example command based on the example output
oc delete replicaset sonataflow-platform-data-index-service-1111111111 -n <target_namespace>
$ oc delete replicaset sonataflow-platform-data-index-service-1111111111 -n <target_namespace>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
10.2.3.2. Finalizing the Job Service upgrade
After the OpenShift Serverless Logic Operator is upgraded to version 1.36.0, you can optionally delete the old ReplicaSet resource.
Procedure
Identify the old ReplicaSet resource by running the following command:

$ oc get replicasets -o custom-columns=Name:metadata.name,Image:spec.template.spec.containers[*].image -n <target_namespace>

Example output

Name                                           Image
sonataflow-platform-jobs-service-1111111111    registry.redhat.io/openshift-serverless-1/logic-jobs-service-postgresql-rhel8:1.35.0
sonataflow-platform-jobs-service-2222222222    registry.redhat.io/openshift-serverless-1/logic-jobs-service-postgresql-rhel8:1.36.0

Delete your old ReplicaSet resource by running the following command:

$ oc delete replicaset <old_replicaset_name> -n <target_namespace>

Example command based on the example output

$ oc delete replicaset sonataflow-platform-jobs-service-1111111111 -n <target_namespace>
10.2.3.3. Redeploying workflows with the dev profile
After the upgrade, you must redeploy workflows that use the dev profile and any associated Kubernetes resources.
Procedure
- Ensure that all required Kubernetes resources, including the ConfigMap with the application.properties field, are restored before redeploying the workflow.
- Redeploy the workflow by running the following command:

$ oc apply -f <workflow_name> -n <target_namespace>
10.2.3.4. Restoring workflows with the preview profile
After the upgrade, you must redeploy workflows that use the preview profile and any associated Kubernetes resources.
Procedure
- Ensure that all required Kubernetes resources, including the ConfigMap with the application.properties field, are restored before redeploying the workflow.
- Redeploy the workflow by running the following command:

$ oc apply -f <workflow_name> -n <target_namespace>
10.2.3.5. Scaling up workflows with the gitops profile
To continue operation, you must scale up workflows that you previously scaled down with the gitops profile.
Procedure
Modify the my-workflow.yaml custom resource (CR) and scale up each workflow to 1, as shown in the following example:

spec:
  podTemplate:
    replicas: 1
  # ...

Apply the updated CR by running the following command:

$ oc apply -f my-workflow.yaml -n <target_namespace>

Optional: Alternatively, scale the workflow back to 1 by running the following command:

$ oc patch workflow <workflow_name> -n <target_namespace> --type=merge -p '{"spec": {"podTemplate": {"replicas": 1}}}'
10.2.4. Verifying the 1.36.0 upgrade
After restoring workflows and services, verify that the upgrade was successful and all components are functioning as expected.
Procedure
Check if all workflows and services are running by entering the following command:
$ oc get pods -n <target_namespace>

Ensure that all pods related to workflows, Data Index, and Jobs Service are in a Running or Completed state.
Verify that the OpenShift Serverless Logic Operator is running correctly by entering the following command:

$ oc get clusterserviceversion logic-operator-rhel8.v1.36.0 -n openshift-serverless-logic

Example output

NAME                           DISPLAY                               VERSION   REPLACES                       PHASE
logic-operator-rhel8.v1.36.0   OpenShift Serverless Logic Operator   1.36.0    logic-operator-rhel8.v1.35.0   Succeeded

Check the Operator logs for any errors by entering the following command:

$ oc logs -l control-plane=sonataflow-operator -n openshift-serverless-logic