Serverless Logic
Introduction to OpenShift Serverless Logic
Abstract
Chapter 1. Getting started
1.1. Creating and running workflows with the Knative Workflow plugin
You can create and run OpenShift Serverless Logic workflows locally.
1.1.1. Creating a workflow
You can use the create command of the kn workflow plugin to set up a new OpenShift Serverless Logic workflow project in your current directory.
Prerequisites
- You have installed the OpenShift Serverless Logic kn-workflow CLI plugin.
Procedure
Create a new OpenShift Serverless Logic workflow project by running the following command:
$ kn workflow create

By default, the generated project name is new-project. You can change the project name by using the [-n|--name] flag as follows:

Example command

$ kn workflow create --name my-project
1.1.2. Running a workflow locally
You can use the run command of the kn workflow plugin to build and run your OpenShift Serverless Logic workflow project locally.
Prerequisites
- You have installed Podman on your local machine.
- You have installed the OpenShift Serverless Logic kn-workflow CLI plugin.
- You have created an OpenShift Serverless Logic workflow project.
Procedure
From the directory where you created your OpenShift Serverless Logic project, move to your project directory by running the following command:

$ cd ./<your-project-name>

Run the following command to build and run your OpenShift Serverless Logic workflow project:

$ kn workflow run

When the project is ready, the Development UI automatically opens in your browser at localhost:8080/q/dev-ui, and you will find the Serverless Workflow Tools tile available. Alternatively, you can access the tool directly at http://localhost:8080/q/dev-ui/org.apache.kie.sonataflow.sonataflow-quarkus-devui/workflows.
You can execute a workflow locally using a container that runs on your machine. Stop the container with Ctrl+C.
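As a sketch of the kind of definition you can run locally, the following is a minimal Serverless Workflow written in YAML. This is an illustrative example, not the file generated by the create command; the id, name, and state names are placeholder values.

```yaml
# Minimal Serverless Workflow definition (sketch).
# All identifiers here are illustrative placeholders.
id: hello
version: "1.0"
specVersion: "0.8"
name: Hello Workflow
start: Greet
states:
  - name: Greet
    type: inject            # injects a static payload into the workflow data
    data:
      message: "Hello from a local workflow"
    end: true
```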
1.2. Deployment options and deploying workflows
You can deploy Serverless Logic workflows on the cluster using one of three deployment profiles:
- Dev
- Preview
- GitOps
Each profile defines how the Operator builds and manages workflow deployments, including image lifecycle, live updates, and reconciliation behavior.
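The profile is selected with the sonataflow.org/profile annotation on the SonataFlow custom resource. The dev and gitops values appear in the examples later in this chapter; the preview value is assumed here to follow the same convention. A minimal metadata fragment looks like this (the workflow name is a placeholder):

```yaml
metadata:
  name: my-workflow                  # placeholder name
  annotations:
    sonataflow.org/profile: dev      # one of: dev, preview, gitops
```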
1.2.1. Deploying workflows using the Dev profile
You can deploy your local workflow on OpenShift Container Platform using the Dev profile. You can use this deployment to experiment and modify your workflow directly on the cluster, seeing changes almost immediately. The Dev profile is designed for development and testing purposes. Because it automatically reloads the workflow without restarting the container, it is ideal for initial development stages and for testing workflow changes in a live environment.
Prerequisites
- You have installed the OpenShift Serverless Logic Operator on your cluster.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have installed the OpenShift CLI (oc).
Procedure
Create the workflow configuration YAML file.
Example workflow-dev.yaml file

apiVersion: sonataflow.org/v1alpha08
kind: SonataFlow
metadata:
  name: greeting
  annotations:
    sonataflow.org/description: Greeting example on k8s!
    sonataflow.org/version: 0.0.1
    sonataflow.org/profile: dev
spec:
  flow:
    start: ChooseOnLanguage
    functions:
      - name: greetFunction
        type: custom
        operation: sysout
    states:
      - name: ChooseOnLanguage
        type: switch
        dataConditions:
          - condition: "${ .language == \"English\" }"
            transition: GreetInEnglish
          - condition: "${ .language == \"Spanish\" }"
            transition: GreetInSpanish
        defaultCondition: GreetInEnglish
      - name: GreetInEnglish
        type: inject
        data:
          greeting: "Hello from JSON Workflow, "
        transition: GreetPerson
      - name: GreetInSpanish
        type: inject
        data:
          greeting: "Saludos desde JSON Workflow, "
        transition: GreetPerson
      - name: GreetPerson
        type: operation
        actions:
          - name: greetAction
            functionRef:
              refName: greetFunction
              arguments:
                message: ".greeting + .name"
        end: true

To deploy the application, apply the YAML file by entering the following command:
$ oc apply -f <filename> -n <your_namespace>

Verify the deployment and check the status of the deployed workflow by entering the following command:

$ oc get workflow -n <your_namespace> -w

Ensure that your workflow is listed and the status is Running or Completed.

Edit the workflow directly in the cluster by entering the following command:

$ oc edit sonataflow <workflow_name> -n <your_namespace>

After editing, save the changes. The OpenShift Serverless Logic Operator detects the changes and updates the workflow accordingly.
Verification
To ensure the changes are applied correctly, verify the status and logs of the workflow by entering the following commands:
View the status of the workflow by running the following command:
$ oc get sonataflows -n <your_namespace>

View the workflow logs by running the following command:
$ oc logs <workflow_pod_name> -n <your_namespace>
Next steps
After completing the testing, delete the resources to avoid unnecessary usage by running the following command:
$ oc delete sonataflow <workflow_name> -n <your_namespace>
1.2.2. Deploying workflows using the Preview profile
You can deploy your local workflow on OpenShift Container Platform using the Preview profile. This profile allows you to validate and test workflows in a production-like environment directly on the cluster. The Preview profile is ideal for final testing and validation before moving workflows to production, and for quick iteration without directly managing the build pipeline. It also ensures that workflows run smoothly in a production-like setting.
Prerequisites
- You have installed the OpenShift Serverless Logic Operator on your cluster.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have installed the OpenShift CLI (oc).
To deploy a workflow in Preview profile, OpenShift Serverless Logic Operator uses the build system on OpenShift Container Platform, which automatically creates the image for deploying your workflow.
The following sections explain how to build and deploy your workflow on a cluster using the OpenShift Serverless Logic Operator with a SonataFlow custom resource (CR).
1.2.2.1. Configuring workflows in Preview profile
1.2.2.1.1. Configuring the workflow base builder image
By default, the OpenShift Serverless Logic Operator uses the image distributed in the official Red Hat Registry to build workflows. If your scenario requires strict policies for image use, such as security or hardening constraints, you can replace the default image.
To change this image, edit the SonataFlowPlatform custom resource in your namespace.
Prerequisites
- You have installed the OpenShift Serverless Logic Operator on your cluster.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have installed the OpenShift CLI (oc).
Procedure
List the SonataFlowPlatform resources in your namespace by running the following command:

$ oc get sonataflowplatform -n <your_namespace>

Replace <your_namespace> with the name of your namespace.

Patch the SonataFlowPlatform resource with the new builder image by running the following command:

$ oc patch sonataflowplatform <name> --patch 'spec:\n build:\n config:\n baseImage: <your_new_image_full_name_with_tag>' -n <your_namespace>
Verification
Verify that the SonataFlowPlatform CR has been patched correctly by running the following command:

$ oc describe sonataflowplatform <name> -n <your_namespace>

Replace <name> with the name of your SonataFlowPlatform resource and <your_namespace> with the name of your namespace.

Ensure that the baseImage field under spec.build.config reflects the new image.
1.2.2.1.2. Customization for the base builder Dockerfile
The OpenShift Serverless Logic Operator uses the logic-operator-rhel8-builder-config config map in the openshift-serverless-logic namespace. This config map contains the Dockerfile that the Operator uses to build your workflow images. Modifying the Dockerfile can break the build process.
This example is for reference only. The actual version might be slightly different. Do not use this example for your installation.
Example logic-operator-rhel8-builder-config config map CR
apiVersion: v1
data:
  DEFAULT_WORKFLOW_EXTENSION: .sw.json
  Dockerfile: |
    FROM registry.redhat.io/openshift-serverless-1/logic-swf-builder-rhel8:1.33.0 AS builder

    # Variables that can be overridden by the builder
    # To add a Quarkus extension to your application
    ARG QUARKUS_EXTENSIONS
    # Args to pass to the Quarkus CLI add extension command
    ARG QUARKUS_ADD_EXTENSION_ARGS
    # Additional java/mvn arguments to pass to the builder
    ARG MAVEN_ARGS_APPEND

    # Copy from build context to skeleton resources project
    COPY --chown=1001 . ./resources

    RUN /home/kogito/launch/build-app.sh ./resources

    #=============================
    # Runtime Run
    #=============================
    FROM registry.access.redhat.com/ubi9/openjdk-17:latest

    ENV LANG='en_US.UTF-8' LANGUAGE='en_US:en'

    # We make four distinct layers so if there are application changes, the library layers can be re-used
    COPY --from=builder --chown=185 /home/kogito/serverless-workflow-project/target/quarkus-app/lib/ /deployments/lib/
    COPY --from=builder --chown=185 /home/kogito/serverless-workflow-project/target/quarkus-app/*.jar /deployments/
    COPY --from=builder --chown=185 /home/kogito/serverless-workflow-project/target/quarkus-app/app/ /deployments/app/
    COPY --from=builder --chown=185 /home/kogito/serverless-workflow-project/target/quarkus-app/quarkus/ /deployments/quarkus/

    EXPOSE 8080
    USER 185
    ENV AB_JOLOKIA_OFF=""
    ENV JAVA_OPTS="-Dquarkus.http.host=0.0.0.0 -Djava.util.logging.manager=org.jboss.logmanager.LogManager"
    ENV JAVA_APP_JAR="/deployments/quarkus-run.jar"
kind: ConfigMap
metadata:
  name: sonataflow-operator-builder-config
  namespace: sonataflow-operator-system
1.2.2.1.3. Changing resource requirements
You can specify resource requirements for the internal builder pods by creating or editing a SonataFlowPlatform resource.
Example SonataFlowPlatform resource
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: sonataflow-platform
spec:
  build:
    template:
      resources:
        requests:
          memory: "64Mi"
          cpu: "250m"
        limits:
          memory: "128Mi"
          cpu: "500m"
Only one SonataFlowPlatform resource is allowed per namespace. You can fine-tune the resource requirements for a particular workflow. Each workflow instance has a SonataFlowBuild instance created with the same name as the workflow. You can edit the SonataFlowBuild custom resource (CR) and specify the resource requirements there.
Example of SonataFlowBuild CR
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowBuild
metadata:
  name: my-workflow
spec:
  resources:
    requests:
      memory: "64Mi"
      cpu: "250m"
    limits:
      memory: "128Mi"
      cpu: "500m"
These parameters apply only to new build instances.
1.2.2.1.4. Passing arguments to the internal builder
You can customize the build process by passing build arguments to the SonataFlowBuild instance, or by setting default build arguments in the SonataFlowPlatform resource.
Prerequisites
- You have installed the OpenShift Serverless Logic Operator on your cluster.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have installed the OpenShift CLI (oc).
Procedure
Check for the existing SonataFlowBuild instance by running the following command:

$ oc get sonataflowbuild <name> -n <namespace>

Replace <name> with the name of your SonataFlowBuild instance and <namespace> with your namespace.

Add build arguments to the SonataFlowBuild instance by running the following command:

$ oc edit sonataflowbuild <name> -n <namespace>

Add the desired build arguments under the .spec.buildArgs field of the SonataFlowBuild instance:

apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowBuild
metadata:
  name: <name>
spec:
  buildArgs:
    - name: <argument_1>
      value: <value_1>
    - name: <argument_2>
      value: <value_2>

Replace <name> with the name of the existing SonataFlowBuild instance.
Save the file and exit.
A new build with the updated configuration starts.
Set the default build arguments in the SonataFlowPlatform resource by running the following command:

$ oc edit sonataflowplatform <name> -n <namespace>

Add the desired build arguments under the .spec.build.template.buildArgs field of the SonataFlowPlatform resource:

apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: <name>
spec:
  build:
    template:
      buildArgs:
        - name: <argument_1>
          value: <value_1>
        - name: <argument_2>
          value: <value_2>

Replace <name> with the name of the existing SonataFlowPlatform resource.

Save the file and exit.
1.2.2.1.5. Setting environment variables in the internal builder
You can set environment variables in the SonataFlowBuild instance to customize the internal builder.
Prerequisites
- You have installed the OpenShift Serverless Logic Operator on your cluster.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have installed the OpenShift CLI (oc).
Procedure
Check for the existing SonataFlowBuild instance by running the following command:

$ oc get sonataflowbuild <name> -n <namespace>

Replace <name> with the name of your SonataFlowBuild instance and <namespace> with your namespace.

Edit the SonataFlowBuild instance by running the following command:

$ oc edit sonataflowbuild <name> -n <namespace>

Example SonataFlowBuild instance

apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowBuild
metadata:
  name: <name>
spec:
  envs:
    - name: <env_variable_1>
      value: <value_1>
    - name: <env_variable_2>
      value: <value_2>

Save the file and exit. A new build with the updated configuration starts.

Alternatively, you can set the environment variables in the SonataFlowPlatform resource, so that every new build instance uses them as a template.

Example SonataFlowPlatform instance

apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: <name>
spec:
  build:
    template:
      envs:
        - name: <env_variable_1>
          value: <value_1>
        - name: <env_variable_2>
          value: <value_2>
1.2.2.1.6. Changing the base builder image
You can modify the default builder image used by the OpenShift Serverless Logic Operator by editing the logic-operator-rhel8-builder-config config map.
Prerequisites
- You have installed the OpenShift Serverless Logic Operator on your cluster.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have installed the OpenShift CLI (oc).
Procedure
Edit the logic-operator-rhel8-builder-config config map by running the following command:

$ oc edit cm/logic-operator-rhel8-builder-config -n openshift-serverless-logic

Modify the Dockerfile entry. In your editor, locate the Dockerfile entry and change the first line to the desired image.

Example

data:
  Dockerfile: |
    FROM registry.redhat.io/openshift-serverless-1/logic-swf-builder-rhel8:1.33.0
    # Change the image to the desired one

Save the changes.
1.2.2.2. Building and deploying your workflow
You can create a SonataFlow custom resource (CR) to build and deploy your workflow on OpenShift Container Platform.
Prerequisites
- You have installed the OpenShift Serverless Logic Operator on your cluster.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have installed the OpenShift CLI (oc).
Procedure
Create a workflow YAML file similar to the following:
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlow
metadata:
  name: greeting
  annotations:
    sonataflow.org/description: Greeting example on k8s!
    sonataflow.org/version: 0.0.1
spec:
  flow:
    start: ChooseOnLanguage
    functions:
      - name: greetFunction
        type: custom
        operation: sysout
    states:
      - name: ChooseOnLanguage
        type: switch
        dataConditions:
          - condition: "${ .language == \"English\" }"
            transition: GreetInEnglish
          - condition: "${ .language == \"Spanish\" }"
            transition: GreetInSpanish
        defaultCondition: GreetInEnglish
      - name: GreetInEnglish
        type: inject
        data:
          greeting: "Hello from JSON Workflow, "
        transition: GreetPerson
      - name: GreetInSpanish
        type: inject
        data:
          greeting: "Saludos desde JSON Workflow, "
        transition: GreetPerson
      - name: GreetPerson
        type: operation
        actions:
          - name: greetAction
            functionRef:
              refName: greetFunction
              arguments:
                message: ".greeting+.name"
        end: true

Apply the SonataFlow workflow definition to your OpenShift Container Platform namespace by running the following command:

$ oc apply -f <workflow-name>.yaml -n <your_namespace>

Example command for the greetings-workflow.yaml file:

$ oc apply -f greetings-workflow.yaml -n workflows

List all the build configurations by running the following command:

$ oc get buildconfigs -n workflows

Get the logs of the build process by running the following command:

$ oc logs buildconfig/<workflow-name> -n <your_namespace>

Example command for the greeting workflow:

$ oc logs buildconfig/greeting -n workflows
Verification
To verify the deployment, list all the pods by running the following command:
$ oc get pods -n <your_namespace>

Ensure that the pod corresponding to your workflow is running.
Check the running pods and their logs by running the following command:
$ oc logs pod/<pod-name> -n workflows
1.2.2.3. Verifying workflow deployment
You can verify that your OpenShift Serverless Logic workflow is running by performing a test HTTP call from the workflow pod.
Prerequisites
- You have installed the OpenShift Serverless Logic Operator on your cluster.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have installed the OpenShift CLI (oc).
Procedure
Create a workflow YAML file similar to the following:

apiVersion: sonataflow.org/v1alpha08
kind: SonataFlow
metadata:
  name: greeting
  annotations:
    sonataflow.org/description: Greeting example on k8s!
    sonataflow.org/version: 0.0.1
spec:
  flow:
    start: ChooseOnLanguage
    functions:
      - name: greetFunction
        type: custom
        operation: sysout
    states:
      - name: ChooseOnLanguage
        type: switch
        dataConditions:
          - condition: "${ .language == \"English\" }"
            transition: GreetInEnglish
          - condition: "${ .language == \"Spanish\" }"
            transition: GreetInSpanish
        defaultCondition: GreetInEnglish
      - name: GreetInEnglish
        type: inject
        data:
          greeting: "Hello from JSON Workflow, "
        transition: GreetPerson
      - name: GreetInSpanish
        type: inject
        data:
          greeting: "Saludos desde JSON Workflow, "
        transition: GreetPerson
      - name: GreetPerson
        type: operation
        actions:
          - name: greetAction
            functionRef:
              refName: greetFunction
              arguments:
                message: ".greeting+.name"
        end: true

Create a route for the workflow service by running the following command:

$ oc expose svc/<workflow-service-name> -n workflows

This command creates a public URL to access the workflow service.

Set an environment variable for the public URL by running the following command:

$ WORKFLOW_SVC=$(oc get route/<workflow-service-name> -n <namespace> --template='{{.spec.host}}')

Make an HTTP call to the workflow by sending a POST request to the service by running the following command:

$ curl -X POST -H 'Content-Type: application/json' -H 'Accept: application/json' -d '{<"your": "json_payload">}' http://$WORKFLOW_SVC/<endpoint>

Example output

{
  "id": "b5fbfaa3-b125-4e6c-9311-fe5a3577efdd",
  "workflowdata": {
    "name": "John",
    "language": "English",
    "greeting": "Hello from JSON Workflow, "
  }
}

This output shows an example of the expected response if the workflow is running.
1.2.2.4. Restarting a build
To restart a build, add or edit the sonataflow.org/restartBuild: true annotation in the SonataFlowBuild instance.
Prerequisites
- You have installed the OpenShift Serverless Logic Operator on your cluster.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have installed the OpenShift CLI (oc).
Procedure
Check if the SonataFlowBuild instance exists by running the following command:

$ oc get sonataflowbuild <name> -n <namespace>

Edit the SonataFlowBuild instance by running the following command:

$ oc edit sonataflowbuild/<name> -n <namespace>

Replace <name> with the name of your SonataFlowBuild instance and <namespace> with the namespace where your workflow is deployed.

Add the sonataflow.org/restartBuild: true annotation to restart the build:

apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowBuild
metadata:
  name: <name>
  annotations:
    sonataflow.org/restartBuild: true

This action triggers the OpenShift Serverless Logic Operator to start a new build of the workflow.

To monitor the build process, check the build logs by running the following command:

$ oc logs buildconfig/<name> -n <namespace>

Replace <name> with the name of your SonataFlowBuild instance and <namespace> with the namespace where your workflow is deployed.
1.2.3. Deploying workflows using the GitOps profile
Use the GitOps profile only for production deployments. For development, rapid iteration, or testing, use the Dev or Preview profiles instead.
You can deploy your local workflow on OpenShift Container Platform using the GitOps profile. The GitOps profile provides full control over the workflow container image by allowing you to build and manage the image externally, typically through a CI/CD pipeline such as ArgoCD or Tekton. When a container image is defined in the SonataFlow custom resource (CR), the Operator deploys that image directly and does not build the workflow on the cluster.
Prerequisites
- You have installed the OpenShift Serverless Logic Operator on your cluster.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have installed the OpenShift CLI (oc).
Procedure
Specify your container image in your SonataFlow CR:
Example SonataFlow CR with the GitOps profile set

apiVersion: sonataflow.org/v1alpha08
kind: SonataFlow
metadata:
  annotations:
    sonataflow.org/profile: gitops
  name: workflow_name
spec:
  flow:
    # ...
  podTemplate:
    container:
      image: your-registry/workflow_name:tag
  # ...

The flow definition must match the workflow definition used during the build process. When you deploy your workflow using the GitOps profile, the Operator compares this definition with the workflow files embedded in the container image. If the definition and files do not match, the deployment fails.
Apply your CR to deploy the workflow:
$ oc apply -f <filename>
1.2.4. Editing a workflow
When the OpenShift Serverless Logic Operator deploys a workflow service, it creates two config maps to store runtime properties:
- User properties: Defined in a ConfigMap object named after the SonataFlow object with the suffix -props. For example, if your workflow name is greeting, then the ConfigMap name is greeting-props.
- Managed properties: Defined in a ConfigMap object named after the SonataFlow object with the suffix -managed-props. For example, if your workflow name is greeting, then the ConfigMap name is greeting-managed-props.
Managed properties always override any user property with the same key name and cannot be edited by the user. Any change would be overwritten by the Operator at the next reconciliation cycle.
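For example, for a workflow named greeting, the managed properties ConfigMap follows the naming convention described above. The following is a sketch of its shape; the actual property keys are written by the Operator and are shown here only as a placeholder comment:

```yaml
# Managed properties ConfigMap (sketch); do not edit by hand.
apiVersion: v1
kind: ConfigMap
metadata:
  name: greeting-managed-props   # <workflow_name>-managed-props convention
  namespace: default
data:
  application.properties: |
    # Properties written by the Operator; manual edits are
    # overwritten at the next reconciliation cycle.
```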
Prerequisites
- You have installed the OpenShift Serverless Logic Operator on your cluster.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have installed the OpenShift CLI (oc).
Procedure
Open and edit the ConfigMap by running the following command:

$ oc edit cm <workflow_name>-props -n <namespace>

Replace <workflow_name> with the name of your workflow and <namespace> with the namespace where your workflow is deployed.

Add the properties in the application.properties section.

Example of workflow properties stored within a ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app: greeting
  name: greeting-props
  namespace: default
data:
  application.properties: |
    my.properties.key = any-value

Ensure the properties are correctly formatted to prevent the Operator from replacing your configuration with the default one.
- After making the necessary changes, save the file and exit the editor.
1.2.5. Testing a workflow
To verify that your OpenShift Serverless Logic workflow is running correctly, you can perform a test HTTP call from the relevant pod.
Prerequisites
- You have installed the OpenShift Serverless Logic Operator on your cluster.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have installed the OpenShift CLI (oc).
Procedure
Create a route for the specified service in your namespace by running the following command:

$ oc expose svc <service_name> -n <namespace>

Fetch the URL for the newly exposed service by running the following command:

$ WORKFLOW_SVC=$(oc get route/<service_name> --template='{{.spec.host}}')

Perform a test HTTP call by sending a POST request by running the following command:

$ curl -X POST -H 'Content-Type:application/json' -H 'Accept:application/json' -d '<request_body>' http://$WORKFLOW_SVC/<endpoint>

Verify the response to ensure the workflow is functioning as expected.
1.2.6. Troubleshooting a workflow
The OpenShift Serverless Logic Operator deploys its pod with health check probes to ensure that the workflow runs in a healthy state. If changes cause these health checks to fail, the pod stops responding.
Prerequisites
- You have installed the OpenShift Serverless Logic Operator on your cluster.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have installed the OpenShift CLI (oc).
Procedure
Check the workflow status by running the following command:
$ oc get workflow <name> -o jsonpath={.status.conditions} | jq .

To fetch and analyze the logs from the workflow's deployment, run the following command:
$ oc logs deployment/<workflow_name> -f
1.2.7. Deleting a workflow
You can use the oc delete command to remove an OpenShift Serverless Logic workflow from your namespace.
Prerequisites
- You have installed the OpenShift Serverless Logic Operator on your cluster.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have installed the OpenShift CLI (oc).
Procedure
Verify that you have the correct file that defines the workflow you want to delete, for example, workflow.yaml.

Run the oc delete command to remove the workflow from your specified namespace:

$ oc delete -f <your_file> -n <your_namespace>

Replace <your_file> with the name of your workflow file and <your_namespace> with your namespace.
Chapter 2. Global configuration settings
You can set global configuration options for the OpenShift Serverless Logic Operator.
2.1. Prerequisites
- You have installed the OpenShift Serverless Logic Operator in the target cluster.
2.2. Customization of global configurations
After installing the OpenShift Serverless Logic Operator, you can access the logic-operator-rhel8-controllers-config config map in the openshift-serverless-logic namespace.
You can modify any of the options within the controllers_cfg.yaml file of this config map.
The following table outlines all the available global configuration options:
| Configuration key | Default value | Description |
|---|---|---|
| | | The default size of Kaniko persistent volume claim (PVC) when using the internal OpenShift Serverless Logic Operator builder manager. |
| | | How much time (in seconds) to wait for a developer mode workflow to start. This information is used for the controller manager to create new developer mode containers and set up the health check probes. |
| | | Default image used internally by the Operator-managed Kaniko builder to create the warmup pods. |
| | | Default image used internally by the Operator-managed Kaniko builder to create the executor pods. |
| | | The Jobs service image for PostgreSQL to use. If empty, the OpenShift Serverless Logic Operator uses the default Apache community image based on the current OpenShift Serverless Logic Operator version. |
| | | The Jobs service image without persistence to use. If empty, the OpenShift Serverless Logic Operator uses the default Apache community image based on the current OpenShift Serverless Logic Operator version. |
| | | The Data Index service image for PostgreSQL to use. If empty, the OpenShift Serverless Logic Operator uses the default Apache community image based on the current OpenShift Serverless Logic Operator version. |
| | | The Data Index service image without persistence to use. If empty, the OpenShift Serverless Logic Operator uses the default Apache community image based on the current OpenShift Serverless Logic Operator version. |
| | | OpenShift Serverless Logic base builder image used in the internal Dockerfile to build workflow applications in the Preview profile. If empty, the OpenShift Serverless Logic Operator uses the default Apache community image based on the current OpenShift Serverless Logic Operator version. |
| | | The image to use to deploy OpenShift Serverless Logic workflow images in the devmode profile. If empty, the OpenShift Serverless Logic Operator uses the default Apache community image based on the current OpenShift Serverless Logic Operator version. |
| | | The default name of the builder config map in the OpenShift Serverless Logic Operator namespace. |
| | | Quarkus extensions required for workflow persistence. These extensions are used by the OpenShift Serverless Logic Operator builder in cases where the workflow being built has configured PostgreSQL persistence. |
| | | When set to |
| | | When set to |
| | | When set to |
You can edit these settings by updating the logic-operator-rhel8-controllers-config config map with the oc command-line tool.
2.2.1. Impact of global configuration changes
When you update the global configurations, the changes immediately affect only newly created resources. For example, if you change the sonataFlowDevModeImageTag value, only workflows deployed after the change use the new image; existing deployments keep the image they were created with.
2.2.2. Customizing the base builder image
You can directly change the base builder image in the Dockerfile used by the OpenShift Serverless Logic Operator.
Additionally, you can specify the base builder image in the SonataFlowPlatform resource, as shown in the following example:
Example of SonataFlowPlatform with a custom base builder image
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: sonataflow-platform
spec:
  build:
    config:
      baseImage: dev.local/my-workflow-builder:1.0.0
Alternatively, you can also modify the base builder image in the global configuration config map as shown in the following example:
Example of ConfigMap with a custom base builder image
apiVersion: v1
data:
  controllers_cfg.yaml: |
    sonataFlowBaseBuilderImageTag: dev.local/my-workflow-builder:1.0.0
kind: ConfigMap
metadata:
  name: logic-operator-rhel8-controllers-config
  namespace: openshift-serverless-logic
When customizing the base builder image, the following order of precedence is applicable:
- The SonataFlowPlatform configuration in the current context.
- The global configuration entry in the ConfigMap resource.
- The FROM clause in the Dockerfile within the OpenShift Serverless Logic Operator namespace, defined in the logic-operator-rhel8-builder-config config map.

The entry in the SonataFlowPlatform resource takes the highest precedence.
Chapter 3. Managing services
3.1. Configuring OpenAPI services
The OpenAPI Specification (OAS) defines a standard, programming language-agnostic interface for HTTP APIs. You can understand a service’s capabilities without access to the source code, additional documentation, or network traffic inspection. When you define a service by using the OpenAPI, you can understand and interact with it using minimal implementation logic. Just as interface descriptions simplify lower-level programming, the OpenAPI Specification eliminates guesswork in calling a service.
3.1.1. OpenAPI function definition
OpenShift Serverless Logic enables workflows to interact with remote services by using an OpenAPI specification reference in a function.
Example OpenAPI function definition
{
"functions": [
{
"name": "myFunction1",
"operation": "specs/myopenapi-file.yaml#myFunction1"
}
]
}
The operation attribute is a string that consists of two parts, separated by the # character:
- URI: The engine uses this to locate the specification file.
- Operation identifier: You can find this identifier in the OpenAPI specification file.
OpenShift Serverless Logic supports the following URI schemes:
- file: Use this for files located in the file system.
- http or https: Use these for remotely located files.
Ensure the OpenAPI specification files are available during build time. OpenShift Serverless Logic uses an internal code generation feature to send requests at runtime. After you build the application image, OpenShift Serverless Logic will not have access to these files.
If the OpenAPI service you want to add to the workflow does not have a specification file, you can either create one or update the service to generate and expose the file.
3.1.2. Sending REST requests based on the OpenAPI specification
To send REST requests that are based on the OpenAPI specification files, you must perform the following procedures:
- Define the function references
- Access the defined functions in the workflow states
Prerequisites
- You have the OpenShift Serverless Logic Operator installed on your cluster.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have access to the OpenAPI specification files.
Procedure
To define the OpenAPI functions:
- Identify and access the OpenAPI specification files for the services you intend to invoke.
Copy the OpenAPI specification files into your workflow service directory, such as <project_application_dir>/specs.
The following example shows the OpenAPI specification for the multiplication REST service:
Example multiplication REST service OpenAPI specification
openapi: 3.0.3
info:
  title: Generated API
  version: "1.0"
paths:
  /:
    post:
      operationId: doOperation
      parameters:
      - in: header
        name: notUsed
        schema:
          type: string
        required: false
      requestBody:
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/MultiplicationOperation'
      responses:
        "200":
          description: OK
          content:
            application/json:
              schema:
                type: object
                properties:
                  product:
                    format: float
                    type: number
components:
  schemas:
    MultiplicationOperation:
      type: object
      properties:
        leftElement:
          format: float
          type: number
        rightElement:
          format: float
          type: number
To define functions in the workflow, use the operationId from the OpenAPI specification to reference the desired operations in your function definitions.
Example function definitions in the temperature conversion application
{
  "functions": [
    {
      "name": "multiplication",
      "operation": "specs/multiplication.yaml#doOperation"
    },
    {
      "name": "subtraction",
      "operation": "specs/subtraction.yaml#doOperation"
    }
  ]
}
- Ensure that your function definitions reference the correct paths to the OpenAPI files stored in the <project_application_dir>/specs directory.
To access the defined functions in the workflow states:
- Define workflow actions to call the function definitions you added. Ensure each action references a function defined earlier.
Use the functionRef attribute to refer to the specific function by its name. Map the arguments in the functionRef by using the parameters defined in the OpenAPI specification.
The following example shows how to map function arguments in the workflow:
Example for mapping function arguments in workflow
{
  "states": [
    {
      "name": "SetConstants",
      "type": "inject",
      "data": {
        "subtractValue": 32.0,
        "multiplyValue": 0.5556
      },
      "transition": "Computation"
    },
    {
      "name": "Computation",
      "actionMode": "sequential",
      "type": "operation",
      "actions": [
        {
          "name": "subtract",
          "functionRef": {
            "refName": "subtraction",
            "arguments": {
              "leftElement": ".fahrenheit",
              "rightElement": ".subtractValue"
            }
          }
        },
        {
          "name": "multiply",
          "functionRef": {
            "refName": "multiplication",
            "arguments": {
              "leftElement": ".difference",
              "rightElement": ".multiplyValue"
            }
          }
        }
      ],
      "end": {
        "terminate": true
      }
    }
  ]
}
- Check the Operation Object section of the OpenAPI specification to understand how to structure parameters in the request.
- Use jq expressions to extract data from the payload and map it to the required parameters. Ensure the engine maps parameter names according to the OpenAPI specification.
For operations requiring parameters in the request path instead of the body, refer to the parameter definitions in the OpenAPI specification.
For more information about mapping parameters in the request path instead of request body, you can refer to the following PetStore API example:
Example for mapping path parameters
{
  "/pet/{petId}": {
    "get": {
      "tags": ["pet"],
      "summary": "Find pet by ID",
      "description": "Returns a single pet",
      "operationId": "getPetById",
      "parameters": [
        {
          "name": "petId",
          "in": "path",
          "description": "ID of pet to return",
          "required": true,
          "schema": {
            "type": "integer",
            "format": "int64"
          }
        }
      ]
    }
  }
}
The following is an example invocation of a function, in which only one parameter named petId is added in the request path:
Example of calling the PetStore function
{
  "name": "CallPetStore",
  "actionMode": "sequential",
  "type": "operation",
  "actions": [
    {
      "name": "getPet",
      "functionRef": {
        "refName": "getPetById",
        "arguments": {
          "petId": ".petId"
        }
      }
    }
  ]
}
3.1.3. Configuring the endpoint URL of OpenAPI services
After accessing the function definitions in workflow states, you can configure the endpoint URL of OpenAPI services.
Prerequisites
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have created your OpenShift Serverless Logic project.
- You have access to the OpenAPI specification files.
- You have defined the function definitions in the workflow.
- You have access to the defined functions in the workflow states.
Procedure
- Locate the OpenAPI specification file that you want to configure, for example, subtraction.yaml.
- Convert the file name into a valid configuration key by replacing special characters, such as periods (.), with underscores (_) and converting letters to lowercase. For example, change subtraction.yaml to subtraction_yaml.
- To define the configuration key, use the converted file name as the REST client configuration key, and set the key as shown in the following example:
quarkus.rest-client.subtraction_yaml.url=http://myserver.com
To prevent hardcoding URLs in the application.properties file, use environment variable substitution, as shown in the following example:
quarkus.rest-client.subtraction_yaml.url=${SUBTRACTION_URL:http://myserver.com}
In this example:
- Configuration key: quarkus.rest-client.subtraction_yaml.url
- Environment variable: SUBTRACTION_URL
- Fallback URL: http://myserver.com
- Ensure that the SUBTRACTION_URL environment variable is set in your system or deployment environment. If the variable is not found, the application uses the fallback URL http://myserver.com.
- Add the configuration key and URL substitution to the application.properties file:
quarkus.rest-client.subtraction_yaml.url=${SUBTRACTION_URL:http://myserver.com}
- Deploy or restart your application to apply the new configuration settings.
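The file-name-to-key conversion described in this procedure can be sketched in a few lines of Python. The helper names are illustrative and not part of OpenShift Serverless Logic; the sanitization here mirrors the subtraction.yaml example above.

```python
import re

def rest_client_key(file_name: str) -> str:
    """Replace special characters with underscores and lowercase the name,
    as described in the procedure above."""
    return re.sub(r"[^A-Za-z0-9]", "_", file_name).lower()

def url_property(file_name: str) -> str:
    """Build the full quarkus.rest-client URL property key for a spec file."""
    return f"quarkus.rest-client.{rest_client_key(file_name)}.url"

print(url_property("subtraction.yaml"))
# quarkus.rest-client.subtraction_yaml.url
```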
3.2. Configuring OpenAPI services endpoints
OpenShift Serverless Logic uses the kogito.sw.operationIdStrategy property to generate the REST client configuration keys for the services defined in OpenAPI documents.
The kogito.sw.operationIdStrategy property supports the following values: FILE_NAME, FULL_URI, FUNCTION_NAME, and SPEC_TITLE.
FILE_NAME: OpenShift Serverless Logic uses the OpenAPI document file name to create the configuration key. The key is based on the file name, where special characters are replaced with underscores.
Example configuration:
quarkus.rest-client.stock_portfolio_svc_yaml.url=http://localhost:8282/
The OpenAPI file path is <project_application_dir>/specs/stock-portfolio-svc.yaml. The generated key that configures the URL for the REST client is stock_portfolio_svc_yaml.
FULL_URI: OpenShift Serverless Logic uses the complete URI path of the OpenAPI document as the configuration key. The full URI is sanitized to form the key.
Example for Serverless Workflow
{
  "id": "myworkflow",
  "functions": [
    {
      "name": "myfunction",
      "operation": "https://my.remote.host/apicatalog/apis/123/document"
    }
  ]
}
Example configuration:
quarkus.rest-client.apicatalog_apis_123_document.url=http://localhost:8282/
The URI path is https://my.remote.host/apicatalog/apis/123/document. The generated key that configures the URL for the REST client is apicatalog_apis_123_document.
FUNCTION_NAME: OpenShift Serverless Logic combines the workflow ID and the function name referencing the OpenAPI document to generate the configuration key.
Example for Serverless Workflow
{
  "id": "myworkflow",
  "functions": [
    {
      "name": "myfunction",
      "operation": "https://my.remote.host/apicatalog/apis/123/document"
    }
  ]
}
Example configuration:
quarkus.rest-client.myworkflow_myfunction.url=http://localhost:8282/
The workflow ID is myworkflow and the function name is myfunction. The generated key that configures the URL for the REST client is myworkflow_myfunction.
SPEC_TITLE: OpenShift Serverless Logic uses the info.title value from the OpenAPI document to create the configuration key. The title is sanitized to form the key.
Example for OpenAPI document
openapi: 3.0.3
info:
  title: stock-service API
  version: 2.0.0-SNAPSHOT
paths:
  /stock-price/{symbol}:
    ...
Example configuration:
quarkus.rest-client.stock-service_API.url=http://localhost:8282/
The OpenAPI document title is stock-service API. The generated key that configures the URL for the REST client is stock-service_API.
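The four strategies can be illustrated with a short Python sketch that reproduces the keys from the examples in this section. The helper functions are illustrative only; the actual sanitization is performed internally by OpenShift Serverless Logic.

```python
import re

def sanitize(value: str) -> str:
    """Replace special characters with underscores (matches the FILE_NAME
    and FULL_URI examples in this section)."""
    return re.sub(r"[^A-Za-z0-9]", "_", value)

def file_name_key(path: str) -> str:
    """FILE_NAME: key derived from the OpenAPI document file name."""
    return sanitize(path.rsplit("/", 1)[-1])

def full_uri_key(uri: str) -> str:
    """FULL_URI: key derived from the URI path, without scheme and host."""
    without_scheme = uri.split("://", 1)[-1]
    path = without_scheme.split("/", 1)[-1]
    return sanitize(path)

def function_name_key(workflow_id: str, function_name: str) -> str:
    """FUNCTION_NAME: key combining the workflow ID and the function name."""
    return f"{workflow_id}_{function_name}"

def spec_title_key(title: str) -> str:
    """SPEC_TITLE: key derived from info.title; per the example above only
    spaces are replaced, so hyphens are kept."""
    return title.replace(" ", "_")

print(file_name_key("specs/stock-portfolio-svc.yaml"))
print(full_uri_key("https://my.remote.host/apicatalog/apis/123/document"))
print(function_name_key("myworkflow", "myfunction"))
print(spec_title_key("stock-service API"))
```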
3.2.1. Using URI alias
As an alternative to the kogito.sw.operationIdStrategy property, you can assign an alias to a URI by using the workflow-uri-definitions custom extension.
The workflow-uri-definitions extension maps an alias to a URI, so you can use the alias in place of the full URI in function definitions and as the configuration key for the REST client.
Prerequisites
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have access to the OpenAPI specification files.
Procedure
Add the workflow-uri-definitions extension to your workflow. Within this extension, create aliases for your URIs.
Example workflow
{
  "extensions": [
    {
      "extensionid": "workflow-uri-definitions", 1
      "definitions": {
        "remoteCatalog": "https://my.remote.host/apicatalog/apis/123/document" 2
      }
    }
  ],
  "functions": [ 3
    {
      "name": "operation1",
      "operation": "remoteCatalog#operation1"
    },
    {
      "name": "operation2",
      "operation": "remoteCatalog#operation2"
    }
  ]
}
- 1: Set the extension ID to workflow-uri-definitions.
- 2: Set the alias definition by mapping the remoteCatalog alias to the https://my.remote.host/apicatalog/apis/123/document URI.
- 3: Set the function operations by using the remoteCatalog alias with the operation1 and operation2 operation identifiers.
In the application.properties file, configure the REST client by using the alias defined in the workflow.
Example property
quarkus.rest-client.remoteCatalog.url=http://localhost:8282/
In the previous example, the configuration key is set to quarkus.rest-client.remoteCatalog.url, and the URL is set to http://localhost:8282/, which the REST clients use by referring to the remoteCatalog alias.
In your workflow, use the alias when defining functions that operate on the URI.
Example Workflow (continued):
{
  "functions": [
    {
      "name": "operation1",
      "operation": "remoteCatalog#operation1"
    },
    {
      "name": "operation2",
      "operation": "remoteCatalog#operation2"
    }
  ]
}
3.3. Troubleshooting services
Efficient troubleshooting of HTTP-based function invocations, such as those using OpenAPI functions, is crucial for maintaining workflow orchestrations.
To diagnose issues, you can trace HTTP requests and responses.
3.3.1. Tracing HTTP requests and responses
OpenShift Serverless Logic uses the Apache HTTP client to trace HTTP requests and responses.
Prerequisites
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have access to the OpenAPI specification files.
- You have access to the workflow definition and instance IDs for correlating HTTP requests and responses.
- You have access to the log configuration of the application where the HTTP service invocations are occurring.
Procedure
- To trace HTTP requests and responses, turn on debug logging for the Apache HTTP client by adding the following configuration to your application's application.properties file:
# Turning HTTP tracing on
quarkus.log.category."org.apache.http".level=DEBUG
- Restart your application to propagate the log configuration changes.
After restarting, check the logs for HTTP request traces.
Example logs of a traced HTTP request
2023-09-25 19:00:55,242 DEBUG Executing request POST /v2/models/yolo-model/infer HTTP/1.1
2023-09-25 19:00:55,243 DEBUG http-outgoing-0 >> POST /v2/models/yolo-model/infer HTTP/1.1
2023-09-25 19:00:55,243 DEBUG http-outgoing-0 >> Accept: application/json
2023-09-25 19:00:55,243 DEBUG http-outgoing-0 >> Content-Type: application/json
2023-09-25 19:00:55,243 DEBUG http-outgoing-0 >> kogitoprocid: inferencepipeline
2023-09-25 19:00:55,243 DEBUG http-outgoing-0 >> kogitoprocinstanceid: 85114b2d-9f64-496a-bf1d-d3a0760cde8e
2023-09-25 19:00:55,243 DEBUG http-outgoing-0 >> kogitoprocist: Active
2023-09-25 19:00:55,243 DEBUG http-outgoing-0 >> kogitoproctype: SW
2023-09-25 19:00:55,243 DEBUG http-outgoing-0 >> kogitoprocversion: 1.0
2023-09-25 19:00:55,243 DEBUG http-outgoing-0 >> Content-Length: 23177723
2023-09-25 19:00:55,244 DEBUG http-outgoing-0 >> Host: yolo-model-opendatahub-model.apps.trustyai.dzzt.p1.openshiftapps.com
Check the logs for HTTP response traces following the request logs.
Example logs of a traced HTTP response
2023-09-25 19:01:00,738 DEBUG http-outgoing-0 << "HTTP/1.1 500 Internal Server Error[\r][\n]"
2023-09-25 19:01:00,738 DEBUG http-outgoing-0 << "content-type: application/json[\r][\n]"
2023-09-25 19:01:00,738 DEBUG http-outgoing-0 << "date: Mon, 25 Sep 2023 19:01:00 GMT[\r][\n]"
2023-09-25 19:01:00,738 DEBUG http-outgoing-0 << "content-length: 186[\r][\n]"
2023-09-25 19:01:00,738 DEBUG http-outgoing-0 << "set-cookie: 276e4597d7fcb3b2cba7b5f037eeacf5=5427fafade21f8e7a4ee1fa6c221cf40; path=/; HttpOnly; Secure; SameSite=None[\r][\n]"
2023-09-25 19:01:00,738 DEBUG http-outgoing-0 << "[\r][\n]"
2023-09-25 19:01:00,738 DEBUG http-outgoing-0 << "{"code":13, "message":"Failed to load Model due to adapter error: Error calling stat on model file: stat /models/yolo-model__isvc-1295fd6ba9/yolov5s-seg.onnx: no such file or directory"}"
Chapter 4. Managing security
4.1. Authentication for OpenAPI services
To secure an OpenAPI service operation, you define a Security Scheme in the securitySchemes section of the OpenAPI specification file, and the secured operation declares a Security Requirement that references that Security Scheme.
This section outlines the supported authentication types and demonstrates how to configure them to access secured OpenAPI service operations within your workflows.
4.1.1. Overview of OpenAPI service authentication
In OpenShift Serverless Logic, you can secure OpenAPI service operations by using the Security Schemes defined in the OpenAPI specification file.
The Security Schemes are declared in the securitySchemes section of the OpenAPI specification file.
When a workflow calls a secured operation, it references these defined schemes to determine the required authentication configuration.
Example security scheme definitions
"securitySchemes": {
"http-basic-example": {
"type": "http",
"scheme": "basic"
},
"api-key-example": {
"type": "apiKey",
"name": "my-example-key",
"in": "header"
}
}
If the OpenAPI file defines Security Schemes but the operations do not declare Security Requirements, the schemes are not applied by default. To configure a scheme in this situation, you must use the quarkus.openapi-generator.codegen.default-security-scheme property. The default-security-scheme value must match one of the schemes defined in securitySchemes, such as http-basic-example or api-key-example.
For example:
$ quarkus.openapi-generator.codegen.default-security-scheme=http-basic-example
4.1.2. Configuring authentication credentials for OpenAPI services
To invoke OpenAPI service operations secured by authentication schemes, you must configure the corresponding credentials and parameters in your application. OpenShift Serverless Logic uses these configurations to authenticate with the external services during workflow execution.
This section describes how to define and apply the necessary configuration properties for security schemes declared in the OpenAPI specification file. You can provide these properties by using the application.properties file, the workflow ConfigMap, or environment variables in the SonataFlow custom resource (CR).
The security schemes defined in an OpenAPI specification file are global to all the operations that are available in the same file. This means that the configurations set for a particular security scheme also apply to the other secured operations.
Prerequisites
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- Your OpenAPI specification includes one or more security schemes.
- You have access to the OpenAPI specification files.
- You have identified the schemes that you want to configure, such as http-basic-example or api-key-example.
- You have access to the application.properties file, the workflow ConfigMap, or the SonataFlow CR.
Procedure
Use the following format to compose your property keys:
quarkus.openapi-generator.[filename].auth.[security_scheme_name].[auth_property_name]
- filename: The sanitized name of the file containing the OpenAPI specification, such as security_example_json. To sanitize this name, you must replace all non-alphabetic characters with underscores (_).
- security_scheme_name: The sanitized name of the security scheme object definition in the OpenAPI specification file, such as http_basic_example or api_key_example. To sanitize this name, you must replace all non-alphabetic characters with underscores (_).
- auth_property_name: The name of the property to configure, such as username. This property depends on the defined security scheme type.
Note: When you are using environment variables to configure properties, follow the MicroProfile environment variable mapping rules. Replace all non-alphabetic characters in the property key with underscores (_), and convert the entire key to uppercase.
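As an illustration of the MicroProfile mapping rule mentioned in the note, the following Python sketch converts a property key into its environment variable form. The helper is illustrative and not part of any product API.

```python
import re

def to_env_var(property_key: str) -> str:
    """Apply the MicroProfile mapping: non-alphanumeric characters become
    underscores and the whole key is uppercased."""
    return re.sub(r"[^A-Za-z0-9]", "_", property_key).upper()

print(to_env_var(
    "quarkus.openapi-generator.security_example_json.auth.http_basic_example.username"
))
# QUARKUS_OPENAPI_GENERATOR_SECURITY_EXAMPLE_JSON_AUTH_HTTP_BASIC_EXAMPLE_USERNAME
```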
The following examples show how to provide these configuration properties by using the application.properties file, the workflow ConfigMap, and environment variables in the SonataFlow CR.
Example of configuring the credentials by using the application.properties file
quarkus.openapi-generator.security_example_json.auth.http_basic_example.username=myuser
quarkus.openapi-generator.security_example_json.auth.http_basic_example.password=mypassword
Example of configuring the credentials by using the workflow ConfigMap
apiVersion: v1
data:
application.properties: |
quarkus.openapi-generator.security_example_json.auth.http_basic_example.username=myuser
quarkus.openapi-generator.security_example_json.auth.http_basic_example.password=mypassword
kind: ConfigMap
metadata:
labels:
app: example-workflow
name: example-workflow-props
namespace: example-namespace
If the name of the workflow is example-workflow, the name of the ConfigMap that contains the workflow properties must be example-workflow-props.
Example of configuring the credentials by using environment variables in the SonataFlow CR
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlow
metadata:
name: example-workflow
namespace: example-namespace
annotations:
sonataflow.org/description: Example Workflow
sonataflow.org/version: 0.0.1
sonataflow.org/profile: preview
spec:
podTemplate:
container:
env:
- name: QUARKUS_OPENAPI_GENERATOR_SECURITY_EXAMPLE_JSON_AUTH_HTTP_BASIC_EXAMPLE_USERNAME
value: myuser
- name: QUARKUS_OPENAPI_GENERATOR_SECURITY_EXAMPLE_JSON_AUTH_HTTP_BASIC_EXAMPLE_PASSWORD
value: mypassword
4.1.3. Example of basic HTTP authentication
The following example shows how to secure a workflow operation by using the HTTP basic authentication scheme. The security-example.json file defines the sayHelloBasic operation, which is secured by the http-basic-example scheme. You can configure the credentials by using the application.properties file, the workflow ConfigMap, or environment variables.
Example OpenAPI specification with HTTP basic authentication
{
"openapi": "3.1.0",
"info": {
"title": "Http Basic Scheme Example",
"version": "1.0"
},
"paths": {
"/hello-with-http-basic": {
"get": {
"operationId": "sayHelloBasic",
"responses": {
"200": {
"description": "OK",
"content": {
"text/plain": {
"schema": {
"type": "string"
}
}
}
}
},
"security": [{"http-basic-example" : []}]
}
}
},
"components": {
"securitySchemes": {
"http-basic-example": {
"type": "http",
"scheme": "basic"
}
}
}
}
In this example, the sayHelloBasic operation is secured by the http-basic-example scheme defined in the securitySchemes section. To invoke this operation, you must configure the basic authentication credentials.
4.1.3.1. Supported configuration properties for basic HTTP authentication
You can use the following configuration keys to provide authentication credentials for the http-basic-example scheme:
| Description | Property key | Example |
|---|---|---|
| Username credentials | quarkus.openapi-generator.[filename].auth.[security_scheme_name].username | quarkus.openapi-generator.security_example_json.auth.http_basic_example.username=myuser |
| Password credentials | quarkus.openapi-generator.[filename].auth.[security_scheme_name].password | quarkus.openapi-generator.security_example_json.auth.http_basic_example.password=mypassword |
You can replace [filename] with security_example_json and [security_scheme_name] with http_basic_example.
4.1.4. Example of Bearer token authentication
The following example shows how to secure an OpenAPI operation by using the HTTP Bearer authentication scheme. The security-example.json file defines the sayHelloBearer operation, which is secured by the http-bearer-example scheme. You can configure the token by using the application.properties file, the workflow ConfigMap, or environment variables.
Example OpenAPI specification with Bearer token authentication
{
"openapi": "3.1.0",
"info": {
"title": "Http Bearer Scheme Example",
"version": "1.0"
},
"paths": {
"/hello-with-http-bearer": {
"get": {
"operationId": "sayHelloBearer",
"responses": {
"200": {
"description": "OK",
"content": {
"text/plain": {
"schema": {
"type": "string"
}
}
}
}
},
"security": [
{
"http-bearer-example": []
}
]
}
}
},
"components": {
"securitySchemes": {
"http-bearer-example": {
"type": "http",
"scheme": "bearer"
}
}
}
}
In this example, the sayHelloBearer operation is secured by the http-bearer-example scheme. To invoke this operation successfully, you must provide a Bearer token in the configuration.
4.1.4.1. Supported configuration properties for Bearer token authentication
You can use the following configuration property key to provide the Bearer token:
| Description | Property key | Example |
|---|---|---|
| Bearer token | quarkus.openapi-generator.[filename].auth.[security_scheme_name].bearer-token | quarkus.openapi-generator.security_example_json.auth.http_bearer_example.bearer-token=mytoken |
You can replace [filename] with security_example_json and [security_scheme_name] with http_bearer_example.
4.1.5. Example of API key authentication
The following example shows how to secure an OpenAPI service operation by using the apiKey authentication scheme. The security-example.json file defines the sayHelloApiKey operation, which is secured by the api-key-example scheme. You can configure the API key by using the application.properties file, the workflow ConfigMap, or environment variables.
Example OpenAPI specification with API key authentication
{
"openapi": "3.1.0",
"info": {
"title": "Api Key Scheme Example",
"version": "1.0"
},
"paths": {
"/hello-with-api-key": {
"get": {
"operationId": "sayHelloApiKey",
"responses": {
"200": {
"description": "OK",
"content": {
"text/plain": {
"schema": {
"type": "string"
}
}
}
}
},
"security": [{"api-key-example" : []}]
}
}
},
"components": {
"securitySchemes": {
"api-key-example": {
"type": "apiKey",
"name": "api-key-name",
"in": "header"
}
}
}
}
In this example, the sayHelloApiKey operation is secured by the api-key-example scheme defined in the securitySchemes section.
4.1.5.1. Supported configuration properties for API key authentication
You can use the following configuration property to configure the API key:
| Description | Property key | Example |
|---|---|---|
| API key | quarkus.openapi-generator.[filename].auth.[security_scheme_name].api-key | quarkus.openapi-generator.security_example_json.auth.api_key_example.api-key=MY_KEY |
You can replace [filename] with security_example_json and [security_scheme_name] with api_key_example.
The apiKey scheme type uses the name and in attributes. The in attribute specifies how the key is sent:
- When the value is header, the key is passed as an HTTP request header.
- When the value is cookie, the key is passed as an HTTP cookie.
- When the value is query, the key is passed as an HTTP query parameter.
In the example, the key is passed in the HTTP header as api-key-name: MY_KEY.
OpenShift Serverless Logic manages this internally, so no additional configuration is required beyond setting the property value.
4.1.6. Example of clientCredentials OAuth 2.0 authentication
The following example shows how to secure an OpenAPI operation by using the OAuth 2.0 clientCredentials flow. The sayHelloOauth2 operation is secured by the oauth-example scheme.
Example OpenAPI specification with OAuth 2.0
{
"openapi": "3.1.0",
"info": {
"title": "Oauth2 Scheme Example",
"version": "1.0"
},
"paths": {
"/hello-with-oauth2": {
"get": {
"operationId": "sayHelloOauth2",
"responses": {
"200": {
"description": "OK",
"content": {
"text/plain": {
"schema": {
"type": "string"
}
}
}
}
},
"security": [
{
"oauth-example": []
}
]
}
}
},
"components": {
"securitySchemes": {
"oauth-example": {
"type": "oauth2",
"flows": {
"clientCredentials": {
"authorizationUrl": "https://example.com/oauth",
"tokenUrl": "https://example.com/oauth/token",
"scopes": {}
}
}
}
}
}
}
In this example, the sayHelloOauth2 operation is secured by the oauth-example scheme, which uses the clientCredentials flow.
4.1.6.1. OAuth 2.0 Support with the OIDC Client filter extension
OAuth 2.0 token management is handled by a Quarkus OidcClient. To enable it, add the quarkus-oidc-client-filter and quarkus-openapi-generator-oidc extensions to your workflow project.
Example of adding extensions using Maven
<dependency>
<groupId>io.quarkus</groupId>
<artifactId>quarkus-oidc-client-filter</artifactId>
<version>3.15.4.redhat-00001</version>
</dependency>
<dependency>
<groupId>io.quarkiverse.openapi.generator</groupId>
<artifactId>quarkus-openapi-generator-oidc</artifactId>
<version>2.9.0-lts</version>
</dependency>
Example of adding extensions using gitops profile
Ensure that you configure the QUARKUS_EXTENSIONS build argument with the following value when building the workflow image:
$ --build-arg=QUARKUS_EXTENSIONS=io.quarkus:quarkus-oidc-client-filter:3.15.4.redhat-00001,io.quarkiverse.openapi.generator:quarkus-openapi-generator-oidc:2.9.0-lts
Example of adding extensions using preview profile
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
name: sonataflow-platform-example
namespace: example-namespace
spec:
build:
template:
buildArgs:
- name: QUARKUS_EXTENSIONS
value: io.quarkus:quarkus-oidc-client-filter:3.15.4.redhat-00001,io.quarkiverse.openapi.generator:quarkus-openapi-generator-oidc:2.9.0-lts
The extensions that you add in the SonataFlowPlatform CR apply to all workflows built with the preview profile in that namespace.
4.1.6.2. OidcClient configuration
To access the secured operation, define an OidcClient in the application.properties file. The following example shows the OidcClient configuration for the oauth_example scheme:
# adjust these configurations according with the authentication service.
quarkus.oidc-client.oauth_example.auth-server-url=https://example.com/oauth
quarkus.oidc-client.oauth_example.token-path=/token
quarkus.oidc-client.oauth_example.discovery-enabled=false
quarkus.oidc-client.oauth_example.client-id=example-app
quarkus.oidc-client.oauth_example.grant.type=client
quarkus.oidc-client.oauth_example.credentials.client-secret.method=basic
quarkus.oidc-client.oauth_example.credentials.client-secret.value=secret
In this configuration:
- The oauth_example name matches the sanitized name of the oauth-example scheme in the OpenAPI file. The link between the sanitized scheme name and the corresponding OidcClient is achieved by using that simple naming convention.
oauth_examplescheme in the OpenAPI file. The link between the sanitized scheme name and the correspondingoauth-exampleis achieved by using that simple naming convention.OidcClient - The OidcClient handles token generation and renewal automatically during workflow execution.
4.1.7. Example of authorization token propagation
OpenShift Serverless Logic supports token propagation for OpenAPI operations that use the oauth2 or http security scheme types.
You must configure token propagation individually for each security scheme. After it is enabled, all OpenAPI operations secured by the same scheme use the propagated token unless explicitly overridden.
The following example defines the sayHelloOauth2 operation in the security-example.json file, which is secured by the oauth-example scheme that uses the clientCredentials flow.
Example OpenAPI specification with token propagation
{
"openapi": "3.1.0",
"info": {
"title": "Oauth2 Scheme Example",
"version": "1.0"
},
"paths": {
"/hello-with-oauth2": {
"get": {
"operationId": "sayHelloOauth2",
"responses": {
"200": {
"description": "OK",
"content": {
"text/plain": {
"schema": {
"type": "string"
}
}
}
}
},
"security": [
{
"oauth-example": []
}
]
}
}
},
"components": {
"securitySchemes": {
"oauth-example": {
"type": "oauth2",
"flows": {
"clientCredentials": {
"authorizationUrl": "https://example.com/oauth",
"tokenUrl": "https://example.com/oauth/token",
"scopes": {}
}
}
}
}
}
}
4.1.7.1. Supported configuration properties for authorization token propagation
You can use the following configuration keys to enable and customize token propagation:
The tokens are automatically passed to downstream services while the workflow is active. When the workflow enters a waiting state, such as a timer or event-based pause, the token propagation stops. After the workflow resumes, tokens are not re-propagated automatically. You must manage re-authentication if needed.
| Property key | Example | Description |
|---|---|---|
| quarkus.openapi-generator.[filename].auth.[security_scheme_name].token-propagation | quarkus.openapi-generator.security_example_json.auth.oauth_example.token-propagation=true | Enables token propagation for all operations secured with the given scheme. Default is false. |
| quarkus.openapi-generator.[filename].auth.[security_scheme_name].header-name | quarkus.openapi-generator.security_example_json.auth.oauth_example.header-name=MyHeaderName | (Optional) Overrides the default Authorization header with a custom header name to read the token from. |
You can replace [filename] with security_example_json and [security_scheme_name] with oauth_example.
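Assuming the security-example.json file and the oauth-example scheme from this section, a minimal application.properties fragment that enables token propagation could look as follows. The custom header name is illustrative:

```properties
# Propagate the incoming token for every operation secured with oauth-example
quarkus.openapi-generator.security_example_json.auth.oauth_example.token-propagation=true
# Optional: read the token from a custom header instead of Authorization
quarkus.openapi-generator.security_example_json.auth.oauth_example.header-name=MyHeaderName
```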
Chapter 5. Supporting services
5.1. Job service
The Job service schedules and executes tasks in a cloud environment. Independent services implement these tasks, which can be initiated through any of the supported interaction modes, including HTTP calls or Knative Events delivery.
In OpenShift Serverless Logic, the Job service is responsible for controlling the execution of the time-triggered actions. Therefore, all the time-based states that you can use in a workflow are handled by the interaction between the workflow and the Job service.
For example, every time the workflow execution reaches a state with a configured timeout, a corresponding job is created in the Job service, and when the timeout is met, an HTTP callback is executed to notify the workflow.
The main goal of the Job service is to manage active jobs, such as scheduled jobs that need to be executed. When a job reaches its final state, the Job service removes it. To retain job information in a permanent repository, the Job service produces status change events that can be recorded by an external service, such as the Data Index Service.
You do not need to manually install or configure the Job service if you are using the OpenShift Serverless Operator to deploy workflows. The Operator handles these tasks automatically and manages all necessary configurations for each workflow to connect with it.
5.1.1. Job service leader election process
The Job service operates as a singleton service, meaning only one active instance can schedule and execute jobs.
To prevent conflicts when the service is deployed in the cloud, where multiple instances might be running, the Job service supports a leader election process. Only the instance that is elected as the leader manages external communication to receive and schedule jobs.
Non-leader instances remain inactive in a standby state but continue attempting to become the leader through the election process. When a new instance starts, it does not immediately assume leadership. Instead, it enters the leader election process to determine if it can take over the leader role.
If the current leader becomes unresponsive or if it is shut down, another running instance takes over as the leader.
This leader election mechanism uses the underlying persistence backend, which is currently supported only in the PostgreSQL implementation.
5.2. Data Index service
The Data Index service is a dedicated supporting service that stores the data related to the workflow instances and their associated jobs. This service provides a GraphQL endpoint allowing users to query that data.
The Data Index service processes data received through events, which can originate from any workflow or directly from the Job service.
Data Index supports Apache Kafka or Knative Eventing to consume CloudEvents messages from workflows. It indexes and stores this event data in a database, making it accessible through GraphQL. These events provide detailed information about the workflow execution. The Data Index service is central to OpenShift Serverless Logic search, insights, and management capabilities.
The key features of the Data Index service are as follows:
- A flexible data structure
- A distributable, cloud-ready format
- Message-based communication with workflows via Apache Kafka, Knative, and CloudEvents
- A powerful GraphQL-based querying API
When you are using the OpenShift Serverless Operator to deploy workflows, you do not need to manually install or configure the Data Index service. The Operator automatically manages all the necessary configurations for each workflow to connect with it.
5.2.1. GraphQL queries for workflow instances and jobs
To retrieve data about workflow instances and jobs, you can use GraphQL queries.
5.2.1.1. Retrieve data from workflow instances
You can retrieve information about a specific workflow instance by using the following query example:
{
ProcessInstances {
id
processId
state
parentProcessInstanceId
rootProcessId
rootProcessInstanceId
variables
nodes {
id
name
type
}
}
}
5.2.1.2. Retrieve data from jobs
You can retrieve data from a specific job instance by using the following query example:
{
Jobs {
id
status
priority
processId
processInstanceId
executionCounter
}
}
5.2.1.3. Filter query results by using the where parameter
You can filter query results by using the `where` parameter.
Example query to filter by state
{
ProcessInstances(where: {state: {equal: ACTIVE}}) {
id
processId
processName
start
state
variables
}
}
Example query to filter by ID
{
ProcessInstances(where: {id: {equal: "d43a56b6-fb11-4066-b689-d70386b9a375"}}) {
id
processId
processName
start
state
variables
}
}
By default, filters are combined by using the AND operator. You can modify this behavior by explicitly combining filters with the AND or OR operators.
Example query to combine filters with the OR Operator
{
ProcessInstances(where: {or: {state: {equal: ACTIVE}, rootProcessId: {isNull: false}}}) {
id
processId
processName
start
end
state
}
}
Example query to combine filters with the AND and OR Operators
{
ProcessInstances(where: {and: {processId: {equal: "travels"}, or: {state: {equal: ACTIVE}, rootProcessId: {isNull: false}}}}) {
id
processId
processName
start
end
state
}
}
Depending on the attribute type, you can use the following available operators:
| Attribute type | Available operators |
|---|---|
| String array | |
| String | |
| ID | |
| Boolean | |
| Numeric | |
| Date | |
5.2.1.4. Sort query results by using the orderBy parameter
You can sort query results based on workflow attributes by using the `orderBy` parameter. You can specify the sorting order as `ASC` (ascending) or `DESC` (descending).
Example query to sort by the start time in an ASC order
{
ProcessInstances(where: {state: {equal: ACTIVE}}, orderBy: {start: ASC}) {
id
processId
processName
start
end
state
}
}
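Building on the preceding example, the following hypothetical query sketches how multiple sort attributes might be combined, sorting first by process name and then by descending start time. The exact set of sortable attributes, and whether multiple sort keys are supported, depends on your Data Index schema:

```graphql
{
  ProcessInstances(orderBy: {processName: ASC, start: DESC}) {
    id
    processId
    processName
    start
    state
  }
}
```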
5.2.1.5. Limit the number of results by using the pagination parameter
You can control the number of returned results and specify an offset by using the `pagination` parameter.
Example query to limit results to 10, starting from offset 0
{
ProcessInstances(where: {state: {equal: ACTIVE}}, orderBy: {start: ASC}, pagination: {limit: 10, offset: 0}) {
id
processId
processName
start
end
state
}
}
5.3. Managing supporting services
This section provides an overview of the supporting services essential for OpenShift Serverless Logic. It specifically focuses on configuring and deploying the Data Index service and Job Service supporting services using the OpenShift Serverless Logic Operator.
In a typical OpenShift Serverless Logic installation, you must deploy both services to ensure successful workflow execution. The Data Index service allows for efficient data management, while the Job Service ensures reliable job handling.
5.3.1. Supporting services and workflow integration
When you deploy a supporting service in a given namespace, you can choose between an enabled or disabled deployment. An enabled deployment signals the OpenShift Serverless Logic Operator to automatically intercept workflow deployments that use the `preview` or `gitops` profile in the namespace and configure them to connect with the service.
For example, when the Data Index service is enabled, workflows are automatically configured to send status change events to it. Similarly, enabling the Job Service ensures that a job is created whenever a workflow requires a timeout. The OpenShift Serverless Logic Operator also configures the Job Service to send events to the Data Index service, facilitating seamless integration between the services.
The OpenShift Serverless Logic Operator does not just deploy supporting services; it also manages other necessary configurations to ensure successful workflow execution. All these configurations are handled automatically, and you only need to provide the supporting services configuration in the `SonataFlowPlatform` CR.
Deploying only one of the supporting services or using a disabled deployment are advanced use cases. In a standard installation, you must enable both services to ensure smooth workflow execution.
5.3.2. Supporting services deployment with the SonataFlowPlatform CR
To deploy supporting services, configure the `dataIndex` and `jobService` subfields of the `spec.services` section in the `SonataFlowPlatform` CR. Each service configuration is handled independently, allowing you to customize these settings alongside other configurations in the `SonataFlowPlatform` CR.
See the following scaffold example configuration for deploying supporting services:
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
name: sonataflow-platform-example
namespace: example-namespace
spec:
services:
dataIndex:
enabled: true
# Specific configurations for the Data Index Service
# might be included here
jobService:
enabled: true
# Specific configurations for the Job Service
# might be included here
1. Data Index service configuration field.
2. Setting `enabled: true` deploys the Data Index service. If set to `false` or omitted, the deployment is disabled. The default value is `false`.
3. Job Service configuration field.
4. Setting `enabled: true` deploys the Job Service. If set to `false` or omitted, the deployment is disabled. The default value is `false`.
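As a sketch of the advanced use case of deploying only one supporting service, the following hypothetical configuration enables the Data Index while leaving the Job Service disabled; the field names follow the scaffold above:

```yaml
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: sonataflow-platform-example
  namespace: example-namespace
spec:
  services:
    dataIndex:
      enabled: true
    jobService:
      # Explicitly disabled; equivalent to omitting the field
      enabled: false
```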
5.3.3. Supporting services scope
The supporting services deployed by using the `SonataFlowPlatform` CR are scoped to the namespace of that CR, meaning only workflows deployed in the same namespace can use them.
This feature is particularly useful when separate instances of supporting services are required for different sets of workflows. For example, you can deploy an application in isolation with its workflows and supporting services, ensuring they remain independent from other deployments.
5.3.4. Supporting services persistence configurations
The persistence configuration for supporting services in OpenShift Serverless Logic can be either ephemeral or PostgreSQL, depending on the needs of your environment. Ephemeral persistence is ideal for development and testing, while PostgreSQL persistence is recommended for production environments.
5.3.4.1. Ephemeral persistence configuration
The ephemeral persistence uses an embedded PostgreSQL database that is dedicated to each service. The OpenShift Serverless Logic Operator recreates this database with every service restart, making it suitable only for development and testing purposes. You do not need any additional configuration other than the following `SonataFlowPlatform` CR configuration:
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
name: sonataflow-platform-example
namespace: example-namespace
spec:
services:
dataIndex:
enabled: true
# Specific configurations for the Data Index Service
# might be included here
jobService:
enabled: true
# Specific configurations for the Job Service
# might be included here
5.3.4.2. Database migration configuration
Database migration refers to either initializing a given Data Index or Jobs Service database to its respective schema, or applying data or schema updates when new versions are released. You must configure the database migration strategy individually for each supporting service by using the `dataIndex.persistence.dbMigrationStrategy` and `jobService.persistence.dbMigrationStrategy` fields. The default strategy is `service`.
Database migration is supported only when you use the PostgreSQL persistence configuration.
You can configure any of the following database migration strategies:
5.3.4.2.1. Job-based database migration strategy
When you configure the job-based strategy, the OpenShift Serverless Logic Operator uses a dedicated Kubernetes `Job` to execute the database migration before deploying each supporting service. The supporting service starts only after its migration `Job` completes successfully.
5.3.4.2.2. Service-based database migration strategy
When you configure the service-based strategy, the database migration is managed directly by each supporting service. The migration is executed as part of the service startup sequence. In worst-case scenarios, a service might start with failures if the migration is unsuccessful. Service-based database migration is the default strategy when you do not specify any configuration.
5.3.4.2.3. None migration strategy
When you configure the `none` strategy, no automatic database migration is performed for the supporting service.
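A minimal sketch, reusing the example platform names from this chapter, of selecting a migration strategy per service; `none` here assumes you manage the Data Index schema yourself:

```yaml
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: sonataflow-platform-example
  namespace: example-namespace
spec:
  services:
    dataIndex:
      enabled: true
      persistence:
        # No automatic migration; the schema must be managed externally
        dbMigrationStrategy: none
    jobService:
      enabled: true
      persistence:
        # Default strategy: migration runs as part of service startup
        dbMigrationStrategy: service
```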
5.3.4.3. PostgreSQL persistence configuration
For PostgreSQL persistence, you must set up a PostgreSQL server instance on your cluster. The administration of this instance remains independent of the OpenShift Serverless Logic Operator control. To connect a supporting service with the PostgreSQL server, you must configure the appropriate database connection parameters.
You can configure PostgreSQL persistence in the `SonataFlowPlatform` CR.
Example of PostgreSQL persistence configuration
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
name: sonataflow-platform-example
namespace: example-namespace
spec:
services:
dataIndex:
enabled: true
persistence:
dbMigrationStrategy: job
postgresql:
serviceRef:
name: postgres-example
namespace: postgres-example-namespace
databaseName: example-database
databaseSchema: data-index-schema
port: 1234
secretRef:
name: postgres-secrets-example
userKey: POSTGRESQL_USER
passwordKey: POSTGRESQL_PASSWORD
jobService:
enabled: true
persistence:
dbMigrationStrategy: job
postgresql:
# Specific database configuration for the Job Service
# might be included here.
1. Optional: Database migration strategy to use. Defaults to `service`.
2. Name of the service to connect with the PostgreSQL database server.
3. Optional: Defines the namespace of the PostgreSQL service. Defaults to the `SonataFlowPlatform` namespace.
4. Defines the name of the PostgreSQL database for storing supporting service data.
5. Optional: Specifies the schema for storing supporting service data. The default value is the `SonataFlowPlatform` name, suffixed with `-data-index-service` or `-jobs-service`. For example, `sonataflow-platform-example-data-index-service`.
6. Optional: Port number to connect with the PostgreSQL service. The default value is `5432`.
7. Defines the name of the secret containing the username and password for database access.
8. Defines the name of the key in the secret that contains the username to connect with the database.
9. Defines the name of the key in the secret that contains the password to connect with the database.
You can configure each service’s persistence independently by using the respective persistence field.
Create the secrets to access PostgreSQL by running the following command:
$ oc create secret generic <postgresql_secret_name> \
--from-literal=POSTGRESQL_USER=<user> \
--from-literal=POSTGRESQL_PASSWORD=<password> \
-n <namespace>
5.3.4.4. Common PostgreSQL persistence configuration
The OpenShift Serverless Logic Operator automatically connects supporting services to the common PostgreSQL server configured in the `spec.persistence` field of the `SonataFlowPlatform` CR.
The following precedence rules apply:
- If you configure a specific persistence for a supporting service, for example `services.dataIndex.persistence`, the service uses that configuration.
- If you do not configure persistence for a service, the system uses the common persistence configuration from the current platform.
When using a common PostgreSQL configuration, each service schema is automatically set as the `SonataFlowPlatform` name, suffixed with `-data-index-service` or `-jobs-service`, for example, `sonataflow-platform-example-data-index-service`.
5.3.4.5. Platform-scoped PostgreSQL persistence configuration
You can configure a common PostgreSQL service and database for all supporting services by using the `spec.persistence.postgresql` field of the `SonataFlowPlatform` CR. Workflows deployed with the `preview` or `gitops` profile can also use this configuration.
The following rules apply when configuring platform-scoped persistence:
- If a supporting service has its own persistence configuration, for example, if `services.dataIndex.persistence.postgresql` is set, that configuration takes precedence.
- If a supporting service does not have a custom persistence configuration, the configuration is inherited from the current platform.
- If a supporting service requires a specific database migration strategy, configure it by using the `dataIndex.persistence.dbMigrationStrategy` and `jobService.persistence.dbMigrationStrategy` fields.
The following `SonataFlowPlatform` CR example displays a platform-scoped PostgreSQL persistence configuration:
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
name: sonataflow-platform-example
namespace: example-namespace
spec:
persistence:
postgresql:
serviceRef:
name: postgres-example
namespace: postgres-example-namespace
databaseName: example-database
port: 1234
secretRef:
name: postgres-secrets-example
userKey: POSTGRESQL_USER
passwordKey: POSTGRESQL_PASSWORD
dataIndex:
enabled: true
persistence:
dbMigrationStrategy: job
jobService:
enabled: true
persistence:
dbMigrationStrategy: service
1. Name of the Kubernetes service to connect to the PostgreSQL database server.
2. Optional: Namespace containing the PostgreSQL service. Defaults to the `SonataFlowPlatform` namespace.
3. Name of the PostgreSQL database to store supporting services and workflows data.
4. Optional: Port to connect to the PostgreSQL service. Defaults to `5432`.
5. Name of the Kubernetes Secret that contains database credentials.
6. Secret key that stores the database username.
7. Secret key that stores the database password.
8. Optional: Database migration strategy for the Data Index. Defaults to `service`.
9. Optional: Database migration strategy for the Jobs Service. Defaults to `service`. You can configure distinct strategies per service if needed.
5.3.5. Supporting services eventing system configurations
For an OpenShift Serverless Logic installation, the following types of events are generated:
- Outgoing and incoming events related to workflow business logic.
- Events sent from workflows to the Data Index and Job Service.
- Events sent from the Job Service to the Data Index Service.
The OpenShift Serverless Logic Operator leverages the Knative Eventing system to manage all event communication between these services, ensuring efficient and reliable event handling.
5.3.5.1. Platform-scoped eventing system configuration
To configure a platform-scoped eventing system, you can use the `spec.eventing.broker.ref` field in the `SonataFlowPlatform` CR to reference a Knative Eventing Broker.
A workflow deployed in the same namespace with the `preview` or `gitops` profile is automatically linked to this Broker, together with the supporting services.
In production environments, use a production-ready broker, such as the Knative Kafka Broker, for enhanced scalability and reliability.
The following example displays how to configure the `SonataFlowPlatform` CR with a platform-scoped eventing system:
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
name: sonataflow-platform-example
namespace: example-namespace
spec:
eventing:
broker:
ref:
name: example-broker
namespace: example-broker-namespace
apiVersion: eventing.knative.dev/v1
kind: Broker
5.3.5.2. Service-scoped eventing system configuration
A service-scoped eventing system configuration allows for fine-grained control over the eventing system, specifically for the Data Index or the Job Service.
For an OpenShift Serverless Logic installation, consider using a platform-scoped eventing system configuration. The service-scoped configuration is intended for advanced use cases only.
5.3.5.3. Data Index eventing system configuration
To configure a service-scoped eventing system for the Data Index, you must use the `spec.services.dataIndex.source.ref` field in the `SonataFlowPlatform` CR to reference the Knative Eventing Broker from which the Data Index consumes events.
In production environments, use a production-ready broker, such as the Knative Kafka Broker, for enhanced scalability and reliability.
The following example displays the Data Index eventing system configuration:
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
name: sonataflow-platform-example
spec:
services:
dataIndex:
source:
ref:
name: data-index-source-example-broker
namespace: data-index-source-example-broker-namespace
apiVersion: eventing.knative.dev/v1
kind: Broker
1. Specifies the Knative Eventing Broker from which the Data Index consumes events.
2. Optional: Defines the namespace of the Knative Eventing Broker. If you do not specify a value, the parameter defaults to the `SonataFlowPlatform` namespace. Consider creating the Broker in the same namespace as `SonataFlowPlatform`.
5.3.5.4. Job Service eventing system configuration
To configure a service-scoped eventing system for the Job Service, you must use the `spec.services.jobService.source.ref` and `spec.services.jobService.sink.ref` fields in the `SonataFlowPlatform` CR to reference the Knative Eventing Brokers from which the Job Service consumes events and on which it produces events.
In production environments, use a production-ready broker, such as the Knative Kafka Broker, for enhanced scalability and reliability.
The following example displays the Job Service eventing system configuration:
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
name: sonataflow-platform-example
spec:
services:
jobService:
source:
ref:
name: jobs-service-source-example-broker
namespace: jobs-service-source-example-broker-namespace
apiVersion: eventing.knative.dev/v1
kind: Broker
sink:
ref:
name: jobs-service-sink-example-broker
namespace: jobs-service-sink-example-broker-namespace
apiVersion: eventing.knative.dev/v1
kind: Broker
1. Specifies the Knative Eventing Broker from which the Job Service consumes events.
2. Optional: Defines the namespace of the Knative Eventing Broker. If you do not specify a value, the parameter defaults to the `SonataFlowPlatform` namespace. Consider creating the Broker in the same namespace as `SonataFlowPlatform`.
3. Specifies the Knative Eventing Broker on which the Job Service produces events.
4. Optional: Defines the namespace of the Knative Eventing Broker. If you do not specify a value, the parameter defaults to the `SonataFlowPlatform` namespace. Consider creating the Broker in the same namespace as `SonataFlowPlatform`.
5.3.5.5. Cluster-scoped eventing system configuration for supporting services
When you deploy cluster-scoped supporting services, the supporting services automatically link to the Broker specified in the `SonataFlowPlatform` CR that is referenced by the `SonataFlowClusterPlatform` CR.
5.3.5.6. Eventing system configuration precedence rules for supporting services
The OpenShift Serverless Logic Operator follows a defined order of precedence to configure the eventing system for a supporting service.
Eventing system configuration precedence rules are as follows:
- If the supporting service has its own eventing system configuration, using either the Data Index eventing system or the Job Service eventing system configuration, the supporting service configuration takes precedence.
- If the `SonataFlowPlatform` CR enclosing the supporting service is configured with a platform-scoped eventing system, that configuration takes precedence.
- If the current cluster is configured with a cluster-scoped eventing system, that configuration takes precedence.
- If none of the previous configurations exist, the supporting service delivers events by direct HTTP calls.
5.3.5.7. Eventing system linking configuration
The OpenShift Serverless Logic Operator automatically creates Knative Eventing objects, such as `SinkBindings` and triggers, that link the supporting services with the eventing system.
The following example displays a `SonataFlowPlatform` CR for which the Operator creates these Knative Eventing objects:
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
name: sonataflow-platform-example
namespace: example-namespace
spec:
eventing:
broker:
ref:
name: example-broker
apiVersion: eventing.knative.dev/v1
kind: Broker
services:
dataIndex:
enabled: true
jobService:
enabled: true
The following example displays how to configure a Knative Kafka Broker for use with the `SonataFlowPlatform` CR:
Example of a Knative Kafka Broker used by the SonataFlowPlatform CR
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
annotations:
eventing.knative.dev/broker.class: Kafka
name: example-broker
namespace: example-namespace
spec:
config:
apiVersion: v1
kind: ConfigMap
name: kafka-broker-config
namespace: knative-eventing
1. Use the `Kafka` class to create a Knative Kafka Broker.
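The Kafka Broker above references a `kafka-broker-config` ConfigMap in the `knative-eventing` namespace. The following is a sketch of such a ConfigMap; the bootstrap server address is a placeholder that you must adapt to your Kafka cluster, and the key names follow the Knative Kafka Broker conventions:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kafka-broker-config
  namespace: knative-eventing
data:
  # Placeholder address of your Kafka cluster
  bootstrap.servers: "my-cluster-kafka-bootstrap.kafka:9092"
  # Defaults applied to the topics that back each Broker
  default.topic.partitions: "10"
  default.topic.replication.factor: "3"
```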
The following command displays the list of triggers set up for the Data Index and Job Service events, showing which services are subscribed to the events:
$ oc get triggers -n example-namespace
Example output
NAME BROKER SINK AGE CONDITIONS READY REASON
data-index-jobs-fbf285df-c0a4-4545-b77a-c232ec2890e2 example-broker service:sonataflow-platform-example-data-index-service 106s 7 OK / 7 True -
data-index-process-definition-e48b4e4bf73e22b90ecf7e093ff6b1eaf example-broker service:sonataflow-platform-example-data-index-service 106s 7 OK / 7 True -
data-index-process-error-fbf285df-c0a4-4545-b77a-c232ec2890e2 example-broker service:sonataflow-platform-example-data-index-service 106s 7 OK / 7 True -
data-index-process-instance-mul35f055c67a626f51bb8d2752606a6b54 example-broker service:sonataflow-platform-example-data-index-service 106s 7 OK / 7 True -
data-index-process-node-fbf285df-c0a4-4545-b77a-c232ec2890e2 example-broker service:sonataflow-platform-example-data-index-service 106s 7 OK / 7 True -
data-index-process-state-fbf285df-c0a4-4545-b77a-c232ec2890e2 example-broker service:sonataflow-platform-example-data-index-service 106s 7 OK / 7 True -
data-index-process-variable-ac727d6051750888dedb72f697737c0dfbf example-broker service:sonataflow-platform-example-data-index-service 106s 7 OK / 7 True -
jobs-service-create-job-fbf285df-c0a4-4545-b77a-c232ec2890e2 example-broker service:sonataflow-platform-example-jobs-service 106s 7 OK / 7 True -
jobs-service-delete-job-fbf285df-c0a4-4545-b77a-c232ec2890e2 example-broker service:sonataflow-platform-example-jobs-service 106s 7 OK / 7 True -
To see the `SinkBinding` objects, run the following command:
$ oc get sources -n example-namespace
Example output
NAME TYPE RESOURCE SINK READY
sonataflow-platform-example-jobs-service-sb SinkBinding sinkbindings.sources.knative.dev broker:example-broker True
5.3.6. Advanced supporting services configurations
In scenarios where you must apply advanced configurations for supporting services, use the `podTemplate` field of the corresponding service in the `SonataFlowPlatform` CR.
You can configure advanced settings for the service by using the following example:
Advanced configurations example for the Data Index service
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
name: sonataflow-platform-example
namespace: example-namespace
spec:
services:
# This can be either 'dataIndex' or 'jobService'
dataIndex:
enabled: true
podTemplate:
replicas: 2
container:
env:
- name: <any_advanced_config_property>
value: <any_value>
image:
initContainers:
You can set the 'services' field to either 'dataIndex' or 'jobService' depending on your requirement. The rest of the configuration remains the same.
1. Defines the number of replicas. The default value is `1`. In the case of `jobService`, this value is always overridden to `1` because it operates as a singleton service.
2. Holds specific configurations for the container running the service.
3. Allows you to fine-tune service properties by specifying environment variables.
4. Configures the container image for the service, useful if you need to update or customize the image.
5. Configures init containers for the pod, useful for setting up prerequisites before the main container starts.
The `podTemplate` field of a supporting service follows the structure of the standard Kubernetes `PodSpec`, so you can apply the usual pod configuration options.
5.3.7. Cluster scoped supporting services
You can define a cluster-wide set of supporting services that can be consumed by workflows across different namespaces by using the `SonataFlowClusterPlatform` CR, which references an existing, namespaced `SonataFlowPlatform` CR.
You can use the following example of a basic configuration that enables workflows deployed in any namespace to utilize supporting services deployed in a specific namespace, such as `example-namespace`:
Example of a SonataFlowClusterPlatform CR
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowClusterPlatform
metadata:
name: cluster-platform
spec:
platformRef:
name: sonataflow-platform-example
namespace: example-namespace
You can override these cluster-wide services within any namespace by configuring the `SonataFlowPlatform.spec.services` field in that namespace.
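For example, a hypothetical namespace that must remain isolated from the cluster-wide services could declare its own platform-scoped services; the names below are placeholders:

```yaml
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: local-platform-example    # placeholder name
  namespace: isolated-namespace   # placeholder namespace
spec:
  services:
    dataIndex:
      enabled: true
    jobService:
      enabled: true
```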
Chapter 6. Configuring workflow services
This section describes how to configure a workflow service by using the OpenShift Serverless Logic Operator. The section outlines key concepts and configuration options that you can reference for customizing your workflow service according to your environment and use case. You can edit workflow configurations, manage specific properties, and define global managed properties to ensure consistent and efficient execution of your workflows.
6.1. Modifying workflow configuration
The OpenShift Serverless Logic Operator determines the workflow configuration based on two `ConfigMaps`: the user-defined properties and the managed properties.
- User-defined properties: If your workflow requires particular configurations, ensure that you create a `ConfigMap` named `<workflow-name>-props` that includes all the configurations before deploying the workflow. For example, if your workflow name is `greeting`, the `ConfigMap` name is `greeting-props`. If such a `ConfigMap` does not exist, the Operator creates it with empty or default content.
- Managed properties: Automatically generated by the Operator and stored in a `ConfigMap` named `<workflow-name>-managed-props`. These properties are typically related to configurations that the workflow requires to connect with the supporting services, the eventing system, and so on.
Managed properties always override user-defined properties with the same key. These managed properties are read-only and reset by the Operator during each reconciliation cycle.
Prerequisites
- You have OpenShift Serverless Logic Operator installed on your cluster.
- You have created your OpenShift Serverless Logic project.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have installed the OpenShift CLI (`oc`).
- You have previously created the user-defined workflow properties `ConfigMap`, or the Operator has created it.
Procedure
Open your terminal and access the OpenShift Serverless Logic project. Ensure that you are working within the correct namespace, where your workflow service is deployed, by running the following command:
$ oc project <your-project-name>
Identify the name of the workflow you want to configure. For example, if your workflow is named `greeting`, the user-defined properties are stored in a `ConfigMap` named `greeting-props`.
Edit the workflow `ConfigMap` by executing the following example command:
$ oc edit configmap greeting-props
Replace `greeting` with the actual name of your workflow.
Modify the `application.properties` section. Locate the `data` field and update the `application.properties` section with your desired configuration.
Example of `ConfigMap`:
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app: greeting
  name: greeting-props
  namespace: default
data:
  application.properties: |
    my.properties.key = any-value
After updating the properties, save the file and exit the editor. The updated configuration is applied automatically.
The workflow runtime is based on Quarkus, so all the keys under `application.properties` are applied as Quarkus application properties at runtime.
Verification
To confirm that your changes are applied successfully, execute the following example command:
$ oc get configmap greeting-props -o yaml
6.2. Managed properties in workflow services
The OpenShift Serverless Logic Operator uses managed properties to control essential runtime behavior. These values are stored separately and override user-defined properties during each reconciliation cycle. You can also apply custom managed properties globally by updating the `SonataFlowPlatform` CR.
Some properties used by the OpenShift Serverless Logic Operator are managed properties and cannot be changed through the standard user configuration. These properties are stored in a dedicated `ConfigMap` named `<workflow-name>-managed-props`.
You cannot override the default managed properties set by the Operator using global managed properties. These defaults are always enforced during reconciliation.
The following table lists some core managed properties as an example:
| Property Key | Immutable Value | Profile |
|---|---|---|
Other managed properties include Kubernetes service discovery settings, Data Index location properties, Job Service location properties, and Knative Eventing system configurations.
6.3. Defining global managed properties
You can define custom global managed properties for all workflows in a specific namespace by editing the `.spec.properties.flow` field of the `SonataFlowPlatform` CR.
Prerequisites
- You have OpenShift Serverless Logic Operator installed on your cluster.
- You have created your OpenShift Serverless Logic project.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have installed the OpenShift CLI (`oc`).
Procedure
Locate the
resource in the same namespace as your workflow services.SonataFlowPlatformThis is where you will define global managed properties.
Open the
resource in your default editor by executing the following command:SonataFlowPlatform$ oc edit sonataflowplatform sonataflow-platform-exampleDefine custom global managed properties.
In the editor, navigate to the
section and define your desired properties as shown in the following example:spec.properties.flowExample of a SonataFlowPlatform with flow properties
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: sonataflow-platform-example
spec:
  properties:
    flow:
      - name: quarkus.log.category
        value: INFO
This configuration adds the quarkus.log.category=INFO property to the managed properties of every workflow service in the namespace.
Optional: Use external ConfigMaps or Secrets.
You can also reference values from existing ConfigMap or Secret resources by using the valueFrom attribute as shown in the following example:
Example of a SonataFlowPlatform properties from ConfigMap and Secret
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: sonataflow-platform-example
spec:
  properties:
    flow:
      - name: my.petstore.auth.token
        valueFrom:
          secretKeyRef: petstore-credentials
          keyName: AUTH_TOKEN
      - name: my.petstore.url
        valueFrom:
          configMapRef: petstore-props
          keyName: PETSTORE_URL
- 1: The valueFrom attribute is derived from the Kubernetes EnvVar API and works similarly to how environment variables reference external sources.
- 2: valueFrom.secretKeyRef pulls the value from a key named AUTH_TOKEN in the petstore-credentials secret.
- 3: valueFrom.configMapRef pulls the value from a key named PETSTORE_URL in the petstore-props ConfigMap.
Chapter 7. Managing workflow persistence
You can configure a SonataFlow workflow to use persistence and store its workflow context in a database.
By design, Kubernetes pods are stateless. This behavior can pose challenges for workloads that need to maintain the application state across pod restarts. In the case of OpenShift Serverless Logic, the workflow context is lost when the pod restarts by default.
To ensure workflow recovery in such scenarios, you must configure workflow runtime persistence. Use the SonataFlowPlatform custom resource (CR) or the SonataFlow CR to configure persistence.
7.1. Configuring persistence using the SonataFlowPlatform CR
The SonataFlowPlatform CR enables you to configure persistence for all SonataFlow workflows in a namespace.
The OpenShift Serverless Logic Operator also uses this configuration to set up persistence for supporting services.
The persistence configurations are applied only at the time of workflow deployment. Changes to the SonataFlowPlatform CR do not affect workflows that are already deployed.
Procedure
- Define the SonataFlowPlatform CR and specify the persistence settings in the persistence field under the CR spec, as shown in the following example:
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: sonataflow-platform-example
  namespace: example-namespace
spec:
  persistence:
    postgresql:
      serviceRef:
        name: postgres-example
        namespace: postgres-example-namespace
        databaseName: example-database
        port: 1234
      secretRef:
        name: postgres-secrets-example
        userKey: POSTGRESQL_USER
        passwordKey: POSTGRESQL_PASSWORD
- 1: Name of the Kubernetes Service connecting to the PostgreSQL database.
- 2: Optional: Namespace of the PostgreSQL Service. Defaults to the namespace of the SonataFlowPlatform.
- 3: Name of the PostgreSQL database for storing workflow data.
- 4: Optional: Port number to connect to the PostgreSQL service. Defaults to 5432.
- 5: Name of the Kubernetes Secret containing database credentials.
- 6: Key in the Secret object that contains the database username.
- 7: Key in the Secret object that contains the database password.
View the generated environment variables for the workflow.
The following example shows the generated environment variables for a workflow named example-workflow deployed with the earlier SonataFlowPlatform configuration. These settings relate specifically to persistence and are managed by the OpenShift Serverless Logic Operator. You cannot modify them after they are applied.
When you use the SonataFlowPlatform CR persistence configuration, the Operator injects the following environment variables into the workflow container:
env:
- name: QUARKUS_DATASOURCE_USERNAME
valueFrom:
secretKeyRef:
name: postgres-secrets-example
key: POSTGRESQL_USER
- name: QUARKUS_DATASOURCE_PASSWORD
valueFrom:
secretKeyRef:
name: postgres-secrets-example
key: POSTGRESQL_PASSWORD
- name: QUARKUS_DATASOURCE_DB_KIND
value: postgresql
- name: QUARKUS_DATASOURCE_JDBC_URL
value: >-
jdbc:postgresql://postgres-example.postgres-example-namespace:1234/example-database?currentSchema=example-workflow
- name: KOGITO_PERSISTENCE_TYPE
value: jdbc
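The JDBC URL in the example above is assembled from the SonataFlowPlatform persistence fields, with the workflow name used as the schema. The following sketch reproduces that composition; it illustrates the URL format only and is not the Operator's implementation.

```shell
# Illustration: how the JDBC URL above is composed from the persistence
# settings (service name, namespace, port, database) and the workflow name.
SERVICE_NAME=postgres-example
SERVICE_NAMESPACE=postgres-example-namespace
PORT=1234
DATABASE=example-database
WORKFLOW=example-workflow
JDBC_URL="jdbc:postgresql://${SERVICE_NAME}.${SERVICE_NAMESPACE}:${PORT}/${DATABASE}?currentSchema=${WORKFLOW}"
printf '%s\n' "$JDBC_URL"
# Prints: jdbc:postgresql://postgres-example.postgres-example-namespace:1234/example-database?currentSchema=example-workflow
```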
When this persistence configuration is in place, the OpenShift Serverless Logic Operator configures every workflow deployed in this namespace with the preview or gitops profile to connect to the PostgreSQL database.
PostgreSQL is currently the only supported database for persistence.
For SonataFlow workflows deployed with the preview profile, the OpenShift Serverless Logic Operator also includes the required persistence Quarkus extensions in the workflow build.
7.2. Configuring persistence using the SonataFlow CR
The SonataFlow CR enables you to configure persistence for an individual workflow. This workflow-scoped configuration takes precedence over the SonataFlowPlatform configuration.
Procedure
- Configure persistence by using the persistence field in the SonataFlow CR specification, as shown in the following example:
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlow
metadata:
name: example-workflow
annotations:
sonataflow.org/description: Example Workflow
sonataflow.org/version: 0.0.1
spec:
persistence:
postgresql:
serviceRef:
name: postgres-example
namespace: postgres-example-namespace
databaseName: example-database
databaseSchema: example-schema
port: 1234
secretRef:
name: postgres-secrets-example
userKey: POSTGRESQL_USER
passwordKey: POSTGRESQL_PASSWORD
flow:
- 1: Name of the Kubernetes Service that connects to the PostgreSQL database server.
- 2: Optional: Namespace containing the PostgreSQL Service. Defaults to the workflow namespace.
- 3: Name of the PostgreSQL database where workflow data is stored.
- 4: Optional: Name of the database schema for workflow data. Defaults to the workflow name.
- 5: Optional: Port to connect to the PostgreSQL Service. Defaults to 5432.
- 6: Name of the Kubernetes Secret containing database credentials.
- 7: Key in the Secret object containing the database username.
- 8: Key in the Secret object containing the database password.
This configuration informs the OpenShift Serverless Logic Operator that the workflow must connect to the specified PostgreSQL database server when deployed. The OpenShift Serverless Logic Operator adds the relevant JDBC connection parameters as environment variables to the workflow container.
PostgreSQL is currently the only supported database for persistence.
For SonataFlow workflows deployed with the preview profile, the OpenShift Serverless Logic Operator also includes the required persistence Quarkus extensions in the workflow build.
7.3. Persistence configuration precedence rules
You can configure persistence in both the SonataFlow CR and the SonataFlowPlatform CR. The OpenShift Serverless Logic Operator applies the following precedence rules to determine which configuration applies to a workflow deployed in the namespace of a given SonataFlowPlatform CR:
- If the SonataFlow CR includes a persistence configuration, that configuration takes precedence and applies to the workflow.
- If the SonataFlow CR does not include a persistence configuration, that is, the spec.persistence field is absent, the OpenShift Serverless Logic Operator uses the persistence configuration from the current SonataFlowPlatform CR, if any.
- To disable persistence for the workflow, explicitly set spec.persistence: {} in the SonataFlow CR. This configuration ensures the workflow does not inherit persistence settings from the SonataFlowPlatform CR.
7.4. Profile-specific persistence requirements
The persistence configurations provided in both the SonataFlowPlatform CR and the SonataFlow CR apply to workflows deployed with the preview and gitops profiles. Workflows deployed with the dev profile ignore persistence configurations.
The primary difference between the preview and gitops profiles lies in how the workflow image is built.
When you use the gitops profile, you must ensure that the following Quarkus extensions are included in the workflow image:
| groupId | artifactId | version |
|---|---|---|
| io.quarkus | quarkus-agroal | 3.15.4.redhat-00001 |
| io.quarkus | quarkus-jdbc-postgresql | 3.15.4.redhat-00001 |
| org.kie | kie-addons-quarkus-persistence-jdbc | 9.103.0.redhat-00003 |
If you are using the registry.redhat.io/openshift-serverless-1/logic-swf-builder-rhel8:1.36.0 builder image to build your workflow images, you can add these extensions at build time by setting the following build argument:
$ QUARKUS_EXTENSIONS=io.quarkus:quarkus-agroal:3.15.4.redhat-00001,io.quarkus:quarkus-jdbc-postgresql:3.15.4.redhat-00001,org.kie:kie-addons-quarkus-persistence-jdbc:9.103.0.redhat-00003
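Each entry in QUARKUS_EXTENSIONS is a Maven groupId:artifactId:version coordinate. The following snippet splits the value from the command above to show the three coordinates that correspond to the required extensions:

```shell
# Split the QUARKUS_EXTENSIONS value into its groupId:artifactId:version parts.
QUARKUS_EXTENSIONS=io.quarkus:quarkus-agroal:3.15.4.redhat-00001,io.quarkus:quarkus-jdbc-postgresql:3.15.4.redhat-00001,org.kie:kie-addons-quarkus-persistence-jdbc:9.103.0.redhat-00003
printf '%s\n' "$QUARKUS_EXTENSIONS" | tr ',' '\n' |
  while IFS=: read -r group artifact version; do
    printf 'groupId=%s artifactId=%s version=%s\n' "$group" "$artifact" "$version"
  done
```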
7.5. Database schema initialization
When you are using SonataFlow workflows with persistence enabled, the database schema can be initialized automatically by using Flyway.
Flyway is managed by the kie-addons-quarkus-flyway runtime extension. You can enable Flyway in any of the following ways:
7.5.1. Flyway configuration in the workflow ConfigMap
To enable Flyway in the workflow ConfigMap, add the kie.flyway.enabled property as shown in the following example:
Example of enabling Flyway in the workflow ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
labels:
app: example-workflow
name: example-workflow-props
data:
application.properties: |
kie.flyway.enabled = true
7.5.2. Flyway configuration using environment variables in the workflow container
You can enable Flyway by adding an environment variable to the spec.podTemplate.container field of the SonataFlow CR.
Example of enabling Flyway by using the workflow container environment variable
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlow
metadata:
name: example-workflow
annotations:
sonataflow.org/description: Example Workflow
sonataflow.org/version: 0.0.1
spec:
podTemplate:
container:
env:
- name: KIE_FLYWAY_ENABLED
value: 'true'
flow: ...
7.5.3. Flyway configuration using SonataFlowPlatform properties
To apply a common Flyway configuration to all workflows within a namespace, you can add the kie.flyway.enabled property to the spec.properties.flow field of the SonataFlowPlatform CR.
This configuration is applied during workflow deployment. Ensure the Flyway property is set before deploying workflows.
Example of enabling Flyway by using the SonataFlowPlatform properties
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
name: sonataflow-platform
spec:
properties:
flow:
- name: kie.flyway.enabled
value: true
7.5.4. Initializing the database manually using DDL scripts
If you prefer manual initialization, you must disable Flyway by ensuring that the kie.flyway.enabled property is set to false.
- By default, each workflow uses a schema name equal to the workflow name. Ensure that you manually apply the schema initialization for each workflow.
- If you are using the SonataFlow custom resource (CR) persistence configuration, you can specify a custom schema name.
Procedure
- Download the kogito-ddl-9.103.0.redhat-00003-db-scripts.zip file that contains the DDL scripts.
- Extract the files.
Run the .sql files located in the root directory on the target PostgreSQL database. Ensure that the files are executed in the order of their version numbers. For example:
- V1.35.0__create_runtime_PostgreSQL.sql
- V10.0.0__add_business_key_PostgreSQL.sql
- V10.0.1__alter_correlation_PostgreSQL.sql
Note: The file version numbers are not associated with the OpenShift Serverless Logic Operator versioning.
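A plain alphabetical sort can misorder these file names once major versions reach two digits (V9 versus V10), so order them by their numeric version components. A version-aware sort, such as GNU sort -V, handles this:

```shell
# Order DDL scripts by numeric version components; a plain lexical sort could
# misorder V9.x.x relative to V10.x.x. Requires GNU sort (-V, version sort).
printf '%s\n' \
  'V10.0.1__alter_correlation_PostgreSQL.sql' \
  'V1.35.0__create_runtime_PostgreSQL.sql' \
  'V10.0.0__add_business_key_PostgreSQL.sql' |
  sort -V
```

The output lists V1.35.0 first, then V10.0.0, then V10.0.1, which is the execution order required above.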
Chapter 8. Workflow eventing system
You can set up the eventing system for a SonataFlow workflow.
In an OpenShift Serverless Logic installation, the following types of events are generated:
- Outgoing and incoming events related to workflow business logic.
- Events sent from workflows to the Data Index and Job Service.
- Events sent from the Job Service to the Data Index Service.
The OpenShift Serverless Logic Operator leverages the Knative Eventing system to manage all event communication between these services, ensuring efficient and reliable event handling.
8.1. Platform-scoped eventing system configuration
To configure a platform-scoped eventing system, you can use the spec.eventing.broker.ref field in the SonataFlowPlatform CR to reference a Knative Eventing broker.
This configuration instructs the OpenShift Serverless Logic Operator to automatically link every workflow deployed in the specified namespace with the preview or gitops profile to produce and consume events by using the specified broker.
The supporting services deployed in the namespace without a custom eventing configuration are also linked to this broker.
In production environments, use a production-ready broker, such as the Knative Kafka Broker, for enhanced scalability and reliability.
The following example displays how to configure the SonataFlowPlatform CR with a platform-scoped eventing system:
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
name: sonataflow-platform-example
namespace: <example-namespace>
spec:
eventing:
broker:
ref:
name: example-broker
namespace: <example-broker-namespace>
apiVersion: eventing.knative.dev/v1
kind: Broker
8.2. Workflow-scoped eventing system configuration
A workflow-scoped eventing system configuration allows for detailed customization of the events produced and consumed by a specific workflow. You can use the spec.sink.ref and spec.sources[] fields in the SonataFlow CR to configure outgoing and incoming events, respectively.
8.2.1. Outgoing eventing system configuration
To configure outgoing events, you can use the spec.sink.ref field in the SonataFlow CR.
The following example displays how to configure the SonataFlow CR for outgoing events:
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlow
metadata:
name: example-workflow
namespace: example-workflow-namespace
annotations:
sonataflow.org/description: Example Workflow
sonataflow.org/version: 0.0.1
sonataflow.org/profile: preview
spec:
sink:
ref:
name: outgoing-example-broker
namespace: outgoing-example-broker-namespace
apiVersion: eventing.knative.dev/v1
kind: Broker
flow:
start: ExampleStartState
events:
- name: outEvent1
source: ''
kind: produced
type: out-event-type1
...
- 1: Name of the Knative Eventing Broker to use for all the events produced by the workflow, including the SonataFlow system events.
- 2: Defines the namespace of the Knative Eventing Broker. If you do not specify a value, the parameter defaults to the SonataFlow namespace. Consider creating the broker in the same namespace as the SonataFlow workflow.
- 3: Flow definition field in the SonataFlow CR.
- 4: Events definition field in the SonataFlow CR.
- 5: Example of an outgoing event outEvent1 definition.
- 6: Event type for the outEvent1 outgoing event.
8.2.2. Incoming eventing system configuration
To configure incoming events, you can use the spec.sources[] field in the SonataFlow CR. You can configure a distinct Knative Eventing Broker for every incoming event type.
If an incoming event type lacks a specific Broker configuration, the system applies eventing system configuration precedence rules.
The following example displays how to configure the SonataFlow CR for incoming events. The link between a workflow event and the corresponding spec.sources[] entry is created by using the event type name:
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlow
metadata:
name: example-workflow
namespace: example-workflow-namespace
annotations:
sonataflow.org/description: Example Workflow
sonataflow.org/version: 0.0.1
sonataflow.org/profile: preview
spec:
sources:
- eventType: in-event-type1
ref:
name: incoming-example-broker1
namespace: incoming-example-broker1-namespace
apiVersion: eventing.knative.dev/v1
kind: Broker
- eventType: in-event-type2
ref:
name: incoming-example-broker2
namespace: incoming-example-broker2-namespace
apiVersion: eventing.knative.dev/v1
kind: Broker
flow:
start: ExampleStartState
events:
- name: inEvent1
source: ''
kind: consumed
type: in-event-type1
- name: inEvent2
source: ''
kind: consumed
type: in-event-type2
...
- 1: Configures the workflow to consume events of type in-event-type1 by using the specified Knative Eventing Broker.
- 2: Name of the Knative Eventing Broker to use for the consumption of the events of type in-event-type1 sent to this workflow.
- 3: Optional: If you do not specify the value, the parameter defaults to the SonataFlow namespace. Consider creating the broker in the same namespace as the SonataFlow workflow.
- 4: Configures the workflow to consume events of type in-event-type2 by using the specified Knative Eventing Broker.
- 5: Name of the Knative Eventing Broker to use for the consumption of the events of type in-event-type2 sent to this workflow.
- 6: Optional: If you do not specify the value, the parameter defaults to the SonataFlow namespace. Consider creating the broker in the same namespace as the SonataFlow workflow.
- 7: Flow definition field in the SonataFlow CR.
- 8: Events definition field in the SonataFlow CR.
- 9: Example of an incoming event inEvent1 definition.
- 10: Event type for the incoming event inEvent1. The link between the workflow event and the corresponding spec.sources[] entry is created by using the event type name in-event-type1.
- 11: Example of an incoming event inEvent2 definition.
- 12: Event type for the incoming event inEvent2. The link between the workflow event and the corresponding spec.sources[] entry is created by using the event type name in-event-type2.
8.3. Cluster-scoped eventing system configuration
In a cluster-scoped configuration, the SonataFlowClusterPlatform CR enables workflows to use the eventing system defined in the SonataFlowPlatform CR that it references.
To ensure proper integration, you can configure a Knative Eventing Broker in the SonataFlowPlatform CR that the SonataFlowClusterPlatform CR references.
The following example displays how to configure the SonataFlowClusterPlatform CR together with the referenced SonataFlowPlatform CR:
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
name: global-platform
namespace: global-namespace
spec:
eventing:
broker:
ref:
name: global-broker
namespace: global-namespace
apiVersion: eventing.knative.dev/v1
kind: Broker
---
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowClusterPlatform
metadata:
name: cluster-platform-example
spec:
platformRef:
name: global-platform
namespace: global-namespace
...
The SonataFlowClusterPlatform CR references the SonataFlowPlatform CR by using the spec.platformRef field.
8.4. Eventing system configuration precedence rules
The OpenShift Serverless Logic Operator follows a defined order of precedence to determine the eventing system configuration for a workflow.
Eventing system configuration precedence rules are as follows:
- If the workflow has a defined eventing system, by using either the workflow-scoped outgoing or incoming eventing system configuration, that configuration takes priority and applies to the workflow.
- If the SonataFlowPlatform CR enclosing the workflow has a platform-scoped eventing system configured, that configuration is applied next.
SonataFlowPlatform - If the current cluster is configured with a cluster-scoped eventing system, it is applied if no workflow-scoped or platform-scoped configuration exists.
If none of the preceding configurations are defined, the following behavior applies:
- The workflow uses direct HTTP calls to deliver SonataFlow system events to supporting services.
- The workflow consumes incoming events through HTTP POST calls at the workflow service root path (/).
- No eventing system is configured to produce workflow business events. Any attempt to produce such events might result in a failure.
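In the fallback case, an incoming event is delivered as a plain CloudEvents-over-HTTP POST to the workflow root path. The following sketch prints the shape of such a request; the host, event id, and source values are illustrative, and the header names follow the CloudEvents HTTP binary content mode.

```shell
# Shape of a CloudEvents binary-mode HTTP request delivered to the workflow
# root path (/) when no eventing system is configured. Host, id, and source
# are illustrative values; ce-type matches the workflow's consumed event type.
request=$(cat <<'EOF'
POST / HTTP/1.1
Host: example-workflow.example-workflow-namespace
Content-Type: application/json
ce-specversion: 1.0
ce-id: 1
ce-source: /example/client
ce-type: in-event-type1

{"data": "example"}
EOF
)
printf '%s\n' "$request"
```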
8.5. Linking workflows to the eventing system
The OpenShift Serverless Logic Operator links workflows with the eventing system by automatically creating Knative Eventing SinkBinding and Trigger objects. These objects simplify the production and consumption of workflow events.
The following example shows the Knative Eventing objects created for an example-workflow workflow deployed in a namespace with the following SonataFlowPlatform configuration:
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
name: sonataflow-platform-example
namespace: example-namespace
spec:
eventing:
broker:
ref:
name: example-broker
apiVersion: eventing.knative.dev/v1
kind: Broker
services:
dataIndex:
enabled: true
jobService:
enabled: true
...
The example-broker object is a Kafka Knative Broker that uses the kafka-broker-config ConfigMap.
The following example displays how to configure a Kafka Knative Broker for use with the SonataFlowPlatform:
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
annotations:
eventing.knative.dev/broker.class: Kafka
name: example-broker
namespace: example-namespace
spec:
config:
apiVersion: v1
kind: ConfigMap
name: kafka-broker-config
namespace: knative-eventing
- 1: The Kafka broker class is used to create the example-broker object.
The following example displays how the example-workflow workflow is automatically linked to the example-broker broker in the example-namespace namespace:
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlow
metadata:
name: example-workflow
namespace: example-namespace
annotations:
sonataflow.org/description: Example Workflow
sonataflow.org/version: 0.0.1
sonataflow.org/profile: preview
spec:
flow:
start: ExampleStartState
events:
- name: outEvent1
source: ''
kind: produced
type: out-event-type1
- name: inEvent1
source: ''
kind: consumed
type: in-event-type1
- name: inEvent2
source: ''
kind: consumed
type: in-event-type2
states:
- name: ExampleStartState
...
- 1: The example-workflow outgoing events are produced by using the SinkBinding named example-workflow-sb.
- 2: Events of type in-event-type1 are consumed by using the example-workflow-inevent1-b40c067c-595b-4913-81a4-c8efa980bc11 trigger.
- 3: Events of type in-event-type2 are consumed by using the example-workflow-inevent2-b40c067c-595b-4913-81a4-c8efa980bc11 trigger.
You can list the automatically created SinkBinding object, example-workflow-sb, by running the following command:
$ oc get sinkbindings -n example-namespace
Example output
NAME TYPE RESOURCE SINK READY
example-workflow-sb SinkBinding sinkbindings.sources.knative.dev broker:example-broker True
You can use the following command to list the automatically created triggers for event consumption:
$ oc get triggers -n <example-namespace>
Example output
NAME BROKER SINK AGE CONDITIONS READY REASON
example-workflow-inevent1-b40c067c-595b-4913-81a4-c8efa980bc11 example-broker service:example-workflow 16m 7 OK / 7 True
example-workflow-inevent2-b40c067c-595b-4913-81a4-c8efa980bc11 example-broker service:example-workflow 16m 7 OK / 7 True
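The trigger names in the output above follow a visible pattern: the workflow name, the lowercased event name, and an Operator-generated suffix. The following sketch reconstructs one of the names from the sample output; the suffix value is copied verbatim from that output, since its generation is internal to the Operator.

```shell
# Reconstruct a trigger name from the example output above. The suffix is
# Operator-generated; the value here is copied from the sample output.
workflow=example-workflow
event=inEvent1
suffix=b40c067c-595b-4913-81a4-c8efa980bc11
trigger=$(printf '%s-%s-%s' "$workflow" "$event" "$suffix" | tr '[:upper:]' '[:lower:]')
printf '%s\n' "$trigger"
# Prints: example-workflow-inevent1-b40c067c-595b-4913-81a4-c8efa980bc11
```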
Chapter 9. Configuring custom Maven mirrors
OpenShift Serverless Logic uses Maven Central by default to resolve Maven artifacts during workflow builds. The provided builder and development images include all required Java libraries to run workflows, but in certain scenarios, such as when you add a custom Quarkus extension, you must download the additional dependencies from Maven Central.
In environments with restricted or firewalled network access, direct access to Maven Central might not be available. In such cases, you can configure the workflow containers to use a custom Maven mirror, such as an internal company registry or repository manager.
You can configure a custom Maven mirror at different levels as follows:
- Per workflow build, by updating the SonataFlowBuild custom resource.
- At the platform level, by updating the SonataFlowPlatform custom resource.
- For development mode deployments, by editing the SonataFlow custom resource.
- When building custom images externally with the builder image.
9.1. Adding a Maven mirror when building workflows
You can configure a Maven mirror by setting the MAVEN_MIRROR_URL environment variable in either the SonataFlowBuild CR or the SonataFlowPlatform CR.
The recommended approach is to update the SonataFlowPlatform CR, because the configuration then applies to all workflow builds in the namespace.
Prerequisites
- You have OpenShift Serverless Logic Operator installed on your cluster.
- You have created your OpenShift Serverless Logic project.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have access to a custom Maven mirror or internal repository.
Procedure
Edit the SonataFlowPlatform CR to configure a Maven mirror for all workflow builds in a namespace, as shown in the following example:
Example of Maven mirror configuration in a SonataFlowPlatform CR
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: my-platform
spec:
  build:
    template:
      envs:
        - name: MAVEN_MIRROR_URL
          value: http://my.company.registry.local
This configuration applies to all workflow builds in the same namespace that use the preview profile. When a workflow builder instance runs, it updates the internal Maven settings file to use the specified mirror as the default for external locations such as Maven Central.
Optional: If you need a specific configuration for a single workflow build, create the SonataFlowBuild CR before creating the corresponding SonataFlow CR. The SonataFlowBuild and SonataFlow CRs must have the same name.
Example of Maven mirror configuration in a SonataFlowBuild CR
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowBuild
metadata:
  name: my-workflow
  annotations:
    sonataflow.org/restartBuild: "true"
spec:
  # suppressed for brevity
  envs:
    - name: MAVEN_MIRROR_URL
      value: http://my.company.registry.local
Note: You can use the SonataFlowBuild CR configuration only when you require workflow-specific behavior, for example, debugging. For general use, configure the SonataFlowPlatform CR instead.
9.2. Adding a Maven mirror when deploying in development mode
You can configure a Maven mirror for workflows that run in the dev profile by setting the MAVEN_MIRROR_URL environment variable in the SonataFlow CR.
Prerequisites
- You have created your OpenShift Serverless Logic project.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have a workflow deployed in the dev profile.
Procedure
Edit the SonataFlow CR to include the Maven mirror configuration as shown in the following example:
Example of Maven mirror configuration on SonataFlow CR
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlow
metadata:
  name: greeting
  annotations:
    sonataflow.org/description: Greeting example on k8s!
    sonataflow.org/version: 0.0.1
    sonataflow.org/profile: dev
spec:
  podTemplate:
    container:
      env:
        - name: MAVEN_MIRROR_URL
          value: http://my.company.registry.local
  flow:
    # suppressed for brevity
- 1: The MAVEN_MIRROR_URL variable specifies the custom Maven mirror.
Only workflows deployed with the dev profile use this configuration.
9.3. Configuring a Maven mirror on a custom image
You can configure a Maven mirror when building a custom workflow image by setting the MAVEN_MIRROR_URL variable in a container file that is based on the SonataFlow builder image.
Prerequisites
- You have created your OpenShift Serverless Logic project.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have access to a dockerfile or container build context that uses the SonataFlow Builder image.
- You have access to a custom Maven mirror or internal repository.
Procedure
Set the Maven mirror as an environment variable in the Dockerfile as shown in the following example:
Example of custom container file with Maven mirror set as an environment variable
FROM docker.io/apache/incubator-kie-sonataflow-builder:main AS builder
# Content suppressed for brevity
# The Maven mirror URL set as an env var during the build process
ENV MAVEN_MIRROR_URL=http://my.company.registry.local
The ENV directive ensures that all builds with this Dockerfile automatically use the specified Maven mirror.
Example of custom container file with Maven mirror set as an argument
FROM docker.io/apache/incubator-kie-sonataflow-builder:main AS builder
# Content suppressed for brevity
# The Maven mirror URL passed as a build argument during the build process
ARG MAVEN_MIRROR_URL
The ARG directive allows you to pass the Maven mirror value dynamically at build time.
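For background, a Maven mirror takes effect through a mirror entry in a Maven settings.xml file. The following sketch generates the kind of entry involved; the builder image manages its own settings file, so the id, mirrorOf pattern, and file layout shown here are illustrative assumptions, not the builder's actual output.

```shell
# Sketch: the kind of settings.xml <mirror> entry a Maven mirror implies.
# The builder image manages its own settings file; this is illustrative only.
MAVEN_MIRROR_URL=http://my.company.registry.local
mirror_entry=$(cat <<EOF
<mirror>
  <id>custom-mirror</id>
  <mirrorOf>external:*</mirrorOf>
  <url>${MAVEN_MIRROR_URL}</url>
</mirror>
EOF
)
printf '%s\n' "$mirror_entry"
```

The external:* pattern routes requests for all non-local repositories, such as Maven Central, through the mirror.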
Chapter 10. Managing upgrades
10.1. Upgrading OpenShift Serverless Logic Operator from version 1.34.0 to 1.35.0
This section provides step-by-step instructions to upgrade the OpenShift Serverless Logic Operator from version 1.34.0 to 1.35.0. The upgrade process involves preparing the existing workflows and services, updating the Operator, and restoring the workflows after the upgrade.
Different workflow profiles require different upgrade steps. Carefully follow the instructions for each profile.
10.1.1. Preparing for the upgrade
Before starting the upgrade process, you need to prepare your OpenShift Serverless Logic environment. This section outlines the necessary steps to ensure a smooth upgrade from version 1.34.0 to 1.35.0.
The preparation process includes:
- Deleting or scaling workflows based on their profiles.
- Backing up all necessary databases and resources.
- Ensuring you have a record of all custom configurations.
- Running required database migration scripts for workflows using persistence.
Prerequisites
- You have access to the cluster with cluster-admin privileges.
cluster-admin - You have OpenShift Serverless Logic Operator installed on your cluster.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have access to the OpenShift Management Console for Operator upgrades.
- You have installed the OpenShift CLI (oc).
10.1.1.1. Deleting workflows with the dev profile
Before upgrading the Operator, you must delete workflows running with the dev profile.
Procedure
- Ensure you have a backup of all necessary Kubernetes resources, including custom resource definitions (CRDs), SonataFlow CRs, ConfigMaps, or any other related custom configurations.
$ oc delete -f <my-workflow.yaml> -n <target_namespace>
10.1.1.2. Deleting and migrating workflows with the preview profile
Before upgrading the Operator, you must delete workflows running with the preview profile. If you are using persistence, you must also run a database migration script.
Procedure
- If you are using persistence, back up the workflow database and ensure the backup includes both database objects and table data.
- Ensure you have a backup of all necessary Kubernetes resources, including CRDs, SonataFlow CRs, ConfigMaps, or any other related custom configurations.
$ oc delete -f <my-workflow.yaml> -n <target_namespace>If you are using persistence, you must execute the following database migration script:
ALTER TABLE flyway_schema_history RENAME CONSTRAINT flyway_schema_history_pk TO kie_flyway_history_runtime_persistence_pk; ALTER INDEX flyway_schema_history_s_idx RENAME TO kie_flyway_history_runtime_persistence_s_idx; ALTER TABLE flyway_schema_history RENAME TO kie_flyway_history_runtime_persistence;
10.1.1.3. Scaling down workflows with the gitops profile
Before upgrading the Operator, you must scale down workflows running with the
gitops
Procedure
Modify the my-workflow.yaml CR and scale each workflow down to zero replicas before upgrading, as shown in the following example:
spec:
  podTemplate:
    replicas: 0
Apply the updated CR by running the following command:
$ oc apply -f <my-workflow.yaml> -n <target_namespace>
Optional: Alternatively, scale the workflow to 0 by running the following command:
$ oc patch sonataflow <my-workflow> -n <target_namespace> --type=merge -p '{"spec": {"podTemplate": {"replicas": 0}}}'
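If you have several gitops workflows to prepare, the same merge patch can be generated for each of them. The following sketch only prints the oc commands instead of running them; the workflow names and namespace are placeholders.

```shell
# Print (not run) the scale-down patch command for a set of workflows.
# Workflow names and the target-ns namespace are placeholders.
patch='{"spec": {"podTemplate": {"replicas": 0}}}'
cmds=$(for wf in workflow-a workflow-b; do
  printf "oc patch sonataflow %s -n target-ns --type=merge -p '%s'\n" "$wf" "$patch"
done)
printf '%s\n' "$cmds"
```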
10.1.1.4. Backing up the Data Index database
You must back up the Data Index database before upgrading to prevent data loss.
Procedure
Take a full backup of the Data Index database, ensuring:
- The backup includes all database objects and not just table data.
- The backup is stored in a secure location.
10.1.1.5. Backing up the Jobs Service database
You must back up the Jobs Service database before upgrading to maintain job scheduling data.
Procedure
Take a full backup of the Jobs Service database, ensuring:
- The backup includes all database objects and not just table data.
- The backup is stored in a secure location.
10.1.2. Upgrading the OpenShift Serverless Logic Operator
To transition from OpenShift Serverless Logic Operator (OSL) version 1.34.0 to 1.35.0, you must upgrade the OSL using the Red Hat OpenShift Serverless web console. This upgrade ensures compatibility with newer features and proper functioning of your workflows.
Prerequisites
- You have access to the cluster with cluster-admin privileges.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have access to the OpenShift Management Console for Operator upgrades.
- You have installed the OpenShift CLI (oc).
Procedure
- In the web console, navigate to the Operators → OperatorHub → Installed Operator page.
- Select the openshift-serverless-logic namespace from the Installed Namespace list.
- In the Operator details page, click the Subscription tab, and then click Edit Subscription.
- In the Upgrade status, click the Upgrade available link.
- Click Preview install plan, and then click Approve to start the update.
To monitor the upgrade process, run the following command:
$ oc get subscription logic-operator-rhel8 -n openshift-serverless-logic -o jsonpath='{.status.installedCSV}'
Expected output
logic-operator-rhel8.v1.35.0
Verification
To verify the new Operator version is installed, run the following command:
$ oc get clusterserviceversion logic-operator-rhel8.v1.35.0 -n openshift-serverless-logic
Expected output
NAME                           DISPLAY                               VERSION   REPLACES                       PHASE
logic-operator-rhel8.v1.35.0   OpenShift Serverless Logic Operator   1.35.0    logic-operator-rhel8.v1.34.0   Succeeded
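Instead of re-running the verification command by hand, you can poll the CSV until it reports the Succeeded phase. This is a sketch, not part of the documented procedure; the CSV name comes from the expected output above.

```shell
# Returns success only when the reported phase is Succeeded.
csv_ready() {
  [ "$1" = "Succeeded" ]
}

# Poll until the Operator CSV settles (assumes `oc` is logged in):
# until csv_ready "$(oc get clusterserviceversion logic-operator-rhel8.v1.35.0 \
#     -n openshift-serverless-logic -o jsonpath='{.status.phase}')"; do
#   sleep 10
# done
```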
10.1.3. Finalizing the upgrade
After upgrading the OpenShift Serverless Logic Operator to version 1.35.0, you must finalize the upgrade process by restoring or scaling workflows and cleaning up old services. This ensures that your system runs cleanly on the new version and that all dependent components are configured correctly.
Follow the appropriate steps below based on the profile of your workflows and services.
Prerequisites
- You have access to the cluster with cluster-admin privileges.
- You have OpenShift Serverless Logic Operator installed on your cluster.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have access to the OpenShift Management Console for Operator upgrades.
- You have installed the OpenShift CLI (oc).
10.1.3.1. Finalizing the Data Index upgrade
After the Operator upgrade, a new ReplicaSet is automatically created for Data Index 1.35.0. You must delete the old one manually.
Procedure
Verify that the new ReplicaSet exists by listing all ReplicaSets with the following command:
$ oc get replicasets -n <target_namespace> -o custom-columns=Name:metadata.name,Image:spec.template.spec.containers[*].image
Identify the old Data Index ReplicaSet (with version 1.34.0) and delete it by running the following command:
$ oc delete replicaset <old_replicaset_name> -n <target_namespace>
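The identify-and-delete step above can be automated by filtering the custom-columns listing by image tag, so you do not have to eyeball the output. This is a sketch; the version tags are assumptions you should match to your upgrade path.

```shell
# Reads "Name Image" lines on stdin and prints the names of ReplicaSets
# whose image contains the given version tag.
old_replicasets() {
  awk -v tag="$1" 'NR > 1 && index($2, tag) > 0 { print $1 }'
}

# oc get replicasets -n <target_namespace> \
#   -o custom-columns=Name:metadata.name,Image:spec.template.spec.containers[*].image \
#   | old_replicasets ":1.34.0" \
#   | xargs -r -I{} oc delete replicaset {} -n <target_namespace>
```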
10.1.3.2. Finalizing the Jobs Service upgrade
You must manually clean up the Jobs Service components from the older version to trigger deployment of version 1.35.0 components.
Procedure
Delete the old Jobs Service deployment by running the following command:
$ oc delete deployment <jobs-service-deployment-name> -n <target_namespace>
Deleting the deployment triggers automatic cleanup of the older Pods and ReplicaSets and initiates a fresh deployment using version 1.35.0.
10.1.3.3. Redeploying workflows with the dev profile
After the upgrade, you must redeploy workflows that use the dev profile.
Procedure
- Ensure all required resources are restored, including SonataFlow CRDs, ConfigMaps, or any other related custom configurations.
Redeploy the workflow by running the following command:
$ oc apply -f <my-workflow.yaml> -n <target_namespace>
10.1.3.4. Restoring workflows with the preview profile
After the upgrade, you must restore workflows that use the preview profile.
Procedure
If the workflow uses persistence, add the following property to the ConfigMap associated with the workflow:
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app: my-workflow
  name: my-workflow-props
data:
  application.properties: |
    kie.flyway.enabled=true
- Ensure all required resources are recreated, including SonataFlow CRDs, ConfigMaps, or any other related custom configurations.
Redeploy the workflow by running the following command:
$ oc apply -f <my-workflow.yaml> -n <target_namespace>
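If the workflow's ConfigMap already exists with other properties, you can make sure kie.flyway.enabled=true is present before re-applying it. This is a sketch; the ConfigMap name my-workflow-props follows the example above, and the local file name is hypothetical.

```shell
# Reads properties text on stdin; appends "$1=$2" unless key $1 is already set.
ensure_prop() {
  awk -v k="$1" -v v="$2" '
    index($0, k "=") == 1 { found = 1 }
    { print }
    END { if (!found) print k "=" v }
  '
}

# oc get configmap my-workflow-props -n <target_namespace> \
#     -o jsonpath='{.data.application\.properties}' \
#   | ensure_prop kie.flyway.enabled true > application.properties
```

The helper is idempotent: running it on properties that already set the key leaves them unchanged.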
10.1.3.5. Scaling up workflows with the gitops profile
After the upgrade, you must scale up workflows that use the gitops profile.
Procedure
Modify the my-workflow.yaml custom resource (CR) and scale up each workflow to 1 as shown in the following example:
spec:
  podTemplate:
    replicas: 1
Apply the updated CR by running the following command:
$ oc apply -f <my-workflow.yaml> -n <target_namespace>
Optional: Scale the workflow back to 1 by running the following command:
$ oc patch sonataflow <my-workflow> -n <target_namespace> --type=merge -p '{"spec": {"podTemplate": {"replicas": 1}}}'
10.1.3.6. Verifying the upgrade
After restoring workflows and services, it is essential to verify that the upgrade was successful and that all components are functioning as expected.
Procedure
Check if all workflows and services are running by entering the following command:
$ oc get pods -n <target_namespace>
Ensure that all pods related to workflows, Data Index, and Jobs Service are in a Running or Completed state.
Verify that the OpenShift Serverless Logic Operator is running correctly by entering the following command:
$ oc get clusterserviceversion logic-operator-rhel8.v1.35.0 -n openshift-serverless-logic
Expected output
NAME                           DISPLAY                               VERSION   REPLACES                       PHASE
logic-operator-rhel8.v1.35.0   OpenShift Serverless Logic Operator   1.35.0    logic-operator-rhel8.v1.34.0   Succeeded
Check Operator logs for any errors by entering the following command:
$ oc logs -l control-plane=sonataflow-operator -n openshift-serverless-logic
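The pod-status check above can be automated with a small filter that flags any pod not in the Running or Completed state. This is a sketch layered on the documented commands, not part of the official procedure.

```shell
# Reads `oc get pods` output on stdin (NAME READY STATUS ...) and prints
# the names of pods in any state other than Running or Completed.
bad_pods() {
  awk 'NR > 1 && $3 != "Running" && $3 != "Completed" { print $1 }'
}

# oc get pods -n <target_namespace> | bad_pods
# An empty result means all workflow, Data Index, and Jobs Service pods are healthy.
```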
10.2. Upgrading OpenShift Serverless Logic Operator from version 1.35.0 to 1.36.0
You can upgrade the OpenShift Serverless Logic Operator from version 1.35.0 to 1.36.0. The upgrade process involves preparing the existing workflows and services, updating the Operator, and restoring the workflows after the upgrade.
Different workflow profiles require different upgrade steps. Follow the instructions for each profile carefully.
10.2.1. Preparing for the upgrade
Before starting the upgrade process, you need to prepare your OpenShift Serverless Logic environment to upgrade from version 1.35.0 to 1.36.0.
The preparation process is as follows:
- Deleting or scaling workflows based on their profiles.
- Backing up all necessary databases and resources.
- Ensuring you have a record of all custom configurations.
Prerequisites
- You have access to the cluster with cluster-admin privileges.
- You have OpenShift Serverless Logic Operator installed on your cluster.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have access to the OpenShift Management Console for Operator upgrades.
- You have installed the OpenShift CLI (oc).
10.2.1.1. Deleting workflows with the dev profile
Before upgrading the Operator, you must delete workflows running with the dev profile.
Procedure
- Ensure you have a backup of all necessary Kubernetes resources, including SonataFlow custom resources (CRs), ConfigMap resources, or any other related custom configurations.
Delete the workflow by executing the following command:
$ oc delete workflow <workflow_name> -n <target_namespace>
10.2.1.2. Deleting workflows with the preview profile
Before upgrading the Operator, you must delete workflows running with the preview profile.
Procedure
- If you are using persistence, back up the workflow database and ensure the backup includes both database objects and table data.
- Ensure you have a backup of all necessary Kubernetes resources, including SonataFlow custom resources (CRs), ConfigMap resources, or any other related custom configurations.
Delete the workflow by executing the following command:
$ oc delete workflow <workflow_name> -n <target_namespace>
10.2.1.3. Scaling down workflows with the gitops profile
Before upgrading the Operator, you must scale down workflows running with the gitops profile.
Procedure
Modify the my-workflow.yaml custom resource (CR) and scale down each workflow to 0 before upgrading as shown in the following example:
spec:
  podTemplate:
    replicas: 0
  # ...
Apply the updated my-workflow.yaml CR by running the following command:
$ oc apply -f my-workflow.yaml -n <target_namespace>
Optional: Scale the workflow to 0 by running the following command:
$ oc patch workflow <workflow_name> -n <target_namespace> --type=merge -p '{"spec": {"podTemplate": {"replicas": 0}}}'
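When a namespace contains many workflows, you can apply the same merge patch to each of them in one pass. This is a sketch, assuming the workflow resource name used by the surrounding `oc patch workflow` examples; adjust the replica count for scale-down (0) or scale-up (1).

```shell
# Builds the merge-patch JSON for a given replica count, mirroring the
# patch payload shown in the documented command.
replicas_patch() {
  printf '{"spec": {"podTemplate": {"replicas": %d}}}' "$1"
}

# for wf in $(oc get workflow -n <target_namespace> -o name); do
#   oc patch "$wf" -n <target_namespace> --type=merge -p "$(replicas_patch 0)"
# done
```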
10.2.1.4. Backing up the Data Index database
You must back up the Data Index database before upgrading to prevent data loss.
Procedure
Take a full backup of the Data Index database, ensuring:
- The backup includes all database objects and not just table data.
- The backup is stored in a secure location.
10.2.1.5. Backing up the Jobs Service database
You must back up the Jobs Service database before upgrading to maintain job scheduling data.
Procedure
Take a full backup of the Jobs Service database, ensuring:
- The backup includes all database objects and not just table data.
- The backup is stored in a secure location.
10.2.2. Upgrading the OpenShift Serverless Logic Operator to 1.36.0
You can upgrade the OpenShift Serverless Logic Operator from version 1.35.0 to 1.36.0 by performing the following steps.
Prerequisites
- You have access to the cluster with cluster-admin privileges.
- You have OpenShift Serverless Logic Operator installed on your cluster.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have access to the OpenShift Management Console for Operator upgrades.
- You have installed the OpenShift CLI (oc).
- You have version 1.35.0 of OpenShift Serverless Logic Operator installed.
Procedure
Patch the ClusterServiceVersion (CSV) for the 1.35.0 OpenShift Serverless Logic Operator to update the deployment labels by running the following command:
$ oc patch csv logic-operator-rhel8.v1.35.0 \
  -n openshift-serverless-logic \
  --type=json \
  -p='[
    { "op": "replace", "path": "/spec/install/spec/deployments/0/spec/selector/matchLabels", "value": { "app.kubernetes.io/name": "sonataflow-operator" } },
    { "op": "replace", "path": "/spec/install/spec/deployments/0/label", "value": { "app.kubernetes.io/name": "sonataflow-operator" } },
    { "op": "replace", "path": "/spec/install/spec/deployments/0/spec/template/metadata/labels", "value": { "app.kubernetes.io/name": "sonataflow-operator" } }
  ]'
Delete the current Operator deployment by running the following command:
$ oc delete deployment logic-operator-rhel8-controller-manager -n openshift-serverless-logic
- In the web console, navigate to the Operators → OperatorHub → Installed Operators page.
- In the list of installed Operators, find and click the Operator named OpenShift Serverless Logic Operator.
- Initiate the OpenShift Serverless Logic Operator upgrade to version 1.36.0.
Verification
After applying the upgrade, verify that the Operator is running and in the Succeeded phase by running the following command:
$ oc get clusterserviceversion logic-operator-rhel8.v1.36.0
Example output
NAME                           DISPLAY                               VERSION   REPLACES                       PHASE
logic-operator-rhel8.v1.36.0   OpenShift Serverless Logic Operator   1.36.0    logic-operator-rhel8.v1.35.0   Succeeded
10.2.3. Finalizing the upgrade
After upgrading the OpenShift Serverless Logic Operator to version 1.36.0, you must finalize the upgrade process by restoring or scaling workflows and cleaning up old services. This ensures that your system runs cleanly on the new version and that all dependent components are configured correctly.
Follow the appropriate steps below based on the profile of your workflows and services.
Prerequisites
- You have access to the cluster with cluster-admin privileges.
- You have OpenShift Serverless Logic Operator installed on your cluster.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have access to the OpenShift Management Console for Operator upgrades.
- You have installed the OpenShift CLI (oc).
10.2.3.1. Finalizing the Data Index upgrade
After the Operator upgrade, if your deployment is configured to use a Knative Eventing Kafka Broker, you must delete the old data-index-process-definition trigger. Optionally, you can also delete the old Data Index ReplicaSet resource.
Procedure
List all the triggers by running the following command:
$ oc get triggers -n <target_namespace>
Example output
NAME                                                              BROKER           SUBSCRIBER_URI
data-index-jobs-a25c8405-f740-47d2-a9a5-f80ccaec2955              example-broker   http://sonataflow-platform-data-index-service.<target_namespace>.svc.cluster.local/jobs
data-index-process-definition-473e1ddbb3ca1d62768187eb80de99bca   example-broker   http://sonataflow-platform-data-index-service.<target_namespace>.svc.cluster.local/definitions
data-index-process-error-a25c8405-f740-47d2-a9a5-f80ccaec2955     example-broker   http://sonataflow-platform-data-index-service.<target_namespace>.svc.cluster.local/processes
data-index-process-instance-mul07f593476e8c14353a337590e0bfd5ae   example-broker   http://sonataflow-platform-data-index-service.<target_namespace>.svc.cluster.local/processes
data-index-process-node-a25c8405-f740-47d2-a9a5-f80ccaec2955      example-broker   http://sonataflow-platform-data-index-service.<target_namespace>.svc.cluster.local/processes
data-index-process-state-a25c8405-f740-47d2-a9a5-f80ccaec2955     example-broker   http://sonataflow-platform-data-index-service.<target_namespace>.svc.cluster.local/processes
data-index-process-variable-487e9a6777fff650e60097c9e17111aea25   example-broker   http://sonataflow-platform-data-index-service.<target_namespace>.svc.cluster.local/processes
jobs-service-create-job-a25c8405-f740-47d2-a9a5-f80ccaec2955      example-broker   http://sonataflow-platform-jobs-service.<target_namespace>.svc.cluster.local/v2/jobs/events
jobs-service-delete-job-a25c8405-f740-47d2-a9a5-f80ccaec2955      example-broker   http://sonataflow-platform-jobs-service.<target_namespace>.svc.cluster.local/v2/jobs/events
Based on the example output, delete the old data-index-process-definition trigger by running the following command:
$ oc delete trigger data-index-process-definition-473e1ddbb3ca1d62768187eb80de99bca -n <target_namespace>
After deletion, a new trigger compatible with OpenShift Serverless Logic 1.36.0 is automatically created.
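Because the trigger name carries a generated hash suffix, you can select it by name prefix instead of copying it from the listing. This is a sketch layered on the documented commands, not an official procedure.

```shell
# Reads `oc get triggers` output on stdin and prints the names of
# data-index-process-definition triggers (whatever their hash suffix).
stale_definition_triggers() {
  awk 'NR > 1 && $1 ~ /^data-index-process-definition-/ { print $1 }'
}

# oc get triggers -n <target_namespace> \
#   | stale_definition_triggers \
#   | xargs -r -I{} oc delete trigger {} -n <target_namespace>
```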
Optional: Identify the old ReplicaSet resource by running the following command:
$ oc get replicasets -o custom-columns=Name:metadata.name,Image:spec.template.spec.containers[*].image -n <target_namespace>
Example output
Name                                                Image
sonataflow-platform-data-index-service-1111111111   registry.redhat.io/openshift-serverless-1/logic-data-index-postgresql-rhel8:1.35.0
sonataflow-platform-data-index-service-2222222222   registry.redhat.io/openshift-serverless-1/logic-data-index-postgresql-rhel8:1.36.0
Optional: Delete your old ReplicaSet resource by running the following command:
$ oc delete replicaset <old_replicaset_name> -n <target_namespace>
Example command based on the example output
$ oc delete replicaset sonataflow-platform-data-index-service-1111111111 -n <target_namespace>
10.2.3.2. Finalizing the Jobs Service upgrade
After the OpenShift Serverless Logic Operator is upgraded to version 1.36.0, you can optionally delete the old Jobs Service ReplicaSet resource.
Procedure
Identify the old ReplicaSet resource by running the following command:
$ oc get replicasets -o custom-columns=Name:metadata.name,Image:spec.template.spec.containers[*].image -n <target_namespace>
Example output
Name                                          Image
sonataflow-platform-jobs-service-1111111111   registry.redhat.io/openshift-serverless-1/logic-jobs-service-postgresql-rhel8:1.35.0
sonataflow-platform-jobs-service-2222222222   registry.redhat.io/openshift-serverless-1/logic-jobs-service-postgresql-rhel8:1.36.0
Delete your old ReplicaSet resource by running the following command:
$ oc delete replicaset <old_replicaset_name> -n <target_namespace>
Example command based on the example output
$ oc delete replicaset sonataflow-platform-jobs-service-1111111111 -n <target_namespace>
10.2.3.3. Redeploying workflows with the dev profile
After the upgrade, you must redeploy workflows that use the dev profile.
Procedure
- Ensure that all required Kubernetes resources, including the ConfigMap with the application.properties field, are restored before redeploying the workflow.
Redeploy the workflow by running the following command:
$ oc apply -f <workflow_name> -n <target_namespace>
10.2.3.4. Restoring workflows with the preview profile
After the upgrade, you must redeploy workflows that use the preview profile.
Procedure
- Ensure that all required Kubernetes resources, including the ConfigMap with the application.properties field, are restored before redeploying the workflow.
Redeploy the workflow by running the following command:
$ oc apply -f <workflow_name> -n <target_namespace>
10.2.3.5. Scaling up workflows with the gitops profile
To continue operation, you must scale up workflows that you previously scaled down with the gitops profile.
Procedure
Modify the my-workflow.yaml custom resource (CR) and scale up each workflow to 1 as shown in the following example:
spec:
  podTemplate:
    replicas: 1
  # ...
Apply the updated CR by running the following command:
$ oc apply -f my-workflow.yaml -n <target_namespace>
Optional: Scale the workflow back to 1 by running the following command:
$ oc patch workflow <workflow_name> -n <target_namespace> --type=merge -p '{"spec": {"podTemplate": {"replicas": 1}}}'
10.2.4. Verifying the 1.36.0 upgrade
After restoring workflows and services, verify that the upgrade was successful and all components are functioning as expected.
Procedure
Check if all workflows and services are running by entering the following command:
$ oc get pods -n <target_namespace>
Ensure that all pods related to workflows, Data Index, and Jobs Service are in a Running or Completed state.
Verify that the OpenShift Serverless Logic Operator is running correctly by entering the following command:
$ oc get clusterserviceversion logic-operator-rhel8.v1.36.0 -n openshift-serverless-logic
Example output
NAME                           DISPLAY                               VERSION   REPLACES                       PHASE
logic-operator-rhel8.v1.36.0   OpenShift Serverless Logic Operator   1.36.0    logic-operator-rhel8.v1.35.0   Succeeded
Check Operator logs for any errors by entering the following command:
$ oc logs -l control-plane=sonataflow-operator -n openshift-serverless-logic