Serverless Logic


Red Hat OpenShift Serverless 1.36

Introduction to OpenShift Serverless Logic

Red Hat OpenShift Documentation Team

Abstract

This document provides an overview of OpenShift Serverless Logic features.

Chapter 1. Getting started

1.1. Creating and running workflows locally

You can create and run OpenShift Serverless Logic workflows locally.

1.1.1. Creating a workflow

You can use the create command with kn workflow to set up a new OpenShift Serverless Logic project in your current directory.

Prerequisites

  • You have installed the OpenShift Serverless Logic kn-workflow CLI plugin.

Procedure

  1. Create a new OpenShift Serverless Logic workflow project by running the following command:

    $ kn workflow create

    By default, the generated project name is new-project. You can change the project name by using the [-n|--name] flag as follows:

    Example command

    $ kn workflow create --name my-project

1.1.2. Running a workflow locally

You can use the run command with kn workflow to build and run your OpenShift Serverless Logic workflow project in your current directory.

Prerequisites

  • You have installed Podman on your local machine.
  • You have installed the OpenShift Serverless Logic kn-workflow CLI plugin.
  • You have created an OpenShift Serverless Logic workflow project.

Procedure

  1. Navigate to your project directory by running the following command:

    $ cd ./<your-project-name>
  2. Run the following command to build and run your OpenShift Serverless Logic workflow project:

    $ kn workflow run

    When the project is ready, the Development UI automatically opens in your browser at localhost:8080/q/dev-ui, where you can find the Serverless Workflow Tools tile. Alternatively, you can access the tool directly at http://localhost:8080/q/dev-ui/org.apache.kie.sonataflow.sonataflow-quarkus-devui/workflows.

Note

You can execute a workflow locally using a container that runs on your machine. Stop the container with Ctrl+C.

1.2. Deployment options and deploying workflows

You can deploy Serverless Logic workflows on the cluster by using one of the following three deployment profiles:

  • Dev
  • Preview
  • GitOps

Each profile defines how the Operator builds and manages workflow deployments, including image lifecycle, live updates, and reconciliation behavior.
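
You select a profile by setting the sonataflow.org/profile annotation on the SonataFlow custom resource (CR), as the examples in the following sections show. A minimal sketch, assuming a workflow named my-workflow that you want to deploy with the Preview profile:

apiVersion: sonataflow.org/v1alpha08
kind: SonataFlow
metadata:
  name: my-workflow
  annotations:
    sonataflow.org/profile: preview # one of: dev, preview, gitops
spec:
  flow:
#...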

1.2.1. Deploying workflows using the Dev profile

You can deploy your local workflow on OpenShift Container Platform using the Dev profile. You can use this deployment to experiment with and modify your workflow directly on the cluster, seeing changes almost immediately. The Dev profile is designed for development and testing purposes. Because it automatically reloads the workflow without restarting the container, it is ideal for initial development stages and for testing workflow changes in a live environment.

Prerequisites

  • You have the OpenShift Serverless Logic Operator installed on your cluster.
  • You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Create the workflow configuration YAML file.

    Example workflow-dev.yaml file

    apiVersion: sonataflow.org/v1alpha08
    kind: SonataFlow
    metadata:
      name: greeting 1
      annotations:
        sonataflow.org/description: Greeting example on k8s!
        sonataflow.org/version: 0.0.1
        sonataflow.org/profile: dev 2
    spec:
      flow:
        start: ChooseOnLanguage
        functions:
          - name: greetFunction
            type: custom
            operation: sysout
        states:
          - name: ChooseOnLanguage
            type: switch
            dataConditions:
              - condition: "${ .language == \"English\" }"
                transition: GreetInEnglish
              - condition: "${ .language == \"Spanish\" }"
                transition: GreetInSpanish
            defaultCondition: GreetInEnglish
          - name: GreetInEnglish
            type: inject
            data:
              greeting: "Hello from JSON Workflow, "
            transition: GreetPerson
          - name: GreetInSpanish
            type: inject
            data:
              greeting: "Saludos desde JSON Workflow, "
            transition: GreetPerson
          - name: GreetPerson
            type: operation
            actions:
              - name: greetAction
                functionRef:
                  refName: greetFunction
                  arguments:
                    message:  ".greeting + .name"
            end: true
    1
    The workflow name.
    2
    Indicates that you must deploy the workflow by using the Dev profile.
  2. To deploy the application, apply the YAML file by entering the following command:

    $ oc apply -f <filename> -n <your_namespace>
  3. Verify the deployment and check the status of the deployed workflow by entering the following command:

    $ oc get workflow -n <your_namespace> -w

    Ensure that your workflow is listed and the status is Running or Completed.

  4. Edit the workflow directly in the cluster by entering the following command:

    $ oc edit sonataflow <workflow_name> -n <your_namespace>
  5. After editing, save the changes. The OpenShift Serverless Logic Operator detects the changes and updates the workflow accordingly.

Verification

  1. To ensure the changes are applied correctly, verify the status and logs of the workflow by entering the following commands:

    1. View the status of the workflow by running the following command:

      $ oc get sonataflows -n <your_namespace>
    2. View the workflow logs by running the following command:

      $ oc logs <workflow_pod_name> -n <your_namespace>

Next steps

  1. After completing the testing, delete the resources to avoid unnecessary usage by running the following command:

    $ oc delete sonataflow <workflow_name> -n <your_namespace>

1.2.2. Deploying workflows using the Preview profile

You can deploy your local workflow on OpenShift Container Platform using the Preview profile. This profile allows you to validate and test workflows in a production-like environment directly on the cluster. The Preview profile is ideal for final testing and validation before moving workflows to production, as well as for quick iteration without directly managing the build pipeline. It also ensures that workflows run smoothly in a production-like setting.

Prerequisites

  • You have an OpenShift Serverless Logic Operator installed on your cluster.
  • You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
  • You have installed the OpenShift CLI (oc).

To deploy a workflow in the Preview profile, the OpenShift Serverless Logic Operator uses the build system of OpenShift Container Platform, which automatically creates the image for deploying your workflow.

The following sections explain how to build and deploy your workflow on a cluster using the OpenShift Serverless Logic Operator with a SonataFlow custom resource.

1.2.2.1. Configuring workflows in Preview profile

1.2.2.1.1. Replacing the default base builder image

By default, the OpenShift Serverless Logic Operator uses the image distributed in the official Red Hat registry to build the final workflow container image. If your scenario requires strict policies for image use, such as security or hardening constraints, you can replace this default image.

To change the image, edit the SonataFlowPlatform custom resource (CR) in the namespace where you deployed your workflows.

Prerequisites

  • You have an OpenShift Serverless Logic Operator installed on your cluster.
  • You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. List the SonataFlowPlatform resources in your namespace by running the following command:

    $ oc get sonataflowplatform -n <your_namespace> 1

    1
    Replace <your_namespace> with the name of your namespace.
  2. Patch the SonataFlowPlatform resource with the new builder image by running the following command:

    $ oc patch sonataflowplatform <name> --type merge -p '{"spec": {"build": {"config": {"baseImage": "<your_new_image_full_name_with_tag>"}}}}' -n <your_namespace>

Verification

  1. Verify that the SonataFlowPlatform CR has been patched correctly by running the following command:

    $ oc describe sonataflowplatform <name> -n <your_namespace> 1

    1
    Replace <name> with the name of your SonataFlowPlatform resource and <your_namespace> with the name of your namespace.

    Ensure that the baseImage field under spec.build.config reflects the new image.

1.2.2.1.2. Customizing the Dockerfile

The OpenShift Serverless Logic Operator uses the logic-operator-rhel8-builder-config config map in its installation namespace, openshift-serverless-logic, to configure and run the workflow build process. You can change the Dockerfile entry in this config map to adjust the Dockerfile to your needs.

Important

Modifying the Dockerfile can break the build process.

Note

This example is for reference only. The actual version might be slightly different. Do not use this example for your installation.

Example logic-operator-rhel8-builder-config config map CR

apiVersion: v1
data:
  DEFAULT_WORKFLOW_EXTENSION: .sw.json
  Dockerfile: |
    FROM registry.redhat.io/openshift-serverless-1/logic-swf-builder-rhel8:1.33.0 AS builder

    # Variables that can be overridden by the builder
    # To add a Quarkus extension to your application
    ARG QUARKUS_EXTENSIONS
    # Args to pass to the Quarkus CLI add extension command
    ARG QUARKUS_ADD_EXTENSION_ARGS
    # Additional java/mvn arguments to pass to the builder
    ARG MAVEN_ARGS_APPEND

    # Copy from build context to skeleton resources project
    COPY --chown=1001 . ./resources

    RUN /home/kogito/launch/build-app.sh ./resources

    #=============================
    # Runtime Run
    #=============================
    FROM registry.access.redhat.com/ubi9/openjdk-17:latest

    ENV LANG='en_US.UTF-8' LANGUAGE='en_US:en'

    # We make four distinct layers so if there are application changes, the library layers can be re-used
    COPY --from=builder --chown=185 /home/kogito/serverless-workflow-project/target/quarkus-app/lib/ /deployments/lib/
    COPY --from=builder --chown=185 /home/kogito/serverless-workflow-project/target/quarkus-app/*.jar /deployments/
    COPY --from=builder --chown=185 /home/kogito/serverless-workflow-project/target/quarkus-app/app/ /deployments/app/
    COPY --from=builder --chown=185 /home/kogito/serverless-workflow-project/target/quarkus-app/quarkus/ /deployments/quarkus/

    EXPOSE 8080
    USER 185
    ENV AB_JOLOKIA_OFF=""
    ENV JAVA_OPTS="-Dquarkus.http.host=0.0.0.0 -Djava.util.logging.manager=org.jboss.logmanager.LogManager"
    ENV JAVA_APP_JAR="/deployments/quarkus-run.jar"
kind: ConfigMap
metadata:
  name: sonataflow-operator-builder-config
  namespace: sonataflow-operator-system

1.2.2.1.3. Changing resource requirements

You can specify resource requirements for the internal builder pods by creating or editing a SonataFlowPlatform resource in the workflow namespace.

Example SonataFlowPlatform resource

apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: sonataflow-platform
spec:
  build:
    template:
      resources:
        requests:
          memory: "64Mi"
          cpu: "250m"
        limits:
          memory: "128Mi"
          cpu: "500m"

Note

Only one SonataFlowPlatform resource is allowed per namespace. Fetch and edit the resource that the OpenShift Serverless Logic Operator created for you instead of trying to create another resource.
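
For example, you can locate and then open the existing resource for editing by running the following commands:

$ oc get sonataflowplatform -n <your_namespace>
$ oc edit sonataflowplatform <name> -n <your_namespace>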

You can fine-tune the resource requirements for a particular workflow. Each workflow instance has a SonataFlowBuild instance created with the same name as the workflow. You can edit the SonataFlowBuild custom resource (CR) and specify the parameters as follows:

Example of SonataFlowBuild CR

apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowBuild
metadata:
  name: my-workflow
spec:
  resources:
    requests:
      memory: "64Mi"
      cpu: "250m"
    limits:
      memory: "128Mi"
      cpu: "500m"

These parameters apply only to new build instances.
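
To apply updated parameters to an existing workflow, you can trigger a new build, for example by adding the sonataflow.org/restartBuild annotation described in "Restarting a build". A minimal sketch using the standard oc annotate command, assuming your build instance name and namespace:

$ oc annotate sonataflowbuild/<name> sonataflow.org/restartBuild=true -n <namespace> --overwrite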

1.2.2.1.4. Passing build arguments to the internal builder

You can customize the build process by passing build arguments to the SonataFlowBuild instance or by setting default build arguments in the SonataFlowPlatform resource.

Prerequisites

  • You have an OpenShift Serverless Logic Operator installed on your cluster.
  • You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Check for the existing SonataFlowBuild instance by running the following command:

    $ oc get sonataflowbuild <name> -n <namespace> 1

    1
    Replace <name> with the name of your SonataFlowBuild instance and <namespace> with your namespace.
  2. Add build arguments to the SonataFlowBuild instance by running the following command:

    $ oc edit sonataflowbuild <name> -n <namespace>
  3. Add the desired build arguments under the .spec.buildArgs field of the SonataFlowBuild instance:

    apiVersion: sonataflow.org/v1alpha08
    kind: SonataFlowBuild
    metadata:
      name: <name> 1
    spec:
      buildArgs:
        - name: <argument_1>
          value: <value_1>
        - name: <argument_2>
          value: <value_2>

    1
    The name of the existing SonataFlowBuild instance.
  4. Save the file and exit.

    A new build with the updated configuration starts.

  5. Set the default build arguments in the SonataFlowPlatform resource by running the following command:

    $ oc edit sonataflowplatform <name> -n <namespace>
  6. Add the desired build arguments under the .spec.buildArgs field of the SonataFlowPlatform resource:

    apiVersion: sonataflow.org/v1alpha08
    kind: SonataFlowPlatform
    metadata:
      name: <name> 1
    spec:
      build:
        template:
          buildArgs:
            - name: <argument_1>
              value: <value_1>
            - name: <argument_2>
              value: <value_2>

    1
    The name of the existing SonataFlowPlatform resource.
  7. Save the file and exit.

1.2.2.1.5. Setting environment variables in the internal builder pod

You can set environment variables in the SonataFlowBuild internal builder pod. These variables are valid for the build context only and are not set in the final built workflow image.

Prerequisites

  • You have an OpenShift Serverless Logic Operator installed on your cluster.
  • You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Check for an existing SonataFlowBuild instance by running the following command:

    $ oc get sonataflowbuild <name> -n <namespace>

    Replace <name> with the name of your SonataFlowBuild instance and <namespace> with your namespace.

  2. Edit the SonataFlowBuild instance by running the following command:

    $ oc edit sonataflowbuild <name> -n <namespace>

    Example SonataFlowBuild instance

    apiVersion: sonataflow.org/v1alpha08
    kind: SonataFlowBuild
    metadata:
      name: <name>
    spec:
      envs:
        - name: <env_variable_1>
          value: <value_1>
        - name: <env_variable_2>
          value: <value_2>

  3. Save the file and exit.

    A new build with the updated configuration starts.

    Alternatively, you can set the environment variables in the SonataFlowPlatform resource so that every new build instance uses them as a template.

    Example SonataFlowPlatform instance

    apiVersion: sonataflow.org/v1alpha08
    kind: SonataFlowPlatform
    metadata:
      name: <name>
    spec:
      build:
        template:
          envs:
            - name: <env_variable_1>
              value: <value_1>
            - name: <env_variable_2>
              value: <value_2>

1.2.2.1.6. Changing the base builder image

You can modify the default builder image used by the OpenShift Serverless Logic Operator by editing the logic-operator-rhel8-builder-config config map.

Prerequisites

  • You have an OpenShift Serverless Logic Operator installed on your cluster.
  • You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Edit the logic-operator-rhel8-builder-config config map by running the following command:

    $ oc edit cm/logic-operator-rhel8-builder-config -n openshift-serverless-logic
  2. Modify the Dockerfile entry.

    In your editor, locate the Dockerfile entry and change the first line to the desired image.

    Example

    data:
      Dockerfile: |
        FROM registry.redhat.io/openshift-serverless-1/logic-swf-builder-rhel8:1.33.0
        # Change the image to the desired one

  3. Save the changes.

1.2.2.2. Building and deploying your workflow

You can create a SonataFlow custom resource (CR) on OpenShift Container Platform, and the OpenShift Serverless Logic Operator builds and deploys the workflow.

Prerequisites

  • You have an OpenShift Serverless Logic Operator installed on your cluster.
  • You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Create a workflow YAML file similar to the following:

    apiVersion: sonataflow.org/v1alpha08
    kind: SonataFlow
    metadata:
      name: greeting
      annotations:
        sonataflow.org/description: Greeting example on k8s!
        sonataflow.org/version: 0.0.1
    spec:
      flow:
        start: ChooseOnLanguage
        functions:
          - name: greetFunction
            type: custom
            operation: sysout
        states:
          - name: ChooseOnLanguage
            type: switch
            dataConditions:
              - condition: "${ .language == \"English\" }"
                transition: GreetInEnglish
              - condition: "${ .language == \"Spanish\" }"
                transition: GreetInSpanish
            defaultCondition: GreetInEnglish
          - name: GreetInEnglish
            type: inject
            data:
              greeting: "Hello from JSON Workflow, "
            transition: GreetPerson
          - name: GreetInSpanish
            type: inject
            data:
              greeting: "Saludos desde JSON Workflow, "
            transition: GreetPerson
          - name: GreetPerson
            type: operation
            actions:
              - name: greetAction
                functionRef:
                  refName: greetFunction
                  arguments:
                    message:  ".greeting+.name"
            end: true
  2. Apply the SonataFlow workflow definition to your OpenShift Container Platform namespace by running the following command:

    $ oc apply -f <workflow-name>.yaml -n <your_namespace>

    Example command for the greetings-workflow.yaml file:

    $ oc apply -f greetings-workflow.yaml -n workflows

  3. List all the build configurations by running the following command:

    $ oc get buildconfigs -n workflows
  4. Get the logs of the build process by running the following command:

    $ oc logs buildconfig/<workflow-name> -n <your_namespace>

    Example command for the greetings-workflow.yaml file:

    $ oc logs buildconfig/greeting -n workflows

Verification

  1. To verify the deployment, list all the pods by running the following command:

    $ oc get pods -n <your_namespace>

    Ensure that the pod corresponding to your workflow is running.

  2. Check the running pods and their logs by running the following command:

    $ oc logs pod/<pod-name> -n workflows

1.2.2.3. Verifying workflow deployment

You can verify that your OpenShift Serverless Logic workflow is running by performing a test HTTP call from the workflow pod.

Prerequisites

  • You have an OpenShift Serverless Logic Operator installed on your cluster.
  • You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Create a workflow YAML file similar to the following:

    apiVersion: sonataflow.org/v1alpha08
    kind: SonataFlow
    metadata:
      name: greeting
      annotations:
        sonataflow.org/description: Greeting example on k8s!
        sonataflow.org/version: 0.0.1
    spec:
      flow:
        start: ChooseOnLanguage
        functions:
          - name: greetFunction
            type: custom
            operation: sysout
        states:
          - name: ChooseOnLanguage
            type: switch
            dataConditions:
              - condition: "${ .language == \"English\" }"
                transition: GreetInEnglish
              - condition: "${ .language == \"Spanish\" }"
                transition: GreetInSpanish
            defaultCondition: GreetInEnglish
          - name: GreetInEnglish
            type: inject
            data:
              greeting: "Hello from JSON Workflow, "
            transition: GreetPerson
          - name: GreetInSpanish
            type: inject
            data:
              greeting: "Saludos desde JSON Workflow, "
            transition: GreetPerson
          - name: GreetPerson
            type: operation
            actions:
              - name: greetAction
                functionRef:
                  refName: greetFunction
                  arguments:
                    message:  ".greeting+.name"
            end: true
    Copy to Clipboard Toggle word wrap
  2. Create a route for the workflow service by running the following command:

    $ oc expose svc/<workflow-service-name> -n workflows

    This command creates a public URL to access the workflow service.

  3. Set an environment variable for the public URL by running the following command:

    $ WORKFLOW_SVC=$(oc get route/<workflow-service-name> -n <namespace> --template='{{.spec.host}}')
  4. Make an HTTP call to the workflow to send a POST request to the service by running the following command:

    $ curl -X POST -H 'Content-Type: application/json' -H 'Accept: application/json' -d '<your_json_payload>' http://$WORKFLOW_SVC/<endpoint>

    Example output

    {
      "id": "b5fbfaa3-b125-4e6c-9311-fe5a3577efdd",
      "workflowdata": {
        "name": "John",
        "language": "English",
        "greeting": "Hello from JSON Workflow, "
      }
    }

    This output shows an example of the expected response if the workflow is running.

1.2.2.4. Restarting a build

To restart a build, you can add or edit the sonataflow.org/restartBuild: true annotation in the SonataFlowBuild instance. Restarting a build is necessary if there is a problem with your workflow or the initial build revision.

Prerequisites

  • You have an OpenShift Serverless Logic Operator installed on your cluster.
  • You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Check if the SonataFlowBuild instance exists by running the following command:

    $ oc get sonataflowbuild <name> -n <namespace>
  2. Edit the SonataFlowBuild instance by running the following command:

    $ oc edit sonataflowbuild/<name> -n <namespace>

    Replace <name> with the name of your SonataFlowBuild instance and <namespace> with the namespace where your workflow is deployed.

  3. Add the sonataflow.org/restartBuild: true annotation to restart the build.

    apiVersion: sonataflow.org/v1alpha08
    kind: SonataFlowBuild
    metadata:
      name: <name>
      annotations:
        sonataflow.org/restartBuild: "true"

    This action triggers the OpenShift Serverless Logic Operator to start a new build of the workflow.

  4. To monitor the build process, check the build logs by running the following command:

    $ oc logs buildconfig/<name> -n <namespace>

    Replace <name> with the name of your SonataFlowBuild instance and <namespace> with the namespace where your workflow is deployed.

1.2.3. Deploying workflows using the GitOps profile

You can deploy your local workflow on OpenShift Container Platform using the GitOps profile. The GitOps profile provides full control over the workflow container image by allowing you to build and manage the image externally, typically through a CI/CD pipeline such as ArgoCD or Tekton. When a container image is defined in the SonataFlow custom resource (CR), the Operator assumes that the GitOps profile is being used and does not attempt to build or modify the image in any way.

Important

Use the GitOps profile only for production deployments. For development, rapid iteration, or testing, use the Dev or Preview profiles instead.

Prerequisites

  • You have the OpenShift Serverless Logic Operator installed on your cluster.
  • You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Specify your container image in your SonataFlow CR:

    Example SonataFlow CR with set GitOps profile

    apiVersion: sonataflow.org/v1alpha08
    kind: SonataFlow
    metadata:
      annotations:
        sonataflow.org/profile: gitops
      name: workflow_name
    spec:
      flow: 1
    #...
      podTemplate:
        container:
          image: your-registry/workflow_name:tag
    #...

    1
    The flow definition must match the workflow definition used during the build process. When you deploy your workflow using the GitOps profile, the Operator compares this definition with the workflow files embedded in the container image. If the definition and files do not match, the deployment fails.
  2. Apply your CR to deploy the workflow:

    $ oc apply -f <filename>

1.2.4. Editing a workflow

When the OpenShift Serverless Logic Operator deploys a workflow service, it creates two config maps to store runtime properties:

  • User properties: Defined in a ConfigMap named after the SonataFlow object with the suffix -props. For example, if your workflow name is greeting, then the ConfigMap name is greeting-props.
  • Managed properties: Defined in a ConfigMap named after the SonataFlow object with the suffix -managed-props. For example, if your workflow name is greeting, then the ConfigMap name is greeting-managed-props.
Note

Managed properties always override any user property with the same key name and cannot be edited by the user. Any change would be overwritten by the Operator at the next reconciliation cycle.

Prerequisites

  • You have the OpenShift Serverless Logic Operator installed on your cluster.
  • You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Open and edit the ConfigMap by running the following command:

    $ oc edit cm <workflow_name>-props -n <namespace>

    Replace <workflow_name> with the name of your workflow and <namespace> with the namespace where your workflow is deployed.

  2. Add the properties in the application.properties section.

    Example of workflow properties stored within a ConfigMap:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      labels:
        app: greeting
      name: greeting-props
      namespace: default
    data:
      application.properties: |
        my.properties.key = any-value

    Ensure the properties are correctly formatted to prevent the Operator from replacing your configuration with the default one.

  3. After making the necessary changes, save the file and exit the editor.

1.2.5. Testing a workflow

To verify that your OpenShift Serverless Logic workflow is running correctly, you can perform a test HTTP call from the relevant pod.

Prerequisites

  • You have the OpenShift Serverless Logic Operator installed on your cluster.
  • You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Create a route for the specified service in your namespace by running the following command:

    $ oc expose svc <service_name> -n <namespace>
  2. Fetch the URL for the newly exposed service by running the following command:

    $ WORKFLOW_SVC=$(oc get route/<service_name> --template='{{.spec.host}}')
  3. Perform a test HTTP call and send a POST request by running the following command:

    $ curl -X POST -H 'Content-Type:application/json' -H 'Accept:application/json' -d '<request_body>' http://$WORKFLOW_SVC/<endpoint>
  4. Verify the response to ensure the workflow is functioning as expected.
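
For example, for the greeting workflow used earlier in this document, a test call might look as follows. The /greeting endpoint path and the payload fields are illustrative assumptions; adjust them to match your workflow:

$ curl -X POST -H 'Content-Type:application/json' -H 'Accept:application/json' -d '{"name": "John", "language": "English"}' http://$WORKFLOW_SVC/greeting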

1.2.6. Troubleshooting a workflow

The OpenShift Serverless Logic Operator deploys its pod with health check probes to ensure the workflow runs in a healthy state. If changes cause these health checks to fail, the pod stops responding.

Prerequisites

  • You have the OpenShift Serverless Logic Operator installed on your cluster.
  • You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Check the workflow status by running the following command:

    $ oc get workflow <name> -o jsonpath={.status.conditions} | jq .
  2. To fetch and analyze the logs from the workflow’s deployment, run the following command:

    $ oc logs deployment/<workflow_name> -f
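
If the health checks are failing, you can also inspect the pod events for probe failures by using a standard OpenShift command. A minimal sketch, assuming you know the workflow pod name:

$ oc describe pod <workflow_pod_name> -n <namespace>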

1.2.7. Deleting a workflow

You can use the oc delete command to delete your OpenShift Serverless Logic workflow by using the file that defines it.

Prerequisites

  • You have the OpenShift Serverless Logic Operator installed on your cluster.
  • You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Verify that you have the correct file that defines the workflow you want to delete. For example, workflow.yaml.
  2. Run the oc delete command to remove the workflow from your specified namespace:

    $ oc delete -f <your_file> -n <your_namespace>

    Replace <your_file> with the name of your workflow file and <your_namespace> with your namespace.
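
Alternatively, if you no longer have the original file, you can delete the SonataFlow resource by name, as shown in the Dev profile section:

$ oc delete sonataflow <workflow_name> -n <your_namespace>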

Chapter 2. Global configuration settings

You can set global configuration options for the OpenShift Serverless Logic Operator.

2.1. Prerequisites

  • You have installed the OpenShift Serverless Logic Operator in the target cluster.

2.2. Customization of global configurations

After installing the OpenShift Serverless Logic Operator, you can access the logic-operator-rhel8-controllers-config config map in the openshift-serverless-logic namespace. This configuration defines how the Operator behaves when it creates new resources in the cluster. However, changes to this configuration do not affect resources that already exist.

You can modify any of the options within the controllers_cfg.yaml key in the config map.

The following entries outline all the available global configuration options. Each entry lists the configuration key, followed by its default value and a description:

defaultPvcKanikoSize

1Gi

The default size of Kaniko persistent volume claim (PVC) when using the internal OpenShift Serverless Logic Operator builder manager.

healthFailureThresholdDevMode

50

The time (in seconds) to wait for a developer mode workflow to start. The controller manager uses this information to create new developer mode containers and set up the health check probes.

kanikoDefaultWarmerImageTag

gcr.io/kaniko-project/warmer:v1.9.0

Default image used internally by the Operator managed Kaniko builder to create the warmup pods.

kanikoExecutorImageTag

gcr.io/kaniko-project/executor:v1.9.0

Default image used internally by the Operator managed Kaniko builder to create the executor pods.

jobsServicePostgreSQLImageTag

registry.redhat.io/openshift-serverless-1/logic-jobs-service-postgresql-rhel8:1.36.0

The Jobs service image for PostgreSQL to use. If empty, the OpenShift Serverless Logic Operator uses the default Apache community image based on the current OpenShift Serverless Logic Operator version.

jobsServiceEphemeralImageTag

registry.redhat.io/openshift-serverless-1/logic-jobs-service-ephemeral-rhel8:1.36.0

The Jobs service image without persistence to use. If empty, the OpenShift Serverless Logic Operator uses the default Apache community image based on the current OpenShift Serverless Logic Operator version.

dataIndexPostgreSQLImageTag

registry.redhat.io/openshift-serverless-1/logic-data-index-postgresql-rhel8:1.36.0

The Data Index service image for PostgreSQL to use. If empty, the OpenShift Serverless Logic Operator uses the default Apache community image based on the current OpenShift Serverless Logic Operator version.

dataIndexEphemeralImageTag

registry.redhat.io/openshift-serverless-1/logic-data-index-ephemeral-rhel8:1.36.0

The Data Index service image without persistence to use. If empty, the OpenShift Serverless Logic Operator uses the default Apache community image based on the current OpenShift Serverless Logic Operator version.

sonataFlowBaseBuilderImageTag

registry.redhat.io/openshift-serverless-1/logic-swf-builder-rhel8:1.36.0

OpenShift Serverless Logic base builder image used in the internal Dockerfile to build workflow applications in preview profile. If empty, the OpenShift Serverless Logic Operator uses the default Apache community image based on the current OpenShift Serverless Logic Operator version.

sonataFlowDevModeImageTag

registry.redhat.io/openshift-serverless-1/logic-swf-devmode-rhel8:1.36.0

The image to use to deploy OpenShift Serverless Logic workflow images in devmode profile. If empty, the OpenShift Serverless Logic Operator uses the default Apache community image based on the current OpenShift Serverless Logic Operator version.

builderConfigMapName

logic-operator-rhel8-builder-config

The default name of the builder config map in the OpenShift Serverless Logic Operator namespace.

postgreSQLPersistenceExtensions

(a list of Quarkus persistence extensions)

Quarkus extensions required for workflow persistence. The OpenShift Serverless Logic Operator builder uses these extensions when the workflow being built has PostgreSQL persistence configured.

kogitoEventsGrouping

true

When set to true, configures every workflow deployment with the gitops or preview profiles to send accumulated workflow status change events to the Data Index service, reducing the number of produced events. You can set the value to false to send individual events.

kogitoEventsGroupingBinary

true

When set to true, the accumulated workflow status change events are sent in binary mode, reducing the size of the produced events. You can set the value to false to send plain JSON events.

kogitoEventsGroupingCompress

false

When set to true, the accumulated workflow status change events, if sent in binary mode, are zipped at the cost of some performance.

You can edit these options by updating the logic-operator-rhel8-controllers-config config map by using the oc command-line tool.
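
For example, you can open this config map for editing by running the following command:

$ oc edit cm/logic-operator-rhel8-controllers-config -n openshift-serverless-logic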

2.2.1. Impact of global configuration changes

When you update the global configurations, the changes immediately affect only newly created resources. For example, if you change the sonataFlowDevModeImageTag property and already have a workflow deployed in dev mode, the OpenShift Serverless Logic Operator does not roll out a new deployment with the updated image configuration. Only new deployments reflect the changes.

2.2.2. Customizing the base builder image

You can directly change the base builder image in the Dockerfile used by the OpenShift Serverless Logic Operator.

Additionally, you can specify the base builder image in the SonataFlowPlatform configuration within the current namespace. This ensures that the specified base image is used exclusively in the given namespace.

Example of SonataFlowPlatform with a custom base builder image

apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: sonataflow-platform
spec:
  build:
    config:
      baseImage: dev.local/my-workflow-builder:1.0.0

Alternatively, you can also modify the base builder image in the global configuration config map as shown in the following example:

Example of ConfigMap with a custom base builder image

apiVersion: v1
data:
  controllers_cfg.yaml: |
    sonataFlowBaseBuilderImageTag: dev.local/my-workflow-builder:1.0.0
kind: ConfigMap
metadata:
  name: logic-operator-rhel8-controllers-config
  namespace: openshift-serverless-logic

When you customize the base builder image, the following order of precedence applies:

  1. The SonataFlowPlatform configuration in the current context.
  2. The global configuration entry in the ConfigMap resource.
  3. The FROM clause in the Dockerfile within the OpenShift Serverless Logic Operator namespace, defined in the logic-operator-rhel8-builder-config config map.

The entry in the SonataFlowPlatform configuration always overrides any other value.

Chapter 3. Managing services

3.1. Configuring OpenAPI services

The OpenAPI Specification (OAS) defines a standard, programming language-agnostic interface for HTTP APIs. You can understand a service’s capabilities without access to the source code, additional documentation, or network traffic inspection. When you define a service by using OpenAPI, you can understand and interact with it using minimal implementation logic. Just as interface descriptions simplify lower-level programming, the OpenAPI Specification eliminates guesswork in calling a service.

3.1.1. OpenAPI function definition

OpenShift Serverless Logic allows workflows to interact with remote services by using an OpenAPI specification reference in a function.

Example OpenAPI function definition

{
   "functions": [
      {
         "name": "myFunction1",
         "operation": "specs/myopenapi-file.yaml#myFunction1"
      }
   ]
}

The operation attribute is a string composed of the following parameters:

  • URI: The engine uses this to locate the specification file.
  • Operation identifier: You can find this identifier in the OpenAPI specification file.

OpenShift Serverless Logic supports the following URI schemes:

  • file: Use this for files located in the file system.
  • http or https: Use these for remotely located files.
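
For example, the following function definitions reference one specification file that is resolved locally and one that is fetched from a remote host. The file path and host names are illustrative assumptions:

{
   "functions": [
      {
         "name": "localFunction",
         "operation": "specs/myopenapi-file.yaml#myFunction1"
      },
      {
         "name": "remoteFunction",
         "operation": "https://my.remote.host/specs/myopenapi-file.yaml#myFunction2"
      }
   ]
}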

Ensure that the OpenAPI specification files are available at build time. OpenShift Serverless Logic uses an internal code generation feature to send requests at runtime. After you build the application image, OpenShift Serverless Logic no longer has access to these files.

If the OpenAPI service you want to add to the workflow does not have a specification file, you can either create one or update the service to generate and expose the file.

To send REST requests that are based on the OpenAPI specification files, you must perform the following procedures:

  • Define the function references
  • Access the defined functions in the workflow states

Prerequisites

  • You have the OpenShift Serverless Logic Operator installed on your cluster.
  • You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
  • You have access to the OpenAPI specification files.

Procedure

  1. To define the OpenAPI functions:

    1. Identify and access the OpenAPI specification files for the services you intend to invoke.
    2. Copy the OpenAPI specification files into your workflow service directory, such as <project_application_dir>/specs.

      The following example shows the OpenAPI specification for the multiplication REST service:

      Example multiplication REST service OpenAPI specification

      openapi: 3.0.3
      info:
        title: Generated API
        version: "1.0"
      paths:
        /:
          post:
            operationId: doOperation
            parameters:
              - in: header
                name: notUsed
                schema:
                  type: string
                required: false
            requestBody:
              content:
                application/json:
                  schema:
                    $ref: '#/components/schemas/MultiplicationOperation'
            responses:
              "200":
                description: OK
                content:
                  application/json:
                    schema:
                      type: object
                      properties:
                        product:
                          format: float
                          type: number
      components:
        schemas:
          MultiplicationOperation:
            type: object
            properties:
              leftElement:
                format: float
                type: number
              rightElement:
                format: float
                type: number

    3. To define functions in the workflow, use the operationId from the OpenAPI specification to reference the desired operations in your function definitions.

      Example function definitions in the temperature conversion application

      {
         "functions": [
           {
             "name": "multiplication",
             "operation": "specs/multiplication.yaml#doOperation"
           },
           {
             "name": "subtraction",
             "operation": "specs/subtraction.yaml#doOperation"
           }
         ]
      }

    4. Ensure that your function definitions reference the correct paths to the OpenAPI files stored in the <project_application_dir>/specs directory.
  2. To access the defined functions in the workflow states:

    1. Define workflow actions to call the function definitions you added. Ensure each action references a function defined earlier.
    2. Use the functionRef attribute to refer to the specific function by its name. Map the arguments in the functionRef using the parameters defined in the OpenAPI specification.

      The following example shows how to map the workflow data to the function arguments:

      Example for mapping function arguments in workflow

      {
         "states": [
          {
            "name": "SetConstants",
            "type": "inject",
            "data": {
              "subtractValue": 32.0,
              "multiplyValue": 0.5556
            },
            "transition": "Computation"
          },
          {
            "name": "Computation",
            "actionMode": "sequential",
            "type": "operation",
            "actions": [
              {
                "name": "subtract",
                "functionRef": {
                  "refName": "subtraction",
                  "arguments": {
                    "leftElement": ".fahrenheit",
                    "rightElement": ".subtractValue"
                  }
                }
              },
              {
                "name": "multiply",
                "functionRef": {
                  "refName": "multiplication",
                  "arguments": {
                     "leftElement": ".difference",
                     "rightElement": ".multiplyValue"
                  }
                }
              }
            ],
            "end": {
              "terminate": true
            }
          }
        ]
      }

    3. Check the Operation Object section of the OpenAPI specification to understand how to structure parameters in the request.
    4. Use jq expressions to extract data from the payload and map it to the required parameters. Ensure the engine maps parameter names according to the OpenAPI specification.
    5. For operations requiring parameters in the request path instead of the body, refer to the parameter definitions in the OpenAPI specification.

      For more information about mapping parameters in the request path instead of request body, you can refer to the following PetStore API example:

      Example for mapping path parameters

      {
        "/pet/{petId}": {
          "get": {
            "tags": ["pet"],
            "summary": "Find pet by ID",
            "description": "Returns a single pet",
            "operationId": "getPetById",
            "parameters": [
              {
                "name": "petId",
                "in": "path",
                "description": "ID of pet to return",
                "required": true,
                "schema": {
                  "type": "integer",
                  "format": "int64"
                }
              }
            ]
          }
        }
      }

      Following is an example invocation of a function, in which only one parameter named petId is added in the request path:

      Example of calling the PetStore function

      {
        "name": "CallPetStore", 1
        "actionMode": "sequential",
        "type": "operation",
        "actions": [
          {
            "name": "getPet",
            "functionRef": {
              "refName": "getPetById", 2
              "arguments": { 3
                "petId": ".petId"
              }
            }
          }
        ]
      }

      1
      State definition, such as CallPetStore.
      2
      Function definition reference. In the previous example, the function definition getPetById is for PetStore OpenAPI specification.
      3
      Arguments definition. OpenShift Serverless Logic adds the argument petId to the request path before sending a request.

3.1.2. Configuring the endpoint URL of OpenAPI services

After accessing the function definitions in your workflow states, you can configure the endpoint URL of OpenAPI services.

Prerequisites

  • You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
  • You have created your OpenShift Serverless Logic project.
  • You have access to the OpenAPI specification files.
  • You have defined the function definitions in the workflow.
  • You have the access to the defined functions in the workflow states.

Procedure

  1. Locate the OpenAPI specification file you want to configure. For example, subtraction.yaml.
  2. Convert the file name into a valid configuration key by replacing special characters, such as ., with underscores and converting letters to lowercase. For example, change subtraction.yaml to subtraction_yaml.
  3. To define the configuration key, use the converted file name as the REST client configuration key. Set this key as an environment variable, as shown in the following example:

    quarkus.rest-client.subtraction_yaml.url=http://myserver.com
  4. To prevent hardcoding URLs in the application.properties file, use environment variable substitution, as shown in the following example:

    quarkus.rest-client.subtraction_yaml.url=${SUBTRACTION_URL:http://myserver.com}

    In this example:

    • Configuration Key: quarkus.rest-client.subtraction_yaml.url
    • Environment variable: SUBTRACTION_URL
    • Fallback URL: http://myserver.com
  5. Ensure that the SUBTRACTION_URL environment variable is set in your system or deployment environment; one way to set it on a cluster deployment is shown in the sketch after this procedure. If the variable is not found, the application uses the fallback URL (http://myserver.com).
  6. Add the configuration key and URL substitution to the application.properties file:

    quarkus.rest-client.subtraction_yaml.url=${SUBTRACTION_URL:http://myserver.com}
  7. Deploy or restart your application to apply the new configuration settings.
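
As referenced in the procedure, if your workflow runs on the cluster, one way to provide the environment variable is with the standard oc set env command. A minimal sketch, assuming the workflow deployment name and a hypothetical subtraction service URL:

$ oc set env deployment/<workflow_name> SUBTRACTION_URL=http://subtraction-service:8080 -n <namespace>  # subtraction-service URL is hypothetical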

3.2. Configuring OpenAPI services endpoints

OpenShift Serverless Logic uses the kogito.sw.operationIdStrategy property to generate the REST client for invoking services defined in OpenAPI documents. This property determines how the configuration key is derived for the REST client configuration.

The kogito.sw.operationIdStrategy property supports the following values: FILE_NAME, FULL_URI, FUNCTION_NAME, and SPEC_TITLE.
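
You set this property in the application.properties file of your workflow application. For example, a minimal sketch that selects the FULL_URI strategy:

kogito.sw.operationIdStrategy=FULL_URI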

FILE_NAME

OpenShift Serverless Logic uses the OpenAPI document file name to create the configuration key. The key is based on the file name, where special characters are replaced with underscores.

Example configuration:

quarkus.rest-client.stock_portfolio_svc_yaml.url=http://localhost:8282/ 1

1
The OpenAPI file path is <project_application_dir>/specs/stock-portfolio-svc.yaml. The generated key that configures the URL for the REST client is stock_portfolio_svc_yaml.
FULL_URI

OpenShift Serverless Logic uses the complete URI path of the OpenAPI document as the configuration key. The full URI is sanitized to form the key.

Example for Serverless Workflow

{
    "id": "myworkflow",
    "functions": [
        {
          "name": "myfunction",
          "operation": "https://my.remote.host/apicatalog/apis/123/document"
        }
    ]
}

Example configuration:

quarkus.rest-client.apicatalog_apis_123_document.url=http://localhost:8282/ 1

1
The URI path is https://my.remote.host/apicatalog/apis/123/document. The generated key that configures the URL for the REST client is apicatalog_apis_123_document.
FUNCTION_NAME

OpenShift Serverless Logic combines the workflow ID and the function name referencing the OpenAPI document to generate the configuration key.

Example for Serverless Workflow

{
    "id": "myworkflow",
    "functions": [
        {
          "name": "myfunction",
          "operation": "https://my.remote.host/apicatalog/apis/123/document"
        }
    ]
}

Example configuration:

quarkus.rest-client.myworkflow_myfunction.url=http://localhost:8282/ 1

1
The workflow ID is myworkflow. The function name is myfunction. The generated key that configures the URL for the REST client is myworkflow_myfunction.
SPEC_TITLE

OpenShift Serverless Logic uses the info.title value from the OpenAPI document to create the configuration key. The title is sanitized to form the key.

Example for OpenAPI document

openapi: 3.0.3
info:
  title: stock-service API
  version: 2.0.0-SNAPSHOT
paths:
  /stock-price/{symbol}:
...

Example configuration:

quarkus.rest-client.stock-service_API.url=http://localhost:8282/ 1

1
The OpenAPI document title is stock-service API. The generated key that configures the URL for the REST client is stock-service_API.

3.2.1. Using URI alias

As an alternative to the kogito.sw.operationIdStrategy property, you can assign an alias to a URI by using the workflow-uri-definitions custom extension. This alias simplifies the configuration process and can be used as a configuration key in REST client settings and function definitions.

The workflow-uri-definitions extension allows you to map a URI to an alias, which you can reference throughout the workflow and in your configuration files. This approach provides a centralized way to manage URIs and their configurations.

Prerequisites

  • You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
  • You have access to the OpenAPI specification files.

Procedure

  1. Add the workflow-uri-definitions extension to your workflow. Within this extension, create aliases for your URIs.

    Example workflow

    {
      "extensions": [
        {
          "extensionid": "workflow-uri-definitions", 1
          "definitions": {
            "remoteCatalog": "https://my.remote.host/apicatalog/apis/123/document" 2
          }
        }
      ],
      "functions": [ 3
        {
          "name": "operation1",
          "operation": "remoteCatalog#operation1"
        },
        {
          "name": "operation2",
          "operation": "remoteCatalog#operation2"
        }
      ]
    }

1
Set the extension ID to workflow-uri-definitions.
2
Set the alias definition by mapping the remoteCatalog alias to a URI, for example, https://my.remote.host/apicatalog/apis/123/document URI.
3
Set the function operations by using the remoteCatalog alias with the operation identifiers, for example, operation1 and operation2 operation identifiers.
  2. In the application.properties file, configure the REST client by using the alias defined in the workflow.

    Example property

    quarkus.rest-client.remoteCatalog.url=http://localhost:8282/

    In the previous example, the configuration key is set to quarkus.rest-client.remoteCatalog.url, and the URL is set to http://localhost:8282/, which the REST clients use by referring to the remoteCatalog alias.

  3. In your workflow, use the alias when defining functions that operate on the URI.

    Example Workflow (continued):

    {
      "functions": [
        {
          "name": "operation1",
          "operation": "remoteCatalog#operation1"
        },
        {
          "name": "operation2",
          "operation": "remoteCatalog#operation2"
        }
      ]
    }

3.3. Troubleshooting services

Efficient troubleshooting of the HTTP-based function invocations, such as those using OpenAPI functions, is crucial for maintaining workflow orchestrations.

To diagnose issues, you can trace HTTP requests and responses.

3.3.1. Tracing HTTP requests and responses

OpenShift Serverless Logic uses the Apache HTTP client to trace HTTP requests and responses.

Prerequisites

  • You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
  • You have access to the OpenAPI specification files.
  • You have access to the workflow definition and instance IDs for correlating HTTP requests and responses.
  • You have access to the log configuration of the application where the HTTP service invocations are occurring.

Procedure

  1. To turn on HTTP tracing, add the following configuration to your application’s application.properties file to enable debug logging for the Apache HTTP client:

    # Turning HTTP tracing on
    quarkus.log.category."org.apache.http".level=DEBUG
  2. Restart your application to propagate the log configuration changes.
  3. After restarting, check the logs for HTTP request traces.

    Example logs of a traced HTTP request

    2023-09-25 19:00:55,242 DEBUG Executing request POST /v2/models/yolo-model/infer HTTP/1.1
    2023-09-25 19:00:55,243 DEBUG http-outgoing-0 >> POST /v2/models/yolo-model/infer HTTP/1.1
    2023-09-25 19:00:55,243 DEBUG http-outgoing-0 >> Accept: application/json
    2023-09-25 19:00:55,243 DEBUG http-outgoing-0 >> Content-Type: application/json
    2023-09-25 19:00:55,243 DEBUG http-outgoing-0 >> kogitoprocid: inferencepipeline
    2023-09-25 19:00:55,243 DEBUG http-outgoing-0 >> kogitoprocinstanceid: 85114b2d-9f64-496a-bf1d-d3a0760cde8e
    2023-09-25 19:00:55,243 DEBUG http-outgoing-0 >> kogitoprocist: Active
    2023-09-25 19:00:55,243 DEBUG http-outgoing-0 >> kogitoproctype: SW
    2023-09-25 19:00:55,243 DEBUG http-outgoing-0 >> kogitoprocversion: 1.0
    2023-09-25 19:00:55,243 DEBUG http-outgoing-0 >> Content-Length: 23177723
    2023-09-25 19:00:55,244 DEBUG http-outgoing-0 >> Host: yolo-model-opendatahub-model.apps.trustyai.dzzt.p1.openshiftapps.com

  4. Check the logs for HTTP response traces following the request logs.

    Example logs of a traced HTTP response

    2023-09-25 19:01:00,738 DEBUG http-outgoing-0 << "HTTP/1.1 500 Internal Server Error[\r][\n]"
    2023-09-25 19:01:00,738 DEBUG http-outgoing-0 << "content-type: application/json[\r][\n]"
    2023-09-25 19:01:00,738 DEBUG http-outgoing-0 << "date: Mon, 25 Sep 2023 19:01:00 GMT[\r][\n]"
    2023-09-25 19:01:00,738 DEBUG http-outgoing-0 << "content-length: 186[\r][\n]"
    2023-09-25 19:01:00,738 DEBUG http-outgoing-0 << "set-cookie: 276e4597d7fcb3b2cba7b5f037eeacf5=5427fafade21f8e7a4ee1fa6c221cf40; path=/; HttpOnly; Secure; SameSite=None[\r][\n]"
    2023-09-25 19:01:00,738 DEBUG http-outgoing-0 << "[\r][\n]"
    2023-09-25 19:01:00,738 DEBUG http-outgoing-0 << "{"code":13, "message":"Failed to load Model due to adapter error: Error calling stat on model file: stat /models/yolo-model__isvc-1295fd6ba9/yolov5s-seg.onnx: no such file or directory"}"

Chapter 4. Managing security

4.1. Authentication for OpenAPI services

To secure an OpenAPI service operation, define a Security Scheme by using the OpenAPI specification. These schemes are defined in the securitySchemes section of the OpenAPI specification file. You must configure the operation by adding a Security Requirement that refers to that Security Scheme. When a workflow invokes such an operation, that information is used to determine the required authentication configuration.

This section outlines the supported authentication types and demonstrates how to configure them to access secured OpenAPI service operations within your workflows.

4.1.1. Overview of OpenAPI service authentication

In OpenShift Serverless Logic, you can secure OpenAPI service operations using the Security Schemes defined in the OpenAPI specification file. These schemes help define the authentication requirements for operations invoked within a workflow.

The Security Schemes are declared in the securitySchemes section of the OpenAPI document. Each scheme specifies the type of authentication to apply, such as HTTP Basic, API key, and so on.

When a workflow calls a secured operation, it references these defined schemes to determine the required authentication configuration.

Example security scheme definitions

"securitySchemes": {
  "http-basic-example": {
    "type": "http",
    "scheme": "basic"
  },
  "api-key-example": {
    "type": "apiKey",
    "name": "my-example-key",
    "in": "header"
  }
}

If the OpenAPI file defines Security Schemes but does not include Security Requirements for operations, the generator can be configured to create them by default. These defaults apply to operations without explicitly defined requirements.

To configure that scheme, you must use the quarkus.openapi-generator.codegen.default-security-scheme property. The default-security-scheme property is used only at code generation time, not during runtime. The value must match one of the available schemes in the securitySchemes section, such as http-basic-example or api-key-example:

Example

quarkus.openapi-generator.codegen.default-security-scheme=http-basic-example

4.1.2. Configuring authentication credentials

To invoke OpenAPI service operations secured by authentication schemes, you must configure the corresponding credentials and parameters in your application. OpenShift Serverless Logic uses these configurations to authenticate with the external services during workflow execution.

This section describes how to define and apply the necessary configuration properties for security schemes declared in the OpenAPI specification file. You can use either application.properties, the ConfigMap associated with your workflow, or environment variables in the SonataFlow CR to provide these credentials.

Note

The security schemes defined in an OpenAPI specification file are global to all the operations that are available in the same file. This means that the configurations set for a particular security scheme also apply to the other secured operations.

Prerequisites

  • You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
  • Your OpenAPI specification includes one or more security schemes.
  • You have access to the OpenAPI specification files.
  • You have identified the schemes that you want to configure, such as http-basic-example or api-key-example.
  • You have access to the application.properties file, the workflow ConfigMap, or the SonataFlow CR.

Procedure

  • Use the following format to compose your property keys:

    quarkus.openapi-generator.[filename].auth.[security_scheme_name].[auth_property_name]
    • filename is the sanitized name of the file containing the OpenAPI specification, such as security_example_json. To sanitize this name, you must replace all non-alphabetic characters with underscores (_).
    • security_scheme_name is the sanitized name of the security scheme object definition in the OpenAPI specification file, such as http_basic_example or api_key_example. To sanitize this name, you must replace all non-alphabetic characters with underscores (_).
    • auth_property_name is the name of the property to configure, such as username. This property depends on the defined security scheme type.

      Note

      When you are using environment variables to configure properties, follow the MicroProfile environment variable mapping rules: replace all non-alphanumeric characters in the property key with underscores (_), and convert the entire key to uppercase.
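      For example, the property key quarkus.openapi-generator.security_example_json.auth.http_basic_example.username maps to the environment variable QUARKUS_OPENAPI_GENERATOR_SECURITY_EXAMPLE_JSON_AUTH_HTTP_BASIC_EXAMPLE_USERNAME.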

The following examples show how to provide these configuration properties using application.properties, the ConfigMap associated with your workflow, or environment variables defined in the SonataFlow CR:

Example of configuring the credentials by using the application.properties file

quarkus.openapi-generator.security_example_json.auth.http_basic_example.username=myuser
quarkus.openapi-generator.security_example_json.auth.http_basic_example.password=mypassword

Example of configuring the credentials by using the workflow ConfigMap

apiVersion: v1
data:
  application.properties: |
    quarkus.openapi-generator.security_example_json.auth.http_basic_example.username=myuser
    quarkus.openapi-generator.security_example_json.auth.http_basic_example.password=mypassword
kind: ConfigMap
metadata:
  labels:
    app: example-workflow
  name: example-workflow-props
  namespace: example-namespace

Note

If the name of the workflow is example-workflow, the name of the ConfigMap with the user defined properties must be example-workflow-props.

Example of configuring the credentials by using environment variables in the SonataFlow CR

apiVersion: sonataflow.org/v1alpha08
kind: SonataFlow
metadata:
  name: example-workflow
  namespace: example-namespace
  annotations:
    sonataflow.org/description: Example Workflow
    sonataflow.org/version: 0.0.1
    sonataflow.org/profile: preview
spec:
  podTemplate:
    container:
      env:
        - name: QUARKUS_OPENAPI_GENERATOR_SECURITY_EXAMPLE_JSON_AUTH_HTTP_BASIC_EXAMPLE_USERNAME
          value: myuser
        - name: QUARKUS_OPENAPI_GENERATOR_SECURITY_EXAMPLE_JSON_AUTH_HTTP_BASIC_EXAMPLE_PASSWORD
          value: mypassword

4.1.3. Example of basic HTTP authentication

The following example shows how to secure a workflow operation by using the HTTP basic authentication scheme. The security-example.json file defines an OpenAPI service with a single operation, sayHelloBasic, which uses the http-basic-example security scheme. You can configure credentials by using application properties, the workflow ConfigMap, or environment variables.

Example OpenAPI specification with HTTP basic authentication

{
  "openapi": "3.1.0",
  "info": {
    "title": "Http Basic Scheme Example",
    "version": "1.0"
  },
  "paths": {
    "/hello-with-http-basic": {
      "get": {
        "operationId": "sayHelloBasic",
        "responses": {
          "200": {
            "description": "OK",
            "content": {
              "text/plain": {
                "schema": {
                  "type": "string"
                }
              }
            }
          }
        },
        "security": [{"http-basic-example" : []}]
      }
    }
  },
  "components": {
    "securitySchemes": {
      "http-basic-example": {
        "type": "http",
        "scheme": "basic"
      }
    }
  }
}

In this example, the sayHelloBasic operation is secured using the http-basic-example scheme defined in the securitySchemes section. When invoking this operation in a workflow, you must configure the appropriate credentials.

You can use the following configuration keys to provide authentication credentials for the http-basic-example scheme:

  • Username credentials
    Property key: quarkus.openapi-generator.[filename].auth.[security_scheme_name].username
    Example: quarkus.openapi-generator.security_example_json.auth.http_basic_example.username=MY_USER
  • Password credentials
    Property key: quarkus.openapi-generator.[filename].auth.[security_scheme_name].password
    Example: quarkus.openapi-generator.security_example_json.auth.http_basic_example.password=MY_PASSWD

You can replace [filename] with the sanitized OpenAPI file name security_example_json and [security_scheme_name] with the sanitized scheme name http_basic_example.

4.1.4. Example of Bearer token authentication

The following example shows how to secure an OpenAPI operation using the HTTP Bearer authentication scheme. The security-example.json file defines an OpenAPI service with the sayHelloBearer operation, which uses the http-bearer-example scheme for authentication. To access the secured operation during workflow execution, you must configure a Bearer token using application properties, the workflow ConfigMap, or environment variables.

Example OpenAPI specification with Bearer token authentication

{
  "openapi": "3.1.0",
  "info": {
    "title": "Http Bearer Scheme Example",
    "version": "1.0"
  },
  "paths": {
    "/hello-with-http-bearer": {
      "get": {
        "operationId": "sayHelloBearer",
        "responses": {
          "200": {
            "description": "OK",
            "content": {
              "text/plain": {
                "schema": {
                  "type": "string"
                }
              }
            }
          }
        },
        "security": [
          {
            "http-bearer-example": []
          }
        ]
      }
    }
  },
  "components": {
    "securitySchemes": {
      "http-bearer-example": {
        "type": "http",
        "scheme": "bearer"
      }
    }
  }
}

In this example, the sayHelloBearer operation is protected by the http-bearer-example scheme. You must define a valid Bearer token in your configuration to invoke this operation successfully.

You can use the following configuration property key to provide the Bearer token:

  • Bearer token
    Property key: quarkus.openapi-generator.[filename].auth.[security_scheme_name].bearer-token
    Example: quarkus.openapi-generator.security_example_json.auth.http_bearer_example.bearer-token=MY_TOKEN

You can replace [filename] with the sanitized OpenAPI file name security_example_json and [security_scheme_name] with the sanitized scheme name http_bearer_example.
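
As a sketch, following the MicroProfile mapping rules described earlier, you can also provide the token as an environment variable in the SonataFlow CR; the token value below is a placeholder:

spec:
  podTemplate:
    container:
      env:
        - name: QUARKUS_OPENAPI_GENERATOR_SECURITY_EXAMPLE_JSON_AUTH_HTTP_BEARER_EXAMPLE_BEARER_TOKEN
          value: MY_TOKEN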

4.1.5. Example of API key authentication

The following example shows how to secure an OpenAPI service operation using the apiKey authentication scheme. The security-example.json file defines the sayHelloApiKey operation, which uses the api-key-example security scheme. You can configure the API key using application properties, the workflow ConfigMap, or environment variables.

Example OpenAPI specification with API key authentication

{
  "openapi": "3.1.0",
  "info": {
    "title": "Api Key Scheme Example",
    "version": "1.0"
  },
  "paths": {
    "/hello-with-api-key": {
      "get": {
        "operationId": "sayHelloApiKey",
        "responses": {
          "200": {
            "description": "OK",
            "content": {
              "text/plain": {
                "schema": {
                  "type": "string"
                }
              }
            }
          }
        },
        "security": [{"api-key-example" : []}]
      }
    }
  },
  "components": {
    "securitySchemes": {
      "api-key-example": {
        "type": "apiKey",
        "name": "api-key-name",
        "in": "header"
      }
    }
  }
}

In this example, the sayHelloApiKey operation is protected by the api-key-example security scheme, which uses an API key passed in the HTTP request header.

You can use the following configuration property to configure the API key:

  • API key
    Property key: quarkus.openapi-generator.[filename].auth.[security_scheme_name].api-key
    Example: quarkus.openapi-generator.security_example_json.auth.api_key_example.api-key=MY_KEY

You can replace [filename] with the sanitized OpenAPI file name security_example_json and [security_scheme_name] with the sanitized scheme name api_key_example.

The apiKey scheme type contains an additional name property that configures the key name to use when the OpenAPI service is invoked. Also, the format used to pass the key depends on the value of the in property.

  • When the value is header, the key is passed as an HTTP request header.
  • When the value is cookie, the key is passed as an HTTP cookie.
  • When the value is query, the key is passed as an HTTP query parameter.

In the example, the key is passed in the HTTP header as api-key-name: MY_KEY.

OpenShift Serverless Logic manages this internally, so no additional configuration is required beyond setting the property value.
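
For illustration, a request to the sayHelloApiKey operation carries the key as a header, similar to the following sketch; the host is a placeholder:

GET /hello-with-api-key HTTP/1.1
Host: <service-host>
api-key-name: MY_KEY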

4.1.6. Example of OAuth 2.0 authentication

The following example shows how to secure an OpenAPI operation by using the OAuth 2.0 clientCredentials flow. The OpenAPI specification defines the sayHelloOauth2 operation, which uses the oauth-example security scheme. Unlike simpler authentication methods, such as HTTP Basic or API keys, OAuth 2.0 authentication requires additional integration with the Quarkus OpenID Connect (OIDC) Client.

Example OpenAPI specification with OAuth 2.0

{
  "openapi": "3.1.0",
  "info": {
    "title": "Oauth2 Scheme Example",
    "version": "1.0"
  },
  "paths": {
    "/hello-with-oauth2": {
      "get": {
        "operationId": "sayHelloOauth2",
        "responses": {
          "200": {
            "description": "OK",
            "content": {
              "text/plain": {
                "schema": {
                  "type": "string"
                }
              }
            }
          }
        },
        "security": [
          {
            "oauth-example": []
          }
        ]
      }
    }
  },
  "components": {
    "securitySchemes": {
      "oauth-example": {
        "type": "oauth2",
        "flows": {
          "clientCredentials": {
            "authorizationUrl": "https://example.com/oauth",
            "tokenUrl": "https://example.com/oauth/token",
            "scopes": {}
          }
        }
      }
    }
  }
}

In this example, the sayHelloOauth2 operation is protected by the oauth-example security scheme, which uses the clientCredentials flow for token-based authentication.

4.1.6.1. Required extensions

OAuth 2.0 token management is handled by a Quarkus OidcClient. To enable this integration, you must add the Quarkus OIDC Client Filter and the Quarkus OpenAPI Generator OIDC extensions to your project, as shown in the following examples:

Example of adding extensions using Maven

<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-oidc-client-filter</artifactId>
  <version>3.15.4.redhat-00001</version>
</dependency>

<dependency>
  <groupId>io.quarkiverse.openapi.generator</groupId>
  <artifactId>quarkus-openapi-generator-oidc</artifactId>
  <version>2.9.0-lts</version>
</dependency>

Example of adding extensions by using the GitOps profile

Ensure that you configure the QUARKUS_EXTENSIONS build argument with the following value when building the workflow image:

$ --build-arg=QUARKUS_EXTENSIONS=io.quarkus:quarkus-oidc-client-filter:3.15.4.redhat-00001,io.quarkiverse.openapi.generator:quarkus-openapi-generator-oidc:2.9.0-lts

Example of adding extensions by using the Preview profile

apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: sonataflow-platform-example
  namespace: example-namespace
spec:
  build:
    template:
      buildArgs:
        - name: QUARKUS_EXTENSIONS
          value: io.quarkus:quarkus-oidc-client-filter:3.15.4.redhat-00001,io.quarkiverse.openapi.generator:quarkus-openapi-generator-oidc:2.9.0-lts

Note

The extensions that are added in the SonataFlowPlatform CR are included for all the workflows that you deploy in that namespace with the preview profile.

4.1.6.2. OidcClient configuration

To access the secured operation, define an OidcClient configuration in your application.properties file. The configuration uses the sanitized security scheme name from the OpenAPI specification, in this case, oauth_example as follows:

# Adjust these configurations according to your authentication service.
quarkus.oidc-client.oauth_example.auth-server-url=https://example.com/oauth
quarkus.oidc-client.oauth_example.token-path=/token
quarkus.oidc-client.oauth_example.discovery-enabled=false
quarkus.oidc-client.oauth_example.client-id=example-app
quarkus.oidc-client.oauth_example.grant.type=client
quarkus.oidc-client.oauth_example.credentials.client-secret.method=basic
quarkus.oidc-client.oauth_example.credentials.client-secret.value=secret

In this configuration:

  • oauth_example matches the sanitized name of the oauth-example scheme in the OpenAPI file. The link between the sanitized scheme name and the corresponding OidcClient is achieved by using that simple naming convention.
  • The OidcClient handles token generation and renewal automatically during workflow execution.

4.1.7. Example of authorization token propagation

OpenShift Serverless Logic supports token propagation for OpenAPI operations that use the oauth2 or HTTP bearer security scheme types. Token propagation enables your workflow to forward the authorization token it receives during workflow creation to downstream services. This feature is useful when your workflow needs to interact with third-party services on behalf of the client that initiated the request.

You must configure token propagation individually for each security scheme. After it is enabled, all OpenAPI operations secured using the same scheme use the propagated token unless explicitly overridden.

The following example defines the sayHelloOauth2 operation in the security-example.json file. This operation uses the oauth-example security scheme with the clientCredentials flow:

Example OpenAPI specification with token propagation

{
  "openapi": "3.1.0",
  "info": {
    "title": "Oauth2 Scheme Example",
    "version": "1.0"
  },
  "paths": {
    "/hello-with-oauth2": {
      "get": {
        "operationId": "sayHelloOauth2",
        "responses": {
          "200": {
            "description": "OK",
            "content": {
              "text/plain": {
                "schema": {
                  "type": "string"
                }
              }
            }
          }
        },
        "security": [
          {
            "oauth-example": []
          }
        ]
      }
    }
  },
  "components": {
    "securitySchemes": {
      "oauth-example": {
        "type": "oauth2",
        "flows": {
          "clientCredentials": {
            "authorizationUrl": "https://example.com/oauth",
            "tokenUrl": "https://example.com/oauth/token",
            "scopes": {}
          }
        }
      }
    }
  }
}

You can use the following configuration keys to enable and customize token propagation:

Note

The tokens are automatically passed to downstream services while the workflow is active. When the workflow enters a waiting state, such as a timer or event-based pause, the token propagation stops. After the workflow resumes, tokens are not re-propagated automatically. You must manage re-authentication if needed.

  • quarkus.openapi-generator.[filename].auth.[security_scheme_name].token-propagation
    Example: quarkus.openapi-generator.security_example_json.auth.oauth_example.token-propagation=true
    Description: Enables token propagation for all operations secured with the given scheme. Default is false.
  • quarkus.openapi-generator.[filename].auth.[security_scheme_name].header-name
    Example: quarkus.openapi-generator.security_example_json.auth.oauth_example.header-name=MyHeaderName
    Description: Optional. Overrides the default Authorization header with a custom header name to read the token from.

You can replace [filename] with the sanitized OpenAPI file name security_example_json and [security_scheme_name] with the sanitized scheme name oauth_example.
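
For example, a minimal application.properties fragment that enables token propagation for the oauth-example scheme and, optionally, reads the token from a custom header might look as follows; the header name is illustrative:

quarkus.openapi-generator.security_example_json.auth.oauth_example.token-propagation=true
quarkus.openapi-generator.security_example_json.auth.oauth_example.header-name=MyHeaderName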

Chapter 5. Supporting services

5.1. Job service

The Job service schedules and executes tasks in a cloud environment. Independent services implement these tasks, which can be initiated through any of the supported interaction modes, including HTTP calls or Knative Events delivery.

In OpenShift Serverless Logic, the Job service is responsible for controlling the execution of time-triggered actions. Therefore, all the time-based states that you can use in a workflow are handled by the interaction between the workflow and the Job service.

For example, every time the workflow execution reaches a state with a configured timeout, a corresponding job is created in the Job service, and when the timeout is met, an HTTP callback is executed to notify the workflow.

The main goal of the Job service is to manage active jobs, such as scheduled jobs that need to be executed. When a job reaches its final state, the Job service removes it. To retain job information in a permanent repository, the Job service produces status change events that can be recorded by an external service, such as the Data Index service.

Note

You do not need to manually install or configure the Job service if you are using the OpenShift Serverless Operator to deploy workflows. The Operator handles these tasks automatically and manages all necessary configurations for each workflow to connect with it.

5.1.1. Job service leader election process

The Job service operates as a singleton service, meaning only one active instance can schedule and execute jobs.

To prevent conflicts when the service is deployed in the cloud, where multiple instances might be running, the Job service supports a leader election process. Only the instance that is elected as the leader manages external communication to receive and schedule jobs.

Non-leader instances remain inactive in a standby state but continue attempting to become the leader through the election process. When a new instance starts, it does not immediately assume leadership. Instead, it enters the leader election process to determine if it can take over the leader role.

If the current leader becomes unresponsive or if it is shut down, another running instance takes over as the leader.

Note

This leader election mechanism uses the underlying persistence backend, which is currently supported only in the PostgreSQL implementation.

5.2. Data Index service

The Data Index service is a dedicated supporting service that stores the data related to the workflow instances and their associated jobs. This service provides a GraphQL endpoint allowing users to query that data.

The Data Index service processes data received through events, which can originate from any workflow or directly from the Job service.

Data Index supports Apache Kafka or Knative Eventing to consume CloudEvents messages from workflows. It indexes and stores this event data in a database, making it accessible through GraphQL. These events provide detailed information about the workflow execution. The Data Index service is central to OpenShift Serverless Logic search, insights, and management capabilities.

The key features of the Data Index service are as follows:

  • A flexible data structure
  • A distributable, cloud-ready format
  • Message-based communication with workflows via Apache Kafka, Knative, and CloudEvents
  • A powerful GraphQL-based querying API
Note

When you are using the OpenShift Serverless Operator to deploy workflows, you do not need to manually install or configure the Data Index service. The Operator automatically manages all the necessary configurations for each workflow to connect with it.

5.2.1. Retrieving data by using GraphQL queries

To retrieve data about workflow instances and jobs, you can use GraphQL queries.

5.2.1.1. Retrieve data from workflow instances

You can retrieve information about a specific workflow instance by using the following query example:

{
  ProcessInstances {
    id
    processId
    state
    parentProcessInstanceId
    rootProcessId
    rootProcessInstanceId
    variables
    nodes {
      id
      name
      type
    }
  }
}
5.2.1.2. Retrieve data from jobs

You can retrieve data from a specific job instance by using the following query example:

{
  Jobs {
    id
    status
    priority
    processId
    processInstanceId
    executionCounter
  }
}
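
For example, assuming the Data Index service exposes its GraphQL endpoint at /graphql, you can send the previous query with curl, as in the following sketch; the service URL is a placeholder:

$ curl -X POST -H "Content-Type: application/json" \
  -d '{"query": "{ Jobs { id status processInstanceId } }"}' \
  http://<data-index-service-url>/graphql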

You can filter query results by using the where parameter, allowing multiple combinations based on workflow attributes.

Example query to filter by state

{
  ProcessInstances(where: {state: {equal: ACTIVE}}) {
    id
    processId
    processName
    start
    state
    variables
  }
}

Example query to filter by ID

{
  ProcessInstances(where: {id: {equal: "d43a56b6-fb11-4066-b689-d70386b9a375"}}) {
    id
    processId
    processName
    start
    state
    variables
  }
}

By default, filters are combined by using the AND operator. You can modify this behavior by combining filters with the AND or OR operators.

Example query to combine filters with the OR operator

{
  ProcessInstances(where: {or: {state: {equal: ACTIVE}, rootProcessId: {isNull: false}}}) {
    id
    processId
    processName
    start
    end
    state
  }
}

Example query to combine filters with the AND and OR operators

{
  ProcessInstances(where: {and: {processId: {equal: "travels"}, or: {state: {equal: ACTIVE}, rootProcessId: {isNull: false}}}}) {
    id
    processId
    processName
    start
    end
    state
  }
}

Depending on the attribute type, you can use the following operators:

String array

  • contains: String
  • containsAll: Array of strings
  • containsAny: Array of strings
  • isNull: Boolean (true or false)

String

  • in: Array of strings
  • like: String
  • isNull: Boolean (true or false)
  • equal: String

ID

  • in: Array of strings
  • isNull: Boolean (true or false)
  • equal: String

Boolean

  • isNull: Boolean (true or false)
  • equal: Boolean (true or false)

Numeric

  • in: Array of integers
  • isNull: Boolean
  • equal: Integer
  • greaterThan: Integer
  • greaterThanEqual: Integer
  • lessThan: Integer
  • lessThanEqual: Integer
  • between: Numeric range
  • from: Integer
  • to: Integer

Date

  • isNull: Boolean (true or false)
  • equal: Date time
  • greaterThan: Date time
  • greaterThanEqual: Date time
  • lessThan: Date time
  • lessThanEqual: Date time
  • between: Date range
  • from: Date time
  • to: Date time

You can sort query results based on workflow attributes by using the orderBy parameter. You can also specify the sorting direction in ascending (ASC) or descending (DESC) order. Multiple attributes are applied in the order you specify them.

Example query to sort by the start time in an ASC order

{
  ProcessInstances(where: {state: {equal: ACTIVE}}, orderBy: {start: ASC}) {
    id
    processId
    processName
    start
    end
    state
  }
}

You can control the number of returned results and specify an offset by using the pagination parameter.

Example query to limit results to 10, starting from offset 0

{
  ProcessInstances(where: {state: {equal: ACTIVE}}, orderBy: {start: ASC}, pagination: {limit: 10, offset: 0}) {
    id
    processId
    processName
    start
    end
    state
  }
}

5.3. Managing supporting services

This section provides an overview of the supporting services essential for OpenShift Serverless Logic. It specifically focuses on configuring and deploying the Data Index and Job Service supporting services by using the OpenShift Serverless Logic Operator.

In a typical OpenShift Serverless Logic installation, you must deploy both services to ensure successful workflow execution. The Data Index service allows for efficient data management, while the Job Service ensures reliable job handling.

When you deploy a supporting service in a given namespace, you can choose between an enabled or disabled deployment. An enabled deployment signals the OpenShift Serverless Logic Operator to automatically intercept workflow deployments using the preview or gitops profile within the namespace and configure them to connect with the service.

For example, when the Data Index service is enabled, workflows are automatically configured to send status change events to it. Similarly, enabling the Job Service ensures that a job is created whenever a workflow requires a timeout. The OpenShift Serverless Logic Operator also configures the Job Service to send events to the Data Index service, facilitating seamless integration between the services.

The OpenShift Serverless Logic Operator not only deploys supporting services but also manages other necessary configurations to ensure successful workflow execution. All these configurations are handled automatically. You only need to provide the supporting services configuration in the SonataFlowPlatform CR.

Note

Deploying only one of the supporting services or using a disabled deployment are advanced use cases. In a standard installation, you must enable both services to ensure smooth workflow execution.

5.3.2. Deploying supporting services

To deploy supporting services, configure the dataIndex and jobService subfields within the spec.services section of the SonataFlowPlatform custom resource (CR). This configuration instructs the OpenShift Serverless Logic Operator to deploy each service when the SonataFlowPlatform CR is applied.

Each configuration of a service is handled independently, allowing you to customize these settings alongside other configurations in the SonataFlowPlatform CR.

See the following scaffold example configuration for deploying supporting services:

apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: sonataflow-platform-example
  namespace: example-namespace
spec:
  services:
    dataIndex: 
1

      enabled: true 
2

      # Specific configurations for the Data Index Service
      # might be included here
    jobService: 
3

      enabled: true 
4

      # Specific configurations for the Job Service
      # might be included here
1
Data Index service configuration field.
2
Setting enabled: true deploys the Data Index service. If the value is set to false or omitted, the deployment is disabled. The default value is false.
3
Job Service configuration field.
4
Setting enabled: true deploys the Job Service. If the value is set to false or omitted, the deployment is disabled. The default value is false.

5.3.3. Supporting services scope

The SonataFlowPlatform custom resource (CR) enables the deployment of supporting services within a specific namespace. This means all automatically configured supporting services and workflow communications are restricted to the namespace of the deployed platform.

This feature is particularly useful when separate instances of supporting services are required for different sets of workflows. For example, you can deploy an application in isolation with its workflows and supporting services, ensuring they remain independent from other deployments.

5.3.4. Persistence configuration for supporting services

The persistence configuration for supporting services in OpenShift Serverless Logic can be either ephemeral or PostgreSQL, depending on the needs of your environment. Ephemeral persistence is ideal for development and testing, while PostgreSQL persistence is recommended for production environments.

5.3.4.1. Ephemeral persistence configuration

The ephemeral persistence uses an embedded PostgreSQL database that is dedicated to each service. The OpenShift Serverless Logic Operator recreates this database with every service restart, making it suitable only for development and testing purposes. You do not need any additional configuration other than the following SonataFlowPlatform CR:

apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: sonataflow-platform-example
  namespace: example-namespace
spec:
  services:
    dataIndex:
      enabled: true
      # Specific configurations for the Data Index Service
      # might be included here
    jobService:
      enabled: true
      # Specific configurations for the Job Service
      # might be included here
5.3.4.2. Database migration configuration

Database migration refers to either initializing a given Data Index or Jobs Service database to its respective schema, or applying data or schema updates when new versions are released. You must configure the database migration strategy individually for each supporting service by using the dataIndex.persistence.dbMigrationStrategy and jobService.persistence.dbMigrationStrategy optional fields. If you do not configure a migration strategy, the system uses service as the default value.

Note

Database migration is supported only when using the PostgreSQL persistence configuration.

You can configure any of the following database migration strategies:

5.3.4.2.1. Job-based database migration strategy

When you configure the job-based strategy, the OpenShift Serverless Logic Operator uses a dedicated Kubernetes Job to manage the entire migration process. This Job runs before the supporting services deployment, ensuring that a service starts only if the corresponding migration completes successfully. You typically use this strategy in production environments.

5.3.4.2.2. Service-based database migration strategy

When you configure the service-based strategy, the database migration is managed directly by each supporting service. The migration is executed as part of the service startup sequence. In worst-case scenarios, a service might start with failures if the migration is unsuccessful. Service-based database migration is the default strategy when you do not specify any configuration.

5.3.4.2.3. None migration strategy

When you configure the none strategy, neither the Operator nor the service attempts to perform the migration. You typically use this strategy in environments where a database administrator (DBA) manually executes all database migrations.
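
For example, a SonataFlowPlatform CR fragment that disables automated migrations for both supporting services might look like the following sketch:

apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: sonataflow-platform-example
  namespace: example-namespace
spec:
  services:
    dataIndex:
      enabled: true
      persistence:
        dbMigrationStrategy: none
    jobService:
      enabled: true
      persistence:
        dbMigrationStrategy: none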

5.3.4.3. PostgreSQL persistence configuration

For PostgreSQL persistence, you must set up a PostgreSQL server instance on your cluster. The administration of this instance remains independent of the OpenShift Serverless Logic Operator control. To connect a supporting service with the PostgreSQL server, you must configure the appropriate database connection parameters.

You can configure PostgreSQL persistence in the SonataFlowPlatform CR by using the following example:

Example of PostgreSQL persistence configuration

apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: sonataflow-platform-example
  namespace: example-namespace
spec:
  services:
    dataIndex:
      enabled: true
      persistence:
        dbMigrationStrategy: job 
1

        postgresql:
          serviceRef:
            name: postgres-example 
2

            namespace: postgres-example-namespace 
3

            databaseName: example-database 
4

            databaseSchema: data-index-schema 
5

            port: 1234 
6

          secretRef:
            name: postgres-secrets-example 
7

            userKey: POSTGRESQL_USER 
8

            passwordKey: POSTGRESQL_PASSWORD 
9

    jobService:
      enabled: true
      persistence:
        dbMigrationStrategy: job
        postgresql:
        # Specific database configuration for the Job Service
        # might be included here.

1
Optional: Database migration strategy to use. Defaults to service.
2
Name of the service to connect with the PostgreSQL database server.
3
Optional: Defines the namespace of the PostgreSQL Service. Defaults to the SonataFlowPlatform namespace.
4
Defines the name of the PostgreSQL database for storing supporting service data.
5
Optional: Specifies the schema for storing supporting service data. Default value is SonataFlowPlatform name, suffixed with -data-index-service or -jobs-service. For example, sonataflow-platform-example-data-index-service.
6
Optional: Port number to connect with the PostgreSQL Service. Default value is 5432.
7
Defines the name of the secret containing the username and password for database access.
8
Defines the name of the key in the secret that contains the username to connect with the database.
9
Defines the name of the key in the secret that contains the password to connect with the database.
Note

You can configure each service’s persistence independently by using the respective persistence field.

Create the secrets to access PostgreSQL by running the following command:

$ oc create secret generic <postgresql_secret_name> \
  --from-literal=POSTGRESQL_USER=<user> \
  --from-literal=POSTGRESQL_PASSWORD=<password> \
  -n <namespace>
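
For example, to create the secret referenced by the previous SonataFlowPlatform CR, assuming it lives in the platform namespace, you might run the following command; the user and password values are placeholders:

$ oc create secret generic postgres-secrets-example \
  --from-literal=POSTGRESQL_USER=<user> \
  --from-literal=POSTGRESQL_PASSWORD=<password> \
  -n example-namespace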

The OpenShift Serverless Logic Operator automatically connects supporting services to the common PostgreSQL server configured in the spec.persistence field.

The following precedence rules apply:

  • If you configure a specific persistence for a supporting service, for example, services.dataIndex.persistence, it uses that configuration.
  • If you do not configure persistence for a service, the system uses the common persistence configuration from the current platform.
Note

When using a common PostgreSQL configuration, each service schema is automatically set as the SonataFlowPlatform name, suffixed with -data-index-service or -jobs-service, for example, sonataflow-platform-example-data-index-service.

You can configure a common PostgreSQL service and database for all supporting services by using the spec.persistence.postgresql field in the SonataFlowPlatform custom resource (CR). When this field is configured, the OpenShift Serverless Logic Operator connects the supporting services to the specified database. Any workflows deployed in the same namespace by using the preview or gitops profiles, and that do not specify a custom persistence configuration, also connect to this database.

The following rules apply when configuring platform-scoped persistence:

  • If a supporting service has its own persistence configuration, for example, if services.dataIndex.persistence.postgresql is set, then that configuration takes precedence.
  • If a supporting service does not have a custom persistence configuration, the configuration is inherited from the current platform.
  • If a supporting service requires a specific database migration strategy, configure it by using the dataIndex.persistence.dbMigrationStrategy and jobService.persistence.dbMigrationStrategy fields.

The following SonataFlowPlatform CR fragment shows how to configure platform-scoped PostgreSQL persistence:

apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: sonataflow-platform-example
  namespace: example-namespace
spec:
  persistence:
    postgresql:
      serviceRef:
        name: postgres-example 
1

        namespace: postgres-example-namespace 
2

        databaseName: example-database 
3

        port: 1234 
4

      secretRef:
        name: postgres-secrets-example 
5

        userKey: POSTGRESQL_USER 
6

        passwordKey: POSTGRESQL_PASSWORD 
7

    dataIndex:
      enabled: true
      persistence:
        dbMigrationStrategy: job 
8

    jobService:
      enabled: true
      persistence:
        dbMigrationStrategy: service 
9
1
Name of the Kubernetes service to connect to the PostgreSQL database server.
2
(Optional) Namespace containing the PostgreSQL service. Defaults to the SonataFlowPlatform namespace.
3
Name of the PostgreSQL database to store supporting services and workflows data.
4
(Optional) Port to connect to the PostgreSQL service. Defaults to 5432.
5
Name of the Kubernetes Secret that contains database credentials.
6
Secret key that stores the database username.
7
Secret key that stores the database password.
8
(Optional) Database migration strategy for the Data Index. Defaults to service.
9
(Optional) Database migration strategy for the Jobs Service. Defaults to service. You can configure distinct strategies per service if needed.

5.3.5. Eventing system configuration

For an OpenShift Serverless Logic installation, the following types of events are generated:

  • Outgoing and incoming events related to workflow business logic.
  • Events sent from workflows to the Data Index and Job Service.
  • Events sent from the Job Service to the Data Index Service.

The OpenShift Serverless Logic Operator leverages the Knative Eventing system to manage all event communication between these services, ensuring efficient and reliable event handling.

5.3.5.1. Platform-scoped eventing system configuration

To configure a platform-scoped eventing system, you can use the spec.eventing.broker.ref field in the SonataFlowPlatform CR to reference a Knative Eventing Broker. This configuration instructs the OpenShift Serverless Logic Operator to automatically link the supporting services to produce and consume events by using the specified broker.

A workflow deployed in the same namespace with the preview or gitops profile and without a custom eventing system configuration automatically links to the specified broker.

Important

In production environments, use a production-ready broker, such as the Knative Kafka Broker, for enhanced scalability and reliability.

The following example displays how to configure the SonataFlowPlatform CR for a platform-scoped eventing system:

apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: sonataflow-platform-example
  namespace: example-namespace
spec:
  eventing:
    broker:
      ref:
        name: example-broker 
1

        namespace: example-broker-namespace 
2

        apiVersion: eventing.knative.dev/v1
        kind: Broker
1
Specifies the Knative Eventing Broker name.
2
Optional: Defines the namespace of the Knative Eventing Broker. If you do not specify a value, the parameter defaults to the SonataFlowPlatform namespace. Consider creating the Broker in the same namespace as SonataFlowPlatform.

5.3.5.2. Service-scoped eventing system configuration

A service-scoped eventing system configuration allows for fine-grained control over the eventing system, specifically for the Data Index or the Job Service.

Note

For an OpenShift Serverless Logic installation, consider using a platform-scoped eventing system configuration. The service-scoped configuration is intended for advanced use cases only.

5.3.5.3. Data Index eventing system configuration

To configure a service-scoped eventing system for the Data Index, you must use the spec.services.dataIndex.source.ref field in the SonataFlowPlatform CR to refer to a specific Knative Eventing Broker. This configuration instructs the OpenShift Serverless Logic Operator to automatically link the Data Index to consume SonataFlow system events from that Broker.

Important

In production environments, use a production-ready broker, such as the Knative Kafka Broker, for enhanced scalability and reliability.

The following example displays the Data Index eventing system configuration:

apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: sonataflow-platform-example
spec:
  services:
    dataIndex:
      source:
        ref:
          name: data-index-source-example-broker 
1

          namespace: data-index-source-example-broker-namespace 
2

          apiVersion: eventing.knative.dev/v1
          kind: Broker
1
Specifies the Knative Eventing Broker from which the Data Index consumes events.
2
Optional: Defines the namespace of the Knative Eventing Broker. If you do not specify a value, the parameter defaults to the SonataFlowPlatform namespace. Consider creating the broker in the same namespace as SonataFlowPlatform.
5.3.5.4. Job Service eventing system configuration

To configure a service-scoped eventing system for the Job Service, you must use the spec.services.jobService.source.ref and spec.services.jobService.sink.ref fields in the SonataFlowPlatform CR. These fields instruct the OpenShift Serverless Logic Operator to automatically link the Job Service to consume and produce SonataFlow system events, based on the provided configuration.

Important

In production environments, use a production-ready broker, such as the Knative Kafka Broker, for enhanced scalability and reliability.

The following example displays the Job Service eventing system configuration:

apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: sonataflow-platform-example
spec:
  services:
    jobService:
      source:
        ref:
          name: jobs-service-source-example-broker 
1

          namespace: jobs-service-source-example-broker-namespace 
2

          apiVersion: eventing.knative.dev/v1
          kind: Broker
      sink:
        ref:
          name: jobs-service-sink-example-broker 
3

          namespace: jobs-service-sink-example-broker-namespace 
4

          apiVersion: eventing.knative.dev/v1
          kind: Broker
1
Specifies the Knative Eventing Broker from which the Job Service consumes events.
2
Optional: Defines the namespace of the Knative Eventing Broker. If you do not specify a value, the parameter defaults to the SonataFlowPlatform namespace. Consider creating the Broker in the same namespace as SonataFlowPlatform.
3
Specifies the Knative Eventing Broker on which the Job Service produces events.
4
Optional: Defines the namespace of the Knative Eventing Broker. If you do not specify a value, the parameter defaults to the SonataFlowPlatform namespace. Consider creating the Broker in the same namespace as SonataFlowPlatform.

5.3.5.5. Cluster-scoped eventing system configuration

When you deploy cluster-scoped supporting services, the supporting services automatically link to the Broker specified in the SonataFlowPlatform CR, which is referenced by the SonataFlowClusterPlatform CR.

5.3.5.6. Eventing system configuration precedence rules

The OpenShift Serverless Logic Operator follows a defined order of precedence to configure the eventing system for a supporting service:

  1. If the supporting service has its own eventing system configuration, using either the Data Index eventing system or the Job Service eventing system configuration, the supporting service configuration takes precedence.
  2. If the SonataFlowPlatform CR enclosing the supporting service is configured with a platform-scoped eventing system, that configuration takes precedence.
  3. If the current cluster is configured with a cluster-scoped eventing system, that configuration takes precedence.
  4. If none of the previous configurations exist, the supporting service delivers events by direct HTTP calls.
5.3.5.7. Eventing system linking configuration

The OpenShift Serverless Logic Operator automatically creates Knative Eventing objects, such as SinkBindings and Triggers, to link supporting services with the eventing system. These objects enable the production and consumption of events by the supporting services.

The following example displays the SonataFlowPlatform CR for which these Knative Eventing objects are created:

apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: sonataflow-platform-example
  namespace: example-namespace
spec:
  eventing:
    broker:
      ref:
        name: example-broker 
1

        apiVersion: eventing.knative.dev/v1
        kind: Broker
  services:
    dataIndex: 
2

      enabled: true
    jobService: 
3

      enabled: true
1
Used by the Data Index, Job Service, and workflows, unless overridden.
2
Configures the Data Index service as an ephemeral deployment.
3
Configures the Job Service as an ephemeral deployment.

The following example displays how to configure a Knative Kafka Broker for use with the SonataFlowPlatform CR:

Example of a Knative Kafka Broker used by the SonataFlowPlatform CR

apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  annotations:
    eventing.knative.dev/broker.class: Kafka 
1

  name: example-broker
  namespace: example-namespace
spec:
  config:
    apiVersion: v1
    kind: ConfigMap
    name: kafka-broker-config
    namespace: knative-eventing

1
Uses the Kafka broker class to create a Knative Kafka Broker.

The following command displays the list of triggers set up for the Data Index and Job Service events, showing which services are subscribed to the events:

$ oc get triggers -n example-namespace

Example output

NAME                                                        BROKER           SINK                                                       AGE   CONDITIONS   READY   REASON
data-index-jobs-fbf285df-c0a4-4545-b77a-c232ec2890e2        example-broker   service:sonataflow-platform-example-data-index-service     106s  7 OK / 7    True    -
data-index-process-definition-e48b4e4bf73e22b90ecf7e093ff6b1eaf   example-broker   service:sonataflow-platform-example-data-index-service     106s  7 OK / 7    True    -
data-index-process-error-fbf285df-c0a4-4545-b77a-c232ec2890e2   example-broker   service:sonataflow-platform-example-data-index-service     106s  7 OK / 7    True    -
data-index-process-instance-mul35f055c67a626f51bb8d2752606a6b54   example-broker   service:sonataflow-platform-example-data-index-service     106s  7 OK / 7    True    -
data-index-process-node-fbf285df-c0a4-4545-b77a-c232ec2890e2      example-broker   service:sonataflow-platform-example-data-index-service     106s  7 OK / 7    True    -
data-index-process-state-fbf285df-c0a4-4545-b77a-c232ec2890e2     example-broker   service:sonataflow-platform-example-data-index-service     106s  7 OK / 7    True    -
data-index-process-variable-ac727d6051750888dedb72f697737c0dfbf   example-broker   service:sonataflow-platform-example-data-index-service     106s  7 OK / 7    True    -
jobs-service-create-job-fbf285df-c0a4-4545-b77a-c232ec2890e2    example-broker   service:sonataflow-platform-example-jobs-service         106s  7 OK / 7    True    -
jobs-service-delete-job-fbf285df-c0a4-4545-b77a-c232ec2890e2    example-broker   service:sonataflow-platform-example-jobs-service         106s  7 OK / 7    True    -

To see the SinkBinding resource for the Job Service, use the following command:

$ oc get sources -n example-namespace

Example output

NAME                                          TYPE          RESOURCE                           SINK                    READY
sonataflow-platform-example-jobs-service-sb   SinkBinding   sinkbindings.sources.knative.dev   broker:example-broker   True

5.3.6. Advanced supporting services configurations

In scenarios where you must apply advanced configurations for supporting services, use the podTemplate field in the SonataFlowPlatform custom resource (CR). This field allows you to customize the service pod deployment by specifying configurations like the number of replicas, environment variables, container images, and initialization options.

You can configure advanced settings for the service by using the following example:

Advanced configurations example for the Data Index service

apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: sonataflow-platform-example
  namespace: example-namespace
spec:
  services:
    # This can be either 'dataIndex' or 'jobService'
    dataIndex:
      enabled: true
      podTemplate:
        replicas: 2 
1

        container: 
2

          env: 
3

            - name: <any_advanced_config_property>
              value: <any_value>
          image: 
4

        initContainers: 
5

Note

You can set the services field to either dataIndex or jobService, depending on your requirements. The rest of the configuration remains the same.

1
Defines the number of replicas. Default value is 1. In the case of jobService, this value is always overridden to 1 because it operates as a singleton service.
2
Holds specific configurations for the container running the service.
3
Allows you to fine-tune service properties by specifying environment variables.
4
Configures the container image for the service, useful if you need to update or customize the image.
5
Configures init containers for the pod, useful for setting up prerequisites before the main container starts.
Note

The podTemplate field provides flexibility for tailoring the deployment of each supporting service. It follows the standard PodSpec API, meaning the same API validation rules apply to these fields.

5.3.7. Cluster-scoped supporting services

You can define a cluster-wide set of supporting services that workflows in different namespaces can consume by using the SonataFlowClusterPlatform custom resource (CR). By referencing an existing namespace-specific SonataFlowPlatform CR, you can extend the use of these services cluster-wide.

You can use the following example of a basic configuration that enables workflows deployed in any namespace to utilize supporting services deployed in a specific namespace, such as example-namespace:

Example of a SonataFlowClusterPlatform CR

apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowClusterPlatform
metadata:
  name: cluster-platform
spec:
  platformRef:
    name: sonataflow-platform-example 1
    namespace: example-namespace 2

1 Specifies the name of the already installed SonataFlowPlatform CR that manages the supporting services.
2 Specifies the namespace of the SonataFlowPlatform CR that manages the supporting services.
Note

You can override these cluster-wide services within any namespace by configuring that namespace in SonataFlowPlatform.spec.services.
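For example, a namespace that requires its own Data Index can declare a local SonataFlowPlatform CR. The following is a minimal sketch, assuming a namespace named another-namespace:

apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: local-platform
  namespace: another-namespace
spec:
  services:
    # This namespace-local Data Index overrides the cluster-wide supporting services
    dataIndex:
      enabled: true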

Chapter 6. Configuring workflow services

This section describes how to configure a workflow service by using the OpenShift Serverless Logic Operator. The section outlines key concepts and configuration options that you can reference for customizing your workflow service according to your environment and use case. You can edit workflow configurations, manage specific properties, and define global managed properties to ensure consistent and efficient execution of your workflows.

6.1. Modifying workflow configuration

The OpenShift Serverless Logic Operator derives the workflow configuration from two ConfigMaps for each workflow: one for user-defined properties and one for Operator-managed properties:

  • User-defined properties: If your workflow requires particular configurations, ensure that you create a ConfigMap named <workflow-name>-props that includes all the configurations before workflow deployment. For example, if your workflow name is greeting, the ConfigMap name is greeting-props. If such a ConfigMap does not exist, the Operator creates it with empty or default content.
  • Managed properties: Automatically generated by the Operator and stored in a ConfigMap named <workflow-name>-managed-props. These properties typically configure how the workflow connects to supporting services, the eventing system, and so on.
Note

Managed properties always override user-defined properties with the same key. These managed properties are read-only and reset by the Operator during each reconciliation cycle.

Prerequisites

  • You have OpenShift Serverless Logic Operator installed on your cluster.
  • You have created your OpenShift Serverless Logic project.
  • You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
  • You have installed the OpenShift CLI (oc).
  • You have previously created the workflow user-defined properties ConfigMap, or the Operator has created it.

Procedure

  1. Open your terminal and access the OpenShift Serverless Logic project. Ensure that you are working within the correct project (namespace) where your workflow service is deployed.

    $ oc project <your-project-name>
  2. Identify the name of the workflow you want to configure.

    For example, if your workflow is named greeting, the user-defined properties are stored in a ConfigMap named greeting-props.

  3. Edit the workflow ConfigMap by executing the following example command:

    $ oc edit configmap greeting-props

    Replace greeting with the actual name of your workflow.

  4. Modify the application.properties section.

    Locate the data section and update the application.properties field with your desired configuration.

    Example of ConfigMap

    apiVersion: v1
    kind: ConfigMap
    metadata:
      labels:
        app: greeting
      name: greeting-props
      namespace: default
    data:
      application.properties: |
        my.properties.key = any-value
      ...

  5. After updating the properties, save the file and exit the editor. The updated configuration is applied automatically.
Note

The workflow runtime is based on Quarkus, so all the keys under application.properties must follow Quarkus property syntax. If the format is invalid, the OpenShift Serverless Logic Operator might overwrite your changes with default values during the next reconciliation cycle.
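For example, entries under application.properties follow the plain key=value format that Quarkus expects; the keys shown here are hypothetical:

quarkus.log.level=INFO
my.workflow.greeting.message=Hello from the workflow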

Verification

  • To confirm that your changes are applied successfully, execute the following example command:

    $ oc get configmap greeting-props -o yaml

6.2. Managed properties in workflow services

The OpenShift Serverless Logic Operator uses managed properties to control essential runtime behavior. These values are stored separately and override user-defined properties during each reconciliation cycle. You can also apply custom managed properties globally by updating the SonataFlowPlatform resource within the same namespace.

Some properties used by the OpenShift Serverless Logic Operator are managed properties and cannot be changed through the standard user configuration. These properties are stored in a dedicated ConfigMap, typically named <workflow-name>-managed-props. If you try to modify any managed property directly, the Operator will automatically revert it to its default value, but it will preserve your other user-defined changes.

Note

You cannot override the default managed properties set by the Operator using global managed properties. These defaults are always enforced during reconciliation.

The following table lists some core managed properties as an example:

Table 6.1. Managed properties overview
Property Key                                            Immutable Value                     Profile
quarkus.http.port                                       8080                                all
kogito.service.url                                      http://greeting.example-namespace   all
org.kie.kogito.addons.knative.eventing.health-enabled   false                               dev

Other managed properties include Kubernetes service discovery settings, Data Index location properties, Job Service location properties, and Knative Eventing system configurations.
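To review the managed properties that the Operator generated for a workflow, you can inspect the managed properties ConfigMap. The following is a sketch, assuming a workflow named greeting:

$ oc get configmap greeting-managed-props -o yaml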

6.3. Defining global managed properties

You can define custom global managed properties for all workflows in a specific namespace by editing the SonataFlowPlatform resource. These properties are defined under the .spec.properties.flow attribute and are automatically applied to every workflow service in the same namespace.

Prerequisites

  • You have OpenShift Serverless Logic Operator installed on your cluster.
  • You have created your OpenShift Serverless Logic project.
  • You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Locate the SonataFlowPlatform resource in the same namespace as your workflow services.

    This is where you will define global managed properties.

  2. Open the SonataFlowPlatform resource in your default editor by executing the following command:

    $ oc edit sonataflowplatform sonataflow-platform-example
  3. Define custom global managed properties.

    In the editor, navigate to the spec.properties.flow section and define your desired properties as shown in the following example:

    Example of a SonataFlowPlatform with flow properties

    apiVersion: sonataflow.org/v1alpha08
    kind: SonataFlowPlatform
    metadata:
      name: sonataflow-platform-example
    spec:
      properties:
        flow: 1
          - name: quarkus.log.category 2
            value: INFO 3

    1 Attribute to define a list of custom global managed properties.
    2 The property key.
    3 The property value to apply globally.

    This configuration adds the quarkus.log.category=INFO property to the managed properties of every workflow service in the namespace.

  4. Optional: Use external ConfigMaps or Secrets.

    You can also reference values from existing ConfigMap or Secret resources using the valueFrom attribute as shown in the following example:

    Example of SonataFlowPlatform properties from a ConfigMap and a Secret

    apiVersion: sonataflow.org/v1alpha08
    kind: SonataFlowPlatform
    metadata:
      name: sonataflow-platform-example
    spec:
      properties:
        flow:
          - name: my.petstore.auth.token
            valueFrom: 1
              secretKeyRef: petstore-credentials 2
                keyName: AUTH_TOKEN
          - name: my.petstore.url
            valueFrom:
              configMapRef: petstore-props 3
                keyName: PETSTORE_URL

    1 The valueFrom attribute is derived from the Kubernetes EnvVar API and works similarly to how environment variables reference external sources.
    2 valueFrom.secretKeyRef pulls the value from a key named AUTH_TOKEN in the petstore-credentials secret.
    3 valueFrom.configMapRef pulls the value from a key named PETSTORE_URL in the petstore-props ConfigMap.
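To confirm that a global property reached a specific workflow, you can search that workflow's managed properties ConfigMap. A sketch, assuming a workflow named greeting:

$ oc get configmap greeting-managed-props -o yaml | grep quarkus.log.category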

Chapter 7. Managing workflow persistence

You can configure a SonataFlow instance to use persistence and store workflow context in a relational database.

By design, Kubernetes pods are stateless. This behavior can pose challenges for workloads that need to maintain the application state across pod restarts. In the case of OpenShift Serverless Logic, by default, the workflow context is lost when the pod restarts.

To ensure workflow recovery in such scenarios, you must configure workflow runtime persistence. Use the SonataFlowPlatform custom resource (CR) or the SonataFlow CR to provide this configuration. The scope of the configuration varies depending on which resource you use.

7.1. Configuring persistence using the SonataFlowPlatform CR

The SonataFlowPlatform custom resource (CR) enables persistence configuration at the namespace level. This approach applies the persistence settings automatically to all workflows deployed in the namespace. It simplifies resource configuration, especially when multiple workflows in the namespace belong to the same application. While this configuration is applied by default, individual workflows in the namespace can override it by using the SonataFlow CR.

The OpenShift Serverless Logic Operator also uses this configuration to set up persistence for supporting services.

Note

The persistence configurations are applied only at the time of workflow deployment. Changes to the SonataFlowPlatform CR do not affect workflows that are already deployed.

Procedure

  1. Define the SonataFlowPlatform CR.
  2. Specify the persistence settings in the persistence field under the SonataFlowPlatform CR spec.

    apiVersion: sonataflow.org/v1alpha08
    kind: SonataFlowPlatform
    metadata:
      name: sonataflow-platform-example
      namespace: example-namespace
    spec:
      persistence:
        postgresql:
          serviceRef:
            name: postgres-example 1
            namespace: postgres-example-namespace 2
            databaseName: example-database 3
            port: 1234 4
          secretRef:
            name: postgres-secrets-example 5
            userKey: POSTGRESQL_USER 6
            passwordKey: POSTGRESQL_PASSWORD 7
    1 Name of the Kubernetes Service connecting to the PostgreSQL database.
    2 Optional: Namespace of the PostgreSQL Service. Defaults to the namespace of the SonataFlowPlatform.
    3 Name of the PostgreSQL database for storing workflow data.
    4 Optional: Port number to connect to the PostgreSQL service. Defaults to 5432.
    5 Name of the Kubernetes Secret containing database credentials.
    6 Key in the Secret object that contains the database username.
    7 Key in the Secret object that contains the database password.
  3. View the generated environment variables for the workflow.

    The following example shows the generated environment variables for a workflow named example-workflow deployed with the earlier SonataFlowPlatform configuration. These configurations specifically relate to persistence and are managed by the OpenShift Serverless Logic Operator. You cannot modify these settings once you have applied them.

Note

When you use the SonataFlowPlatform persistence, every workflow is configured to use a PostgreSQL schema name equal to the workflow name.

env:
  - name: QUARKUS_DATASOURCE_USERNAME
    valueFrom:
      secretKeyRef:
        name: postgres-secrets-example
        key: POSTGRESQL_USER
  - name: QUARKUS_DATASOURCE_PASSWORD
    valueFrom:
      secretKeyRef:
        name: postgres-secrets-example
        key: POSTGRESQL_PASSWORD
  - name: QUARKUS_DATASOURCE_DB_KIND
    value: postgresql
  - name: QUARKUS_DATASOURCE_JDBC_URL
    value: >-
      jdbc:postgresql://postgres-example.postgres-example-namespace:1234/example-database?currentSchema=example-workflow
  - name: KOGITO_PERSISTENCE_TYPE
    value: jdbc

When this persistence configuration is in place, the OpenShift Serverless Logic Operator configures every workflow deployed in this namespace by using the preview or gitops profile to connect with the PostgreSQL database, injecting the relevant JDBC connection parameters as environment variables.

Note

PostgreSQL is currently the only supported database for persistence.

For SonataFlow CR deployments using the preview profile, the OpenShift Serverless Logic build system automatically includes specific Quarkus extensions required for enabling persistence. This ensures compatibility with persistence mechanisms, streamlining the workflow deployment process.

7.2. Configuring persistence using the SonataFlow CR

The SonataFlow custom resource (CR) enables workflow-specific persistence configuration. You can use this configuration independently, even if SonataFlowPlatform persistence is already set up in the current namespace.

Procedure

  • Configure persistence by using the persistence field in the SonataFlow CR specification as shown in the following example:
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlow
metadata:
  name: example-workflow
  annotations:
    sonataflow.org/description: Example Workflow
    sonataflow.org/version: 0.0.1
spec:
  persistence:
    postgresql:
      serviceRef:
        name: postgres-example 1
        namespace: postgres-example-namespace 2
        databaseName: example-database 3
        databaseSchema: example-schema 4
        port: 1234 5
      secretRef:
        name: postgres-secrets-example 6
        userKey: POSTGRESQL_USER 7
        passwordKey: POSTGRESQL_PASSWORD 8
  flow:
1 Name of the Kubernetes Service that connects to the PostgreSQL database server.
2 Optional: Namespace containing the PostgreSQL Service. Defaults to the workflow namespace.
3 Name of the PostgreSQL database where workflow data is stored.
4 Optional: Name of the database schema for workflow data. Defaults to the workflow name.
5 Optional: Port to connect to the PostgreSQL Service. Defaults to 5432.
6 Name of the Kubernetes Secret containing database credentials.
7 Key in the Secret object containing the database username.
8 Key in the Secret object containing the database password.

This configuration informs the OpenShift Serverless Logic Operator that the workflow must connect to the specified PostgreSQL database server when deployed. The OpenShift Serverless Logic Operator adds the relevant JDBC connection parameters as environment variables to the workflow container.

Note

PostgreSQL is currently the only supported database for persistence.

For SonataFlow CR deployments using the preview profile, the OpenShift Serverless Logic build system includes the required Quarkus extensions to enable persistence automatically.

7.3. Persistence configuration precedence rules

You can use SonataFlow custom resource (CR) persistence independently or alongside SonataFlowPlatform CR persistence. If a SonataFlowPlatform CR persistence configuration exists in the current namespace, the following rules determine which persistence configuration applies:

  1. If the SonataFlow CR includes a persistence configuration, that configuration takes precedence and applies to the workflow.
  2. If the SonataFlow CR does not include a persistence configuration and the spec.persistence field is absent, the OpenShift Serverless Logic Operator uses the persistence configuration from the current SonataFlowPlatform CR, if any.
  3. To disable persistence for the workflow, explicitly set spec.persistence: {} in the SonataFlow CR. This configuration ensures the workflow does not inherit persistence settings from the SonataFlowPlatform CR.
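For example, the following minimal sketch opts a single workflow out of persistence, even when the enclosing namespace defines a SonataFlowPlatform persistence configuration:

apiVersion: sonataflow.org/v1alpha08
kind: SonataFlow
metadata:
  name: example-workflow
spec:
  # An empty persistence block disables persistence for this workflow only
  persistence: {}
  flow: ...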

7.4. Profile-specific persistence requirements

The persistence configurations provided for both SonataFlowPlatform custom resource (CR) and SonataFlow CR apply equally to the preview and gitops profiles. However, you must avoid using these configurations with the dev profile, as this profile ignores them entirely.

The primary difference between the preview and gitops profiles lies in the build process.

When using the gitops profile, ensure that the following Quarkus extensions are included in the workflow image during the build process.

groupId      artifactId                            version
io.quarkus   quarkus-agroal                        3.15.4.redhat-00001
io.quarkus   quarkus-jdbc-postgresql               3.15.4.redhat-00001
org.kie      kie-addons-quarkus-persistence-jdbc   9.103.0.redhat-00003

If you are using the registry.redhat.io/openshift-serverless-1/logic-swf-builder-rhel8:1.36.0 builder image to generate your images, you can pass the following build argument to include these extensions:

$ QUARKUS_EXTENSIONS=io.quarkus:quarkus-agroal:3.15.4.redhat-00001,io.quarkus:quarkus-jdbc-postgresql:3.15.4.redhat-00001,org.kie:kie-addons-quarkus-persistence-jdbc:9.103.0.redhat-00003
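For example, assuming you build the image with Podman, you can pass the extensions as a build argument; the image tag is an illustrative placeholder:

$ podman build \
    --build-arg QUARKUS_EXTENSIONS=io.quarkus:quarkus-agroal:3.15.4.redhat-00001,io.quarkus:quarkus-jdbc-postgresql:3.15.4.redhat-00001,org.kie:kie-addons-quarkus-persistence-jdbc:9.103.0.redhat-00003 \
    -t my-workflow-image:latest .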

7.5. Database schema initialization

When you are using SonataFlow with PostgreSQL persistence, you can initialize the database schema either by enabling Flyway or by manually applying database schema updates using Data Definition Language (DDL) scripts.

Flyway is managed by the kie-addons-quarkus-flyway runtime module and it is disabled by default. To enable Flyway, you must configure it using one of the following methods:

To enable Flyway in the workflow ConfigMap, you can add the following property:

Example of enabling Flyway in the workflow ConfigMap

apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app: example-workflow
  name: example-workflow-props
data:
  application.properties: |
    kie.flyway.enabled = true

You can enable Flyway by adding an environment variable to the spec.podTemplate.container field in the SonataFlow CR, as shown in the following example:

Example of enabling Flyway by using the workflow container environment variable

apiVersion: sonataflow.org/v1alpha08
kind: SonataFlow
metadata:
  name: example-workflow
  annotations:
    sonataflow.org/description: Example Workflow
    sonataflow.org/version: 0.0.1
spec:
  podTemplate:
    container:
      env:
        - name: KIE_FLYWAY_ENABLED
          value: 'true'
  flow: ...

To apply a common Flyway configuration to all workflows within a namespace, you can add the property to the spec.properties.flow field of the SonataFlowPlatform CR, as shown in the following example:

Note

This configuration is applied during workflow deployment. Ensure the Flyway property is set before deploying workflows.

Example of enabling Flyway by using the SonataFlowPlatform properties

apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: sonataflow-platform
spec:
  properties:
    flow:
      - name: kie.flyway.enabled
        value: 'true'

If you prefer manual initialization, you must disable Flyway by ensuring the kie.flyway.enabled property is either not configured or explicitly set to false.

  • By default, each workflow uses a schema name equal to the workflow name. Ensure that you manually apply the schema initialization for each workflow.
  • If you are using the SonataFlow custom resource (CR) persistence configuration, you can specify a custom schema name.

Procedure

  1. Download the kogito-ddl-9.103.0.redhat-00003-db-scripts.zip archive that contains the DDL scripts.
  2. Extract the files.
  3. Run the .sql files located in the root directory on the target PostgreSQL database. Ensure that the files are executed in the order of their version numbers.

    For example:

    • V1.35.0__create_runtime_PostgreSQL.sql
    • V10.0.0__add_business_key_PostgreSQL.sql
    • V10.0.1__alter_correlation_PostgreSQL.sql

      Note

      The file version numbers are not associated with the OpenShift Serverless Logic Operator versioning.
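For example, a minimal sketch of applying one script with the psql client, assuming the connection details used in the earlier persistence examples; repeat the command for each script in version order:

$ psql -h postgres-example.postgres-example-namespace -p 1234 \
    -U <database_user> -d example-database \
    -f V1.35.0__create_runtime_PostgreSQL.sql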

Chapter 8. Workflow eventing system

You can set up the eventing system for a SonataFlow workflow.

In an OpenShift Serverless Logic installation, the following types of events are generated:

  • Outgoing and incoming events related to workflow business logic.
  • Events sent from workflows to the Data Index and Job Service.
  • Events sent from the Job Service to the Data Index Service.

The OpenShift Serverless Logic Operator leverages the Knative Eventing system to manage all event communication between these services, ensuring efficient and reliable event handling.

8.1. Platform-scoped eventing system configuration

To configure a platform-scoped eventing system, you can use the spec.eventing.broker.ref field in the SonataFlowPlatform custom resource (CR) to reference a Knative Eventing broker.

This configuration instructs the OpenShift Serverless Logic Operator to automatically link every workflow deployed in the specified namespace, using the preview or gitops profile, to produce and consume events through the defined broker.

The supporting services deployed in the namespace without a custom eventing configuration are also linked to this broker.

Note

In production environments, use a production-ready broker, such as the Knative Kafka Broker, for enhanced scalability and reliability.

The following example displays how to configure the SonataFlowPlatform CR for a platform-scoped eventing system:

apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: sonataflow-platform-example
  namespace: <example-namespace>
spec:
  eventing:
    broker:
      ref:
        name: example-broker 1
        namespace: <example-broker-namespace> 2
        apiVersion: eventing.knative.dev/v1
        kind: Broker
1 Specifies the Knative Eventing Broker name.
2 Optional: Specifies the namespace of the Knative Eventing Broker. If you do not specify the value, the parameter defaults to the namespace of the SonataFlowPlatform CR. Consider creating the broker in the same namespace as SonataFlowPlatform.

8.2. Workflow-scoped eventing system configuration

A workflow-scoped eventing system configuration allows for detailed customization of the events produced and consumed by a specific workflow. You can use the spec.sink.ref and spec.sources[] fields in the SonataFlow CR to configure outgoing and incoming events.

8.2.1. Outgoing eventing system configuration

To configure outgoing events, you can use the spec.sink.ref field in the SonataFlow CR. This configuration ensures the workflow produces events using the specified Knative Eventing Broker, including both system events and workflow business events.

The following example displays how to configure the SonataFlow CR for a workflow-scoped outgoing eventing system:

apiVersion: sonataflow.org/v1alpha08
kind: SonataFlow
metadata:
  name: example-workflow
  namespace: example-workflow-namespace
  annotations:
    sonataflow.org/description: Example Workflow
    sonataflow.org/version: 0.0.1
    sonataflow.org/profile: preview
spec:
  sink:
    ref:
      name: outgoing-example-broker 1
      namespace: outgoing-example-broker-namespace 2
      apiVersion: eventing.knative.dev/v1
      kind: Broker
  flow: 3
    start: ExampleStartState
    events: 4
      - name: outEvent1 5
        source: ''
        kind: produced
        type: out-event-type1 6
    ...
1 Name of the Knative Eventing Broker to use for all the events produced by the workflow, including the SonataFlow system events.
2 Defines the namespace of the Knative Eventing Broker. If you do not specify a value, the parameter defaults to the SonataFlow namespace. Consider creating the broker in the same namespace as SonataFlow.
3 Flow definition field in the SonataFlow CR.
4 Events definition field in the SonataFlow CR.
5 Example of an outgoing event outEvent1 definition.
6 Event type for the outEvent1 outgoing event.

8.2.2. Incoming eventing system configuration

To configure incoming events, you can use the spec.sources[] field in the SonataFlow CR. You can add an entry for each event type requiring specific configuration. This setup allows workflows to consume events from different brokers based on event type.

If an incoming event type lacks a specific Broker configuration, the system applies eventing system configuration precedence rules.

The following example displays how to configure the SonataFlow CR for a workflow-scoped incoming eventing system:

Note

A spec.sources[] entry is linked to a workflow event by the event type.

apiVersion: sonataflow.org/v1alpha08
kind: SonataFlow
metadata:
  name: example-workflow
  namespace: example-workflow-namespace
  annotations:
    sonataflow.org/description: Example Workflow
    sonataflow.org/version: 0.0.1
    sonataflow.org/profile: preview
spec:
  sources:
    - eventType: in-event-type1 1
      ref:
        name: incoming-example-broker1 2
        namespace: incoming-example-broker1-namespace 3
        apiVersion: eventing.knative.dev/v1
        kind: Broker
    - eventType: in-event-type2 4
      ref:
        name: incoming-example-broker2 5
        namespace: incoming-example-broker2-namespace 6
        apiVersion: eventing.knative.dev/v1
        kind: Broker
  flow: 7
    start: ExampleStartState
    events: 8
      - name: inEvent1 9
        source: ''
        kind: consumed
        type: in-event-type1 10
      - name: inEvent2 11
        source: ''
        kind: consumed
        type: in-event-type2 12
    ...
1 Configures the workflow to consume events of type in-event-type1 by using the specified Knative Eventing Broker.
2 Name of the Knative Eventing Broker to use for the consumption of the events of type in-event-type1 sent to this workflow.
3 Optional: If you do not specify the value, the parameter defaults to the SonataFlow namespace. Consider creating the broker in the same namespace as the SonataFlow workflow.
4 Configures the workflow to consume events of type in-event-type2 by using the specified Knative Eventing Broker.
5 Name of the Knative Eventing Broker to use for the consumption of the events of type in-event-type2 sent to this workflow.
6 Optional: If you do not specify the value, the parameter defaults to the SonataFlow namespace. Consider creating the broker in the same namespace as the SonataFlow workflow.
7 Flow definition field in the SonataFlow CR.
8 Events definition field in the SonataFlow CR.
9 Example of an incoming event inEvent1 definition.
10 Event type for the incoming event inEvent1. The workflow event is linked to the corresponding spec.sources[] entry by the event type name in-event-type1.
11 Example of an incoming event inEvent2 definition.
12 Event type for the incoming event inEvent2. The workflow event is linked to the corresponding spec.sources[] entry by the event type name in-event-type2.

8.3. Cluster-scoped eventing system configuration

In a SonataFlowClusterPlatform setup, workflows are automatically linked to the Broker specified in the associated SonataFlowPlatform CR. This linkage follows the Eventing System configuration precedence rules.

To ensure proper integration, configure the Broker in the SonataFlowPlatform CR that is referenced by the SonataFlowClusterPlatform CR.

The following example displays how to configure the SonataFlowClusterPlatform CR and its reference to the SonataFlowPlatform CR:

apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: global-platform
  namespace: global-namespace
spec:
  eventing:
    broker:
      ref:
        name: global-broker
        namespace: global-namespace
        apiVersion: eventing.knative.dev/v1
        kind: Broker
---
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowClusterPlatform
metadata:
  name: cluster-platform-example
spec:
  platformRef:
    name: global-platform
    namespace: global-namespace
    ...
Note

The SonataFlowClusterPlatform CR can refer to any SonataFlowPlatform CR that has already been deployed.

8.4. Eventing system configuration precedence rules

The OpenShift Serverless Logic Operator follows a defined order of precedence to determine the eventing system configuration for a workflow.

Eventing system configuration precedence rules are as follows:

  1. If the workflow defines its own eventing system by using a workflow-scoped outgoing or incoming eventing system configuration, that configuration takes precedence and applies to the workflow.
  2. If the SonataFlowPlatform CR enclosing the workflow configures a platform-scoped eventing system, that configuration applies next.
  3. If the current cluster is configured with a cluster-scoped eventing system, it applies when no workflow-scoped or platform-scoped configuration exists.
  4. If none of the preceding configurations are defined, the following behavior applies:

    • The workflow uses direct HTTP calls to deliver SonataFlow system events to supporting services.
    • The workflow consumes incoming events by HTTP POST calls at the workflow service root path /.
    • No eventing system is configured to produce workflow business events. Any attempt to produce such events might result in a failure.
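For example, in that default setup you can deliver an event to the workflow with a plain HTTP POST using CloudEvents binary-mode headers. The following is a sketch, assuming the example-workflow service and the in-event-type1 type used earlier in this chapter:

$ curl -X POST http://example-workflow.example-namespace/ \
    -H "ce-specversion: 1.0" \
    -H "ce-type: in-event-type1" \
    -H "ce-source: /example/source" \
    -H "ce-id: 12345" \
    -H "content-type: application/json" \
    -d '{"message": "hello"}'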

8.5. Linking workflows to the eventing system

The OpenShift Serverless Logic Operator links workflows with the eventing system by using Knative Eventing SinkBinding and Trigger objects. The Operator creates these objects automatically, and they simplify the production and consumption of workflow events.

The following example shows the Knative Eventing objects created for an example-workflow workflow configured with a platform-scoped eventing system:

apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: sonataflow-platform-example
  namespace: example-namespace
spec:
  eventing:
    broker:
      ref:
        name: example-broker 1
        apiVersion: eventing.knative.dev/v1
        kind: Broker
  services:
    dataIndex: 2
      enabled: true
    jobService: 3
      enabled: true
  ...
1 The example-broker object is used by the Data Index, Jobs Service, and the example-workflow workflow.
2 The Data Index is deployed with ephemeral configurations.
3 The Jobs Service is deployed with ephemeral configurations.

The example-broker object is a Kafka class Broker, and its configuration is defined in the kafka-broker-config config map.

The following example displays how to configure a Kafka Knative Broker for use with the SonataFlowPlatform:

apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  annotations:
    eventing.knative.dev/broker.class: Kafka 1
  name: example-broker
  namespace: example-namespace
spec:
  config:
    apiVersion: v1
    kind: ConfigMap
    name: kafka-broker-config
    namespace: knative-eventing
1 The Kafka class is used to create the example-broker object.

The following example displays how the example-workflow is automatically linked to the example-broker in the example-namespace for event production and consumption:

apiVersion: sonataflow.org/v1alpha08
kind: SonataFlow
metadata:
  name: example-workflow
  namespace: example-namespace
  annotations:
    sonataflow.org/description: Example Workflow
    sonataflow.org/version: 0.0.1
    sonataflow.org/profile: preview
spec:
  flow:
    start: ExampleStartState
    events:
      - name: outEvent1
        source: ''
        kind: produced
        type: out-event-type1 1
      - name: inEvent1
        source: ''
        kind: consumed
        type: in-event-type1 2
      - name: inEvent2
        source: ''
        kind: consumed
        type: in-event-type2 3
    states:
      - name: ExampleStartState
    ...
1 The example-workflow outgoing events are produced by using the SinkBinding named example-workflow-sb.
2 Events of type in-event-type1 are consumed by using the example-workflow-inevent1-b40c067c-595b-4913-81a4-c8efa980bc11 trigger.
3 Events of type in-event-type2 are consumed by using the example-workflow-inevent2-b40c067c-595b-4913-81a4-c8efa980bc11 trigger.

You can list the automatically created SinkBinding named example-workflow-sb by using the following command:

$ oc get sinkbindings -n example-namespace

Example output

NAME                   TYPE          RESOURCE                           SINK                    READY
example-workflow-sb    SinkBinding   sinkbindings.sources.knative.dev   broker:example-broker   True

You can use the following command to list the automatically created triggers for event consumption:

$ oc get triggers -n <example-namespace>

Example output

NAME                                                              BROKER           SINK                                                     AGE   CONDITIONS   READY   REASON
example-workflow-inevent1-b40c067c-595b-4913-81a4-c8efa980bc11    example-broker   service:example-workflow                                 16m   7 OK / 7     True
example-workflow-inevent2-b40c067c-595b-4913-81a4-c8efa980bc11    example-broker   service:example-workflow                                 16m   7 OK / 7     True

Chapter 9. Configuring custom Maven mirrors

OpenShift Serverless Logic uses Maven Central by default to resolve Maven artifacts during workflow builds. The provided builder and development images include all required Java libraries to run workflows, but in certain scenarios, such as when you add a custom Quarkus extension, you must download the additional dependencies from Maven Central.

In environments with restricted or firewalled network access, direct access to Maven Central might not be available. In such cases, you can configure the workflow containers to use a custom Maven mirror, such as an internal company registry or repository manager.

You can configure a custom Maven mirror at different levels as follows:

  • Per workflow build by updating the SonataFlowBuild custom resource.
  • At the platform level by updating the SonataFlowPlatform custom resource.
  • For development mode deployments by editing the SonataFlow custom resource.
  • When building custom images externally by using the builder image.

9.1. Adding a Maven mirror when building workflows

You can configure a Maven mirror by setting the MAVEN_MIRROR_URL environment variable in the SonataFlowBuild or SonataFlowPlatform custom resources (CR).

Note

The recommended approach is to update the SonataFlowPlatform CR. This ensures the mirror configuration is propagated automatically to all workflow builds within the platform scope.

Prerequisites

  • You have OpenShift Serverless Logic Operator installed on your cluster.
  • You have created your OpenShift Serverless Logic project.
  • You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
  • You have access to a custom Maven mirror or internal repository.

Procedure

  1. Edit the SonataFlowPlatform CR to configure a Maven mirror for all workflow builds in a namespace, as shown in the following example:

    Example of Maven mirror configuration in a SonataFlowPlatform CR

    apiVersion: sonataflow.org/v1alpha08
    kind: SonataFlowPlatform
    metadata:
      name: my-platform
    spec:
      build:
        template:
          envs:
            - name: MAVEN_MIRROR_URL
              value: http://my.company.registry.local

    This configuration applies to all workflow builds in the same namespace that use the preview profile. When a workflow builder instance runs, it updates the internal Maven settings file to use the specified mirror as the default for external locations such as Maven Central.

  2. Optional: If you need a specific configuration for a single workflow build, create the SonataFlowBuild CR before creating the corresponding SonataFlow CR. The SonataFlowBuild and SonataFlow CRs must have the same name.

    Example of Maven mirror configuration in a SonataFlowBuild CR

    apiVersion: sonataflow.org/v1alpha08
    kind: SonataFlowBuild
    metadata:
      name: my-workflow 1
      annotations:
        sonataflow.org/restartBuild: "true" 2
    spec:
      # suppressed for brevity
      envs:
        - name: MAVEN_MIRROR_URL 3
          value: http://my.company.registry.local

    1 The SonataFlowBuild CR must have the same name as the corresponding SonataFlow CR.
    2 The sonataflow.org/restartBuild: "true" annotation forces the existing build to restart with the new configuration.
    3 The MAVEN_MIRROR_URL environment variable specifies the custom Maven mirror.
    Note

    You can use the SonataFlowBuild CR configuration only when you require workflow-specific behavior, for example, debugging. For general use, configure the SonataFlowPlatform CR instead.

9.2. Configuring a Maven mirror for dev mode deployments

You can configure a Maven mirror for workflows that run in dev mode by adding the MAVEN_MIRROR_URL environment variable to the SonataFlow custom resource (CR).

Prerequisites

  • You have created your OpenShift Serverless Logic project.
  • You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
  • You have a workflow deployed with the dev profile.
  • You have access to a custom Maven mirror or internal repository.

Procedure

  • Edit the SonataFlow CR to include the Maven mirror configuration as shown in the following example:

    Example of Maven mirror configuration on SonataFlow CR

    apiVersion: sonataflow.org/v1alpha08
    kind: SonataFlow
    metadata:
      name: greeting
      annotations:
        sonataflow.org/description: Greeting example on k8s!
        sonataflow.org/version: 0.0.1
        sonataflow.org/profile: dev
    spec:
      podTemplate:
        container:
          env:
            - name: MAVEN_MIRROR_URL 1
              value: http://my.company.registry.local
      flow: #suppressed for brevity

    1 The MAVEN_MIRROR_URL variable specifies the custom Maven mirror.
Note

Only workflows deployed with the dev profile can use Maven mirrors. Other deployment models run compiled code only, so they do not need to connect to a Maven registry.

9.3. Configuring a Maven mirror on a custom image

You can configure a Maven mirror for custom workflow images by setting the MAVEN_MIRROR_URL environment variable in the container file that is based on the SonataFlow builder image.

Prerequisites

  • You have created your OpenShift Serverless Logic project.
  • You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
  • You have access to a Dockerfile or container build context that uses the SonataFlow Builder image.
  • You have access to a custom Maven mirror or internal repository.

Procedure

  1. Set the Maven mirror as an environment variable in the Dockerfile as shown in the following example:

    Example of custom container file with Maven mirror set as an environment variable

    FROM docker.io/apache/incubator-kie-sonataflow-builder:main AS builder
    
    # Content suppressed for brevity
    
    # The Maven mirror URL set as an env var during the build process
    ENV MAVEN_MIRROR_URL=http://my.company.registry.local

    The ENV directive ensures that all builds with this Dockerfile automatically use the specified Maven mirror.

  2. Set the Maven mirror as a build-time argument in the Dockerfile as shown in the following example:

    Example of custom container file with Maven mirror set as an argument

    FROM docker.io/apache/incubator-kie-sonataflow-builder:main AS builder
    
    # Content suppressed for brevity
    
    # The Maven mirror URL passed as a build argument during the build process
    ARG MAVEN_MIRROR_URL

    The ARG directive allows you to pass the Maven mirror value dynamically at build time.
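For example, assuming Podman, you can supply the mirror URL when you build the image; the image tag is an illustrative placeholder:

$ podman build \
    --build-arg MAVEN_MIRROR_URL=http://my.company.registry.local \
    -t my-custom-workflow-image:latest .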

Chapter 10. Managing upgrades

10.1. Upgrading OpenShift Serverless Logic Operator from version 1.34.0 to 1.35.0

This section provides step-by-step instructions to upgrade the OpenShift Serverless Logic Operator from version 1.34.0 to 1.35.0. The upgrade process involves preparing the existing workflows and services, updating the Operator, and restoring the workflows after the upgrade.

Note

Different workflow profiles require different upgrade steps. Carefully follow the instructions for each profile.

10.1.1. Preparing for the upgrade

Before starting the upgrade process, you need to prepare your OpenShift Serverless Logic environment. This section outlines the necessary steps to ensure a smooth upgrade from version 1.34.0 to 1.35.0.

The preparation process includes:

  • Deleting or scaling workflows based on their profiles.
  • Backing up all necessary databases and resources.
  • Ensuring you have a record of all custom configurations.
  • Running required database migration scripts for workflows using persistence.

Prerequisites

  • You have access to the cluster with cluster-admin privileges.
  • You have OpenShift Serverless Logic Operator installed on your cluster.
  • You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
  • You have access to the OpenShift Management Console for Operator upgrades.
  • You have installed the OpenShift CLI (oc).
10.1.1.1. Deleting workflows with the dev profile

Before upgrading the Operator, you must delete workflows running with the dev profile and redeploy them after the upgrade is completed.

Procedure

  1. Ensure you have a backup of all necessary Kubernetes resources, including SonataFlow custom resource definitions (CRDs), ConfigMaps, or any other related custom configurations.
  2. Delete the workflow by executing the following command:

    $ oc delete -f <my-workflow.yaml> -n <target_namespace>

10.1.1.2. Deleting workflows with the preview profile

Before upgrading the Operator, you must delete workflows running with the preview profile and migrate any persisted data. When the upgrade is completed, you must redeploy the workflows.

Procedure

  1. If you are using persistence, back up the workflow database and ensure the backup includes both database objects and table data.
  2. Ensure you have a backup of all necessary Kubernetes resources, including SonataFlow CRDs, ConfigMaps, or any other related custom configurations.
  3. Delete the workflow by executing the following command:

    $ oc delete -f <my-workflow.yaml> -n <target_namespace>
  4. If you are using persistence, you must execute the following database migration script:

    ALTER TABLE flyway_schema_history
        RENAME CONSTRAINT flyway_schema_history_pk TO kie_flyway_history_runtime_persistence_pk;
    
    ALTER INDEX flyway_schema_history_s_idx
      RENAME TO kie_flyway_history_runtime_persistence_s_idx;
    
    ALTER TABLE flyway_schema_history RENAME TO kie_flyway_history_runtime_persistence;

10.1.1.3. Scaling down workflows with the gitops profile

Before upgrading the Operator, you must scale down workflows running with the gitops profile, and scale them up again after the upgrade is completed.

Procedure

  1. Modify the my-workflow.yaml custom resource (CR) and scale down each workflow to 0 before upgrading, as shown in the following example:

    spec:
      podTemplate:
        replicas: 0
  2. Apply the updated CRD by running the following command:

    $ oc apply -f <my-workflow.yaml> -n <target_namespace>
  3. Optional: Scale the workflow to 0 by running the following command:

    $ oc patch sonataflow <my-workflow> -n <target_namespace> --type=merge -p '{"spec": {"podTemplate": {"replicas": 0}}}'
10.1.1.4. Backing up the Data Index database

You must back up the Data Index database before upgrading to prevent data loss.

Procedure

  • Take a full backup of the Data Index database, ensuring:

    • The backup includes all database objects and not just table data.
    • The backup is stored in a secure location.
10.1.1.5. Backing up the Jobs Service database

You must back up the Jobs Service database before upgrading to maintain job scheduling data.

Procedure

  • Take a full backup of the Jobs Service database, ensuring:

    • The backup includes all database objects and not just table data.
    • The backup is stored in a secure location.

10.1.2. Upgrading the OpenShift Serverless Logic Operator

To transition from OpenShift Serverless Logic Operator (OSL) version 1.34.0 to 1.35.0, you must upgrade the Operator by using the web console. This upgrade ensures compatibility with newer features and proper functioning of your workflows.

Prerequisites

  • You have access to the cluster with cluster-admin privileges.
  • You have OpenShift Serverless Logic Operator installed on your cluster.
  • You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
  • You have access to the OpenShift Management Console for Operator upgrades.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. In the web console, navigate to the Operators → Installed Operators page.
  2. Select the openshift-serverless-logic namespace from the Installed Namespace list.
  3. In the list of installed Operators, find and click the OpenShift Serverless Logic Operator.
  4. In the Operator details page, click the Subscription tab, and then click Edit Subscription.
  5. In the Upgrade status, click the Upgrade available link.
  6. Click Preview install plan, and then click Approve to start the upgrade.
  7. To monitor the upgrade process, run the following command:

    $ oc get subscription logic-operator-rhel8 -n openshift-serverless-logic -o jsonpath='{.status.installedCSV}'

    Expected output

    logic-operator-rhel8.v1.35.0

Verification

  1. To verify the new Operator version is installed, run the following command:

    $ oc get clusterserviceversion logic-operator-rhel8.v1.35.0 -n openshift-serverless-logic

    Expected output

    NAME                           DISPLAY                               VERSION   REPLACES                       PHASE
    logic-operator-rhel8.v1.35.0   OpenShift Serverless Logic Operator   1.35.0    logic-operator-rhel8.v1.34.0   Succeeded

10.1.3. Finalizing the upgrade

After upgrading the OpenShift Serverless Logic Operator to version 1.35.0, you must finalize the upgrade process by restoring or scaling workflows and cleaning up old services. This ensures that your system runs cleanly on the new version and that all dependent components are configured correctly.

Follow the appropriate steps below based on the profile of your workflows and services.

Prerequisites

  • You have access to the cluster with cluster-admin privileges.
  • You have OpenShift Serverless Logic Operator installed on your cluster.
  • You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
  • You have access to the OpenShift Management Console for Operator upgrades.
  • You have installed the OpenShift CLI (oc).
10.1.3.1. Finalizing the Data Index upgrade

After the Operator upgrade, a new ReplicaSet is automatically created for Data Index 1.35.0. You must delete the old one manually.

Procedure

  1. Verify the new ReplicaSet exists by listing all ReplicaSets by running the following command:

    $ oc get replicasets -n <target_namespace> -o custom-columns=Name:metadata.name,Image:spec.template.spec.containers[*].image
  2. Identify the old Data Index ReplicaSet (with version 1.34.0) and delete it by running the following command:

    $ oc delete replicaset <old_replicaset_name> -n <target_namespace>
10.1.3.2. Finalizing the Job Service upgrade

You must manually clean up the Jobs Service components from the older version to trigger deployment of version 1.35.0 components.

Procedure

  1. Delete the old Jobs Service deployment by running the following command:

    $ oc delete deployment <jobs-service-deployment-name> -n <target_namespace>

    This triggers automatic cleanup of the older Pods and ReplicaSets and initiates a fresh deployment using version 1.35.0.

10.1.3.3. Redeploying workflows with the dev profile

After the upgrade, you must redeploy workflows that use the dev profile and any associated Kubernetes resources.

Procedure

  1. Ensure all required resources are restored, including SonataFlow CRDs, ConfigMaps, or any other related custom configurations.
  2. Redeploy the workflow by running the following command:

    $ oc apply -f <my-workflow.yaml> -n <target_namespace>

10.1.3.4. Redeploying workflows with the preview profile

Workflows with the preview profile require an additional configuration step before being redeployed.

Procedure

  1. If the workflow uses persistence, add the following property to the ConfigMap associated with the workflow:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      labels:
        app: my-workflow
      name: my-workflow-props
    data:
      application.properties: |
        kie.flyway.enabled=true
  2. Ensure all required resources are recreated, including SonataFlow CRDs, ConfigMaps, or any other related custom configurations.
  3. Redeploy the workflow by running the following command:

    $ oc apply -f <my-workflow.yaml> -n <target_namespace>

10.1.3.5. Scaling up workflows with the gitops profile

Workflows with the gitops profile that were previously scaled down must be scaled back up to continue operation.

Procedure

  1. Modify the my-workflow.yaml custom resource (CR) and scale up each workflow to 1, as shown in the following example:

    spec:
      podTemplate:
        replicas: 1
  2. Apply the updated CRD by running the following command:

    $ oc apply -f <my-workflow.yaml> -n <target_namespace>
  3. Optional: Scale the workflow back to 1 by running the following command:

    $ oc patch sonataflow <my-workflow> -n <target_namespace> --type=merge -p '{"spec": {"podTemplate": {"replicas": 1}}}'
10.1.3.6. Verifying the upgrade

After restoring workflows and services, it is essential to verify that the upgrade was successful and that all components are functioning as expected.

Procedure

  1. Check if all workflows and services are running by entering the following command:

    $ oc get pods -n <target_namespace>

    Ensure that all pods related to workflows, Data Index, and Jobs Service are in a Running or Completed state.

  2. Verify that the OpenShift Serverless Logic Operator is running correctly by entering the following command:

    $ oc get clusterserviceversion logic-operator-rhel8.v1.35.0 -n openshift-serverless-logic

    Expected output

    NAME                           DISPLAY                               VERSION   REPLACES                       PHASE
    logic-operator-rhel8.v1.35.0   OpenShift Serverless Logic Operator   1.35.0    logic-operator-rhel8.v1.34.0   Succeeded

  3. Check Operator logs for any errors by entering the following command:

    $ oc logs -l control-plane=sonataflow-operator -n openshift-serverless-logic

10.2. Upgrading OpenShift Serverless Logic Operator from version 1.35.0 to 1.36.0

You can upgrade the OpenShift Serverless Logic Operator from version 1.35.0 to 1.36.0. The upgrade process involves preparing the existing workflows and services, updating the Operator, and restoring the workflows after the upgrade.

Note

Different workflow profiles require different upgrade steps. Follow the instructions for each profile carefully.

10.2.1. Preparing for the upgrade

Before starting the upgrade process, you need to prepare your OpenShift Serverless Logic environment to upgrade from version 1.35.0 to 1.36.0.

The preparation process is as follows:

  • Deleting or scaling workflows based on their profiles.
  • Backing up all necessary databases and resources.
  • Ensuring you have a record of all custom configurations.

Prerequisites

  • You have access to the cluster with cluster-admin privileges.
  • You have OpenShift Serverless Logic Operator installed on your cluster.
  • You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
  • You have access to the OpenShift Management Console for Operator upgrades.
  • You have installed the OpenShift CLI (oc).
10.2.1.1. Deleting workflows with the dev profile

Before upgrading the Operator, you must delete workflows running with the dev profile and redeploy them after the upgrade is complete.

Procedure

  1. Ensure you have a backup of all necessary Kubernetes resources, including SonataFlow custom resources (CRs), ConfigMap resources, or any other related custom configurations.
  2. Delete the workflow by executing the following command:

    $ oc delete workflow <workflow_name> -n <target_namespace>

10.2.1.2. Deleting workflows with the preview profile

Before upgrading the Operator, you must delete workflows running with the preview profile. When the upgrade is complete, you must redeploy the workflows.

Procedure

  1. If you are using persistence, back up the workflow database and ensure the backup includes both database objects and table data.
  2. Ensure you have a backup of all necessary Kubernetes resources, including SonataFlow custom resources (CRs), ConfigMap resources, or any other related custom configurations.
  3. Delete the workflow by executing the following command:

    $ oc delete workflow <workflow_name> -n <target_namespace>

10.2.1.3. Scaling down workflows with the gitops profile

Before upgrading the Operator, you must scale down workflows running with the gitops profile, and scale them up again after the upgrade is complete.

Procedure

  1. Modify the my-workflow.yaml custom resource (CR) and scale down each workflow to 0 before upgrading, as shown in the following example:

    spec:
      podTemplate:
        replicas: 0
      # ...
  2. Apply the updated my-workflow.yaml CR by running the following command:

    $ oc apply -f my-workflow.yaml -n <target_namespace>
  3. Optional: Scale the workflow to 0 by running the following command:

    $ oc patch workflow <workflow_name> -n <target_namespace> --type=merge -p '{"spec": {"podTemplate": {"replicas": 0}}}'
10.2.1.4. Backing up the Data Index database

You must back up the Data Index database before upgrading to prevent data loss.

Procedure

  • Take a full backup of the Data Index database, ensuring:

    • The backup includes all database objects and not just table data.
    • The backup is stored in a secure location.
10.2.1.5. Backing up the Jobs Service database

You must back up the Jobs Service database before upgrading to maintain job scheduling data.

Procedure

  • Take a full backup of the Jobs Service database, ensuring:

    • The backup includes all database objects and not just table data.
    • The backup is stored in a secure location.
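
The same approach applies to the Jobs Service database. A minimal sketch, again assuming PostgreSQL and placeholder names:

    $ oc exec -n <target_namespace> <postgres_pod_name> -- \
        pg_dump -U <database_user> -d <jobs_service_database> -Fc > jobs-service-backup.dump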

10.2.2. Upgrading the OpenShift Serverless Logic Operator

You can upgrade the OpenShift Serverless Logic Operator from version 1.35.0 to 1.36.0 by performing the following steps.

Prerequisites

  • You have access to the cluster with cluster-admin privileges.
  • You have OpenShift Serverless Logic Operator installed on your cluster.
  • You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
  • You have access to the OpenShift Management Console for Operator upgrades.
  • You have installed the OpenShift CLI (oc).
  • You have version 1.35.0 of OpenShift Serverless Logic Operator installed.

Procedure

  1. Patch the ClusterServiceVersion (CSV) for the 1.35.0 OpenShift Serverless Logic Operator to update the deployment labels by running the following command:

    $ oc patch csv logic-operator-rhel8.v1.35.0 \
      -n openshift-serverless-logic \
      --type=json \
      -p='[
        {
          "op": "replace",
          "path": "/spec/install/spec/deployments/0/spec/selector/matchLabels",
          "value": {
            "app.kubernetes.io/name": "sonataflow-operator"
          }
        },
        {
          "op": "replace",
          "path": "/spec/install/spec/deployments/0/label",
          "value": {
            "app.kubernetes.io/name": "sonataflow-operator"
          }
        },
        {
          "op": "replace",
          "path": "/spec/install/spec/deployments/0/spec/template/metadata/labels",
          "value": {
            "app.kubernetes.io/name": "sonataflow-operator"
          }
        }
      ]'
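
    You can optionally confirm that the labels were updated by reading them back from the CSV, for example with a JSONPath query such as the following:

    $ oc get csv logic-operator-rhel8.v1.35.0 -n openshift-serverless-logic \
        -o jsonpath='{.spec.install.spec.deployments[0].spec.selector.matchLabels}'
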
  2. Delete the current Operator deployment by running the following command:

    $ oc delete deployment logic-operator-rhel8-controller-manager -n openshift-serverless-logic
  3. In the web console, navigate to the Operators → Installed Operators page.
  4. In the list of installed Operators, find and click the Operator named OpenShift Serverless Logic Operator.
  5. Initiate the OpenShift Serverless Logic Operator upgrade to version 1.36.0.

Verification

  • After applying the upgrade, verify that the Operator is running and in the Succeeded phase by running the following command:

    $ oc get clusterserviceversion logic-operator-rhel8.v1.36.0

    Example output

    NAME                           DISPLAY                               VERSION   REPLACES                       PHASE
    logic-operator-rhel8.v1.36.0   OpenShift Serverless Logic Operator   1.36.0    logic-operator-rhel8.v1.35.0   Succeeded

10.2.3. Finalizing the upgrade

After upgrading the OpenShift Serverless Logic Operator to version 1.36.0, you must finalize the upgrade process by restoring or scaling workflows and cleaning up old services. This ensures that your system runs cleanly on the new version and that all dependent components are configured correctly.

Follow the appropriate steps below based on the profile of your workflows and services.

Prerequisites

  • You have access to the cluster with cluster-admin privileges.
  • You have OpenShift Serverless Logic Operator installed on your cluster.
  • You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
  • You have access to the OpenShift Management Console for Operator upgrades.
  • You have installed the OpenShift CLI (oc).

10.2.3.1. Finalizing the Data Index upgrade

After the Operator upgrade, if your deployment is configured to use a Knative Eventing Kafka Broker, you must delete the old data-index-process-definition trigger that was created by OpenShift Serverless Logic 1.35.0. Optionally, you can also delete the old ReplicaSet resource.

Procedure

  1. List all the triggers by running the following command:

    $ oc get triggers -n <target_namespace>

    Example output

    NAME                                                              BROKER              SUBSCRIBER_URI
    data-index-jobs-a25c8405-f740-47d2-a9a5-f80ccaec2955              example-broker      http://sonataflow-platform-data-index-service.<target_namespace>.svc.cluster.local/jobs
    data-index-process-definition-473e1ddbb3ca1d62768187eb80de99bca   example-broker      http://sonataflow-platform-data-index-service.<target_namespace>.svc.cluster.local/definitions
    data-index-process-error-a25c8405-f740-47d2-a9a5-f80ccaec2955     example-broker      http://sonataflow-platform-data-index-service.<target_namespace>.svc.cluster.local/processes
    data-index-process-instance-mul07f593476e8c14353a337590e0bfd5ae   example-broker      http://sonataflow-platform-data-index-service.<target_namespace>.svc.cluster.local/processes
    data-index-process-node-a25c8405-f740-47d2-a9a5-f80ccaec2955      example-broker      http://sonataflow-platform-data-index-service.<target_namespace>.svc.cluster.local/processes
    data-index-process-state-a25c8405-f740-47d2-a9a5-f80ccaec2955     example-broker      http://sonataflow-platform-data-index-service.<target_namespace>.svc.cluster.local/processes
    data-index-process-variable-487e9a6777fff650e60097c9e17111aea25   example-broker      http://sonataflow-platform-data-index-service.<target_namespace>.svc.cluster.local/processes
    jobs-service-create-job-a25c8405-f740-47d2-a9a5-f80ccaec2955      example-broker      http://sonataflow-platform-jobs-service.<target_namespace>.svc.cluster.local/v2/jobs/events
    jobs-service-delete-job-a25c8405-f740-47d2-a9a5-f80ccaec2955      example-broker      http://sonataflow-platform-jobs-service.<target_namespace>.svc.cluster.local/v2/jobs/events

  2. Based on the generated example output, delete the old data-index-process-definition trigger by running the following command:

    $ oc delete trigger data-index-process-definition-473e1ddbb3ca1d62768187eb80de99bca -n <target_namespace>

    After deletion, a new trigger compatible with OpenShift Serverless Logic 1.36.0 is automatically created.
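
    Optionally, you can confirm that the replacement trigger exists by listing the triggers again and filtering by name; the generated hash suffix differs in every installation:

    $ oc get triggers -n <target_namespace> | grep data-index-process-definition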

  3. Optional: Identify the old ReplicaSet resource by running the following command:

    $ oc get replicasets -o custom-columns=Name:metadata.name,Image:spec.template.spec.containers[*].image -n <target_namespace>

    Example output

    Name                                                Image
    sonataflow-platform-data-index-service-1111111111   registry.redhat.io/openshift-serverless-1/logic-data-index-postgresql-rhel8:1.35.0
    sonataflow-platform-data-index-service-2222222222   registry.redhat.io/openshift-serverless-1/logic-data-index-postgresql-rhel8:1.36.0

  4. Optional: Delete your old ReplicaSet resource by running the following command:

    $ oc delete replicaset <old_replicaset_name> -n <target_namespace>

    Example command based on the example output

    $ oc delete replicaset sonataflow-platform-data-index-service-1111111111 -n <target_namespace>

10.2.3.2. Finalizing the Jobs Service upgrade

After the OpenShift Serverless Logic Operator is upgraded to version 1.36.0, you can optionally delete the old ReplicaSet resource.

Procedure

  1. Identify the old ReplicaSet resource by running the following command:

    $ oc get replicasets -o custom-columns=Name:metadata.name,Image:spec.template.spec.containers[*].image -n <target_namespace>

    Example output

    Name                                                Image
    sonataflow-platform-jobs-service-1111111111         registry.redhat.io/openshift-serverless-1/logic-jobs-service-postgresql-rhel8:1.35.0
    sonataflow-platform-jobs-service-2222222222         registry.redhat.io/openshift-serverless-1/logic-jobs-service-postgresql-rhel8:1.36.0

  2. Delete your old ReplicaSet resource by running the following command:

    $ oc delete replicaset <old_replicaset_name> -n <target_namespace>

    Example command based on the example output

    $ oc delete replicaset sonataflow-platform-jobs-service-1111111111 -n <target_namespace>

10.2.3.3. Redeploying workflows with the dev profile

After the upgrade, you must redeploy workflows that use the dev profile and any associated Kubernetes resources.

Procedure

  1. Ensure that all required Kubernetes resources, including the ConfigMap with the application.properties field, are restored before redeploying the workflow.
  2. Redeploy the workflow by running the following command:

    $ oc apply -f <workflow_name> -n <target_namespace>
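
    For example, assuming the backed-up CR was saved to a file named order-processing-workflow.yaml (an illustrative name):

    Example command

    $ oc apply -f order-processing-workflow.yaml -n <target_namespace>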

10.2.3.4. Redeploying workflows with the preview profile

After the upgrade, you must redeploy workflows that use the preview profile and any associated Kubernetes resources.

Procedure

  1. Ensure that all required Kubernetes resources, including the ConfigMap with the application.properties field, are restored before redeploying the workflow.
  2. Redeploy the workflow by running the following command:

    $ oc apply -f <workflow_name> -n <target_namespace>

10.2.3.5. Scaling up workflows with the gitops profile

To continue operation, you must scale up the gitops profile workflows that you previously scaled down.

Procedure

  1. Modify the my-workflow.yaml custom resource (CR) and scale each workflow up to 1, as shown in the following example:

    spec:
      podTemplate:
        replicas: 1
      # ...
  2. Apply the updated CR by running the following command:

    $ oc apply -f my-workflow.yaml -n <target_namespace>
  3. Alternatively, you can scale the workflow back to 1 by patching the CR directly with the following command:

    $ oc patch workflow <workflow_name> -n <target_namespace> --type=merge -p '{"spec": {"podTemplate": {"replicas": 1}}}'

10.2.4. Verifying the 1.36.0 upgrade

After restoring workflows and services, verify that the upgrade was successful and all components are functioning as expected.

Procedure

  1. Check if all workflows and services are running by entering the following command:

    $ oc get pods -n <target_namespace>

    Ensure that all pods related to workflows, Data Index, and Jobs Service are in a Running or Completed state.
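
    As an optional convenience check, you can list only the pods that are not in the Running phase; note that pods shown as Completed report the Succeeded phase and therefore also appear in this output:

    $ oc get pods -n <target_namespace> --field-selector=status.phase!=Running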

  2. Verify that the OpenShift Serverless Logic Operator is running correctly by entering the following command:

    $ oc get clusterserviceversion logic-operator-rhel8.v1.36.0 -n openshift-serverless-logic

    Example output

    NAME                           DISPLAY                               VERSION   REPLACES                       PHASE
    logic-operator-rhel8.v1.36.0   OpenShift Serverless Logic Operator   1.36.0    logic-operator-rhel8.v1.35.0   Succeeded

  3. Check Operator logs for any errors by entering the following command:

    $ oc logs -l control-plane=sonataflow-operator -n openshift-serverless-logic

Legal Notice

Copyright © 2025 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.