Serverless Logic
Introduction to OpenShift Serverless Logic
Chapter 1. Getting started
1.1. Creating and running workflows with the Knative Workflow plugin
You can create and run the OpenShift Serverless Logic workflows locally.
1.1.1. Creating a workflow
You can use the create command with kn workflow to set up a new OpenShift Serverless Logic project in your current directory.
Prerequisites
- You have installed the OpenShift Serverless Logic kn-workflow CLI plugin.
Procedure
Create a new OpenShift Serverless Logic workflow project by running the following command:
$ kn workflow create
By default, the generated project name is new-project. You can change the project name by using the [-n|--name] flag as follows:
Example command
$ kn workflow create --name my-project
1.1.2. Running a workflow locally
You can use the run command with kn workflow to build and run your OpenShift Serverless Logic workflow project in your current directory.
Prerequisites
- You have installed Podman on your local machine.
- You have installed the OpenShift Serverless Logic kn-workflow CLI plugin.
- You have created an OpenShift Serverless Logic workflow project.
Procedure
Run the following command to build and run your OpenShift Serverless Logic workflow project:
$ kn workflow run
When the project is ready, the Development UI automatically opens in your browser at localhost:8080/q/dev-ui, and you will find the Serverless Workflow Tools tile available. Alternatively, you can access the tool directly at http://localhost:8080/q/dev-ui/org.apache.kie.sonataflow.sonataflow-quarkus-devui/workflows.
You can execute a workflow locally by using a container that runs on your machine. Stop the container with Ctrl+C.
1.2. Deploying workflows
You can deploy OpenShift Serverless Logic workflows on the cluster in two modes: Dev mode and Preview mode.
1.2.1. Deploying workflows in Dev mode
You can deploy your local workflow on OpenShift Container Platform in Dev mode. You can use this deployment to experiment and modify your workflow directly on the cluster, seeing changes almost immediately. Dev mode is designed for development and testing purposes. It is ideal for initial development stages and for testing new changes.
Prerequisites
- You have installed the OpenShift Serverless Logic Operator on your cluster.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have installed the OpenShift CLI (oc).
Procedure
Create the workflow configuration YAML file.
Example workflow-dev.yaml file
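The original example file is not reproduced in this extract. The following is a minimal sketch of such a file, assuming a simple greeting workflow; the name and flow content are illustrative, not from this document. The sonataflow.org/profile: dev annotation is what selects Dev mode.

```yaml
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlow
metadata:
  name: greeting                   # illustrative workflow name
  annotations:
    sonataflow.org/profile: dev    # deploys the workflow in Dev mode
spec:
  flow:
    start: Hello
    states:
      - name: Hello
        type: inject               # injects static data into the workflow state
        data:
          message: Hello from Dev mode
        end: true
```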
To deploy the application, apply the YAML file by entering the following command:
$ oc apply -f <filename> -n <your_namespace>
Verify the deployment and check the status of the deployed workflow by entering the following command:
$ oc get workflow -n <your_namespace> -w
Ensure that your workflow is listed and the status is Running or Completed.
Edit the workflow directly in the cluster by entering the following command:
$ oc edit sonataflow <workflow_name> -n <your_namespace>
After editing, save the changes. The OpenShift Serverless Logic Operator detects the changes and updates the workflow accordingly.
Verification
To ensure the changes are applied correctly, verify the status and logs of the workflow by entering the following commands:
View the status of the workflow by running the following command:
$ oc get sonataflows -n <your_namespace>
View the workflow logs by running the following command:
$ oc logs <workflow_pod_name> -n <your_namespace>
Next steps
After completing the testing, delete the resources to avoid unnecessary usage by running the following command:
$ oc delete sonataflow <workflow_name> -n <your_namespace>
1.2.2. Deploying workflows in Preview mode
You can deploy your local workflow on OpenShift Container Platform in Preview mode. This allows you to experiment and modify your workflow directly on the cluster, seeing changes almost immediately. Preview mode is used for final testing and validation before deploying to production. It also ensures that workflows will run smoothly in a production-like setting.
Prerequisites
- You have installed the OpenShift Serverless Logic Operator on your cluster.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have installed the OpenShift CLI (oc).
To deploy a workflow in Preview mode, the OpenShift Serverless Logic Operator uses the build system on OpenShift Container Platform, which automatically creates the image for deploying your workflow.
The following sections explain how to build and deploy your workflow on a cluster using the OpenShift Serverless Logic Operator with a SonataFlow custom resource.
1.2.2.1. Configuring workflows in Preview mode
1.2.2.1.1. Configuring the workflow base builder image
By default, the OpenShift Serverless Logic Operator uses the image distributed in the official Red Hat registry to build workflows. If your scenario requires strict policies for image usage, such as security or hardening constraints, you can replace the default image used to build the final workflow container image.
To change this image, you edit the SonataFlowPlatform custom resource (CR) in the namespace where you deployed your workflows.
Prerequisites
- You have installed the OpenShift Serverless Logic Operator on your cluster.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have installed the OpenShift CLI (oc).
Procedure
List the SonataFlowPlatform resources in your namespace by running the following command:
$ oc get sonataflowplatform -n <your_namespace>
Replace <your_namespace> with the name of your namespace.
Patch the SonataFlowPlatform resource with the new builder image by running the following command:
$ oc patch sonataflowplatform <name> --patch 'spec:\n  build:\n    config:\n      baseImage: <your_new_image_full_name_with_tag>' -n <your_namespace>
Verification
Verify that the SonataFlowPlatform CR has been patched correctly by running the following command:
$ oc describe sonataflowplatform <name> -n <your_namespace>
Replace <name> with the name of your SonataFlowPlatform resource and <your_namespace> with the name of your namespace.
Ensure that the baseImage field under spec.build.config reflects the new image.
1.2.2.1.2. Customizing the base builder Dockerfile
The OpenShift Serverless Logic Operator uses the logic-operator-rhel8-builder-config config map in the openshift-serverless-logic namespace, where the Operator is installed, to configure and run the workflow build process. You can change the Dockerfile entry in this config map to adjust the Dockerfile to your needs.
Modifying the Dockerfile can break the build process.
This example is for reference only. The actual version might be slightly different. Do not use this example for your installation.
Example logic-operator-rhel8-builder-config config map
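The original example is not reproduced in this extract. The following sketch shows the general shape of the config map, with the Dockerfile build steps elided; the exact keys and image tag vary by release:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: logic-operator-rhel8-builder-config
  namespace: openshift-serverless-logic
data:
  DefaultWorkflowExtension: .sw.json
  Dockerfile: |
    FROM registry.redhat.io/openshift-serverless-1/logic-swf-builder-rhel8:1.33.0 AS builder
    # ... build steps managed by the Operator ...
```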
1.2.2.1.3. Changing resource requirements
You can specify resource requirements for the internal builder pods by creating or editing a SonataFlowPlatform resource in the workflow namespace.
Example SonataFlowPlatform resource
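The original example is not reproduced in this extract. A sketch of a SonataFlowPlatform resource that sets builder pod resource requirements follows; the values are illustrative, and in recent Operator versions the requirements sit under spec.build.template:

```yaml
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: sonataflow-platform
spec:
  build:
    template:
      resources:
        requests:
          memory: "64Mi"
          cpu: "250m"
        limits:
          memory: "128Mi"
          cpu: "500m"
```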
Only one SonataFlowPlatform resource is allowed per namespace. Fetch and edit the resource that the OpenShift Serverless Logic Operator created for you instead of trying to create another resource.
You can fine-tune the resource requirements for a particular workflow. Each workflow instance has a SonataFlowBuild instance created with the same name as the workflow. You can edit the SonataFlowBuild custom resource (CR) and specify the parameters as follows:
Example of SonataFlowBuild CR
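The original example is not reproduced in this extract. A sketch of a SonataFlowBuild CR with per-workflow resource requirements follows; the values are illustrative, and the resource name matches the workflow name:

```yaml
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowBuild
metadata:
  name: my-workflow        # same name as the workflow (illustrative)
spec:
  resources:
    requests:
      memory: "64Mi"
      cpu: "250m"
    limits:
      memory: "128Mi"
      cpu: "500m"
```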
These parameters apply only to new build instances.
1.2.2.1.4. Passing arguments to the internal builder
You can customize the build process by passing build arguments to the SonataFlowBuild instance or setting default build arguments in the SonataFlowPlatform resource.
Prerequisites
- You have installed the OpenShift Serverless Logic Operator on your cluster.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have installed the OpenShift CLI (oc).
Procedure
Check for the existing SonataFlowBuild instance by running the following command:
$ oc get sonataflowbuild <name> -n <namespace>
Replace <name> with the name of your SonataFlowBuild instance and <namespace> with your namespace.
Add build arguments to the SonataFlowBuild instance by running the following command:
$ oc edit sonataflowbuild <name> -n <namespace>
Add the desired build arguments under the .spec.buildArgs field of the SonataFlowBuild instance.
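For example, the edited instance might look like the following sketch; the argument name and value are illustrative:

```yaml
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowBuild
metadata:
  name: my-workflow        # name of the existing SonataFlowBuild instance
spec:
  buildArgs:
    - name: MY_BUILD_ARG   # illustrative build argument
      value: example-value
```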
Save the file and exit.
A new build with the updated configuration starts.
Set the default build arguments in the SonataFlowPlatform resource by running the following command:
$ oc edit sonataflowplatform <name> -n <namespace>
Add the desired build arguments under the .spec.buildArgs field of the SonataFlowPlatform resource.
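A sketch of default build arguments in a SonataFlowPlatform resource follows; the names are illustrative, and depending on the Operator version these defaults may sit under spec.build.template.buildArgs:

```yaml
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: sonataflow-platform   # name of the existing SonataFlowPlatform resource
spec:
  build:
    template:
      buildArgs:
        - name: MY_BUILD_ARG  # illustrative default build argument
          value: example-value
```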
Save the file and exit.
1.2.2.1.5. Setting environment variables in the internal builder
You can set environment variables for the SonataFlowBuild internal builder pod. These variables are valid for the build context only and are not set in the final built workflow image.
Prerequisites
- You have installed the OpenShift Serverless Logic Operator on your cluster.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have installed the OpenShift CLI (oc).
Procedure
Check for an existing SonataFlowBuild instance by running the following command:
$ oc get sonataflowbuild <name> -n <namespace>
Replace <name> with the name of your SonataFlowBuild instance and <namespace> with your namespace.
Edit the SonataFlowBuild instance by running the following command:
$ oc edit sonataflowbuild <name> -n <namespace>
Add the desired environment variables under the .spec.envs field, then save the file and exit.
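For reference, a SonataFlowBuild instance with build-time environment variables might look like this sketch; the names and values are illustrative:

```yaml
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowBuild
metadata:
  name: my-workflow
spec:
  envs:
    - name: MY_ENV_VAR       # available only during the build
      value: example-value
```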
A new build with the updated configuration starts.
Alternatively, you can set the environment variables in the SonataFlowPlatform resource so that every new build instance uses them as a template.
Example SonataFlowPlatform instance
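The original example is not reproduced in this extract. A sketch of the equivalent SonataFlowPlatform configuration follows; illustrative only, and recent Operator versions nest these under spec.build.template.envs:

```yaml
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: sonataflow-platform
spec:
  build:
    template:
      envs:
        - name: MY_ENV_VAR   # applied to every new build instance
          value: example-value
```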
1.2.2.1.6. Changing the base builder image
You can modify the default builder image used by the OpenShift Serverless Logic Operator by editing the logic-operator-rhel8-builder-config config map.
Prerequisites
- You have installed the OpenShift Serverless Logic Operator on your cluster.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have installed the OpenShift CLI (oc).
Procedure
Edit the logic-operator-rhel8-builder-config config map by running the following command:
$ oc edit cm/logic-operator-rhel8-builder-config -n openshift-serverless-logic
In your editor, locate the Dockerfile entry and change the first line to the desired image.
Example
data:
  Dockerfile: |
    FROM registry.redhat.io/openshift-serverless-1/logic-swf-builder-rhel8:1.33.0 # Change the image to the desired one
Save the changes.
1.2.2.2. Building and deploying your workflow
You can create a SonataFlow custom resource (CR) on OpenShift Container Platform, and the OpenShift Serverless Logic Operator builds and deploys the workflow.
Prerequisites
- You have installed the OpenShift Serverless Logic Operator on your cluster.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have installed the OpenShift CLI (oc).
Procedure
Create a workflow YAML file similar to the following:
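The original example is not reproduced in this extract. The following sketch shows the general shape of a SonataFlow workflow CR, assuming a simple greeting flow; the names and data are illustrative:

```yaml
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlow
metadata:
  name: greeting                    # illustrative; matches the buildconfig name below
  annotations:
    sonataflow.org/description: Greeting example workflow
    sonataflow.org/version: "0.0.1"
spec:
  flow:
    start: Greet
    states:
      - name: Greet
        type: inject                # injects static data and ends the flow
        data:
          greeting: Hello World
        end: true
```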
Apply the SonataFlow workflow definition to your OpenShift Container Platform namespace by running the following command:
$ oc apply -f <workflow-name>.yaml -n <your_namespace>
Example command for the greetings-workflow.yaml file:
$ oc apply -f greetings-workflow.yaml -n workflows
List all the build configurations by running the following command:
$ oc get buildconfigs -n workflows
Get the logs of the build process by running the following command:
$ oc logs buildconfig/<workflow-name> -n <your_namespace>
Example command for the greetings-workflow.yaml file:
$ oc logs buildconfig/greeting -n workflows
Verification
To verify the deployment, list all the pods by running the following command:
$ oc get pods -n <your_namespace>
Ensure that the pod corresponding to your workflow is running.
Check the running pods and their logs by running the following command:
$ oc logs pod/<pod-name> -n workflows
1.2.2.3. Verifying workflow deployment
You can verify that your OpenShift Serverless Logic workflow is running by performing a test HTTP call from the workflow pod.
Prerequisites
- You have installed the OpenShift Serverless Logic Operator on your cluster.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have installed the OpenShift CLI (oc).
Procedure
Create and deploy a workflow YAML file, as described in "Building and deploying your workflow".
Create a route for the workflow service by running the following command:
$ oc expose svc/<workflow-service-name> -n workflows
This command creates a public URL to access the workflow service.
Set an environment variable for the public URL by running the following command:
$ WORKFLOW_SVC=$(oc get route/<workflow-service-name> -n <namespace> --template='{{.spec.host}}')
Make an HTTP call to the workflow by sending a POST request to the service with the following command:
$ curl -X POST -H 'Content-Type: application/json' -H 'Accept: application/json' -d '<your_json_payload>' http://$WORKFLOW_SVC/<endpoint>
Example output
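The original example output is not reproduced in this extract. As a hypothetical sketch, a running workflow typically replies with a workflow instance id and the resulting workflow data; the id and payload below are invented for illustration:

```json
{
  "id": "6e1d2d8b-2f3a-4c4d-9c26-0b1f4e4f1a11",
  "workflowdata": {
    "greeting": "Hello World"
  }
}
```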
This output shows an example of the expected response if the workflow is running.
1.2.2.4. Restarting a build
To restart a build, you can add or edit the sonataflow.org/restartBuild: true annotation in the SonataFlowBuild instance. Restarting a build is necessary if there is a problem with your workflow or the initial build revision.
Prerequisites
- You have installed the OpenShift Serverless Logic Operator on your cluster.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have installed the OpenShift CLI (oc).
Procedure
Check if the SonataFlowBuild instance exists by running the following command:
$ oc get sonataflowbuild <name> -n <namespace>
Edit the SonataFlowBuild instance by running the following command:
$ oc edit sonataflowbuild/<name> -n <namespace>
Replace <name> with the name of your SonataFlowBuild instance and <namespace> with the namespace where your workflow is deployed.
Add the sonataflow.org/restartBuild: true annotation to restart the build.
This action triggers the OpenShift Serverless Logic Operator to start a new build of the workflow.
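For example, the annotated metadata might look like the following sketch; the instance name is illustrative:

```yaml
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowBuild
metadata:
  name: my-workflow
  annotations:
    sonataflow.org/restartBuild: "true"   # triggers a new build
```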
To monitor the build process, check the build logs by running the following command:
$ oc logs buildconfig/<name> -n <namespace>
Replace <name> with the name of your SonataFlowBuild instance and <namespace> with the namespace where your workflow is deployed.
1.2.3. Editing a workflow
When the OpenShift Serverless Logic Operator deploys a workflow service, it creates two config maps to store runtime properties:
- User properties: Defined in a ConfigMap named after the SonataFlow object with the suffix -props. For example, if your workflow name is greeting, then the ConfigMap name is greeting-props.
- Managed properties: Defined in a ConfigMap named after the SonataFlow object with the suffix -managed-props. For example, if your workflow name is greeting, then the ConfigMap name is greeting-managed-props.
Managed properties always override any user property with the same key name and cannot be edited by the user. Any change would be overwritten by the Operator at the next reconciliation cycle.
Prerequisites
- You have installed the OpenShift Serverless Logic Operator on your cluster.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have installed the OpenShift CLI (oc).
Procedure
Open and edit the ConfigMap by running the following command:
$ oc edit cm <workflow_name>-props -n <namespace>
Replace <workflow_name> with the name of your workflow and <namespace> with the namespace where your workflow is deployed.
Add the properties in the application.properties section.
Example of workflow properties stored within a ConfigMap:
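The original example is not reproduced in this extract. A sketch of such a ConfigMap, assuming a workflow named greeting, follows; the property key and value are illustrative:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: greeting-props        # <workflow_name>-props
data:
  application.properties: |
    my.example.property=example-value
```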
Ensure the properties are correctly formatted to prevent the Operator from replacing your configuration with the default one.
- After making the necessary changes, save the file and exit the editor.
1.2.4. Testing a workflow
To verify that your OpenShift Serverless Logic workflow is running correctly, you can perform a test HTTP call from the relevant pod.
Prerequisites
- You have installed the OpenShift Serverless Logic Operator on your cluster.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have installed the OpenShift CLI (oc).
Procedure
Create a route for the specified service in your namespace by running the following command:
$ oc expose svc <service_name> -n <namespace>
Fetch the URL for the newly exposed service by running the following command:
$ WORKFLOW_SVC=$(oc get route/<service_name> --template='{{.spec.host}}')
Perform a test HTTP call and send a POST request by running the following command:
$ curl -X POST -H 'Content-Type:application/json' -H 'Accept:application/json' -d '<request_body>' http://$WORKFLOW_SVC/<endpoint>
Verify the response to ensure the workflow is functioning as expected.
1.2.5. Troubleshooting a workflow
The OpenShift Serverless Logic Operator deploys its pod with health check probes to ensure that the workflow runs in a healthy state. If changes cause these health checks to fail, the pod stops responding.
Prerequisites
- You have installed the OpenShift Serverless Logic Operator on your cluster.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have installed the OpenShift CLI (oc).
Procedure
Check the workflow status by running the following command:
$ oc get workflow <name> -o jsonpath={.status.conditions} | jq .
To fetch and analyze the logs from the workflow’s deployment, run the following command:
$ oc logs deployment/<workflow_name> -f
1.2.6. Deleting a workflow
You can use the oc delete command to delete your OpenShift Serverless Logic workflow from the specified namespace.
Prerequisites
- You have installed the OpenShift Serverless Logic Operator on your cluster.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have installed the OpenShift CLI (oc).
Procedure
- Verify that you have the correct file that defines the workflow you want to delete, for example workflow.yaml.
- Run the oc delete command to remove the workflow from your specified namespace:
$ oc delete -f <your_file> -n <your_namespace>
Replace <your_file> with the name of your workflow file and <your_namespace> with your namespace.
Chapter 2. Managing services
2.1. Configuring OpenAPI services
The OpenAPI Specification (OAS) defines a standard, programming language-agnostic interface for HTTP APIs. You can understand a service’s capabilities without access to the source code, additional documentation, or network traffic inspection. When you define a service by using the OpenAPI Specification, you can understand and interact with it by using minimal implementation logic. Just as interface descriptions simplify lower-level programming, the OpenAPI Specification eliminates guesswork in calling a service.
2.1.1. OpenAPI function definition
OpenShift Serverless Logic allows workflows to interact with remote services by using an OpenAPI specification reference in a function.
Example OpenAPI function definition
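The original example is not reproduced in this extract. A sketch of such a function definition, assuming a specification file specs/multiplication.yaml that declares an operation with operationId doOperation, looks like this:

```json
{
  "functions": [
    {
      "name": "multiplication",
      "operation": "specs/multiplication.yaml#doOperation"
    }
  ]
}
```

The operation string combines the URI of the specification file and the operation identifier, separated by #.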
The operation attribute is a string composed of the following parameters:
- URI: The engine uses this to locate the specification file, such as classpath.
- Operation identifier: You can find this identifier in the OpenAPI specification file.
OpenShift Serverless Logic supports the following URI schemes:
- classpath: Use this for files located in the src/main/resources folder of the application project. classpath is the default URI scheme. If you do not define a URI scheme, the file location is src/main/resources/myopenapifile.yaml.
- file: Use this for files located in the file system.
- http or https: Use these for remotely located files.
Ensure the OpenAPI specification files are available during build time. OpenShift Serverless Logic uses an internal code generation feature to send requests at runtime. After you build the application image, OpenShift Serverless Logic will not have access to these files.
If the OpenAPI service you want to add to the workflow does not have a specification file, you can either create one or update the service to generate and expose the file.
2.1.2. Sending REST requests based on the OpenAPI specification
To send REST requests that are based on the OpenAPI specification files, you must perform the following procedures:
- Define the function references
- Access the defined functions in the workflow states
Prerequisites
- You have installed the OpenShift Serverless Logic Operator on your cluster.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have access to the OpenAPI specification files.
Procedure
To define the OpenAPI functions:
- Identify and access the OpenAPI specification files for the services you intend to invoke.
Copy the OpenAPI specification files into your workflow service directory, such as <project_application_dir>/specs.
The following example shows the OpenAPI specification for the multiplication REST service:
Example multiplication REST service OpenAPI specification
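The original specification is not reproduced in this extract. A sketch of what such a specification might contain follows; the path, parameter names, and schemas are illustrative:

```yaml
openapi: 3.0.3
info:
  title: Multiplication service
  version: "1.0"
paths:
  /:
    post:
      operationId: doOperation     # referenced from the workflow function definition
      requestBody:
        content:
          application/json:
            schema:
              type: object
              properties:
                leftElement:
                  type: number
                rightElement:
                  type: number
      responses:
        "200":
          description: The multiplication result
```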
To define functions in the workflow, use the operationId from the OpenAPI specification to reference the desired operations in your function definitions.
Example function definitions in the temperature conversion application
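The original example is not reproduced in this extract. A sketch of such function definitions, assuming multiplication and subtraction services with specifications stored in the specs directory, looks like this:

```json
{
  "functions": [
    {
      "name": "multiplication",
      "operation": "specs/multiplication.yaml#doOperation"
    },
    {
      "name": "subtraction",
      "operation": "specs/subtraction.yaml#doOperation"
    }
  ]
}
```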
Ensure that your function definitions reference the correct paths to the OpenAPI files stored in the <project_application_dir>/specs directory.
To access the defined functions in the workflow states:
- Define workflow actions to call the function definitions you added. Ensure each action references a function defined earlier.
Use the functionRef attribute to refer to the specific function by its name. Map the arguments in the functionRef by using the parameters defined in the OpenAPI specification.
The following example shows how to map function arguments in a workflow:
Example for mapping function arguments in workflow
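The original example is not reproduced in this extract. A sketch of a workflow action that maps arguments with jq expressions follows; the state, action, and argument names are illustrative:

```json
{
  "name": "InvokeMultiplication",
  "type": "operation",
  "actions": [
    {
      "name": "multiply",
      "functionRef": {
        "refName": "multiplication",
        "arguments": {
          "leftElement": ".leftElement",
          "rightElement": ".rightElement"
        }
      }
    }
  ],
  "end": true
}
```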
- Check the Operation Object section of the OpenAPI specification to understand how to structure parameters in the request.
- Use jq expressions to extract data from the payload and map it to the required parameters. Ensure the engine maps parameter names according to the OpenAPI specification. For operations requiring parameters in the request path instead of the body, refer to the parameter definitions in the OpenAPI specification.
For more information about mapping parameters in the request path instead of the request body, see the following PetStore API example:
Example for mapping path parameters
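The original example is not reproduced in this extract. A sketch of the relevant fragment of the PetStore OpenAPI specification, where petId is declared as a path parameter, looks like this:

```yaml
paths:
  /pet/{petId}:
    get:
      operationId: getPetById
      parameters:
        - name: petId
          in: path          # the parameter travels in the request path, not the body
          required: true
          schema:
            type: integer
            format: int64
```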
The following is an example invocation of a function, in which only one parameter named petId is added in the request path:
Example of calling the PetStore function
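A sketch of such an invocation, assuming a function definition named getPetById that references the PetStore specification:

```json
{
  "functionRef": {
    "refName": "getPetById",
    "arguments": {
      "petId": ".petId"
    }
  }
}
```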
2.1.3. Configuring the endpoint URL of OpenAPI services
After accessing the function definitions in workflow states, you can configure the endpoint URL of OpenAPI services.
Prerequisites
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have created your OpenShift Serverless Logic project.
- You have access to the OpenAPI specification files.
- You have defined the function definitions in the workflow.
- You have access to the defined functions in the workflow states.
Procedure
- Locate the OpenAPI specification file that you want to configure, for example subtraction.yaml.
- Convert the file name into a valid configuration key by replacing special characters, such as ., with underscores and converting letters to lowercase. For example, change subtraction.yaml to subtraction_yaml.
To define the configuration key, use the converted file name as the REST client configuration key. Set this key as an environment variable, as shown in the following example:
quarkus.rest-client.subtraction_yaml.url=http://myserver.com
To prevent hardcoding URLs in the application.properties file, use environment variable substitution, as shown in the following example:
quarkus.rest-client.subtraction_yaml.url=${SUBTRACTION_URL:http://myserver.com}
In this example:
- Configuration key: quarkus.rest-client.subtraction_yaml.url
- Environment variable: SUBTRACTION_URL
- Fallback URL: http://myserver.com
- Ensure that the SUBTRACTION_URL environment variable is set in your system or deployment environment. If the variable is not found, the application uses the fallback URL http://myserver.com.
- Add the configuration key and URL substitution to the application.properties file:

quarkus.rest-client.subtraction_yaml.url=${SUBTRACTION_URL:http://myserver.com}

- Deploy or restart your application to apply the new configuration settings.
2.2. Configuring OpenAPI services endpoints
OpenShift Serverless Logic uses the kogito.sw.operationIdStrategy property to generate the REST client for invoking services defined in OpenAPI documents. This property determines how the configuration key is derived for the REST client configuration.
The kogito.sw.operationIdStrategy property supports the following values: FILE_NAME, FULL_URI, FUNCTION_NAME, and SPEC_TITLE.
FILE_NAME
OpenShift Serverless Logic uses the OpenAPI document file name to create the configuration key. The key is based on the file name, where special characters are replaced with underscores.
Example configuration:
quarkus.rest-client.stock_portfolio_svc_yaml.url=http://localhost:8282/ 1

- 1
- The OpenAPI file path is <project_application_dir>/specs/stock-portfolio-svc.yaml. The generated key that configures the URL for the REST client is stock_portfolio_svc_yaml.
FULL_URI
OpenShift Serverless Logic uses the complete URI path of the OpenAPI document as the configuration key. The full URI is sanitized to form the key.

Example for Serverless Workflow

Example configuration:
quarkus.rest-client.apicatalog_apis_123_document.url=http://localhost:8282/ 1

- 1
- The URI path is https://my.remote.host/apicatalog/apis/123/document. The generated key that configures the URL for the REST client is apicatalog_apis_123_document.
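The Serverless Workflow listing for this strategy is not preserved here. As an illustrative sketch (the function and operation names are assumptions), a function that references the OpenAPI document by its full URI might look like:

```json
{
  "functions": [
    {
      "name": "myfunction",
      "operation": "https://my.remote.host/apicatalog/apis/123/document#myoperation"
    }
  ]
}
```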
FUNCTION_NAME
OpenShift Serverless Logic combines the workflow ID and the function name referencing the OpenAPI document to generate the configuration key.

Example for Serverless Workflow

Example configuration:
quarkus.rest-client.myworkflow_myfunction.url=http://localhost:8282/ 1

- 1
- The workflow ID is myworkflow. The function name is myfunction. The generated key that configures the URL for the REST client is myworkflow_myfunction.
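The workflow listing for this strategy is not preserved here. The following sketch (the OpenAPI file path and operation ID are assumptions) shows the workflow ID and function name that produce the key myworkflow_myfunction:

```json
{
  "id": "myworkflow",
  "functions": [
    {
      "name": "myfunction",
      "operation": "specs/myopenapi.yaml#myoperation"
    }
  ]
}
```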
SPEC_TITLE
OpenShift Serverless Logic uses the info.title value from the OpenAPI document to create the configuration key. The title is sanitized to form the key.

Example for OpenAPI document

Example configuration:
quarkus.rest-client.stock-service_API.url=http://localhost:8282/ 1

- 1
- The OpenAPI document title is stock-service API. The generated key that configures the URL for the REST client is stock-service_API.
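The OpenAPI document listing is not preserved here. A minimal fragment showing the info.title value that produces the key stock-service_API might look like (the version and empty paths object are illustrative):

```yaml
openapi: 3.0.3
info:
  title: stock-service API
  version: 1.0.0
paths: {}
```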
2.2.1. Using URI alias
As an alternative to the kogito.sw.operationIdStrategy property, you can assign an alias to a URI by using the workflow-uri-definitions custom extension. This alias simplifies the configuration process and can be used as a configuration key in REST client settings and function definitions.
The workflow-uri-definitions extension allows you to map a URI to an alias, which you can reference throughout the workflow and in your configuration files. This approach provides a centralized way to manage URIs and their configurations.
Prerequisites
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have access to the OpenAPI specification files.
Procedure
Add the workflow-uri-definitions extension to your workflow. Within this extension, create aliases for your URIs.

Example workflow
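The example workflow listing is not preserved here. The following sketch illustrates the shape of the workflow-uri-definitions extension; the exact extension schema may differ in your version, so verify it against the SonataFlow documentation:

```json
{
  "id": "myworkflow",
  "extensions": [
    {
      "extensionid": "workflow-uri-definitions",
      "definitions": {
        "remoteCatalog": "https://my.remote.host/apicatalog/apis/123/document"
      }
    }
  ],
  "functions": [
    { "name": "operation1", "operation": "remoteCatalog#operation1" },
    { "name": "operation2", "operation": "remoteCatalog#operation2" }
  ]
}
```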
- 1
- Set the extension ID to workflow-uri-definitions.
- 2
- Set the alias definition by mapping the remoteCatalog alias to a URI, for example, the https://my.remote.host/apicatalog/apis/123/document URI.
- 3
- Set the function operations by using the remoteCatalog alias with the operation identifiers, for example, the operation1 and operation2 operation identifiers.

In the application.properties file, configure the REST client by using the alias defined in the workflow.

Example property
quarkus.rest-client.remoteCatalog.url=http://localhost:8282/

In the previous example, the configuration key is set to quarkus.rest-client.remoteCatalog.url, and the URL is set to http://localhost:8282/, which the REST clients use by referring to the remoteCatalog alias.

In your workflow, use the alias when defining functions that operate on the URI.
Example workflow (continued)
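The continued listing is not preserved here. A sketch of function definitions that reference the alias (the operation identifiers are illustrative) might look like:

```json
{
  "functions": [
    { "name": "operation1", "operation": "remoteCatalog#operation1" },
    { "name": "operation2", "operation": "remoteCatalog#operation2" }
  ]
}
```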
2.3. Troubleshooting services
Efficient troubleshooting of the HTTP-based function invocations, such as those using OpenAPI functions, is crucial for maintaining workflow orchestrations.
To diagnose issues, you can trace HTTP requests and responses.
2.3.1. Tracing HTTP requests and responses
OpenShift Serverless Logic uses the Apache HTTP client to trace HTTP requests and responses.
Prerequisites
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have access to the OpenAPI specification files.
- You have access to the workflow definition and instance IDs for correlating HTTP requests and responses.
- You have access to the log configuration of the application where the HTTP service invocations are occurring.
Procedure
- To trace HTTP requests and responses, turn on debug logging for the Apache HTTP client by adding the following configuration to your application's application.properties file:

# Turning HTTP tracing on
quarkus.log.category."org.apache.http".level=DEBUG

- Restart your application to propagate the log configuration changes.
After restarting, check the logs for HTTP request traces.

Example logs of a traced HTTP request

Check the logs for HTTP response traces following the request logs.

Example logs of a traced HTTP response
Chapter 3. Supporting services
3.1. Job service
The Job service schedules and executes tasks in a cloud environment. Independent services implement these tasks, which can be initiated through any of the supported interaction modes, including HTTP calls or Knative Events delivery.
In OpenShift Serverless Logic, the Job service is responsible for controlling the execution of time-triggered actions. Therefore, all the time-based states that you can use in a workflow are handled by the interaction between the workflow and the Job service.
For example, every time the workflow execution reaches a state with a configured timeout, a corresponding job is created in the Job service, and when the timeout is met, an HTTP callback is executed to notify the workflow.
The main goal of the Job service is to manage active jobs, such as scheduled jobs that need to be executed. When a job reaches its final state, the Job service removes it. To retain job information in a permanent repository, the Job service produces status change events that can be recorded by an external service, such as the Data Index service.
You do not need to manually install or configure the Job service if you are using the OpenShift Serverless Operator to deploy workflows. The Operator handles these tasks automatically and manages all necessary configurations for each workflow to connect with it.
3.1.1. Job service leader election process
The Job service operates as a singleton service, meaning only one active instance can schedule and execute jobs.
To prevent conflicts when the service is deployed in the cloud, where multiple instances might be running, the Job service supports a leader election process. Only the instance that is elected as the leader manages external communication to receive and schedule jobs.
Non-leader instances remain inactive in a standby state but continue attempting to become the leader through the election process. When a new instance starts, it does not immediately assume leadership. Instead, it enters the leader election process to determine if it can take over the leader role.
If the current leader becomes unresponsive or if it is shut down, another running instance takes over as the leader.
This leader election mechanism uses the underlying persistence backend, which is currently supported only in the PostgreSQL implementation.
3.2. Data Index service
The Data Index service is a dedicated supporting service that stores the data related to the workflow instances and their associated jobs. This service provides a GraphQL endpoint allowing users to query that data.
The Data Index service processes data received through events, which can originate from any workflow or directly from the Job service.
Data Index supports Apache Kafka or Knative Eventing to consume CloudEvents messages from workflows. It indexes and stores this event data in a database, making it accessible through GraphQL. These events provide detailed information about the workflow execution. The Data Index service is central to OpenShift Serverless Logic search, insights, and management capabilities.
The key features of the Data Index service are as follows:
- A flexible data structure
- A distributable, cloud-ready format
- Message-based communication with workflows via Apache Kafka, Knative, and CloudEvents
- A powerful GraphQL-based querying API
When you are using the OpenShift Serverless Operator to deploy workflows, you do not need to manually install or configure the Data Index service. The Operator automatically manages all the necessary configurations for each workflow to connect with it.
3.2.1. GraphQL queries for workflow instances and jobs
To retrieve data about workflow instances and jobs, you can use GraphQL queries.
3.2.1.1. Retrieve data from workflow instances
You can retrieve information about a specific workflow instance by using the following query example:
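The query listing is not preserved here. The following sketch is based on the Data Index GraphQL schema; the instance ID and the selected fields are illustrative, so verify them against the schema exposed by your Data Index deployment:

```graphql
query {
  ProcessInstances(where: { id: { equal: "example-instance-id" } }) {
    id
    processId
    processName
    state
    start
    end
    nodes {
      name
      type
      enter
      exit
    }
  }
}
```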
3.2.1.2. Retrieve data from jobs
You can retrieve data from a specific job instance by using the following query example:
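The query listing is not preserved here. A sketch based on the Data Index GraphQL schema (the job ID and selected fields are illustrative) might look like:

```graphql
query {
  Jobs(where: { id: { equal: "example-job-id" } }) {
    id
    status
    processInstanceId
    expirationTime
    retries
    lastUpdate
  }
}
```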
3.2.1.3. Filter query results by using the where parameter
You can filter query results by using the where parameter, allowing multiple combinations based on workflow attributes.
Example query to filter by state
Example query to filter by ID
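The query listings for the two examples above are not preserved here. The following sketches (state value and instance ID are illustrative) show typical filters against the Data Index GraphQL schema:

```graphql
# Filter by state
query {
  ProcessInstances(where: { state: { equal: ACTIVE } }) {
    id
    processId
  }
}

# Filter by ID
query {
  ProcessInstances(where: { id: { equal: "example-instance-id" } }) {
    id
    processId
  }
}
```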
By default, filters are combined by using the AND operator. You can modify this behavior by combining filters with the AND or OR operators.

Example query to combine filters with the OR operator

Example query to combine filters with the AND and OR operators
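The query listings for combining filters are not preserved here. The following sketches use the or and and grouping keys of the Data Index GraphQL schema; the attribute values are illustrative:

```graphql
# Combine filters with the OR operator
query {
  ProcessInstances(
    where: { or: { state: { equal: ACTIVE }, rootProcessId: { isNull: false } } }
  ) {
    id
    processId
  }
}

# Combine filters with the AND and OR operators
query {
  ProcessInstances(
    where: {
      and: {
        processId: { equal: "example-process" }
        or: { state: { equal: ACTIVE }, rootProcessId: { isNull: false } }
      }
    }
  ) {
    id
    processId
  }
}
```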
Depending on the attribute type, you can use the following available operators:

| Attribute type | Available operators |
|---|---|
| String array | contains, containsAll, containsAny, isNull |
| String | in, like, isNull, equal |
| ID | in, equal, isNull |
| Boolean | isNull, equal |
| Numeric | in, isNull, equal, greaterThan, greaterThanEqual, lessThan, lessThanEqual, between |
| Date | isNull, equal, greaterThan, greaterThanEqual, lessThan, lessThanEqual, between |
3.2.1.4. Sort query results by using the orderBy parameter
You can sort query results based on workflow attributes by using the orderBy parameter. You can also specify the sorting direction in ascending (ASC) or descending (DESC) order. Multiple attributes are applied in the order you specify them.
Example query to sort by the start time in an ASC order
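The query listing is not preserved here. A sketch that sorts by start time in ascending order might look like:

```graphql
query {
  ProcessInstances(orderBy: { start: ASC }) {
    id
    processId
    start
  }
}
```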
3.2.1.5. Limit the number of results by using the pagination parameter
You can control the number of returned results and specify an offset by using the pagination parameter.
Example query to limit results to 10, starting from offset 0
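The query listing is not preserved here. A sketch that limits results to 10, starting from offset 0, might look like:

```graphql
query {
  ProcessInstances(pagination: { limit: 10, offset: 0 }) {
    id
    processId
  }
}
```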
3.3. Managing supporting services
This section provides an overview of the supporting services essential for OpenShift Serverless Logic. It focuses on configuring and deploying the Data Index and Job Service supporting services by using the OpenShift Serverless Logic Operator.
In a typical OpenShift Serverless Logic installation, you must deploy both services to ensure successful workflow execution. The Data Index service allows for efficient data management, while the Job Service ensures reliable job handling.
3.3.1. Supporting services and workflow integration
When you deploy a supporting service in a given namespace, you can choose between an enabled or disabled deployment. An enabled deployment signals the OpenShift Serverless Logic Operator to automatically intercept workflow deployments using the preview or gitops profile within the namespace and configure them to connect with the service.
For example, when the Data Index service is enabled, workflows are automatically configured to send status change events to it. Similarly, enabling the Job Service ensures that a job is created whenever a workflow requires a timeout. The OpenShift Serverless Logic Operator also configures the Job Service to send events to the Data Index service, facilitating seamless integration between the services.
The OpenShift Serverless Logic Operator not only deploys supporting services but also manages other necessary configurations to ensure successful workflow execution. All these configurations are handled automatically. You only need to provide the supporting services configuration in the SonataFlowPlatform CR.
Deploying only one of the supporting services or using a disabled deployment are advanced use cases. In a standard installation, you must enable both services to ensure smooth workflow execution.
3.3.2. Supporting services deployment with the SonataFlowPlatform CR
To deploy supporting services, configure the dataIndex and jobService subfields within the spec.services section of the SonataFlowPlatform custom resource (CR). This configuration instructs the OpenShift Serverless Logic Operator to deploy each service when the SonataFlowPlatform CR is applied.
Each configuration of a service is handled independently, allowing you to customize these settings alongside other configurations in the SonataFlowPlatform CR.
See the following scaffold example configuration for deploying supporting services:
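The scaffold listing is not preserved here. The following sketch assumes the sonataflow.org/v1alpha08 API version and illustrative metadata names; the numbered comments correspond to the callouts that follow:

```yaml
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: sonataflow-platform
  namespace: example-namespace
spec:
  services:
    dataIndex:        # 1
      enabled: true   # 2
    jobService:       # 3
      enabled: true   # 4
```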
- 1
- Data Index service configuration field.
- 2
- Setting enabled: true deploys the Data Index service. If set to false or omitted, the deployment is disabled. The default value is false.
- 3
- Job Service configuration field.
- 4
- Setting enabled: true deploys the Job Service. If set to false or omitted, the deployment is disabled. The default value is false.
3.3.3. Supporting services scope
The SonataFlowPlatform custom resource (CR) enables the deployment of supporting services within a specific namespace. This means all automatically configured supporting services and workflow communications are restricted to the namespace of the deployed platform.
This feature is particularly useful when separate instances of supporting services are required for different sets of workflows. For example, you can deploy an application in isolation with its workflows and supporting services, ensuring they remain independent from other deployments.
3.3.4. Supporting services persistence configurations
The persistence configuration for supporting services in OpenShift Serverless Logic can be either ephemeral or PostgreSQL, depending on the needs of your environment. Ephemeral persistence is ideal for development and testing, while PostgreSQL persistence is recommended for production environments.
3.3.4.1. Ephemeral persistence configuration
The ephemeral persistence uses an embedded PostgreSQL database that is dedicated to each service. The OpenShift Serverless Logic Operator recreates this database with every service restart, making it suitable only for development and testing purposes. You do not need any additional configuration other than the following SonataFlowPlatform CR:
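The CR listing is not preserved here. A minimal sketch of such a CR (the API version and metadata names are illustrative assumptions) could be:

```yaml
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: sonataflow-platform
  namespace: example-namespace
spec:
  services:
    # With no persistence field, each service uses its embedded, ephemeral PostgreSQL database
    dataIndex:
      enabled: true
    jobService:
      enabled: true
```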
3.3.4.2. PostgreSQL persistence configuration
For PostgreSQL persistence, you must set up a PostgreSQL server instance on your cluster. The administration of this instance remains independent of the OpenShift Serverless Logic Operator control. To connect a supporting service with the PostgreSQL server, you must configure the appropriate database connection parameters.
You can configure PostgreSQL persistence in the SonataFlowPlatform CR by using the following example:
Example of PostgreSQL persistence configuration
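The example listing is not preserved here. The following sketch assumes the sonataflow.org/v1alpha08 API version and illustrative service, database, and secret names; the numbered comments correspond to the callouts that follow:

```yaml
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: sonataflow-platform
spec:
  services:
    dataIndex:
      enabled: true
      persistence:
        postgresql:
          serviceRef:
            name: postgres-example              # 1
            namespace: postgres-example-ns      # 2
            databaseName: example-database      # 3
            databaseSchema: example-schema      # 4
            port: 5432                          # 5
          secretRef:
            name: postgres-secrets-example      # 6
            userKey: POSTGRESQL_USER            # 7
            passwordKey: POSTGRESQL_PASSWORD    # 8
    jobService:
      enabled: true
      persistence:
        postgresql:
          serviceRef:
            name: postgres-example
            databaseName: example-database
          secretRef:
            name: postgres-secrets-example
            userKey: POSTGRESQL_USER
            passwordKey: POSTGRESQL_PASSWORD
```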
- 1
- Name of the service to connect with the PostgreSQL database server.
- 2
- Optional: Defines the namespace of the PostgreSQL Service. Defaults to the SonataFlowPlatform namespace.
- 3
- Defines the name of the PostgreSQL database for storing supporting service data.
- 4
- Optional: Specifies the schema for storing supporting service data. Default value is the SonataFlowPlatform name, suffixed with -data-index-service or -jobs-service. For example, sonataflow-platform-example-data-index-service.
- 5
- Optional: Port number to connect with the PostgreSQL service. Default value is 5432.
- 6
- Defines the name of the secret containing the username and password for database access.
- 7
- Defines the name of the key in the secret that contains the username to connect with the database.
- 8
- Defines the name of the key in the secret that contains the password to connect with the database.
You can configure each service’s persistence independently by using the respective persistence field.
Create the secrets to access PostgreSQL by running the following command:
$ oc create secret generic <postgresql_secret_name> \
--from-literal=POSTGRESQL_USER=<user> \
--from-literal=POSTGRESQL_PASSWORD=<password> \
-n <namespace>
3.3.4.3. Common PostgreSQL persistence configuration
The OpenShift Serverless Logic Operator automatically connects supporting services to the common PostgreSQL server configured in the spec.persistence field.
The following precedence rules apply:

- If you configure a specific persistence for a supporting service, for example, services.dataIndex.persistence, the service uses that configuration.
- If you do not configure persistence for a service, the system uses the common persistence configuration from the current platform.
When using a common PostgreSQL configuration, each service schema is automatically set as the SonataFlowPlatform name, suffixed with -data-index-service or -jobs-service, for example, sonataflow-platform-example-data-index-service.
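A sketch of a common persistence configuration in the spec.persistence field (the API version and names are illustrative assumptions) might look like:

```yaml
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: sonataflow-platform
spec:
  persistence:
    postgresql:
      serviceRef:
        name: postgres-example
        databaseName: example-database
      secretRef:
        name: postgres-secrets-example
        userKey: POSTGRESQL_USER
        passwordKey: POSTGRESQL_PASSWORD
  services:
    dataIndex:
      enabled: true
    jobService:
      enabled: true
```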
3.3.5. Advanced supporting services configurations
In scenarios where you must apply advanced configurations for supporting services, use the podTemplate field in the SonataFlowPlatform custom resource (CR). This field allows you to customize the service pod deployment by specifying configurations like the number of replicas, environment variables, container images, and initialization options.
You can configure advanced settings for the service by using the following example:
Advanced configurations example for the Data Index service
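The example listing is not preserved here. The following sketch illustrates the podTemplate field for the Data Index service; the environment variable name and container image are illustrative assumptions, and the numbered comments correspond to the callouts that follow:

```yaml
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: sonataflow-platform
spec:
  services:
    dataIndex:
      enabled: true
      podTemplate:
        replicas: 1                                        # 1
        container:                                         # 2
          env:                                             # 3
            - name: ANY_ADVANCED_PROPERTY                  # illustrative name
              value: any-value
          image: quay.io/example/custom-data-index:latest  # 4 (illustrative image)
        initContainers: []                                 # 5
```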
You can set the services field to either dataIndex or jobService, depending on your requirements. The rest of the configuration remains the same.
- 1
- Defines the number of replicas. Default value is 1. In the case of jobService, this value is always overridden to 1 because it operates as a singleton service.
- 2
- Holds specific configurations for the container running the service.
- 3
- Allows you to fine-tune service properties by specifying environment variables.
- 4
- Configures the container image for the service, useful if you need to update or customize the image.
- 5
- Configures init containers for the pod, useful for setting up prerequisites before the main container starts.
The podTemplate field provides flexibility for tailoring the deployment of each supporting service. It follows the standard PodSpec API, meaning the same API validation rules apply to these fields.
3.3.6. Cluster scoped supporting services
You can define a cluster-wide set of supporting services that can be consumed by workflows across different namespaces, by using the SonataFlowClusterPlatform custom resource (CR). By referencing an existing namespace-specific SonataFlowPlatform CR, you can extend the use of these services cluster-wide.
You can use the following example of a basic configuration that enables workflows deployed in any namespace to utilize supporting services deployed in a specific namespace, such as example-namespace:
Example of a SonataFlowClusterPlatform CR
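The CR listing is not preserved here. A sketch based on the SonataFlowClusterPlatform API (the API version and metadata names are illustrative assumptions) might look like:

```yaml
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowClusterPlatform
metadata:
  name: cluster-platform
spec:
  platformRef:
    name: sonataflow-platform
    namespace: example-namespace
```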
You can override these cluster-wide services within any namespace by configuring that namespace in SonataFlowPlatform.spec.services.