Serverless Logic
Introduction to OpenShift Serverless Logic
Abstract
Chapter 1. Getting started
1.1. Creating and running workflows with the Knative Workflow plugin
You can create and run OpenShift Serverless Logic workflows locally.
1.1.1. Creating a workflow
You can use the create command with kn workflow to set up a new OpenShift Serverless Logic project in your current directory.
Prerequisites
- You have installed the OpenShift Serverless Logic kn-workflow CLI plugin.
Procedure
Create a new OpenShift Serverless Logic workflow project by running the following command:
$ kn workflow create
By default, the generated project name is new-project. You can change the project name by using the [-n|--name] flag as follows:
Example command
$ kn workflow create --name my-project
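The plugin scaffolds a minimal workflow definition inside the new project directory. The following is an illustrative "hello world" style definition written against the Serverless Workflow specification; the file name and exact contents generated by kn workflow create may differ:

```yaml
# Illustrative workflow definition (for example, workflow.sw.yaml).
# The actual scaffolded file may differ.
id: hello
version: "1.0"
specVersion: "0.8"
name: Hello Workflow
start: Greet
states:
  - name: Greet            # a single inject state that returns static data
    type: inject
    data:
      message: Hello from my-project
    end: true
```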
1.1.2. Running a workflow locally
You can use the run command with kn workflow to build and run your OpenShift Serverless Logic workflow project in your current directory.
Prerequisites
- You have installed Podman on your local machine.
- You have installed the OpenShift Serverless Logic kn-workflow CLI plugin.
- You have created an OpenShift Serverless Logic workflow project.
Procedure
Run the following command to build and run your OpenShift Serverless Logic workflow project:
$ kn workflow run
When the project is ready, the Development UI automatically opens in your browser at localhost:8080/q/dev-ui, where you can find the Serverless Workflow Tools tile. Alternatively, you can access the tool directly at http://localhost:8080/q/dev-ui/org.apache.kie.sonataflow.sonataflow-quarkus-devui/workflows.
You can execute a workflow locally using a container that runs on your machine. Stop the container with Ctrl+C.
1.2. Deploying workflows
You can deploy OpenShift Serverless Logic workflows on the cluster in two modes: Dev mode and Preview mode.
1.2.1. Deploying workflows in Dev mode
You can deploy your local workflow on OpenShift Container Platform in Dev mode. You can use this deployment to experiment and modify your workflow directly on the cluster, seeing changes almost immediately. Dev mode is designed for development and testing purposes. It is ideal for initial development stages and for testing new changes.
Prerequisites
- You have installed the OpenShift Serverless Logic Operator on your cluster.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have installed the OpenShift CLI (oc).
Procedure
Create the workflow configuration YAML file.
Example workflow-dev.yaml file
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlow
metadata:
  name: greeting
  annotations:
    sonataflow.org/description: Greeting example on k8s!
    sonataflow.org/version: 0.0.1
    sonataflow.org/profile: dev
spec:
  flow:
    start: ChooseOnLanguage
    functions:
      - name: greetFunction
        type: custom
        operation: sysout
    states:
      - name: ChooseOnLanguage
        type: switch
        dataConditions:
          - condition: "${ .language == \"English\" }"
            transition: GreetInEnglish
          - condition: "${ .language == \"Spanish\" }"
            transition: GreetInSpanish
        defaultCondition: GreetInEnglish
      - name: GreetInEnglish
        type: inject
        data:
          greeting: "Hello from JSON Workflow, "
        transition: GreetPerson
      - name: GreetInSpanish
        type: inject
        data:
          greeting: "Saludos desde JSON Workflow, "
        transition: GreetPerson
      - name: GreetPerson
        type: operation
        actions:
          - name: greetAction
            functionRef:
              refName: greetFunction
              arguments:
                message: ".greeting + .name"
        end: true
To deploy the application, apply the YAML file by entering the following command:
$ oc apply -f <filename> -n <your_namespace>
Verify the deployment and check the status of the deployed workflow by entering the following command:
$ oc get workflow -n <your_namespace> -w
Ensure that your workflow is listed and the status is Running or Completed.
Edit the workflow directly in the cluster by entering the following command:
$ oc edit sonataflow <workflow_name> -n <your_namespace>
After editing, save the changes. The OpenShift Serverless Logic Operator detects the changes and updates the workflow accordingly.
Verification
To ensure the changes are applied correctly, verify the status and logs of the workflow by entering the following commands:
View the status of the workflow by running the following command:
$ oc get sonataflows -n <your_namespace>
View the workflow logs by running the following command:
$ oc logs <workflow_pod_name> -n <your_namespace>
Next steps
After completing the testing, delete the resources to avoid unnecessary usage by running the following command:
$ oc delete sonataflow <workflow_name> -n <your_namespace>
1.2.2. Deploying workflows in Preview mode
You can deploy your local workflow on OpenShift Container Platform in Preview mode. This mode allows you to experiment with and modify your workflow directly on the cluster, seeing changes almost immediately. Preview mode is used for final testing and validation before deploying to production, and it ensures that workflows run smoothly in a production-like setting.
Prerequisites
- You have an OpenShift Serverless Logic Operator installed on your cluster.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have installed the OpenShift CLI (oc).
To deploy a workflow in Preview mode, the OpenShift Serverless Logic Operator uses the build system on OpenShift Container Platform, which automatically creates the image for deploying your workflow.
The following sections explain how to build and deploy your workflow on a cluster using the OpenShift Serverless Logic Operator with a SonataFlow custom resource.
1.2.2.1. Configuring workflows in Preview mode
1.2.2.1.1. Configuring the workflow base builder image
By default, the OpenShift Serverless Logic Operator uses the image distributed in the official Red Hat Registry to build workflows. If your scenario requires strict policies for image use, such as security or hardening constraints, you can replace the default image that is used to build the final workflow container image.
To change this image, edit the SonataFlowPlatform custom resource (CR) in the namespace where you deployed your workflows.
Prerequisites
- You have an OpenShift Serverless Logic Operator installed on your cluster.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have installed the OpenShift CLI (oc).
Procedure
List the SonataFlowPlatform resources in your namespace by running the following command:
$ oc get sonataflowplatform -n <your_namespace>
Replace <your_namespace> with the name of your namespace.
Patch the SonataFlowPlatform resource with the new builder image by running the following command:
$ oc patch sonataflowplatform <name> --type merge -p '{"spec": {"build": {"config": {"baseImage": "<your_new_image_full_name_with_tag>"}}}}' -n <your_namespace>
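If you prefer to keep the override under version control, the same change can be expressed as a patch file. This is a sketch: the --patch-file flag is available in recent oc and kubectl releases, and the image name below is a placeholder.

```yaml
# base-image-patch.yaml — apply with:
#   oc patch sonataflowplatform <name> --type merge --patch-file base-image-patch.yaml -n <your_namespace>
spec:
  build:
    config:
      baseImage: registry.example.com/acme/hardened-swf-builder:latest  # placeholder image
```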
Verification
Verify that the SonataFlowPlatform CR has been patched correctly by running the following command:
$ oc describe sonataflowplatform <name> -n <your_namespace>
Replace <name> with the name of your SonataFlowPlatform resource and <your_namespace> with the name of your namespace.
Ensure that the baseImage field under spec.build.config reflects the new image.
1.2.2.1.2. Customization for the base builder Dockerfile
The OpenShift Serverless Logic Operator uses the logic-operator-rhel8-builder-config config map in the openshift-serverless-logic namespace, where the Operator is installed, to configure and run the workflow build process. You can change the Dockerfile entry in this config map to adjust the Dockerfile to your needs.
Modifying the Dockerfile can break the build process. The following example is for reference only; the actual version might be slightly different. Do not use this example for your installation.
Example logic-operator-rhel8-builder-config config map
apiVersion: v1
data:
  DEFAULT_WORKFLOW_EXTENSION: .sw.json
  Dockerfile: |
    FROM registry.redhat.io/openshift-serverless-1/logic-swf-builder-rhel8:1.33.0 AS builder

    # Variables that can be overridden by the builder
    # To add a Quarkus extension to your application
    ARG QUARKUS_EXTENSIONS
    # Args to pass to the Quarkus CLI add extension command
    ARG QUARKUS_ADD_EXTENSION_ARGS
    # Additional java/mvn arguments to pass to the builder
    ARG MAVEN_ARGS_APPEND

    # Copy from build context to skeleton resources project
    COPY --chown=1001 . ./resources

    RUN /home/kogito/launch/build-app.sh ./resources

    #=============================
    # Runtime Run
    #=============================
    FROM registry.access.redhat.com/ubi9/openjdk-17:latest

    ENV LANG='en_US.UTF-8' LANGUAGE='en_US:en'

    # We make four distinct layers so if there are application changes, the library layers can be re-used
    COPY --from=builder --chown=185 /home/kogito/serverless-workflow-project/target/quarkus-app/lib/ /deployments/lib/
    COPY --from=builder --chown=185 /home/kogito/serverless-workflow-project/target/quarkus-app/*.jar /deployments/
    COPY --from=builder --chown=185 /home/kogito/serverless-workflow-project/target/quarkus-app/app/ /deployments/app/
    COPY --from=builder --chown=185 /home/kogito/serverless-workflow-project/target/quarkus-app/quarkus/ /deployments/quarkus/

    EXPOSE 8080
    USER 185
    ENV AB_JOLOKIA_OFF=""
    ENV JAVA_OPTS="-Dquarkus.http.host=0.0.0.0 -Djava.util.logging.manager=org.jboss.logmanager.LogManager"
    ENV JAVA_APP_JAR="/deployments/quarkus-run.jar"
kind: ConfigMap
metadata:
  name: sonataflow-operator-builder-config
  namespace: sonataflow-operator-system
1.2.2.1.3. Changing resource requirements
You can specify resource requirements for the internal builder pods by creating or editing a SonataFlowPlatform resource in the workflow namespace.
Example SonataFlowPlatform resource
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: sonataflow-platform
spec:
  build:
    template:
      resources:
        requests:
          memory: "64Mi"
          cpu: "250m"
        limits:
          memory: "128Mi"
          cpu: "500m"
Only one SonataFlowPlatform resource is allowed per namespace. Fetch and edit the resource that the OpenShift Serverless Logic Operator created for you instead of trying to create another resource.
You can fine-tune the resource requirements for a particular workflow. Each workflow instance has a SonataFlowBuild instance created with the same name as the workflow. You can edit the SonataFlowBuild custom resource (CR) and specify the parameters as follows:
Example SonataFlowBuild CR
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowBuild
metadata:
  name: my-workflow
spec:
  resources:
    requests:
      memory: "64Mi"
      cpu: "250m"
    limits:
      memory: "128Mi"
      cpu: "500m"
These parameters apply only to new build instances.
1.2.2.1.4. Passing arguments to the internal builder
You can customize the build process by passing build arguments to the SonataFlowBuild instance or by setting default build arguments in the SonataFlowPlatform resource.
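As a concrete sketch, the default builder Dockerfile declares arguments such as QUARKUS_EXTENSIONS, QUARKUS_ADD_EXTENSION_ARGS, and MAVEN_ARGS_APPEND, so those names can be passed as build arguments. The workflow name and the extension coordinates below are placeholders:

```yaml
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowBuild
metadata:
  name: greeting                                  # placeholder workflow/build name
spec:
  buildArgs:
    - name: QUARKUS_EXTENSIONS                    # declared as ARG in the default builder Dockerfile
      value: io.quarkus:quarkus-smallrye-openapi  # placeholder extension coordinates
    - name: MAVEN_ARGS_APPEND                     # extra Maven arguments for the build
      value: -DskipTests
```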
Prerequisites
- You have an OpenShift Serverless Logic Operator installed on your cluster.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have installed the OpenShift CLI (oc).
Procedure
Check for the existing SonataFlowBuild instance by running the following command:
$ oc get sonataflowbuild <name> -n <namespace>
Replace <name> with the name of your SonataFlowBuild instance and <namespace> with your namespace.
Add build arguments to the SonataFlowBuild instance by running the following command:
$ oc edit sonataflowbuild <name> -n <namespace>
Add the desired build arguments under the .spec.buildArgs field of the SonataFlowBuild instance:
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowBuild
metadata:
  name: <name>
spec:
  buildArgs:
    - name: <argument_1>
      value: <value_1>
    - name: <argument_2>
      value: <value_2>
Replace <name> with the name of the existing SonataFlowBuild instance.
Save the file and exit.
A new build with the updated configuration starts.
Set the default build arguments in the SonataFlowPlatform resource by running the following command:
$ oc edit sonataflowplatform <name> -n <namespace>
Add the desired build arguments under the .spec.build.template.buildArgs field of the SonataFlowPlatform resource:
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: <name>
spec:
  build:
    template:
      buildArgs:
        - name: <argument_1>
          value: <value_1>
        - name: <argument_2>
          value: <value_2>
Replace <name> with the name of the existing SonataFlowPlatform resource.
- Save the file and exit.
1.2.2.1.5. Setting environment variables in the internal builder
You can set environment variables in the SonataFlowBuild internal builder pod. These variables are valid for the build context only and are not set in the final workflow image.
Prerequisites
- You have an OpenShift Serverless Logic Operator installed on your cluster.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have installed the OpenShift CLI (oc).
Procedure
Check for an existing SonataFlowBuild instance by running the following command:
$ oc get sonataflowbuild <name> -n <namespace>
Replace <name> with the name of your SonataFlowBuild instance and <namespace> with your namespace.
Edit the SonataFlowBuild instance by running the following command:
$ oc edit sonataflowbuild <name> -n <namespace>
Example SonataFlowBuild instance
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowBuild
metadata:
  name: <name>
spec:
  envs:
    - name: <env_variable_1>
      value: <value_1>
    - name: <env_variable_2>
      value: <value_2>
Save the file and exit.
A new build with the updated configuration starts.
Alternatively, you can set the environment variables in the SonataFlowPlatform resource, so that every new build instance uses them as a template.
Example SonataFlowPlatform instance
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: <name>
spec:
  build:
    template:
      envs:
        - name: <env_variable_1>
          value: <value_1>
        - name: <env_variable_2>
          value: <value_2>
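For illustration, a concrete SonataFlowBuild might route the builder's outbound traffic through a corporate proxy during dependency downloads. The variable name and proxy address below are placeholder assumptions, not values required by the Operator:

```yaml
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowBuild
metadata:
  name: greeting                   # placeholder workflow/build name
spec:
  envs:
    - name: HTTPS_PROXY            # hypothetical: proxy for downloads during the build only
      value: http://proxy.example.com:3128
```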
1.2.2.1.6. Changing the base builder image
You can modify the default builder image used by the OpenShift Serverless Logic Operator by editing the logic-operator-rhel8-builder-config
config map.
Prerequisites
- You have an OpenShift Serverless Logic Operator installed on your cluster.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have installed the OpenShift CLI (oc).
Procedure
Edit the logic-operator-rhel8-builder-config config map by running the following command:
$ oc edit cm/logic-operator-rhel8-builder-config -n openshift-serverless-logic
In your editor, locate the Dockerfile entry and change the first line to the desired image.
Example
data:
  Dockerfile: |
    FROM registry.redhat.io/openshift-serverless-1/logic-swf-builder-rhel8:1.33.0 # Change the image to the desired one
- Save the changes.
1.2.2.2. Building and deploying your workflow
You can create a SonataFlow custom resource (CR) on OpenShift Container Platform, and the OpenShift Serverless Logic Operator builds and deploys the workflow.
Prerequisites
- You have an OpenShift Serverless Logic Operator installed on your cluster.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have installed the OpenShift CLI (oc).
Procedure
Create a workflow YAML file similar to the following:
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlow
metadata:
  name: greeting
  annotations:
    sonataflow.org/description: Greeting example on k8s!
    sonataflow.org/version: 0.0.1
spec:
  flow:
    start: ChooseOnLanguage
    functions:
      - name: greetFunction
        type: custom
        operation: sysout
    states:
      - name: ChooseOnLanguage
        type: switch
        dataConditions:
          - condition: "${ .language == \"English\" }"
            transition: GreetInEnglish
          - condition: "${ .language == \"Spanish\" }"
            transition: GreetInSpanish
        defaultCondition: GreetInEnglish
      - name: GreetInEnglish
        type: inject
        data:
          greeting: "Hello from JSON Workflow, "
        transition: GreetPerson
      - name: GreetInSpanish
        type: inject
        data:
          greeting: "Saludos desde JSON Workflow, "
        transition: GreetPerson
      - name: GreetPerson
        type: operation
        actions:
          - name: greetAction
            functionRef:
              refName: greetFunction
              arguments:
                message: ".greeting+.name"
        end: true
Apply the SonataFlow workflow definition to your OpenShift Container Platform namespace by running the following command:
$ oc apply -f <workflow_name>.yaml -n <your_namespace>
Example command for the greetings-workflow.yaml file:
$ oc apply -f greetings-workflow.yaml -n workflows
List all the build configurations by running the following command:
$ oc get buildconfigs -n workflows
Get the logs of the build process by running the following command:
$ oc logs buildconfig/<workflow-name> -n <your_namespace>
Example command for the greetings-workflow.yaml file:
$ oc logs buildconfig/greeting -n workflows
Verification
To verify the deployment, list all the pods by running the following command:
$ oc get pods -n <your_namespace>
Ensure that the pod corresponding to your workflow is running.
Check the running pods and their logs by running the following command:
$ oc logs pod/<pod_name> -n <your_namespace>
1.2.2.3. Verifying workflow deployment
You can verify that your OpenShift Serverless Logic workflow is running by performing a test HTTP call from the workflow pod.
Prerequisites
- You have an OpenShift Serverless Logic Operator installed on your cluster.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have installed the OpenShift CLI (oc).
Procedure
Create a workflow YAML file similar to the following:
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlow
metadata:
  name: greeting
  annotations:
    sonataflow.org/description: Greeting example on k8s!
    sonataflow.org/version: 0.0.1
spec:
  flow:
    start: ChooseOnLanguage
    functions:
      - name: greetFunction
        type: custom
        operation: sysout
    states:
      - name: ChooseOnLanguage
        type: switch
        dataConditions:
          - condition: "${ .language == \"English\" }"
            transition: GreetInEnglish
          - condition: "${ .language == \"Spanish\" }"
            transition: GreetInSpanish
        defaultCondition: GreetInEnglish
      - name: GreetInEnglish
        type: inject
        data:
          greeting: "Hello from JSON Workflow, "
        transition: GreetPerson
      - name: GreetInSpanish
        type: inject
        data:
          greeting: "Saludos desde JSON Workflow, "
        transition: GreetPerson
      - name: GreetPerson
        type: operation
        actions:
          - name: greetAction
            functionRef:
              refName: greetFunction
              arguments:
                message: ".greeting+.name"
        end: true
Create a route for the workflow service by running the following command:
$ oc expose svc/<workflow-service-name> -n workflows
This command creates a public URL to access the workflow service.
Set an environment variable for the public URL by running the following command:
$ WORKFLOW_SVC=$(oc get route/<workflow-service-name> -n <namespace> --template='{{.spec.host}}')
Make an HTTP call to the workflow to send a POST request to the service by running the following command:
$ curl -X POST -H 'Content-Type: application/json' -H 'Accept: application/json' -d '<your_json_payload>' http://$WORKFLOW_SVC/<endpoint>
Example output
{
  "id": "b5fbfaa3-b125-4e6c-9311-fe5a3577efdd",
  "workflowdata": {
    "name": "John",
    "language": "English",
    "greeting": "Hello from JSON Workflow, "
  }
}
This output shows an example of the expected response if the workflow is running.
1.2.2.4. Restarting a build
To restart a build, you can add or edit the sonataflow.org/restartBuild: true annotation in the SonataFlowBuild instance. Restarting a build is necessary if there is a problem with your workflow or with the initial build revision.
Prerequisites
- You have an OpenShift Serverless Logic Operator installed on your cluster.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have installed the OpenShift CLI (oc).
Procedure
Check if the SonataFlowBuild instance exists by running the following command:
$ oc get sonataflowbuild <name> -n <namespace>
Edit the SonataFlowBuild instance by running the following command:
$ oc edit sonataflowbuild/<name> -n <namespace>
Replace <name> with the name of your SonataFlowBuild instance and <namespace> with the namespace where your workflow is deployed.
Add the sonataflow.org/restartBuild: "true" annotation to restart the build:
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowBuild
metadata:
  name: <name>
  annotations:
    sonataflow.org/restartBuild: "true"
This action triggers the OpenShift Serverless Logic Operator to start a new build of the workflow.
To monitor the build process, check the build logs by running the following command:
$ oc logs buildconfig/<name> -n <namespace>
Replace <name> with the name of your SonataFlowBuild instance and <namespace> with the namespace where your workflow is deployed.
1.2.3. Editing a workflow
When the OpenShift Serverless Logic Operator deploys a workflow service, it creates two config maps to store runtime properties:
- User properties: Defined in a ConfigMap named after the SonataFlow object with the suffix -props. For example, if your workflow name is greeting, then the ConfigMap name is greeting-props.
- Managed properties: Defined in a ConfigMap named after the SonataFlow object with the suffix -managed-props. For example, if your workflow name is greeting, then the ConfigMap name is greeting-managed-props.
Managed properties always override any user property with the same key name and cannot be edited by the user. Any change would be overwritten by the Operator at the next reconciliation cycle.
Prerequisites
- You have installed the OpenShift Serverless Logic Operator on your cluster.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have installed the OpenShift CLI (oc).
Procedure
Open and edit the ConfigMap by running the following command:
$ oc edit cm <workflow_name>-props -n <namespace>
Replace <workflow_name> with the name of your workflow and <namespace> with the namespace where your workflow is deployed.
Add the properties in the application.properties section.
Example of workflow properties stored within a ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app: greeting
  name: greeting-props
  namespace: default
data:
  application.properties: |
    my.properties.key = any-value
Ensure the properties are correctly formatted to prevent the Operator from replacing your configuration with the default one.
- After making the necessary changes, save the file and exit the editor.
1.2.4. Testing a workflow
To verify that your OpenShift Serverless Logic workflow is running correctly, you can perform a test HTTP call from the relevant pod.
Prerequisites
- You have installed the OpenShift Serverless Logic Operator on your cluster.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have installed the OpenShift CLI (oc).
Procedure
Create a route for the specified service in your namespace by running the following command:
$ oc expose svc <service_name> -n <namespace>
Fetch the URL for the newly exposed service by running the following command:
$ WORKFLOW_SVC=$(oc get route/<service_name> --template='{{.spec.host}}')
Perform a test HTTP call and send a POST request by running the following command:
$ curl -X POST -H 'Content-Type:application/json' -H 'Accept:application/json' -d '<request_body>' http://$WORKFLOW_SVC/<endpoint>
- Verify the response to ensure the workflow is functioning as expected.
1.2.5. Troubleshooting a workflow
The OpenShift Serverless Logic Operator deploys its pod with health check probes to ensure the workflow runs in a healthy state. If changes cause these health checks to fail, the pod stops responding.
Prerequisites
- You have installed the OpenShift Serverless Logic Operator on your cluster.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have installed the OpenShift CLI (oc).
Procedure
Check the workflow status by running the following command:
$ oc get workflow <name> -o jsonpath='{.status.conditions}' | jq .
To fetch and analyze the logs from the workflow's deployment, run the following command:
$ oc logs deployment/<workflow_name> -f
1.2.6. Deleting a workflow
You can use the oc delete command to delete an OpenShift Serverless Logic workflow by using the file that defines it in your current directory.
Prerequisites
- You have installed the OpenShift Serverless Logic Operator on your cluster.
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have installed the OpenShift CLI (oc).
Procedure
- Verify that you have the correct file that defines the workflow you want to delete. For example, workflow.yaml.
- Run the oc delete command to remove the workflow from your specified namespace:
$ oc delete -f <your_file> -n <your_namespace>
Replace <your_file> with the name of your workflow file and <your_namespace> with your namespace.
Chapter 2. Managing services
2.1. Configuring OpenAPI services
The OpenAPI Specification (OAS) defines a standard, programming language-agnostic interface for HTTP APIs. You can understand a service’s capabilities without access to the source code, additional documentation, or network traffic inspection. When you define a service by using the OpenAPI, you can understand and interact with it using minimal implementation logic. Just as interface descriptions simplify lower-level programming, the OpenAPI Specification eliminates guesswork in calling a service.
2.1.1. OpenAPI function definition
OpenShift Serverless Logic allows workflows to interact with remote services by using an OpenAPI specification reference in a function.
Example OpenAPI function definition
{
  "functions": [
    {
      "name": "myFunction1",
      "operation": "classpath:/myopenapi-file.yaml#myFunction1"
    }
  ]
}
The operation attribute is a string composed of the following parameters:
- URI: The engine uses this to locate the specification file, such as classpath.
- Operation identifier: You can find this identifier in the OpenAPI specification file.
OpenShift Serverless Logic supports the following URI schemes:
- classpath: Use this for files located in the src/main/resources folder of the application project. classpath is the default URI scheme. If you do not define a URI scheme, the file location is src/main/resources/myopenapifile.yaml.
- file: Use this for files located in the file system.
- http or https: Use these for remotely located files.
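Inside a SonataFlow custom resource, the same URI schemes can appear in the flow's function definitions. The following fragment is illustrative only; the file names and operation identifiers are placeholders:

```yaml
# Illustrative flow fragment; file names and operation IDs are placeholders.
functions:
  - name: fromClasspath
    operation: specs/myopenapifile.yaml#myOperation                      # classpath (default scheme), bundled with the application
  - name: fromFile
    operation: file:///opt/specs/myopenapifile.yaml#myOperation          # read from the file system
  - name: fromRemote
    operation: https://example.com/specs/myopenapifile.yaml#myOperation  # fetched remotely
```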
Ensure the OpenAPI specification files are available during build time. OpenShift Serverless Logic uses an internal code generation feature to send requests at runtime. After you build the application image, OpenShift Serverless Logic will not have access to these files.
If the OpenAPI service you want to add to the workflow does not have a specification file, you can either create one or update the service to generate and expose the file.
2.1.2. Sending REST requests based on the OpenAPI specification
To send REST requests that are based on the OpenAPI specification files, you must perform the following procedures:
- Define the function references
- Access the defined functions in the workflow states
Prerequisites
- You have OpenShift Serverless Logic Operator installed on your cluster.
- You have access to a OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have access to the OpenAPI specification files.
Procedure
To define the OpenAPI functions:
- Identify and access the OpenAPI specification files for the services you intend to invoke.
Copy the OpenAPI specification files into your workflow service directory, such as src/main/resources/specs.
The following example shows the OpenAPI specification for the multiplication REST service:
Example multiplication REST service OpenAPI specification
openapi: 3.0.3
info:
  title: Generated API
  version: "1.0"
paths:
  /:
    post:
      operationId: doOperation
      parameters:
        - in: header
          name: notUsed
          schema:
            type: string
          required: false
      requestBody:
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/MultiplicationOperation'
      responses:
        "200":
          description: OK
          content:
            application/json:
              schema:
                type: object
                properties:
                  product:
                    format: float
                    type: number
components:
  schemas:
    MultiplicationOperation:
      type: object
      properties:
        leftElement:
          format: float
          type: number
        rightElement:
          format: float
          type: number
To define functions in the workflow, use the operationId from the OpenAPI specification to reference the desired operations in your function definitions.
Example function definitions in the temperature conversion application
{
  "functions": [
    {
      "name": "multiplication",
      "operation": "specs/multiplication.yaml#doOperation"
    },
    {
      "name": "subtraction",
      "operation": "specs/subtraction.yaml#doOperation"
    }
  ]
}
- Ensure that your function definitions reference the correct paths to the OpenAPI files stored in the src/main/resources/specs directory.
To access the defined functions in the workflow states:
- Define workflow actions to call the function definitions you added. Ensure each action references a function defined earlier.
Use the functionRef attribute to refer to the specific function by its name. Map the arguments in the functionRef by using the parameters defined in the OpenAPI specification.
The following example shows how to map function arguments in a workflow:
Example for mapping function arguments in workflow
{
  "states": [
    {
      "name": "SetConstants",
      "type": "inject",
      "data": {
        "subtractValue": 32.0,
        "multiplyValue": 0.5556
      },
      "transition": "Computation"
    },
    {
      "name": "Computation",
      "actionMode": "sequential",
      "type": "operation",
      "actions": [
        {
          "name": "subtract",
          "functionRef": {
            "refName": "subtraction",
            "arguments": {
              "leftElement": ".fahrenheit",
              "rightElement": ".subtractValue"
            }
          }
        },
        {
          "name": "multiply",
          "functionRef": {
            "refName": "multiplication",
            "arguments": {
              "leftElement": ".difference",
              "rightElement": ".multiplyValue"
            }
          }
        }
      ],
      "end": {
        "terminate": true
      }
    }
  ]
}
- Check the Operation Object section of the OpenAPI specification to understand how to structure parameters in the request.
- Use jq expressions to extract data from the payload and map it to the required parameters. Ensure the engine maps parameter names according to the OpenAPI specification. For operations requiring parameters in the request path instead of the body, refer to the parameter definitions in the OpenAPI specification.
For more information about mapping parameters in the request path instead of request body, you can refer to the following PetStore API example:
Example for mapping path parameters
{
  "/pet/{petId}": {
    "get": {
      "tags": ["pet"],
      "summary": "Find pet by ID",
      "description": "Returns a single pet",
      "operationId": "getPetById",
      "parameters": [
        {
          "name": "petId",
          "in": "path",
          "description": "ID of pet to return",
          "required": true,
          "schema": {
            "type": "integer",
            "format": "int64"
          }
        }
      ]
    }
  }
}
The following is an example invocation of a function, in which only one parameter named petId is added in the request path:
Example of calling the PetStore function
{
  "name": "CallPetStore",
  "actionMode": "sequential",
  "type": "operation",
  "actions": [
    {
      "name": "getPet",
      "functionRef": {
        "refName": "getPetById",
        "arguments": {
          "petId": ".petId"
        }
      }
    }
  ]
}
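As a rough illustration of what happens with path parameters, the following Python sketch substitutes a resolved petId argument into the OpenAPI path template before the request is built. This is a hypothetical illustration of the mapping rule, not the engine implementation.

```python
# Hypothetical sketch: substitute resolved path parameters into an OpenAPI
# path template such as "/pet/{petId}" before the HTTP request is sent.
def build_path(template, params):
    path = template
    for name, value in params.items():
        path = path.replace("{" + name + "}", str(value))
    return path

print(build_path("/pet/{petId}", {"petId": 42}))  # /pet/42
```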
2.1.3. Configuring the endpoint URL of OpenAPI services
After accessing the function definitions in workflow states, you can configure the endpoint URL of OpenAPI services.
Prerequisites
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have created your OpenShift Serverless Logic project.
- You have access to the OpenAPI specification files.
- You have defined the function definitions in the workflow.
- You have access to the functions defined in the workflow states.
Procedure
- Locate the OpenAPI specification file you want to configure. For example, subtraction.yaml.
- Convert the file name into a valid configuration key by replacing special characters, such as ., with underscores and converting letters to lowercase. For example, change subtraction.yaml to subtraction_yaml.
To define the configuration key, use the converted file name as the REST client configuration key. Set this key as an environment variable, as shown in the following example:
quarkus.rest-client.subtraction_yaml.url=http://myserver.com
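The file-name-to-key conversion described above can be sketched in a few lines of Python. This is an illustration of the documented rule, not code taken from the product:

```python
import re

# Illustrative helper: replace special characters with underscores and
# lowercase the result, mirroring the documented file-name conversion rule.
def to_config_key(file_name):
    return re.sub(r"[^a-z0-9]", "_", file_name.lower())

key = to_config_key("subtraction.yaml")
print(f"quarkus.rest-client.{key}.url=http://myserver.com")
```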
To prevent hardcoding URLs in the application.properties file, use environment variable substitution, as shown in the following example:
quarkus.rest-client.subtraction_yaml.url=${SUBTRACTION_URL:http://myserver.com}
In this example:
- Configuration key: quarkus.rest-client.subtraction_yaml.url
- Environment variable: SUBTRACTION_URL
- Fallback URL: http://myserver.com
- Ensure that the SUBTRACTION_URL environment variable is set in your system or deployment environment. If the variable is not found, the application uses the fallback URL http://myserver.com.
- Add the configuration key and URL substitution to the application.properties file:
quarkus.rest-client.subtraction_yaml.url=${SUBTRACTION_URL:http://myserver.com}
- Deploy or restart your application to apply the new configuration settings.
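The ${VAR:default} substitution semantics used in the property above can be sketched as follows. This is a minimal illustration only; the real behavior comes from the Quarkus (MicroProfile Config) configuration machinery:

```python
import re

# Minimal sketch of ${VAR:default} expansion: use the environment variable
# if set, otherwise fall back to the default after the first colon.
def expand(value, env):
    def replace(match):
        name, _, default = match.group(1).partition(":")
        return env.get(name, default)
    return re.sub(r"\$\{([^}]+)\}", replace, value)

prop = "${SUBTRACTION_URL:http://myserver.com}"
print(expand(prop, {}))                                      # fallback applies
print(expand(prop, {"SUBTRACTION_URL": "http://svc:8080"}))  # env var wins
```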
2.2. Configuring OpenAPI services endpoints
OpenShift Serverless Logic uses the kogito.sw.operationIdStrategy property to generate the REST client for invoking services defined in OpenAPI documents. This property determines how the configuration key is derived for the REST client configuration.
The kogito.sw.operationIdStrategy property supports the following values: FILE_NAME, FULL_URI, FUNCTION_NAME, and SPEC_TITLE.
FILE_NAME
OpenShift Serverless Logic uses the OpenAPI document file name to create the configuration key. The key is based on the file name, where special characters are replaced with underscores.
Example configuration:
quarkus.rest-client.stock_portfolio_svc_yaml.url=http://localhost:8282/ 1
- 1
- The OpenAPI file path is src/main/resources/openapi/stock-portfolio-svc.yaml. The generated key that configures the URL for the REST client is stock_portfolio_svc_yaml.
FULL_URI
OpenShift Serverless Logic uses the complete URI path of the OpenAPI document as the configuration key. The full URI is sanitized to form the key.
Example for Serverless Workflow
{
  "id": "myworkflow",
  "functions": [
    {
      "name": "myfunction",
      "operation": "https://my.remote.host/apicatalog/apis/123/document"
    }
  ]
  ...
}
Example configuration:
quarkus.rest-client.apicatalog_apis_123_document.url=http://localhost:8282/ 1
- 1
- The URI path is https://my.remote.host/apicatalog/apis/123/document. The generated key that configures the URL for the REST client is apicatalog_apis_123_document.
FUNCTION_NAME
OpenShift Serverless Logic combines the workflow ID and the function name referencing the OpenAPI document to generate the configuration key.
Example for Serverless Workflow
{
  "id": "myworkflow",
  "functions": [
    {
      "name": "myfunction",
      "operation": "https://my.remote.host/apicatalog/apis/123/document"
    }
  ]
  ...
}
Example configuration:
quarkus.rest-client.myworkflow_myfunction.url=http://localhost:8282/ 1
- 1
- The workflow ID is myworkflow. The function name is myfunction. The generated key that configures the URL for the REST client is myworkflow_myfunction.
SPEC_TITLE
OpenShift Serverless Logic uses the info.title value from the OpenAPI document to create the configuration key. The title is sanitized to form the key.
Example for OpenAPI document
openapi: 3.0.3
info:
  title: stock-service API
  version: 2.0.0-SNAPSHOT
paths:
  /stock-price/{symbol}:
    ...
Example configuration:
quarkus.rest-client.stock-service_API.url=http://localhost:8282/ 1
- 1
- The OpenAPI document title is stock-service API. The generated key that configures the URL for the REST client is stock-service_API.
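The four strategies can be summarized in one sketch. This is a hypothetical illustration of the key derivations shown in the examples above, not the product implementation, and the sanitization rules are inferred from those examples:

```python
import re
from urllib.parse import urlparse

# Hypothetical summary of how each kogito.sw.operationIdStrategy value
# derives the REST client configuration key (inferred from the examples).
def config_key(strategy, **kw):
    if strategy == "FILE_NAME":
        return re.sub(r"[^A-Za-z0-9]", "_", kw["file_name"])
    if strategy == "FULL_URI":
        return urlparse(kw["uri"]).path.lstrip("/").replace("/", "_")
    if strategy == "FUNCTION_NAME":
        return f'{kw["workflow_id"]}_{kw["function_name"]}'
    if strategy == "SPEC_TITLE":
        return kw["title"].replace(" ", "_")
    raise ValueError(f"unknown strategy: {strategy}")

print(config_key("FILE_NAME", file_name="stock-portfolio-svc.yaml"))
print(config_key("FULL_URI", uri="https://my.remote.host/apicatalog/apis/123/document"))
print(config_key("FUNCTION_NAME", workflow_id="myworkflow", function_name="myfunction"))
print(config_key("SPEC_TITLE", title="stock-service API"))
```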
2.2.1. Using URI alias
As an alternative to the kogito.sw.operationIdStrategy
property, you can assign an alias to a URI by using the workflow-uri-definitions
custom extension. This alias simplifies the configuration process and can be used as a configuration key in REST client settings and function definitions.
The workflow-uri-definitions
extension allows you to map a URI to an alias, which you can reference throughout the workflow and in your configuration files. This approach provides a centralized way to manage URIs and their configurations.
Prerequisites
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have access to the OpenAPI specification files.
Procedure
Add the workflow-uri-definitions extension to your workflow. Within this extension, create aliases for your URIs.
Example workflow
{
  "extensions": [
    {
      "extensionid": "workflow-uri-definitions", 1
      "definitions": {
        "remoteCatalog": "https://my.remote.host/apicatalog/apis/123/document" 2
      }
    }
  ],
  "functions": [ 3
    {
      "name": "operation1",
      "operation": "remoteCatalog#operation1"
    },
    {
      "name": "operation2",
      "operation": "remoteCatalog#operation2"
    }
  ]
}
- 1
- Set the extension ID to workflow-uri-definitions.
- 2
- Set the alias definition by mapping the remoteCatalog alias to the https://my.remote.host/apicatalog/apis/123/document URI.
- 3
- Set the function operations by using the remoteCatalog alias with the operation1 and operation2 operation identifiers.
In the application.properties file, configure the REST client by using the alias defined in the workflow.
Example property
quarkus.rest-client.remoteCatalog.url=http://localhost:8282/
In the previous example, the configuration key is set to quarkus.rest-client.remoteCatalog.url, and the URL is set to http://localhost:8282/, which the REST clients use by referring to the remoteCatalog alias.
In your workflow, use the alias when defining functions that operate on the URI.
Example Workflow (continued):
{
  "functions": [
    {
      "name": "operation1",
      "operation": "remoteCatalog#operation1"
    },
    {
      "name": "operation2",
      "operation": "remoteCatalog#operation2"
    }
  ]
}
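The alias mechanism amounts to a simple indirection. The following Python sketch illustrates how an "alias#operationId" reference could be resolved against the workflow-uri-definitions mapping; it is an illustration of the concept, not engine code:

```python
# Illustrative sketch: resolve an "alias#operationId" operation reference
# using the workflow-uri-definitions alias mapping.
definitions = {"remoteCatalog": "https://my.remote.host/apicatalog/apis/123/document"}

def resolve_operation(operation):
    alias, _, operation_id = operation.partition("#")
    return definitions.get(alias, alias), operation_id

uri, op_id = resolve_operation("remoteCatalog#operation1")
print(uri, op_id)
```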
2.3. Troubleshooting services
Efficient troubleshooting of the HTTP-based function invocations, such as those using OpenAPI functions, is crucial for maintaining workflow orchestrations.
To diagnose issues, you can trace HTTP requests and responses.
2.3.1. Tracing HTTP requests and responses
OpenShift Serverless Logic uses the Apache HTTP client to trace HTTP requests and responses.
Prerequisites
- You have access to an OpenShift Serverless Logic project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have access to the OpenAPI specification files.
- You have access to the workflow definition and instance IDs for correlating HTTP requests and responses.
- You have access to the log configuration of the application where the HTTP service invocations occur.
Procedure
- Add the following configuration to your application's application.properties file to turn on debug logging for the Apache HTTP client, which OpenShift Serverless Logic uses to trace HTTP requests and responses:
# Turning HTTP tracing on
quarkus.log.category."org.apache.http".level=DEBUG
- Restart your application to propagate the log configuration changes.
After restarting, check the logs for HTTP request traces.
Example logs of a traced HTTP request
2023-09-25 19:00:55,242 DEBUG Executing request POST /v2/models/yolo-model/infer HTTP/1.1
2023-09-25 19:00:55,243 DEBUG http-outgoing-0 >> POST /v2/models/yolo-model/infer HTTP/1.1
2023-09-25 19:00:55,243 DEBUG http-outgoing-0 >> Accept: application/json
2023-09-25 19:00:55,243 DEBUG http-outgoing-0 >> Content-Type: application/json
2023-09-25 19:00:55,243 DEBUG http-outgoing-0 >> kogitoprocid: inferencepipeline
2023-09-25 19:00:55,243 DEBUG http-outgoing-0 >> kogitoprocinstanceid: 85114b2d-9f64-496a-bf1d-d3a0760cde8e
2023-09-25 19:00:55,243 DEBUG http-outgoing-0 >> kogitoprocist: Active
2023-09-25 19:00:55,243 DEBUG http-outgoing-0 >> kogitoproctype: SW
2023-09-25 19:00:55,243 DEBUG http-outgoing-0 >> kogitoprocversion: 1.0
2023-09-25 19:00:55,243 DEBUG http-outgoing-0 >> Content-Length: 23177723
2023-09-25 19:00:55,244 DEBUG http-outgoing-0 >> Host: yolo-model-opendatahub-model.apps.trustyai.dzzt.p1.openshiftapps.com
Check the logs for HTTP response traces following the request logs.
Example logs of a traced HTTP response
2023-09-25 19:01:00,738 DEBUG http-outgoing-0 << "HTTP/1.1 500 Internal Server Error[\r][\n]"
2023-09-25 19:01:00,738 DEBUG http-outgoing-0 << "content-type: application/json[\r][\n]"
2023-09-25 19:01:00,738 DEBUG http-outgoing-0 << "date: Mon, 25 Sep 2023 19:01:00 GMT[\r][\n]"
2023-09-25 19:01:00,738 DEBUG http-outgoing-0 << "content-length: 186[\r][\n]"
2023-09-25 19:01:00,738 DEBUG http-outgoing-0 << "set-cookie: 276e4597d7fcb3b2cba7b5f037eeacf5=5427fafade21f8e7a4ee1fa6c221cf40; path=/; HttpOnly; Secure; SameSite=None[\r][\n]"
2023-09-25 19:01:00,738 DEBUG http-outgoing-0 << "[\r][\n]"
2023-09-25 19:01:00,738 DEBUG http-outgoing-0 << "{"code":13, "message":"Failed to load Model due to adapter error: Error calling stat on model file: stat /models/yolo-model__isvc-1295fd6ba9/yolov5s-seg.onnx: no such file or directory"}"
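Because each traced request carries the kogitoprocinstanceid header shown above, you can correlate log lines with a specific workflow instance. The following Python sketch is one hypothetical way to pull the instance IDs out of captured log text; it is an illustration, not a supported tool:

```python
import re

# Illustrative helper: extract workflow instance IDs from traced request
# headers (the kogitoprocinstanceid header in the example logs above).
def find_instance_ids(log_text):
    return re.findall(r"kogitoprocinstanceid:\s*(\S+)", log_text)

log = "http-outgoing-0 >> kogitoprocinstanceid: 85114b2d-9f64-496a-bf1d-d3a0760cde8e"
print(find_instance_ids(log))  # ['85114b2d-9f64-496a-bf1d-d3a0760cde8e']
```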