Chapter 6. OpenShift Serverless Logic overview
OpenShift Serverless Logic enables developers to define declarative workflow models that orchestrate event-driven, serverless applications.
You can write the workflow models in YAML or JSON format, both of which are well suited to developing and deploying serverless applications in cloud or container environments.
To deploy the workflows in your OpenShift Container Platform, you can use the OpenShift Serverless Logic Operator. The following sections offer an overview of the various OpenShift Serverless Logic concepts.
6.1. Events
An event state includes one or more event definitions that specify the CloudEvent types the state listens to. You can use an event state to start a new workflow instance when it receives a designated CloudEvent, or pause an existing workflow instance until it receives one.
In an event state definition, the onEvents property groups CloudEvent types that trigger the same set of actions. The exclusive property determines how the system matches events. If exclusive is false, the system requires all CloudEvent types in the eventRefs array to match. Otherwise, any referenced CloudEvent type can trigger a match.
The following example shows an event definition that consists of two CloudEvent types, noisy and silent:
"events": [
{
"name": "noisyEvent",
"source": "",
"type": "noisy",
"dataOnly" : "false"
},
{
"name": "silentEvent",
"source": "",
"type": "silent"
}
]
You can define an event state with separate onEvent items for noisy and silent CloudEvent types, and set the exclusive property to false to run different actions when both events occur.
{
  "name": "waitForEvent",
  "type": "event",
  "onEvents": [
    {
      "eventRefs": ["noisyEvent"],
      "actions": [
        {
          "functionRef": "letsGetLoud"
        }
      ]
    },
    {
      "eventRefs": ["silentEvent"],
      "actions": [
        {
          "functionRef": "beQuiet"
        }
      ]
    }
  ],
  "exclusive": false
}
6.2. Callbacks
The Callback state performs an action and waits for an event that the action produces before it resumes the workflow. It invokes an asynchronous external service, making it suitable for fire-and-wait-for-result operations.
From a workflow perspective, an asynchronous service returns control to the caller immediately without waiting for the action to complete. After the action completes, the system publishes a CloudEvent to resume the workflow.
The following example defines a Callback state in JSON format:
{
  "name": "CheckCredit",
  "type": "callback",
  "action": {
    "functionRef": {
      "refName": "callCreditCheckMicroservice",
      "arguments": {
        "customer": "${ .customer }"
      }
    }
  },
  "eventRef": "CreditCheckCompletedEvent",
  "timeouts": {
    "stateExecTimeout": "PT15M"
  },
  "transition": "EvaluateDecision"
}
The following example defines the same Callback state in YAML format:
name: CheckCredit
type: callback
action:
  functionRef:
    refName: callCreditCheckMicroservice
    arguments:
      customer: "${ .customer }"
eventRef: CreditCheckCompletedEvent
timeouts:
  stateExecTimeout: PT15M
transition: EvaluateDecision
The action property defines a function call that triggers an external activity or service. After the action executes, the Callback state waits for a CloudEvent, which indicates the completion of the manual decision by the called service.
After the completion callback event is received, the Callback state completes its execution and transitions to the next defined workflow state or completes workflow execution if it is an end state.
6.3. JQ expressions
Each workflow instance uses a data model. The data model consists of a JSON object, regardless of whether the workflow file uses YAML or JSON. The initial content of the JSON object depends on how you start the workflow. If you start the workflow by using a CloudEvent, the workflow reads content from the data property. If you start the workflow through an HTTP POST request, the workflow reads content from the request body.
JSON Query (JQ) expressions interact with the data model. The system supports JsonPath and JQ expression languages, and JQ serves as the default. You can change the expression language to JsonPath by using the expressionLang property.
The following example defines an expression function that uses JQ to compute maximum and minimum values from the workflow data:
{
  "name": "max",
  "type": "expression",
  "operation": "{max: .numbers | max_by(.x), min: .numbers | min_by(.y)}"
}
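The expressionLang property is set at the workflow level. The following sketch, with a hypothetical workflow id and name, shows how you might switch the expression language to JsonPath:

{
  "id": "jsonpath_example",
  "name": "JsonPath Example",
  "expressionLang": "jsonpath",
  "start": "MyFirstState"
}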
6.4. Error handling
With OpenShift Serverless Logic, you define explicit error handling in your workflow model instead of relying on generic mechanisms. Explicit error handling helps you manage errors that occur during interactions between the workflow and external systems. When an error occurs, it changes the normal workflow sequence. The workflow transitions to an alternative state that can handle the error instead of moving to the predefined state.
Each workflow state defines its own error handling for issues that occur during its execution. Error handling in one state does not handle errors that occur in another state during workflow execution.
If the workflow encounters an unknown error that the definition does not handle explicitly, the runtime reports the error and stops workflow execution.
6.4.1. Error definition
An error definition in a workflow includes the name and code parameters. The name provides a short, natural language description of the error, such as wrong parameter. The code helps the implementation identify the error.
The code parameter is mandatory. The engine uses different strategies to map the value to a runtime exception, including fully qualified class name (FQCN), error message, and status code.
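For illustration, the code value can also be an FQCN instead of a status code. The following sketch uses a standard Java exception class as the code; the error name is hypothetical:

{
  "name": "unexpected runtime error",
  "code": "java.lang.RuntimeException"
}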
During workflow execution, you must handle known errors in the top-level errors property. You can define this property as a string to reference a reusable JSON or YAML file, or as an array to define errors inline in the workflow.
The following example shows how to reference a reusable JSON error definition file:
{
  "errors": "file://documents/reusable/errors.json"
}
The following example shows how to reference a reusable YAML error definition file:
errors: file://documents/reusable/errors.yaml
The following example defines workflow errors inline in a JSON file:
{
  "errors": [
    {
      "name": "Service not found error",
      "code": "404",
      "description": "Server has not found anything matching the provided service endpoint information"
    }
  ]
}
The following example defines workflow errors inline in a YAML file:
errors:
  - name: Service not found error
    code: '404'
    description: Server has not found anything matching the provided service endpoint information
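A state handles a declared error through its onErrors property. The following sketch, with hypothetical state, function, and transition names, shows how a state might reference the Service not found error defined earlier:

{
  "name": "InvokeService",
  "type": "operation",
  "actions": [
    {
      "functionRef": "callServiceFunction"
    }
  ],
  "onErrors": [
    {
      "errorRef": "Service not found error",
      "transition": "HandleNotFound"
    }
  ],
  "transition": "NextState"
}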
6.5. Schema definitions
OpenShift Serverless Logic supports two types of schema definitions: input schema definition and output schema definition.
6.5.1. Input schema definition
The dataInputSchema parameter validates workflow data input against a defined JSON schema. You should provide a dataInputSchema because the system verifies the input before it executes any workflow states.
You can define a dataInputSchema as follows:
"dataInputSchema": {
"schema": "URL_to_json_schema",
"failOnValidationErrors": false
}
The schema property uses a URI to specify the path to the JSON schema that validates the workflow data input. You can use a classpath URI, a file path, or an HTTP URL. If you specify a classpath URI, place the JSON schema file in the project resources or another directory in the classpath.
The failOnValidationErrors parameter is optional and controls how the system handles invalid input data. If you do not specify this parameter or set it to true, the system throws an exception and stops execution. If you set it to false, the system continues execution and logs validation errors at the warning (WARN) level.
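For illustration, the schema URL might point to a standard JSON Schema document such as the following sketch, which requires a string property named name; the property names are hypothetical:

{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "type": "object",
  "properties": {
    "name": {
      "type": "string"
    }
  },
  "required": ["name"]
}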
6.5.2. Output schema definition
Output schema definition is applied after workflow execution to verify that the output model has the expected format. It is also useful for Swagger generation purposes.
Similar to the input schema definition, you must specify the URL to the JSON schema by using outputSchema, as follows:
Example of outputSchema definition
"extensions" : [ {
"extensionid": "workflow-output-schema",
"outputSchema": {
"schema" : "URL_to_json_schema",
"failOnValidationErrors": false
}
} ]
The same rules described for dataInputSchema apply to schema and failOnValidationErrors. The only difference is that the latter flag is applied after workflow execution.
6.6. Custom functions
OpenShift Serverless Logic supports the custom function type, which enables implementations to extend the function definition capability. In combination with the operation string, you can use a set of predefined function types.
Custom function types might not be portable across other runtime implementations.
6.6.1. Sysout custom function
You can use the sysout function for logging, as shown in the following example:
{
  "functions": [
    {
      "name": "logInfo",
      "type": "custom",
      "operation": "sysout:INFO"
    }
  ]
}
The string after the : is optional and is used to indicate the log level. The possible values are TRACE, DEBUG, INFO, WARN, and ERROR. If the value is not present, INFO is the default.
In the state definition, you can call the same sysout function as shown in the following example:
{
  "states": [
    {
      "name": "myState",
      "type": "operation",
      "actions": [
        {
          "name": "printAction",
          "functionRef": {
            "refName": "logInfo",
            "arguments": {
              "message": "\"Workflow model is \\(.)\""
            }
          }
        }
      ]
    }
  ]
}
In the earlier example, the message argument can be a JQ expression or a JQ string that uses interpolation.
6.6.2. Java custom function
OpenShift Serverless Logic supports calling Java functions within an Apache Maven project, in which you define your workflow service.
The following example shows the declaration of a java function:
Example of a java function declaration
{
  "functions": [
    {
      "name": "myFunction",
      "type": "custom",
      "operation": "service:java:com.acme.MyInterfaceOrClass::myMethod"
    }
  ]
}
functions.name: myFunction is the function name.
functions.type: custom is the function type.
functions.operation: service:java:com.acme.MyInterfaceOrClass::myMethod is the custom operation definition. In the custom operation definition, service is the reserved operation keyword, followed by the java keyword. com.acme.MyInterfaceOrClass is the fully qualified class name (FQCN) of the interface or implementation class, followed by the method name myMethod.
6.6.3. Knative custom function
OpenShift Serverless Logic implements a custom function through the knative-serving add-on to call Knative services. You define a static URI for a Knative service and use it to perform HTTP requests. The system queries the Knative service in the current cluster and translates it into a valid URL.
The following example uses a deployed Knative service:
$ kn service list
NAME                              URL                                                                       LATEST                                  AGE     CONDITIONS   READY   REASON
custom-function-knative-service   http://custom-function-knative-service.default.10.109.169.193.sslip.io   custom-function-knative-service-00001   3h16m   3 OK / 3     True
You can declare an OpenShift Serverless Logic custom function by using the Knative service name, as shown in the following example:
"functions": [
{
"name": "greet",
"type": "custom",
"operation": "knative:services.v1.serving.knative.dev/custom-function-knative-service?path=/plainJsonFunction",
}
]
function.name: greet is the function name.
function.type: custom is the function type.
function.operation: In operation, you set the coordinates of the Knative service.
This function sends a POST request. If you do not specify a path, OpenShift Serverless Logic uses the root path (/). You can also send GET requests by setting method=GET in the operation. In this case, the arguments are forwarded over a query string.
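As a sketch, a GET variant of the earlier function might look as follows; the function name and path are hypothetical:

"functions": [
  {
    "name": "getGreeting",
    "type": "custom",
    "operation": "knative:services.v1.serving.knative.dev/custom-function-knative-service?path=/plainJsonFunction&method=GET"
  }
]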
6.6.4. REST custom function
OpenShift Serverless Logic offers the REST custom type as a shortcut. When you use a custom REST function, you specify the HTTP URI to call and the HTTP method (GET, POST, PATCH, or PUT) in the function definition by using the operation string. When you call the function, you pass request arguments as you do with an OpenAPI function.
The following example shows the declaration of a rest function:
{
  "functions": [
    {
      "name": "multiplyAllByAndSum",
      "type": "custom",
      "operation": "rest:post:/numbers/{multiplier}/multiplyByAndSum"
    }
  ]
}
function.name: multiplyAllByAndSum is the function name.
function.type: custom is the function type.
function.operation: rest:post:/numbers/{multiplier}/multiplyByAndSum is the custom operation definition. In the custom operation definition, rest is the reserved operation keyword that indicates this is a REST call, post is the HTTP method, and /numbers/{multiplier}/multiplyByAndSum is the relative endpoint.
When you use relative endpoints, you must specify the host as a property. The format of the host property is kogito.sw.functions.<function_name>.host. In this example, kogito.sw.functions.multiplyAllByAndSum.host is the host property key. You can override the default port (80) if needed by specifying the kogito.sw.functions.multiplyAllByAndSum.port property.
This endpoint expects a JSON object as the request body, with a numbers field that contains an array of integers. It multiplies each item in the array by multiplier and returns the sum of all the multiplied items.
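In a Quarkus-based project, these properties typically go in the application.properties file. The following sketch uses placeholder host and port values:

kogito.sw.functions.multiplyAllByAndSum.host=localhost
kogito.sw.functions.multiplyAllByAndSum.port=8080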
6.7. Timeouts
OpenShift Serverless Logic defines several timeouts configurations that you can use to configure maximum times for the workflow execution in different scenarios. You can configure how long a workflow can wait for an event to arrive when it is in a given state or the maximum execution time for the workflow.
Regardless of where you define it, configure a timeout as a duration that starts when the referenced workflow element becomes active. Timeouts use the ISO 8601 date and time standard to specify durations and follow the format PnDTnHnMn.nS, where days equal exactly 24 hours. For example, PT15M sets a duration of 15 minutes, and P2DT3H4M sets a duration of 2 days, 3 hours, and 4 minutes.
Month-based timeouts, such as P2M (a period of two months), are not valid because the month duration can vary. In that case, use P60D instead.
6.7.1. Workflow timeout
To configure the maximum duration for a workflow before cancellation, define workflow timeouts. When the timeout expires, the system cancels the workflow, marks it as finished, and the workflow instance is no longer accessible through a GET request. As a result, the workflow behaves as if the interrupt property were set to true by default.
You can define workflow timeouts by using the top-level timeouts property. You can specify this property in two formats: string or object.
- You can use the string type to provide a URI that points to a JSON or YAML file that contains the workflow timeout definitions.
- You can use the object type to define the timeout settings inline within the workflow.
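When you use the string format, the timeouts property might reference a reusable file, following the same URI convention as the errors property; the file path is hypothetical:

{
  "timeouts": "file://documents/reusable/timeouts.json"
}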
For example, to cancel the workflow after an hour of execution, use the following configuration:
{
  "id": "workflow_timeouts",
  "version": "1.0",
  "name": "Workflow Timeouts",
  "description": "Simple workflow to show the workflowExecTimeout working",
  "start": "PrintStartMessage",
  "timeouts": {
    "workflowExecTimeout": "PT1H"
  }
  ...
}
6.7.2. Event timeout
When you define a state in a workflow, use the timeouts property to set the maximum time allowed to complete that state. If the state exceeds this time, the system marks it as timed out and continues execution from that state. The execution flow depends on the state type. For example, the workflow might move to the next state.
Event-based states can use the eventTimeout sub-property to configure the maximum time to wait for an event to arrive. This is the only timeout property supported in the current implementation, and it applies to the Callback, Switch, and Event states.
6.7.3. Callback state timeout
You can use the Callback state when you need to run an action that calls an external service and wait for an asynchronous response in the form of an event.
After the workflow consumes the response event, it continues execution and typically moves to the next state defined in the transition property.
Because the Callback state halts execution until the event arrives, you can configure an eventTimeout. If the event does not arrive within the configured duration, the workflow continues execution and moves to the next state defined in the transition property.
The following example defines a Callback state with a timeout in JSON format:
{
  "name": "CallbackState",
  "type": "callback",
  "action": {
    "name": "callbackAction",
    "functionRef": {
      "refName": "callbackFunction",
      "arguments": {
        "input": "${\"callback-state-timeouts: \" + $WORKFLOW.instanceId + \" has executed the callbackFunction.\"}"
      }
    }
  },
  "eventRef": "callbackEvent",
  "transition": "CheckEventArrival",
  "onErrors": [
    {
      "errorRef": "callbackError",
      "transition": "FinalizeWithError"
    }
  ],
  "timeouts": {
    "eventTimeout": "PT30S"
  }
}
6.7.4. Switch state timeout
You can use the Switch state when you need to take an action depending on certain conditions. You can base these conditions on the workflow data (dataConditions) or on events (eventConditions).
When you use the eventConditions, the workflow execution waits to make a decision until any of the configured events arrives and matches a condition. In this situation, you can configure an event timeout, which controls the maximum time to wait for an event to match the conditions.
If this time expires, the workflow moves to the state defined in the defaultCondition property.
The following example defines a Switch state with a timeout:
{
  "name": "ChooseOnEvent",
  "type": "switch",
  "eventConditions": [
    {
      "eventRef": "visaApprovedEvent",
      "transition": "ApprovedVisa"
    },
    {
      "eventRef": "visaDeniedEvent",
      "transition": "DeniedVisa"
    }
  ],
  "defaultCondition": {
    "transition": "HandleNoVisaDecision"
  },
  "timeouts": {
    "eventTimeout": "PT5S"
  }
}
6.7.5. Event state timeout
You can use the Event state to wait for one or more events, run a set of actions, and then continue execution. If the Event state serves as the starting state, the workflow creates a new instance.
You can use the timeouts property in this state to set the maximum time the workflow waits for the configured events to arrive.
If the workflow exceeds this time and does not receive the events, it moves to the next state defined in the transition property. If the state is an end state, the workflow ends the instance without running any actions.
The following example defines an Event state with a timeout:
{
  "name": "WaitForEvent",
  "type": "event",
  "onEvents": [
    {
      "eventRefs": ["event1"],
      "eventDataFilter": {
        "data": "${ \"The event1 was received.\" }",
        "toStateData": "${ .exitMessage }"
      },
      "actions": [
        {
          "name": "printAfterEvent1",
          "functionRef": {
            "refName": "systemOut",
            "arguments": {
              "message": "${\"event-state-timeouts: \" + $WORKFLOW.instanceId + \" executing actions for event1.\"}"
            }
          }
        }
      ]
    },
    {
      "eventRefs": ["event2"],
      "eventDataFilter": {
        "data": "${ \"The event2 was received.\" }",
        "toStateData": "${ .exitMessage }"
      },
      "actions": [
        {
          "name": "printAfterEvent2",
          "functionRef": {
            "refName": "systemOut",
            "arguments": {
              "message": "${\"event-state-timeouts: \" + $WORKFLOW.instanceId + \" executing actions for event2.\"}"
            }
          }
        }
      ]
    }
  ],
  "timeouts": {
    "eventTimeout": "PT30S"
  },
  "transition": "PrintExitMessage"
}
6.8. Parallelism
OpenShift Serverless Logic serializes the execution of parallel tasks. The term parallel does not imply simultaneous execution; it means that branches have no logical dependency on each other. An inactive branch can start or resume a task without waiting for an active branch to complete if the active branch suspends its execution, for example, while waiting for an event.
A parallel state splits the current workflow execution path into many branches, each with its own path. The workflow executes these paths independently and then joins them back into a single path based on the completionType parameter.
The following example shows a parallel workflow in JSON format:
{
  "name": "ParallelExec",
  "type": "parallel",
  "completionType": "allOf",
  "branches": [
    {
      "name": "Branch1",
      "actions": [
        {
          "functionRef": {
            "refName": "functionNameOne",
            "arguments": {
              "order": "${ .someParam }"
            }
          }
        }
      ]
    },
    {
      "name": "Branch2",
      "actions": [
        {
          "functionRef": {
            "refName": "functionNameTwo",
            "arguments": {
              "order": "${ .someParam }"
            }
          }
        }
      ]
    }
  ],
  "end": true
}
The following example shows a parallel workflow in YAML format:
name: ParallelExec
type: parallel
completionType: allOf
branches:
  - name: Branch1
    actions:
      - functionRef:
          refName: functionNameOne
          arguments:
            order: "${ .someParam }"
  - name: Branch2
    actions:
      - functionRef:
          refName: functionNameTwo
          arguments:
            order: "${ .someParam }"
end: true
In the earlier examples, the allOf value specifies that all branches must complete execution before the state can transition or end. allOf is the default value if this parameter is not set.