Chapter 4. Supporting services
4.1. Job service
The Job service schedules and executes tasks in a cloud environment. Independent services implement these tasks, which can be initiated through any of the supported interaction modes, including HTTP calls or Knative Events delivery.
In OpenShift Serverless Logic, the Job service is responsible for controlling the execution of time-triggered actions. Therefore, all the time-based states that you can use in a workflow are handled by the interaction between the workflow and the Job service.
For example, every time the workflow execution reaches a state with a configured timeout, a corresponding job is created in the Job service, and when the timeout is met, an HTTP callback is executed to notify the workflow.
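As a minimal sketch of this interaction (the workflow, state, and event names are illustrative and not taken from this document), an event state with an eventTimeout causes the workflow runtime to register a timer with the Job service when an instance enters the state:

```yaml
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlow
metadata:
  name: timeout-example        # illustrative workflow name
spec:
  flow:
    start: WaitForApproval
    events:
      - name: approvalEvent    # illustrative CloudEvent the state waits for
        source: ''
        type: approval
    states:
      - name: WaitForApproval
        type: event
        onEvents:
          - eventRefs:
              - approvalEvent
        timeouts:
          eventTimeout: PT30S  # a job for this 30s timer is created in the Job service;
                               # on expiry, an HTTP callback notifies the workflow
        end: true
```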
The main goal of the Job service is to manage active jobs, such as scheduled jobs that need to be executed. When a job reaches its final state, the Job service removes it. To retain job information in a permanent repository, the Job service produces status change events that can be recorded by an external service, such as the Data Index service.
You do not need to manually install or configure the Job service if you are using the OpenShift Serverless Operator to deploy workflows. The Operator handles these tasks automatically and manages all necessary configurations for each workflow to connect with it.
4.1.1. Job service leader election process
The Job service operates as a singleton service, meaning only one active instance can schedule and execute jobs.
To prevent conflicts when the service is deployed in the cloud, where multiple instances might be running, the Job service supports a leader election process. Only the instance that is elected as the leader manages external communication to receive and schedule jobs.
Non-leader instances remain inactive in a standby state but continue attempting to become the leader through the election process. When a new instance starts, it does not immediately assume leadership. Instead, it enters the leader election process to determine if it can take over the leader role.
If the current leader becomes unresponsive or if it is shut down, another running instance takes over as the leader.
This leader election mechanism uses the underlying persistence backend, which is currently supported only in the PostgreSQL implementation.
4.2. Data Index service
The Data Index service is a dedicated supporting service that stores the data related to the workflow instances and their associated jobs. This service provides a GraphQL endpoint allowing users to query that data.
The Data Index service processes data received through events, which can originate from any workflow or directly from the Job service.
Data Index supports Apache Kafka or Knative Eventing to consume CloudEvents messages from workflows. It indexes and stores this event data in a database, making it accessible through GraphQL. These events provide detailed information about the workflow execution. The Data Index service is central to OpenShift Serverless Logic search, insights, and management capabilities.
The key features of the Data Index service are as follows:
- A flexible data structure
- A distributable, cloud-ready format
- Message-based communication with workflows via Apache Kafka, Knative, and CloudEvents
- A powerful GraphQL-based querying API
When you are using the OpenShift Serverless Operator to deploy workflows, you do not need to manually install or configure the Data Index service. The Operator automatically manages all the necessary configurations for each workflow to connect with it.
4.2.1. GraphQL queries for workflow instances and jobs
To retrieve data about workflow instances and jobs, you can use GraphQL queries.
4.2.1.1. Retrieve data from workflow instances
You can retrieve information about a specific workflow instance by using the following query example:
{
  ProcessInstances {
    id
    processId
    state
    parentProcessInstanceId
    rootProcessId
    rootProcessInstanceId
    variables
    nodes {
      id
      name
      type
    }
  }
}
4.2.1.2. Retrieve data from jobs
You can retrieve data from a specific job instance by using the following query example:
{
  Jobs {
    id
    status
    priority
    processId
    processInstanceId
    executionCounter
  }
}
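The where parameter also applies to the Jobs query. For example, a query along the following lines (the instance ID is illustrative, and this assumes Jobs accepts the same style of where argument as ProcessInstances) narrows the results to the jobs of one workflow instance:

```graphql
{
  Jobs(where: {processInstanceId: {equal: "d43a56b6-fb11-4066-b689-d70386b9a375"}}) {
    id
    status
    priority
    processId
    processInstanceId
    executionCounter
  }
}
```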
4.2.1.3. Filter query results by using the where parameter
You can filter query results by using the where parameter, allowing multiple combinations based on workflow attributes.
Example query to filter by state
{
  ProcessInstances(where: {state: {equal: ACTIVE}}) {
    id
    processId
    processName
    start
    state
    variables
  }
}
Example query to filter by ID
{
  ProcessInstances(where: {id: {equal: "d43a56b6-fb11-4066-b689-d70386b9a375"}}) {
    id
    processId
    processName
    start
    state
    variables
  }
}
By default, filters are combined by using the AND operator. You can modify this behavior by combining filters with the AND or OR operators.
Example query to combine filters with the OR operator
{
  ProcessInstances(where: {or: {state: {equal: ACTIVE}, rootProcessId: {isNull: false}}}) {
    id
    processId
    processName
    start
    end
    state
  }
}
Example query to combine filters with the AND and OR operators
{
  ProcessInstances(where: {and: {processId: {equal: "travels"}, or: {state: {equal: ACTIVE}, rootProcessId: {isNull: false}}}}) {
    id
    processId
    processName
    start
    end
    state
  }
}
Depending on the attribute type, you can use the following available operators:

Attribute type | Available operators
---|---
String array | contains, containsAll, containsAny, isNull
String | in, like, isNull, equal
ID | in, equal, isNull
Boolean | isNull, equal
Numeric | in, isNull, equal, greaterThan, greaterThanEqual, lessThan, lessThanEqual, between
Date | isNull, equal, greaterThan, greaterThanEqual, lessThan, lessThanEqual, between
4.2.1.4. Sort query results by using the orderBy parameter
You can sort query results based on workflow attributes by using the orderBy parameter. You can also specify the sorting direction in ascending (ASC) or descending (DESC) order. Multiple attributes are applied in the order you specified.
Example query to sort by the start time in an ASC order
{
  ProcessInstances(where: {state: {equal: ACTIVE}}, orderBy: {start: ASC}) {
    id
    processId
    processName
    start
    end
    state
  }
}
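Because multiple attributes are applied in the order you specify, a query along the following lines (the attribute combination is illustrative) sorts first by process name and breaks ties by the most recent start time:

```graphql
{
  ProcessInstances(orderBy: {processName: ASC, start: DESC}) {
    id
    processId
    processName
    start
    end
    state
  }
}
```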
4.2.1.5. Limit the number of results by using the pagination parameter
You can control the number of returned results and specify an offset by using the pagination parameter.
Example query to limit results to 10, starting from offset 0
{
  ProcessInstances(where: {state: {equal: ACTIVE}}, orderBy: {start: ASC}, pagination: {limit: 10, offset: 0}) {
    id
    processId
    processName
    start
    end
    state
  }
}
4.3. Managing supporting services
This section provides an overview of the supporting services essential for OpenShift Serverless Logic. It specifically focuses on configuring and deploying the Data Index service and the Job Service by using the OpenShift Serverless Logic Operator.
In a typical OpenShift Serverless Logic installation, you must deploy both services to ensure successful workflow execution. The Data Index service allows for efficient data management, while the Job Service ensures reliable job handling.
4.3.1. Supporting services and workflow integration
When you deploy a supporting service in a given namespace, you can choose between an enabled or disabled deployment. An enabled deployment signals the OpenShift Serverless Logic Operator to automatically intercept workflow deployments using the preview or gitops profile within the namespace and configure them to connect with the service.
For example, when the Data Index service is enabled, workflows are automatically configured to send status change events to it. Similarly, enabling the Job Service ensures that a job is created whenever a workflow requires a timeout. The OpenShift Serverless Logic Operator also configures the Job Service to send events to the Data Index service, facilitating seamless integration between the services.
The OpenShift Serverless Logic Operator does not just deploy supporting services, it also manages other necessary configurations to ensure successful workflow execution. All these configurations are handled automatically. You only need to provide the supporting services configuration in the SonataFlowPlatform CR.
Deploying only one of the supporting services or using a disabled deployment are advanced use cases. In a standard installation, you must enable both services to ensure smooth workflow execution.
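For reference, a workflow opts into this automatic wiring through its profile annotation. The following sketch is illustrative, and the flow definition itself is omitted:

```yaml
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlow
metadata:
  name: example-workflow             # illustrative name
  namespace: example-namespace
  annotations:
    sonataflow.org/profile: preview  # 'preview' or 'gitops'; workflows with these
                                     # profiles are wired to the supporting services
spec:
  flow:
    # workflow states and events go here
```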
4.3.2. Supporting services deployment with the SonataFlowPlatform CR
To deploy supporting services, configure the dataIndex and jobService subfields within the spec.services section of the SonataFlowPlatform custom resource (CR). This configuration instructs the OpenShift Serverless Logic Operator to deploy each service when the SonataFlowPlatform CR is applied.

Each configuration of a service is handled independently, allowing you to customize these settings alongside other configurations in the SonataFlowPlatform CR.
See the following scaffold example configuration for deploying supporting services:
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: sonataflow-platform-example
  namespace: example-namespace
spec:
  services:
    dataIndex: 1
      enabled: true 2
      # Specific configurations for the Data Index Service
      # might be included here
    jobService: 3
      enabled: true 4
      # Specific configurations for the Job Service
      # might be included here
1. Data Index service configuration field.
2. Setting enabled: true deploys the Data Index service. If set to false or omitted, the deployment will be disabled. The default value is false.
3. Job Service configuration field.
4. Setting enabled: true deploys the Job Service. If set to false or omitted, the deployment will be disabled. The default value is false.
4.3.3. Supporting services scope
The SonataFlowPlatform custom resource (CR) enables the deployment of supporting services within a specific namespace. This means all automatically configured supporting services and workflow communications are restricted to the namespace of the deployed platform.
This feature is particularly useful when separate instances of supporting services are required for different sets of workflows. For example, you can deploy an application in isolation with its workflows and supporting services, ensuring they remain independent from other deployments.
4.3.4. Supporting services persistence configurations
The persistence configuration for supporting services in OpenShift Serverless Logic can be either ephemeral or PostgreSQL, depending on the needs of your environment. Ephemeral persistence is ideal for development and testing, while PostgreSQL persistence is recommended for production environments.
4.3.4.1. Ephemeral persistence configuration
The ephemeral persistence uses an embedded PostgreSQL database that is dedicated to each service. The OpenShift Serverless Logic Operator recreates this database with every service restart, making it suitable only for development and testing purposes. You do not need any additional configuration other than the following SonataFlowPlatform CR:
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: sonataflow-platform-example
  namespace: example-namespace
spec:
  services:
    dataIndex:
      enabled: true
      # Specific configurations for the Data Index Service
      # might be included here
    jobService:
      enabled: true
      # Specific configurations for the Job Service
      # might be included here
4.3.4.2. PostgreSQL persistence configuration
For PostgreSQL persistence, you must set up a PostgreSQL server instance on your cluster. The administration of this instance remains independent of the OpenShift Serverless Logic Operator control. To connect a supporting service with the PostgreSQL server, you must configure the appropriate database connection parameters.
You can configure PostgreSQL persistence in the SonataFlowPlatform CR by using the following example:
Example of PostgreSQL persistence configuration
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: sonataflow-platform-example
  namespace: example-namespace
spec:
  services:
    dataIndex:
      enabled: true
      persistence:
        postgresql:
          serviceRef:
            name: postgres-example 1
            namespace: postgres-example-namespace 2
            databaseName: example-database 3
            databaseSchema: data-index-schema 4
            port: 1234 5
          secretRef:
            name: postgres-secrets-example 6
            userKey: POSTGRESQL_USER 7
            passwordKey: POSTGRESQL_PASSWORD 8
    jobService:
      enabled: true
      persistence:
        postgresql:
          # Specific database configuration for the Job Service
          # might be included here.
1. Name of the service to connect with the PostgreSQL database server.
2. Optional: Defines the namespace of the PostgreSQL Service. Defaults to the SonataFlowPlatform namespace.
3. Defines the name of the PostgreSQL database for storing supporting service data.
4. Optional: Specifies the schema for storing supporting service data. The default value is the SonataFlowPlatform name, suffixed with -data-index-service or -jobs-service. For example, sonataflow-platform-example-data-index-service.
5. Optional: Port number to connect with the PostgreSQL Service. The default value is 5432.
6. Defines the name of the secret containing the username and password for database access.
7. Defines the name of the key in the secret that contains the username to connect with the database.
8. Defines the name of the key in the secret that contains the password to connect with the database.
You can configure each service’s persistence independently by using the respective persistence field.
Create the secrets to access PostgreSQL by running the following command:
$ oc create secret generic <postgresql_secret_name> \
    --from-literal=POSTGRESQL_USER=<user> \
    --from-literal=POSTGRESQL_PASSWORD=<password> \
    -n <namespace>
4.3.4.3. Common PostgreSQL persistence configuration
The OpenShift Serverless Logic Operator automatically connects supporting services to the common PostgreSQL server configured in the spec.persistence field.
The following precedence rules apply:
- If you configure a specific persistence for a supporting service, for example, services.dataIndex.persistence, the service uses that configuration.
- If you do not configure persistence for a service, the system uses the common persistence configuration from the current platform.
When using a common PostgreSQL configuration, each service schema is automatically set to the SonataFlowPlatform name, suffixed with -data-index-service or -jobs-service, for example, sonataflow-platform-example-data-index-service.
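As a sketch, a platform-level configuration along the following lines (assuming spec.persistence accepts the same PostgreSQL block as the per-service persistence field shown earlier) applies one PostgreSQL server to both services:

```yaml
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: sonataflow-platform-example
  namespace: example-namespace
spec:
  persistence:                # common configuration, used by any service that
    postgresql:               # does not define its own persistence field
      serviceRef:
        name: postgres-example
        namespace: postgres-example-namespace
        databaseName: example-database
      secretRef:
        name: postgres-secrets-example
        userKey: POSTGRESQL_USER
        passwordKey: POSTGRESQL_PASSWORD
  services:
    dataIndex:
      enabled: true           # schema defaults to sonataflow-platform-example-data-index-service
    jobService:
      enabled: true           # schema defaults to sonataflow-platform-example-jobs-service
```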
4.3.5. Supporting services eventing system configurations
For an OpenShift Serverless Logic installation, the following types of events are generated:
- Outgoing and incoming events related to workflow business logic.
- Events sent from workflows to the Data Index and Job Service.
- Events sent from the Job Service to the Data Index Service.
The OpenShift Serverless Logic Operator leverages the Knative Eventing system to manage all event communication between these services, ensuring efficient and reliable event handling.
4.3.5.1. Platform-scoped eventing system configuration
To configure a platform-scoped eventing system, you can use the spec.eventing.broker.ref field in the SonataFlowPlatform CR to reference a Knative Eventing Broker. This configuration instructs the OpenShift Serverless Logic Operator to automatically link the supporting services to produce and consume events by using the specified broker.
A workflow deployed in the same namespace with the preview or gitops profile and without a custom eventing system configuration automatically links to the specified broker.
In production environments, use a production-ready broker, such as the Knative Kafka Broker, for enhanced scalability and reliability.
The following example displays how to configure the SonataFlowPlatform CR for a platform-scoped eventing system:
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: sonataflow-platform-example
  namespace: example-namespace
spec:
  eventing:
    broker:
      ref:
        name: example-broker 1
        namespace: example-broker-namespace 2
        apiVersion: eventing.knative.dev/v1
        kind: Broker

1. Specifies the Knative Eventing Broker that the supporting services and workflows use to produce and consume events.
2. Optional: Defines the namespace of the Knative Eventing Broker. If you do not specify a value, the parameter defaults to the SonataFlowPlatform namespace.
4.3.5.2. Service-scoped eventing system configuration
A service-scoped eventing system configuration allows for fine-grained control over the eventing system, specifically for the Data Index or the Job Service.
For an OpenShift Serverless Logic installation, consider using a platform-scoped eventing system configuration. The service-scoped configuration is intended for advanced use cases only.
4.3.5.3. Data Index eventing system configuration
To configure a service-scoped eventing system for the Data Index, you must use the spec.services.dataIndex.source.ref field in the SonataFlowPlatform CR to refer to a specific Knative Eventing Broker. This configuration instructs the OpenShift Serverless Logic Operator to automatically link the Data Index to consume SonataFlow system events from that Broker.
In production environments, use a production-ready broker, such as the Knative Kafka Broker, for enhanced scalability and reliability.
The following example displays the Data Index eventing system configuration:
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: sonataflow-platform-example
spec:
  services:
    dataIndex:
      source:
        ref:
          name: data-index-source-example-broker 1
          namespace: data-index-source-example-broker-namespace 2
          apiVersion: eventing.knative.dev/v1
          kind: Broker
1. Specifies the Knative Eventing Broker from which the Data Index consumes events.
2. Optional: Defines the namespace of the Knative Eventing Broker. If you do not specify a value, the parameter defaults to the SonataFlowPlatform namespace. Consider creating the broker in the same namespace as the SonataFlowPlatform CR.
4.3.5.4. Job Service eventing system configuration
To configure a service-scoped eventing system for the Job Service, you must use the spec.services.jobService.source.ref and spec.services.jobService.sink.ref fields in the SonataFlowPlatform CR. These fields instruct the OpenShift Serverless Logic Operator to automatically link the Job Service to consume and produce SonataFlow system events, based on the provided configuration.
In production environments, use a production-ready broker, such as the Knative Kafka Broker, for enhanced scalability and reliability.
The following example displays the Job Service eventing system configuration:
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: sonataflow-platform-example
spec:
  services:
    jobService:
      source:
        ref:
          name: jobs-service-source-example-broker 1
          namespace: jobs-service-source-example-broker-namespace 2
          apiVersion: eventing.knative.dev/v1
          kind: Broker
      sink:
        ref:
          name: jobs-service-sink-example-broker 3
          namespace: jobs-service-sink-example-broker-namespace 4
          apiVersion: eventing.knative.dev/v1
          kind: Broker
1. Specifies the Knative Eventing Broker from which the Job Service consumes events.
2. Optional: Defines the namespace of the Knative Eventing Broker. If you do not specify a value, the parameter defaults to the SonataFlowPlatform namespace. Consider creating the Broker in the same namespace as the SonataFlowPlatform CR.
3. Specifies the Knative Eventing Broker on which the Job Service produces events.
4. Optional: Defines the namespace of the Knative Eventing Broker. If you do not specify a value, the parameter defaults to the SonataFlowPlatform namespace. Consider creating the Broker in the same namespace as the SonataFlowPlatform CR.
4.3.5.5. Cluster-scoped eventing system configuration for supporting services
When you deploy cluster-scoped supporting services, the supporting services automatically link to the Broker specified in the SonataFlowPlatform CR, which is referenced by the SonataFlowClusterPlatform CR.
4.3.5.6. Eventing system configuration precedence rules for supporting services
The OpenShift Serverless Logic Operator follows a defined order of precedence to configure the eventing system for a supporting service.
Eventing system configuration precedence rules are as follows:
- If the supporting service has its own eventing system configuration, using either the Data Index eventing system or the Job Service eventing system configuration, then the supporting service configuration takes precedence.
- If the SonataFlowPlatform CR enclosing the supporting service is configured with a platform-scoped eventing system, that configuration takes precedence.
- If the current cluster is configured with a cluster-scoped eventing system, that configuration takes precedence.
- If none of the previous configurations exist, the supporting service delivers events by direct HTTP calls.
4.3.5.7. Eventing system linking configuration
The OpenShift Serverless Logic Operator automatically creates Knative Eventing objects, such as SinkBindings and triggers, to link supporting services with the eventing system. These objects enable the production and consumption of events by the supporting services.
The following example displays a SonataFlowPlatform CR for which these Knative Eventing objects are created:
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: sonataflow-platform-example
  namespace: example-namespace
spec:
  eventing:
    broker:
      ref:
        name: example-broker 1
        apiVersion: eventing.knative.dev/v1
        kind: Broker
  services:
    dataIndex: 2
      enabled: true
    jobService: 3
      enabled: true
The following example displays how to configure a Knative Kafka Broker for use with the SonataFlowPlatform CR:
Example of a Knative Kafka Broker used by the SonataFlowPlatform CR
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
annotations:
eventing.knative.dev/broker.class: Kafka 1
name: example-broker
namespace: example-namespace
spec:
config:
apiVersion: v1
kind: ConfigMap
name: kafka-broker-config
namespace: knative-eventing
1. Use the Kafka class to create a Kafka Knative Broker.
The following command displays the list of triggers set up for the Data Index and Job Service events, showing which services are subscribed to the events:
$ oc get triggers -n example-namespace
Example output
NAME                                                               BROKER           SINK                                                      AGE    CONDITIONS   READY   REASON
data-index-jobs-fbf285df-c0a4-4545-b77a-c232ec2890e2               example-broker   service:sonataflow-platform-example-data-index-service    106s   7 OK / 7     True    -
data-index-process-definition-e48b4e4bf73e22b90ecf7e093ff6b1eaf    example-broker   service:sonataflow-platform-example-data-index-service    106s   7 OK / 7     True    -
data-index-process-error-fbf285df-c0a4-4545-b77a-c232ec2890e2      example-broker   service:sonataflow-platform-example-data-index-service    106s   7 OK / 7     True    -
data-index-process-instance-mul35f055c67a626f51bb8d2752606a6b54    example-broker   service:sonataflow-platform-example-data-index-service    106s   7 OK / 7     True    -
data-index-process-node-fbf285df-c0a4-4545-b77a-c232ec2890e2       example-broker   service:sonataflow-platform-example-data-index-service    106s   7 OK / 7     True    -
data-index-process-state-fbf285df-c0a4-4545-b77a-c232ec2890e2      example-broker   service:sonataflow-platform-example-data-index-service    106s   7 OK / 7     True    -
data-index-process-variable-ac727d6051750888dedb72f697737c0dfbf    example-broker   service:sonataflow-platform-example-data-index-service    106s   7 OK / 7     True    -
jobs-service-create-job-fbf285df-c0a4-4545-b77a-c232ec2890e2       example-broker   service:sonataflow-platform-example-jobs-service          106s   7 OK / 7     True    -
jobs-service-delete-job-fbf285df-c0a4-4545-b77a-c232ec2890e2       example-broker   service:sonataflow-platform-example-jobs-service          106s   7 OK / 7     True    -
To see the SinkBinding resource for the Job Service, use the following command:
$ oc get sources -n example-namespace
Example output
NAME                                          TYPE          RESOURCE                           SINK                    READY
sonataflow-platform-example-jobs-service-sb   SinkBinding   sinkbindings.sources.knative.dev   broker:example-broker   True
4.3.6. Advanced supporting services configurations
In scenarios where you must apply advanced configurations for supporting services, use the podTemplate field in the SonataFlowPlatform custom resource (CR). This field allows you to customize the service pod deployment by specifying configurations such as the number of replicas, environment variables, container images, and initialization options.
You can configure advanced settings for the service by using the following example:
Advanced configurations example for the Data Index service
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: sonataflow-platform-example
  namespace: example-namespace
spec:
  services:
    # This can be either 'dataIndex' or 'jobService'
    dataIndex:
      enabled: true
      podTemplate:
        replicas: 2 1
        container: 2
          env: 3
            - name: <any_advanced_config_property>
              value: <any_value>
          image: 4
        initContainers: 5
You can configure either the dataIndex or the jobService field under services, depending on your requirements. The rest of the configuration remains the same.
1. Defines the number of replicas. The default value is 1. In the case of jobService, this value is always overridden to 1 because it operates as a singleton service.
2. Holds specific configurations for the container running the service.
3. Allows you to fine-tune service properties by specifying environment variables.
4. Configures the container image for the service, useful if you need to update or customize the image.
5. Configures init containers for the pod, useful for setting up prerequisites before the main container starts.
The podTemplate field provides flexibility for tailoring the deployment of each supporting service. It follows the standard PodSpec API, meaning the same API validation rules apply to these fields.
4.3.7. Cluster-scoped supporting services
You can define a cluster-wide set of supporting services that workflows across different namespaces can consume by using the SonataFlowClusterPlatform custom resource (CR). By referencing an existing namespace-specific SonataFlowPlatform CR, you can extend the use of these services cluster-wide.

You can use the following example of a basic configuration that enables workflows deployed in any namespace to use supporting services deployed in a specific namespace, such as example-namespace:
Example of a SonataFlowClusterPlatform CR
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowClusterPlatform
metadata:
  name: cluster-platform
spec:
  platformRef:
    name: sonataflow-platform-example 1
    namespace: example-namespace 2

1. Name of the already installed SonataFlowPlatform CR.
2. Namespace of the already installed SonataFlowPlatform CR.
You can override these cluster-wide services within any namespace by configuring that namespace in the SonataFlowPlatform.spec.services field.
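For example, a namespace that needs its own Data Index instead of the cluster-wide one can declare the service locally. The names below are illustrative:

```yaml
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: team-a-platform        # illustrative name
  namespace: team-a-namespace  # illustrative namespace
spec:
  services:
    dataIndex:
      enabled: true            # workflows in this namespace now use this local
                               # Data Index rather than the cluster-scoped one
```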