About OpenShift Serverless
Introduction to OpenShift Serverless
Abstract
Chapter 1. Release notes
Release notes contain information about new features, deprecated features, breaking changes, and known issues. The following release notes apply to the most recent OpenShift Serverless releases on OpenShift Container Platform.
1.1. About API versions
API versions are an important measure of the development status of certain features and custom resources in OpenShift Serverless. Creating resources on your cluster that do not use the correct API version can cause issues in your deployment.
The OpenShift Serverless Operator automatically upgrades older resources that use deprecated versions of APIs to use the latest version. For example, if you have created resources on your cluster that use older versions of the ApiServerSource API, such as v1beta1, the OpenShift Serverless Operator automatically updates these resources to use the v1 version of the API when this is available and the v1beta1 version is deprecated.
After they have been deprecated, older versions of APIs might be removed in any upcoming release. Using deprecated versions of APIs does not cause resources to fail. However, if you try to use a version of an API that has been removed, it will cause resources to fail. Ensure that your manifests are updated to use the latest version to avoid issues.
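For illustration, the following sketch shows an `ApiServerSource` manifest pinned to the current `v1` API version rather than a deprecated one. The resource name, namespace, service account, and sink are hypothetical placeholders:

```yaml
# Hypothetical example: an ApiServerSource that uses the current v1 API
# version instead of the deprecated v1beta1 version. All names are placeholders.
apiVersion: sources.knative.dev/v1   # use v1, not the deprecated v1beta1
kind: ApiServerSource
metadata:
  name: example-apiserversource
  namespace: example-namespace
spec:
  serviceAccountName: events-sa
  mode: Resource
  resources:
    - apiVersion: v1
      kind: Event
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display
```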
1.2. Generally Available and Technology Preview features
Features that are Generally Available (GA) are fully supported and are suitable for production use. Technology Preview (TP) features are experimental features and are not intended for production use. See the Technology Preview scope of support on the Red Hat Customer Portal for more information about TP features.
The following table provides information about which OpenShift Serverless features are GA and which are TP:
| Feature | 1.36 |
|---|---|
| Authorization policies for Knative Eventing | TP |
| Service Mesh 3.x integration | TP |
| Automatic EventType registration | TP |
| Eventing Transport encryption | GA |
| Serving Transport encryption | TP |
| OpenShift Serverless Logic | GA |
| ARM64 support | GA |
| Custom Metrics Autoscaler Operator (KEDA) | TP |
| kn event plugin | GA |
| Pipelines-as-code | TP |
| Advanced trigger filters | GA |
| Go function using S2I builder | GA |
| Installing and using Serverless on single-node OpenShift | GA |
| Using Service Mesh to isolate network traffic with Serverless | TP |
| Quarkus functions | GA |
| Node.js functions | GA |
| TypeScript functions | GA |
| Python functions | GA |
| Service Mesh mTLS | GA |
| HTTPS redirection | GA |
| Kafka broker | GA |
| Kafka sink | GA |
| Init containers support for Knative services | GA |
| PVC support for Knative services | GA |
1.3. Deprecated and removed features
Some features that were Generally Available (GA) or a Technology Preview (TP) in previous releases have been deprecated or removed. Deprecated functionality is still included in OpenShift Serverless and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments.
For the most recent list of major functionality deprecated and removed within OpenShift Serverless, refer to the following table:
| Feature | 1.36 |
|---|---|
| Knative client | Deprecated |
| EventTypes | Deprecated |
| Red Hat OpenShift Service Mesh with Serverless when Kourier is enabled | Deprecated |
| Namespace-scoped Kafka brokers | Deprecated |
| Serving and Eventing | Removed |
1.4. Red Hat OpenShift Serverless 1.36.1
OpenShift Serverless 1.36.1 is now available. This release of OpenShift Serverless addresses identified Common Vulnerabilities and Exposures (CVEs) to enhance security and reliability. Fixed issues and known issues that pertain to OpenShift Serverless on OpenShift Container Platform are included in the following notes:
1.4.1. Fixed issues
- Before this update, the OpenShift Serverless Functions client failed to build remotely with Red Hat OpenShift Pipelines version 1.19, causing pipeline runs to remain in the `Pending` state on the `fetch-sources` task and report admission webhook errors. With this release, the issue is resolved, and remote builds complete successfully.
1.4.2. Known issues
- Deploying a Quarkus function with the `kn func deploy --remote` command on an OpenShift Container Platform s390x cluster triggers a known issue that causes the build task to hang. As a result, the build process does not complete.
1.5. Red Hat OpenShift Serverless 1.36
OpenShift Serverless 1.36 is now available. New features, updates, fixed issues, and known issues that pertain to OpenShift Serverless on OpenShift Container Platform are included in the following notes:
1.5.1. New features
1.5.1.1. OpenShift Serverless Eventing
- OpenShift Serverless now uses Knative Eventing 1.16.
- OpenShift Serverless now uses Knative for Apache Kafka 1.16.
- `IntegrationSource` and `IntegrationSink` are now available as a Technology Preview. These are Knative Eventing custom resources that support selected Kamelets from the Apache Camel project. Kamelets enable you to connect to third-party systems for improved connectivity, acting as either sources (event producers) or sinks (event consumers).
- Knative Eventing can now automatically discover and register EventTypes based on the structure of incoming events. This feature simplifies the configuration and management of EventTypes, reducing the need for manual definitions. This feature is available as a Technology Preview.
- OpenShift Serverless Eventing introduces `EventTransform`, a new API resource that you can use to declaratively transform JSON events without writing custom code. With `EventTransform`, you can modify attributes, extract or reshape data, and streamline event flows across systems. Common use cases include event enrichment, format conversion, and request-response transformation. `EventTransform` integrates seamlessly with Knative sources, triggers, and brokers, enhancing interoperability in event-driven architectures. This feature is now available as a Technology Preview.

  `EventTransform` provides the following key features:

  - Define transformations declaratively using Kubernetes-native resources
  - Use JSONata expressions for advanced and flexible event data manipulation
  - Easily insert transformations at any point within event-driven workflows
  - Support for transforming both sink-bound and reply events for better routing control
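  As an illustration, a minimal `EventTransform` resource might look like the following sketch. It assumes the upstream `eventing.knative.dev/v1alpha1` API version and a `jsonata` expression field; the resource name and expression are hypothetical placeholders:

  ```yaml
  # Hypothetical sketch of an EventTransform resource that reshapes the
  # incoming JSON event payload with a JSONata expression.
  apiVersion: eventing.knative.dev/v1alpha1
  kind: EventTransform
  metadata:
    name: example-transform
  spec:
    jsonata:
      # Keep the id field and uppercase the name field of the event data.
      expression: |
        {
          "id": id,
          "name": $uppercase(name)
        }
  ```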
- The `sinks.knative.dev` API group has now been added to the `ClusterRoles` in Knative Eventing. Developers now have permissions to `get`, `list`, and `watch` resources in this API group, improving accessibility and integration with sink resources.
- Transport encryption for Knative Eventing is now available as a Generally Available (GA) feature.
- Knative Eventing now supports the ability to define authorization policies that restrict which entities can send events to Eventing custom resources. This enables greater control and security within event-driven architectures. This functionality is available as a Developer Preview.
- Knative Eventing catalog is now integrated into the Red Hat Developer Hub through the Event Catalog plugin for Backstage. This integration enables users to discover and explore Knative Eventing resources directly within the Red Hat Developer Hub interface. This functionality is available as a Developer Preview.
- The `KafkaSource` API has now been promoted to version `v1`, signaling its stability and readiness for production use.
- OpenShift Serverless now supports deployment on ARM architecture as a Generally Available (GA) feature.
- The `kn event` plugin is now available as a GA feature. You can use this plugin to send events directly from the command line to various destinations, streamlining event-driven application development and testing workflows.
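  For example, you can send a test CloudEvent to a broker directly from the command line. This is a hedged usage sketch: the event type, field value, and broker name below are hypothetical placeholders.

  ```shell
  # Hypothetical sketch: send a CloudEvent of a placeholder type with one
  # data field to the "default" broker in the current namespace.
  kn event send \
    --type com.example.order.created \
    --field amount=42 \
    --to Broker:eventing.knative.dev/v1:default
  ```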
1.5.1.2. OpenShift Serverless Serving
- OpenShift Serverless now uses Knative Serving 1.16.
- OpenShift Serverless now uses Kourier 1.16.
- OpenShift Serverless now uses the Knative (`kn`) CLI 1.16.
1.5.1.3. OpenShift Serverless Functions
- The `kn func` CLI plugin now uses `func` 1.16.
- OpenShift Serverless Functions support integration with Cert Manager, enabling automated certificate management for function workloads. This functionality is available as a Developer Preview.
1.5.1.4. OpenShift Serverless Logic
- When starting a workflow over HTTP, you can now include additional properties alongside the `workflowdata` field in the request body. The runtime ignores these extra fields, but they are available in the Data Index as process variables, as shown in the following example:

  ```json
  {"workflowdata": {"name": "John"}, "groupKey": "follower"}
  ```

- You can now filter workflow instances by the content of workflow variables by using GraphQL queries on `ProcessInstances.variables`. For example, the following query retrieves process instances where the `language` field in `workflowdata` equals `Spanish`:

  ```graphql
  ProcessInstances(where: {variables: {workflowdata: {language: {equal: "Spanish"}}}}) {
    variables
    state
    lastUpdate
    nodes {
      name
    }
  }
  ```

- OpenShift Serverless Logic Data Index now supports filtering queries by using workflow definition metadata.
- OpenShift Serverless Logic Operator now emits events to the Data Index to indicate when a workflow definition becomes available or unavailable.
1.5.2. Fixed issues
1.5.2.1. OpenShift Serverless Eventing
- Previously, the Knative Kafka dispatcher could stop consuming events if a Kafka consumer group rebalance occurred while a sink was processing events out of order. This behavior triggered the following errors:

  - `SEVERE: Unhandled exception - java.lang.IndexOutOfBoundsException: bitIndex < 0`
  - Repeated logs like `Request joining group due to: group is already rebalancing`

  This issue is now fixed. The dispatcher correctly handles out-of-order event consumption during rebalancing and continues processing events without interruption.

- Previously, a `KafkaSource` remained in a `Ready` state even when `KafkaSource.spec.net.tls.key` failed to load due to the use of unsupported TLS certificates in PKCS #1 format. This issue is now fixed. An appropriate error is now reported when you attempt to create a `KafkaBroker`, `KafkaChannel`, `KafkaSource`, or `KafkaSink` by using TLS certificates in an unsupported format.
1.5.3. Known issues
1.5.3.1. OpenShift Serverless Logic
- If the `swf-dev-mode` image is started with a broken or invalid workflow definition, the container might enter a stuck state.
- When deploying a workflow in the `preview` profile on OpenShift Container Platform, if the initial build fails and is later corrected, the Operator does not create the corresponding workflow deployment. As a result, the deployment remains missing and the `SonataFlow` status is not updated, even after the build is fixed.
- The OpenShift Serverless Logic builder image consistently downloads the `plexus-utils-1.1` artifact during the build process, regardless of local caching or dependency resolution settings.
- When running images in disconnected or restricted network environments, the Maven wrapper might experience timeouts while attempting to download required components.
- The `openshift-serverless-1/logic-swf-builder-rhel8:1.35.0` and `openshift-serverless-1/logic-swf-builder-rhel8:1.36.0` images currently download the persistence extensions from Maven during the build process.
Chapter 2. OpenShift Serverless overview
OpenShift Serverless provides Kubernetes-native building blocks for creating and deploying serverless, event-driven applications on OpenShift Container Platform. These applications scale up and down (to zero) on-demand and respond to events from several sources. OpenShift Serverless uses the open source Knative project to deliver portability and consistency across hybrid and multicloud environments.
The following sections describe the core components of OpenShift Serverless:
2.1. About Knative Serving
Knative Serving builds on Kubernetes to support deploying and serving applications and functions as serverless containers. Serving simplifies application deployment, dynamically scales based on incoming traffic, and supports custom rollout strategies with traffic splitting.
Knative Serving includes the following features:
- Simplified deployment of serverless containers
- Traffic-based auto-scaling, including scale-to-zero
- Routing and network programming
- Point-in-time snapshots of applications and their configurations
2.2. About Knative Eventing
Knative Eventing provides a platform that offers composable primitives to enable late-binding event sources and event consumers.
Knative Eventing supports the following architectural cloud-native concepts:
- Services are loosely coupled during development and deployed independently to production.
- A producer can generate events before a consumer starts listening, and a consumer can express interest in events or event types that no producer generates yet.
- You can connect services to create new applications without modifying the producer or consumer, and select a specific subset of events from a particular producer.
2.3. About OpenShift Serverless Functions
You can write OpenShift Serverless Functions and deploy them as Knative Services, using Knative Serving and Eventing.
OpenShift Serverless Functions includes the following features:
- Support for the following build strategies:
  - Source-to-Image (S2I)
  - Buildpacks
- Multiple runtimes
- Local developer experience through the Knative (`kn`) CLI
- Project templates
- Support for receiving `CloudEvents` and plain HTTP requests
2.4. About OpenShift Serverless Logic
With OpenShift Serverless Logic, you define declarative workflow models by using YAML or JSON files to orchestrate event-driven, serverless applications. You can visualize workflow execution to simplify debugging and optimization. Built-in error handling and fault tolerance help you manage errors and exceptions during workflow execution.
OpenShift Serverless Logic implements the Cloud Native Computing Foundation (CNCF) Serverless Workflow specification.
2.5. About Knative CLI
You can use the Knative (kn) CLI to create Knative resources from the command line or within shell scripts. Its extensive help pages and autocompletion reduce the need to memorize detailed Knative resource schemas.
The Knative (kn) CLI includes the following features:
| Category | Features |
|---|---|
| Knative Serving | Services |
| Knative Eventing | Sources |
| Extensibility | Plugin architecture based on the Kubernetes (`kubectl`) CLI |
| Integration | Integration of Knative into Tekton pipelines |
Chapter 3. Knative Serving overview
Knative Serving helps developers create, deploy, and manage cloud-native applications. It provides Kubernetes custom resource definitions (CRDs) that define and control serverless workloads on an OpenShift Container Platform cluster. Developers use these CRDs to create custom resources (CRs) as building blocks for complex use cases such as rapidly deploying serverless containers or automatically scaling pods.
3.1. Knative Serving resources
Knative Serving defines a set of resources that manage the lifecycle, configuration, and traffic routing of serverless applications on a Kubernetes cluster.
- Service
  - The `service.serving.knative.dev` custom resource definition (CRD) manages the lifecycle of your workload and ensures that the application runs and remains reachable through the network. It creates a route, a configuration, and a new revision for each change to a user-created service, or custom resource. Developers interact with Knative primarily by modifying services.
- Revision
  - The `revision.serving.knative.dev` CRD represents a point-in-time snapshot of the code and configuration for each modification to the workload. Revisions are immutable objects, and you can retain them if needed.
- Route
  - The `route.serving.knative.dev` CRD maps a network endpoint to one or more revisions. You can manage the traffic in several ways, including fractional traffic and named routes.
- Configuration
  - The `configuration.serving.knative.dev` CRD maintains the required state for your deployment. It provides a clean separation between code and configuration. Modifying a configuration creates a new revision.
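As an illustration of fractional traffic routing, the following sketch splits traffic between two revisions in the `traffic` section of a Knative Service. The service name, image, and revision names are hypothetical placeholders:

```yaml
# Hypothetical sketch: route 80% of requests to one revision and 20% to
# another, with a named tag for direct access to the candidate revision.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
        - image: registry.example.com/hello:latest
  traffic:
    - revisionName: hello-00001
      percent: 80
    - revisionName: hello-00002
      percent: 20
      tag: candidate   # named route for this revision
```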
Chapter 4. Knative Eventing overview
Knative Eventing on OpenShift Container Platform enables developers to use an event-driven architecture with serverless applications. An event-driven architecture is based on the concept of decoupled relationships between event producers and event consumers.
Event producers create events, and event sinks, or consumers, receive events. Knative Eventing uses standard HTTP POST requests to send and receive events between event producers and sinks. These events conform to the CloudEvents specification, which enables creating, parsing, sending, and receiving events in any programming language.
4.1. Knative Eventing use cases
Knative Eventing supports common event-driven use cases, including publishing and consuming events independently. It also introduces generic resource interfaces that define how components receive, process, and route events within the system.
Knative Eventing supports the following use cases:
- Publish an event without creating a consumer
- You can send events to a broker as an HTTP POST and use binding to decouple the destination configuration from your application that produces events.
- Consume an event without creating a publisher
- You can use a trigger to consume events from a broker based on event attributes. The application receives events as an HTTP POST.
To enable delivery to many types of sinks, Knative Eventing defines the following generic interfaces that many Kubernetes resources can implement:
- Addressable resources
  - Able to receive and acknowledge an event delivered over HTTP to an address defined in the resource's `status.address.url` field. The Kubernetes `Service` resource also satisfies the addressable interface.
- Callable resources
  - Able to receive an event delivered over HTTP and transform it, returning `0` or `1` new events in the HTTP response payload. The system can process these returned events further in the same way it processes events from an external event source.
4.2. Using the Knative broker for Apache Kafka
The Knative broker implementation for Apache Kafka provides integration options for you to use supported versions of the Apache Kafka message streaming platform with OpenShift Serverless. Kafka provides options for event source, channel, broker, and event sink capabilities.
Knative broker for Apache Kafka provides additional options, such as:
- Kafka source
- Kafka channel
- Kafka broker
- Kafka sink
Chapter 5. OpenShift Serverless functions overview
Developers use OpenShift Serverless Functions to create and deploy stateless, event-driven functions as Knative Services on OpenShift Container Platform. The Knative kn CLI includes the kn func plugin. You can use the kn func CLI to create, build, and deploy container images as Knative Services on the cluster.
OpenShift Serverless Functions provides templates for creating basic functions in Quarkus, Node.js, and TypeScript runtimes.
Chapter 6. OpenShift Serverless Logic overview
OpenShift Serverless Logic enables developers to define declarative workflow models that orchestrate event-driven, serverless applications.
You can write the workflow models in YAML or JSON format, which are ideal for developing and deploying serverless applications in cloud or container environments.
To deploy workflows on your OpenShift Container Platform cluster, you can use the OpenShift Serverless Logic Operator. The following sections offer an overview of the various OpenShift Serverless Logic concepts.
6.1. Events
An event state includes one or more event definitions that specify the CloudEvent types the state listens to. You can use an event state to start a new workflow instance when it receives a designated CloudEvent, or pause an existing workflow instance until it receives one.
In an event state definition, the `onEvents` property groups CloudEvent types that trigger the same set of actions. The `exclusive` property determines how the system matches events. If `exclusive` is `false`, the system requires all CloudEvent types in the `eventRefs` array to match. Otherwise, any referenced CloudEvent type can trigger a match.
The following example shows an event definition consisting of two CloudEvent types, `noisy` and `silent`:
"events": [
  {
    "name": "noisyEvent",
    "source": "",
    "type": "noisy",
    "dataOnly": false
  },
  {
    "name": "silentEvent",
    "source": "",
    "type": "silent"
  }
]
You can define an event state with separate `onEvents` items for the `noisy` and `silent` CloudEvent types, and set the `exclusive` property to `false` to run different actions when both events occur.
{
  "name": "waitForEvent",
  "type": "event",
  "onEvents": [
    {
      "eventRefs": ["noisyEvent"],
      "actions": [
        {
          "functionRef": "letsGetLoud"
        }
      ]
    },
    {
      "eventRefs": ["silentEvent"],
      "actions": [
        {
          "functionRef": "beQuiet"
        }
      ]
    }
  ],
  "exclusive": false
}
6.2. Callbacks
The Callback state performs an action and waits for an event that the action produces before it resumes the workflow. It invokes an asynchronous external service, making it suitable for fire-and-wait-for-result operations.
From a workflow perspective, an asynchronous service returns control to the caller immediately without waiting for the action to complete. After the action completes, the system publishes a CloudEvent to resume the workflow.
{
"name": "CheckCredit",
"type": "callback",
"action": {
"functionRef": {
"refName": "callCreditCheckMicroservice",
"arguments": {
"customer": "${ .customer }"
}
}
},
"eventRef": "CreditCheckCompletedEvent",
"timeouts": {
"stateExecTimeout": "PT15M"
},
"transition": "EvaluateDecision"
}
name: CheckCredit
type: callback
action:
functionRef:
refName: callCreditCheckMicroservice
arguments:
customer: "${ .customer }"
eventRef: CreditCheckCompletedEvent
timeouts:
stateExecTimeout: PT15M
transition: EvaluateDecision
The action property defines a function call that triggers an external activity or service. After the action executes, the Callback state waits for a CloudEvent, which indicates the completion of the manual decision by the called service.
After the completion callback event is received, the Callback state completes its execution and transitions to the next defined workflow state or completes workflow execution if it is an end state.
6.3. JQ expressions
Each workflow instance uses a data model. The data model consists of a JSON object, regardless of whether the workflow file uses YAML or JSON. The initial content of the JSON object depends on how you start the workflow. If you start the workflow by using a CloudEvent, the workflow reads content from the data property. If you start the workflow through an HTTP POST request, the workflow reads content from the request body.
JSON Query (JQ) expressions interact with the data model. The system supports JsonPath and JQ expression languages, and JQ serves as the default. You can change the expression language to JsonPath by using the expressionLang property.
{
"name": "max",
"type": "expression",
"operation": "{max: .numbers | max_by(.x), min: .numbers | min_by(.y)}"
}
6.4. Error handling
With OpenShift Serverless Logic, you define explicit error handling in your workflow model instead of relying on generic mechanisms. Explicit error handling helps you manage errors that occur during interactions between the workflow and external systems. When an error occurs, it changes the normal workflow sequence. The workflow transitions to an alternative state that can handle the error instead of moving to the predefined state.
Each workflow state defines its own error handling for issues that occur during its execution. Error handling in one state does not handle errors that occur in another state during workflow execution.
If the workflow encounters an unknown error that the definition does not handle explicitly, the runtime reports the error and stops workflow execution.
6.4.1. Error definition
An error definition in a workflow includes the name and code parameters. The name provides a short, natural language description of the error, such as wrong parameter. The code helps the implementation identify the error.
The code parameter is mandatory. The engine uses different strategies to map the value to a runtime exception, including fully qualified class name (FQCN), error message, and status code.
During workflow execution, you must handle known errors in the top-level errors property. You can define this property as a string to reference a reusable JSON or YAML file, or as an array to define errors inline in the workflow.
The following example shows how to reference a reusable JSON error definition file:
{
"errors": "file://documents/reusable/errors.json"
}
The following example shows how to reference a reusable YAML error definition file:
errors: file://documents/reusable/errors.yaml
The following example defines workflow errors inline in a JSON file:
{
"errors": [
{
"name": "Service not found error",
"code": "404",
"description": "Server has not found anything matching the provided service endpoint information"
}
]
}
The following example defines workflow errors inline in a YAML file:
errors:
- name: Service not found error
code: '404'
description: Server has not found anything matching the provided service endpoint information
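Once declared, a state can reference an error by name in its `onErrors` property to route execution to an alternative state when the error occurs. The following sketch follows the Serverless Workflow specification; the state, function, and transition names are hypothetical placeholders:

```json
{
  "name": "invokeService",
  "type": "operation",
  "actions": [
    { "functionRef": "callExternalService" }
  ],
  "onErrors": [
    {
      "errorRef": "Service not found error",
      "transition": "handleMissingService"
    }
  ],
  "transition": "nextState"
}
```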
6.5. Schema definitions
OpenShift Serverless Logic supports two types of schema definitions: input schema definition and output schema definition.
6.5.1. Input schema definition
The `dataInputSchema` parameter validates workflow data input against a defined JSON schema. You should define `dataInputSchema` because the system verifies the input before it executes any workflow states.
You can define a dataInputSchema as follows:
"dataInputSchema": {
"schema": "URL_to_json_schema",
"failOnValidationErrors": false
}
The schema property uses a URI to specify the path to the JSON schema that validates the workflow data input. You can use a classpath URI, a file path, or an HTTP URL. If you specify a classpath URI, place the JSON schema file in the project resources or another directory in the classpath.
The failOnValidationErrors parameter is optional and controls how the system handles invalid input data. If you do not specify this parameter or set it to true, the system throws an exception and stops execution. If you set it to false, the system continues execution and logs validation errors at the warning (WARN) level.
6.5.2. Output schema definition
Output schema definition is applied after workflow execution to verify that the output model has the expected format. It is also useful for Swagger generation purposes.
Similar to the input schema definition, you must specify the URL to the JSON schema by using `outputSchema`, as follows:
Example of outputSchema definition
"extensions" : [ {
"extensionid": "workflow-output-schema",
"outputSchema": {
"schema" : "URL_to_json_schema",
"failOnValidationErrors": false
}
} ]
The same rules described for dataInputSchema are applicable for schema and failOnValidationErrors. The only difference is that the latter flag is applied after workflow execution.
6.6. Custom functions
OpenShift Serverless Logic supports the custom function type, which extends the function definition capability of the implementation. In combination with the operation string, you can use a set of predefined custom function types.
Custom function types might not be portable across other runtime implementations.
6.6.1. Sysout custom function
You can use the sysout function for logging, as shown in the following example:
{
"functions": [
{
"name": "logInfo",
"type": "custom",
"operation": "sysout:INFO"
}
]
}
The string after the `:` separator is optional and indicates the log level. The possible values are `TRACE`, `DEBUG`, `INFO`, `WARN`, and `ERROR`. If the value is not present, `INFO` is the default.
In the state definition, you can call the same sysout function as shown in the following example:
{
"states": [
{
"name": "myState",
"type": "operation",
"actions": [
{
"name": "printAction",
"functionRef": {
"refName": "logInfo",
"arguments": {
"message": "\"Workflow model is \\(.)\""
}
}
}
]
}
]
}
In the earlier example, the message argument can be a jq expression or a jq string using interpolation.
6.6.2. Java custom function
OpenShift Serverless Logic supports Java functions within the Apache Maven project in which you define your workflow service.
The following example shows the declaration of a java function:
Example of a java function declaration
{
"functions": [
{
"name": "myFunction",
"type": "custom",
"operation": "service:java:com.acme.MyInterfaceOrClass::myMethod"
}
]
}
- `functions.name`: `myFunction` is the function name.
- `functions.type`: `custom` is the function type.
- `functions.operation`: `service:java:com.acme.MyInterfaceOrClass::myMethod` is the custom operation definition. In the custom operation definition, `service` is the reserved operation keyword, followed by the `java` keyword. `com.acme.MyInterfaceOrClass` is the fully qualified class name (FQCN) of the interface or implementation class, followed by the method name `myMethod`.
6.6.3. Knative custom function
OpenShift Serverless Logic implements a custom function through the knative-serving add-on to call Knative services. You define a static URI for a Knative service and use it to perform HTTP requests. The system queries the Knative service in the current cluster and translates it into a valid URL.
The following example uses a deployed Knative service:
$ kn service list
NAME URL LATEST AGE CONDITIONS READY REASON
custom-function-knative-service http://custom-function-knative-service.default.10.109.169.193.sslip.io custom-function-knative-service-00001 3h16m 3 OK / 3 True
You can declare an OpenShift Serverless Logic custom function by using the Knative service name, as shown in the following example:
"functions": [
  {
    "name": "greet",
    "type": "custom",
    "operation": "knative:services.v1.serving.knative.dev/custom-function-knative-service?path=/plainJsonFunction"
  }
]
- `function.name`: `greet` is the function name.
- `function.type`: `custom` is the function type.
- `function.operation`: In `operation`, you set the coordinates of the Knative service.
This function sends a POST request. If you do not specify a path, OpenShift Serverless Logic uses the root path (/). You can also send GET requests by setting method=GET in the operation. In this case, the arguments are forwarded over a query string.
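For example, a declaration that performs a GET request against a subpath might look like the following sketch, which reuses the deployed service from the previous example; the function name and path are hypothetical placeholders:

```json
{
  "functions": [
    {
      "name": "getGreeting",
      "type": "custom",
      "operation": "knative:services.v1.serving.knative.dev/custom-function-knative-service?path=/plainJsonFunction&method=GET"
    }
  ]
}
```

With `method=GET`, the function arguments are forwarded as a query string instead of a request body.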
6.6.4. REST custom function
OpenShift Serverless Logic offers the REST custom type as a shortcut. When you use a custom REST function, you specify the HTTP URI to call and the HTTP method (GET, POST, PATCH, or PUT) in the function definition by using the operation string. When you call the function, you pass request arguments as you do with an OpenAPI function.
The following example shows the declaration of a REST function:
{
"functions": [
{
"name": "multiplyAllByAndSum",
"type": "custom",
"operation": "rest:post:/numbers/{multiplier}/multiplyByAndSum"
}
]
}
- function.name - multiplyAllByAndSum is the function name.
- function.type - custom is the function type.
- function.operation - rest:post:/numbers/{multiplier}/multiplyByAndSum is the custom operation definition. In the custom operation definition, rest is the reserved operation keyword that indicates this is a REST call, post is the HTTP method, and /numbers/{multiplier}/multiplyByAndSum is the relative endpoint.
When using relative endpoints, you must specify the host as a property. The format of the host property is kogito.sw.functions.<function_name>.host. In this example, kogito.sw.functions.multiplyAllByAndSum.host is the host property key. You can override the default port (80) if needed by specifying the kogito.sw.functions.multiplyAllByAndSum.port property.
This endpoint expects a JSON object whose numbers field is an array of integers as the request body. It multiplies each item in the array by multiplier and returns the sum of all the multiplied items.
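For example, the host and port for this function could be configured in your application properties as follows. The host value is a placeholder, not a real service:

```properties
# Host serving the /numbers/{multiplier}/multiplyByAndSum endpoint
# (the value below is an illustrative placeholder for your actual service host)
kogito.sw.functions.multiplyAllByAndSum.host=numbers-service.example.com
# Optional: override the default port (80)
kogito.sw.functions.multiplyAllByAndSum.port=8080
```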
6.7. Timeouts
OpenShift Serverless Logic defines several timeout configurations that you can use to set maximum times for workflow execution in different scenarios. You can configure how long a workflow can wait for an event to arrive when it is in a given state, or the maximum execution time for the workflow.
Regardless of where you define it, configure a timeout as a duration that starts when the referenced workflow element becomes active. Timeouts use the ISO 8601 date and time standard to specify durations and follow the format PnDTnHnMn.nS, where days equal exactly 24 hours. For example, PT15M sets a duration of 15 minutes, and P2DT3H4M sets a duration of 2 days, 3 hours, and 4 minutes.
Month-based timeouts such as P2M, a period of two months, are not valid because the duration of a month can vary. In that case, use P60D instead.
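These duration strings can be checked with java.time.Duration, which applies the same convention that a day equals exactly 24 hours. This is a side illustration of the ISO 8601 format, not part of the workflow DSL:

```java
import java.time.Duration;

public class DurationExamples {
    public static void main(String[] args) {
        // PT15M: a duration of 15 minutes
        Duration fifteenMinutes = Duration.parse("PT15M");
        System.out.println(fifteenMinutes.toMinutes()); // 15

        // P2DT3H4M: 2 days, 3 hours, and 4 minutes; a day counts as 24 hours
        Duration mixed = Duration.parse("P2DT3H4M");
        System.out.println(mixed.toHours()); // 51

        // Month-based values are rejected because months have variable length
        try {
            Duration.parse("P2M");
        } catch (java.time.format.DateTimeParseException e) {
            System.out.println("P2M is not a valid duration");
        }
    }
}
```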
6.7.1. Workflow timeout
To configure the maximum duration of a workflow before it is canceled, define a workflow timeout. When the timeout expires, the system cancels the workflow and marks it as finished, and the workflow instance can no longer be accessed through a GET request. As a result, the workflow behaves as if the interrupt property were set to true by default.
You can define workflow timeouts by using the top-level timeouts property. You can specify this property in two formats: string or object.
- You can use the string type to provide a URI that points to a JSON or YAML file containing the workflow timeout definitions.
- You can use the object type to define the timeout settings inline within the workflow.
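For example, the string format points the top-level timeouts property at an external definition file. The file URI below is illustrative:

```json
{
  "id": "workflow_timeouts",
  "name": "Workflow Timeouts",
  "timeouts": "file://timeouts.json"
}
```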
For example, to cancel the workflow after an hour of execution, use the following configuration:
{
"id": "workflow_timeouts",
"version": "1.0",
"name": "Workflow Timeouts",
"description": "Simple workflow to show the workflowExecTimeout working",
"start": "PrintStartMessage",
"timeouts": {
"workflowExecTimeout": "PT1H"
},
...
}
6.7.2. Event timeout
When you define a state in a workflow, use the timeouts property to set the maximum time allowed to complete that state. If the state exceeds this time, the system marks it as timed out and continues execution from that state. The execution flow depends on the state type. For example, the workflow might move to the next state.
Event-based states can use the eventTimeout sub-property to configure the maximum time to wait for an event to arrive. This is the only property that is supported in the current implementation.
Event timeouts are supported in the Callback state, the Switch state, and the Event state.
6.7.3. Callback state timeout
You can use the Callback state when you need to run an action that calls an external service and wait for an asynchronous response in the form of an event.
After the workflow consumes the response event, it continues execution and typically moves to the next state defined in the transition property.
Because the Callback state halts execution until the event arrives, you can configure an eventTimeout. If the event does not arrive within the configured duration, the workflow continues execution and moves to the next state defined in the transition property.
The following example defines a Callback state with a timeout in JSON format:
{
"name": "CallbackState",
"type": "callback",
"action": {
"name": "callbackAction",
"functionRef": {
"refName": "callbackFunction",
"arguments": {
"input": "${\"callback-state-timeouts: \" + $WORKFLOW.instanceId + \" has executed the callbackFunction.\"}"
}
}
},
"eventRef": "callbackEvent",
"transition": "CheckEventArrival",
"onErrors": [
{
"errorRef": "callbackError",
"transition": "FinalizeWithError"
}
],
"timeouts": {
"eventTimeout": "PT30S"
}
}
6.7.4. Switch state timeout
You can use the Switch state when you need to take an action depending on certain conditions. You can base these conditions on the workflow data (dataConditions) or on events (eventConditions).
When you use the eventConditions, the workflow execution waits to make a decision until any of the configured events arrives and matches a condition. In this situation, you can configure an event timeout, which controls the maximum time to wait for an event to match the conditions.
If this time expires, the workflow moves to the state defined in the defaultCondition property.
The following example defines a Switch state with a timeout:
{
"name": "ChooseOnEvent",
"type": "switch",
"eventConditions": [
{
"eventRef": "visaApprovedEvent",
"transition": "ApprovedVisa"
},
{
"eventRef": "visaDeniedEvent",
"transition": "DeniedVisa"
}
],
"defaultCondition": {
"transition": "HandleNoVisaDecision"
},
"timeouts": {
"eventTimeout": "PT5S"
}
}
6.7.5. Event state timeout
You can use the Event state to wait for one or more events, run a set of actions, and then continue execution. If the Event state serves as the starting state, the workflow creates a new instance.
You can use the timeouts property in this state to set the maximum time the workflow waits for the configured events to arrive.
If the workflow exceeds this time and does not receive the events, it moves to the next state defined in the transition property. If the state defines an end state, the workflow ends the instance without running any actions.
The following example defines an Event state with a timeout:
{
"name": "WaitForEvent",
"type": "event",
"onEvents": [
{
"eventRefs": [
"event1"
],
"eventDataFilter": {
"data": "${ \"The event1 was received.\" }",
"toStateData": "${ .exitMessage }"
},
"actions": [
{
"name": "printAfterEvent1",
"functionRef": {
"refName": "systemOut",
"arguments": {
"message": "${\"event-state-timeouts: \" + $WORKFLOW.instanceId + \" executing actions for event1.\"}"
}
}
}
]
},
{
"eventRefs": [
"event2"
],
"eventDataFilter": {
"data": "${ \"The event2 was received.\" }",
"toStateData": "${ .exitMessage }"
},
"actions": [
{
"name": "printAfterEvent2",
"functionRef": {
"refName": "systemOut",
"arguments": {
"message": "${\"event-state-timeouts: \" + $WORKFLOW.instanceId + \" executing actions for event2.\"}"
}
}
}
]
}
],
"timeouts": {
"eventTimeout": "PT30S"
},
"transition": "PrintExitMessage"
}
6.8. Parallelism
OpenShift Serverless Logic serializes the execution of parallel tasks. The term parallel does not imply simultaneous execution; it means that branches have no logical dependency on each other. An inactive branch can start or resume a task without waiting for an active branch to complete if the active branch suspends its execution, for example, while waiting for an event.
A parallel state splits the current workflow execution path into many branches, each with its own path. The workflow executes these paths independently and then joins them back into a single path based on the completionType parameter.
The following example shows a parallel workflow in JSON format:
{
"name":"ParallelExec",
"type":"parallel",
"completionType": "allOf",
"branches": [
{
"name": "Branch1",
"actions": [
{
"functionRef": {
"refName": "functionNameOne",
"arguments": {
"order": "${ .someParam }"
}
}
}
]
},
{
"name": "Branch2",
"actions": [
{
"functionRef": {
"refName": "functionNameTwo",
"arguments": {
"order": "${ .someParam }"
}
}
}
]
}
],
"end": true
}
The following example shows a parallel workflow in YAML format:
name: ParallelExec
type: parallel
completionType: allOf
branches:
- name: Branch1
actions:
- functionRef:
refName: functionNameOne
arguments:
order: "${ .someParam }"
- name: Branch2
actions:
- functionRef:
refName: functionNameTwo
arguments:
order: "${ .someParam }"
end: true
In the earlier examples, the allOf completion type defines that all branches must complete execution before the state can transition or end. This is the default value if the parameter is not set.
Chapter 7. OpenShift Serverless support
If you experience difficulty with a procedure described in this documentation, visit the Red Hat Customer Portal at http://access.redhat.com. You can use the Red Hat Customer Portal to search or browse through the Red Hat Knowledgebase of technical support articles about Red Hat products. You can also submit a support case to Red Hat Global Support Services (GSS), or access other product documentation.
7.1. About the Red Hat Knowledgebase
The Red Hat Knowledgebase provides rich content aimed at helping you make the most of Red Hat’s products and technologies. The Red Hat Knowledgebase consists of articles, product documentation, and videos outlining best practices on installing, configuring, and using Red Hat products. In addition, you can search for solutions to known issues, each providing concise root cause descriptions and remedial steps.
7.2. Searching the Red Hat Knowledgebase
In case of an OpenShift Container Platform issue, you can perform an initial search to determine whether a solution already exists within the Red Hat Knowledgebase.
Prerequisites
- You have a Red Hat Customer Portal account.
Procedure
- Log in to the Red Hat Customer Portal.
- In the main Red Hat Customer Portal search field, input keywords and strings relating to the problem, including:
- OpenShift Container Platform components (such as etcd)
- Related procedure (such as installation)
- Warnings, error messages, and other outputs related to explicit failures
- Click Search.
- Select the OpenShift Container Platform product filter.
- Select the Knowledgebase content type filter.
7.3. Submitting a support case
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have installed the OpenShift CLI (oc).
- You have a Red Hat Customer Portal account.
- You have a Red Hat Standard or Premium subscription.
Procedure
- Log in to the Red Hat Customer Portal and select SUPPORT CASES → Open a case.
- Select the appropriate category for your issue (such as Defect / Bug), product (OpenShift Container Platform), and product version (if this is not already autofilled).
- Review the list of suggested Red Hat Knowledgebase solutions for a potential match against the problem that is being reported. If the suggested articles do not address the issue, click Continue.
- Enter a concise but descriptive problem summary and further details about the symptoms being experienced, as well as your expectations.
- Review the updated list of suggested Red Hat Knowledgebase solutions for a potential match against the problem that is being reported. The list is refined as you provide more information during the case creation process. If the suggested articles do not address the issue, click Continue.
- Ensure that the account information presented is as expected, and if not, change accordingly.
Check that the autofilled OpenShift Container Platform Cluster ID is correct. If it is not, manually obtain your cluster ID.
To manually obtain your cluster ID using the OpenShift Container Platform web console:
- Navigate to Home → Dashboards → Overview.
- Find the value in the Cluster ID field of the Details section.
It is also possible to open a new support case through the OpenShift Container Platform web console and have your cluster ID autofilled.
- From the toolbar, navigate to (?) Help → Open Support Case.
- The Cluster ID value is autofilled.
To obtain your cluster ID using the OpenShift CLI (oc), run the following command:

$ oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{"\n"}'
Complete the following questions where prompted and then click Continue:
- Where are you experiencing the behavior? What environment?
- When does the behavior occur? Frequency? Repeatedly? At certain times?
- What information can you give around time-frames and the business impact?
- Upload relevant diagnostic data files and click Continue.
It is recommended to include data gathered using the oc adm must-gather command as a starting point, plus any issue-specific data that is not collected by that command.
- Input relevant case management details and click Continue.
- Preview the case details and click Submit.
7.4. Collecting diagnostic information for support
When you open a support case, share debugging information about your cluster with Red Hat Support. You can use the must-gather tool to collect diagnostic information about your OpenShift Container Platform cluster, including data related to OpenShift Serverless. For faster support, provide diagnostic information for both OpenShift Container Platform and OpenShift Serverless.
7.5. About collecting OpenShift Serverless data
You can use the oc adm must-gather CLI command to collect information about your cluster, including features and objects associated with OpenShift Serverless. To collect OpenShift Serverless data with must-gather, you must specify the OpenShift Serverless image and the image tag for your installed version of OpenShift Serverless.
Prerequisites
- Install the OpenShift CLI (oc).
Procedure
Collect data by using the oc adm must-gather command:

$ oc adm must-gather --image=registry.redhat.io/openshift-serverless-1/serverless-must-gather-rhel8:<image_version_tag>

Example command

$ oc adm must-gather --image=registry.redhat.io/openshift-serverless-1/serverless-must-gather-rhel8:1.35.0