OpenShift Container Platform 4.9

Serverless

OpenShift Serverless installation, usage, and release notes

Red Hat OpenShift Documentation Team

Abstract

This document provides information on how to use OpenShift Serverless in OpenShift Container Platform.

Chapter 1. Release notes

Note

For additional information about the OpenShift Serverless life cycle and supported platforms, refer to the Platform Life Cycle Policy.

Release notes contain information about new and deprecated features, breaking changes, and known issues. The following release notes apply to the most recent OpenShift Serverless releases on OpenShift Container Platform.

For an overview of OpenShift Serverless functionality, see About OpenShift Serverless.

Note

OpenShift Serverless is based on the open source Knative project.

For details about the latest Knative component releases, see the Knative blog.

1.1. About API versions

API versions are an important measure of the development status of certain features and custom resources in OpenShift Serverless. Creating resources on your cluster that do not use the correct API version can cause issues in your deployment.

The OpenShift Serverless Operator automatically upgrades older resources that use deprecated versions of APIs to use the latest version. For example, if you have created resources on your cluster that use older versions of the ApiServerSource API, such as v1beta1, the OpenShift Serverless Operator automatically updates these resources to use the v1 version of the API when this is available and the v1beta1 version is deprecated.

After they have been deprecated, older API versions might be removed in any upcoming release. Using a deprecated API version does not cause resources to fail, but using an API version that has been removed does. Ensure that your manifests are updated to use the latest version to avoid issues.
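For example, an ApiServerSource manifest that targets the current v1 API might look similar to the following minimal sketch; the source name, service account, and sink service are placeholders.

    apiVersion: sources.knative.dev/v1
    kind: ApiServerSource
    metadata:
      name: example-source
      namespace: default
    spec:
      serviceAccountName: events-sa # placeholder service account with permission to watch events
      mode: Resource
      resources:
      - apiVersion: v1
        kind: Event
      sink:
        ref:
          apiVersion: serving.knative.dev/v1
          kind: Service
          name: event-display # placeholder Knative service that receives the events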

1.2. Generally Available and Technology Preview features

Features which are Generally Available (GA) are fully supported and are suitable for production use. Technology Preview (TP) features are experimental features and are not intended for production use. See the Technology Preview scope of support on the Red Hat Customer Portal for more information about TP features.

The following table provides information about which OpenShift Serverless features are GA and which are TP:

Table 1.1. Generally Available and Technology Preview features tracker
Feature                                         1.26         1.27         1.28

kn func                                         GA           GA           GA
Quarkus functions                               GA           GA           GA
Node.js functions                               TP           TP           GA
TypeScript functions                            TP           TP           GA
Python functions                                -            -            TP
Service Mesh mTLS                               GA           GA           GA
emptyDir volumes                                GA           GA           GA
HTTPS redirection                               GA           GA           GA
Kafka broker                                    GA           GA           GA
Kafka sink                                      GA           GA           GA
Init containers support for Knative services    GA           GA           GA
PVC support for Knative services                GA           GA           GA
TLS for internal traffic                        TP           TP           TP
Namespace-scoped brokers                        -            TP           TP
Multi-container support                         -            -            TP

1.3. Deprecated and removed features

Some features that were Generally Available (GA) or a Technology Preview (TP) in previous releases have been deprecated or removed. Deprecated functionality is still included in OpenShift Serverless and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments.

For the most recent list of major functionality deprecated and removed within OpenShift Serverless, refer to the following table:

Table 1.2. Deprecated and removed features tracker
Feature                                         1.20         1.21         1.22 to 1.26   1.27         1.28

KafkaBinding API                                Deprecated   Deprecated   Removed        Removed      Removed
kn func emit (kn func invoke in 1.21+)          Deprecated   Removed      Removed        Removed      Removed
Serving and Eventing v1alpha1 API               -            -            -              Deprecated   Deprecated
enable-secret-informer-filtering annotation     -            -            -              -            Deprecated

1.4. Release notes for Red Hat OpenShift Serverless 1.28

OpenShift Serverless 1.28 is now available. New features, changes, and known issues that pertain to OpenShift Serverless on OpenShift Container Platform are included in this topic.

1.4.1. New features

  • OpenShift Serverless now uses Knative Serving 1.7.
  • OpenShift Serverless now uses Knative Eventing 1.7.
  • OpenShift Serverless now uses Kourier 1.7.
  • OpenShift Serverless now uses Knative (kn) CLI 1.7.
  • OpenShift Serverless now uses Knative Kafka 1.7.
  • The kn func CLI plug-in now uses func 1.9.1.
  • Node.js and TypeScript runtimes for OpenShift Serverless Functions are now Generally Available (GA).
  • Python runtime for OpenShift Serverless Functions is now available as a Technology Preview.
  • Multi-container support for Knative Serving is now available as a Technology Preview. This feature allows you to use a single Knative service to deploy a multi-container pod.
  • In OpenShift Serverless 1.29 or later, the following components of Knative Eventing will be scaled down from two pods to one:

    • imc-controller
    • imc-dispatcher
    • mt-broker-controller
    • mt-broker-filter
    • mt-broker-ingress
  • The serverless.openshift.io/enable-secret-informer-filtering annotation for the Serving CR is now deprecated. The annotation is valid only for Istio, and not for Kourier.

    With OpenShift Serverless 1.28, the OpenShift Serverless Operator allows injecting the environment variable ENABLE_SECRET_INFORMER_FILTERING_BY_CERT_UID for both net-istio and net-kourier.

    If you enable secret filtering, all of your secrets need to be labeled with networking.internal.knative.dev/certificate-uid: "<id>". Otherwise, Knative Serving does not detect them, which leads to failures. You must label both new and existing secrets.

    In a future OpenShift Serverless release, secret filtering will be enabled by default. To prevent failures, label your secrets in advance.
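    For example, you might apply the required label to an existing secret with a command similar to the following; the secret name, namespace, and UID value are placeholders:

    $ oc label secret <secret_name> networking.internal.knative.dev/certificate-uid="<id>" -n <namespace>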

1.4.2. Known issues

  • Currently, the Python runtime is not supported for OpenShift Serverless Functions on IBM Power, IBM zSystems, and IBM LinuxONE.

    Node.js, TypeScript, and Quarkus functions are supported on these architectures.

  • On the Windows platform, Python functions cannot be locally built, run, or deployed using the Source-to-Image builder due to the app.sh file permissions.

    To work around this problem, use the Windows Subsystem for Linux.

1.5. Release notes for Red Hat OpenShift Serverless 1.27

OpenShift Serverless 1.27 is now available. New features, changes, and known issues that pertain to OpenShift Serverless on OpenShift Container Platform are included in this topic.

Important

OpenShift Serverless 1.26 is the earliest release that is fully supported on OpenShift Container Platform 4.12. OpenShift Serverless 1.25 and earlier versions do not deploy on OpenShift Container Platform 4.12.

For this reason, before upgrading OpenShift Container Platform to version 4.12, first upgrade OpenShift Serverless to version 1.26 or 1.27.

1.5.1. New features

  • OpenShift Serverless now uses Knative Serving 1.6.
  • OpenShift Serverless now uses Knative Eventing 1.6.
  • OpenShift Serverless now uses Kourier 1.6.
  • OpenShift Serverless now uses Knative (kn) CLI 1.6.
  • OpenShift Serverless now uses Knative Kafka 1.6.
  • The kn func CLI plug-in now uses func 1.8.1.
  • Namespace-scoped brokers are now available as a Technology Preview. Such brokers can be used, for instance, to implement role-based access control (RBAC) policies.
  • KafkaSink now uses the CloudEvent binary content mode by default. The binary content mode is more efficient than the structured mode because it carries the CloudEvent attributes in protocol headers rather than embedding the full event envelope in the message body. For example, for the HTTP protocol, the attributes are sent as HTTP headers.
  • You can now use the gRPC framework over the HTTP/2 protocol for external traffic by using the OpenShift Route on OpenShift Container Platform 4.10 and later. This improves the efficiency and speed of communication between the client and server.
  • API version v1alpha1 of the Knative Operator Serving and Eventing CRDs is deprecated in 1.27 and will be removed in a future version. Red Hat strongly recommends using the v1beta1 version instead. This does not affect existing installations, because CRDs are updated automatically when you upgrade the OpenShift Serverless Operator.
  • The delivery timeout feature is now enabled by default. It allows you to specify the timeout for each sent HTTP request. The feature remains a Technology Preview.
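    For example, a per-request delivery timeout can be set in the delivery section of a Trigger. The following is a minimal sketch; the trigger name, broker, and subscriber service are placeholders, and the timeout value is an ISO-8601 duration:

    apiVersion: eventing.knative.dev/v1
    kind: Trigger
    metadata:
      name: example-trigger
      namespace: default
    spec:
      broker: default
      delivery:
        timeout: PT5S # maximum time allowed for each delivery attempt
      subscriber:
        ref:
          apiVersion: serving.knative.dev/v1
          kind: Service
          name: event-display # placeholder Knative service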

1.5.2. Fixed issues

  • Previously, Knative services sometimes did not get into the Ready state, and reported that they were waiting for the load balancer to be ready. This issue has been fixed.

1.5.3. Known issues

  • Integrating OpenShift Serverless with Red Hat OpenShift Service Mesh causes the net-kourier pod to run out of memory on startup when too many secrets are present on the cluster.
  • Namespace-scoped brokers might leave ClusterRoleBindings in the user namespace even after deletion of namespace-scoped brokers.

    If this happens, delete the ClusterRoleBinding named rbac-proxy-reviews-prom-rb-knative-kafka-broker-data-plane-{{.Namespace}} in the user namespace.

  • If you use net-istio for Ingress and enable mTLS via SMCP using security.dataPlane.mtls: true, Service Mesh deploys DestinationRules for the *.local host, which does not allow DomainMapping for OpenShift Serverless.

    To work around this issue, enable mTLS by deploying PeerAuthentication instead of using security.dataPlane.mtls: true.
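    A minimal PeerAuthentication resource that enforces strict mTLS for a namespace might look similar to the following sketch; the namespace is a placeholder, and your Service Mesh configuration might require additional settings:

    apiVersion: security.istio.io/v1beta1
    kind: PeerAuthentication
    metadata:
      name: default
      namespace: <namespace>
    spec:
      mtls:
        mode: STRICT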

1.6. Release notes for Red Hat OpenShift Serverless 1.26

OpenShift Serverless 1.26 is now available. New features, changes, and known issues that pertain to OpenShift Serverless on OpenShift Container Platform are included in this topic.

1.6.1. New features

  • OpenShift Serverless Functions with Quarkus is now GA.
  • OpenShift Serverless now uses Knative Serving 1.5.
  • OpenShift Serverless now uses Knative Eventing 1.5.
  • OpenShift Serverless now uses Kourier 1.5.
  • OpenShift Serverless now uses Knative (kn) CLI 1.5.
  • OpenShift Serverless now uses Knative Kafka 1.5.
  • OpenShift Serverless now uses Knative Operator 1.3.
  • The kn func CLI plugin now uses func 1.8.1.
  • Persistent volume claims (PVCs) are now GA. PVCs provide permanent data storage for your Knative services.
  • The new trigger filters feature is now available as a Developer Preview. It allows users to specify a set of filter expressions, where each expression evaluates to either true or false for each event.

    To enable new trigger filters, add the new-trigger-filters: enabled entry to the spec.config.features section of the KnativeEventing custom resource (CR):

    apiVersion: operator.knative.dev/v1beta1
    kind: KnativeEventing
    metadata:
      name: knative-eventing
      namespace: knative-eventing
    spec:
      config:
        features:
          new-trigger-filters: enabled
  • Knative Operator 1.3 adds the updated v1beta1 version of the API for operator.knative.dev.

    To update from v1alpha1 to v1beta1 in your KnativeServing and KnativeEventing custom resource config maps, edit the apiVersion key:

    Example KnativeServing custom resource config map

    apiVersion: operator.knative.dev/v1beta1
    kind: KnativeServing
    ...

    Example KnativeEventing custom resource config map

    apiVersion: operator.knative.dev/v1beta1
    kind: KnativeEventing
    ...

1.6.2. Fixed issues

  • Previously, Federal Information Processing Standards (FIPS) mode was disabled for Kafka broker, Kafka source, and Kafka sink. This has been fixed, and FIPS mode is now available.

1.6.3. Known issues

  • If you use net-istio for Ingress and enable mTLS via SMCP using security.dataPlane.mtls: true, Service Mesh deploys DestinationRules for the *.local host, which does not allow DomainMapping for OpenShift Serverless.

    To work around this issue, enable mTLS by deploying PeerAuthentication instead of using security.dataPlane.mtls: true.

1.7. Release notes for Red Hat OpenShift Serverless 1.25.0

OpenShift Serverless 1.25.0 is now available. New features, changes, and known issues that pertain to OpenShift Serverless on OpenShift Container Platform are included in this topic.

1.7.1. New features

  • OpenShift Serverless now uses Knative Serving 1.4.
  • OpenShift Serverless now uses Knative Eventing 1.4.
  • OpenShift Serverless now uses Kourier 1.4.
  • OpenShift Serverless now uses Knative (kn) CLI 1.4.
  • OpenShift Serverless now uses Knative Kafka 1.4.
  • The kn func CLI plugin now uses func 1.7.0.
  • Integrated development environment (IDE) plugins for creating and deploying functions are now available for Visual Studio Code and IntelliJ.
  • Knative Kafka broker is now GA. Knative Kafka broker is a highly performant implementation of the Knative broker API, directly targeting Apache Kafka.

    It is recommended that you use the Knative Kafka broker instead of the MT-Channel-Broker.

  • Knative Kafka sink is now GA. A KafkaSink takes a CloudEvent and sends it to an Apache Kafka topic. Events can be specified in either structured or binary content modes.
  • Enabling TLS for internal traffic is now available as a Technology Preview.

1.7.2. Fixed issues

  • Previously, Knative Serving had an issue where the readiness probe failed if the container was restarted after a liveness probe failure. This issue has been fixed.

1.7.3. Known issues

  • The Federal Information Processing Standards (FIPS) mode is disabled for Kafka broker, Kafka source, and Kafka sink.
  • The SinkBinding object does not support custom revision names for services.
  • The Knative Serving Controller pod adds a new informer to watch secrets in the cluster. The informer includes the secrets in the cache, which increases memory consumption of the controller pod.

    If the pod runs out of memory, you can work around the issue by increasing the memory limit for the deployment.

  • If you use net-istio for Ingress and enable mTLS via SMCP using security.dataPlane.mtls: true, Service Mesh deploys DestinationRules for the *.local host, which does not allow DomainMapping for OpenShift Serverless.

    To work around this issue, enable mTLS by deploying PeerAuthentication instead of using security.dataPlane.mtls: true.

Additional resources

1.8. Release notes for Red Hat OpenShift Serverless 1.24.0

OpenShift Serverless 1.24.0 is now available. New features, changes, and known issues that pertain to OpenShift Serverless on OpenShift Container Platform are included in this topic.

1.8.1. New features

  • OpenShift Serverless now uses Knative Serving 1.3.
  • OpenShift Serverless now uses Knative Eventing 1.3.
  • OpenShift Serverless now uses Kourier 1.3.
  • OpenShift Serverless now uses Knative kn CLI 1.3.
  • OpenShift Serverless now uses Knative Kafka 1.3.
  • The kn func CLI plugin now uses func 0.24.
  • Init containers support for Knative services is now generally available (GA).
  • OpenShift Serverless Logic is now available as a Developer Preview. It enables you to define declarative workflow models for managing serverless applications.
  • You can now use the cost management service with OpenShift Serverless.

1.8.2. Fixed issues

  • Integrating OpenShift Serverless with Red Hat OpenShift Service Mesh causes the net-istio-controller pod to run out of memory on startup when too many secrets are present on the cluster.

    It is now possible to enable secret filtering, which causes net-istio-controller to consider only secrets with a networking.internal.knative.dev/certificate-uid label, thus reducing the amount of memory needed.

  • The OpenShift Serverless Functions Technology Preview now uses Cloud Native Buildpacks by default to build container images.

1.8.3. Known issues

  • The Federal Information Processing Standards (FIPS) mode is disabled for Kafka broker, Kafka source, and Kafka sink.
  • In OpenShift Serverless 1.23, support for KafkaBindings and the kafka-binding webhook were removed. However, an existing kafkabindings.webhook.kafka.sources.knative.dev MutatingWebhookConfiguration might remain, pointing to the kafka-source-webhook service, which no longer exists.

    For certain specifications of KafkaBindings on the cluster, kafkabindings.webhook.kafka.sources.knative.dev MutatingWebhookConfiguration might be configured to pass any create and update events to various resources, such as Deployments, Knative Services, or Jobs, through the webhook, which would then fail.

    To work around this issue, manually delete kafkabindings.webhook.kafka.sources.knative.dev MutatingWebhookConfiguration from the cluster after upgrading to OpenShift Serverless 1.23:

    $ oc delete mutatingwebhookconfiguration kafkabindings.webhook.kafka.sources.knative.dev
  • If you use net-istio for Ingress and enable mTLS via SMCP using security.dataPlane.mtls: true, Service Mesh deploys DestinationRules for the *.local host, which does not allow DomainMapping for OpenShift Serverless.

    To work around this issue, enable mTLS by deploying PeerAuthentication instead of using security.dataPlane.mtls: true.

1.9. Release notes for Red Hat OpenShift Serverless 1.23.0

OpenShift Serverless 1.23.0 is now available. New features, changes, and known issues that pertain to OpenShift Serverless on OpenShift Container Platform are included in this topic.

1.9.1. New features

  • OpenShift Serverless now uses Knative Serving 1.2.
  • OpenShift Serverless now uses Knative Eventing 1.2.
  • OpenShift Serverless now uses Kourier 1.2.
  • OpenShift Serverless now uses Knative (kn) CLI 1.2.
  • OpenShift Serverless now uses Knative Kafka 1.2.
  • The kn func CLI plugin now uses func 0.24.
  • It is now possible to use the kafka.eventing.knative.dev/external.topic annotation with the Kafka broker. This annotation makes it possible to use an existing externally managed topic instead of the broker creating its own internal topic. An example is shown after this list.
  • The kafka-ch-controller and kafka-webhook Kafka components no longer exist. These components have been replaced by the kafka-webhook-eventing component.
  • The OpenShift Serverless Functions Technology Preview now uses Source-to-Image (S2I) by default to build container images.
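The following sketch shows how the external topic annotation might be applied to a Kafka broker. The broker name, namespace, topic name, and the referenced broker configuration config map are placeholders that depend on your environment:

    apiVersion: eventing.knative.dev/v1
    kind: Broker
    metadata:
      name: example-kafka-broker
      namespace: default
      annotations:
        eventing.knative.dev/broker.class: Kafka
        kafka.eventing.knative.dev/external.topic: <topic_name> # existing, externally managed topic
    spec:
      config:
        apiVersion: v1
        kind: ConfigMap
        name: kafka-broker-config # placeholder broker configuration config map
        namespace: knative-eventing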

1.9.2. Known issues

  • The Federal Information Processing Standards (FIPS) mode is disabled for Kafka broker, Kafka source, and Kafka sink.
  • If you delete a namespace that includes a Kafka broker, the namespace finalizer may fail to be removed if the broker’s auth.secret.ref.name secret is deleted before the broker.
  • Running OpenShift Serverless with a large number of Knative services can cause Knative activator pods to run close to their default memory limits of 600MB. These pods might be restarted if memory consumption reaches this limit. Requests and limits for the activator deployment can be configured by modifying the KnativeServing custom resource:

    apiVersion: operator.knative.dev/v1beta1
    kind: KnativeServing
    metadata:
      name: knative-serving
      namespace: knative-serving
    spec:
      deployments:
      - name: activator
        resources:
        - container: activator
          requests:
            cpu: 300m
            memory: 60Mi
          limits:
            cpu: 1000m
            memory: 1000Mi
  • If you are using Cloud Native Buildpacks as the local build strategy for a function, kn func is unable to automatically start podman or use an SSH tunnel to a remote daemon. The workaround for these issues is to have a Docker or podman daemon already running on the local development computer before deploying a function.
  • On-cluster function builds currently fail for the Quarkus and Golang runtimes. They work correctly for the Node.js, TypeScript, Python, and Springboot runtimes.
  • If you use net-istio for Ingress and enable mTLS via SMCP using security.dataPlane.mtls: true, Service Mesh deploys DestinationRules for the *.local host, which does not allow DomainMapping for OpenShift Serverless.

    To work around this issue, enable mTLS by deploying PeerAuthentication instead of using security.dataPlane.mtls: true.

Additional resources

1.10. Release notes for Red Hat OpenShift Serverless 1.22.0

OpenShift Serverless 1.22.0 is now available. New features, changes, and known issues that pertain to OpenShift Serverless on OpenShift Container Platform are included in this topic.

1.10.1. New features

  • OpenShift Serverless now uses Knative Serving 1.1.
  • OpenShift Serverless now uses Knative Eventing 1.1.
  • OpenShift Serverless now uses Kourier 1.1.
  • OpenShift Serverless now uses Knative (kn) CLI 1.1.
  • OpenShift Serverless now uses Knative Kafka 1.1.
  • The kn func CLI plugin now uses func 0.23.
  • Init containers support for Knative services is now available as a Technology Preview.
  • Persistent volume claim (PVC) support for Knative services is now available as a Technology Preview.
  • The knative-serving, knative-serving-ingress, knative-eventing, and knative-kafka system namespaces now have the knative.openshift.io/part-of: "openshift-serverless" label by default.
  • The Knative Eventing - Kafka Broker/Trigger dashboard has been added, which allows visualizing Kafka broker and trigger metrics in the web console.
  • The Knative Eventing - KafkaSink dashboard has been added, which allows visualizing KafkaSink metrics in the web console.
  • The Knative Eventing - Broker/Trigger dashboard is now called Knative Eventing - Channel-based Broker/Trigger.
  • The knative.openshift.io/part-of: "openshift-serverless" label has replaced the knative.openshift.io/system-namespace label.
  • Naming style in Knative Serving YAML configuration files changed from camel case (ExampleName) to hyphen style (example-name). Beginning with this release, use the hyphen style notation when creating or editing Knative Serving YAML configuration files.

1.10.2. Known issues

  • The Federal Information Processing Standards (FIPS) mode is disabled for Kafka broker, Kafka source, and Kafka sink.

1.11. Release notes for Red Hat OpenShift Serverless 1.21.0

OpenShift Serverless 1.21.0 is now available. New features, changes, and known issues that pertain to OpenShift Serverless on OpenShift Container Platform are included in this topic.

1.11.1. New features

  • OpenShift Serverless now uses Knative Serving 1.0.
  • OpenShift Serverless now uses Knative Eventing 1.0.
  • OpenShift Serverless now uses Kourier 1.0.
  • OpenShift Serverless now uses Knative (kn) CLI 1.0.
  • OpenShift Serverless now uses Knative Kafka 1.0.
  • The kn func CLI plugin now uses func 0.21.
  • The Kafka sink is now available as a Technology Preview.
  • The Knative open source project has begun to deprecate camel-cased configuration keys in favor of using kebab-cased keys consistently. As a result, the defaultExternalScheme key, previously mentioned in the OpenShift Serverless 1.18.0 release notes, is now deprecated and replaced by the default-external-scheme key. Usage instructions for the key remain the same.

1.11.2. Fixed issues

  • In OpenShift Serverless 1.20.0, there was an event delivery issue affecting the use of kn event send to send events to a service. This issue is now fixed.
  • In OpenShift Serverless 1.20.0 (func 0.20), TypeScript functions created with the http template failed to deploy on the cluster. This issue is now fixed.
  • In OpenShift Serverless 1.20.0 (func 0.20), deploying a function using the gcr.io registry failed with an error. This issue is now fixed.
  • In OpenShift Serverless 1.20.0 (func 0.20), creating a Springboot function project directory with the kn func create command and then running the kn func build command failed with an error message. This issue is now fixed.
  • In OpenShift Serverless 1.19.0 (func 0.19), some runtimes were unable to build a function by using podman. This issue is now fixed.

1.11.3. Known issues

  • Currently, the domain mapping controller cannot process the URI of a broker, because broker URIs contain a path, which is not yet supported.

    This means that, if you want to use a DomainMapping custom resource (CR) to map a custom domain to a broker, you must configure the DomainMapping CR with the broker’s ingress service, and append the exact path of the broker to the custom domain:

    Example DomainMapping CR

    apiVersion: serving.knative.dev/v1alpha1
    kind: DomainMapping
    metadata:
      name: <domain-name>
      namespace: knative-eventing
    spec:
      ref:
        name: broker-ingress
        kind: Service
        apiVersion: v1

    The URI for the broker is then <domain-name>/<broker-namespace>/<broker-name>.

1.12. Release notes for Red Hat OpenShift Serverless 1.20.0

OpenShift Serverless 1.20.0 is now available. New features, changes, and known issues that pertain to OpenShift Serverless on OpenShift Container Platform are included in this topic.

1.12.1. New features

  • OpenShift Serverless now uses Knative Serving 0.26.
  • OpenShift Serverless now uses Knative Eventing 0.26.
  • OpenShift Serverless now uses Kourier 0.26.
  • OpenShift Serverless now uses Knative (kn) CLI 0.26.
  • OpenShift Serverless now uses Knative Kafka 0.26.
  • The kn func CLI plugin now uses func 0.20.
  • The Kafka broker is now available as a Technology Preview.

    Important

    The Kafka broker, which is currently in Technology Preview, is not supported on FIPS.

  • The kn event plugin is now available as a Technology Preview.
  • The --min-scale and --max-scale flags for the kn service create command have been deprecated. Use the --scale-min and --scale-max flags instead.
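    For example, a service might be created with the new flags as follows; the service name and image are placeholders:

    $ kn service create <service_name> --image <image> --scale-min 1 --scale-max 5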

1.12.2. Known issues

  • OpenShift Serverless deploys Knative services with a default address that uses HTTPS. When sending an event to a resource inside the cluster, the sender does not have the cluster certificate authority (CA) configured. This causes event delivery to fail, unless the cluster uses globally accepted certificates.

    For example, an event delivery to a publicly accessible address works:

    $ kn event send --to-url https://ce-api.foo.example.com/

    On the other hand, this delivery fails if the service uses a public address with an HTTPS certificate issued by a custom CA:

    $ kn event send --to Service:serving.knative.dev/v1:event-display

    Sending an event to other addressable objects, such as brokers or channels, is not affected by this issue and works as expected.

  • The Kafka broker currently does not work on a cluster with Federal Information Processing Standards (FIPS) mode enabled.
  • If you create a Springboot function project directory with the kn func create command, subsequent running of the kn func build command fails with this error message:

    [analyzer] no stack metadata found at path ''
    [analyzer] ERROR: failed to : set API for buildpack 'paketo-buildpacks/ca-certificates@3.0.2': buildpack API version '0.7' is incompatible with the lifecycle

    As a workaround, you can change the builder property to gcr.io/paketo-buildpacks/builder:base in the function configuration file func.yaml.

  • Deploying a function using the gcr.io registry fails with this error message:

    Error: failed to get credentials: failed to verify credentials: status code: 404

    As a workaround, use a different registry than gcr.io, such as quay.io or docker.io.

  • TypeScript functions created with the http template fail to deploy on the cluster.

    As a workaround, in the func.yaml file, replace the following section:

    buildEnvs: []

    with this:

    buildEnvs:
    - name: BP_NODE_RUN_SCRIPTS
      value: build
  • In func version 0.20, some runtimes might be unable to build a function by using podman. You might see an error message similar to the following:

    ERROR: failed to image: error during connect: Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.40/info": EOF
    • The following workaround exists for this issue:

      1. Update the podman service by adding --time=0 to the service ExecStart definition:

        Example service configuration

        ExecStart=/usr/bin/podman $LOGGING system service --time=0

      2. Restart the podman service by running the following commands:

        $ systemctl --user daemon-reload
        $ systemctl restart --user podman.socket
    • Alternatively, you can expose the podman API by using TCP:

      $ podman system service --time=0 tcp:127.0.0.1:5534 &
      export DOCKER_HOST=tcp://127.0.0.1:5534

1.13. Release notes for Red Hat OpenShift Serverless 1.19.0

OpenShift Serverless 1.19.0 is now available. New features, changes, and known issues that pertain to OpenShift Serverless on OpenShift Container Platform are included in this topic.

1.13.1. New features

  • OpenShift Serverless now uses Knative Serving 0.25.
  • OpenShift Serverless now uses Knative Eventing 0.25.
  • OpenShift Serverless now uses Kourier 0.25.
  • OpenShift Serverless now uses Knative (kn) CLI 0.25.
  • OpenShift Serverless now uses Knative Kafka 0.25.
  • The kn func CLI plugin now uses func 0.19.
  • The KafkaBinding API is deprecated in OpenShift Serverless 1.19.0 and will be removed in a future release.
  • HTTPS redirection is now supported and can be configured either globally for a cluster or per Knative service.
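    For example, global HTTPS redirection might be enabled in the KnativeServing custom resource with a configuration similar to the following sketch, which assumes the current v1beta1 operator API; verify the exact key against the documentation for your release:

    apiVersion: operator.knative.dev/v1beta1
    kind: KnativeServing
    metadata:
      name: knative-serving
      namespace: knative-serving
    spec:
      config:
        network:
          httpProtocol: "redirected" # redirect incoming HTTP requests to HTTPS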

1.13.2. Fixed issues

  • In previous releases, the Kafka channel dispatcher waited only for the local commit to succeed before responding, which might have caused lost events in the case of an Apache Kafka node failure. The Kafka channel dispatcher now waits for all in-sync replicas to commit before responding.

1.13.3. Known issues

  • In func version 0.19, some runtimes might be unable to build a function by using podman. You might see an error message similar to the following:

    ERROR: failed to image: error during connect: Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.40/info": EOF
    • The following workaround exists for this issue:

      1. Update the podman service by adding --time=0 to the service ExecStart definition:

        Example service configuration

        ExecStart=/usr/bin/podman $LOGGING system service --time=0

      2. Restart the podman service by running the following commands:

        $ systemctl --user daemon-reload
        $ systemctl restart --user podman.socket
    • Alternatively, you can expose the podman API by using TCP:

      $ podman system service --time=0 tcp:127.0.0.1:5534 &
      export DOCKER_HOST=tcp://127.0.0.1:5534

1.14. Release notes for Red Hat OpenShift Serverless 1.18.0

OpenShift Serverless 1.18.0 is now available. New features, changes, and known issues that pertain to OpenShift Serverless on OpenShift Container Platform are included in this topic.

1.14.1. New features

  • OpenShift Serverless now uses Knative Serving 0.24.0.
  • OpenShift Serverless now uses Knative Eventing 0.24.0.
  • OpenShift Serverless now uses Kourier 0.24.0.
  • OpenShift Serverless now uses Knative (kn) CLI 0.24.0.
  • OpenShift Serverless now uses Knative Kafka 0.24.7.
  • The kn func CLI plugin now uses func 0.18.0.
  • In the upcoming OpenShift Serverless 1.19.0 release, the URL scheme of external routes will default to HTTPS for enhanced security.

    If you do not want this change to apply for your workloads, you can override the default setting before upgrading to 1.19.0, by adding the following YAML to your KnativeServing custom resource (CR):

    ...
    spec:
      config:
        network:
          defaultExternalScheme: "http"
    ...

    If you want the change to apply in 1.18.0 already, add the following YAML:

    ...
    spec:
      config:
        network:
          defaultExternalScheme: "https"
    ...
  • In the upcoming OpenShift Serverless 1.19.0 release, the default service type by which the Kourier Gateway is exposed will be ClusterIP and not LoadBalancer.

    If you do not want this change to apply to your workloads, you can override the default setting before upgrading to 1.19.0, by adding the following YAML to your KnativeServing custom resource (CR):

    ...
    spec:
      ingress:
        kourier:
          service-type: LoadBalancer
    ...
  • You can now use emptyDir volumes with OpenShift Serverless. See the OpenShift Serverless documentation about Knative Serving for details.
  • Rust templates are now available when you create a function using kn func.

1.14.2. Fixed issues

  • The prior 1.4 version of Camel-K was not compatible with OpenShift Serverless 1.17.0. The issue in Camel-K has been fixed, and Camel-K version 1.4.1 can be used with OpenShift Serverless 1.17.0.
  • Previously, if you created a new subscription for a Kafka channel, or a new Kafka source, a delay was possible in the Kafka data plane becoming ready to dispatch messages after the newly created subscription or sink reported a ready status.

    As a result, messages that were sent during the time when the data plane was not reporting a ready status, might not have been delivered to the subscriber or sink.

    In OpenShift Serverless 1.18.0, the issue is fixed and the initial messages are no longer lost. For more information about the issue, see Knowledgebase Article #6343981.

1.14.3. Known issues

  • Older versions of the Knative kn CLI might use older versions of the Knative Serving and Knative Eventing APIs. For example, version 0.23.2 of the kn CLI uses the v1alpha1 API version.

    On the other hand, newer releases of OpenShift Serverless might no longer support older API versions. For example, OpenShift Serverless 1.18.0 no longer supports version v1alpha1 of the kafkasources.sources.knative.dev API.

    Consequently, using an older version of the Knative kn CLI with a newer OpenShift Serverless might produce an error because the kn cannot find the outdated API. For example, version 0.23.2 of the kn CLI does not work with OpenShift Serverless 1.18.0.

    To avoid issues, use the latest kn CLI version available for your OpenShift Serverless release. For OpenShift Serverless 1.18.0, use Knative kn CLI 0.24.0.

Chapter 2. About Serverless

2.1. OpenShift Serverless overview

OpenShift Serverless provides Kubernetes native building blocks that enable developers to create and deploy serverless, event-driven applications on OpenShift Container Platform. OpenShift Serverless is based on the open source Knative project, which provides portability and consistency for hybrid and multi-cloud environments by enabling an enterprise-grade serverless platform.

Note

Because OpenShift Serverless releases on a different cadence from OpenShift Container Platform, the OpenShift Serverless documentation does not maintain separate documentation sets for minor versions of the product. The current documentation set applies to all currently supported versions of OpenShift Serverless unless version-specific limitations are called out in a particular topic or for a particular feature.

For additional information about the OpenShift Serverless life cycle and supported platforms, refer to the Platform Life Cycle Policy.

2.1.1. Additional resources

2.2. Knative Serving

Knative Serving supports developers who want to create, deploy, and manage cloud-native applications. It provides a set of objects as Kubernetes custom resource definitions (CRDs) that define and control the behavior of serverless workloads on an OpenShift Container Platform cluster.

Developers use these CRDs to create custom resource (CR) instances that can be used as building blocks to address complex use cases. For example:

  • Rapidly deploying serverless containers.
  • Automatically scaling pods.

2.2.1. Knative Serving resources

Service
The service.serving.knative.dev CRD automatically manages the life cycle of your workload to ensure that the application is deployed and reachable through the network. It creates a route, a configuration, and a new revision for each change to a user-created service or custom resource. Most developer interactions in Knative are carried out by modifying services.
Revision
The revision.serving.knative.dev CRD is a point-in-time snapshot of the code and configuration for each modification made to the workload. Revisions are immutable objects and can be retained for as long as necessary.
Route
The route.serving.knative.dev CRD maps a network endpoint to one or more revisions. You can manage the traffic in several ways, including fractional traffic and named routes.
Configuration
The configuration.serving.knative.dev CRD maintains the desired state for your deployment. It provides a clean separation between code and configuration. Modifying a configuration creates a new revision.
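Most of these resources are created automatically when you define a Knative service. The following is a minimal sketch of such a service; the name, namespace, and container image are placeholders:

    apiVersion: serving.knative.dev/v1
    kind: Service
    metadata:
      name: example-service
      namespace: default
    spec:
      template:
        spec:
          containers:
          - image: <registry>/<image>:<tag> # placeholder container image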

2.3. Knative Eventing

Knative Eventing on OpenShift Container Platform enables developers to use an event-driven architecture with serverless applications. An event-driven architecture is based on the concept of decoupled relationships between event producers and event consumers.

Event producers create events, and event sinks, or consumers, receive events. Knative Eventing uses standard HTTP POST requests to send and receive events between event producers and sinks. These events conform to the CloudEvents specification, which enables you to create, parse, send, and receive events in any programming language.

Knative Eventing supports the following use cases:

Publish an event without creating a consumer
You can send events to a broker as an HTTP POST, and use binding to decouple the destination configuration from your application that produces events.
Consume an event without creating a publisher
You can use a trigger to consume events from a broker based on event attributes. The application receives events as an HTTP POST.

To enable delivery to multiple types of sinks, Knative Eventing defines the following generic interfaces that can be implemented by multiple Kubernetes resources:

Addressable resources
Able to receive and acknowledge an event delivered over HTTP to an address defined in the resource's status.address.url field. The Kubernetes Service resource also satisfies the addressable interface.
Callable resources
Able to receive an event delivered over HTTP and transform it, returning 0 or 1 new events in the HTTP response payload. These returned events may be further processed in the same way that events from an external event source are processed.
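For example, a Knative service is an addressable resource, and you might read its delivery address from the status with a command similar to the following; the service name and namespace are placeholders:

    $ oc get ksvc <service_name> -n <namespace> -o jsonpath='{.status.address.url}'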

2.3.1. Using Knative Kafka

Knative Kafka provides integration options for you to use supported versions of the Apache Kafka message streaming platform with OpenShift Serverless. Kafka provides options for event source, channel, broker, and event sink capabilities.

Note

Knative Kafka is not currently supported for IBM Z and IBM Power.

Knative Kafka provides additional options, such as:

  • Kafka source
  • Kafka channel
  • Kafka broker
  • Kafka sink

2.3.2. Additional resources

2.4. About OpenShift Serverless Functions

OpenShift Serverless Functions enables developers to create and deploy stateless, event-driven functions as a Knative service on OpenShift Container Platform. The kn func CLI is provided as a plugin for the Knative kn CLI. You can use the kn func CLI to create, build, and deploy the container image as a Knative service on the cluster.
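A typical workflow looks similar to the following sketch; the runtime, function name, and image registry are placeholders, and the available flags might vary between kn func versions:

    $ kn func create -l <runtime> <function_name>
    $ kn func build --registry <registry>
    $ kn func deploy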

2.4.1. Included runtimes

OpenShift Serverless Functions provides templates that can be used to create basic functions for the following runtimes:

2.4.2. Next steps

Chapter 3. Installing Serverless

3.1. Preparing to install OpenShift Serverless

Read the following information about supported configurations and prerequisites before you install OpenShift Serverless.

  • OpenShift Serverless is supported for installation in a restricted network environment.
  • OpenShift Serverless currently cannot be used in a multi-tenant configuration on a single cluster.

3.1.1. Supported configurations

The set of supported features, configurations, and integrations for OpenShift Serverless, current and past versions, is available at the Supported Configurations page.

3.1.2. Scalability and performance

OpenShift Serverless has been tested with a configuration of 3 main nodes and 3 worker nodes, each of which has 64 CPUs, 457 GB of memory, and 394 GB of storage.

The maximum number of Knative services that can be created using this configuration is 3,000. This corresponds to the OpenShift Container Platform Kubernetes services limit of 10,000, since 1 Knative service creates 3 Kubernetes services.

The average scale from zero response time was approximately 3.4 seconds, with a maximum response time of 8 seconds, and a 99.9th percentile of 4.5 seconds for a simple Quarkus application. These times might vary depending on the application and the runtime of the application.

3.1.3. Defining cluster size requirements

To install and use OpenShift Serverless, the OpenShift Container Platform cluster must be sized correctly.

Note

The following requirements relate only to the pool of worker machines of the OpenShift Container Platform cluster. Control plane nodes are not used for general scheduling and are omitted from the requirements.

The minimum requirement to use OpenShift Serverless is a cluster with 10 CPUs and 40GB memory. By default, each pod requests ~400m of CPU, so the minimum requirements are based on this value.

The total size requirements to run OpenShift Serverless are dependent on the components that are installed and the applications that are deployed, and might vary depending on your deployment.

3.1.4. Scaling your cluster using compute machine sets

You can use the OpenShift Container Platform MachineSet API to manually scale your cluster up to the desired size. The minimum requirements usually mean that you must scale up one of the default compute machine sets by two additional machines. See Manually scaling a compute machine set.
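For example, a compute machine set might be scaled with a command similar to the following; the machine set name and replica count are placeholders:

    $ oc scale machineset <machineset_name> --replicas=<number_of_replicas> -n openshift-machine-api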

3.1.4.1. Additional requirements for advanced use-cases

For more advanced use-cases such as logging or metering on OpenShift Container Platform, you must deploy more resources. Recommended requirements for such use-cases are 24 CPUs and 96GB of memory.

If you have high availability (HA) enabled on your cluster, this requires between 0.5 and 1.5 cores and between 200 MB and 2 GB of memory for each replica of the Knative Serving control plane. HA is enabled for some Knative Serving components by default. You can disable HA by following the documentation on "Configuring high availability replicas".
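For example, the number of control plane replicas might be reduced in the KnativeServing custom resource with a configuration similar to the following sketch; see "Configuring high availability replicas" for the supported values:

    apiVersion: operator.knative.dev/v1beta1
    kind: KnativeServing
    metadata:
      name: knative-serving
      namespace: knative-serving
    spec:
      high-availability:
        replicas: 1 # reduce the number of replicas for HA-enabled components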

3.1.5. Additional resources

3.2. Installing the OpenShift Serverless Operator

Installing the OpenShift Serverless Operator enables you to install and use Knative Serving, Knative Eventing, and Knative Kafka on an OpenShift Container Platform cluster. The OpenShift Serverless Operator manages Knative custom resource definitions (CRDs) for your cluster and enables you to configure them without directly modifying individual config maps for each component.

3.2.1. Installing the OpenShift Serverless Operator from the web console

You can install the OpenShift Serverless Operator from the OperatorHub by using the OpenShift Container Platform web console. Installing this Operator enables you to install and use Knative components.

Prerequisites

  • You have access to an OpenShift Container Platform account with cluster administrator access.
  • You have logged in to the OpenShift Container Platform web console.

Procedure

  1. In the OpenShift Container Platform web console, navigate to the Operators → OperatorHub page.
  2. Scroll, or type the keyword Serverless into the Filter by keyword box to find the OpenShift Serverless Operator.
  3. Review the information about the Operator and click Install.
  4. On the Install Operator page:

    1. The Installation Mode is All namespaces on the cluster (default). This mode installs the Operator in the default openshift-serverless namespace to watch and be made available to all namespaces in the cluster.
    2. The Installed Namespace is openshift-serverless.
    3. Select the stable channel as the Update Channel. The stable channel will enable installation of the latest stable release of the OpenShift Serverless Operator.
    4. Select Automatic or Manual approval strategy.
  5. Click Install to make the Operator available to the selected namespaces on this OpenShift Container Platform cluster.
  6. From the Catalog → Operator Management page, you can monitor the OpenShift Serverless Operator subscription’s installation and upgrade progress.

    1. If you selected a Manual approval strategy, the subscription’s upgrade status will remain Upgrading until you review and approve its install plan. After approving on the Install Plan page, the subscription upgrade status moves to Up to date.
    2. If you selected an Automatic approval strategy, the upgrade status should resolve to Up to date without intervention.

Verification

After the Subscription’s upgrade status is Up to date, select Catalog → Installed Operators to verify that the OpenShift Serverless Operator eventually shows up and its Status ultimately resolves to InstallSucceeded in the relevant namespace.

If it does not:

  1. Switch to the Catalog → Operator Management page and inspect the Operator Subscriptions and Install Plans tabs for any failure or errors under Status.
  2. On the Workloads → Pods page, check the logs of any pods in the openshift-serverless project that are reporting issues to troubleshoot further.
Important

If you want to use Red Hat OpenShift distributed tracing with OpenShift Serverless, you must install and configure Red Hat OpenShift distributed tracing before you install Knative Serving or Knative Eventing.

3.2.2. Installing the OpenShift Serverless Operator from the CLI

You can install the OpenShift Serverless Operator from the OperatorHub by using the CLI. Installing this Operator enables you to install and use Knative components.

Prerequisites

  • You have access to an OpenShift Container Platform account with cluster administrator access.
  • Your cluster has the Marketplace capability enabled or the Red Hat Operator catalog source configured manually.
  • You have logged in to the OpenShift Container Platform cluster.

Procedure

  1. Create a YAML file containing Namespace, OperatorGroup, and Subscription objects to subscribe a namespace to the OpenShift Serverless Operator. For example, create the file serverless-subscription.yaml with the following content:

    Example subscription

    ---
    apiVersion: v1
    kind: Namespace
    metadata:
      name: openshift-serverless
    ---
    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: serverless-operators
      namespace: openshift-serverless
    spec: {}
    ---
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: serverless-operator
      namespace: openshift-serverless
    spec:
      channel: stable 1
      name: serverless-operator 2
      source: redhat-operators 3
      sourceNamespace: openshift-marketplace 4

    1 The channel name of the Operator. The stable channel enables installation of the most recent stable version of the OpenShift Serverless Operator.
    2 The name of the Operator to subscribe to. For the OpenShift Serverless Operator, this is always serverless-operator.
    3 The name of the CatalogSource that provides the Operator. Use redhat-operators for the default OperatorHub catalog sources.
    4 The namespace of the CatalogSource. Use openshift-marketplace for the default OperatorHub catalog sources.
  2. Create the Subscription object:

    $ oc apply -f serverless-subscription.yaml

Verification

Check that the cluster service version (CSV) has reached the Succeeded phase:

Example command

$ oc get csv

Example output

NAME                          DISPLAY                        VERSION   REPLACES                      PHASE
serverless-operator.v1.25.0   Red Hat OpenShift Serverless   1.25.0    serverless-operator.v1.24.0   Succeeded

Important

If you want to use Red Hat OpenShift distributed tracing with OpenShift Serverless, you must install and configure Red Hat OpenShift distributed tracing before you install Knative Serving or Knative Eventing.

3.2.3. Global configuration

The OpenShift Serverless Operator manages the global configuration of a Knative installation, including propagating values from the KnativeServing and KnativeEventing custom resources to system config maps. Any updates that you apply manually to the config maps are overwritten by the Operator. However, modifying the Knative custom resources allows you to set values for these config maps.

Knative has multiple config maps that are named with the prefix config-. All Knative config maps are created in the same namespace as the custom resource that they apply to. For example, if the KnativeServing custom resource is created in the knative-serving namespace, all Knative Serving config maps are also created in this namespace.

The spec.config field in the Knative custom resources has one <name> entry for each config map, named config-<name>, with a value that is used for the config map data.
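For example, values set under spec.config.network in the KnativeServing custom resource are propagated to the config-network config map in the knative-serving namespace. The following minimal sketch uses the default-external-scheme key mentioned earlier in this document as an illustration:

    apiVersion: operator.knative.dev/v1beta1
    kind: KnativeServing
    metadata:
      name: knative-serving
      namespace: knative-serving
    spec:
      config:
        network: # propagated to the config-network config map
          default-external-scheme: "https"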

3.2.4. Additional resources

3.2.5. Next steps

3.3. Installing the Knative CLI

The Knative (kn) CLI does not have its own login mechanism. To log in to the cluster, you must install the OpenShift CLI (oc) and use the oc login command. Installation options for the CLIs may vary depending on your operating system.

For more information on installing the OpenShift CLI (oc) for your operating system and logging in with oc, see the OpenShift CLI getting started documentation.

OpenShift Serverless cannot be installed using the Knative (kn) CLI. A cluster administrator must install the OpenShift Serverless Operator and set up the Knative components, as described in the Installing the OpenShift Serverless Operator documentation.

Important

If you try to use an older version of the Knative (kn) CLI with a newer OpenShift Serverless release, the API is not found and an error occurs.

For example, if you use the 1.23.0 release of the Knative (kn) CLI, which uses version 1.2, with the 1.24.0 OpenShift Serverless release, which uses the 1.3 versions of the Knative Serving and Knative Eventing APIs, the CLI does not work because it continues to look for the outdated 1.2 API versions.

Ensure that you are using the latest Knative (kn) CLI version for your OpenShift Serverless release to avoid issues.

3.3.1. Installing the Knative CLI using the OpenShift Container Platform web console

Using the OpenShift Container Platform web console provides a streamlined and intuitive user interface to install the Knative (kn) CLI. After the OpenShift Serverless Operator is installed, you will see a link to download the Knative (kn) CLI for Linux (amd64, s390x, ppc64le), macOS, or Windows from the Command Line Tools page in the OpenShift Container Platform web console.

Prerequisites

  • You have logged in to the OpenShift Container Platform web console.
  • The OpenShift Serverless Operator and Knative Serving are installed on your OpenShift Container Platform cluster.

    Important

    If libc is not available, you might see the following error when you run CLI commands:

    $ kn: No such file or directory
  • If you want to use the verification steps for this procedure, you must install the OpenShift CLI (oc).

Procedure

  1. Download the Knative (kn) CLI from the Command Line Tools page. You can access the Command Line Tools page by clicking the question circle icon in the top right corner of the web console and selecting Command Line Tools in the list.
  2. Unpack the archive:

    $ tar -xf <file>
  3. Move the kn binary to a directory on your PATH.
  4. To check your PATH, run:

    $ echo $PATH

Verification

  • Run the following commands to check that the correct Knative CLI resources and route have been created:

    $ oc get ConsoleCLIDownload

    Example output

    NAME                  DISPLAY NAME                                             AGE
    kn                    kn - OpenShift Serverless Command Line Interface (CLI)   2022-09-20T08:41:18Z
    oc-cli-downloads      oc - OpenShift Command Line Interface (CLI)              2022-09-20T08:00:20Z

    $ oc get route -n openshift-serverless

    Example output

    NAME   HOST/PORT                                  PATH   SERVICES                      PORT       TERMINATION     WILDCARD
    kn     kn-openshift-serverless.apps.example.com          knative-openshift-metrics-3   http-cli   edge/Redirect   None

3.3.2. Installing the Knative CLI for Linux by using an RPM package manager

For Red Hat Enterprise Linux (RHEL), you can install the Knative (kn) CLI as an RPM by using a package manager, such as yum or dnf. This allows the Knative CLI version to be automatically managed by the system. For example, using a command like dnf upgrade upgrades all packages, including kn, if a new version is available.

Prerequisites

  • You have an active OpenShift Container Platform subscription on your Red Hat account.

Procedure

  1. Register with Red Hat Subscription Manager:

    # subscription-manager register
  2. Pull the latest subscription data:

    # subscription-manager refresh
  3. Attach the subscription to the registered system:

    # subscription-manager attach --pool=<pool_id> 1

    1 Pool ID for an active OpenShift Container Platform subscription.
  4. Enable the repositories required by the Knative (kn) CLI:

    • Linux (x86_64, amd64)

      # subscription-manager repos --enable="openshift-serverless-1-for-rhel-8-x86_64-rpms"
    • Linux on IBM Z and LinuxONE (s390x)

      # subscription-manager repos --enable="openshift-serverless-1-for-rhel-8-s390x-rpms"
    • Linux on IBM Power (ppc64le)

      # subscription-manager repos --enable="openshift-serverless-1-for-rhel-8-ppc64le-rpms"
  5. Install the Knative (kn) CLI as an RPM by using a package manager:

    Example yum command

    # yum install openshift-serverless-clients

3.3.3. Installing the Knative CLI for Linux

If you are using a Linux distribution that does not have RPM or another package manager installed, you can install the Knative (kn) CLI as a binary file. To do this, you must download and unpack a tar.gz archive and add the binary to a directory on your PATH.

Prerequisites

  • If you are not using RHEL or Fedora, ensure that libc is installed in a directory on your library path.

    Important

    If libc is not available, you might see the following error when you run CLI commands:

    $ kn: No such file or directory

Procedure

  1. Download the relevant Knative (kn) CLI tar.gz archive:

    You can also download any version of kn by navigating to that version’s corresponding directory in the Serverless client download mirror.

  2. Unpack the archive:

    $ tar -xf <filename>
  3. Move the kn binary to a directory on your PATH.
  4. To check your PATH, run:

    $ echo $PATH
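For example, to place the kn binary on your PATH as described in step 3, you might run commands similar to the following; /usr/local/bin is only a common choice:

    $ chmod +x kn
    $ sudo mv kn /usr/local/bin/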

3.3.4. Installing the Knative CLI for macOS

If you are using macOS, you can install the Knative (kn) CLI as a binary file. To do this, you must download and unpack a tar.gz archive and add the binary to a directory on your PATH.

Procedure

  1. Download the Knative (kn) CLI tar.gz archive.

    You can also download any version of kn by navigating to that version’s corresponding directory in the Serverless client download mirror.

  2. Unpack and extract the archive.
  3. Move the kn binary to a directory on your PATH.
  4. To check your PATH, open a terminal window and run:

    $ echo $PATH

3.3.5. Installing the Knative CLI for Windows

If you are using Windows, you can install the Knative (kn) CLI as a binary file. To do this, you must download and unpack a ZIP archive and add the binary to a directory on your PATH.

Procedure

  1. Download the Knative (kn) CLI ZIP archive.

    You can also download any version of kn by navigating to that version’s corresponding directory in the Serverless client download mirror.

  2. Extract the archive with a ZIP program.
  3. Move the kn binary to a directory on your PATH.
  4. To check your PATH, open the command prompt and run the command:

    C:\> path

3.4. Installing Knative Serving

Installing Knative Serving allows you to create Knative services and functions on your cluster. It also allows you to use additional functionality such as autoscaling and networking options for your applications.

After you install the OpenShift Serverless Operator, you can install Knative Serving by using the default settings, or configure more advanced settings in the KnativeServing custom resource (CR). For more information about configuration options for the KnativeServing CR, see Global configuration.

Important

If you want to use Red Hat OpenShift distributed tracing with OpenShift Serverless, you must install and configure Red Hat OpenShift distributed tracing before you install Knative Serving.

3.4.1. Installing Knative Serving by using the web console

After you install the OpenShift Serverless Operator, install Knative Serving by using the OpenShift Container Platform web console. You can install Knative Serving by using the default settings or configure more advanced settings in the KnativeServing custom resource (CR).

Prerequisites

  • You have access to an OpenShift Container Platform account with cluster administrator access.
  • You have logged in to the OpenShift Container Platform web console.
  • You have installed the OpenShift Serverless Operator.

Procedure

  1. In the Administrator perspective of the OpenShift Container Platform web console, navigate to Operators → Installed Operators.
  2. Check that the Project dropdown at the top of the page is set to Project: knative-serving.
  3. Click Knative Serving in the list of Provided APIs for the OpenShift Serverless Operator to go to the Knative Serving tab.
  4. Click Create Knative Serving.
  5. In the Create Knative Serving page, you can install Knative Serving using the default settings by clicking Create.

    You can also modify settings for the Knative Serving installation by configuring the KnativeServing object, either by using the form provided or by editing the YAML.

    • Using the form is recommended for simpler configurations that do not require full control of KnativeServing object creation.
    • Editing the YAML is recommended for more complex configurations that require full control of KnativeServing object creation. You can access the YAML by clicking the edit YAML link in the top right of the Create Knative Serving page.

      After you complete the form, or have finished modifying the YAML, click Create.

      Note

      For more information about configuration options for the KnativeServing custom resource definition, see the documentation on Advanced installation configuration options.

  6. After you have installed Knative Serving, the KnativeServing object is created, and you are automatically directed to the Knative Serving tab. You will see the knative-serving custom resource in the list of resources.

Verification

  1. Click on the knative-serving custom resource in the Knative Serving tab.
  2. You will be automatically directed to the Knative Serving Overview page.

    Installed Operators page
  3. Scroll down to look at the list of Conditions.
  4. You should see a list of conditions with a status of True, as shown in the example image.

    Conditions
    Note

    It may take a few seconds for the Knative Serving resources to be created. You can check their status in the Resources tab.

  5. If the conditions have a status of Unknown or False, wait a few moments and then check again after you have confirmed that the resources have been created.

3.4.2. Installing Knative Serving by using YAML

After you install the OpenShift Serverless Operator, you can install Knative Serving by using the default settings, or configure more advanced settings in the KnativeServing custom resource (CR). You can use the following procedure to install Knative Serving by using YAML files and the oc CLI.

Prerequisites

  • You have access to an OpenShift Container Platform account with cluster administrator access.
  • You have installed the OpenShift Serverless Operator.
  • Install the OpenShift CLI (oc).

Procedure

  1. Create a file named serving.yaml and copy the following example YAML into it:

    apiVersion: operator.knative.dev/v1beta1
    kind: KnativeServing
    metadata:
        name: knative-serving
        namespace: knative-serving
  2. Apply the serving.yaml file:

    $ oc apply -f serving.yaml
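
If you want to configure more advanced settings rather than accepting the defaults, you can extend the same CR before you apply it. As a minimal sketch, the following example adds an autoscaler setting under spec.config; the keys shown here are illustrative, and the available options are described in the Global configuration documentation:

apiVersion: operator.knative.dev/v1beta1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
spec:
  config:
    autoscaler:
      enable-scale-to-zero: "true"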

Verification

  1. To verify the installation is complete, enter the following command:

    $ oc get knativeserving.operator.knative.dev/knative-serving -n knative-serving --template='{{range .status.conditions}}{{printf "%s=%s\n" .type .status}}{{end}}'

    Example output

    DependenciesInstalled=True
    DeploymentsAvailable=True
    InstallSucceeded=True
    Ready=True

    Note

    It may take a few seconds for the Knative Serving resources to be created.

    If the conditions have a status of Unknown or False, wait a few moments and then check again after you have confirmed that the resources have been created.

  2. Check that the Knative Serving resources have been created:

    $ oc get pods -n knative-serving

    Example output

    NAME                                                        READY   STATUS      RESTARTS   AGE
    activator-67ddf8c9d7-p7rm5                                  2/2     Running     0          4m
    activator-67ddf8c9d7-q84fz                                  2/2     Running     0          4m
    autoscaler-5d87bc6dbf-6nqc6                                 2/2     Running     0          3m59s
    autoscaler-5d87bc6dbf-h64rl                                 2/2     Running     0          3m59s
    autoscaler-hpa-77f85f5cc4-lrts7                             2/2     Running     0          3m57s
    autoscaler-hpa-77f85f5cc4-zx7hl                             2/2     Running     0          3m56s
    controller-5cfc7cb8db-nlccl                                 2/2     Running     0          3m50s
    controller-5cfc7cb8db-rmv7r                                 2/2     Running     0          3m18s
    domain-mapping-86d84bb6b4-r746m                             2/2     Running     0          3m58s
    domain-mapping-86d84bb6b4-v7nh8                             2/2     Running     0          3m58s
    domainmapping-webhook-769d679d45-bkcnj                      2/2     Running     0          3m58s
    domainmapping-webhook-769d679d45-fff68                      2/2     Running     0          3m58s
    storage-version-migration-serving-serving-0.26.0--1-6qlkb   0/1     Completed   0          3m56s
    webhook-5fb774f8d8-6bqrt                                    2/2     Running     0          3m57s
    webhook-5fb774f8d8-b8lt5                                    2/2     Running     0          3m57s

  3. Check that the necessary networking components have been installed to the automatically created knative-serving-ingress namespace:

    $ oc get pods -n knative-serving-ingress

    Example output

    NAME                                      READY   STATUS    RESTARTS   AGE
    net-kourier-controller-7d4b6c5d95-62mkf   1/1     Running   0          76s
    net-kourier-controller-7d4b6c5d95-qmgm2   1/1     Running   0          76s
    3scale-kourier-gateway-6688b49568-987qz   1/1     Running   0          75s
    3scale-kourier-gateway-6688b49568-b5tnp   1/1     Running   0          75s

3.4.3. Next steps

3.5. Installing Knative Eventing

To use event-driven architecture on your cluster, install Knative Eventing. You can create Knative components such as event sources, brokers, and channels and then use them to send events to applications or external systems.

After you install the OpenShift Serverless Operator, you can install Knative Eventing by using the default settings, or configure more advanced settings in the KnativeEventing custom resource (CR). For more information about configuration options for the KnativeEventing CR, see Global configuration.

Important

If you want to use Red Hat OpenShift distributed tracing with OpenShift Serverless, you must install and configure Red Hat OpenShift distributed tracing before you install Knative Eventing.

3.5.1. Installing Knative Eventing by using the web console

After you install the OpenShift Serverless Operator, install Knative Eventing by using the OpenShift Container Platform web console. You can install Knative Eventing by using the default settings or configure more advanced settings in the KnativeEventing custom resource (CR).

Prerequisites

  • You have access to an OpenShift Container Platform account with cluster administrator access.
  • You have logged in to the OpenShift Container Platform web console.
  • You have installed the OpenShift Serverless Operator.

Procedure

  1. In the Administrator perspective of the OpenShift Container Platform web console, navigate to Operators → Installed Operators.
  2. Check that the Project dropdown at the top of the page is set to Project: knative-eventing.
  3. Click Knative Eventing in the list of Provided APIs for the OpenShift Serverless Operator to go to the Knative Eventing tab.
  4. Click Create Knative Eventing.
  5. In the Create Knative Eventing page, you can choose to configure the KnativeEventing object by using either the default form provided, or by editing the YAML.

    • Using the form is recommended for simpler configurations that do not require full control of KnativeEventing object creation.

      Optional. If you are configuring the KnativeEventing object using the form, make any changes that you want to implement for your Knative Eventing deployment.

  6. Click Create.

    • Editing the YAML is recommended for more complex configurations that require full control of KnativeEventing object creation. You can access the YAML by clicking the edit YAML link in the top right of the Create Knative Eventing page.

      Optional. If you are configuring the KnativeEventing object by editing the YAML, make any changes to the YAML that you want to implement for your Knative Eventing deployment.

  7. Click Create.
  8. After you have installed Knative Eventing, the KnativeEventing object is created, and you are automatically directed to the Knative Eventing tab. You will see the knative-eventing custom resource in the list of resources.

Verification

  1. Click on the knative-eventing custom resource in the Knative Eventing tab.
  2. You are automatically directed to the Knative Eventing Overview page.

    Knative Eventing Overview page
  3. Scroll down to look at the list of Conditions.
  4. You should see a list of conditions with a status of True, as shown in the example image.

    Conditions
    Note

    It may take a few seconds for the Knative Eventing resources to be created. You can check their status in the Resources tab.

  5. If the conditions have a status of Unknown or False, wait a few moments and then check again after you have confirmed that the resources have been created.

3.5.2. Installing Knative Eventing by using YAML

After you install the OpenShift Serverless Operator, you can install Knative Eventing by using the default settings, or configure more advanced settings in the KnativeEventing custom resource (CR). You can use the following procedure to install Knative Eventing by using YAML files and the oc CLI.

Prerequisites

  • You have access to an OpenShift Container Platform account with cluster administrator access.
  • You have installed the OpenShift Serverless Operator.
  • Install the OpenShift CLI (oc).

Procedure

  1. Create a file named eventing.yaml.
  2. Copy the following sample YAML into eventing.yaml:

    apiVersion: operator.knative.dev/v1beta1
    kind: KnativeEventing
    metadata:
        name: knative-eventing
        namespace: knative-eventing
  3. Optional. Make any changes to the YAML that you want to implement for your Knative Eventing deployment.
  4. Apply the eventing.yaml file by entering:

    $ oc apply -f eventing.yaml

Verification

  1. Verify the installation is complete by entering the following command and observing the output:

    $ oc get knativeeventing.operator.knative.dev/knative-eventing \
      -n knative-eventing \
      --template='{{range .status.conditions}}{{printf "%s=%s\n" .type .status}}{{end}}'

    Example output

    InstallSucceeded=True
    Ready=True

    Note

    It may take a few seconds for the Knative Eventing resources to be created.

  2. If the conditions have a status of Unknown or False, wait a few moments and then check again after you have confirmed that the resources have been created.
  3. Check that the Knative Eventing resources have been created by entering:

    $ oc get pods -n knative-eventing

    Example output

    NAME                                   READY   STATUS    RESTARTS   AGE
    broker-controller-58765d9d49-g9zp6     1/1     Running   0          7m21s
    eventing-controller-65fdd66b54-jw7bh   1/1     Running   0          7m31s
    eventing-webhook-57fd74b5bd-kvhlz      1/1     Running   0          7m31s
    imc-controller-5b75d458fc-ptvm2        1/1     Running   0          7m19s
    imc-dispatcher-64f6d5fccb-kkc4c        1/1     Running   0          7m18s

3.5.3. Installing Knative Kafka

Knative Kafka provides integration options for you to use supported versions of the Apache Kafka message streaming platform with OpenShift Serverless. Knative Kafka functionality is available in an OpenShift Serverless installation if you have installed the KnativeKafka custom resource.

Prerequisites

  • You have installed the OpenShift Serverless Operator and Knative Eventing on your cluster.
  • You have access to a Red Hat AMQ Streams cluster.
  • Install the OpenShift CLI (oc) if you want to use the verification steps.
  • You have cluster administrator permissions on OpenShift Container Platform.
  • You are logged in to the OpenShift Container Platform web console.

Procedure

  1. In the Administrator perspective, navigate to Operators → Installed Operators.
  2. Check that the Project dropdown at the top of the page is set to Project: knative-eventing.
  3. In the list of Provided APIs for the OpenShift Serverless Operator, find the Knative Kafka box and click Create Instance.
  4. Configure the KnativeKafka object in the Create Knative Kafka page.

    Important

    To use the Kafka channel, source, broker, or sink on your cluster, you must set the enabled switch to true for each option that you want to use. These switches are set to false by default. Additionally, to use the Kafka channel, broker, or sink, you must specify the bootstrap servers.

    Example KnativeKafka custom resource

    apiVersion: operator.serverless.openshift.io/v1alpha1
    kind: KnativeKafka
    metadata:
        name: knative-kafka
        namespace: knative-eventing
    spec:
        channel:
            enabled: true 1
            bootstrapServers: <bootstrap_servers> 2
        source:
            enabled: true 3
        broker:
            enabled: true 4
            defaultConfig:
                bootstrapServers: <bootstrap_servers> 5
                numPartitions: <num_partitions> 6
                replicationFactor: <replication_factor> 7
        sink:
            enabled: true 8

    1
    Enables developers to use the KafkaChannel channel type in the cluster.
    2
    A comma-separated list of bootstrap servers from your AMQ Streams cluster.
    3
    Enables developers to use the KafkaSource event source type in the cluster.
    4
    Enables developers to use the Knative Kafka broker implementation in the cluster.
    5
    A comma-separated list of bootstrap servers from your Red Hat AMQ Streams cluster.
    6
    Defines the number of partitions of the Kafka topics that back the Broker objects. The default is 10.
    7
    Defines the replication factor of the Kafka topics that back the Broker objects. The default is 3.
    8
    Enables developers to use a Kafka sink in the cluster.
    Note

    The replicationFactor value must be less than or equal to the number of nodes of your Red Hat AMQ Streams cluster.

    1. Using the form is recommended for simpler configurations that do not require full control of KnativeKafka object creation.
    2. Editing the YAML is recommended for more complex configurations that require full control of KnativeKafka object creation. You can access the YAML by clicking the Edit YAML link in the top right of the Create Knative Kafka page.
  5. Click Create after you have completed any of the optional configurations for Kafka. You are automatically directed to the Knative Kafka tab where knative-kafka is in the list of resources.

Verification

  1. Click on the knative-kafka resource in the Knative Kafka tab. You are automatically directed to the Knative Kafka Overview page.
  2. View the list of Conditions for the resource and confirm that they have a status of True.

    Kafka Knative Overview page showing Conditions

    If the conditions have a status of Unknown or False, wait a few moments to refresh the page.

  3. Check that the Knative Kafka resources have been created:

    $ oc get pods -n knative-eventing

    Example output

    NAME                                        READY   STATUS    RESTARTS   AGE
    kafka-broker-dispatcher-7769fbbcbb-xgffn    2/2     Running   0          44s
    kafka-broker-receiver-5fb56f7656-fhq8d      2/2     Running   0          44s
    kafka-channel-dispatcher-84fd6cb7f9-k2tjv   2/2     Running   0          44s
    kafka-channel-receiver-9b7f795d5-c76xr      2/2     Running   0          44s
    kafka-controller-6f95659bf6-trd6r           2/2     Running   0          44s
    kafka-source-dispatcher-6bf98bdfff-8bcsn    2/2     Running   0          44s
    kafka-webhook-eventing-68dc95d54b-825xs     2/2     Running   0          44s
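
As an additional check, you can inspect the conditions reported on the KnativeKafka custom resource itself. This sketch assumes that the KnativeKafka CR exposes status conditions in the same way as the KnativeServing and KnativeEventing resources shown earlier:

$ oc get knativekafka.operator.serverless.openshift.io/knative-kafka \
  -n knative-eventing \
  --template='{{range .status.conditions}}{{printf "%s=%s\n" .type .status}}{{end}}'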

3.5.4. Next steps

3.6. Configuring Knative Kafka

Knative Kafka provides integration options for you to use supported versions of the Apache Kafka message streaming platform with OpenShift Serverless. Kafka provides options for event source, channel, broker, and event sink capabilities.

In addition to the Knative Eventing components that are provided as part of a core OpenShift Serverless installation, cluster administrators can install the KnativeKafka custom resource (CR).

Note

Knative Kafka is not currently supported for IBM Z and IBM Power.

The KnativeKafka CR provides users with additional options, such as:

  • Kafka source
  • Kafka channel
  • Kafka broker
  • Kafka sink

3.7. Configuring OpenShift Serverless Functions

To improve the process of deploying your application code, you can use OpenShift Serverless to deploy stateless, event-driven functions as a Knative service on OpenShift Container Platform. If you want to develop functions, you must complete the setup steps.

3.7.1. Prerequisites

To enable the use of OpenShift Serverless Functions on your cluster, you must complete the following steps:

  • The OpenShift Serverless Operator and Knative Serving are installed on your cluster.

    Note

    Functions are deployed as a Knative service. If you want to use event-driven architecture with your functions, you must also install Knative Eventing.

  • You have the oc CLI installed.
  • You have the Knative (kn) CLI installed. Installing the Knative CLI enables the use of kn func commands which you can use to create and manage functions.
  • You have installed Docker Container Engine or Podman version 3.4.7 or higher.
  • You have access to an available image registry, such as the OpenShift Container Registry.
  • If you are using Quay.io as the image registry, you must ensure that either the repository is not private, or that you have followed the OpenShift Container Platform documentation on Allowing pods to reference images from other secured registries.
  • If you are using the OpenShift Container Registry, a cluster administrator must expose the registry.

3.7.2. Setting up Podman

To use advanced container management features, you might want to use Podman with OpenShift Serverless Functions. To do so, you need to start the Podman service and configure the Knative (kn) CLI to connect to it.

Procedure

  1. Start the Podman service that serves the Docker API on a UNIX socket at ${XDG_RUNTIME_DIR}/podman/podman.sock:

    $ systemctl start --user podman.socket
    Note

    On most systems, this socket is located at /run/user/$(id -u)/podman/podman.sock.

  2. Establish the environment variable that is used to build a function:

    $ export DOCKER_HOST="unix://${XDG_RUNTIME_DIR}/podman/podman.sock"
  3. Run the build command inside your function project directory with the -v flag to see verbose output. You should see a connection to your local UNIX socket:

    $ kn func build -v
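
If the build cannot connect to the socket, you can first confirm that the socket unit is active. This check is not part of the documented procedure, but it uses the same systemd user service that you started in the first step:

$ systemctl --user status podman.socket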

3.7.3. Setting up Podman on macOS

To use advanced container management features, you might want to use Podman with OpenShift Serverless Functions. To do so on macOS, you need to start the Podman machine and configure the Knative (kn) CLI to connect to it.

Procedure

  1. Create the Podman machine:

    $ podman machine init --memory=8192 --cpus=2 --disk-size=20
  2. Start the Podman machine, which serves the Docker API on a UNIX socket:

    $ podman machine start
    Starting machine "podman-machine-default"
    Waiting for VM ...
    Mounting volume... /Users/myuser:/Users/user
    
    [...truncated output...]
    
    You can still connect Docker API clients by setting DOCKER_HOST using the
    following command in your terminal session:
    
    	export DOCKER_HOST='unix:///Users/myuser/.local/share/containers/podman/machine/podman-machine-default/podman.sock'
    
    Machine "podman-machine-default" started successfully
    Note

    On most macOS systems, this socket is located at /Users/myuser/.local/share/containers/podman/machine/podman-machine-default/podman.sock.

  3. Establish the environment variable that is used to build a function:

    $ export DOCKER_HOST='unix:///Users/myuser/.local/share/containers/podman/machine/podman-machine-default/podman.sock'
  4. Run the build command inside your function project directory with the -v flag to see verbose output. You should see a connection to your local UNIX socket:

    $ kn func build -v
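
If the build cannot connect to the socket, you can first confirm that the Podman machine is running. This check is not part of the documented procedure:

$ podman machine list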

3.7.4. Next steps

3.8. Serverless upgrades

OpenShift Serverless should be upgraded without skipping release versions. This section shows how to resolve problems with upgrading.

3.8.1. Resolving an OpenShift Serverless Operator upgrade failure

You might encounter an error when upgrading OpenShift Serverless Operator, for example, when performing manual uninstalls and reinstalls. If you encounter an error, you must manually reinstall OpenShift Serverless Operator.

Procedure

  1. Identify the version of OpenShift Serverless Operator that was installed originally by searching in the OpenShift Serverless Release Notes.

    For example, the error message during attempted upgrade might contain the following string:

    The installed KnativeServing version is v1.5.0.

    In this example, the KnativeServing MAJOR.MINOR version is 1.5, which is covered in the release notes for OpenShift Serverless 1.26: OpenShift Serverless now uses Knative Serving 1.5.

  2. Uninstall OpenShift Serverless Operator and all of its install plans.
  3. Manually install the version of OpenShift Serverless Operator that you discovered in the first step. To install, first create a serverless-subscription.yaml file as shown in the following example:

    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: serverless-operator
      namespace: openshift-serverless
    spec:
      channel: stable
      name: serverless-operator
      source: redhat-operators
      sourceNamespace: openshift-marketplace
      installPlanApproval: Manual
      startingCSV: serverless-operator.v1.26.0
  4. Then, install the subscription by running the following command:

    $ oc apply -f serverless-subscription.yaml
  5. Upgrade by manually approving the upgrade install plans as they appear.
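
You can approve the install plans in the web console, or from the CLI. The following commands are a sketch of the CLI approach; the install plan name is a placeholder that you must replace with the name reported by the first command:

$ oc get installplans -n openshift-serverless
$ oc patch installplan <install_plan_name> \
  -n openshift-serverless \
  --type merge \
  --patch '{"spec":{"approved":true}}'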

Chapter 4. Serving

4.1. Getting started with Knative Serving

4.1.1. Serverless applications

Serverless applications are created and deployed as Kubernetes services, defined by a route and a configuration, and contained in a YAML file. To deploy a serverless application using OpenShift Serverless, you must create a Knative Service object.

Example Knative Service object YAML file

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello 1
  namespace: default 2
spec:
  template:
    spec:
      containers:
        - image: docker.io/openshift/hello-openshift 3
          env:
            - name: RESPONSE 4
              value: "Hello Serverless!"

1
The name of the application.
2
The namespace the application uses.
3
The image of the application.
4
The environment variable printed out by the sample application.

You can create a serverless application by using one of the following methods:

  • Create a Knative service from the OpenShift Container Platform web console.

    See Creating applications using the Developer perspective for more information.

  • Create a Knative service by using the Knative (kn) CLI.
  • Create and apply a Knative Service object as a YAML file, by using the oc CLI.
4.1.1.1. Creating serverless applications by using the Knative CLI

Using the Knative (kn) CLI to create serverless applications provides a more streamlined and intuitive user interface over modifying YAML files directly. You can use the kn service create command to create a basic serverless application.

Prerequisites

  • OpenShift Serverless Operator and Knative Serving are installed on your cluster.
  • You have installed the Knative (kn) CLI.
  • You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.

Procedure

  • Create a Knative service:

    $ kn service create <service-name> --image <image> --tag <tag-value>

    Where:

    • --image is the URI of the image for the application.
    • --tag is an optional flag that can be used to add a tag to the initial revision that is created with the service.

      Example command

      $ kn service create event-display \
          --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest

      Example output

      Creating service 'event-display' in namespace 'default':
      
        0.271s The Route is still working to reflect the latest desired specification.
        0.580s Configuration "event-display" is waiting for a Revision to become ready.
        3.857s ...
        3.861s Ingress has not yet been reconciled.
        4.270s Ready to serve.
      
      Service 'event-display' created with latest revision 'event-display-bxshg-1' and URL:
      http://event-display-default.apps-crc.testing
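
As an optional check that is not part of the procedure above, you can review the service after it is created by using the kn service describe command:

$ kn service describe event-display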

4.1.1.2. Creating serverless applications using YAML

Creating Knative resources by using YAML files uses a declarative API, which enables you to describe applications declaratively and in a reproducible manner. To create a serverless application by using YAML, you must create a YAML file that defines a Knative Service object, then apply it by using oc apply.

After the service is created and the application is deployed, Knative creates an immutable revision for this version of the application. Knative also performs network programming to create a route, ingress, service, and load balancer for your application and automatically scales your pods up and down based on traffic.

Prerequisites

  • OpenShift Serverless Operator and Knative Serving are installed on your cluster.
  • You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
  • Install the OpenShift CLI (oc).

Procedure

  1. Create a YAML file containing the following sample code:

    apiVersion: serving.knative.dev/v1
    kind: Service
    metadata:
      name: event-delivery
      namespace: default
    spec:
      template:
        spec:
          containers:
            - image: quay.io/openshift-knative/knative-eventing-sources-event-display:latest
              env:
                - name: RESPONSE
                  value: "Hello Serverless!"
  2. Navigate to the directory where the YAML file is contained, and deploy the application by applying the YAML file:

    $ oc apply -f <filename>

If you do not want to switch to the Developer perspective in the OpenShift Container Platform web console or use the Knative (kn) CLI or YAML files, you can create Knative components by using the Administrator perspective of the OpenShift Container Platform web console.

4.1.1.3. Creating serverless applications using the Administrator perspective

Serverless applications are created and deployed as Kubernetes services, defined by a route and a configuration, and contained in a YAML file. To deploy a serverless application using OpenShift Serverless, you must create a Knative Service object.

Example Knative Service object YAML file

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello 1
  namespace: default 2
spec:
  template:
    spec:
      containers:
        - image: docker.io/openshift/hello-openshift 3
          env:
            - name: RESPONSE 4
              value: "Hello Serverless!"

1
The name of the application.
2
The namespace the application uses.
3
The image of the application.
4
The environment variable printed out by the sample application.

After the service is created and the application is deployed, Knative creates an immutable revision for this version of the application. Knative also performs network programming to create a route, ingress, service, and load balancer for your application and automatically scales your pods up and down based on traffic.

Prerequisites

To create serverless applications using the Administrator perspective, ensure that you have completed the following steps.

  • The OpenShift Serverless Operator and Knative Serving are installed.
  • You have logged in to the web console and are in the Administrator perspective.

Procedure

  1. Navigate to the Serverless → Serving page.
  2. In the Create list, select Service.
  3. Manually enter YAML or JSON definitions, or drag and drop a file into the editor.
  4. Click Create.
4.1.1.4. Creating a service using offline mode

You can execute kn service commands in offline mode, so that no changes happen on the cluster, and instead the service descriptor file is created on your local machine. After the descriptor file is created, you can modify the file before propagating changes to the cluster.

Important

The offline mode of the Knative CLI is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Prerequisites

  • OpenShift Serverless Operator and Knative Serving are installed on your cluster.
  • You have installed the Knative (kn) CLI.

Procedure

  1. In offline mode, create a local Knative service descriptor file:

    $ kn service create event-display \
        --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest \
        --target ./ \
        --namespace test

    Example output

    Service 'event-display' created in namespace 'test'.

    • The --target ./ flag enables offline mode and specifies ./ as the directory for storing the new directory tree.

      If you do not specify an existing directory, but use a filename, such as --target my-service.yaml, then no directory tree is created. Instead, only the service descriptor file my-service.yaml is created in the current directory.

      The filename can have the .yaml, .yml, or .json extension. Choosing .json creates the service descriptor file in the JSON format.

    • The --namespace test option places the new service in the test namespace.

      If you do not use --namespace, and you are logged in to an OpenShift Container Platform cluster, the descriptor file is created in the current namespace. Otherwise, the descriptor file is created in the default namespace.

  2. Examine the created directory structure:

    $ tree ./

    Example output

    ./
    └── test
        └── ksvc
            └── event-display.yaml
    
    2 directories, 1 file

    • The current ./ directory specified with --target contains the new test/ directory that is named after the specified namespace.
    • The test/ directory contains the ksvc directory, named after the resource type.
    • The ksvc directory contains the descriptor file event-display.yaml, named according to the specified service name.
  3. Examine the generated service descriptor file:

    $ cat test/ksvc/event-display.yaml

    Example output

    apiVersion: serving.knative.dev/v1
    kind: Service
    metadata:
      creationTimestamp: null
      name: event-display
      namespace: test
    spec:
      template:
        metadata:
          annotations:
            client.knative.dev/user-image: quay.io/openshift-knative/knative-eventing-sources-event-display:latest
          creationTimestamp: null
        spec:
          containers:
          - image: quay.io/openshift-knative/knative-eventing-sources-event-display:latest
            name: ""
            resources: {}
    status: {}

  4. List information about the new service:

    $ kn service describe event-display --target ./ --namespace test

    Example output

    Name:       event-display
    Namespace:  test
    Age:
    URL:
    
    Revisions:
    
    Conditions:
      OK TYPE    AGE REASON

    • The --target ./ option specifies the root directory for the directory structure containing namespace subdirectories.

      Alternatively, you can directly specify a YAML or JSON filename with the --target option. The accepted file extensions are .yaml, .yml, and .json.

    • The --namespace option specifies the namespace, which communicates to kn the subdirectory that contains the necessary service descriptor file.

      If you do not use --namespace, and you are logged in to an OpenShift Container Platform cluster, kn searches for the service in the subdirectory that is named after the current namespace. Otherwise, kn searches in the default/ subdirectory.

  5. Use the service descriptor file to create the service on the cluster:

    $ kn service create -f test/ksvc/event-display.yaml

    Example output

    Creating service 'event-display' in namespace 'test':
    
      0.058s The Route is still working to reflect the latest desired specification.
      0.098s ...
      0.168s Configuration "event-display" is waiting for a Revision to become ready.
     23.377s ...
     23.419s Ingress has not yet been reconciled.
     23.534s Waiting for load balancer to be ready
     23.723s Ready to serve.
    
    Service 'event-display' created to latest revision 'event-display-00001' is available at URL:
    http://event-display-test.apps.example.com

4.1.1.5. Additional resources

4.1.2. Verifying your serverless application deployment

To verify that your serverless application has been deployed successfully, you must get the application URL created by Knative, and then send a request to that URL and observe the output. OpenShift Serverless supports the use of both HTTP and HTTPS URLs; however, the output from oc get ksvc always prints URLs using the http:// format.

4.1.2.1. Verifying your serverless application deployment

To verify that your serverless application has been deployed successfully, you must get the application URL created by Knative, and then send a request to that URL and observe the output. OpenShift Serverless supports the use of both HTTP and HTTPS URLs; however, the output from oc get ksvc always prints URLs using the http:// format.

Prerequisites

  • OpenShift Serverless Operator and Knative Serving are installed on your cluster.
  • You have installed the oc CLI.
  • You have created a Knative service.


Procedure

  1. Find the application URL:

    $ oc get ksvc <service_name>

    Example output

    NAME            URL                                        LATESTCREATED         LATESTREADY           READY   REASON
    event-delivery   http://event-delivery-default.example.com   event-delivery-4wsd2   event-delivery-4wsd2   True

  2. Make a request to your cluster and observe the output.

    Example HTTP request

    $ curl http://event-delivery-default.example.com

    Example HTTPS request

    $ curl https://event-delivery-default.example.com

    Example output

    Hello Serverless!

  3. Optional. If you receive an error relating to a self-signed certificate in the certificate chain, you can add the --insecure flag to the curl command to ignore the error:

    $ curl https://event-delivery-default.example.com --insecure

    Example output

    Hello Serverless!

    Important

    Self-signed certificates must not be used in a production deployment. This method is only for testing purposes.

  4. Optional. If your OpenShift Container Platform cluster is configured with a certificate that is signed by a certificate authority (CA) but not yet globally configured for your system, you can specify this with the curl command. The path to the certificate can be passed to the curl command by using the --cacert flag:

    $ curl https://event-delivery-default.example.com --cacert <file>

    Example output

    Hello Serverless!

4.2. Autoscaling

4.2.1. Autoscaling

Knative Serving provides automatic scaling, or autoscaling, for applications to match incoming demand. For example, if an application is receiving no traffic, and scale-to-zero is enabled, Knative Serving scales the application down to zero replicas. If scale-to-zero is disabled, the application is scaled down to the minimum number of replicas configured for applications on the cluster. Replicas can also be scaled up to meet demand if traffic to the application increases.

Autoscaling settings for Knative services can be global settings that are configured by cluster administrators, or per-revision settings that are configured for individual services.

You can modify per-revision settings for your services by using the OpenShift Container Platform web console, by modifying the YAML file for your service, or by using the Knative (kn) CLI.

Note

Any limits or targets that you set for a service are measured against a single instance of your application. For example, setting the target annotation to 50 configures the autoscaler to scale the application so that each revision handles 50 requests at a time.

4.2.2. Scale bounds

Scale bounds determine the minimum and maximum numbers of replicas that can serve an application at any given time. You can set scale bounds for an application to help prevent cold starts or control computing costs.

4.2.2.1. Minimum scale bounds

The minimum number of replicas that can serve an application is determined by the min-scale annotation. If scale to zero is not enabled, the min-scale value defaults to 1.

The min-scale value defaults to 0 replicas if the following conditions are met:

  • The min-scale annotation is not set
  • Scaling to zero is enabled
  • The class KPA is used

Example service spec with min-scale annotation

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: example-service
  namespace: default
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/min-scale: "0"
...

4.2.2.1.1. Setting the min-scale annotation by using the Knative CLI

Using the Knative (kn) CLI to set the min-scale annotation provides a more streamlined and intuitive user interface over modifying YAML files directly. You can use the kn service command with the --scale-min flag to create or modify the min-scale value for a service.

Prerequisites

  • Knative Serving is installed on the cluster.
  • You have installed the Knative (kn) CLI.

Procedure

  • Set the minimum number of replicas for the service by using the --scale-min flag:

    $ kn service create <service_name> --image <image_uri> --scale-min <integer>

    Example command

    $ kn service create example-service --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest --scale-min 2
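
To change the value for a service that already exists, you can use the kn service update command with the same flag. The following sketch reuses the example-service name from the previous command:

$ kn service update example-service --scale-min 0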

4.2.2.2. Maximum scale bounds

The maximum number of replicas that can serve an application is determined by the max-scale annotation. If the max-scale annotation is not set, there is no upper limit for the number of replicas created.

Example service spec with max-scale annotation

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: example-service
  namespace: default
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/max-scale: "10"
...

4.2.2.2.1. Setting the max-scale annotation by using the Knative CLI

Using the Knative (kn) CLI to set the max-scale annotation provides a more streamlined and intuitive user interface over modifying YAML files directly. You can use the kn service command with the --scale-max flag to create or modify the max-scale value for a service.

Prerequisites

  • Knative Serving is installed on the cluster.
  • You have installed the Knative (kn) CLI.

Procedure

  • Set the maximum number of replicas for the service by using the --scale-max flag:

    $ kn service create <service_name> --image <image_uri> --scale-max <integer>

    Example command

    $ kn service create example-service --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest --scale-max 10

4.2.3. Concurrency

Concurrency determines the number of simultaneous requests that can be processed by each replica of an application at any given time. Concurrency can be configured as a soft limit or a hard limit:

  • A soft limit is a targeted requests limit, rather than a strictly enforced bound. For example, if there is a sudden burst of traffic, the soft limit target can be exceeded.
  • A hard limit is a strictly enforced upper bound requests limit. If concurrency reaches the hard limit, surplus requests are buffered and must wait until there is enough free capacity to execute the requests.

    Important

    Using a hard limit configuration is only recommended if there is a clear use case for it with your application. Specifying a low hard limit might have a negative impact on the throughput and latency of an application, and might cause cold starts.

Adding a soft target and a hard limit means that the autoscaler targets the soft target number of concurrent requests, but imposes a hard limit of the hard limit value for the maximum number of requests.

If the hard limit value is less than the soft limit value, the soft limit value is tuned down, because there is no need to target more requests than the number that can actually be handled.
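
For example, the following command is a sketch of a service that targets 50 concurrent requests per replica while enforcing a hard limit of 100 requests. It uses the same --concurrency-target and --concurrency-limit flags that are described in the following sections:

$ kn service create example-service \
    --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest \
    --concurrency-target 50 \
    --concurrency-limit 100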

4.2.3.1. Configuring a soft concurrency target

A soft limit is a targeted requests limit, rather than a strictly enforced bound. For example, if there is a sudden burst of traffic, the soft limit target can be exceeded. You can specify a soft concurrency target for your Knative service by setting the autoscaling.knative.dev/target annotation in the spec, or by using the kn service command with the correct flags.

Procedure

  • Optional: Set the autoscaling.knative.dev/target annotation for your Knative service in the spec of the Service custom resource:

    Example service spec

    apiVersion: serving.knative.dev/v1
    kind: Service
    metadata:
      name: example-service
      namespace: default
    spec:
      template:
        metadata:
          annotations:
            autoscaling.knative.dev/target: "200"

  • Optional: Use the kn service command to specify the --concurrency-target flag:

    $ kn service create <service_name> --image <image_uri> --concurrency-target <integer>

    Example command to create a service with a concurrency target of 50 requests

    $ kn service create example-service --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest --concurrency-target 50

4.2.3.2. Configuring a hard concurrency limit

A hard concurrency limit is a strictly enforced upper bound requests limit. If concurrency reaches the hard limit, surplus requests are buffered and must wait until there is enough free capacity to execute the requests. You can specify a hard concurrency limit for your Knative service by modifying the containerConcurrency spec, or by using the kn service command with the correct flags.

Procedure

  • Optional: Set the containerConcurrency spec for your Knative service in the spec of the Service custom resource:

    Example service spec

    apiVersion: serving.knative.dev/v1
    kind: Service
    metadata:
      name: example-service
      namespace: default
    spec:
      template:
        spec:
          containerConcurrency: 50

    The default value is 0, which means that there is no limit on the number of simultaneous requests that are permitted to flow into one replica of the service at a time.

    A value greater than 0 specifies the exact number of requests that are permitted to flow into one replica of the service at a time. This example would enable a hard concurrency limit of 50 requests.

  • Optional: Use the kn service command to specify the --concurrency-limit flag:

    $ kn service create <service_name> --image <image_uri> --concurrency-limit <integer>

    Example command to create a service with a concurrency limit of 50 requests

    $ kn service create example-service --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest --concurrency-limit 50

4.2.3.3. Concurrency target utilization

This value specifies the percentage of the concurrency limit that is actually targeted by the autoscaler. This is also known as specifying the hotness at which a replica runs, which enables the autoscaler to scale up before the defined hard limit is reached.

For example, if the containerConcurrency value is set to 10, and the target-utilization-percentage value is set to 70 percent, the autoscaler creates a new replica when the average number of concurrent requests across all existing replicas reaches 7. Requests numbered 7 to 10 are still sent to the existing replicas, but additional replicas are started in anticipation of being required after the containerConcurrency value is reached.

Example service configured using the target-utilization-percentage annotation

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: example-service
  namespace: default
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/target-utilization-percentage: "70"
...
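
As a sketch that combines the two settings from the preceding example, the following service spec sets a hard limit of 10 concurrent requests with a target utilization of 70 percent, so the autoscaler adds a replica when the average concurrency across existing replicas reaches 7:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: example-service
  namespace: default
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/target-utilization-percentage: "70"
    spec:
      containerConcurrency: 10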

4.2.4. Scale-to-zero

Knative Serving provides automatic scaling, or autoscaling, for applications to match incoming demand.

4.2.4.1. Enabling scale-to-zero

You can use the enable-scale-to-zero spec to enable or disable scale-to-zero globally for applications on the cluster.

Prerequisites

  • You have installed OpenShift Serverless Operator and Knative Serving on your cluster.
  • You have cluster administrator permissions.
  • You are using the default Knative Pod Autoscaler. The scale-to-zero feature is not available if you are using the Kubernetes Horizontal Pod Autoscaler.

Procedure

  • Modify the enable-scale-to-zero spec in the KnativeServing custom resource (CR):

    Example KnativeServing CR

    apiVersion: operator.knative.dev/v1beta1
    kind: KnativeServing
    metadata:
      name: knative-serving
    spec:
      config:
        autoscaler:
          enable-scale-to-zero: "false" 1

    1
    The enable-scale-to-zero spec can be either "true" or "false". If set to true, scale-to-zero is enabled. If set to false, applications are scaled down to the configured minimum scale bound. The default value is "true".
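
One way to apply this change without editing the full CR manually is to use a merge patch. This is a sketch that assumes the KnativeServing CR is named knative-serving in the knative-serving namespace, as in the installation steps earlier in this document:

$ oc patch knativeserving knative-serving \
  -n knative-serving \
  --type merge \
  --patch '{"spec":{"config":{"autoscaler":{"enable-scale-to-zero":"false"}}}}'
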
4.2.4.2. Configuring the scale-to-zero grace period

Knative Serving provides automatic scaling down to zero pods for applications. You can use the scale-to-zero-grace-period spec to define an upper bound time limit that Knative waits for scale-to-zero machinery to be in place before the last replica of an application is removed.

Prerequisites

  • You have installed OpenShift Serverless Operator and Knative Serving on your cluster.
  • You have cluster administrator permissions.
  • You are using the default Knative Pod Autoscaler. The scale-to-zero feature is not available if you are using the Kubernetes Horizontal Pod Autoscaler.

Procedure

  • Modify the scale-to-zero-grace-period spec in the KnativeServing custom resource (CR):

    Example KnativeServing CR

    apiVersion: operator.knative.dev/v1beta1
    kind: KnativeServing
    metadata:
      name: knative-serving
    spec:
      config:
        autoscaler:
          scale-to-zero-grace-period: "30s" 1

    1
    The grace period time in seconds. The default value is 30 seconds.

4.3. Configuring Serverless applications

4.3.1. Overriding Knative Serving system deployment configurations

You can override the default configurations for some specific deployments by modifying the deployments spec in the KnativeServing custom resources (CRs).

Note

You can only override probes that are defined in the deployment by default.

All Knative Serving deployments define a readiness and a liveness probe by default, with these exceptions:

  • net-kourier-controller and 3scale-kourier-gateway only define a readiness probe.
  • net-istio-controller and net-istio-webhook define no probes.
4.3.1.1. Overriding system deployment configurations

Currently, overriding default configuration settings is supported for the resources, replicas, labels, annotations, and nodeSelector fields, as well as for the readiness and liveness fields for probes.

In the following example, a KnativeServing CR overrides the webhook deployment so that:

  • The readiness probe timeout for net-kourier-controller is set to be 10 seconds.
  • The deployment has specified CPU and memory resource limits.
  • The deployment has 3 replicas.
  • The example-label: label label is added.
  • The example-annotation: annotation annotation is added.
  • The nodeSelector field is set to select nodes with the disktype: hdd label.
Note

The KnativeServing CR label and annotation settings override the deployment’s labels and annotations for both the deployment itself and the resulting pods.

KnativeServing CR example

apiVersion: operator.knative.dev/v1beta1
kind: KnativeServing
metadata:
  name: ks
  namespace: knative-serving
spec:
  high-availability:
    replicas: 2
  deployments:
  - name: net-kourier-controller
    readinessProbes: 1
      - container: controller
        timeoutSeconds: 10
  - name: webhook
    resources:
    - container: webhook
      requests:
        cpu: 300m
        memory: 60Mi
      limits:
        cpu: 1000m
        memory: 1000Mi
    replicas: 3
    labels:
      example-label: label
    annotations:
      example-annotation: annotation
    nodeSelector:
      disktype: hdd

1
You can use the readiness and liveness probe overrides to override all fields of a probe in a container of a deployment as specified in the Kubernetes API except for the fields related to the probe handler: exec, grpc, httpGet, and tcpSocket.

4.3.2. Multi-container support for Serving

You can deploy a multi-container pod by using a single Knative service. This method is useful for separating application responsibilities into smaller, specialized parts.

Important

Multi-container support for Serving is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

4.3.2.1. Configuring a multi-container service

Multi-container support is enabled by default. You can create a multi-container pod by specifying multiple containers in the service.

Procedure

  1. Modify your service to include additional containers. Only one container can handle requests, so specify ports for exactly one container. Here is an example configuration with two containers:

    Multiple containers configuration

    apiVersion: serving.knative.dev/v1
    kind: Service
    ...
    spec:
      template:
        spec:
          containers:
            - name: first-container 1
              image: gcr.io/knative-samples/helloworld-go
              ports:
                - containerPort: 8080 2
            - name: second-container 3
              image: gcr.io/knative-samples/helloworld-java

    1
    First container configuration.
    2
    Port specification for the first container.
    3
    Second container configuration.

4.3.3. EmptyDir volumes

emptyDir volumes are empty volumes that are created when a pod is created, and are used to provide temporary working disk space. emptyDir volumes are deleted when the pod they were created for is deleted.

4.3.3.1. Configuring the EmptyDir extension

The kubernetes.podspec-volumes-emptydir extension controls whether emptyDir volumes can be used with Knative Serving. To enable using emptyDir volumes, you must modify the KnativeServing custom resource (CR) to include the following YAML:

Example KnativeServing CR

apiVersion: operator.knative.dev/v1beta1
kind: KnativeServing
metadata:
  name: knative-serving
spec:
  config:
    features:
      kubernetes.podspec-volumes-emptydir: enabled
...
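
After the extension is enabled, a service can declare emptyDir volumes in the standard Kubernetes way. The following service spec is a minimal sketch, not taken from the product documentation, that mounts a hypothetical emptyDir volume at /cache:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: example-service
  namespace: default
spec:
  template:
    spec:
      containers:
        - image: docker.io/openshift/hello-openshift
          volumeMounts:
            - name: cache
              mountPath: /cache
      volumes:
        - name: cache
          emptyDir: {}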

4.3.4. Persistent Volume Claims for Serving

Some serverless applications need permanent data storage. To achieve this, you can configure persistent volume claims (PVCs) for your Knative services.

4.3.4.1. Enabling PVC support

Procedure

  1. To enable Knative Serving to use PVCs and write to them, modify the KnativeServing custom resource (CR) to include the following YAML:

    Enabling PVCs with write access

    ...
    spec:
      config:
        features:
          "kubernetes.podspec-persistent-volume-claim": enabled
          "kubernetes.podspec-persistent-volume-write": enabled
    ...

    • The kubernetes.podspec-persistent-volume-claim extension controls whether persistent volumes (PVs) can be used with Knative Serving.
    • The kubernetes.podspec-persistent-volume-write extension controls whether PVs are available to Knative Serving with the write access.
  2. To claim a PV, modify your service to include the PV configuration. For example, you might have a persistent volume claim with the following configuration:

    Note

    Use the storage class that supports the access mode that you are requesting. For example, you can use the ocs-storagecluster-cephfs class for the ReadWriteMany access mode.

    PersistentVolumeClaim configuration

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: example-pv-claim
      namespace: my-ns
    spec:
      accessModes:
        - ReadWriteMany
      storageClassName: ocs-storagecluster-cephfs
      resources:
        requests:
          storage: 1Gi

    In this case, to claim a PV with write access, modify your service as follows:

    Knative service PVC configuration

    apiVersion: serving.knative.dev/v1
    kind: Service
    metadata:
      namespace: my-ns
    ...
    spec:
     template:
       spec:
         containers:
             ...
             volumeMounts: 1
               - mountPath: /data
                 name: mydata
                 readOnly: false
         volumes:
           - name: mydata
             persistentVolumeClaim: 2
               claimName: example-pv-claim
               readOnly: false 3

    1
    Volume mount specification.
    2
    Persistent volume claim specification.
    3
    Flag that controls read-only access. Setting it to false enables write access to the claim.
    Note

    To successfully use persistent storage in Knative services, you need additional configuration, such as the user permissions for the Knative container user.

4.3.4.2. Additional resources

4.3.5. Init containers

Init containers are specialized containers that are run before application containers in a pod. They are generally used to implement initialization logic for an application, which may include running setup scripts or downloading required configurations. You can enable the use of init containers for Knative services by modifying the KnativeServing custom resource (CR).

Note

Init containers may cause longer application start-up times and should be used with caution for serverless applications, which are expected to scale up and down frequently.

4.3.5.1. Enabling init containers

Prerequisites

  • You have installed OpenShift Serverless Operator and Knative Serving on your cluster.
  • You have cluster administrator permissions.

Procedure

  • Enable the use of init containers by adding the kubernetes.podspec-init-containers flag to the KnativeServing CR:

    Example KnativeServing CR

    apiVersion: operator.knative.dev/v1beta1
    kind: KnativeServing
    metadata:
      name: knative-serving
    spec:
      config:
        features:
          kubernetes.podspec-init-containers: enabled
    ...
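
With the feature enabled, you can add initContainers to the service template in the same way as for a regular pod spec. The following sketch is illustrative only; the init container image and command are placeholders:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: example-service
  namespace: default
spec:
  template:
    spec:
      initContainers:
        - name: init-setup
          image: registry.access.redhat.com/ubi8/ubi-minimal
          command: ["sh", "-c", "echo initializing"]
      containers:
        - image: docker.io/openshift/hello-openshift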

4.3.6. Resolving image tags to digests

If the Knative Serving controller has access to the container registry, Knative Serving resolves image tags to a digest when you create a revision of a service. This is known as tag-to-digest resolution, and helps to provide consistency for deployments.

4.3.6.1. Tag-to-digest resolution

To give the controller access to the container registry on OpenShift Container Platform, you must create a secret and then configure controller custom certificates. You can configure controller custom certificates by modifying the controller-custom-certs spec in the KnativeServing custom resource (CR). The secret must reside in the same namespace as the KnativeServing CR.

If a secret is not included in the KnativeServing CR, this setting defaults to using public key infrastructure (PKI). When using PKI, the cluster-wide certificates are automatically injected into the Knative Serving controller by using the config-service-sa config map. The OpenShift Serverless Operator populates the config-service-sa config map with cluster-wide certificates and mounts the config map as a volume to the controller.

4.3.6.1.1. Configuring tag-to-digest resolution by using a secret

If the controller-custom-certs spec uses the Secret type, the secret is mounted as a secret volume. Knative components consume the secret directly, assuming that the secret has the required certificates.

Prerequisites

  • You have cluster administrator permissions on OpenShift Container Platform.
  • You have installed the OpenShift Serverless Operator and Knative Serving on your cluster.

Procedure

  1. Create a secret:

    Example command

    $ oc -n knative-serving create secret generic custom-secret --from-file=<secret_name>.crt=<path_to_certificate>

  2. Configure the controller-custom-certs spec in the KnativeServing custom resource (CR) to use the Secret type:

    Example KnativeServing CR

    apiVersion: operator.knative.dev/v1beta1
    kind: KnativeServing
    metadata:
      name: knative-serving
      namespace: knative-serving
    spec:
      controller-custom-certs:
        name: custom-secret
        type: Secret

4.3.7. Configuring TLS authentication

You can use Transport Layer Security (TLS) to encrypt Knative traffic and for authentication.

TLS is the only supported method of traffic encryption for Knative Kafka. Red Hat recommends using both SASL and TLS together for Knative Kafka resources.

Note

If you want to enable internal TLS with a Red Hat OpenShift Service Mesh integration, you must enable Service Mesh with mTLS instead of the internal encryption explained in the following procedure.  See the documentation for Enabling Knative Serving metrics when using Service Mesh with mTLS.

4.3.7.1. Enabling TLS authentication for internal traffic

OpenShift Serverless supports TLS edge termination by default, so that HTTPS traffic from end users is encrypted. However, internal traffic behind the OpenShift route is forwarded to applications by using plain data. By enabling TLS for internal traffic, the traffic sent between components is encrypted, which makes this traffic more secure.

Note

If you want to enable internal TLS with a Red Hat OpenShift Service Mesh integration, you must enable Service Mesh with mTLS instead of the internal encryption explained in the following procedure.

Important

Internal TLS encryption support is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Prerequisites

  • You have installed the OpenShift Serverless Operator and Knative Serving.
  • You have installed the OpenShift (oc) CLI.

Procedure

  1. Enable internal encryption by adding the internal-encryption: "true" field to the KnativeServing custom resource (CR) spec:

    ...
    spec:
      config:
        network:
          internal-encryption: "true"
    ...
  2. Restart the activator pods in the knative-serving namespace to load the certificates:

    $ oc delete pod -n knative-serving --selector app=activator

4.3.8. Restrictive network policies

4.3.8.1. Clusters with restrictive network policies

If you are using a cluster that multiple users have access to, your cluster might use network policies to control which pods, services, and namespaces can communicate with each other over the network. If your cluster uses restrictive network policies, it is possible that Knative system pods are not able to access your Knative application. For example, if your namespace has the following network policy, which denies all requests, Knative system pods cannot access your Knative application:

Example NetworkPolicy object that denies all requests to the namespace

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-by-default
  namespace: example-namespace
spec:
  podSelector: {}
  ingress: []

4.3.8.2. Enabling communication with Knative applications on a cluster with restrictive network policies

To allow access to your applications from Knative system pods, you must add a label to each of the Knative system namespaces, and then create a NetworkPolicy object in your application namespace that allows access to the namespace for other namespaces that have this label.

Important

A network policy that denies requests to non-Knative services on your cluster still prevents access to these services. However, by allowing access from Knative system namespaces to your Knative application, you are allowing access to your Knative application from all namespaces in the cluster.

If you do not want to allow access to your Knative application from all namespaces on the cluster, you might want to use JSON Web Token authentication for Knative services instead. JSON Web Token authentication for Knative services requires Service Mesh.

Prerequisites

  • Install the OpenShift CLI (oc).
  • OpenShift Serverless Operator and Knative Serving are installed on your cluster.

Procedure

  1. Add the knative.openshift.io/system-namespace=true label to each Knative system namespace that requires access to your application:

    1. Label the knative-serving namespace:

      $ oc label namespace knative-serving knative.openshift.io/system-namespace=true
    2. Label the knative-serving-ingress namespace:

      $ oc label namespace knative-serving-ingress knative.openshift.io/system-namespace=true
    3. Label the knative-eventing namespace:

      $ oc label namespace knative-eventing knative.openshift.io/system-namespace=true
    4. Label the knative-kafka namespace:

      $ oc label namespace knative-kafka knative.openshift.io/system-namespace=true
  2. Create a NetworkPolicy object in your application namespace to allow access from namespaces with the knative.openshift.io/system-namespace label:

    Example NetworkPolicy object

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: <network_policy_name> 1
      namespace: <namespace> 2
    spec:
      ingress:
      - from:
        - namespaceSelector:
            matchLabels:
              knative.openshift.io/system-namespace: "true"
      podSelector: {}
      policyTypes:
      - Ingress

    1
    Provide a name for your network policy.
    2
    The namespace where your application exists.
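
After you save the NetworkPolicy object to a file, apply it in the same way as the other resources in this guide; the file name is a placeholder:

$ oc apply -f <filename>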

4.4. Traffic splitting

4.4.1. Traffic splitting overview

In a Knative application, traffic can be managed by creating a traffic split. A traffic split is configured as part of a route, which is managed by a Knative service.

Traffic management for a Knative application

Configuring a route allows requests to be sent to different revisions of a service. This routing is determined by the traffic spec of the Service object.

A traffic spec declaration consists of one or more revisions, each responsible for handling a portion of the overall traffic. The percentages of traffic routed to each revision must add up to 100%, which is ensured by a Knative validation.

The revisions specified in a traffic spec can either be a fixed, named revision, or can point to the “latest” revision, which tracks the head of the list of all revisions for the service. The "latest" revision is a type of floating reference that updates if a new revision is created. Each revision can have a tag attached that creates an additional access URL for that revision.

The traffic spec can be modified by:

  • Editing the YAML of a Service object directly.
  • Using the Knative (kn) CLI --traffic flag.
  • Using the OpenShift Container Platform web console.

When you create a Knative service, it does not have any default traffic spec settings.

4.4.2. Traffic spec examples

The following example shows a traffic spec where 100% of traffic is routed to the latest revision of the service. Under status, you can see the name of the latest revision that latestRevision resolves to:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: example-service
  namespace: default
spec:
...
  traffic:
  - latestRevision: true
    percent: 100
status:
  ...
  traffic:
  - percent: 100
    revisionName: example-service

The following example shows a traffic spec where 100% of traffic is routed to the revision tagged as current, and the name of that revision is specified as example-service. The revision tagged as latest is kept available, even though no traffic is routed to it:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: example-service
  namespace: default
spec:
...
  traffic:
  - tag: current
    revisionName: example-service
    percent: 100
  - tag: latest
    latestRevision: true
    percent: 0

The following example shows how the list of revisions in the traffic spec can be extended so that traffic is split between multiple revisions. This example sends 50% of traffic to the revision tagged as current, and 50% of traffic to the revision tagged as candidate. The revision tagged as latest is kept available, even though no traffic is routed to it:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: example-service
  namespace: default
spec:
...
  traffic:
  - tag: current
    revisionName: example-service-1
    percent: 50
  - tag: candidate
    revisionName: example-service-2
    percent: 50
  - tag: latest
    latestRevision: true
    percent: 0

4.4.3. Traffic splitting using the Knative CLI

Using the Knative (kn) CLI to create traffic splits provides a more streamlined and intuitive user interface over modifying YAML files directly. You can use the kn service update command to split traffic between revisions of a service.

4.4.3.1. Creating a traffic split by using the Knative CLI

Prerequisites

  • The OpenShift Serverless Operator and Knative Serving are installed on your cluster.
  • You have installed the Knative (kn) CLI.
  • You have created a Knative service.

Procedure

  • Specify the revision of your service and what percentage of traffic you want to route to it by using the --traffic flag with a standard kn service update command:

    Example command

    $ kn service update <service_name> --traffic <revision>=<percentage>

    Where:

    • <service_name> is the name of the Knative service that you are configuring traffic routing for.
    • <revision> is the revision that you want to configure to receive a percentage of traffic. You can either specify the name of the revision, or a tag that you assigned to the revision by using the --tag flag.
    • <percentage> is the percentage of traffic that you want to send to the specified revision.
  • Optional: The --traffic flag can be specified multiple times in one command. For example, if you have a revision tagged as @latest and a revision named stable, you can specify the percentage of traffic that you want to split to each revision as follows:

    Example command

    $ kn service update example-service --traffic @latest=20,stable=80

    If you have multiple revisions and do not specify the percentage of traffic that should be split to the last revision, the --traffic flag can calculate this automatically. For example, if you have a third revision named example, and you use the following command:

    Example command

    $ kn service update example-service --traffic @latest=10,stable=60

    The remaining 30% of traffic is split to the example revision, even though it was not specified.

4.4.4. CLI flags for traffic splitting

The Knative (kn) CLI supports traffic operations on the traffic block of a service as part of the kn service update command.

4.4.4.1. Knative CLI traffic splitting flags

The following table displays a summary of traffic splitting flags, value formats, and the operation the flag performs. The Repetition column denotes whether repeating a particular value of the flag is allowed in a kn service update command.

Flag        Value(s)                Operation                                            Repetition
--traffic   RevisionName=Percent    Gives Percent traffic to RevisionName                Yes
--traffic   Tag=Percent             Gives Percent traffic to the revision having Tag     Yes
--traffic   @latest=Percent         Gives Percent traffic to the latest ready revision   No
--tag       RevisionName=Tag        Gives Tag to RevisionName                            Yes
--tag       @latest=Tag             Gives Tag to the latest ready revision               No
--untag     Tag                     Removes Tag from revision                            Yes

4.4.4.1.1. Multiple flags and order precedence

All traffic-related flags can be specified using a single kn service update command. kn defines the precedence of these flags. The order of the flags specified when using the command is not taken into account.

The precedence of the flags, as they are evaluated by kn, is as follows:

  1. --untag: All the referenced revisions with this flag are removed from the traffic block.
  2. --tag: Revisions are tagged as specified in the traffic block.
  3. --traffic: The referenced revisions are assigned a portion of the traffic split.

You can add tags to revisions and then split traffic according to the tags you have set.
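
For example, the following sketch tags two revisions and then splits traffic between those tags in a single command. The service and revision names are assumptions for illustration only:

$ kn service update example-service \
  --tag example-service-00001=stable \
  --tag example-service-00002=candidate \
  --traffic stable=80,candidate=20

Because --tag operations are evaluated before --traffic operations, the tags are available to the traffic assignments within the same command.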

4.4.4.1.2. Custom URLs for revisions

Assigning a --tag flag to a service by using the kn service update command creates a custom URL for the revision that is created when you update the service. The custom URL follows the pattern https://<tag>-<service_name>-<namespace>.<domain> or http://<tag>-<service_name>-<namespace>.<domain>.

The --tag and --untag flags use the following syntax:

  • Require one value.
  • Denote a unique tag in the traffic block of the service.
  • Can be specified multiple times in one command.
4.4.4.1.2.1. Example: Assign a tag to a revision

The following example assigns the tag example-tag to the latest ready revision of the service:

$ kn service update <service_name> --tag @latest=example-tag
4.4.4.1.2.2. Example: Remove a tag from a revision

You can remove a tag from a revision by using the --untag flag, which also removes the custom URL for that tag.

Note

If a revision has its tags removed, and it is assigned 0% of the traffic, the revision is removed from the traffic block entirely.

The following command removes the tag named example-tag from the revision to which it is assigned:

$ kn service update <service_name> --untag example-tag

4.4.5. Splitting traffic between revisions

After you create a serverless application, the application is displayed in the Topology view of the Developer perspective in the OpenShift Container Platform web console. The application revision is represented by the node, and the Knative service is indicated by a quadrilateral around the node.

Any new change in the code or the service configuration creates a new revision, which is a snapshot of the code at a given time. For a service, you can manage the traffic between the revisions of the service by splitting and routing it to the different revisions as required.

4.4.5.1. Managing traffic between revisions by using the OpenShift Container Platform web console

Prerequisites

  • The OpenShift Serverless Operator and Knative Serving are installed on your cluster.
  • You have logged in to the OpenShift Container Platform web console.

Procedure

To split traffic between multiple revisions of an application in the Topology view:

  1. Click the Knative service to see its overview in the side panel.
  2. Click the Resources tab to see a list of Revisions and Routes for the service.

    Figure 4.1. Serverless application

  3. Click the service, indicated by the S icon at the top of the side panel, to see an overview of the service details.
  4. Click the YAML tab, modify the service configuration in the YAML editor, and click Save. For example, change timeoutSeconds from 300 to 301. This change in the configuration triggers a new revision. In the Topology view, the latest revision is displayed, and the Resources tab for the service now displays the two revisions.
  5. In the Resources tab, click Set Traffic Distribution to see the traffic distribution dialog box:

    1. Add the split traffic percentage portion for the two revisions in the Splits field.
    2. Add tags to create custom URLs for the two revisions.
    3. Click Save to see two nodes representing the two revisions in the Topology view.

      Figure 4.2. Serverless application revisions


4.4.6. Rerouting traffic using blue-green strategy

You can safely reroute traffic from a production version of an app to a new version, by using a blue-green deployment strategy.

4.4.6.1. Routing and managing traffic by using a blue-green deployment strategy

Prerequisites

  • The OpenShift Serverless Operator and Knative Serving are installed on the cluster.
  • Install the OpenShift CLI (oc).

Procedure

  1. Create and deploy an app as a Knative service.
  2. Find the name of the first revision that was created when you deployed the service, by viewing the output from the following command:

    $ oc get ksvc <service_name> -o=jsonpath='{.status.latestCreatedRevisionName}'

    Example command

    $ oc get ksvc example-service -o=jsonpath='{.status.latestCreatedRevisionName}'

    Example output

    example-service-00001

  3. Add the following YAML to the service spec to send inbound traffic to the revision:

    ...
    spec:
      traffic:
        - revisionName: <first_revision_name>
          percent: 100 # All traffic goes to this revision
    ...
  4. Verify that you can view your app at the URL output you get from running the following command:

    $ oc get ksvc <service_name>
  5. Deploy a second revision of your app by modifying at least one field in the template spec of the service and redeploying it. For example, you can modify the image of the service, or an env environment variable. You can redeploy the service by applying the service YAML file, or by using the kn service update command if you have installed the Knative (kn) CLI.
  6. Find the name of the second, latest revision that was created when you redeployed the service, by running the command:

    $ oc get ksvc <service_name> -o=jsonpath='{.status.latestCreatedRevisionName}'

    At this point, both the first and second revisions of the service are deployed and running.

  7. Update your existing service to create a new, test endpoint for the second revision, while still sending all other traffic to the first revision:

    Example of updated service spec with test endpoint

    ...
    spec:
      traffic:
        - revisionName: <first_revision_name>
          percent: 100 # All traffic is still being routed to the first revision
        - revisionName: <second_revision_name>
          percent: 0 # No traffic is routed to the second revision
          tag: v2 # A named route
    ...

    After you redeploy this service by reapplying the YAML resource, the second revision of the app is now staged. No traffic is routed to the second revision at the main URL, and Knative creates a new route, tagged v2, with its own URL for testing the newly deployed revision.

  8. Get the URL of the new service for the second revision, by running the following command:

    $ oc get ksvc <service_name> --output jsonpath="{.status.traffic[*].url}"

    You can use this URL to validate that the new version of the app is behaving as expected before you route any traffic to it.

  9. Update your existing service again, so that 50% of traffic is sent to the first revision, and 50% is sent to the second revision:

    Example of updated service spec splitting traffic 50/50 between revisions

    ...
    spec:
      traffic:
        - revisionName: <first_revision_name>
          percent: 50
        - revisionName: <second_revision_name>
          percent: 50
          tag: v2
    ...

  10. When you are ready to route all traffic to the new version of the app, update the service again to send 100% of traffic to the second revision:

    Example of updated service spec sending all traffic to the second revision

    ...
    spec:
      traffic:
        - revisionName: <first_revision_name>
          percent: 0
        - revisionName: <second_revision_name>
          percent: 100
          tag: v2
    ...

    Tip

    You can remove the first revision instead of setting it to 0% of traffic if you do not plan to roll back the revision. Non-routable revision objects are then garbage-collected.

  11. Visit the URL of the first revision to verify that no more traffic is being sent to the old version of the app.

4.5. External and Ingress routing

4.5.1. Routing overview

Knative leverages OpenShift Container Platform TLS termination to provide routing for Knative services. When a Knative service is created, an OpenShift Container Platform route is automatically created for the service. This route is managed by the OpenShift Serverless Operator. The OpenShift Container Platform route exposes the Knative service through the same domain as the OpenShift Container Platform cluster.

You can disable Operator control of OpenShift Container Platform routing so that you can configure a Knative route to directly use your TLS certificates instead.

Knative routes can also be used alongside the OpenShift Container Platform route to provide additional fine-grained routing capabilities, such as traffic splitting.

4.5.2. Customizing labels and annotations

OpenShift Container Platform routes support the use of custom labels and annotations, which you can configure by modifying the metadata spec of a Knative service. Custom labels and annotations are propagated from the service to the Knative route, then to the Knative ingress, and finally to the OpenShift Container Platform route.

4.5.2.1. Customizing labels and annotations for OpenShift Container Platform routes

Prerequisites

  • You must have the OpenShift Serverless Operator and Knative Serving installed on your OpenShift Container Platform cluster.
  • Install the OpenShift CLI (oc).

Procedure

  1. Create a Knative service that contains the label or annotation that you want to propagate to the OpenShift Container Platform route:

    • To create a service by using YAML:

      Example service created by using YAML

      apiVersion: serving.knative.dev/v1
      kind: Service
      metadata:
        name: <service_name>
        labels:
          <label_name>: <label_value>
        annotations:
          <annotation_name>: <annotation_value>
      ...

    • To create a service by using the Knative (kn) CLI, enter:

      Example service created by using a kn command

      $ kn service create <service_name> \
        --image=<image> \
        --annotation <annotation_name>=<annotation_value> \
        --label <label_name>=<label_value>

  2. Verify that the OpenShift Container Platform route has been created with the annotation or label that you added by inspecting the output from the following command:

    Example command for verification

    $ oc get routes.route.openshift.io \
         -l serving.knative.openshift.io/ingressName=<service_name> \ 1
         -l serving.knative.openshift.io/ingressNamespace=<service_namespace> \ 2
         -n knative-serving-ingress -o yaml \
             | grep -e "<label_name>: \"<label_value>\""  -e "<annotation_name>: <annotation_value>" 3

    1
    Use the name of your service.
    2
    Use the namespace where your service was created.
    3
    Use your values for the label and annotation names and values.

4.5.3. Configuring routes for Knative services

If you want to configure a Knative service to use your TLS certificate on OpenShift Container Platform, you must disable the automatic creation of a route for the service by the OpenShift Serverless Operator and instead manually create a route for the service.

Note

When you complete the following procedure, the default OpenShift Container Platform route in the knative-serving-ingress namespace is not created. However, the Knative route for the application is still created in this namespace.

4.5.3.1. Configuring OpenShift Container Platform routes for Knative services

Prerequisites

  • The OpenShift Serverless Operator and Knative Serving component must be installed on your OpenShift Container Platform cluster.
  • Install the OpenShift CLI (oc).

Procedure

  1. Create a Knative service that includes the serving.knative.openshift.io/disableRoute=true annotation:

    Important

    The serving.knative.openshift.io/disableRoute=true annotation instructs OpenShift Serverless to not automatically create a route for you. However, the service still shows a URL and reaches a status of Ready. This URL does not work externally until you create your own route with the same hostname as the hostname in the URL.

    1. Create a Knative Service resource:

      Example resource

      apiVersion: serving.knative.dev/v1
      kind: Service
      metadata:
        name: <service_name>
        annotations:
          serving.knative.openshift.io/disableRoute: "true"
      spec:
        template:
          spec:
            containers:
            - image: <image>
      ...

    2. Apply the Service resource:

      $ oc apply -f <filename>
    3. Optional. Create a Knative service by using the kn service create command:

      Example kn command

      $ kn service create <service_name> \
        --image=gcr.io/knative-samples/helloworld-go \
        --annotation serving.knative.openshift.io/disableRoute=true

  2. Verify that no OpenShift Container Platform route has been created for the service:

    Example command

    $ oc get routes.route.openshift.io \
      -l serving.knative.openshift.io/ingressName=$KSERVICE_NAME \
      -l serving.knative.openshift.io/ingressNamespace=$KSERVICE_NAMESPACE \
      -n knative-serving-ingress

    You will see the following output:

    No resources found in knative-serving-ingress namespace.
  3. Create a Route resource in the knative-serving-ingress namespace:

    apiVersion: route.openshift.io/v1
    kind: Route
    metadata:
      annotations:
        haproxy.router.openshift.io/timeout: 600s 1
      name: <route_name> 2
      namespace: knative-serving-ingress 3
    spec:
      host: <service_host> 4
      port:
        targetPort: http2
      to:
        kind: Service
        name: kourier
        weight: 100
      tls:
        insecureEdgeTerminationPolicy: Allow
        termination: edge 5
        key: |-
          -----BEGIN PRIVATE KEY-----
          [...]
          -----END PRIVATE KEY-----
        certificate: |-
          -----BEGIN CERTIFICATE-----
          [...]
          -----END CERTIFICATE-----
        caCertificate: |-
          -----BEGIN CERTIFICATE-----
          [...]
          -----END CERTIFICATE-----
      wildcardPolicy: None
    1
    The timeout value for the OpenShift Container Platform route. You must set the same value as the max-revision-timeout-seconds setting (600s by default).
    2
    The name of the OpenShift Container Platform route.
    3
    The namespace for the OpenShift Container Platform route. This must be knative-serving-ingress.
    4
    The hostname for external access. You can set this to <service_name>-<service_namespace>.<domain>.
    5
    The certificates you want to use. Currently, only edge termination is supported.
  4. Apply the Route resource:

    $ oc apply -f <filename>

4.5.4. Global HTTPS redirection

HTTPS redirection provides redirection for incoming HTTP requests. These redirected HTTP requests are encrypted. You can enable HTTPS redirection for all services on the cluster by configuring the httpProtocol spec for the KnativeServing custom resource (CR).

4.5.4.1. HTTPS redirection global settings

Example KnativeServing CR that enables HTTPS redirection

apiVersion: operator.knative.dev/v1beta1
kind: KnativeServing
metadata:
  name: knative-serving
spec:
  config:
    network:
      httpProtocol: "redirected"
...
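
After the updated KnativeServing CR is applied, you can check the behavior by sending a plain HTTP request to any Knative service URL; the hostname below is a placeholder. The response should be a 3xx redirect with a Location header that points to the https:// URL:

$ curl -I http://<service_name>-<namespace>.<domain>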

4.5.5. URL scheme for external routes

The URL scheme of external routes defaults to HTTPS for enhanced security. This scheme is determined by the default-external-scheme key in the KnativeServing custom resource (CR) spec.

4.5.5.1. Setting the URL scheme for external routes

Default spec

...
spec:
  config:
    network:
      default-external-scheme: "https"
...

You can override the default spec to use HTTP by modifying the default-external-scheme key:

HTTP override spec

...
spec:
  config:
    network:
      default-external-scheme: "http"
...

4.5.6. HTTPS redirection per service

You can enable or disable HTTPS redirection for a service by configuring the networking.knative.dev/http-option annotation.

4.5.6.1. Redirecting HTTPS for a service

The following example shows how you can use this annotation in a Knative Service YAML object:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: example
  namespace: default
  annotations:
    networking.knative.dev/http-option: "redirected"
spec:
  ...

4.5.7. Cluster local availability

By default, Knative services are published to a public IP address. Being published to a public IP address means that Knative services are public applications, and have a publicly accessible URL.

Publicly accessible URLs are accessible from outside of the cluster. However, developers may need to build back-end services that are only accessible from inside the cluster, known as private services. Developers can label individual services in the cluster with the networking.knative.dev/visibility=cluster-local label to make them private.

Important

For OpenShift Serverless 1.15.0 and newer versions, the serving.knative.dev/visibility label is no longer available. You must update existing services to use the networking.knative.dev/visibility label instead.

4.5.7.1. Setting cluster availability to cluster local

Prerequisites

  • The OpenShift Serverless Operator and Knative Serving are installed on the cluster.
  • You have created a Knative service.

Procedure

  • Set the visibility for your service by adding the networking.knative.dev/visibility=cluster-local label:

    $ oc label ksvc <service_name> networking.knative.dev/visibility=cluster-local

Verification

  • Check that the URL for your service is now in the format http://<service_name>.<namespace>.svc.cluster.local, by entering the following command and reviewing the output:

    $ oc get ksvc

    Example output

    NAME            URL                                                                         LATESTCREATED     LATESTREADY       READY   REASON
    hello           http://hello.default.svc.cluster.local                                      hello-tx2g7       hello-tx2g7       True
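
As an alternative to labeling an existing service with the oc command, you can set the same label declaratively in the service manifest. The following is a minimal sketch that matches the hello service shown in the example output; the image value is a placeholder:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
  namespace: default
  labels:
    networking.knative.dev/visibility: cluster-local
spec:
  template:
    spec:
      containers:
      - image: <image>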

4.5.7.2. Enabling TLS authentication for cluster local services

For cluster local services, the Kourier local gateway kourier-internal is used. If you want to use TLS traffic against the Kourier local gateway, you must configure your own server certificates in the local gateway.

Prerequisites

  • You have installed the OpenShift Serverless Operator and Knative Serving.
  • You have administrator permissions.
  • You have installed the OpenShift (oc) CLI.

Procedure

  1. Set a Subject Alternative Name (SAN) to use for the server certificates in the knative-serving-ingress namespace:

    $ export san="knative"
    Note

    Subject Alternative Name (SAN) validation is required so that these certificates can serve the request to <app_name>.<namespace>.svc.cluster.local.

  2. Generate a root key and certificate:

    $ openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 \
        -subj '/O=Example/CN=Example' \
        -keyout ca.key \
        -out ca.crt
  3. Generate a server key that uses SAN validation:

    $ openssl req -out tls.csr -newkey rsa:2048 -nodes -keyout tls.key \
      -subj "/CN=Example/O=Example" \
      -addext "subjectAltName = DNS:$san"
  4. Create server certificates:

    $ openssl x509 -req -extfile <(printf "subjectAltName=DNS:$san") \
      -days 365 -in tls.csr \
      -CA ca.crt -CAkey ca.key -CAcreateserial -out tls.crt
  5. Configure a secret for the Kourier local gateway:

    1. Deploy a secret in knative-serving-ingress namespace from the certificates created by the previous steps:

      $ oc create -n knative-serving-ingress secret tls server-certs \
          --key=tls.key \
          --cert=tls.crt --dry-run=client -o yaml | oc apply -f -
    2. Update the KnativeServing custom resource (CR) spec so that the Kourier gateway uses the secret that you created in the previous step:

      Example KnativeServing CR

      ...
      spec:
        config:
          kourier:
            cluster-cert-secret: server-certs
      ...

The Kourier controller sets the certificate without restarting the service, so that you do not need to restart the pod.

You can access the Kourier internal service with TLS through port 443 by mounting and using the ca.crt from the client.

4.5.8. Kourier Gateway service type

The Kourier Gateway is exposed by default as the ClusterIP service type. This service type is determined by the service-type ingress spec in the KnativeServing custom resource (CR).

Default spec

...
spec:
  ingress:
    kourier:
      service-type: ClusterIP
...

4.5.8.1. Setting the Kourier Gateway service type

You can override the default service type to use a load balancer service type instead by modifying the service-type spec:

LoadBalancer override spec

...
spec:
  ingress:
    kourier:
      service-type: LoadBalancer
...
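
After you apply the updated KnativeServing CR, you can check the external address that is assigned to the Kourier Gateway. This assumes that your platform provisions a load balancer:

$ oc -n knative-serving-ingress get svc kourier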

4.5.9. Using HTTP2 and gRPC

OpenShift Serverless supports only insecure or edge-terminated routes. Insecure or edge-terminated routes do not support HTTP2 on OpenShift Container Platform. These routes also do not support gRPC because gRPC is transported by HTTP2. If you use these protocols in your application, you must call the application using the ingress gateway directly. To do this you must find the ingress gateway’s public address and the application’s specific host.

4.5.9.1. Interacting with a serverless application using HTTP2 and gRPC
Important

This method applies to OpenShift Container Platform 4.10 and later. For older versions, see the following section.

Prerequisites

  • Install OpenShift Serverless Operator and Knative Serving on your cluster.
  • Install the OpenShift CLI (oc).
  • Create a Knative service.
  • Upgrade to OpenShift Container Platform 4.10 or later.
  • Enable HTTP/2 on the OpenShift Ingress Controller.

Procedure

  1. Add the serverless.openshift.io/default-enable-http2=true annotation to the KnativeServing custom resource (CR):

    $ oc annotate knativeserving <your_knative_CR> -n knative-serving serverless.openshift.io/default-enable-http2=true
  2. After the annotation is added, you can verify that the appProtocol value of the Kourier service is h2c:

    $ oc get svc -n knative-serving-ingress kourier -o jsonpath="{.spec.ports[0].appProtocol}"

    Example output

    h2c

  3. Now you can use the gRPC framework over the HTTP/2 protocol for external traffic, for example:

    import "google.golang.org/grpc"
    
    grpc.Dial(
       YOUR_URL, 1
       grpc.WithTransportCredentials(insecure.NewCredentials())), 2
    )
    1
    Your ksvc URL.
    2
    Your certificate.
4.5.9.2. Interacting with a serverless application using HTTP2 and gRPC in OpenShift Container Platform 4.9 and older
Important

This method needs to expose Kourier Gateway using the LoadBalancer service type. You can configure this by adding the following YAML to your KnativeServing custom resource definition (CRD):

...
spec:
  ingress:
    kourier:
      service-type: LoadBalancer
...

Prerequisites

  • Install OpenShift Serverless Operator and Knative Serving on your cluster.
  • Install the OpenShift CLI (oc).
  • Create a Knative service.

Procedure

  1. Find the application host. See the instructions in Verifying your serverless application deployment.
  2. Find the ingress gateway’s public address:

    $ oc -n knative-serving-ingress get svc kourier

    Example output

    NAME                   TYPE           CLUSTER-IP      EXTERNAL-IP                                                             PORT(S)                                                                                                                                      AGE
    kourier   LoadBalancer   172.30.51.103   a83e86291bcdd11e993af02b7a65e514-33544245.us-east-1.elb.amazonaws.com   80:31380/TCP,443:31390/TCP   67m

    The public address is surfaced in the EXTERNAL-IP field, and in this case is a83e86291bcdd11e993af02b7a65e514-33544245.us-east-1.elb.amazonaws.com.

  3. Manually set the host header of your HTTP request to the application’s host, but direct the request itself against the public address of the ingress gateway.

    $ curl -H "Host: hello-default.example.com" a83e86291bcdd11e993af02b7a65e514-33544245.us-east-1.elb.amazonaws.com

    Example output

    Hello Serverless!

    You can also make a direct gRPC request against the ingress gateway:

    import "google.golang.org/grpc"
    
    grpc.Dial(
        "a83e86291bcdd11e993af02b7a65e514-33544245.us-east-1.elb.amazonaws.com:80",
        grpc.WithAuthority("hello-default.example.com:80"),
        grpc.WithInsecure(),
    )
    Note

    Ensure that you append the respective port, 80 by default, to both hosts as shown in the previous example.

4.6. Configuring access to Knative services

4.6.1. Configuring JSON Web Token authentication for Knative services

OpenShift Serverless does not currently have user-defined authorization features. To add user-defined authorization to your deployment, you must integrate OpenShift Serverless with Red Hat OpenShift Service Mesh, and then configure JSON Web Token (JWT) authentication and sidecar injection for Knative services.

4.6.2. Using JSON Web Token authentication with Service Mesh 2.x

You can use JSON Web Token (JWT) authentication with Knative services by using Service Mesh 2.x and OpenShift Serverless. To do this, you must create authentication requests and policies in the application namespace that is a member of the ServiceMeshMemberRoll object. You must also enable sidecar injection for the service.

4.6.2.1. Configuring JSON Web Token authentication for Service Mesh 2.x and OpenShift Serverless
Important

Adding sidecar injection to pods in system namespaces, such as knative-serving and knative-serving-ingress, is not supported when Kourier is enabled.

If you require sidecar injection for pods in these namespaces, see the OpenShift Serverless documentation on Integrating Service Mesh with OpenShift Serverless natively.

Prerequisites

  • You have installed the OpenShift Serverless Operator, Knative Serving, and Red Hat OpenShift Service Mesh on your cluster.
  • Install the OpenShift CLI (oc).
  • You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.

Procedure

  1. Add the sidecar.istio.io/inject="true" annotation to your service:

    Example service

    apiVersion: serving.knative.dev/v1
    kind: Service
    metadata:
      name: <service_name>
    spec:
      template:
        metadata:
          annotations:
            sidecar.istio.io/inject: "true" 1
            sidecar.istio.io/rewriteAppHTTPProbers: "true" 2
    ...

    1
    Add the sidecar.istio.io/inject="true" annotation.
    2
    You must set the annotation sidecar.istio.io/rewriteAppHTTPProbers: "true" in your Knative service, because OpenShift Serverless versions 1.14.0 and higher use an HTTP probe as the readiness probe for Knative services by default.
  2. Apply the Service resource:

    $ oc apply -f <filename>
  3. Create a RequestAuthentication resource in each serverless application namespace that is a member in the ServiceMeshMemberRoll object:

    apiVersion: security.istio.io/v1beta1
    kind: RequestAuthentication
    metadata:
      name: jwt-example
      namespace: <namespace>
    spec:
      jwtRules:
      - issuer: testing@secure.istio.io
        jwksUri: https://raw.githubusercontent.com/istio/istio/release-1.8/security/tools/jwt/samples/jwks.json
  4. Apply the RequestAuthentication resource:

    $ oc apply -f <filename>
  5. Allow requests from system pods to specific paths on your application, for each serverless application namespace that is a member in the ServiceMeshMemberRoll object, by creating the following AuthorizationPolicy resource:

    apiVersion: security.istio.io/v1beta1
    kind: AuthorizationPolicy
    metadata:
      name: allowlist-by-paths
      namespace: <namespace>
    spec:
      action: ALLOW
      rules:
      - to:
        - operation:
            paths:
            - /metrics 1
            - /healthz 2
    1
    The path on your application to collect metrics by system pod.
    2
    The path on your application to probe by system pod.
  6. Apply the AuthorizationPolicy resource:

    $ oc apply -f <filename>
  7. For each serverless application namespace that is a member in the ServiceMeshMemberRoll object, create the following AuthorizationPolicy resource:

    apiVersion: security.istio.io/v1beta1
    kind: AuthorizationPolicy
    metadata:
      name: require-jwt
      namespace: <namespace>
    spec:
      action: ALLOW
      rules:
      - from:
        - source:
           requestPrincipals: ["testing@secure.istio.io/testing@secure.istio.io"]
  8. Apply the AuthorizationPolicy resource:

    $ oc apply -f <filename>

Verification

  1. If you try to access the Knative service URL by using a curl request without a JWT, the request is denied:

    Example command

    $ curl http://hello-example-1-default.apps.mycluster.example.com/

    Example output

    RBAC: access denied

  2. Verify the request with a valid JWT.

    1. Get the valid JWT token:

      $ TOKEN=$(curl https://raw.githubusercontent.com/istio/istio/release-1.8/security/tools/jwt/samples/demo.jwt -s) && echo "$TOKEN" | cut -d '.' -f2 - | base64 --decode -
    2. Access the service by using the valid token in the curl request header:

      $ curl -H "Authorization: Bearer $TOKEN"  http://hello-example-1-default.apps.example.com

      The request is now allowed:

      Example output

      Hello OpenShift!

4.6.3. Using JSON Web Token authentication with Service Mesh 1.x

You can use JSON Web Token (JWT) authentication with Knative services by using Service Mesh 1.x and OpenShift Serverless. To do this, you must create a policy in the application namespace that is a member of the ServiceMeshMemberRoll object. You must also enable sidecar injection for the service.

4.6.3.1. Configuring JSON Web Token authentication for Service Mesh 1.x and OpenShift Serverless
Important

Adding sidecar injection to pods in system namespaces, such as knative-serving and knative-serving-ingress, is not supported when Kourier is enabled.

If you require sidecar injection for pods in these namespaces, see the OpenShift Serverless documentation on Integrating Service Mesh with OpenShift Serverless natively.

Prerequisites

  • You have installed the OpenShift Serverless Operator, Knative Serving, and Red Hat OpenShift Service Mesh on your cluster.
  • Install the OpenShift CLI (oc).
  • You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.

Procedure

  1. Add the sidecar.istio.io/inject="true" annotation to your service:

    Example service

    apiVersion: serving.knative.dev/v1
    kind: Service
    metadata:
      name: <service_name>
    spec:
      template:
        metadata:
          annotations:
            sidecar.istio.io/inject: "true" 1
            sidecar.istio.io/rewriteAppHTTPProbers: "true" 2
    ...

    1
    Add the sidecar.istio.io/inject="true" annotation.
    2
    You must set the annotation sidecar.istio.io/rewriteAppHTTPProbers: "true" in your Knative service, because OpenShift Serverless versions 1.14.0 and higher use an HTTP probe as the readiness probe for Knative services by default.
  2. Apply the Service resource:

    $ oc apply -f <filename>
  3. Create a policy in a serverless application namespace that is a member of the ServiceMeshMemberRoll object, which allows only requests with valid JSON Web Tokens (JWT):

    Important

    The paths /metrics and /healthz must be included in excludedPaths because they are accessed from system pods in the knative-serving namespace.

    apiVersion: authentication.istio.io/v1alpha1
    kind: Policy
    metadata:
      name: default
      namespace: <namespace>
    spec:
      origins:
      - jwt:
          issuer: testing@secure.istio.io
          jwksUri: "https://raw.githubusercontent.com/istio/istio/release-1.6/security/tools/jwt/samples/jwks.json"
          triggerRules:
          - excludedPaths:
            - prefix: /metrics 1
            - prefix: /healthz 2
      principalBinding: USE_ORIGIN
    1
    The path on your application to collect metrics by system pod.
    2
    The path on your application to probe by system pod.
  4. Apply the Policy resource:

    $ oc apply -f <filename>

Verification

  1. If you try to access the Knative service URL by using a curl request without a JWT, the request is denied:

    $ curl http://hello-example-default.apps.mycluster.example.com/

    Example output

    Origin authentication failed.

  2. Verify the request with a valid JWT.

    1. Get the valid JWT token:

      $ TOKEN=$(curl https://raw.githubusercontent.com/istio/istio/release-1.6/security/tools/jwt/samples/demo.jwt -s) && echo "$TOKEN" | cut -d '.' -f2 - | base64 --decode -
    2. Access the service by using the valid token in the curl request header:

      $ curl http://hello-example-default.apps.mycluster.example.com/ -H "Authorization: Bearer $TOKEN"

      The request is now allowed:

      Example output

      Hello OpenShift!

4.7. Configuring custom domains for Knative services

4.7.1. Configuring a custom domain for a Knative service

Knative services are automatically assigned a default domain name based on your cluster configuration. For example, <service_name>-<namespace>.example.com. You can customize the domain for your Knative service by mapping a custom domain name that you own to a Knative service.

You can do this by creating a DomainMapping resource for the service. You can also create multiple DomainMapping resources to map multiple domains and subdomains to a single service.

4.7.2. Custom domain mapping

You can customize the domain for your Knative service by mapping a custom domain name that you own to a Knative service. To map a custom domain name to a custom resource (CR), you must create a DomainMapping CR that maps to an Addressable target CR, such as a Knative service or a Knative route.

4.7.2.1. Creating a custom domain mapping

You can customize the domain for your Knative service by mapping a custom domain name that you own to a Knative service. To map a custom domain name to a custom resource (CR), you must create a DomainMapping CR that maps to an Addressable target CR, such as a Knative service or a Knative route.

Prerequisites

  • The OpenShift Serverless Operator and Knative Serving are installed on your cluster.
  • Install the OpenShift CLI (oc).
  • You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
  • You have created a Knative service and control a custom domain that you want to map to that service.

    Note

    Your custom domain must point to the IP address of the OpenShift Container Platform cluster.

Procedure

  1. Create a YAML file containing the DomainMapping CR in the same namespace as the target CR you want to map to:

    apiVersion: serving.knative.dev/v1alpha1
    kind: DomainMapping
    metadata:
     name: <domain_name> 1
     namespace: <namespace> 2
    spec:
     ref:
       name: <target_name> 3
       kind: <target_type> 4
       apiVersion: serving.knative.dev/v1
    1
    The custom domain name that you want to map to the target CR.
    2
    The namespace of both the DomainMapping CR and the target CR.
    3
    The name of the target CR to map to the custom domain.
    4
    The type of CR being mapped to the custom domain.

    Example service domain mapping

    apiVersion: serving.knative.dev/v1alpha1
    kind: DomainMapping
    metadata:
     name: example-domain
     namespace: default
    spec:
     ref:
       name: example-service
       kind: Service
       apiVersion: serving.knative.dev/v1

    Example route domain mapping

    apiVersion: serving.knative.dev/v1alpha1
    kind: DomainMapping
    metadata:
     name: example-domain
     namespace: default
    spec:
     ref:
       name: example-route
       kind: Route
       apiVersion: serving.knative.dev/v1

  2. Apply the DomainMapping CR as a YAML file:

    $ oc apply -f <filename>

4.7.3. Custom domains for Knative services using the Knative CLI

You can customize the domain for your Knative service by mapping a custom domain name that you own to a Knative service. You can use the Knative (kn) CLI to create a DomainMapping custom resource (CR) that maps to an Addressable target CR, such as a Knative service or a Knative route.

4.7.3.1. Creating a custom domain mapping by using the Knative CLI

Prerequisites

  • The OpenShift Serverless Operator and Knative Serving are installed on your cluster.
  • You have created a Knative service or route, and control a custom domain that you want to map to that CR.

    Note

    Your custom domain must point to the DNS of the OpenShift Container Platform cluster.

  • You have installed the Knative (kn) CLI.
  • You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.

Procedure

  • Map a domain to a CR in the current namespace:

    $ kn domain create <domain_mapping_name> --ref <target_name>

    Example command

    $ kn domain create example-domain-map --ref example-service

    The --ref flag specifies an Addressable target CR for domain mapping.

    If a prefix is not provided when using the --ref flag, it is assumed that the target is a Knative service in the current namespace.

  • Map a domain to a Knative service in a specified namespace:

    $ kn domain create <domain_mapping_name> --ref <ksvc:service_name:service_namespace>

    Example command

    $ kn domain create example-domain-map --ref ksvc:example-service:example-namespace

  • Map a domain to a Knative route:

    $ kn domain create <domain_mapping_name> --ref <kroute:route_name>

    Example command

    $ kn domain create example-domain-map --ref kroute:example-route
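
To confirm that the mapping exists, you can list the DomainMapping CRs in the current namespace. This assumes that your version of the Knative (kn) CLI includes the domain list subcommand; alternatively, use oc get domainmapping:

$ kn domain list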

4.7.4. Domain mapping using the Developer perspective

You can customize the domain for your Knative service by mapping a custom domain name that you own to a Knative service. You can use the Developer perspective of the OpenShift Container Platform web console to map a DomainMapping custom resource (CR) to a Knative service.

4.7.4.1. Mapping a custom domain to a service by using the Developer perspective

Prerequisites

  • You have logged in to the web console.
  • You are in the Developer perspective.
  • The OpenShift Serverless Operator and Knative Serving are installed on your cluster. This must be completed by a cluster administrator.
  • You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
  • You have created a Knative service and control a custom domain that you want to map to that service.

    Note

    Your custom domain must point to the IP address of the OpenShift Container Platform cluster.

Procedure

  1. Navigate to the Topology page.
  2. Right-click on the service that you want to map to a domain, and select the Edit option that contains the service name. For example, if the service is named example-service, select the Edit example-service option.
  3. In the Advanced options section, click Show advanced Routing options.

    1. If the domain mapping CR that you want to map to the service already exists, you can select it in the Domain mapping list.
    2. If you want to create a new domain mapping CR, type the domain name into the box, and select the Create option. For example, if you type in example.com, the Create option is Create "example.com".
  4. Click Save to save the changes to your service.

Verification

  1. Navigate to the Topology page.
  2. Click on the service that you have created.
  3. In the Resources tab of the service information window, you can see the domain you have mapped to the service listed under Domain mappings.

4.7.5. Domain mapping using the Administrator perspective

If you do not want to switch to the Developer perspective in the OpenShift Container Platform web console or use the Knative (kn) CLI or YAML files, you can use the Administrator perspective of the OpenShift Container Platform web console.

4.7.5.1. Mapping a custom domain to a service by using the Administrator perspective

Knative services are automatically assigned a default domain name based on your cluster configuration. For example, <service_name>-<namespace>.example.com. You can customize the domain for your Knative service by mapping a custom domain name that you own to a Knative service.

You can do this by creating a DomainMapping resource for the service. You can also create multiple DomainMapping resources to map multiple domains and subdomains to a single service.

If you have cluster administrator permissions, you can create a DomainMapping custom resource (CR) by using the Administrator perspective in the OpenShift Container Platform web console.

Prerequisites

  • You have logged in to the web console.
  • You are in the Administrator perspective.
  • You have installed the OpenShift Serverless Operator.
  • You have installed Knative Serving.
  • You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
  • You have created a Knative service and control a custom domain that you want to map to that service.

    Note

    Your custom domain must point to the IP address of the OpenShift Container Platform cluster.

Procedure

  1. Navigate to CustomResourceDefinitions and use the search box to find the DomainMapping custom resource definition (CRD).
  2. Click the DomainMapping CRD, then navigate to the Instances tab.
  3. Click Create DomainMapping.
  4. Modify the YAML for the DomainMapping CR so that it includes the following information for your instance:

    apiVersion: serving.knative.dev/v1alpha1
    kind: DomainMapping
    metadata:
     name: <domain_name> 1
     namespace: <namespace> 2
    spec:
     ref:
       name: <target_name> 3
       kind: <target_type> 4
       apiVersion: serving.knative.dev/v1
    1
    The custom domain name that you want to map to the target CR.
    2
    The namespace of both the DomainMapping CR and the target CR.
    3
    The name of the target CR to map to the custom domain.
    4
    The type of CR being mapped to the custom domain.

    Example domain mapping to a Knative service

    apiVersion: serving.knative.dev/v1alpha1
    kind: DomainMapping
    metadata:
     name: custom-ksvc-domain.example.com
     namespace: default
    spec:
     ref:
       name: example-service
       kind: Service
       apiVersion: serving.knative.dev/v1

Verification

  • Access the custom domain by using a curl request. For example:

    Example command

    $ curl custom-ksvc-domain.example.com

    Example output

    Hello OpenShift!

4.7.6. Securing a mapped service using a TLS certificate

4.7.6.1. Securing a service with a custom domain by using a TLS certificate

After you have configured a custom domain for a Knative service, you can use a TLS certificate to secure the mapped service. To do this, you must create a Kubernetes TLS secret, and then update the DomainMapping CR to use the TLS secret that you have created.

Note

If you use net-istio for Ingress and enable mTLS via SMCP using security.dataPlane.mtls: true, Service Mesh deploys DestinationRules for the *.local host, which does not allow DomainMapping for OpenShift Serverless.

To work around this issue, enable mTLS by deploying PeerAuthentication instead of using security.dataPlane.mtls: true.
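For example, a minimal sketch of a PeerAuthentication resource that enables strict mTLS for a namespace might look as follows; the namespace value is illustrative and depends on your Service Mesh member configuration:

apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: <namespace>
spec:
  mtls:
    mode: STRICT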

Prerequisites

  • You configured a custom domain for a Knative service and have a working DomainMapping CR.
  • You have a TLS certificate from your Certificate Authority provider, or a self-signed certificate.
  • You have obtained the certificate and key files for that certificate.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Create a Kubernetes TLS secret:

    $ oc create secret tls <tls_secret_name> --cert=<path_to_certificate_file> --key=<path_to_key_file>
  2. Add the networking.internal.knative.dev/certificate-uid: <id> label to the Kubernetes TLS secret:

    $ oc label secret <tls_secret_name> networking.internal.knative.dev/certificate-uid="<id>"

    If you are using a third-party secret provider such as cert-manager, you can configure your secret manager to label the Kubernetes TLS secret automatically. cert-manager users can use the secret template feature to generate secrets with the correct label automatically, as shown in the sketch after this procedure. In this case, secret filtering is done based on the key only, but the value can carry useful information such as the certificate ID that the secret contains.

    Note

    The cert-manager Operator for Red Hat OpenShift is a Technology Preview feature. For more information, see the Installing the cert-manager Operator for Red Hat OpenShift documentation.

  3. Update the DomainMapping CR to use the TLS secret that you have created:

    apiVersion: serving.knative.dev/v1alpha1
    kind: DomainMapping
    metadata:
      name: <domain_name>
      namespace: <namespace>
    spec:
      ref:
        name: <service_name>
        kind: Service
        apiVersion: serving.knative.dev/v1
      # The tls block specifies the secret to be used
      tls:
        secretName: <tls_secret_name>
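If you use cert-manager to issue your certificate, the following is a minimal sketch of a Certificate resource that uses the secretTemplate field to apply the label described in step 2 automatically. The resource names, domain, and issuer reference are illustrative and depend on your environment:

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: <certificate_name>
  namespace: <namespace>
spec:
  secretName: <tls_secret_name>
  secretTemplate:
    labels:
      networking.internal.knative.dev/certificate-uid: "<id>"
  dnsNames:
    - <domain_name>
  issuerRef:
    name: <issuer_name>
    kind: Issuer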

Verification

  1. Verify that the DomainMapping CR status is True, and that the URL column of the output shows the mapped domain with the scheme https:

    $ oc get domainmapping <domain_name>

    Example output

    NAME                      URL                               READY   REASON
    example.com               https://example.com               True

  2. Optional: If the service is exposed publicly, verify that it is available by running the following command:

    $ curl https://<domain_name>

    If the certificate is self-signed, skip verification by adding the -k flag to the curl command.
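    For example:

    $ curl -k https://<domain_name>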

4.7.6.2. Improving net-kourier memory usage by using secret filtering

By default, the informers implementation for the Kubernetes client-go library fetches all resources of a particular type. This can lead to substantial overhead when many resources are available, and can cause the Knative net-kourier ingress controller to fail on large clusters because of memory leaks. However, a filtering mechanism is available for the Knative net-kourier ingress controller, which enables the controller to fetch only Knative-related secrets. You can enable this mechanism by setting an environment variable in the KnativeServing custom resource (CR).

Important

If you enable secret filtering, all of your secrets need to be labeled with networking.internal.knative.dev/certificate-uid: "<id>". Otherwise, Knative Serving does not detect them, which leads to failures. You must label both new and existing secrets.
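For example, you can add the required label to an existing secret by entering the following command; the secret name and ID value are placeholders for your own values:

$ oc label secret <secret_name> networking.internal.knative.dev/certificate-uid="<id>"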

Prerequisites

  • You have access to an OpenShift Container Platform account with cluster administrator access.
  • You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
  • You have installed the OpenShift Serverless Operator and Knative Serving.
  • You have installed the OpenShift CLI (oc).

Procedure

  • Set the ENABLE_SECRET_INFORMER_FILTERING_BY_CERT_UID variable to true for net-kourier-controller in the KnativeServing CR:

    Example KnativeServing CR

    apiVersion: operator.knative.dev/v1beta1
    kind: KnativeServing
    metadata:
      name: knative-serving
      namespace: knative-serving
    spec:
      deployments:
        - name: net-kourier-controller
          env:
            - container: controller
              envVars:
                - name: ENABLE_SECRET_INFORMER_FILTERING_BY_CERT_UID
                  value: 'true'
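    Optional: After the change is reconciled, you can confirm that the variable is set on the controller deployment. This check assumes that the net-kourier-controller deployment runs in the knative-serving-ingress namespace, which is the default for OpenShift Serverless; adjust the namespace if your installation differs:

    $ oc set env deployment/net-kourier-controller --list -n knative-serving-ingress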

4.8. Configuring high availability for Knative services

4.8.1. High availability for Knative services

High availability (HA) is a standard feature of Kubernetes APIs that helps to ensure that APIs stay operational if a disruption occurs. In an HA deployment, if an active controller crashes or is deleted, another controller is readily available. This controller takes over processing of the APIs that were being serviced by the controller that is now unavailable.

HA in OpenShift Serverless is available through leader election, which is enabled by default after the Knative Serving or Eventing control plane is installed. When using a leader election HA pattern, instances of controllers are already scheduled and running inside the cluster before they are required. These controller instances compete to use a shared resource, known as the leader election lock. The instance of the controller that has access to the leader election lock resource at any given time is called the leader.

4.8.2. High availability for Knative Serving components

High availability (HA) is available by default for the Knative Serving activator, autoscaler, autoscaler-hpa, controller, webhook, kourier-control, and kourier-gateway components, which are configured to have two replicas each by default. You can change the number of replicas for these components by modifying the spec.high-availability.replicas value in the KnativeServing custom resource (CR).

4.8.2.1. Configuring high availability replicas for Knative Serving

To specify a minimum of three replicas for the eligible deployment resources, set the value of the spec.high-availability.replicas field in the custom resource to 3.

Prerequisites

  • You have access to an OpenShift Container Platform account with cluster administrator access.
  • The OpenShift Serverless Operator and Knative Serving are installed on your cluster.

Procedure

  1. In the OpenShift Container Platform web console Administrator perspective, navigate to OperatorHub → Installed Operators.
  2. Select the knative-serving namespace.
  3. Click Knative Serving in the list of Provided APIs for the OpenShift Serverless Operator to go to the Knative Serving tab.
  4. Click knative-serving, then go to the YAML tab in the knative-serving page.

  5. Modify the number of replicas in the KnativeServing CR:

    Example YAML

    apiVersion: operator.knative.dev/v1beta1
    kind: KnativeServing
    metadata:
      name: knative-serving
      namespace: knative-serving
    spec:
      high-availability:
        replicas: 3
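    After the Operator reconciles the change, you can review the replica counts of the affected deployments, for example for the Knative Serving deployments in the knative-serving namespace. This check assumes that you have installed the OpenShift CLI (oc); the Kourier deployments typically run in a separate ingress namespace:

    $ oc get deployments -n knative-serving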

Chapter 5. Eventing

5.1. Knative Eventing

Knative Eventing on OpenShift Container Platform enables developers to use an event-driven architecture with serverless applications. An event-driven architecture is based on the concept of decoupled relationships between event producers and event consumers.

Event producers create events, and event sinks, or consumers, receive events. Knative Eventing uses standard HTTP POST requests to send and receive events between event producers and sinks. These events conform to the CloudEvents specification, which enables creating, parsing, sending, and receiving events in any programming language.

Knative Eventing supports the following use cases:

Publish an event without creating a consumer
You can send events to a broker as an HTTP POST, and use binding to decouple the destination configuration from your application that produces events.
Consume an event without creating a publisher
You can use a trigger to consume events from a broker based on event attributes. The application receives events as an HTTP POST.
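For example, as an illustrative sketch of the first use case, a workload inside the cluster can publish a CloudEvent to a broker with a plain HTTP POST. The URL shown follows the default Knative Eventing broker ingress pattern, and the event attribute values are examples only; you can read the exact URL from the broker's status.address.url field:

$ curl -X POST "http://broker-ingress.knative-eventing.svc.cluster.local/<namespace>/<broker_name>" \
  -H "ce-id: example-id" \
  -H "ce-specversion: 1.0" \
  -H "ce-type: com.example.type" \
  -H "ce-source: example/source" \
  -H "Content-Type: application/json" \
  -d '{"message": "Hello"}'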

To enable delivery to multiple types of sinks, Knative Eventing defines the following generic interfaces that can be implemented by multiple Kubernetes resources:

Addressable resources
Able to receive and acknowledge an event delivered over HTTP to an address defined in the resource's status.address.url field. The Kubernetes Service resource also satisfies the addressable interface.
Callable resources
Able to receive an event delivered over HTTP and transform it, returning 0 or 1 new events in the HTTP response payload. These returned events may be further processed in the same way that events from an external event source are processed.
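For example, you can read the address that an addressable resource exposes, such as a Knative service, from its status. This command assumes that you have installed the OpenShift CLI (oc) and that the service already exists:

$ oc get ksvc <service_name> -o jsonpath='{.status.address.url}'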

5.2. Event sources

5.2.1. Event sources

A Knative event source can be any Kubernetes object that generates or imports cloud events, and relays those events to another endpoint, known as a sink. Sourcing events is critical to developing a distributed system that reacts to events.

You can create and manage Knative event sources by using the Developer perspective in the OpenShift Container Platform web console, the Knative (kn) CLI, or by applying YAML files.

Currently, OpenShift Serverless supports the following event source types:

API server source
Brings Kubernetes API server events into Knative. The API server source sends a new event each time a Kubernetes resource is created, updated or deleted.
Ping source
Produces events with a fixed payload on a specified cron schedule.
Kafka event source
Connects a Kafka cluster to a sink as an event source.

You can also create a custom event source.

5.2.2. Event source in the Administrator perspective

Sourcing events is critical to developing a distributed system that reacts to events.

5.2.2.1. Creating an event source by using the Administrator perspective

A Knative event source can be any Kubernetes object that generates or imports cloud events, and relays those events to another endpoint, known as a sink.

Prerequisites

  • The OpenShift Serverless Operator and Knative Eventing are installed on your OpenShift Container Platform cluster.
  • You have logged in to the web console and are in the Administrator perspective.
  • You have cluster administrator permissions for OpenShift Container Platform.

Procedure

  1. In the Administrator perspective of the OpenShift Container Platform web console, navigate to Serverless → Eventing.
  2. In the Create list, select Event Source. You will be directed to the Event Sources page.
  3. Select the event source type that you want to create.

5.2.3. Creating an API server source

The API server source is an event source that can be used to connect an event sink, such as a Knative service, to the Kubernetes API server. The API server source watches for Kubernetes events and forwards them to the Knative Eventing broker.

5.2.3.1. Creating an API server source by using the web console

After Knative Eventing is installed on your cluster, you can create an API server source by using the web console. Using the OpenShift Container Platform web console provides a streamlined and intuitive user interface to create an event source.

Prerequisites

  • You have logged in to the OpenShift Container Platform web console.
  • The OpenShift Serverless Operator and Knative Eventing are installed on the cluster.
  • You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
  • You have installed the OpenShift CLI (oc).
Procedure

If you want to re-use an existing service account, you can modify your existing ServiceAccount resource to include the required permissions instead of creating a new resource.

  1. Create a service account, role, and role binding for the event source as a YAML file:

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: events-sa
      namespace: default 1
    
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: event-watcher
      namespace: default 2
    rules:
      - apiGroups:
          - ""
        resources:
          - events
        verbs:
          - get
          - list
          - watch
    
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: k8s-ra-event-watcher
      namespace: default 3
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: event-watcher
    subjects:
      - kind: ServiceAccount
        name: events-sa
        namespace: default 4
    1 2 3 4
    Change this namespace to the namespace that you have selected for installing the event source.
  2. Apply the YAML file:

    $ oc apply -f <filename>
  3. In the Developer perspective, navigate to +Add → Event Source. The Event Sources page is displayed.
  4. Optional: If you have multiple providers for your event sources, select the required provider from the Providers list to filter the available event sources from the provider.
  5. Select ApiServerSource and then click Create Event Source. The Create Event Source page is displayed.
  6. Configure the ApiServerSource settings by using the Form view or YAML view:

    Note

    You can switch between the Form view and YAML view. The data is persisted when switching between the views.

    1. Enter v1 as the APIVERSION and Event as the KIND.
    2. Select the Service Account Name for the service account that you created.
    3. Select the Sink for the event source. A Sink can be either a Resource, such as a channel, broker, or service, or a URI.
  7. Click Create.

Verification

  • After you have created the API server source, you can see it connected to its sink service in the Topology view.

Note

If a URI sink is used, you can modify the URI by right-clicking the URI sink and selecting Edit URI.

Deleting the API server source

  1. Navigate to the Topology view.
  2. Right-click the API server source and select Delete ApiServerSource.

5.2.3.2. Creating an API server source by using the Knative CLI

You can use the kn source apiserver create command to create an API server source by using the kn CLI. Using the kn CLI to create an API server source provides a more streamlined and intuitive user interface than modifying YAML files directly.

Prerequisites

  • The OpenShift Serverless Operator and Knative Eventing are installed on the cluster.
  • You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
  • You have installed the OpenShift CLI (oc).
  • You have installed the Knative (kn) CLI.
Procedure

If you want to re-use an existing service account, you can modify your existing ServiceAccount resource to include the required permissions instead of creating a new resource.

  1. Create a service account, role, and role binding for the event source as a YAML file:

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: events-sa
      namespace: default 1
    
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: event-watcher
      namespace: default 2
    rules:
      - apiGroups:
          - ""
        resources:
          - events
        verbs:
          - get
          - list
          - watch
    
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: k8s-ra-event-watcher
      namespace: default 3
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: event-watcher
    subjects:
      - kind: ServiceAccount
        name: events-sa
        namespace: default 4
    1 2 3 4
    Change this namespace to the namespace that you have selected for installing the event source.
  2. Apply the YAML file:

    $ oc apply -f <filename>
  3. Create an API server source that has an event sink. In the following example, the sink is a broker:

    $ kn source apiserver create <event_source_name> --sink broker:<broker_name> --resource "event:v1" --service-account <service_account_name> --mode Resource
  4. To check that the API server source is set up correctly, create a Knative service that dumps incoming messages to its log:

    $ kn service create <service_name> --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest
  5. If you used a broker as an event sink, create a trigger to filter events from the default broker to the service:

    $ kn trigger create <trigger_name> --sink ksvc:<service_name>
  6. Create events by launching a pod in the default namespace:

    $ oc create deployment hello-node --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest
  7. Check that the controller is mapped correctly by inspecting the output generated by the following command:

    $ kn source apiserver describe <source_name>

    Example output

    Name:                mysource
    Namespace:           default
    Annotations:         sources.knative.dev/creator=developer, sources.knative.dev/lastModifier=developer
    Age:                 3m
    ServiceAccountName:  events-sa
    Mode:                Resource
    Sink:
      Name:       default
      Namespace:  default
      Kind:       Broker (eventing.knative.dev/v1)
    Resources:
      Kind:        event (v1)
      Controller:  false
    Conditions:
      OK TYPE                     AGE REASON
      ++ Ready                     3m
      ++ Deployed                  3m
      ++ SinkProvided              3m
      ++ SufficientPermissions     3m
      ++ EventTypesProvided        3m

Verification

You can verify that the Kubernetes events were sent to Knative by looking at the message dumper function logs.

  1. Get the pods:

    $ oc get pods
  2. View the message dumper function logs for the pods:

    $ oc logs $(oc get pod -o name | grep event-display) -c user-container

    Example output

    ☁️  cloudevents.Event
    Validation: valid
    Context Attributes,
      specversion: 1.0
      type: dev.knative.apiserver.resource.update
      datacontenttype: application/json
      ...
    Data,
      {
        "apiVersion": "v1",
        "involvedObject": {
          "apiVersion": "v1",
          "fieldPath": "spec.containers{hello-node}",
          "kind": "Pod",
          "name": "hello-node",
          "namespace": "default",
           .....
        },
        "kind": "Event",
        "message": "Started container",
        "metadata": {
          "name": "hello-node.159d7608e3a3572c",
          "namespace": "default",
          ....
        },
        "reason": "Started",
        ...
      }

Deleting the API server source

  1. Delete the trigger:

    $ kn trigger delete <trigger_name>
  2. Delete the event source:

    $ kn source apiserver delete <source_name>
  3. Delete the service account, role, and role binding:

    $ oc delete -f authentication.yaml
5.2.3.2.1. Knative CLI sink flag

When you create an event source by using the Knative (kn) CLI, you can specify a sink where events are sent to from that resource by using the --sink flag. The sink can be any addressable or callable resource that can receive incoming events from other resources.

The following example creates a sink binding that uses a service, http://event-display.svc.cluster.local, as the sink:

Example command using the sink flag

$ kn source binding create bind-heartbeat \
  --namespace sinkbinding-example \
  --subject "Job:batch/v1:app=heartbeat-cron" \
  --sink http://event-display.svc.cluster.local \ 1
  --ce-override "sink=bound"

1
svc in http://event-display.svc.cluster.local determines that the sink is a Knative service. Other default sink prefixes include channel and broker.
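For example, a sketch of the same command that targets the default broker in the current namespace instead of a service; the broker name is illustrative:

$ kn source binding create bind-heartbeat \
  --subject "Job:batch/v1:app=heartbeat-cron" \
  --sink broker:default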
5.2.3.3. Creating an API server source by using YAML files

Creating Knative resources by using YAML files uses a declarative API, which enables you to describe event sources declaratively and in a reproducible manner. To create an API server source by using YAML, you must create a YAML file that defines an ApiServerSource object, then apply it by using the oc apply command.

Prerequisites

  • The OpenShift Serverless Operator and Knative Eventing are installed on the cluster.
  • You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
  • You have created the default broker in the same namespace as the one defined in the API server source YAML file.
  • Install the OpenShift CLI (oc).
Procedure

If you want to re-use an existing service account, you can modify your existing ServiceAccount resource to include the required permissions instead of creating a new resource.

  1. Create a service account, role, and role binding for the event source as a YAML file:

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: events-sa
      namespace: default 1
    
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: event-watcher
      namespace: default 2
    rules:
      - apiGroups:
          - ""
        resources:
          - events
        verbs:
          - get
          - list
          - watch
    
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: k8s-ra-event-watcher
      namespace: default 3
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: event-watcher
    subjects:
      - kind: ServiceAccount
        name: events-sa
        namespace: default 4
    1 2 3 4
    Change this namespace to the namespace that you have selected for installing the event source.
  2. Apply the YAML file:

    $ oc apply -f <filename>
  3. Create an API server source as a YAML file:

    apiVersion: sources.knative.dev/v1alpha1
    kind: ApiServerSource
    metadata:
      name: testevents
    spec:
      serviceAccountName: events-sa
      mode: Resource
      resources:
        - apiVersion: v1
          kind: Event
      sink:
        ref:
          apiVersion: eventing.knative.dev/v1
          kind: Broker
          name: default
  4. Apply the ApiServerSource YAML file:

    $ oc apply -f <filename>
  5. To check that the API server source is set up correctly, create a Knative service as a YAML file that dumps incoming messages to its log:

    apiVersion: serving.knative.dev/v1
    kind: Service
    metadata:
      name: event-display
      namespace: default
    spec:
      template:
        spec:
          containers:
            - image: quay.io/openshift-knative/knative-eventing-sources-event-display:latest
  6. Apply the Service YAML file:

    $ oc apply -f <filename>
  7. Create a Trigger object as a YAML file that filters events from the default broker to the service created in the previous step:

    apiVersion: eventing.knative.dev/v1
    kind: Trigger
    metadata:
      name: event-display-trigger
      namespace: default
    spec:
      broker: default
      subscriber:
        ref:
          apiVersion: serving.knative.dev/v1
          kind: Service
          name: event-display
  8. Apply the Trigger YAML file:

    $ oc apply -f <filename>
  9. Create events by launching a pod in the default namespace:

    $ oc create deployment hello-node --image=quay.io/openshift-knative/knative-eventing-sources-event-display
  10. Check that the controller is mapped correctly by entering the following command and inspecting the output:

    $ oc get apiserversource.sources.knative.dev testevents -o yaml

    Example output

    apiVersion: sources.knative.dev/v1alpha1
    kind: ApiServerSource
    metadata:
      annotations:
      creationTimestamp: "2020-04-07T17:24:54Z"
      generation: 1
      name: testevents
      namespace: default
      resourceVersion: "62868"
      selfLink: /apis/sources.knative.dev/v1alpha1/namespaces/default/apiserversources/testevents2
      uid: 1603d863-bb06-4d1c-b371-f580b4db99fa
    spec:
      mode: Resource
      resources:
      - apiVersion: v1
        controller: false
        controllerSelector:
          apiVersion: ""
          kind: ""
          name: ""
          uid: ""
        kind: Event
        labelSelector: {}
      serviceAccountName: events-sa
      sink:
        ref:
          apiVersion: eventing.knative.dev/v1
          kind: Broker
          name: default

Verification

To verify that the Kubernetes events were sent to Knative, you can look at the message dumper function logs.

  1. Get the pods by entering the following command:

    $ oc get pods
  2. View the message dumper function logs for the pods by entering the following command:

    $ oc logs $(oc get pod -o name | grep event-display) -c user-container

    Example output

    ☁️  cloudevents.Event
    Validation: valid
    Context Attributes,
      specversion: 1.0
      type: dev.knative.apiserver.resource.update
      datacontenttype: application/json
      ...
    Data,
      {
        "apiVersion": "v1",
        "involvedObject": {
          "apiVersion": "v1",
          "fieldPath": "spec.containers{hello-node}",
          "kind": "Pod",
          "name": "hello-node",
          "namespace": "default",
           .....
        },
        "kind": "Event",
        "message": "Started container",
        "metadata": {
          "name": "hello-node.159d7608e3a3572c",
          "namespace": "default",
          ....
        },
        "reason": "Started",
        ...
      }

Deleting the API server source

  1. Delete the trigger:

    $ oc delete -f trigger.yaml
  2. Delete the event source:

    $ oc delete -f k8s-events.yaml
  3. Delete the service account, role, and role binding:

    $ oc delete -f authentication.yaml

5.2.4. Creating a ping source

A ping source is an event source that can be used to periodically send ping events with a constant payload to an event consumer. A ping source can be used to schedule sending events, similar to a timer.

5.2.4.1. Creating a ping source by using the web console

After Knative Eventing is installed on your cluster, you can create a ping source by using the web console. Using the OpenShift Container Platform web console provides a streamlined and intuitive user interface to create an event source.

Prerequisites

  • You have logged in to the OpenShift Container Platform web console.
  • The OpenShift Serverless Operator, Knative Serving and Knative Eventing are installed on the cluster.
  • You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.

Procedure

  1. To verify that the ping source is working, create a simple Knative service that dumps incoming messages to the logs of the service.

    1. In the Developer perspective, navigate to +Add → YAML.
    2. Copy the example YAML:

      apiVersion: serving.knative.dev/v1
      kind: Service
      metadata:
        name: event-display
      spec:
        template:
          spec:
            containers:
              - image: quay.io/openshift-knative/knative-eventing-sources-event-display:latest
    3. Click Create.
  2. Create a ping source in the same namespace as the service created in the previous step, or any other sink that you want to send events to.

    1. In the Developer perspective, navigate to +Add → Event Source. The Event Sources page is displayed.
    2. Optional: If you have multiple providers for your event sources, select the required provider from the Providers list to filter the available event sources from the provider.
    3. Select Ping Source and then click Create Event Source. The Create Event Source page is displayed.

      Note

      You can configure the PingSource settings by using the Form view or YAML view and can switch between the views. The data is persisted when switching between the views.

    4. Enter a value for Schedule. In this example, the value is */2 * * * *, which creates a PingSource that sends a message every two minutes.
    5. Optional: You can enter a value for Data, which is the message payload.
    6. Select a Sink. This can be either a Resource or a URI. In this example, the event-display service created in the previous step is used as the Resource sink.
    7. Click Create.

Verification

You can verify that the ping source was created and is connected to the sink by viewing the Topology page.

  1. In the Developer perspective, navigate to Topology.
  2. View the ping source and sink.


Deleting the ping source

  1. Navigate to the Topology view.
  2. Right-click the ping source and select Delete Ping Source.
5.2.4.2. Creating a ping source by using the Knative CLI

You can use the kn source ping create command to create a ping source by using the Knative (kn) CLI. Using the Knative CLI to create event sources provides a more streamlined and intuitive user interface than modifying YAML files directly.

Prerequisites

  • The OpenShift Serverless Operator, Knative Serving and Knative Eventing are installed on the cluster.
  • You have installed the Knative (kn) CLI.
  • You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
  • Optional: If you want to use the verification steps for this procedure, install the OpenShift CLI (oc).

Procedure

  1. To verify that the ping source is working, create a simple Knative service that dumps incoming messages to the service logs:

    $ kn service create event-display \
        --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest
  2. For each set of ping events that you want to request, create a ping source in the same namespace as the event consumer:

    $ kn source ping create test-ping-source \
        --schedule "*/2 * * * *" \
        --data '{"message": "Hello world!"}' \
        --sink ksvc:event-display
  3. Check that the controller is mapped correctly by entering the following command and inspecting the output:

    $ kn source ping describe test-ping-source

    Example output

    Name:         test-ping-source
    Namespace:    default
    Annotations:  sources.knative.dev/creator=developer, sources.knative.dev/lastModifier=developer
    Age:          15s
    Schedule:     */2 * * * *
    Data:         {"message": "Hello world!"}
    
    Sink:
      Name:       event-display
      Namespace:  default
      Resource:   Service (serving.knative.dev/v1)
    
    Conditions:
      OK TYPE                 AGE REASON
      ++ Ready                 8s
      ++ Deployed              8s
      ++ SinkProvided         15s
      ++ ValidSchedule        15s
      ++ EventTypeProvided    15s
      ++ ResourcesCorrect     15s

Verification

You can verify that the Kubernetes events were sent to the Knative event sink by looking at the logs of the sink pod.

By default, Knative services terminate their pods if no traffic is received within a 60 second period. The example shown in this guide creates a ping source that sends a message every 2 minutes, so each message should be observed in a newly created pod.

  1. Watch for new pods created:

    $ watch oc get pods
  2. Cancel watching the pods using Ctrl+C, then look at the logs of the created pod:

    $ oc logs $(oc get pod -o name | grep event-display) -c user-container

    Example output

    ☁️  cloudevents.Event
    Validation: valid
    Context Attributes,
      specversion: 1.0
      type: dev.knative.sources.ping
      source: /apis/v1/namespaces/default/pingsources/test-ping-source
      id: 99e4f4f6-08ff-4bff-acf1-47f61ded68c9
      time: 2020-04-07T16:16:00.000601161Z
      datacontenttype: application/json
    Data,
      {
        "message": "Hello world!"
      }

Deleting the ping source

  • Delete the ping source:

    $ kn source ping delete <ping_source_name>
5.2.4.2.1. Knative CLI sink flag

When you create an event source by using the Knative (kn) CLI, you can specify a sink where events are sent to from that resource by using the --sink flag. The sink can be any addressable or callable resource that can receive incoming events from other resources.

The following example creates a sink binding that uses a service, http://event-display.svc.cluster.local, as the sink:

Example command using the sink flag

$ kn source binding create bind-heartbeat \
  --namespace sinkbinding-example \
  --subject "Job:batch/v1:app=heartbeat-cron" \
  --sink http://event-display.svc.cluster.local \ 1
  --ce-override "sink=bound"

1
svc in http://event-display.svc.cluster.local determines that the sink is a Knative service. Other default sink prefixes include channel and broker.
5.2.4.3. Creating a ping source by using YAML

Creating Knative resources by using YAML files uses a declarative API, which enables you to describe event sources declaratively and in a reproducible manner. To create a serverless ping source by using YAML, you must create a YAML file that defines a PingSource object, then apply it by using oc apply.

Example PingSource object

apiVersion: sources.knative.dev/v1
kind: PingSource
metadata:
  name: test-ping-source
spec:
  schedule: "*/2 * * * *" 1
  data: '{"message": "Hello world!"}' 2
  sink: 3
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display

1
The schedule for the event, specified as a cron expression.
2
The event message body, expressed as a JSON-encoded data string.
3
The details of the event consumer. In this example, a Knative service named event-display is used.

Prerequisites

  • The OpenShift Serverless Operator, Knative Serving and Knative Eventing are installed on the cluster.
  • Install the OpenShift CLI (oc).
  • You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.

Procedure

  1. To verify that the ping source is working, create a simple Knative service that dumps incoming messages to the service’s logs.

    1. Create a service YAML file:

      apiVersion: serving.knative.dev/v1
      kind: Service
      metadata:
        name: event-display
      spec:
        template:
          spec:
            containers:
              - image: quay.io/openshift-knative/knative-eventing-sources-event-display:latest
    2. Create the service:

      $ oc apply -f <filename>
  2. For each set of ping events that you want to request, create a ping source in the same namespace as the event consumer.

    1. Create a YAML file for the ping source:

      apiVersion: sources.knative.dev/v1
      kind: PingSource
      metadata:
        name: test-ping-source
      spec:
        schedule: "*/2 * * * *"
        data: '{"message": "Hello world!"}'
        sink:
          ref:
            apiVersion: serving.knative.dev/v1
            kind: Service
            name: event-display
    2. Create the ping source:

      $ oc apply -f <filename>
  3. Check that the controller is mapped correctly by entering the following command:

    $ oc get pingsource.sources.knative.dev <ping_source_name> -oyaml

    Example output

    apiVersion: sources.knative.dev/v1
    kind: PingSource
    metadata:
      annotations:
        sources.knative.dev/creator: developer
        sources.knative.dev/lastModifier: developer
      creationTimestamp: "2020-04-07T16:11:14Z"
      generation: 1
      name: test-ping-source
      namespace: default
      resourceVersion: "55257"
      selfLink: /apis/sources.knative.dev/v1/namespaces/default/pingsources/test-ping-source
      uid: 3d80d50b-f8c7-4c1b-99f7-3ec00e0a8164
    spec:
      data: '{"message": "Hello world!"}'
      schedule: '*/2 * * * *'
      sink:
        ref:
          apiVersion: serving.knative.dev/v1
          kind: Service
          name: event-display
          namespace: default

Verification

You can verify that the Kubernetes events were sent to the Knative event sink by looking at the sink pod’s logs.

By default, Knative services terminate their pods if no traffic is received within a 60 second period. The example shown in this guide creates a PingSource that sends a message every 2 minutes, so each message should be observed in a newly created pod.

  1. Watch for new pods created:

    $ watch oc get pods
  2. Cancel watching the pods using Ctrl+C, then look at the logs of the created pod:

    $ oc logs $(oc get pod -o name | grep event-display) -c user-container

    Example output

    ☁️  cloudevents.Event
    Validation: valid
    Context Attributes,
      specversion: 1.0
      type: dev.knative.sources.ping
      source: /apis/v1/namespaces/default/pingsources/test-ping-source
      id: 042ff529-240e-45ee-b40c-3a908129853e
      time: 2020-04-07T16:22:00.000791674Z
      datacontenttype: application/json
    Data,
      {
        "message": "Hello world!"
      }

Deleting the ping source

  • Delete the ping source:

    $ oc delete -f <filename>

    Example command

    $ oc delete -f ping-source.yaml

5.2.5. Kafka source

You can create a Kafka source that reads events from an Apache Kafka cluster and passes these events to a sink. You can create a Kafka source by using the OpenShift Container Platform web console, the Knative (kn) CLI, or by creating a KafkaSource object directly as a YAML file and using the OpenShift CLI (oc) to apply it.

5.2.5.1. Creating a Kafka event source by using the web console

After Knative Kafka is installed on your cluster, you can create a Kafka source by using the web console. Using the OpenShift Container Platform web console provides a streamlined and intuitive user interface to create a Kafka source.

Prerequisites

  • The OpenShift Serverless Operator, Knative Eventing, and the KnativeKafka custom resource are installed on your cluster.
  • You have logged in to the web console.
  • You have access to a Red Hat AMQ Streams (Kafka) cluster that produces the Kafka messages you want to import.
  • You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.

Procedure

  1. In the Developer perspective, navigate to the +Add page and select Event Source.
  2. In the Event Sources page, select Kafka Source in the Type section.
  3. Configure the Kafka Source settings:

    1. Add a comma-separated list of Bootstrap Servers.
    2. Add a comma-separated list of Topics.
    3. Add a Consumer Group.
    4. Select the Service Account Name for the service account that you created.
    5. Select the Sink for the event source. A Sink can be either a Resource, such as a channel, broker, or service, or a URI.
    6. Enter a Name for the Kafka event source.
  4. Click Create.

Verification

You can verify that the Kafka event source was created and is connected to the sink by viewing the Topology page.

  1. In the Developer perspective, navigate to Topology.
  2. View the Kafka event source and sink.

5.2.5.2. Creating a Kafka event source by using the Knative CLI

You can use the kn source kafka create command to create a Kafka source by using the Knative (kn) CLI. Using the Knative CLI to create event sources provides a more streamlined and intuitive user interface than modifying YAML files directly.

Prerequisites

  • The OpenShift Serverless Operator, Knative Eventing, Knative Serving, and the KnativeKafka custom resource (CR) are installed on your cluster.
  • You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
  • You have access to a Red Hat AMQ Streams (Kafka) cluster that produces the Kafka messages you want to import.
  • You have installed the Knative (kn) CLI.
  • Optional: You have installed the OpenShift CLI (oc) if you want to use the verification steps in this procedure.

Procedure

  1. To verify that the Kafka event source is working, create a Knative service that dumps incoming events into the service logs:

    $ kn service create event-display \
        --image quay.io/openshift-knative/knative-eventing-sources-event-display
  2. Create a KafkaSource CR:

    $ kn source kafka create <kafka_source_name> \
        --servers <cluster_kafka_bootstrap>.kafka.svc:9092 \
        --topics <topic_name> --consumergroup my-consumer-group \
        --sink event-display
    Note

    Replace the placeholder values in this command with values for your source name, bootstrap servers, and topics.

    The --servers, --topics, and --consumergroup options specify the connection parameters to the Kafka cluster. The --consumergroup option is optional.

  3. Optional: View details about the KafkaSource CR you created:

    $ kn source kafka describe <kafka_source_name>

    Example output

    Name:              example-kafka-source
    Namespace:         kafka
    Age:               1h
    BootstrapServers:  example-cluster-kafka-bootstrap.kafka.svc:9092
    Topics:            example-topic
    ConsumerGroup:     example-consumer-group
    
    Sink:
      Name:       event-display
      Namespace:  default
      Resource:   Service (serving.knative.dev/v1)
    
    Conditions:
      OK TYPE            AGE REASON
      ++ Ready            1h
      ++ Deployed         1h
      ++ SinkProvided     1h

Verification steps

  1. Trigger the Kafka instance to send a message to the topic:

    $ oc -n kafka run kafka-producer \
        -ti --image=quay.io/strimzi/kafka:latest-kafka-2.7.0 --rm=true \
        --restart=Never -- bin/kafka-console-producer.sh \
        --broker-list <cluster_kafka_bootstrap>:9092 --topic my-topic

    Enter a message at the prompt. This command assumes that:

    • The Kafka cluster is installed in the kafka namespace.
    • The KafkaSource object has been configured to use the my-topic topic.
  2. Verify that the message arrived by viewing the logs:

    $ oc logs $(oc get pod -o name | grep event-display) -c user-container

    Example output

    ☁️  cloudevents.Event
    Validation: valid
    Context Attributes,
      specversion: 1.0
      type: dev.knative.kafka.event
      source: /apis/v1/namespaces/default/kafkasources/example-kafka-source#example-topic
      subject: partition:46#0
      id: partition:46/offset:0
      time: 2021-03-10T11:21:49.4Z
    Extensions,
      traceparent: 00-161ff3815727d8755848ec01c866d1cd-7ff3916c44334678-00
    Data,
      Hello!

5.2.5.2.1. Knative CLI sink flag

When you create an event source by using the Knative (kn) CLI, you can specify a sink where events are sent to from that resource by using the --sink flag. The sink can be any addressable or callable resource that can receive incoming events from other resources.

The following example creates a sink binding that uses a service, http://event-display.svc.cluster.local, as the sink:

Example command using the sink flag

$ kn source binding create bind-heartbeat \
  --namespace sinkbinding-example \
  --subject "Job:batch/v1:app=heartbeat-cron" \
  --sink http://event-display.svc.cluster.local \ 1
  --ce-override "sink=bound"

1
svc in http://event-display.svc.cluster.local determines that the sink is a Knative service. Other default sink prefixes include channel and broker.
5.2.5.3. Creating a Kafka event source by using YAML

Creating Knative resources by using YAML files uses a declarative API, which enables you to describe applications declaratively and in a reproducible manner. To create a Kafka source by using YAML, you must create a YAML file that defines a KafkaSource object, then apply it by using the oc apply command.

Prerequisites

  • The OpenShift Serverless Operator, Knative Eventing, and the KnativeKafka custom resource are installed on your cluster.
  • You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
  • You have access to a Red Hat AMQ Streams (Kafka) cluster that produces the Kafka messages you want to import.
  • Install the OpenShift CLI (oc).

Procedure

  1. Create a KafkaSource object as a YAML file:

    apiVersion: sources.knative.dev/v1beta1
    kind: KafkaSource
    metadata:
      name: <source_name>
    spec:
      consumerGroup: <group_name> 1
      bootstrapServers:
      - <list_of_bootstrap_servers>
      topics:
      - <list_of_topics> 2
      sink: 3
        ref:
          apiVersion: <sink_api_version>
          kind: <sink_kind>
          name: <sink_name>
    1
    A consumer group is a group of consumers that use the same group ID, and consume data from a topic.
    2
    A topic provides a destination for the storage of data. Each topic is split into one or more partitions.
    3
    A sink specifies where events are sent to from a source.
    Important

    Only the v1beta1 version of the API for KafkaSource objects on OpenShift Serverless is supported. Do not use the v1alpha1 version of this API, as this version is now deprecated.

    Example KafkaSource object

    apiVersion: sources.knative.dev/v1beta1
    kind: KafkaSource
    metadata:
      name: kafka-source
    spec:
      consumerGroup: knative-group
      bootstrapServers:
      - my-cluster-kafka-bootstrap.kafka:9092
      topics:
      - knative-demo-topic
      sink:
        ref:
          apiVersion: serving.knative.dev/v1
          kind: Service
          name: event-display

  2. Apply the KafkaSource YAML file:

    $ oc apply -f <filename>

Verification

  • Verify that the Kafka event source was created by entering the following command:

    $ oc get pods

    Example output

    NAME                                    READY     STATUS    RESTARTS   AGE
    kafkasource-kafka-source-5ca0248f-...   1/1       Running   0          13m
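    Optional: You can also check the status of the KafkaSource object directly:

    $ oc get kafkasources.sources.knative.dev <source_name>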

5.2.5.4. Configuring SASL authentication for Kafka sources

Simple Authentication and Security Layer (SASL) is used by Apache Kafka for authentication. If you use SASL authentication on your cluster, users must provide credentials to Knative for communicating with the Kafka cluster; otherwise events cannot be produced or consumed.

Prerequisites

  • You have cluster or dedicated administrator permissions on OpenShift Container Platform.
  • The OpenShift Serverless Operator, Knative Eventing, and the KnativeKafka CR are installed on your OpenShift Container Platform cluster.
  • You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
  • You have a username and password for a Kafka cluster.
  • You have chosen the SASL mechanism to use, for example, PLAIN, SCRAM-SHA-256, or SCRAM-SHA-512.
  • If TLS is enabled, you also need the ca.crt certificate file for the Kafka cluster.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Create the certificate files as secrets in your chosen namespace:

    $ oc create secret -n <namespace> generic <kafka_auth_secret> \
      --from-file=ca.crt=caroot.pem \
      --from-literal=password="SecretPassword" \
      --from-literal=saslType="SCRAM-SHA-512" \ 1
      --from-literal=user="my-sasl-user"
    1
    The SASL type can be PLAIN, SCRAM-SHA-256, or SCRAM-SHA-512.
  2. Create or modify your Kafka source so that it contains the following spec configuration:

    apiVersion: sources.knative.dev/v1beta1
    kind: KafkaSource
    metadata:
      name: example-source
    spec:
    ...
      net:
        sasl:
          enable: true
          user:
            secretKeyRef:
              name: <kafka_auth_secret>
              key: user
          password:
            secretKeyRef:
              name: <kafka_auth_secret>
              key: password
          type:
            secretKeyRef:
              name: <kafka_auth_secret>
              key: saslType
        tls:
          enable: true
          caCert: 1
            secretKeyRef:
              name: <kafka_auth_secret>
              key: ca.crt
    ...
    1
    The caCert spec is not required if you are using a public cloud Kafka service, such as Red Hat OpenShift Streams for Apache Kafka.
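    If you edited the Kafka source in a YAML file, apply the updated file:

    $ oc apply -f <filename>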

5.2.6. Custom event sources

If you need to ingress events from an event producer that is not included in Knative, or from a producer that emits events which are not in the CloudEvent format, you can do this by creating a custom event source. You can create a custom event source by using one of the following methods:

  • Use a PodSpecable object as an event source, by creating a sink binding.
  • Use a container as an event source, by creating a container source.
5.2.6.1. Sink binding

The SinkBinding object supports decoupling event production from delivery addressing. Sink binding is used to connect event producers to an event consumer, or sink. An event producer is a Kubernetes resource that embeds a PodSpec template and produces events. A sink is an addressable Kubernetes object that can receive events.

The SinkBinding object injects environment variables into the PodTemplateSpec of the event producer, which means that the application code does not need to interact directly with the Kubernetes API to locate the event destination. These environment variables are as follows:

K_SINK
The URL of the resolved sink.
K_CE_OVERRIDES
A JSON object that specifies overrides to the outbound event.
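For example, application code running in the bound subject can read these variables and post a CloudEvent to the resolved sink over HTTP. The following shell sketch uses illustrative CloudEvents attribute values:

$ curl -X POST "$K_SINK" \
  -H "ce-id: example-id" \
  -H "ce-specversion: 1.0" \
  -H "ce-type: com.example.heartbeat" \
  -H "ce-source: example/heartbeat" \
  -H "Content-Type: application/json" \
  -d '{"message": "ping"}'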
Note

The SinkBinding object currently does not support custom revision names for services.

5.2.6.1.1. Creating a sink binding by using YAML

Creating Knative resources by using YAML files uses a declarative API, which enables you to describe event sources declaratively and in a reproducible manner. To create a sink binding by using YAML, you must create a YAML file that defines a SinkBinding object, then apply it by using the oc apply command.

Prerequisites

  • The OpenShift Serverless Operator, Knative Serving and Knative Eventing are installed on the cluster.
  • Install the OpenShift CLI (oc).
  • You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.

Procedure

  1. To check that sink binding is set up correctly, create a Knative event display service, or event sink, that dumps incoming messages to its log.

    1. Create a service YAML file:

      Example service YAML file

      apiVersion: serving.knative.dev/v1
      kind: Service
      metadata:
        name: event-display
      spec:
        template:
          spec:
            containers:
              - image: quay.io/openshift-knative/knative-eventing-sources-event-display:latest

    2. Create the service:

      $ oc apply -f <filename>
  2. Create a sink binding instance that directs events to the service.

    1. Create a sink binding YAML file:

      Example sink binding YAML file

      apiVersion: sources.knative.dev/v1alpha1
      kind: SinkBinding
      metadata:
        name: bind-heartbeat
      spec:
        subject:
          apiVersion: batch/v1
          kind: Job 1
          selector:
            matchLabels:
              app: heartbeat-cron
      
        sink:
          ref:
            apiVersion: serving.knative.dev/v1
            kind: Service
            name: event-display

      1
      In this example, any Job with the label app: heartbeat-cron will be bound to the event sink.
    2. Create the sink binding:

      $ oc apply -f <filename>
  3. Create a CronJob object.

    1. Create a cron job YAML file:

      Example cron job YAML file

      apiVersion: batch/v1
      kind: CronJob
      metadata:
        name: heartbeat-cron
      spec:
        # Run every minute
        schedule: "* * * * *"
        jobTemplate:
          metadata:
            labels:
              app: heartbeat-cron
              bindings.knative.dev/include: "true"
          spec:
            template:
              spec:
                restartPolicy: Never
                containers:
                  - name: single-heartbeat
                    image: quay.io/openshift-knative/heartbeats:latest
                    args:
                      - --period=1
                    env:
                      - name: ONE_SHOT
                        value: "true"
                      - name: POD_NAME
                        valueFrom:
                          fieldRef:
                            fieldPath: metadata.name
                      - name: POD_NAMESPACE
                        valueFrom:
                          fieldRef:
                            fieldPath: metadata.namespace

      Important

      To use sink binding, you must manually add a bindings.knative.dev/include=true label to your Knative resources.

      For example, to add this label to a CronJob resource, add the following lines to the CronJob resource YAML definition:

        jobTemplate:
          metadata:
            labels:
              app: heartbeat-cron
              bindings.knative.dev/include: "true"
    2. Create the cron job:

      $ oc apply -f <filename>
  4. Check that the controller is mapped correctly by entering the following command and inspecting the output:

    $ oc get sinkbindings.sources.knative.dev bind-heartbeat -oyaml

    Example output

    spec:
      sink:
        ref:
          apiVersion: serving.knative.dev/v1
          kind: Service
          name: event-display
          namespace: default
      subject:
        apiVersion: batch/v1
        kind: Job
        namespace: default
        selector:
          matchLabels:
            app: heartbeat-cron

Verification

You can verify that the Kubernetes events were sent to the Knative event sink by looking at the message dumper function logs.

  1. Enter the command:

    $ oc get pods
  2. Enter the command:

    $ oc logs $(oc get pod -o name | grep event-display) -c user-container

    Example output

    ☁️  cloudevents.Event
    Validation: valid
    Context Attributes,
      specversion: 1.0
      type: dev.knative.eventing.samples.heartbeat
      source: https://knative.dev/eventing-contrib/cmd/heartbeats/#event-test/mypod
      id: 2b72d7bf-c38f-4a98-a433-608fbcdd2596
      time: 2019-10-18T15:23:20.809775386Z
      contenttype: application/json
    Extensions,
      beats: true
      heart: yes
      the: 42
    Data,
      {
        "id": 1,
        "label": ""
      }

5.2.6.1.2. Creating a sink binding by using the Knative CLI

You can use the kn source binding create command to create a sink binding by using the Knative (kn) CLI. Using the Knative CLI to create event sources provides a more streamlined and intuitive user interface than modifying YAML files directly.

Prerequisites

  • The OpenShift Serverless Operator, Knative Serving and Knative Eventing are installed on the cluster.
  • You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
  • Install the Knative (kn) CLI.
  • Install the OpenShift CLI (oc).
Note

The following procedure requires you to create YAML files.

If you change the names of the YAML files from those used in the examples, you must ensure that you also update the corresponding CLI commands.

Procedure

  1. To check that sink binding is set up correctly, create a Knative event display service, or event sink, that dumps incoming messages to its log:

    $ kn service create event-display --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest
  2. Create a sink binding instance that directs events to the service:

    $ kn source binding create bind-heartbeat --subject Job:batch/v1:app=heartbeat-cron --sink ksvc:event-display
  3. Create a CronJob object.

    1. Create a cron job YAML file:

      Example cron job YAML file

      apiVersion: batch/v1
      kind: CronJob
      metadata:
        name: heartbeat-cron
      spec:
        # Run every minute
        schedule: "* * * * *"
        jobTemplate:
          metadata:
            labels:
              app: heartbeat-cron
              bindings.knative.dev/include: "true"
          spec:
            template:
              spec:
                restartPolicy: Never
                containers:
                  - name: single-heartbeat
                    image: quay.io/openshift-knative/heartbeats:latest
                    args:
                      - --period=1
                    env:
                      - name: ONE_SHOT
                        value: "true"
                      - name: POD_NAME
                        valueFrom:
                          fieldRef:
                            fieldPath: metadata.name
                      - name: POD_NAMESPACE
                        valueFrom:
                          fieldRef:
                            fieldPath: metadata.namespace

      Important

      To use sink binding, you must manually add a bindings.knative.dev/include=true label to the resources that you use as binding subjects.

      For example, to add this label to a CronJob resource, add the following lines to the CronJob YAML definition:

        jobTemplate:
          metadata:
            labels:
              app: heartbeat-cron
              bindings.knative.dev/include: "true"
    2. Create the cron job:

      $ oc apply -f <filename>
  4. Check that the controller is mapped correctly by entering the following command and inspecting the output:

    $ kn source binding describe bind-heartbeat

    Example output

    Name:         bind-heartbeat
    Namespace:    demo-2
    Annotations:  sources.knative.dev/creator=minikube-user, sources.knative.dev/lastModifier=minikub ...
    Age:          2m
    Subject:
      Resource:   job (batch/v1)
      Selector:
        app:      heartbeat-cron
    Sink:
      Name:       event-display
      Resource:   Service (serving.knative.dev/v1)
    
    Conditions:
      OK TYPE     AGE REASON
      ++ Ready     2m

Verification

You can verify that the Kubernetes events were sent to the Knative event sink by looking at the message dumper function logs.

  • View the message dumper function logs by entering the following commands:

    $ oc get pods
    $ oc logs $(oc get pod -o name | grep event-display) -c user-container

    Example output

    ☁️  cloudevents.Event
    Validation: valid
    Context Attributes,
      specversion: 1.0
      type: dev.knative.eventing.samples.heartbeat
      source: https://knative.dev/eventing-contrib/cmd/heartbeats/#event-test/mypod
      id: 2b72d7bf-c38f-4a98-a433-608fbcdd2596
      time: 2019-10-18T15:23:20.809775386Z
      contenttype: application/json
    Extensions,
      beats: true
      heart: yes
      the: 42
    Data,
      {
        "id": 1,
        "label": ""
      }

5.2.6.1.2.1. Knative CLI sink flag

When you create an event source by using the Knative (kn) CLI, you can specify a sink where events are sent to from that resource by using the --sink flag. The sink can be any addressable or callable resource that can receive incoming events from other resources.

The following example creates a sink binding that uses a service, http://event-display.svc.cluster.local, as the sink:

Example command using the sink flag

$ kn source binding create bind-heartbeat \
  --namespace sinkbinding-example \
  --subject "Job:batch/v1:app=heartbeat-cron" \
  --sink http://event-display.svc.cluster.local \ 1
  --ce-override "sink=bound"

1
svc in http://event-display.svc.cluster.local determines that the sink is a Knative service. Other default sink prefixes include channel and broker.
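
For example, you can direct the same binding at the default broker instead of a Knative service by changing the prefix. The following command is a sketch; the broker name default is only an example:

$ kn source binding create bind-heartbeat \
  --subject "Job:batch/v1:app=heartbeat-cron" \
  --sink broker:default
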
5.2.6.1.3. Creating a sink binding by using the web console

After Knative Eventing is installed on your cluster, you can create a sink binding by using the web console. Using the OpenShift Container Platform web console provides a streamlined and intuitive user interface to create an event source.

Prerequisites

  • You have logged in to the OpenShift Container Platform web console.
  • The OpenShift Serverless Operator, Knative Serving, and Knative Eventing are installed on your OpenShift Container Platform cluster.
  • You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.

Procedure

  1. Create a Knative service to use as a sink:

    1. In the Developer perspective, navigate to +Add → YAML.
    2. Copy the example YAML:

      apiVersion: serving.knative.dev/v1
      kind: Service
      metadata:
        name: event-display
      spec:
        template:
          spec:
            containers:
              - image: quay.io/openshift-knative/knative-eventing-sources-event-display:latest
    3. Click Create.
  2. Create a CronJob resource that is used as an event source and sends an event every minute.

    1. In the Developer perspective, navigate to +Add → YAML.
    2. Copy the example YAML:

      apiVersion: batch/v1
      kind: CronJob
      metadata:
        name: heartbeat-cron
      spec:
        # Run every minute
        schedule: "*/1 * * * *"
        jobTemplate:
          metadata:
            labels:
              app: heartbeat-cron
              bindings.knative.dev/include: "true" 1
          spec:
            template:
              spec:
                restartPolicy: Never
                containers:
                  - name: single-heartbeat
                    image: quay.io/openshift-knative/heartbeats
                    args:
                    - --period=1
                    env:
                      - name: ONE_SHOT
                        value: "true"
                      - name: POD_NAME
                        valueFrom:
                          fieldRef:
                            fieldPath: metadata.name
                      - name: POD_NAMESPACE
                        valueFrom:
                          fieldRef:
                            fieldPath: metadata.namespace
      1
      Ensure that you include the bindings.knative.dev/include: "true" label. The default namespace selection behavior of OpenShift Serverless uses inclusion mode.
    3. Click Create.
  3. Create a sink binding in the same namespace as the service created in the previous step, or any other sink that you want to send events to.

    1. In the Developer perspective, navigate to +Add → Event Source. The Event Sources page is displayed.
    2. Optional: If you have multiple providers for your event sources, select the required provider from the Providers list to filter the available event sources from the provider.
    3. Select Sink Binding and then click Create Event Source. The Create Event Source page is displayed.

      Note

      You can configure the Sink Binding settings by using the Form view or YAML view and can switch between the views. The data is persisted when switching between the views.

    4. In the apiVersion field enter batch/v1.
    5. In the Kind field enter Job.

      Note

      The CronJob kind is not supported directly by OpenShift Serverless sink binding, so the Kind field must target the Job objects created by the cron job, rather than the cron job object itself.

    6. Select a Sink. This can be either a Resource or a URI. In this example, the event-display service created in the previous step is used as the Resource sink.
    7. In the Match labels section:

      1. Enter app in the Name field.
      2. Enter heartbeat-cron in the Value field.

        Note

        The label selector is required when using cron jobs with sink binding, rather than the resource name, because jobs created by a cron job do not have a predictable name and contain a randomly generated string in their name. For example, heartbeat-cron-1cc23f.

    8. Click Create.

Verification

You can verify that the sink binding, sink, and cron job have been created and are working correctly by viewing the Topology page and pod logs.

  1. In the Developer perspective, navigate to Topology.
  2. View the sink binding, sink, and heartbeats cron job.

    View the sink binding and service in the Topology view
  3. Observe that successful jobs are being registered by the cron job once the sink binding is added. This means that the sink binding is successfully reconfiguring the jobs created by the cron job.
  4. Browse the logs of the event-display service pod to see events produced by the heartbeats cron job.
5.2.6.1.4. Sink binding reference

You can use a PodSpecable object as an event source by creating a sink binding. You can configure multiple parameters when creating a SinkBinding object.

SinkBinding objects support the following parameters:

Field | Description | Required or optional

apiVersion

Specifies the API version, for example sources.knative.dev/v1.

Required

kind

Identifies this resource object as a SinkBinding object.

Required

metadata

Specifies metadata that uniquely identifies the SinkBinding object. For example, a name.

Required

spec

Specifies the configuration information for this SinkBinding object.

Required

spec.sink

A reference to an object that resolves to a URI to use as the sink.

Required

spec.subject

References the resources for which the runtime contract is augmented by binding implementations.

Required

spec.ceOverrides

Defines overrides to control the output format and modifications to the event sent to the sink.

Optional
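
For reference, the following minimal sketch combines these parameters in a single SinkBinding object. The service name and job label reuse the examples from earlier in this section, and the ceOverrides extension mirrors the --ce-override "sink=bound" flag shown in the CLI example:

apiVersion: sources.knative.dev/v1
kind: SinkBinding
metadata:
  name: bind-heartbeat
spec:
  subject:
    apiVersion: batch/v1
    kind: Job
    selector:
      matchLabels:
        app: heartbeat-cron
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display
  ceOverrides:
    extensions:
      sink: bound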

5.2.6.1.4.1. Subject parameter

The Subject parameter references the resources for which the runtime contract is augmented by binding implementations. You can configure multiple fields for a Subject definition.

The Subject definition supports the following fields:

Field | Description | Required or optional

apiVersion

API version of the referent.

Required

kind

Kind of the referent.

Required

namespace

Namespace of the referent. If omitted, this defaults to the namespace of the object.

Optional

name

Name of the referent.

Do not use if you configure selector.

selector

Selector of the referents.

Do not use if you configure name.

selector.matchExpressions

A list of label selector requirements.

Only use one of either matchExpressions or matchLabels.

selector.matchExpressions.key

The label key that the selector applies to.

Required if using matchExpressions.

selector.matchExpressions.operator

Represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.

Required if using matchExpressions.

selector.matchExpressions.values

An array of string values. If the operator parameter value is In or NotIn, the values array must be non-empty. If the operator parameter value is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.

Required if using matchExpressions.

selector.matchLabels

A map of key-value pairs. Each key-value pair in the matchLabels map is equivalent to an element of matchExpressions, where the key field is matchLabels.<key>, the operator is In, and the values array contains only matchLabels.<value>.

Only use one of either matchExpressions or matchLabels.

Subject parameter examples

Given the following YAML, the Deployment object named mysubject in the default namespace is selected:

apiVersion: sources.knative.dev/v1
kind: SinkBinding
metadata:
  name: bind-heartbeat
spec:
  subject:
    apiVersion: apps/v1
    kind: Deployment
    namespace: default
    name: mysubject
  ...

Given the following YAML, any Job object with the label working=example in the default namespace is selected:

apiVersion: sources.knative.dev/v1
kind: SinkBinding
metadata:
  name: bind-heartbeat
spec:
  subject:
    apiVersion: batch/v1
    kind: Job
    namespace: default
    selector:
      matchLabels:
        working: example
  ...

Given the following YAML, any Pod object with the label working=example or working=sample in the default namespace is selected:

apiVersion: sources.knative.dev/v1
kind: SinkBinding
metadata:
  name: bind-heartbeat
spec:
  subject:
    apiVersion: v1
    kind: Pod
    namespace: default
    selector:
      matchExpressions:
        - key: working
          operator: In
          values:
            - example
            - sample
  ...
5.2.6.1.4.2. CloudEvent overrides

A ceOverrides definition provides overrides that control the CloudEvent’s output format and modifications sent to the sink. You can configure multiple fields for the ceOverrides definition.

A ceOverrides definition supports the following fields:

Field | Description | Required or optional

extensions

Specifies which attributes are added or overridden on the outbound event. Each extensions key-value pair is set independently on the event as an attribute extension.

Optional

Note

Only valid CloudEvent attribute names are allowed as extensions. You cannot set the spec-defined attributes from the extensions override configuration. For example, you cannot modify the type attribute.

CloudEvent Overrides example

apiVersion: sources.knative.dev/v1
kind: SinkBinding
metadata:
  name: bind-heartbeat
spec:
  ...
  ceOverrides:
    extensions:
      extra: this is an extra attribute
      additional: 42

This sets the K_CE_OVERRIDES environment variable on the subject:

Example output

{ "extensions": { "extra": "this is an extra attribute", "additional": "42" } }

5.2.6.1.4.3. The include label

To use a sink binding, you must assign the bindings.knative.dev/include: "true" label to either the resource or the namespace that the resource is included in. If the resource definition does not include the label, a cluster administrator can attach it to the namespace by running:

$ oc label namespace <namespace> bindings.knative.dev/include=true
5.2.6.2. Container source

Container sources run a container image that generates events and sends them to a sink. You can use a container source to create a custom event source by creating a container image and a ContainerSource object that uses your image URI.

5.2.6.2.1. Guidelines for creating a container image

Two environment variables are injected by the container source controller: K_SINK and K_CE_OVERRIDES. These variables are resolved from the sink and ceOverrides spec, respectively. Events are sent to the sink URI specified in the K_SINK environment variable. The message must be sent as a POST using the CloudEvent HTTP format.

Example container images

The following is an example of a heartbeats container image:

package main

import (
	"context"
	"encoding/json"
	"flag"
	"fmt"
	"log"
	"os"
	"strconv"
	"time"

	duckv1 "knative.dev/pkg/apis/duck/v1"

	cloudevents "github.com/cloudevents/sdk-go/v2"
	"github.com/kelseyhightower/envconfig"
)

type Heartbeat struct {
	Sequence int    `json:"id"`
	Label    string `json:"label"`
}

var (
	eventSource string
	eventType   string
	sink        string
	label       string
	periodStr   string
)

func init() {
	flag.StringVar(&eventSource, "eventSource", "", "the event-source (CloudEvents)")
	flag.StringVar(&eventType, "eventType", "dev.knative.eventing.samples.heartbeat", "the event-type (CloudEvents)")
	flag.StringVar(&sink, "sink", "", "the host url to heartbeat to")
	flag.StringVar(&label, "label", "", "a special label")
	flag.StringVar(&periodStr, "period", "5", "the number of seconds between heartbeats")
}

type envConfig struct {
	// Sink URL where to send heartbeat cloud events
	Sink string `envconfig:"K_SINK"`

	// CEOverrides are the CloudEvents overrides to be applied to the outbound event.
	CEOverrides string `envconfig:"K_CE_OVERRIDES"`

	// Name of this pod.
	Name string `envconfig:"POD_NAME" required:"true"`

	// Namespace this pod exists in.
	Namespace string `envconfig:"POD_NAMESPACE" required:"true"`

	// Whether to run continuously or exit.
	OneShot bool `envconfig:"ONE_SHOT" default:"false"`
}

func main() {
	flag.Parse()

	var env envConfig
	if err := envconfig.Process("", &env); err != nil {
		log.Printf("[ERROR] Failed to process env var: %s", err)
		os.Exit(1)
	}

	if env.Sink != "" {
		sink = env.Sink
	}

	var ceOverrides *duckv1.CloudEventOverrides
	if len(env.CEOverrides) > 0 {
		overrides := duckv1.CloudEventOverrides{}
		err := json.Unmarshal([]byte(env.CEOverrides), &overrides)
		if err != nil {
			log.Printf("[ERROR] Unparseable CloudEvents overrides %s: %v", env.CEOverrides, err)
			os.Exit(1)
		}
		ceOverrides = &overrides
	}

	p, err := cloudevents.NewHTTP(cloudevents.WithTarget(sink))
	if err != nil {
		log.Fatalf("failed to create http protocol: %s", err.Error())
	}

	c, err := cloudevents.NewClient(p, cloudevents.WithUUIDs(), cloudevents.WithTimeNow())
	if err != nil {
		log.Fatalf("failed to create client: %s", err.Error())
	}

	var period time.Duration
	if p, err := strconv.Atoi(periodStr); err != nil {
		period = time.Duration(5) * time.Second
	} else {
		period = time.Duration(p) * time.Second
	}

	if eventSource == "" {
		eventSource = fmt.Sprintf("https://knative.dev/eventing-contrib/cmd/heartbeats/#%s/%s", env.Namespace, env.Name)
		log.Printf("Heartbeats Source: %s", eventSource)
	}

	if len(label) > 0 && label[0] == '"' {
		label, _ = strconv.Unquote(label)
	}
	hb := &Heartbeat{
		Sequence: 0,
		Label:    label,
	}
	ticker := time.NewTicker(period)
	for {
		hb.Sequence++

		event := cloudevents.NewEvent("1.0")
		event.SetType(eventType)
		event.SetSource(eventSource)
		event.SetExtension("the", 42)
		event.SetExtension("heart", "yes")
		event.SetExtension("beats", true)

		if ceOverrides != nil && ceOverrides.Extensions != nil {
			for n, v := range ceOverrides.Extensions {
				event.SetExtension(n, v)
			}
		}

		if err := event.SetData(cloudevents.ApplicationJSON, hb); err != nil {
			log.Printf("failed to set cloudevents data: %s", err.Error())
		}

		log.Printf("sending cloudevent to %s", sink)
		if res := c.Send(context.Background(), event); !cloudevents.IsACK(res) {
			log.Printf("failed to send cloudevent: %v", res)
		}

		if env.OneShot {
			return
		}

		// Wait for next tick
		<-ticker.C
	}
}

The following is an example of a container source that references the previous heartbeats container image:

apiVersion: sources.knative.dev/v1
kind: ContainerSource
metadata:
  name: test-heartbeats
spec:
  template:
    spec:
      containers:
        # This corresponds to a heartbeats image URI that you have built and published
        - image: gcr.io/knative-releases/knative.dev/eventing/cmd/heartbeats
          name: heartbeats
          args:
            - --period=1
          env:
            - name: POD_NAME
              value: "example-pod"
            - name: POD_NAMESPACE
              value: "event-test"
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: example-service
...
5.2.6.2.2. Creating and managing container sources by using the Knative CLI

You can use the kn source container commands to create and manage container sources by using the Knative (kn) CLI. Using the Knative CLI to create event sources provides a more streamlined and intuitive user interface than modifying YAML files directly.

Create a container source

$ kn source container create <container_source_name> --image <image_uri> --sink <sink>
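
For example, a container source that runs the heartbeats image from the previous section and sends events to the event-display service could be created as follows. The source name is an example, and the heartbeats image also expects the POD_NAME and POD_NAMESPACE environment variables shown in the earlier ContainerSource YAML:

$ kn source container create test-heartbeats \
  --image quay.io/openshift-knative/heartbeats:latest \
  --sink ksvc:event-display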

Delete a container source

$ kn source container delete <container_source_name>

Describe a container source

$ kn source container describe <container_source_name>

List existing container sources

$ kn source container list

List existing container sources in YAML format

$ kn source container list -o yaml

Update a container source

This command updates the image URI for an existing container source:

$ kn source container update <container_source_name> --image <image_uri>
5.2.6.2.3. Creating a container source by using the web console

After Knative Eventing is installed on your cluster, you can create a container source by using the web console. Using the OpenShift Container Platform web console provides a streamlined and intuitive user interface to create an event source.

Prerequisites

  • You have logged in to the OpenShift Container Platform web console.
  • The OpenShift Serverless Operator, Knative Serving, and Knative Eventing are installed on your OpenShift Container Platform cluster.
  • You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.

Procedure

  1. In the Developer perspective, navigate to +Add → Event Source. The Event Sources page is displayed.
  2. Select Container Source and then click Create Event Source. The Create Event Source page is displayed.
  3. Configure the Container Source settings by using the Form view or YAML view:

    Note

    You can switch between the Form view and YAML view. The data is persisted when switching between the views.

    1. In the Image field, enter the URI of the image that you want to run in the container created by the container source.
    2. In the Name field, enter the name of the image.
    3. Optional: In the Arguments field, enter any arguments to be passed to the container.
    4. Optional: In the Environment variables field, add any environment variables to set in the container.
    5. In the Sink section, add a sink where events from the container source are routed to. If you are using the Form view, you can choose from the following options:

      1. Select Resource to use a channel, broker, or service as a sink for the event source.
      2. Select URI to specify where the events from the container source are routed to.
  4. After you have finished configuring the container source, click Create.
5.2.6.2.4. Container source reference

You can use a container as an event source, by creating a ContainerSource object. You can configure multiple parameters when creating a ContainerSource object.

ContainerSource objects support the following fields:

Field | Description | Required or optional

apiVersion

Specifies the API version, for example sources.knative.dev/v1.

Required

kind

Identifies this resource object as a ContainerSource object.

Required

metadata

Specifies metadata that uniquely identifies the ContainerSource object. For example, a name.

Required

spec

Specifies the configuration information for this ContainerSource object.

Required

spec.sink

A reference to an object that resolves to a URI to use as the sink.

Required

spec.template

A template spec for the ContainerSource object.

Required

spec.ceOverrides

Defines overrides to control the output format and modifications to the event sent to the sink.

Optional

Template parameter example

apiVersion: sources.knative.dev/v1
kind: ContainerSource
metadata:
  name: test-heartbeats
spec:
  template:
    spec:
      containers:
        - image: quay.io/openshift-knative/heartbeats:latest
          name: heartbeats
          args:
            - --period=1
          env:
            - name: POD_NAME
              value: "mypod"
            - name: POD_NAMESPACE
              value: "event-test"
  ...

5.2.6.2.4.1. CloudEvent overrides

A ceOverrides definition provides overrides that control the CloudEvent’s output format and modifications sent to the sink. You can configure multiple fields for the ceOverrides definition.

A ceOverrides definition supports the following fields:

Field | Description | Required or optional

extensions

Specifies which attributes are added or overridden on the outbound event. Each extensions key-value pair is set independently on the event as an attribute extension.

Optional

Note

Only valid CloudEvent attribute names are allowed as extensions. You cannot set the spec-defined attributes from the extensions override configuration. For example, you cannot modify the type attribute.

CloudEvent Overrides example

apiVersion: sources.knative.dev/v1
kind: ContainerSource
metadata:
  name: test-heartbeats
spec:
  ...
  ceOverrides:
    extensions:
      extra: this is an extra attribute
      additional: 42

This sets the K_CE_OVERRIDES environment variable on the subject:

Example output

{ "extensions": { "extra": "this is an extra attribute", "additional": "42" } }

5.2.7. Connecting an event source to a sink using the Developer perspective

When you create an event source by using the OpenShift Container Platform web console, you can specify a sink that events are sent to from that source. The sink can be any addressable or callable resource that can receive incoming events from other resources.

5.2.7.1. Connect an event source to a sink using the Developer perspective

Prerequisites

  • The OpenShift Serverless Operator, Knative Serving, and Knative Eventing are installed on your OpenShift Container Platform cluster.
  • You have logged in to the web console and are in the Developer perspective.
  • You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
  • You have created a sink, such as a Knative service, channel or broker.

Procedure

  1. Create an event source of any type by navigating to +Add → Event Source and selecting the event source type that you want to create.
  2. In the Sink section of the Create Event Source form view, select your sink in the Resource list.
  3. Click Create.

Verification

You can verify that the event source was created and is connected to the sink by viewing the Topology page.

  1. In the Developer perspective, navigate to Topology.
  2. View the event source and click the connected sink to see the sink details in the right panel.

5.3. Event sinks

5.3.1. Event sinks

When you create an event source, you can specify a sink where events are sent to from the source. A sink is an addressable or a callable resource that can receive incoming events from other resources. Knative services, channels and brokers are all examples of sinks.

Addressable objects receive and acknowledge an event delivered over HTTP to an address defined in their status.address.url field. As a special case, the core Kubernetes Service object also fulfills the addressable interface.

Callable objects are able to receive an event delivered over HTTP and transform the event, returning 0 or 1 new events in the HTTP response. These returned events may be further processed in the same way that events from an external event source are processed.
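
For example, assuming the event-display Knative service from the previous sections exists in your current project, you can read the address that a source or sink binding resolves to by inspecting the service status:

$ oc get ksvc event-display -o jsonpath='{.status.address.url}'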

5.3.1.1. Knative CLI sink flag

When you create an event source by using the Knative (kn) CLI, you can specify a sink where events are sent to from that resource by using the --sink flag. The sink can be any addressable or callable resource that can receive incoming events from other resources.

The following example creates a sink binding that uses a service, http://event-display.svc.cluster.local, as the sink:

Example command using the sink flag

$ kn source binding create bind-heartbeat \
  --namespace sinkbinding-example \
  --subject "Job:batch/v1:app=heartbeat-cron" \
  --sink http://event-display.svc.cluster.local \ 1
  --ce-override "sink=bound"

1
svc in http://event-display.svc.cluster.local determines that the sink is a Knative service. Other default sink prefixes include channel and broker.
Tip

You can configure which CRs can be used with the --sink flag for Knative (kn) CLI commands. For more information, see Customizing kn.
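
As a sketch, a sink mapping entry in the kn configuration file (typically ~/.config/kn/config.yaml) might look like the following; the prefix and resource values are illustrative and depend on the CR that you want to target:

eventing:
  sink-mappings:
  - prefix: svc        # prefix to use with --sink, for example --sink svc:event-display
    group: core        # API group of the mapped resource
    version: v1
    resource: services # plural resource name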

5.3.2. Kafka sink

Kafka sinks are a type of event sink that are available if a cluster administrator has enabled Kafka on your cluster. You can send events directly from an event source to a Kafka topic by using a Kafka sink.

5.3.2.1. Using a Kafka sink

You can create an event sink called a Kafka sink that sends events to a Kafka topic. Creating Knative resources by using YAML files uses a declarative API, which enables you to describe applications declaratively and in a reproducible manner. By default, a Kafka sink uses the binary content mode, which is more efficient than the structured mode. To create a Kafka sink by using YAML, you must create a YAML file that defines a KafkaSink object, then apply it by using the oc apply command.
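
If you need the structured content mode instead, the KafkaSink spec also accepts a contentMode field. The following is a minimal sketch, assuming that field is available in your KafkaSink API version:

apiVersion: eventing.knative.dev/v1alpha1
kind: KafkaSink
metadata:
  name: <sink-name>
  namespace: <namespace>
spec:
  topic: <topic-name>
  bootstrapServers:
    - <bootstrap-server>
  contentMode: structured # binary is the default when this field is omitted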

Prerequisites

  • The OpenShift Serverless Operator, Knative Eventing, and the KnativeKafka custom resource (CR) are installed on your cluster.
  • You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
  • You have access to a Red Hat AMQ Streams (Kafka) cluster.
  • Install the OpenShift CLI (oc).

Procedure

  1. Create a KafkaSink object definition as a YAML file:

    Kafka sink YAML

    apiVersion: eventing.knative.dev/v1alpha1
    kind: KafkaSink
    metadata:
      name: <sink-name>
      namespace: <namespace>
    spec:
      topic: <topic-name>
      bootstrapServers:
       - <bootstrap-server>

  2. To create the Kafka sink, apply the KafkaSink YAML file:

    $ oc apply -f <filename>
  3. Configure an event source so that the sink is specified in its spec:

    Example of a Kafka sink connected to an API server source

    apiVersion: sources.knative.dev/v1alpha2
    kind: ApiServerSource
    metadata:
      name: <source-name> 1
      namespace: <namespace> 2
    spec:
      serviceAccountName: <service-account-name> 3
      mode: Resource
      resources:
      - apiVersion: v1
        kind: Event
      sink:
        ref:
          apiVersion: eventing.knative.dev/v1alpha1
          kind: KafkaSink
          name: <sink-name> 4

    1
    The name of the event source.
    2
    The namespace of the event source.
    3
    The service account for the event source.
    4
    The Kafka sink name.
5.3.2.2. Configuring security for Kafka sinks

Transport Layer Security (TLS) is used by Apache Kafka clients and servers to encrypt traffic between Knative and Kafka, as well as for authentication. TLS is the only supported method of traffic encryption for Knative Kafka.

Simple Authentication and Security Layer (SASL) is used by Apache Kafka for authentication. If you use SASL authentication on your cluster, users must provide credentials to Knative for communicating with the Kafka cluster; otherwise events cannot be produced or consumed.

Prerequisites

  • The OpenShift Serverless Operator, Knative Eventing, and the KnativeKafka custom resources (CRs) are installed on your OpenShift Container Platform cluster.
  • Kafka sink is enabled in the KnativeKafka CR.
  • You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
  • You have a Kafka cluster CA certificate stored as a .pem file.
  • You have a Kafka cluster client certificate and a key stored as .pem files.
  • You have installed the OpenShift (oc) CLI.
  • You have chosen the SASL mechanism to use, for example, PLAIN, SCRAM-SHA-256, or SCRAM-SHA-512.

Procedure

  1. Create the certificate files as a secret in the same namespace as your KafkaSink object:

    Important

    Certificates and keys must be in PEM format.

    • For authentication using SASL without encryption:

      $ oc create secret -n <namespace> generic <secret_name> \
        --from-literal=protocol=SASL_PLAINTEXT \
        --from-literal=sasl.mechanism=<sasl_mechanism> \
        --from-literal=user=<username> \
        --from-literal=password=<password>
    • For authentication using SASL and encryption using TLS:

      $ oc create secret -n <namespace> generic <secret_name> \
        --from-literal=protocol=SASL_SSL \
        --from-literal=sasl.mechanism=<sasl_mechanism> \
        --from-file=ca.crt=<my_caroot.pem_file_path> \ 1
        --from-literal=user=<username> \
        --from-literal=password=<password>
      1
      The ca.crt can be omitted to use the system’s root CA set if you are using a public cloud managed Kafka service, such as Red Hat OpenShift Streams for Apache Kafka.
    • For authentication and encryption using TLS:

      $ oc create secret -n <namespace> generic <secret_name> \
        --from-literal=protocol=SSL \
        --from-file=ca.crt=<my_caroot.pem_file_path> \ 1
        --from-file=user.crt=<my_cert.pem_file_path> \
        --from-file=user.key=<my_key.pem_file_path>
      1
      The ca.crt can be omitted to use the system’s root CA set if you are using a public cloud managed Kafka service, such as Red Hat OpenShift Streams for Apache Kafka.
  2. Create or modify a KafkaSink object and add a reference to your secret in the auth spec:

    apiVersion: eventing.knative.dev/v1alpha1
    kind: KafkaSink
    metadata:
       name: <sink_name>
       namespace: <namespace>
    spec:
    ...
       auth:
         secret:
           ref:
             name: <secret_name>
    ...
  3. Apply the KafkaSink object:

    $ oc apply -f <filename>

5.4. Brokers

5.4.1. Brokers

Brokers can be used in combination with triggers to deliver events from an event source to an event sink. Events are sent from an event source to a broker as an HTTP POST request. After events have entered the broker, they can be filtered by CloudEvent attributes using triggers, and sent as an HTTP POST request to an event sink.

Broker event delivery overview
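
For illustration, a trigger that filters on the CloudEvent type attribute and forwards matching events to a Knative service might look like the following sketch; the type value and service name reuse examples from earlier sections:

apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: heartbeat-trigger
spec:
  broker: default
  filter:
    attributes:
      type: dev.knative.eventing.samples.heartbeat
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display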

5.4.2. Broker types

Cluster administrators can set the default broker implementation for a cluster. When you create a broker, the default broker implementation is used unless you specify a different broker configuration in the Broker object.

5.4.2.1. Default broker implementation for development purposes

Knative provides a default, channel-based broker implementation. This channel-based broker can be used for development and testing purposes, but does not provide adequate event delivery guarantees for production environments. The default broker is backed by the InMemoryChannel channel implementation by default.

If you want to use Kafka to reduce network hops, use the Kafka broker implementation. Do not configure the channel-based broker to be backed by the KafkaChannel channel implementation.

5.4.2.2. Production-ready Kafka broker implementation

For production-ready Knative Eventing deployments, Red Hat recommends using the Knative Kafka broker implementation. The Kafka broker is an Apache Kafka native implementation of the Knative broker, which sends CloudEvents directly to the Kafka instance.

The Kafka broker has a native integration with Kafka for storing and routing events. This allows better integration with Kafka for the broker and trigger model over other broker types, and reduces network hops. Other benefits of the Kafka broker implementation include:

  • At-least-once delivery guarantees
  • Ordered delivery of events, based on the CloudEvents partitioning extension
  • Control plane high availability
  • A horizontally scalable data plane

The Knative Kafka broker stores incoming CloudEvents as Kafka records, using the binary content mode. This means that all CloudEvent attributes and extensions are mapped as headers on the Kafka record, while the data spec of the CloudEvent corresponds to the value of the Kafka record.
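
As an illustration of this mapping, based on the CloudEvents Kafka protocol binding rather than output captured from a cluster, the heartbeat event shown earlier would be stored roughly as follows, with extensions such as beats mapped to headers in the same way:

Record headers (binary content mode)

ce_specversion: 1.0
ce_type: dev.knative.eventing.samples.heartbeat
ce_source: https://knative.dev/eventing-contrib/cmd/heartbeats/#event-test/mypod
ce_id: 2b72d7bf-c38f-4a98-a433-608fbcdd2596
content-type: application/json

Record value

{"id": 1, "label": ""}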

5.4.3. Creating brokers

Knative provides a default, channel-based broker implementation. This channel-based broker can be used for development and testing purposes, but does not provide adequate event delivery guarantees for production environments.

If a cluster administrator has configured your OpenShift Serverless deployment to use Kafka as the default broker type, creating a broker by using the default settings creates a Kafka-based broker.

If your OpenShift Serverless deployment is not configured to use Kafka broker as the default broker type, the channel-based broker is created when you use the default settings in the following procedures.
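
For reference, a broker that relies entirely on these cluster defaults can be declared as a minimal Broker object and applied with oc apply -f <filename>; the name default is only an example:

apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  name: default
  namespace: <namespace>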

5.4.3.1. Creating a broker by using the Knative CLI

Brokers can be used in combination with triggers to deliver events from an event source to an event sink. Using the Knative (kn) CLI to create brokers provides a more streamlined and intuitive user interface over modifying YAML files directly. You can use the kn broker create command to create a broker.

Prerequisites

  • The OpenShift Serverless Operator and Knative Eventing are installed on your OpenShift Container Platform cluster.
  • You have installed the Knative (kn) CLI.
  • You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.

Procedure

  • Create a broker:

    $ kn broker create <broker_name>

Verification

  1. Use the kn command to list all existing brokers:

    $ kn broker list

    Example output

    NAME      URL                                                                     AGE   CONDITIONS   READY   REASON
    default   http://broker-ingress.knative-eventing.svc.cluster.local/test/default   45s   5 OK / 5     True

  2. Optional: If you are using the OpenShift Container Platform web console, you can navigate to the Topology view in the Developer perspective, and observe that the broker exists:

    View the broker in the web console Topology view
5.4.3.2. Creating a broker by annotating a trigger

Brokers can be used in combination with triggers to deliver events from an event source to an event sink. You can create a broker by adding the eventing.knative.dev/injection: enabled annotation to a Trigger object.

Important

If you create a broker by using the eventing.knative.dev/injection: enabled annotation, you cannot delete this broker without cluster administrator permissions. If you delete the broker without having a cluster administrator remove this annotation first, the broker is created again after deletion.

Prerequisites

  • The OpenShift Serverless Operator and Knative Eventing are installed on your OpenShift Container Platform cluster.
  • Install the OpenShift CLI (oc).
  • You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.

Procedure

  1. Create a Trigger object as a YAML file that has the eventing.knative.dev/injection: enabled annotation:

    apiVersion: eventing.knative.dev/v1
    kind: Trigger
    metadata:
      annotations:
        eventing.knative.dev/injection: enabled
      name: <trigger_name>
    spec:
      broker: default
      subscriber: 1
        ref:
          apiVersion: serving.knative.dev/v1
          kind: Service
          name: <service_name>
    1
    Specify details about the event sink, or subscriber, that the trigger sends events to.
  2. Apply the Trigger YAML file:

    $ oc apply -f <filename>

Verification

You can verify that the broker has been created successfully by using the oc CLI, or by observing it in the Topology view in the web console.

  1. Enter the following oc command to get the broker:

    $ oc -n <namespace> get broker default

    Example output

    NAME      READY     REASON    URL                                                                     AGE
    default   True                http://broker-ingress.knative-eventing.svc.cluster.local/test/default   3m56s

  2. Optional: If you are using the OpenShift Container Platform web console, you can navigate to the Topology view in the Developer perspective, and observe that the broker exists:

    View the broker in the web console Topology view
5.4.3.3. Creating a broker by labeling a namespace

Brokers can be used in combination with triggers to deliver events from an event source to an event sink. You can create the default broker automatically by labeling a namespace that you own or have write permissions for.

Note

Brokers created using this method are not removed if you remove the label. You must manually delete them.

Prerequisites

  • The OpenShift Serverless Operator and Knative Eventing are installed on your OpenShift Container Platform cluster.
  • Install the OpenShift CLI (oc).
  • You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.

Procedure

  • Label a namespace with eventing.knative.dev/injection=enabled:

    $ oc label namespace <namespace> eventing.knative.dev/injection=enabled

Verification

You can verify that the broker has been created successfully by using the oc CLI, or by observing it in the Topology view in the web console.

  1. Use the oc command to get the broker:

    $ oc -n <namespace> get broker <broker_name>

    Example command

    $ oc -n default get broker default

    Example output

    NAME      READY     REASON    URL                                                                     AGE
    default   True                http://broker-ingress.knative-eventing.svc.cluster.local/test/default   3m56s

  2. Optional: If you are using the OpenShift Container Platform web console, you can navigate to the Topology view in the Developer perspective, and observe that the broker exists:

    View the broker in the web console Topology view
5.4.3.4. Deleting a broker that was created by injection

If you create a broker by injection and later want to delete it, you must delete it manually. Brokers created by using a namespace label or trigger annotation are not deleted if you only remove the label or annotation.

Prerequisites

  • Install the OpenShift CLI (oc).

Procedure

  1. Remove the eventing.knative.dev/injection=enabled label from the namespace:

    $ oc label namespace <namespace> eventing.knative.dev/injection-

    Removing the label prevents Knative from recreating the broker after you delete it.

  2. Delete the broker from the selected namespace:

    $ oc -n <namespace> delete broker <broker_name>

Verification

  • Use the oc command to get the broker:

    $ oc -n <namespace> get broker <broker_name>

    Example command

    $ oc -n default get broker default

    Example output

    No resources found.
    Error from server (NotFound): brokers.eventing.knative.dev "default" not found

5.4.3.5. Creating a broker by using the web console

After Knative Eventing is installed on your cluster, you can create a broker by using the web console. Using the OpenShift Container Platform web console provides a streamlined and intuitive user interface to create a broker.

Prerequisites

  • You have logged in to the OpenShift Container Platform web console.
  • The OpenShift Serverless Operator, Knative Serving and Knative Eventing are installed on the cluster.
  • You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.

Procedure

  1. In the Developer perspective, navigate to +Add → Broker. The Broker page is displayed.
  2. Optional: Update the Name of the broker. If you do not update the name, the generated broker is named default.
  3. Click Create.

Verification

You can verify that the broker was created by viewing broker components in the Topology page.

  1. In the Developer perspective, navigate to Topology.
  2. View the mt-broker-ingress, mt-broker-filter, and mt-broker-controller components.

    View the broker components in the Topology view
5.4.3.6. Creating a broker by using the Administrator perspective

Brokers can be used in combination with triggers to deliver events from an event source to an event sink. Events are sent from an event source to a broker as an HTTP POST request. After events have entered the broker, they can be filtered by CloudEvent attributes using triggers, and sent as an HTTP POST request to an event sink.

Broker event delivery overview

Prerequisites

  • The OpenShift Serverless Operator and Knative Eventing are installed on your OpenShift Container Platform cluster.
  • You have logged in to the web console and are in the Administrator perspective.
  • You have cluster administrator permissions for OpenShift Container Platform.

Procedure

  1. In the Administrator perspective of the OpenShift Container Platform web console, navigate to Serverless → Eventing.
  2. In the Create list, select Broker. You will be directed to the Create Broker page.
  3. Optional: Modify the YAML configuration for the broker.
  4. Click Create.

5.4.4. Configuring the default broker backing channel

If you are using a channel-based broker, you can set the default backing channel type for the broker to either InMemoryChannel or KafkaChannel.

Prerequisites

  • You have administrator permissions on OpenShift Container Platform.
  • You have installed the OpenShift Serverless Operator and Knative Eventing on your cluster.
  • You have installed the OpenShift (oc) CLI.
  • If you want to use Kafka channels as the default backing channel type, you must also install the KnativeKafka CR on your cluster.

Procedure

  1. Modify the KnativeEventing custom resource (CR) to add configuration details for the config-br-default-channel config map:

    apiVersion: operator.knative.dev/v1beta1
    kind: KnativeEventing
    metadata:
      name: knative-eventing
      namespace: knative-eventing
    spec:
      config: 1
        config-br-default-channel:
          channel-template-spec: |
            apiVersion: messaging.knative.dev/v1beta1
            kind: KafkaChannel 2
            spec:
              numPartitions: 6 3
              replicationFactor: 3 4
    1
    In spec.config, you can specify the config maps that you want to add modified configurations for.
    2
    The default backing channel type configuration. In this example, the default channel implementation for the cluster is KafkaChannel.
    3
    The number of partitions for the Kafka channel that backs the broker.
    4
    The replication factor for the Kafka channel that backs the broker.
  2. Apply the updated KnativeEventing CR:

    $ oc apply -f <filename>

5.4.5. Configuring the default broker class

You can use the config-br-defaults config map to specify default broker class settings for Knative Eventing. You can specify the default broker class for the entire cluster or for one or more namespaces. Currently the MTChannelBasedBroker and Kafka broker types are supported.

Prerequisites

  • You have administrator permissions on OpenShift Container Platform.
  • You have installed the OpenShift Serverless Operator and Knative Eventing on your cluster.
  • If you want to use Kafka broker as the default broker implementation, you must also install the KnativeKafka CR on your cluster.

Procedure

  • Modify the KnativeEventing custom resource to add configuration details for the config-br-defaults config map:

    apiVersion: operator.knative.dev/v1beta1
    kind: KnativeEventing
    metadata:
      name: knative-eventing
      namespace: knative-eventing
    spec:
      defaultBrokerClass: Kafka 1
      config: 2
        config-br-defaults: 3
          default-br-config: |
            clusterDefault: 4
              brokerClass: Kafka
              apiVersion: v1
              kind: ConfigMap
              name: kafka-broker-config 5
              namespace: knative-eventing 6
            namespaceDefaults: 7
              my-namespace:
                brokerClass: MTChannelBasedBroker
                apiVersion: v1
                kind: ConfigMap
                name: config-br-default-channel 8
                namespace: knative-eventing 9
    ...
    1
    The default broker class for Knative Eventing.
    2
    In spec.config, you can specify the config maps that you want to add modified configurations for.
    3
    The config-br-defaults config map specifies the default settings for any broker that does not specify spec.config settings or a broker class.
    4
    The cluster-wide default broker class configuration. In this example, the default broker class implementation for the cluster is Kafka.
    5
    The kafka-broker-config config map specifies default settings for the Kafka broker. See "Configuring Kafka broker settings" in the "Additional resources" section.
    6
    The namespace where the kafka-broker-config config map exists.
    7
    The namespace-scoped default broker class configuration. In this example, the default broker class implementation for the my-namespace namespace is MTChannelBasedBroker. You can specify default broker class implementations for multiple namespaces.
    8
    The config-br-default-channel config map specifies the default backing channel for the broker. See "Configuring the default broker backing channel" in the "Additional resources" section.
    9
    The namespace where the config-br-default-channel config map exists.
    Important

    Configuring a namespace-specific default overrides any cluster-wide settings.

5.4.6. Kafka broker

For production-ready Knative Eventing deployments, Red Hat recommends using the Knative Kafka broker implementation. The Kafka broker is an Apache Kafka native implementation of the Knative broker, which sends CloudEvents directly to the Kafka instance.

The Kafka broker has a native integration with Kafka for storing and routing events. This allows better integration with Kafka for the broker and trigger model over other broker types, and reduces network hops. Other benefits of the Kafka broker implementation include:

  • At-least-once delivery guarantees
  • Ordered delivery of events, based on the CloudEvents partitioning extension
  • Control plane high availability
  • A horizontally scalable data plane

The Knative Kafka broker stores incoming CloudEvents as Kafka records, using the binary content mode. This means that all CloudEvent attributes and extensions are mapped as headers on the Kafka record, while the data spec of the CloudEvent corresponds to the value of the Kafka record.

5.4.6.1. Creating a Kafka broker when it is not configured as the default broker type

If your OpenShift Serverless deployment is not configured to use Kafka broker as the default broker type, you can use one of the following procedures to create a Kafka-based broker.

5.4.6.1.1. Creating a Kafka broker by using YAML

Creating Knative resources by using YAML files uses a declarative API, which enables you to describe applications declaratively and in a reproducible manner. To create a Kafka broker by using YAML, you must create a YAML file that defines a Broker object, then apply it by using the oc apply command.

Prerequisites

  • The OpenShift Serverless Operator, Knative Eventing, and the KnativeKafka custom resource are installed on your OpenShift Container Platform cluster.
  • You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Create a Kafka-based broker as a YAML file:

    apiVersion: eventing.knative.dev/v1
    kind: Broker
    metadata:
      annotations:
        eventing.knative.dev/broker.class: Kafka 1
      name: example-kafka-broker
    spec:
      config:
        apiVersion: v1
        kind: ConfigMap
        name: kafka-broker-config 2
        namespace: knative-eventing
    1
    The broker class. If not specified, brokers use the default class as configured by cluster administrators. To use the Kafka broker, this value must be Kafka.
    2
    The default config map for Knative Kafka brokers. This config map is created when the Kafka broker functionality is enabled on the cluster by a cluster administrator.
  2. Apply the Kafka-based broker YAML file:

    $ oc apply -f <filename>
5.4.6.1.2. Creating a Kafka broker that uses an externally managed Kafka topic

If you want to use a Kafka broker without allowing it to create its own internal topic, you can use an externally managed Kafka topic instead. To do this, you must create a Kafka Broker object that uses the kafka.eventing.knative.dev/external.topic annotation.

Prerequisites

  • The OpenShift Serverless Operator, Knative Eventing, and the KnativeKafka custom resource are installed on your OpenShift Container Platform cluster.
  • You have access to a Kafka instance such as Red Hat AMQ Streams, and have created a Kafka topic.
  • You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Create a Kafka-based broker as a YAML file:

    apiVersion: eventing.knative.dev/v1
    kind: Broker
    metadata:
      annotations:
        eventing.knative.dev/broker.class: Kafka 1
        kafka.eventing.knative.dev/external.topic: <topic_name> 2
    ...
    1
    The broker class. If not specified, brokers use the default class as configured by cluster administrators. To use the Kafka broker, this value must be Kafka.
    2
    The name of the Kafka topic that you want to use.
  2. Apply the Kafka-based broker YAML file:

    $ oc apply -f <filename>
5.4.6.1.3. Knative Broker implementation for Apache Kafka with isolated data plane
Important

The Knative Broker implementation for Apache Kafka with isolated data plane is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

The Knative Broker implementation for Apache Kafka has two planes:

Control plane
Consists of controllers that talk to the Kubernetes API, watch for custom objects, and manage the data plane.
Data plane
The collection of components that listen for incoming events, talk to Apache Kafka, and send events to the event sinks. The Knative Broker implementation for Apache Kafka data plane is where events flow. The implementation consists of kafka-broker-receiver and kafka-broker-dispatcher deployments.

When you configure a Broker class of Kafka, the Knative Broker implementation for Apache Kafka uses a shared data plane. This means that the kafka-broker-receiver and kafka-broker-dispatcher deployments in the knative-eventing namespace are used for all Apache Kafka Brokers in the cluster.

However, when you configure a Broker class of KafkaNamespaced, the Apache Kafka broker controller creates a new data plane for each namespace where a broker exists. This data plane is used by all KafkaNamespaced brokers in that namespace. This provides isolation between the data planes, so that the kafka-broker-receiver and kafka-broker-dispatcher deployments in the user namespace are only used for the broker in that namespace.

Important

As a consequence of having separate data planes, this security feature creates more deployments and uses more resources. Unless you have such isolation requirements, use a regular Broker with a class of Kafka.

5.4.6.1.4. Creating a Knative broker for Apache Kafka that uses an isolated data plane
Important

The Knative Broker implementation for Apache Kafka with isolated data plane is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

To create a KafkaNamespaced broker, you must set the eventing.knative.dev/broker.class annotation to KafkaNamespaced.

Prerequisites

  • The OpenShift Serverless Operator, Knative Eventing, and the KnativeKafka custom resource are installed on your OpenShift Container Platform cluster.
  • You have access to an Apache Kafka instance, such as Red Hat AMQ Streams, and have created a Kafka topic.
  • You have created a project, or have access to a project, with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Create an Apache Kafka-based broker by using a YAML file:

    apiVersion: eventing.knative.dev/v1
    kind: Broker
    metadata:
      annotations:
        eventing.knative.dev/broker.class: KafkaNamespaced 1
      name: default
      namespace: my-namespace 2
    spec:
      config:
        apiVersion: v1
        kind: ConfigMap
        name: my-config 3
    ...
    1
    To use the Apache Kafka broker with isolated data planes, the broker class value must be KafkaNamespaced.
    2 3
    The referenced ConfigMap object my-config must be in the same namespace as the Broker object, in this case my-namespace.
  2. Apply the Apache Kafka-based broker YAML file:

    $ oc apply -f <filename>
Important

The ConfigMap object in spec.config must be in the same namespace as the Broker object:

apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
  namespace: my-namespace
data:
  ...

After the creation of the first Broker object with the KafkaNamespaced class, the kafka-broker-receiver and kafka-broker-dispatcher deployments are created in the namespace. Subsequently, all brokers with the KafkaNamespaced class in the same namespace will use the same data plane. If no brokers with the KafkaNamespaced class exist in the namespace, the data plane in the namespace is deleted.
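
To confirm that the namespace-scoped data plane exists, you can list the deployments in the broker namespace, following the my-namespace example above, and check that kafka-broker-receiver and kafka-broker-dispatcher are present:

$ oc get deployments -n my-namespace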

5.4.6.2. Configuring Kafka broker settings

You can configure the replication factor, bootstrap servers, and the number of topic partitions for a Kafka broker by creating a config map and referencing this config map in the Kafka Broker object.

Prerequisites

  • You have cluster or dedicated administrator permissions on OpenShift Container Platform.
  • The OpenShift Serverless Operator, Knative Eventing, and the KnativeKafka custom resource (CR) are installed on your OpenShift Container Platform cluster.
  • You have created a project or have access to a project that has the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Modify the kafka-broker-config config map, or create your own config map that contains the following configuration:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: <config_map_name> 1
      namespace: <namespace> 2
    data:
      default.topic.partitions: <integer> 3
      default.topic.replication.factor: <integer> 4
      bootstrap.servers: <list_of_servers> 5
    1
    The config map name.
    2
    The namespace where the config map exists.
    3
    The number of topic partitions for the Kafka broker. This controls how quickly events can be sent to the broker. A higher number of partitions requires greater compute resources.
    4
    The replication factor of topic messages. This protects against data loss. A higher replication factor requires greater compute resources and more storage.
    5
    A comma-separated list of bootstrap servers. The servers can be inside or outside of the OpenShift Container Platform cluster, and point to the Kafka cluster that the broker receives events from and sends events to.
    Important

    The default.topic.replication.factor value must be less than or equal to the number of Kafka broker instances in your cluster. For example, if you only have one Kafka broker, the default.topic.replication.factor value should not be more than "1".

    Example Kafka broker config map

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: kafka-broker-config
      namespace: knative-eventing
    data:
      default.topic.partitions: "10"
      default.topic.replication.factor: "3"
      bootstrap.servers: "my-cluster-kafka-bootstrap.kafka:9092"

  2. Apply the config map:

    $ oc apply -f <config_map_filename>
  3. Specify the config map for the Kafka Broker object:

    Example Broker object

    apiVersion: eventing.knative.dev/v1
    kind: Broker
    metadata:
      name: <broker_name> 1
      namespace: <namespace> 2
      annotations:
        eventing.knative.dev/broker.class: Kafka 3
    spec:
      config:
        apiVersion: v1
        kind: ConfigMap
        name: <config_map_name> 4
        namespace: <namespace> 5
    ...

    1
    The broker name.
    2
    The namespace where the broker exists.
    3
    The broker class annotation. In this example, the broker is a Kafka broker that uses the class value Kafka.
    4
    The config map name.
    5
    The namespace where the config map exists.
  4. Apply the broker:

    $ oc apply -f <broker_filename>
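
Optionally, you can confirm that the broker is ready after applying the configuration. For example, using the broker name and namespace from the previous step:

$ oc get broker <broker_name> -n <namespace>

The READY column should report True once the broker has been reconciled.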
5.4.6.3. Security configuration for Knative Kafka brokers

Kafka clusters are generally secured by using the TLS or SASL authentication methods. You can configure a Kafka broker or channel to work against a protected Red Hat AMQ Streams cluster by using TLS or SASL.

Note

Red Hat recommends that you enable both SASL and TLS together.

5.4.6.3.1. Configuring TLS authentication for Kafka brokers

Transport Layer Security (TLS) is used by Apache Kafka clients and servers to encrypt traffic between Knative and Kafka, as well as for authentication. TLS is the only supported method of traffic encryption for Knative Kafka.

Prerequisites

  • You have cluster or dedicated administrator permissions on OpenShift Container Platform.
  • The OpenShift Serverless Operator, Knative Eventing, and the KnativeKafka CR are installed on your OpenShift Container Platform cluster.
  • You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
  • You have a Kafka cluster CA certificate stored as a .pem file.
  • You have a Kafka cluster client certificate and a key stored as .pem files.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Create the certificate files as a secret in the knative-eventing namespace:

    $ oc create secret -n knative-eventing generic <secret_name> \
      --from-literal=protocol=SSL \
      --from-file=ca.crt=caroot.pem \
      --from-file=user.crt=certificate.pem \
      --from-file=user.key=key.pem
    Important

    Use the key names ca.crt, user.crt, and user.key. Do not change them.

  2. Edit the KnativeKafka CR and add a reference to your secret in the broker spec:

    apiVersion: operator.serverless.openshift.io/v1alpha1
    kind: KnativeKafka
    metadata:
      namespace: knative-eventing
      name: knative-kafka
    spec:
      broker:
        enabled: true
        defaultConfig:
          authSecretName: <secret_name>
    ...
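
If you prefer to edit the custom resource in place rather than applying a file, one option is the oc edit command, assuming the default KnativeKafka name knative-kafka in the knative-eventing namespace:

$ oc edit knativekafka knative-kafka -n knative-eventing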
5.4.6.3.2. Configuring SASL authentication for Kafka brokers

Simple Authentication and Security Layer (SASL) is used by Apache Kafka for authentication. If you use SASL authentication on your cluster, users must provide credentials to Knative for communicating with the Kafka cluster; otherwise events cannot be produced or consumed.

Prerequisites

  • You have cluster or dedicated administrator permissions on OpenShift Container Platform.
  • The OpenShift Serverless Operator, Knative Eventing, and the KnativeKafka CR are installed on your OpenShift Container Platform cluster.
  • You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
  • You have a username and password for a Kafka cluster.
  • You have chosen the SASL mechanism to use, for example, PLAIN, SCRAM-SHA-256, or SCRAM-SHA-512.
  • If TLS is enabled, you also need the ca.crt certificate file for the Kafka cluster.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Create the certificate files as a secret in the knative-eventing namespace:

    $ oc create secret -n knative-eventing generic <secret_name> \
      --from-literal=protocol=SASL_SSL \
      --from-literal=sasl.mechanism=<sasl_mechanism> \
      --from-file=ca.crt=caroot.pem \
      --from-literal=password="SecretPassword" \
      --from-literal=user="my-sasl-user"
    • Use the key names ca.crt, password, and sasl.mechanism. Do not change them.
    • If you want to use SASL with public CA certificates, you must use the tls.enabled=true flag, rather than the ca.crt argument, when creating the secret. For example:

      $ oc create secret -n <namespace> generic <kafka_auth_secret> \
        --from-literal=tls.enabled=true \
        --from-literal=password="SecretPassword" \
        --from-literal=saslType="SCRAM-SHA-512" \
        --from-literal=user="my-sasl-user"
  2. Edit the KnativeKafka CR and add a reference to your secret in the broker spec:

    apiVersion: operator.serverless.openshift.io/v1alpha1
    kind: KnativeKafka
    metadata:
      namespace: knative-eventing
      name: knative-kafka
    spec:
      broker:
        enabled: true
        defaultConfig:
          authSecretName: <secret_name>
    ...
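
As an optional check before the data plane consumes the secret, you can confirm that it contains the expected keys:

$ oc describe secret <secret_name> -n knative-eventing

The output lists the key names, such as protocol, sasl.mechanism, ca.crt, user, and password, together with the size of each value.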
5.4.6.4. Additional resources

5.4.7. Managing brokers

The Knative (kn) CLI provides commands that can be used to describe and list existing brokers.

5.4.7.1. Listing existing brokers by using the Knative CLI

Using the Knative (kn) CLI to list brokers provides a streamlined and intuitive user interface. You can use the kn broker list command to list existing brokers in your cluster by using the Knative CLI.

Prerequisites

  • The OpenShift Serverless Operator and Knative Eventing are installed on your OpenShift Container Platform cluster.
  • You have installed the Knative (kn) CLI.

Procedure

  • List all existing brokers:

    $ kn broker list

    Example output

    NAME      URL                                                                     AGE   CONDITIONS   READY   REASON
    default   http://broker-ingress.knative-eventing.svc.cluster.local/test/default   45s   5 OK / 5     True

5.4.7.2. Describing an existing broker by using the Knative CLI

Using the Knative (kn) CLI to describe brokers provides a streamlined and intuitive user interface. You can use the kn broker describe command to print information about existing brokers in your cluster by using the Knative CLI.

Prerequisites

  • The OpenShift Serverless Operator and Knative Eventing are installed on your OpenShift Container Platform cluster.
  • You have installed the Knative (kn) CLI.

Procedure

  • Describe an existing broker:

    $ kn broker describe <broker_name>

    Example command using default broker

    $ kn broker describe default

    Example output

    Name:         default
    Namespace:    default
    Annotations:  eventing.knative.dev/broker.class=MTChannelBasedBroker, eventing.knative.dev/creato ...
    Age:          22s
    
    Address:
      URL:    http://broker-ingress.knative-eventing.svc.cluster.local/default/default
    
    Conditions:
      OK TYPE                   AGE REASON
      ++ Ready                  22s
      ++ Addressable            22s
      ++ FilterReady            22s
      ++ IngressReady           22s
      ++ TriggerChannelReady    22s

5.4.8. Event delivery

You can configure event delivery parameters that are applied in cases where an event fails to be delivered to an event sink. Configuring event delivery parameters, including a dead letter sink, ensures that any events that fail to be delivered to an event sink are retried. Otherwise, undelivered events are dropped.

5.4.8.1. Event delivery behavior patterns for channels and brokers

Different channel and broker types have their own behavior patterns that are followed for event delivery.

5.4.8.1.1. Knative Kafka channels and brokers

If an event is successfully delivered to a Kafka channel or broker receiver, the receiver responds with a 202 status code, which means that the event has been safely stored inside a Kafka topic and is not lost.

If the receiver responds with any other status code, the event is not safely stored, and steps must be taken by the user to resolve the issue.
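
You can observe this behavior by sending a test event to the broker receiver from a pod inside the cluster and checking the response code. In the following sketch, <broker_url> is the URL reported by the kn broker list command for your broker, and the CloudEvent attribute values are placeholders:

$ curl -v "<broker_url>" \
  -X POST \
  -H "Ce-Id: test-event-1" \
  -H "Ce-Specversion: 1.0" \
  -H "Ce-Type: dev.example.test" \
  -H "Ce-Source: manual-test" \
  -H "Content-Type: application/json" \
  -d '{"message": "Hello"}'

A 202 response confirms that the event was stored in the backing Kafka topic.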

5.4.8.2. Configurable event delivery parameters

The following parameters can be configured for event delivery:

Dead letter sink
You can configure the deadLetterSink delivery parameter so that if an event fails to be delivered, it is stored in the specified event sink. Undelivered events that are not stored in a dead letter sink are dropped. The dead letter sink can be any addressable object that conforms to the Knative Eventing sink contract, such as a Knative service, a Kubernetes service, or a URI.
Retries
You can set a minimum number of times that the delivery must be retried before the event is sent to the dead letter sink by configuring the retry delivery parameter with an integer value.
Back off delay
You can set the backoffDelay delivery parameter to specify the time delay before an event delivery retry is attempted after a failure. The duration of the backoffDelay parameter is specified using the ISO 8601 format. For example, PT1S specifies a 1 second delay.
Back off policy
The backoffPolicy delivery parameter can be used to specify the retry back off policy. The policy can be specified as either linear or exponential. When using the linear back off policy, the back off delay is equal to backoffDelay * <numberOfRetries>. When using the exponential back off policy, the back off delay is equal to backoffDelay * 2^<numberOfRetries>. For example, with a backoffDelay of PT2S and the exponential policy, the delay when <numberOfRetries> is 3 is 2 * 2^3 = 16 seconds.
5.4.8.3. Examples of configuring event delivery parameters

You can configure event delivery parameters for Broker, Trigger, Channel, and Subscription objects. If you configure event delivery parameters for a broker or channel, these parameters are propagated to triggers or subscriptions created for those objects. You can also set event delivery parameters for triggers or subscriptions to override the settings for the broker or channel.

Example Broker object

apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
...
spec:
  delivery:
    deadLetterSink:
      ref:
        apiVersion: eventing.knative.dev/v1alpha1
        kind: KafkaSink
        name: <sink_name>
    backoffDelay: <duration>
    backoffPolicy: <policy_type>
    retry: <integer>
...

Example Trigger object

apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
...
spec:
  broker: <broker_name>
  delivery:
    deadLetterSink:
      ref:
        apiVersion: serving.knative.dev/v1
        kind: Service
        name: <sink_name>
    backoffDelay: <duration>
    backoffPolicy: <policy_type>
    retry: <integer>
...

Example Channel object

apiVersion: messaging.knative.dev/v1
kind: Channel
metadata:
...
spec:
  delivery:
    deadLetterSink:
      ref:
        apiVersion: serving.knative.dev/v1
        kind: Service
        name: <sink_name>
    backoffDelay: <duration>
    backoffPolicy: <policy_type>
    retry: <integer>
...

Example Subscription object

apiVersion: messaging.knative.dev/v1
kind: Subscription
metadata:
...
spec:
  channel:
    apiVersion: messaging.knative.dev/v1
    kind: Channel
    name: <channel_name>
  delivery:
    deadLetterSink:
      ref:
        apiVersion: serving.knative.dev/v1
        kind: Service
        name: <sink_name>
    backoffDelay: <duration>
    backoffPolicy: <policy_type>
    retry: <integer>
...
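
After you add or modify delivery parameters on any of these objects, apply the updated manifest for the changes to take effect:

$ oc apply -f <filename>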

5.4.8.4. Configuring event delivery ordering for triggers

If you are using a Kafka broker, you can configure the delivery order of events from triggers to event sinks.

Prerequisites

  • The OpenShift Serverless Operator, Knative Eventing, and Knative Kafka are installed on your OpenShift Container Platform cluster.
  • Kafka broker is enabled for use on your cluster, and you have created a Kafka broker.
  • You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Create or modify a Trigger object and set the kafka.eventing.knative.dev/delivery.order annotation:

    apiVersion: eventing.knative.dev/v1
    kind: Trigger
    metadata:
      name: <trigger_name>
      annotations:
         kafka.eventing.knative.dev/delivery.order: ordered
    ...

    The supported consumer delivery guarantees are:

    unordered
    An unordered consumer is a non-blocking consumer that delivers messages unordered, while preserving proper offset management.
    ordered

    An ordered consumer is a per-partition blocking consumer that waits for a successful response from the CloudEvent subscriber before it delivers the next message of the partition.

    The default ordering guarantee is unordered.

  2. Apply the Trigger object:

    $ oc apply -f <filename>
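
Optionally, you can confirm that the annotation was set by inspecting the trigger, for example:

$ oc get trigger <trigger_name> -n <namespace> -o yaml

The kafka.eventing.knative.dev/delivery.order annotation appears under metadata.annotations with the value that you configured.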

5.5. Triggers

5.5.1. Triggers overview

Brokers can be used in combination with triggers to deliver events from an event source to an event sink. Events are sent from an event source to a broker as an HTTP POST request. After events have entered the broker, they can be filtered by CloudEvent attributes using triggers, and sent as an HTTP POST request to an event sink.

Broker event delivery overview

If you are using a Kafka broker, you can configure the delivery order of events from triggers to event sinks. See Configuring event delivery ordering for triggers.
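
As a minimal sketch of how a trigger ties these pieces together, the following Trigger object subscribes a Knative service to a broker and filters events on the CloudEvent type attribute. The broker name default, the service name event-display, and the event type com.example.order.created are placeholder values for illustration:

apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: order-created-trigger
spec:
  broker: default
  filter:
    attributes:
      type: com.example.order.created
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display

Events whose type attribute does not match the filter are not delivered to the subscriber.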

5.5.1.1. Configuring event delivery ordering for triggers

If you are using a Kafka broker, you can configure the delivery order of events from triggers to event sinks.

Prerequisites

  • The OpenShift Serverless Operator, Knative Eventing, and Knative Kafka are installed on your OpenShift Container Platform cluster.
  • Kafka broker is enabled for use on your cluster, and you have created a Kafka broker.
  • You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Create or modify a Trigger object and set the kafka.eventing.knative.dev/delivery.order annotation:

    apiVersion: eventing.knative.dev/v1
    kind: Trigger
    metadata:
      name: <trigger_name>
      annotations:
         kafka.eventing.knative.dev/delivery.order: ordered
    ...

    The supported consumer delivery guarantees are:

    unordered
    An unordered consumer is a non-blocking consumer that delivers messages unordered, while preserving proper offset management.
    ordered

    An ordered consumer is a per-partition blocking consumer that waits for a successful response from the CloudEvent subscriber before it delivers the next message of the partition.

    The default ordering guarantee is unordered.

  2. Apply the Trigger object:

    $ oc apply -f <filename>
5.5.1.2. Next steps

5.5.2. Creating triggers

Brokers can be used in combination with triggers to deliver events from an event source to an event sink. Events are sent from an event source to a broker as an HTTP POST request. After events have entered the broker, they can be filtered by CloudEvent attributes using triggers, and sent as an HTTP POST request to an event sink.

Broker event delivery overview
5.5.2.1. Creating a trigger by using the Administrator perspective

Using the OpenShift Container Platform web console provides a streamlined and intuitive user interface to create a trigger. After Knative Eventing is installed on your cluster and you have created a broker, you can create a trigger by using the web console.

Prerequisites

  • The OpenShift Serverless Operator and Knative Eventing are installed on your OpenShift Container Platform cluster.
  • You have logged in to the web console and are in the Administrator perspective.
  • You have cluster administrator permissions for OpenShift Container Platform.
  • You have created a Knative broker.
  • You have created a Knative service to use as a subscriber.

Procedure

  1. In the Administrator perspective of the OpenShift Container Platform web console, navigate to Serverless → Eventing.
  2. In the Broker tab, select the Options menu (kebab icon) for the broker that you want to add a trigger to.
  3. Click Add Trigger in the list.
  4. In the Add Trigger dialog box, select a Subscriber for the trigger. The subscriber is the Knative service that will receive events from the broker.
  5. Click Add.
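
If you have the Knative (kn) CLI installed, you can optionally verify the new trigger from the command line:

$ kn trigger list

The trigger is listed together with its broker, sink, and readiness status.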
5.5.2.2. Creating a trigger by using the Developer perspective

Using the OpenShift Container Platform web console provides a streamlined and intuitive user interface to create a trigger. After Knative Eventing is installed on your cluster and you have created a broker, you can create a trigger by using the web console.

Prerequisites

  • The OpenShift Serverless Operator, Knative Serving, and Knative Eventing are installed on your OpenShift Container Platform cluster.
  • You have logged in to the web console.
  • You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.