Chapter 11. Event sources
11.1. Getting started with event sources
An event source is an object that links an event producer with an event sink, or consumer. A sink can be a Knative Service, Channel, or Broker that receives events from an event source.
Currently, OpenShift Serverless supports the following event source types:
ApiServerSource
- Connects a sink to the Kubernetes API server.
PingSource
- Periodically sends ping events with a constant payload. It can be used as a timer.
SinkBinding is also supported, which allows you to connect core Kubernetes resources such as Deployment, Job, or StatefulSet with a sink.
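No separate SinkBinding procedure is included in this chapter. As a rough sketch only, a minimal SinkBinding object might look like the following, assuming a Deployment named heartbeat-cron and an event-display Knative service as the sink (both names are hypothetical, and the exact apiVersion can vary with your installed Knative Eventing version):

apiVersion: sources.knative.dev/v1alpha2
kind: SinkBinding
metadata:
  name: bind-heartbeat
  namespace: default
spec:
  subject:
    # The Kubernetes resource whose Pods are bound to the sink
    apiVersion: apps/v1
    kind: Deployment
    namespace: default
    name: heartbeat-cron
  sink:
    # The addressable resource that receives the events
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display

When applied, the binding injects the resolved sink URL into the subject's containers as the K_SINK environment variable, so the bound workload knows where to send its events.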
You can create and manage Knative event sources using the Developer perspective in the OpenShift Container Platform web console, the kn CLI, or by applying YAML files.
11.1.1. Prerequisites
- You must have a current installation of OpenShift Serverless, including Knative Serving and Eventing, in your OpenShift Container Platform cluster. This can be installed by a cluster administrator.
11.1.2. Creating event sources
- Create an ApiServerSource.
- Create a PingSource.
11.1.3. Additional resources
- For more information about eventing workflows using OpenShift Serverless, see Knative Eventing architecture.
11.2. Using the kn CLI to list event sources and event source types
You can use the kn CLI to list and manage available event sources or event source types for use with Knative Eventing.
Currently, kn supports management of the following event source types:
ApiServerSource
- Connects a sink to the Kubernetes API server.
PingSource
- Periodically sends ping events with a constant payload. It can be used as a timer.
11.2.1. Listing available event source types using kn
Procedure
List the available event source types in the terminal:
$ kn source list-types
Example output
TYPE              NAME                                    DESCRIPTION
ApiServerSource   apiserversources.sources.knative.dev   Watch and send Kubernetes API events to a sink
PingSource        pingsources.sources.knative.dev        Periodically send ping events to a sink
SinkBinding       sinkbindings.sources.knative.dev       Binding for connecting a PodSpecable to a sink
You can also list available event source types in YAML format:
$ kn source list-types -o yaml
11.2.2. Listing available event sources using kn
List available event sources by entering the following command:
$ kn source list
Example output
NAME   TYPE              RESOURCE                                SINK         READY
a1     ApiServerSource   apiserversources.sources.knative.dev   svc:eshow2   True
b1     SinkBinding       sinkbindings.sources.knative.dev        svc:eshow3   False
p1     PingSource        pingsources.sources.knative.dev         svc:eshow1   True
11.2.2.1. Listing event sources of a specific type only
You can list event sources of a specific type by using the --type flag.
List available event sources of type PingSource by entering the following command:

$ kn source list --type PingSource
Example output
NAME   TYPE         RESOURCE                          SINK         READY
p1     PingSource   pingsources.sources.knative.dev   svc:eshow1   True
11.2.3. Next steps
- See the documentation on Using ApiServerSource.
- See the documentation on Using PingSource.
11.3. Using ApiServerSource
ApiServerSource is an event source that can be used to connect an event sink, such as a Knative service, to the Kubernetes API server. ApiServerSource watches for Kubernetes events and forwards them to the Knative Eventing broker.
Both of the following procedures require you to create YAML files.
If you change the names of the YAML files from those used in the examples, you must ensure that you also update the corresponding CLI commands.
11.3.1. Using the ApiServerSource with the Knative CLI (kn)
This section describes the steps required to create an ApiServerSource using kn commands.
Prerequisites
- You must have OpenShift Serverless, the Knative Serving and Eventing components, and the kn CLI installed.
Procedure
Create a service account, role, and role binding for the ApiServerSource. You can do this by creating a file named authentication.yaml and copying the following sample code into it:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: events-sa
  namespace: default 1
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: event-watcher
  namespace: default 2
rules:
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - get
      - list
      - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: k8s-ra-event-watcher
  namespace: default 3
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: event-watcher
subjects:
  - kind: ServiceAccount
    name: events-sa
    namespace: default 4

- 1 2 3 4
- The namespace of the service account, role, and role binding. Change default to the namespace that you want to use for the event source.
Note
If you want to re-use an existing service account with the appropriate permissions, you must modify the authentication.yaml file for that service account.

Create the service account, role, and role binding:
$ oc apply -f authentication.yaml
Create an ApiServerSource that uses a broker as an event sink:
$ kn source apiserver create <event_source_name> --sink broker:<broker_name> --resource "event:v1" --service-account <service_account_name> --mode Resource
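For example, assuming that you use the default broker and the events-sa service account from authentication.yaml, and that you choose testevents as the event source name, the command might look as follows:

$ kn source apiserver create testevents \
    --sink broker:default \
    --resource "event:v1" \
    --service-account events-sa \
    --mode Resource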
To check that the ApiServerSource is set up correctly, create a Knative service that dumps incoming messages to its log:
$ kn service create <service_name> --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest
Create a trigger to filter events from the default broker to the service:

$ kn trigger create <trigger_name> --sink svc:<service_name>
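For example, assuming that you named the event display service event-display in the previous step and choose apiserver-trigger as the trigger name (both names are hypothetical), the command might be:

$ kn trigger create apiserver-trigger --sink svc:event-display

Because no --broker flag is specified, the trigger is created on the default broker.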
Create events by launching a Pod in the default namespace:
$ oc create deployment hello-node --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest
Check that the controller is mapped correctly by inspecting the output generated by the following command:
$ kn source apiserver describe <source_name>
Example output
Name:                mysource
Namespace:           default
Annotations:         sources.knative.dev/creator=developer, sources.knative.dev/lastModifier=developer
Age:                 3m
ServiceAccountName:  events-sa
Mode:                Resource
Sink:
  Name:       default
  Namespace:  default
  Kind:       Broker (eventing.knative.dev/v1alpha1)
Resources:
  Kind:        event (v1)
  Controller:  false
Conditions:
  OK TYPE                   AGE REASON
  ++ Ready                   3m
  ++ Deployed                3m
  ++ SinkProvided            3m
  ++ SufficientPermissions   3m
  ++ EventTypesProvided      3m
Verification steps
You can verify that the Kubernetes events were sent to Knative by looking at the message dumper function logs.
Get the Pods:
$ oc get pods
View the message dumper function logs for the Pods:
$ oc logs $(oc get pod -o name | grep event-display) -c user-container
Example output
☁️  cloudevents.Event
Validation: valid
Context Attributes,
  specversion: 1.0
  type: dev.knative.apiserver.resource.update
  datacontenttype: application/json
  ...
Data,
  {
    "apiVersion": "v1",
    "involvedObject": {
      "apiVersion": "v1",
      "fieldPath": "spec.containers{hello-node}",
      "kind": "Pod",
      "name": "hello-node",
      "namespace": "default",
      .....
    },
    "kind": "Event",
    "message": "Started container",
    "metadata": {
      "name": "hello-node.159d7608e3a3572c",
      "namespace": "default",
      ....
    },
    "reason": "Started",
    ...
  }
11.3.2. Deleting the ApiServerSource using the Knative CLI (kn)
This section describes the steps used to delete the ApiServerSource, trigger, service, service account, role, and role binding using kn and oc commands.
Prerequisites
- You must have the kn CLI installed.
Procedure
Delete the trigger:
$ kn trigger delete <trigger_name>
Delete the service:
$ kn service delete <service_name>
Delete the event source:
$ kn source apiserver delete <source_name>
Delete the service account, role, and role binding:
$ oc delete -f authentication.yaml
11.3.3. Using the ApiServerSource with the YAML method
This guide describes the steps required to create an ApiServerSource using YAML files.
Prerequisites
- You will need to have a Knative Serving and Eventing installation.
- You will need to have created the default broker in the same namespace as the one defined in the ApiServerSource YAML file.
Procedure
To create a service account, role, and role binding for the ApiServerSource, create a file named authentication.yaml and copy the following sample code into it:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: events-sa
  namespace: default 1
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: event-watcher
  namespace: default 2
rules:
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - get
      - list
      - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: k8s-ra-event-watcher
  namespace: default 3
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: event-watcher
subjects:
  - kind: ServiceAccount
    name: events-sa
    namespace: default 4

- 1 2 3 4
- The namespace of the service account, role, and role binding. Change default to the namespace that you want to use for the event source.
Note
If you want to re-use an existing service account with the appropriate permissions, you must modify the authentication.yaml file for that service account.

After you have created the authentication.yaml file, apply it:

$ oc apply -f authentication.yaml
To create an ApiServerSource event source, create a file named k8s-events.yaml and copy the following sample code into it:

apiVersion: sources.knative.dev/v1alpha1
kind: ApiServerSource
metadata:
  name: testevents
spec:
  serviceAccountName: events-sa
  mode: Resource
  resources:
    - apiVersion: v1
      kind: Event
  sink:
    ref:
      apiVersion: eventing.knative.dev/v1beta1
      kind: Broker
      name: default
After you have created the k8s-events.yaml file, apply it:

$ oc apply -f k8s-events.yaml
To check that the ApiServerSource is set up correctly, create a Knative service that dumps incoming messages to its log.
Copy the following sample YAML into a file named service.yaml:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: event-display
  namespace: default
spec:
  template:
    spec:
      containers:
        - image: quay.io/openshift-knative/knative-eventing-sources-event-display:v0.13.2
After you have created the service.yaml file, apply it:

$ oc apply -f service.yaml
To create a trigger from the default broker that filters events to the service created in the previous step, create a file named trigger.yaml and copy the following sample code into it:

apiVersion: eventing.knative.dev/v1alpha1
kind: Trigger
metadata:
  name: event-display-trigger
  namespace: default
spec:
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display
After you have created the trigger.yaml file, apply it:

$ oc apply -f trigger.yaml
To create events, launch a Pod in the default namespace:
$ oc create deployment hello-node --image=quay.io/openshift-knative/knative-eventing-sources-event-display
To check that the controller is mapped correctly, enter the following command and inspect the output:
$ oc get apiserversource.sources.knative.dev testevents -o yaml
Example output
apiVersion: sources.knative.dev/v1alpha1
kind: ApiServerSource
metadata:
  annotations:
  creationTimestamp: "2020-04-07T17:24:54Z"
  generation: 1
  name: testevents
  namespace: default
  resourceVersion: "62868"
  selfLink: /apis/sources.knative.dev/v1alpha1/namespaces/default/apiserversources/testevents2
  uid: 1603d863-bb06-4d1c-b371-f580b4db99fa
spec:
  mode: Resource
  resources:
    - apiVersion: v1
      controller: false
      controllerSelector:
        apiVersion: ""
        kind: ""
        name: ""
        uid: ""
      kind: Event
      labelSelector: {}
  serviceAccountName: events-sa
  sink:
    ref:
      apiVersion: eventing.knative.dev/v1beta1
      kind: Broker
      name: default
Verification steps
To verify that the Kubernetes events were sent to Knative, you can look at the message dumper function logs.
Get the Pods:
$ oc get pods
View the message dumper function logs for the Pods:
$ oc logs $(oc get pod -o name | grep event-display) -c user-container
Example output
☁️  cloudevents.Event
Validation: valid
Context Attributes,
  specversion: 1.0
  type: dev.knative.apiserver.resource.update
  datacontenttype: application/json
  ...
Data,
  {
    "apiVersion": "v1",
    "involvedObject": {
      "apiVersion": "v1",
      "fieldPath": "spec.containers{hello-node}",
      "kind": "Pod",
      "name": "hello-node",
      "namespace": "default",
      .....
    },
    "kind": "Event",
    "message": "Started container",
    "metadata": {
      "name": "hello-node.159d7608e3a3572c",
      "namespace": "default",
      ....
    },
    "reason": "Started",
    ...
  }
11.3.4. Deleting the ApiServerSource
This section describes how to delete the ApiServerSource, trigger, service, service account, role, and role binding by deleting their YAML files.
Procedure
Delete the trigger:
$ oc delete -f trigger.yaml
Delete the service:
$ oc delete -f service.yaml
Delete the event source:
$ oc delete -f k8s-events.yaml
Delete the service account, role, and role binding:
$ oc delete -f authentication.yaml
11.4. Using a PingSource
A PingSource is used to periodically send ping events with a constant payload to an event consumer, and can be used to schedule sending events, similar to a timer.
Example PingSource YAML
apiVersion: sources.knative.dev/v1alpha2
kind: PingSource
metadata:
  name: test-ping-source
spec:
  schedule: "*/2 * * * *" 1
  jsonData: '{"message": "Hello world!"}' 2
  sink: 3
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display
- 1
- The schedule of the event, specified using a cron expression. Additional example schedules are shown after this list.
- 2
- The event message body, expressed as a JSON-encoded data string.
- 3
- The details of the event consumer. In this example, we are using a Knative service named event-display.
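The schedule field uses standard five-field cron syntax. The following values are illustrative only and are not part of this procedure:

schedule: "* * * * *"      # fires every minute
schedule: "0 * * * *"      # fires at the start of every hour
schedule: "0 0 * * *"      # fires once a day at midnight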
11.4.1. Using a PingSource with the kn CLI
The following sections describe how to create, verify and remove a basic PingSource using the kn CLI.
Prerequisites
- You have Knative Serving and Eventing installed.
- You have the kn CLI installed.
Procedure
To verify that the PingSource is working, create a simple Knative service that dumps incoming messages to the service’s logs:
$ kn service create event-display \
    --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest
For each set of ping events that you want to request, create a PingSource in the same namespace as the event consumer:
$ kn source ping create test-ping-source \
    --schedule "*/2 * * * *" \
    --data '{"message": "Hello world!"}' \
    --sink svc:event-display
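If you need to change the schedule or payload later, the PingSource can be updated in place instead of being recreated. The following example assumes that the kn source ping update command is available in your version of the kn CLI:

$ kn source ping update test-ping-source --schedule "*/1 * * * *"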
Check that the controller is mapped correctly by entering the following command and inspecting the output:
$ kn source ping describe test-ping-source
Example output
Name:         test-ping-source
Namespace:    default
Annotations:  sources.knative.dev/creator=developer, sources.knative.dev/lastModifier=developer
Age:          15s
Schedule:     */2 * * * *
Data:         {"message": "Hello world!"}
Sink:
  Name:       event-display
  Namespace:  default
  Resource:   Service (serving.knative.dev/v1)
Conditions:
  OK TYPE                AGE REASON
  ++ Ready                8s
  ++ Deployed             8s
  ++ SinkProvided        15s
  ++ ValidSchedule       15s
  ++ EventTypeProvided   15s
  ++ ResourcesCorrect    15s
Verification steps
You can verify that the Kubernetes events were sent to the Knative event sink by looking at the sink pod’s logs.
By default, Knative services terminate their pods if no traffic is received within a 60 second period. The example shown in this guide creates a PingSource that sends a message every 2 minutes, so each message should be observed in a newly created pod.
Watch for new pods created:
$ watch oc get pods
Cancel watching the pods using Ctrl+C, then look at the logs of the created pod:
$ oc logs $(oc get pod -o name | grep event-display) -c user-container
Example output
☁️  cloudevents.Event
Validation: valid
Context Attributes,
  specversion: 1.0
  type: dev.knative.sources.ping
  source: /apis/v1/namespaces/default/pingsources/test-ping-source
  id: 99e4f4f6-08ff-4bff-acf1-47f61ded68c9
  time: 2020-04-07T16:16:00.000601161Z
  datacontenttype: application/json
Data,
  {
    "message": "Hello world!"
  }
11.4.1.1. Remove the PingSource
Delete the PingSource:
$ kn source ping delete test-ping-source
Delete the event-display service:

$ kn service delete event-display
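Optionally, confirm that both resources have been removed by listing the remaining sources and services:

$ kn source list
$ kn service list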
11.4.2. Using a PingSource with YAML
The following sections describe how to create, verify and remove a basic PingSource using YAML files.
Prerequisites
- You have Knative Serving and Eventing installed.
The following procedure requires you to create YAML files.
If you change the names of the YAML files from those used in the examples, you must ensure that you also update the corresponding CLI commands.
Procedure
To verify that the PingSource is working, create a simple Knative service that dumps incoming messages to the service’s logs.
Copy the example YAML into a file named service.yaml:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: event-display
spec:
  template:
    spec:
      containers:
        - image: quay.io/openshift-knative/knative-eventing-sources-event-display:latest
Create the service:
$ oc apply --filename service.yaml
For each set of ping events that you want to request, create a PingSource in the same namespace as the event consumer.
Copy the example YAML into a file named ping-source.yaml:

apiVersion: sources.knative.dev/v1alpha2
kind: PingSource
metadata:
  name: test-ping-source
spec:
  schedule: "*/2 * * * *"
  jsonData: '{"message": "Hello world!"}'
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display
Create the PingSource:
$ oc apply --filename ping-source.yaml
Check that the controller is mapped correctly by entering the following command:
$ oc get pingsource.sources.knative.dev test-ping-source -oyaml
Example output
apiVersion: sources.knative.dev/v1alpha2
kind: PingSource
metadata:
  annotations:
    sources.knative.dev/creator: developer
    sources.knative.dev/lastModifier: developer
  creationTimestamp: "2020-04-07T16:11:14Z"
  generation: 1
  name: test-ping-source
  namespace: default
  resourceVersion: "55257"
  selfLink: /apis/sources.knative.dev/v1alpha2/namespaces/default/pingsources/test-ping-source
  uid: 3d80d50b-f8c7-4c1b-99f7-3ec00e0a8164
spec:
  jsonData: '{"message": "Hello world!"}'
  schedule: '*/2 * * * *'
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display
      namespace: default
Verification steps
You can verify that the Kubernetes events were sent to the Knative event sink by looking at the sink pod’s logs.
By default, Knative services terminate their pods if no traffic is received within a 60 second period. The example shown in this guide creates a PingSource that sends a message every 2 minutes, so each message should be observed in a newly created pod.
Watch for new pods created:
$ watch oc get pods
Cancel watching the pods using Ctrl+C, then look at the logs of the created pod:
$ oc logs $(oc get pod -o name | grep event-display) -c user-container
Example output
☁️  cloudevents.Event
Validation: valid
Context Attributes,
  specversion: 1.0
  type: dev.knative.sources.ping
  source: /apis/v1/namespaces/default/pingsources/test-ping-source
  id: 042ff529-240e-45ee-b40c-3a908129853e
  time: 2020-04-07T16:22:00.000791674Z
  datacontenttype: application/json
Data,
  {
    "message": "Hello world!"
  }
11.4.2.1. Remove the PingSource
Delete the service by entering the following command:
$ oc delete --filename service.yaml
Delete the PingSource by entering the following command:
$ oc delete --filename ping-source.yaml