Chapter 5. Triggers
5.1. Triggers overview
Triggers are an essential component in Knative Eventing that connect event sources to subscriber services based on defined filters. You can create a trigger to manage event routing dynamically within your system and ensure that events reach the appropriate destination based on your business logic.
Brokers can be used in combination with triggers to deliver events from an event source to an event sink. Events are sent from an event source to a broker as an HTTP POST request. After events have entered the broker, they can be filtered by CloudEvent attributes using triggers, and sent as an HTTP POST request to an event sink.
5.2. Creating triggers
Triggers in Knative Eventing route events from a broker to a specific subscriber based on defined criteria. You can use a trigger to connect event producers to consumers dynamically and ensure that events reach the correct destination. You can create a trigger, configure filters, and verify its behavior to support both simple routing and complex event-driven workflows.
5.2.1. Common trigger configurations
The following examples display common configurations for triggers, demonstrating how to route events to Knative services or custom endpoints:
The following trigger routes events with the type attribute dev.knative.foo.bar from the default broker to the Knative Serving service named my-service:
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: my-service-trigger
spec:
  broker: default
  filter:
    attributes:
      type: dev.knative.foo.bar
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: my-service
For debugging, you can route all events by omitting the filter attribute. This approach helps you observe and analyze incoming events, identify issues, and validate event flow through the broker before you apply specific filters.
To apply this trigger, save the configuration to a file, for example trigger.yaml, and run the following command:
$ oc apply -f trigger.yaml
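For the debugging scenario described above, a trigger can omit the filter attribute entirely so that it forwards every event from the broker. The following is an illustrative sketch; the debug-trigger name and the event-display sink service are placeholder assumptions, not resources defined in this document:

```yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: debug-trigger          # placeholder name
spec:
  broker: default
  # No filter: every event that enters the broker is forwarded to the sink.
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display      # placeholder sink service
```

After you confirm that events flow as expected, you can add a filter to narrow delivery to the relevant subset.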
The following example displays how to route events to a custom path:
This Trigger routes all events from the default broker to a custom path /my-custom-path on the service named my-service:
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: my-service-trigger
spec:
  broker: default
  subscriber:
    ref:
      apiVersion: v1
      kind: Service
      name: my-service
    uri: /my-custom-path
Save the configuration to a file, for example custom-path-trigger.yaml, and run the following command:
$ oc apply -f custom-path-trigger.yaml
5.2.2. Creating a trigger
Using the OpenShift Container Platform web console provides a streamlined and intuitive user interface to create a trigger. After you have installed Knative Eventing on your cluster and created a broker, you can create a trigger by using the web console.
Prerequisites
- You have installed the OpenShift Serverless Operator, Knative Serving, and Knative Eventing on your OpenShift Container Platform cluster.
- You have logged in to the web console.
- You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have created a broker and a Knative service or other event sink to connect to the trigger.
Procedure
- Navigate to the Topology page.
- Hover over the broker that you want to create a trigger for, and drag the arrow. The Add Trigger option is displayed.
- Click Add Trigger.
- Select your sink in the Subscriber list.
- Click Add.
Verification
- After you create the trigger, view it on the Topology page, where a line connects the broker to the event sink.
Deleting a trigger
- Navigate to the Topology page.
- Click the trigger that you want to delete.
- In the Actions menu, select Delete Trigger.
5.2.3. Creating a trigger by using the Knative CLI
You can use the kn trigger create command to create a trigger.
Prerequisites
- You have installed the OpenShift Serverless Operator and Knative Eventing on your OpenShift Container Platform cluster.
- You have installed the Knative (kn) CLI.
- You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
Procedure
Create a trigger by running the following command:
$ kn trigger create <trigger_name> --broker <broker_name> --filter <key=value> --sink <sink_name>

You can also create a trigger and simultaneously create the default broker by using broker injection:

$ kn trigger create <trigger_name> --inject-broker --filter <key=value> --sink <sink_name>

By default, triggers forward all events sent to a broker to sinks that subscribe to that broker. You can use the --filter attribute for triggers to filter events from a broker so that subscribers receive only a subset of events based on defined criteria.
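As a hedged sketch, a kn trigger create command like the one above corresponds roughly to a Trigger object similar to the following YAML. All names, the filter key and value, and the sink kind are placeholders, not values from this document:

```yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: <trigger_name>
spec:
  broker: <broker_name>
  filter:
    attributes:
      <key>: <value>                 # corresponds to --filter <key=value>
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: <sink_name>              # corresponds to --sink ksvc:<sink_name>
```

Inspecting the generated object with oc get trigger <trigger_name> -o yaml is a convenient way to compare the CLI flags with the resulting resource.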
5.3. Listing triggers from the command line
Using the Knative (kn) CLI to list triggers provides a streamlined and intuitive user interface.
5.3.1. Listing triggers by using the Knative CLI
You can use the kn trigger list command to list existing triggers in your cluster.
Prerequisites
- You have installed the OpenShift Serverless Operator and Knative Eventing on your OpenShift Container Platform cluster.
- You have installed the Knative (kn) CLI.
Procedure
Print a list of available triggers by running the following command:
$ kn trigger list

You get an output similar to the following example:

NAME    BROKER    SINK            AGE   CONDITIONS   READY   REASON
email   default   ksvc:edisplay   4s    5 OK / 5     True
ping    default   ksvc:edisplay   32s   5 OK / 5     True

Optional: Print a list of triggers in JSON format by running the following command:
$ kn trigger list -o json
5.4. Describing triggers from the command line
Using the Knative (kn) CLI to describe triggers provides a streamlined and intuitive user interface.
5.4.1. Describing a trigger by using the Knative CLI
You can use the kn trigger describe command to print information about existing triggers in your cluster by using the Knative CLI.
Prerequisites
- You have installed the OpenShift Serverless Operator and Knative Eventing are on your OpenShift Container Platform cluster.
- You have installed the Knative (kn) CLI.
- You have created a trigger.
Procedure
Describe a trigger by running the following command:
$ kn trigger describe <trigger_name>

You get an output similar to the following example:

Name:         ping
Namespace:    default
Labels:       eventing.knative.dev/broker=default
Annotations:  eventing.knative.dev/creator=kube:admin, eventing.knative.dev/lastModifier=kube:admin
Age:          2m
Broker:       default
Filter:
  type:       dev.knative.event

Sink:
  Name:       edisplay
  Namespace:  default
  Resource:   Service (serving.knative.dev/v1)

Conditions:
  OK TYPE                  AGE REASON
  ++ Ready                  2m
  ++ BrokerReady            2m
  ++ DependencyReady        2m
  ++ Subscribed             2m
  ++ SubscriberResolved     2m
5.5. Connecting a trigger to a sink
You can connect a trigger to a sink so that it filters events from a broker before sending them to the sink. When you connect a sink to a trigger, you configure the sink as a subscriber in the Trigger resource.
5.5.1. Trigger configuration for connecting to a Kafka sink
The following example shows how to configure a Trigger resource to send events to an Apache Kafka sink. It includes an example YAML definition that demonstrates how to define the sink as a subscriber in the Trigger resource spec.
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: <trigger_name>
spec:
...
  subscriber:
    ref:
      apiVersion: eventing.knative.dev/v1alpha1
      kind: KafkaSink
      name: <kafka_sink_name>
- name: <trigger_name>: The name of the trigger that is connected to the sink.
- name: <kafka_sink_name>: The name of a KafkaSink object.
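For context, a minimal KafkaSink object that such a trigger could reference might look like the following sketch. The topic name and bootstrap server address are placeholder assumptions, not values defined in this document:

```yaml
apiVersion: eventing.knative.dev/v1alpha1
kind: KafkaSink
metadata:
  name: <kafka_sink_name>
spec:
  topic: <topic_name>              # Kafka topic that receives the events
  bootstrapServers:
    - <bootstrap_server>:9092      # address of a Kafka bootstrap server
```

The trigger delivers filtered events to this sink, which in turn writes them to the configured Kafka topic.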
5.6. Filtering triggers from the command line
Using the Knative (kn) CLI to filter events by using triggers provides a streamlined and intuitive user interface. You can use the kn trigger create command, along with the appropriate flags, to filter events by using triggers.
5.6.1. Filtering events with triggers by using the Knative CLI
In the following trigger example, the trigger sends only events with the attribute type: dev.knative.samples.helloworld to the event sink:
$ kn trigger create <trigger_name> --broker <broker_name> --filter type=dev.knative.samples.helloworld --sink ksvc:<service_name>
You can also filter events by using many attributes. The following example shows how to filter events by using the type, source, and extension attributes:
$ kn trigger create <trigger_name> --broker <broker_name> --sink ksvc:<service_name> \
--filter type=dev.knative.samples.helloworld \
--filter source=dev.knative.samples/helloworldsource \
--filter myextension=my-extension-value
5.7. Advanced trigger filters
The advanced trigger filters give you more precise event routing options. You can filter events by exact matches, prefixes, or suffixes and by CloudEvent extensions. This added control makes it easier to fine-tune how events flow, ensuring that only relevant events trigger specific actions.
5.7.1. Advanced trigger filters overview
The advanced trigger filters feature adds a new filters field to triggers that aligns with the filters API field defined in the CloudEvents Subscriptions API. You can specify filter expressions, where each expression evaluates to true or false for each event.
The following example shows a trigger that uses the advanced filters field:
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: my-service-trigger
spec:
  broker: default
  filters:
    - cesql: "source LIKE '%commerce%' AND type IN ('order.created', 'order.updated', 'order.canceled')"
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: my-service
The filters field has an array of filter expressions, each evaluating to either true or false. If any expression evaluates to false, the event is not sent to the subscriber. Each filter expression uses a specific dialect that determines the type of filter and the set of allowed additional properties within the expression.
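Because every expression in the filters array must evaluate to true, listing several expressions combines them with an implicit AND. The following is an illustrative sketch (the attribute values are placeholders, not values from this document) of a trigger that delivers only events whose type starts with com.example. and whose subject ends with .created:

```yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: my-service-trigger
spec:
  broker: default
  filters:
    - prefix:
        type: com.example.        # type must start with this prefix
    - suffix:
        subject: .created         # subject must end with this suffix
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: my-service
```

An event that fails either expression is dropped before it reaches the subscriber.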
5.7.2. Supported filter dialects
You can use dialects to define flexible filter expressions to target specific events.
The advanced trigger filters support the following dialects that offer different ways to match and filter events:
- exact
- prefix
- suffix
- all
- any
- not
- cesql
Each dialect provides a different method for filtering events based on specific criteria, enabling precise event selection for processing.
5.7.2.1. exact filter dialect
The exact dialect filters events by comparing the string value of a CloudEvent attribute to the specified string, requiring an exact match. The comparison is case-sensitive. If the attribute is not a string, the filter converts it to its string representation before comparing it to the specified value.
The following example displays the exact filter dialect:
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
...
spec:
...
  filters:
    - exact:
        type: com.github.push
5.7.2.2. prefix filter dialect
The prefix dialect filters events by checking whether the string value of a CloudEvent attribute starts with the specified string. This comparison is case-sensitive. If the attribute is not a string, the filter converts it to its string representation before matching it against the specified value.
The following example displays the prefix filter dialect:
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
...
spec:
...
  filters:
    - prefix:
        type: com.github.
5.7.2.3. suffix filter dialect
The suffix dialect filters events by checking whether the string value of a CloudEvent attribute ends with the specified string. This comparison is case-sensitive. If the attribute is not a string, the filter converts it to its string representation before matching it against the specified value.
The following example displays the suffix filter dialect:
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
...
spec:
...
  filters:
    - suffix:
        type: .created
5.7.2.4. all filter dialect
The all filter dialect requires all nested filter expressions to evaluate to true for the event to be processed. If any of the nested expressions return false, the event is not sent to the subscriber.
The following example displays the all filter dialect:
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
...
spec:
...
  filters:
    - all:
        - exact:
            type: com.github.push
        - exact:
            subject: https://github.com/cloudevents/spec
5.7.2.5. any filter dialect
The any filter dialect requires at least one of the nested filter expressions to evaluate to true. If none of the nested expressions return true, the event is not sent to the subscriber.
The following example displays the any filter dialect:
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
...
spec:
...
  filters:
    - any:
        - exact:
            type: com.github.push
        - exact:
            subject: https://github.com/cloudevents/spec
5.7.2.6. not filter dialect
The not filter dialect requires that the nested filter expression evaluates to false for the event to be processed. If the nested expression evaluates to true, the event is not sent to the subscriber.
The following example displays the not filter dialect:
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
...
spec:
...
  filters:
    - not:
        exact:
          type: com.github.push
5.7.2.7. cesql filter dialect
CloudEvents SQL expressions (cesql) let you compute values and match CloudEvent attributes against complex expressions that are based on the syntax of Structured Query Language (SQL) WHERE clauses.
The cesql filter dialect uses CloudEvents SQL expressions to filter events. The provided CloudEvents Structured Query Language (CESQL) expression must evaluate to true for the event to be processed.
The following example displays the cesql filter dialect:
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
...
spec:
...
  filters:
    - cesql: "source LIKE '%commerce%' AND type IN ('order.created', 'order.updated', 'order.canceled')"
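CESQL expressions can also test for the presence of an attribute. The following is a hedged sketch that assumes the CloudEvents SQL EXISTS operator and a hypothetical extension named myextension; neither is defined elsewhere in this document:

```yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
...
spec:
...
  filters:
    # Deliver only events that carry the extension and a matching source.
    - cesql: "EXISTS myextension AND source LIKE '%example%'"
```

Combining EXISTS with string operators such as LIKE lets one expression cover both attribute presence and value matching.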
5.7.3. Conflict with the existing filter field
You can use the filters field and the existing filter field at the same time. If you enable the new-trigger-filters feature and an object includes both filter and filters, the filters field takes precedence. This setup allows you to test the new filters field while maintaining support for existing filters, and you can gradually introduce the new field into existing trigger objects.
The following example displays the filters field overriding the filter field:
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: my-service-trigger
spec:
  broker: default
  # Existing filter field. This will be ignored when the new filters field is present.
  filter:
    attributes:
      type: dev.knative.foo.bar
      myextension: my-extension-value
  # New filters field. This takes precedence over the old filter field.
  filters:
    - cesql: "type = 'dev.knative.foo.bar' AND myextension = 'my-extension-value'"
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: my-service
5.7.4. Legacy attributes filter
The existing attributes filter enables exact match filtering on any number of CloudEvents attributes, including extensions. Its functionality mirrors the exact filter dialect, and you are encouraged to make the transition to the exact filter whenever possible. However, for backward compatibility, the attributes filter remains available.
The following example displays how to filter events from the default broker that match the type attribute dev.knative.foo.bar and have the myextension extension set to my-extension-value:
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: my-service-trigger
spec:
  broker: default
  filter:
    attributes:
      type: dev.knative.foo.bar
      myextension: my-extension-value
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: my-service
When both the filters field and the existing filter field are specified, the filters field takes precedence.
For example, in the following configuration, events with the dev.knative.a type are delivered, while events with the dev.knative.b type are ignored:
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: my-service-trigger
spec:
  broker: default
  filters:
    - exact:
        type: dev.knative.a
  filter:
    attributes:
      type: dev.knative.b
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: my-service
5.8. Updating triggers from the command line
Using the Knative (kn) CLI to update triggers provides a streamlined and intuitive user interface.
5.8.1. Updating a trigger by using the Knative CLI
You can use the kn trigger update command with certain flags to update attributes for a trigger.
Prerequisites
- You have installed the OpenShift Serverless Operator and Knative Eventing on your OpenShift Container Platform cluster.
- You have installed the Knative (kn) CLI.
- You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
Procedure
Update a trigger by running the following command:
$ kn trigger update <trigger_name> --filter <key=value> --sink <sink_name> [flags]

You can update a trigger to filter exact event attributes that match incoming events. For example, using the type attribute:

$ kn trigger update <trigger_name> --filter type=knative.dev.event

You can remove a filter attribute from a trigger. For example, you can remove the filter attribute with key type:

$ kn trigger update <trigger_name> --filter type-

You can use the --sink parameter to change the event sink of a trigger:

$ kn trigger update <trigger_name> --sink ksvc:my-event-sink
5.9. Deleting triggers from the command line
Using the Knative (kn) CLI to delete a trigger provides a streamlined and intuitive user interface.
5.9.1. Deleting a trigger by using the Knative CLI
You can use the kn trigger delete command to delete a trigger.
Prerequisites
- You have installed the OpenShift Serverless Operator and Knative Eventing on your OpenShift Container Platform cluster.
- You have installed the Knative (kn) CLI.
- You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
Procedure
Delete a trigger by running the following command:
$ kn trigger delete <trigger_name>
Verification
List existing triggers by running the following command:
$ kn trigger list

Verify that the trigger no longer exists. You get an output similar to the following example:
No triggers found.
5.10. Event delivery order for triggers
In Knative Eventing, event delivery order affects how applications process messages. When you use a Kafka broker, you can select ordered or unordered delivery. Configure the delivery order to support sequential processing, or to prioritize performance over ordering when strict order is not required.
5.10.1. Configuring event delivery ordering for triggers
If you are using a Kafka broker, you can configure the delivery order of events from triggers to event sinks.
Prerequisites
- You have installed the OpenShift Serverless Operator, Knative Eventing, and Knative Kafka on the cluster.
- You have created a Kafka broker and it is enabled.
- You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have installed the OpenShift (oc) CLI.
Procedure
Create or modify a Trigger object and set the kafka.eventing.knative.dev/delivery.order annotation, as shown in the following example Trigger YAML file:

apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: <trigger_name>
  annotations:
    kafka.eventing.knative.dev/delivery.order: ordered
# ...

The supported consumer delivery guarantees are:

- unordered: An unordered consumer is a non-blocking consumer that delivers messages unordered, while preserving proper offset management.
- ordered: An ordered consumer is a per-partition blocking consumer that waits for a successful response from the CloudEvent subscriber before it delivers the next message of the partition.

The default ordering guarantee is unordered.
Apply the Trigger object by running the following command:

$ oc apply -f <filename>
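For comparison, a trigger that explicitly selects the default guarantee sets the same annotation to unordered. This is an illustrative sketch; the trigger name is a placeholder:

```yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: <trigger_name>
  annotations:
    # Non-blocking delivery; ordering within a partition is not guaranteed.
    kafka.eventing.knative.dev/delivery.order: unordered
# ...
```

Choose unordered when throughput matters more than strict per-partition ordering, because the consumer does not block on each subscriber response.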