Chapter 5. Triggers
5.1. Triggers overview
Triggers are an essential component in Knative Eventing that connect specific event sources to subscriber services based on defined filters. By creating a Trigger, you can dynamically manage how events are routed within your system, ensuring they reach the appropriate destination based on your business logic.
Brokers can be used in combination with triggers to deliver events from an event source to an event sink. Events are sent from an event source to a broker as an HTTP POST request. After events have entered the broker, they can be filtered by CloudEvent attributes using triggers, and sent as an HTTP POST request to an event sink.
5.2. Creating triggers
Triggers in Knative Eventing allow you to route events from a broker to a specific subscriber based on your requirements. By defining a Trigger, you can connect event producers to consumers dynamically, ensuring events are delivered to the correct destination. This section describes how to create a Trigger, configure its filters, and verify its functionality, whether you are working with simple routing needs or complex event-driven workflows.
The following examples display common configurations for Triggers, demonstrating how to route events to Knative services or custom endpoints.
Example of routing events to a Knative Serving service
The following Trigger routes all events from the default broker to the Knative Serving service named my-service:
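A minimal Trigger manifest for this route might look like the following sketch; the trigger name my-service-trigger is illustrative:
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: my-service-trigger # illustrative name
spec:
  broker: default
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: my-service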
Routing all events without a filter attribute is recommended for debugging purposes. It allows you to observe and analyze all incoming events, helping you identify issues or validate the flow of events through the broker before applying specific filters. To learn more about filtering, see Advanced trigger filters.
To apply this trigger, you can save the configuration to a file, for example trigger.yaml, and run the following command:
$ oc apply -f trigger.yaml
Example of routing events to a custom path
This Trigger routes all events from the default broker to a custom path /my-custom-path on the service named my-service:
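A sketch of such a Trigger; the trigger name is illustrative, and the uri value is resolved relative to the address of my-service:
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: custom-path-trigger # illustrative name
spec:
  broker: default
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: my-service
    uri: /my-custom-path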
You can save the configuration to a file, for example custom-path-trigger.yaml, and run the following command:
$ oc apply -f custom-path-trigger.yaml
5.2.1. Creating a trigger
Using the OpenShift Container Platform web console provides a streamlined and intuitive user interface to create a trigger. After Knative Eventing is installed on your cluster and you have created a broker, you can create a trigger by using the web console.
Prerequisites
- The OpenShift Serverless Operator, Knative Serving, and Knative Eventing are installed on your OpenShift Container Platform cluster.
- You have logged in to the web console.
- You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have created a broker and a Knative service or other event sink to connect to the trigger.
Procedure
- Navigate to the Topology page.
- Hover over the broker that you want to create a trigger for, and drag the arrow. The Add Trigger option is displayed.
- Click Add Trigger.
- Select your sink in the Subscriber list.
- Click Add.
Verification
- After the trigger has been created, you can view it in the Topology page, where it is represented as a line that connects the broker to the event sink.
Deleting a trigger
- Navigate to the Topology page.
- Click on the trigger that you want to delete.
- In the Actions context menu, select Delete Trigger.
5.2.2. Creating a trigger by using the Knative CLI
You can use the kn trigger create command to create a trigger.
Prerequisites
- The OpenShift Serverless Operator and Knative Eventing are installed on your OpenShift Container Platform cluster.
- You have installed the Knative (kn) CLI.
- You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
Procedure
Create a trigger:
$ kn trigger create <trigger_name> --broker <broker_name> --filter <key=value> --sink <sink_name>
Alternatively, you can create a trigger and simultaneously create the default broker by using broker injection:
$ kn trigger create <trigger_name> --inject-broker --filter <key=value> --sink <sink_name>
By default, triggers forward all events sent to a broker to sinks that are subscribed to that broker. Using the --filter attribute for triggers allows you to filter events from a broker, so that subscribers will only receive a subset of events based on your defined criteria.
5.3. List triggers from the command line
Using the Knative (kn) CLI to list triggers provides a streamlined and intuitive user interface.
5.3.1. Listing triggers by using the Knative CLI
You can use the kn trigger list command to list existing triggers in your cluster.
Prerequisites
- The OpenShift Serverless Operator and Knative Eventing are installed on your OpenShift Container Platform cluster.
- You have installed the Knative (kn) CLI.
Procedure
Print a list of available triggers:
$ kn trigger list
Example output
NAME    BROKER    SINK            AGE   CONDITIONS   READY   REASON
email   default   ksvc:edisplay   4s    5 OK / 5     True
ping    default   ksvc:edisplay   32s   5 OK / 5     True
Optional: Print a list of triggers in JSON format:
$ kn trigger list -o json
5.4. Describe triggers from the command line
Using the Knative (kn) CLI to describe triggers provides a streamlined and intuitive user interface.
5.4.1. Describing a trigger by using the Knative CLI
You can use the kn trigger describe command to print information about existing triggers in your cluster.
Prerequisites
- The OpenShift Serverless Operator and Knative Eventing are installed on your OpenShift Container Platform cluster.
- You have installed the Knative (kn) CLI.
- You have created a trigger.
Procedure
Enter the command:
$ kn trigger describe <trigger_name>
Example output
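The output resembles the following; the trigger, broker, and sink names, as well as the ages, are illustrative:
Name:         ping
Namespace:    default
Labels:       eventing.knative.dev/broker=default
Annotations:  eventing.knative.dev/creator=kube:admin, eventing.knative.dev/lastModifier=kube:admin
Age:          2m
Broker:       default
Filter:
  type:       dev.knative.event

Sink:
  Name:       edisplay
  Namespace:  default
  Resource:   Service (serving.knative.dev/v1)

Conditions:
  OK TYPE                  AGE REASON
  ++ Ready                  5s
  ++ BrokerReady            5s
  ++ DependencyReady        5s
  ++ Subscribed             5s
  ++ SubscriberResolved     5s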
5.5. Connecting a trigger to a sink
You can connect a trigger to a sink, so that events from a broker are filtered before they are sent to the sink. A sink that is connected to a trigger is configured as a subscriber in the Trigger object's resource spec.
Example of a Trigger object connected to an Apache Kafka sink
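A sketch of such a Trigger; the placeholder names are yours to replace, and the KafkaSink API version shown here may differ in your installation:
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: <trigger_name>
spec:
  broker: <broker_name>
  subscriber:
    ref:
      apiVersion: eventing.knative.dev/v1alpha1 # may differ in your installation
      kind: KafkaSink
      name: <kafka_sink_name>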
5.6. Filtering triggers from the command line
Using the Knative (kn) CLI to filter events by using triggers provides a streamlined and intuitive user interface. You can use the kn trigger create command, along with the appropriate flags, to filter events by using triggers.
5.6.1. Filtering events with triggers by using the Knative CLI
In the following trigger example, only events with the attribute type: dev.knative.samples.helloworld are sent to the event sink:
$ kn trigger create <trigger_name> --broker <broker_name> --filter type=dev.knative.samples.helloworld --sink ksvc:<service_name>
You can also filter events by using multiple attributes. The following example shows how to filter events using the type, source, and extension attributes:
$ kn trigger create <trigger_name> --broker <broker_name> --sink ksvc:<service_name> \
  --filter type=dev.knative.samples.helloworld \
  --filter source=dev.knative.samples/helloworldsource \
  --filter myextension=my-extension-value
5.7. Advanced trigger filters
The advanced trigger filters give you additional options for more precise event routing. You can filter events by exact matches, prefixes, or suffixes, as well as by CloudEvent extensions. This added control makes it easier to fine-tune how events flow, ensuring that only relevant events trigger specific actions.
5.7.1. Advanced trigger filters overview
The advanced trigger filters feature adds a new filters field to triggers that aligns with the filters API field defined in the CloudEvents Subscriptions API. You can specify filter expressions, where each expression evaluates to true or false for each event.
The following example shows a trigger using the advanced filters field:
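In this sketch, the trigger name, attribute values, and service name are illustrative:
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: my-service-trigger # illustrative name
spec:
  broker: default
  filters:
    - exact:
        type: dev.knative.foo # illustrative attribute value
    - prefix:
        source: dev.knative. # illustrative attribute value
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: my-service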
The filters field contains an array of filter expressions, each evaluating to either true or false. If any expression evaluates to false, the event is not sent to the subscriber. Each filter expression uses a specific dialect that determines the type of filter and the set of allowed additional properties within the expression.
5.7.2. Supported filter dialects
You can use dialects to define flexible filter expressions to target specific events.
The advanced trigger filters support the following dialects that offer different ways to match and filter events:
- exact
- prefix
- suffix
- all
- any
- not
- cesql
Each dialect provides a different method for filtering events based on specific criteria, enabling precise event selection for processing.
5.7.2.1. exact filter dialect
The exact dialect filters events by checking whether the string value of a CloudEvent attribute exactly matches the specified string. The comparison is case-sensitive. If the attribute is not a string, the filter converts the attribute to its string representation before comparing it to the specified value.
Example of the exact filter dialect
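A sketch of a full Trigger that uses the exact dialect; the trigger name, attribute value, and service name are illustrative:
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: exact-filter-trigger # illustrative name
spec:
  broker: default
  filters:
    - exact:
        type: com.github.push # illustrative attribute value
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: my-service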
5.7.2.2. prefix filter dialect
The prefix dialect filters events by checking whether the string value of a CloudEvent attribute starts with the specified string. The comparison is case-sensitive. If the attribute is not a string, the filter converts the attribute to its string representation before matching it against the specified value.
Example of the prefix filter dialect
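A sketch of the filters stanza, placed under the Trigger spec as in the exact example; the attribute value is illustrative:
filters:
  - prefix:
      type: com.github. # illustrative attribute value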
5.7.2.3. suffix filter dialect
The suffix dialect filters events by checking whether the string value of a CloudEvent attribute ends with the specified string. The comparison is case-sensitive. If the attribute is not a string, the filter converts the attribute to its string representation before matching it to the specified value.
Example of the suffix filter dialect
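A sketch of the filters stanza, placed under the Trigger spec as in the exact example; the attribute value is illustrative:
filters:
  - suffix:
      type: .created # illustrative attribute value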
5.7.2.4. all filter dialect
The all filter dialect requires all nested filter expressions to evaluate to true for the event to be processed. If any of the nested expressions return false, the event is not sent to the subscriber.
Example of the all filter dialect
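A sketch of the filters stanza, placed under the Trigger spec as in the exact example; the nested attribute values are illustrative:
filters:
  - all:
      - exact:
          type: com.github.push # illustrative attribute value
      - exact:
          subject: my-repository # illustrative attribute value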
5.7.2.5. any filter dialect
The any filter dialect requires at least one of the nested filter expressions to evaluate to true. If none of the nested expressions return true, the event is not sent to the subscriber.
Example of the any filter dialect
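A sketch of the filters stanza, placed under the Trigger spec as in the exact example; the nested attribute values are illustrative:
filters:
  - any:
      - exact:
          type: com.github.push # illustrative attribute value
      - exact:
          type: com.github.pull_request # illustrative attribute value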
5.7.2.6. not filter dialect
The not filter dialect requires the nested filter expression to evaluate to false for the event to be processed. If the nested expression evaluates to true, the event is not sent to the subscriber.
Example of the not filter dialect
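A sketch of the filters stanza, placed under the Trigger spec as in the exact example; the attribute value is illustrative:
filters:
  - not:
      exact:
        type: com.github.push # illustrative attribute value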
5.7.2.7. cesql filter dialect
CloudEvents SQL expressions (cesql) allow you to compute values and match CloudEvent attributes against complex expressions that lean on the syntax of Structured Query Language (SQL) WHERE clauses.
The cesql filter dialect uses CloudEvents SQL expressions to filter events. The provided CESQL expression must evaluate to true for the event to be processed.
Example of the cesql filter dialect
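A sketch of the filters stanza, placed under the Trigger spec as in the exact example; the expression is illustrative:
filters:
  - cesql: "source LIKE '%commerce%' AND type IN ('order.created', 'order.updated')" # illustrative expression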
For more information about the syntax and the features of the cesql filter dialect, see CloudEvents SQL Expression Language.
5.7.3. Conflict with the existing filter field
You can use the filters field and the existing filter field at the same time. If you enable the new-trigger-filters feature and an object contains both filter and filters, the filters field takes precedence. This setup allows you to test the new filters field while maintaining support for existing filters, and you can gradually introduce the new field into existing trigger objects.
Example of the filters field overriding the filter field:
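A sketch, assuming the default broker and a Knative service named my-service; because both fields are present, only events with the type dev.knative.foo are delivered:
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: my-service-trigger # illustrative name
spec:
  broker: default
  filter:
    attributes:
      type: dev.knative.bar # ignored because filters is present
  filters:
    - exact:
        type: dev.knative.foo # takes precedence
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: my-service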
5.7.4. Legacy attributes filter
The legacy attributes filter enables exact match filtering on any number of CloudEvents attributes, including extensions. Its functionality mirrors the exact filter dialect, and you are encouraged to transition to the exact filter whenever possible. However, for backward compatibility, the attributes filter remains available.
The following example shows how to filter events from the default broker that match the type attribute dev.knative.foo.bar and have the extension myextension with the value my-extension-value:
Example of filtering events with specific attributes
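A sketch of such a Trigger; the trigger and service names are illustrative:
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: my-service-trigger # illustrative name
spec:
  broker: default
  filter:
    attributes:
      type: dev.knative.foo.bar
      myextension: my-extension-value
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: my-service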
When both the filters field and the legacy filter field are specified, the filters field takes precedence.
For example, in the following configuration, events with the dev.knative.a type are delivered, while events with the dev.knative.b type are ignored:
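A sketch illustrating the precedence; the trigger and service names are illustrative:
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: my-service-trigger # illustrative name
spec:
  broker: default
  filters:
    - exact:
        type: dev.knative.a # delivered
  filter:
    attributes:
      type: dev.knative.b # ignored
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: my-service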
5.8. Updating triggers from the command line
Using the Knative (kn) CLI to update triggers provides a streamlined and intuitive user interface.
5.8.1. Updating a trigger by using the Knative CLI
You can use the kn trigger update command with certain flags to update attributes for a trigger.
Prerequisites
- The OpenShift Serverless Operator and Knative Eventing are installed on your OpenShift Container Platform cluster.
- You have installed the Knative (kn) CLI.
- You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
Procedure
Update a trigger:
$ kn trigger update <trigger_name> --filter <key=value> --sink <sink_name> [flags]
You can update a trigger to filter exact event attributes that match incoming events. For example, using the type attribute:
$ kn trigger update <trigger_name> --filter type=knative.dev.event
You can remove a filter attribute from a trigger. For example, you can remove the filter attribute with key type:
$ kn trigger update <trigger_name> --filter type-
You can use the --sink parameter to change the event sink of a trigger:
$ kn trigger update <trigger_name> --sink ksvc:my-event-sink
5.9. Deleting triggers from the command line
Using the Knative (kn) CLI to delete a trigger provides a streamlined and intuitive user interface.
5.9.1. Deleting a trigger by using the Knative CLI
You can use the kn trigger delete command to delete a trigger.
Prerequisites
- The OpenShift Serverless Operator and Knative Eventing are installed on your OpenShift Container Platform cluster.
- You have installed the Knative (kn) CLI.
- You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
Procedure
Delete a trigger:
$ kn trigger delete <trigger_name>
Verification
List existing triggers:
$ kn trigger list
Verify that the trigger no longer exists:
Example output
No triggers found.
5.10. Event delivery order for triggers
In Knative Eventing, the delivery order of events plays a critical role in ensuring messages are processed according to application requirements. When using a Kafka broker, you can specify whether events should be delivered in order or without strict ordering. By configuring the delivery order, you can optimize event handling for use cases that require sequential processing or prioritize performance for unordered delivery.
5.10.1. Configuring event delivery ordering for triggers
If you are using a Kafka broker, you can configure the delivery order of events from triggers to event sinks.
Prerequisites
- The OpenShift Serverless Operator, Knative Eventing, and Knative Kafka are installed on your OpenShift Container Platform cluster.
- Kafka broker is enabled for use on your cluster, and you have created a Kafka broker.
- You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
- You have installed the OpenShift (oc) CLI.
Procedure
Create or modify a Trigger object and set the kafka.eventing.knative.dev/delivery.order annotation, as shown in the example Trigger YAML file after the following list. The supported consumer delivery guarantees are:
- unordered: An unordered consumer is a non-blocking consumer that delivers messages unordered, while preserving proper offset management.
- ordered: An ordered consumer is a per-partition blocking consumer that waits for a successful response from the CloudEvent subscriber before it delivers the next message of the partition.
The default ordering guarantee is unordered.
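A sketch of such a Trigger YAML file; replace the placeholders with your own trigger, broker, and service names:
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: <trigger_name>
  annotations:
    kafka.eventing.knative.dev/delivery.order: ordered
spec:
  broker: <broker_name>
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: <service_name>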
Apply the Trigger object by using the following command:
$ oc apply -f <filename>
5.10.2. Next steps
- Configure event delivery parameters that are applied in cases where an event fails to be delivered to an event sink.