Kamelets Reference
Preface
Making open source more inclusive
Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.
Chapter 1. AWS DynamoDB Sink
Send data to the AWS DynamoDB service. The sent data inserts, updates, or deletes an item in the given AWS DynamoDB table.
The access key and secret key are the basic method for authenticating to the AWS DynamoDB service. These parameters are optional, because the Kamelet also provides the 'useDefaultCredentialsProvider' option.
When a default credentials provider is used, the AWS DynamoDB client loads the credentials through that provider and does not use the static credentials. This is why the access key and secret key are not mandatory parameters for this Kamelet.
This Kamelet expects a JSON object as the body. The mapping between the JSON fields and the table attribute values is done by key, so for input such as:
{"username":"oscerd", "city":"Rome"}
the Kamelet inserts or updates an item in the given AWS DynamoDB table and sets the attributes 'username' and 'city' respectively. Note that the JSON object must include the primary key values that define the item.
1.1. Configuration Options
The following table summarizes the configuration options available for the aws-ddb-sink Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| region * | AWS Region | The AWS region to connect to | string | | "eu-west-1" |
| table * | Table | The name of the DynamoDB table to use | string | | |
| accessKey | Access Key | The access key obtained from AWS | string | | |
| operation | Operation | The operation to perform (one of PutItem, UpdateItem, DeleteItem) | string | | |
| overrideEndpoint | Endpoint Overwrite | Set whether to override the endpoint URI. Use this option in combination with the uriEndpointOverride option. | boolean | | |
| secretKey | Secret Key | The secret key obtained from AWS | string | | |
| uriEndpointOverride | Overwrite Endpoint URI | The overriding endpoint URI. Use this option in combination with the overrideEndpoint option. | string | | |
| useDefaultCredentialsProvider | Default Credentials Provider | Set whether the DynamoDB client loads credentials through a default credentials provider or expects static credentials to be passed in. | boolean | | |
| writeCapacity | Write Capacity | The provisioned throughput to reserve for writing resources to your table | integer | | |
Fields marked with an asterisk (*) are mandatory.
1.2. Dependencies
At runtime, the aws-ddb-sink Kamelet relies upon the presence of the following dependencies:
- mvn:org.apache.camel.kamelets:camel-kamelets-utils:1.8.0
- camel:core
- camel:jackson
- camel:aws2-ddb
- camel:kamelet
1.3. Usage
This section describes how you can use the aws-ddb-sink.
1.3.1. Knative Sink
You can use the aws-ddb-sink Kamelet as a Knative sink by binding it to a Knative object.
aws-ddb-sink-binding.yaml
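A minimal sketch of this binding file, reconstructed from the kamel bind example later in this section, might look as follows (the channel name and property values are placeholders):

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: aws-ddb-sink-binding
spec:
  source:
    # Placeholder Knative channel that feeds the sink
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: aws-ddb-sink
    properties:
      region: "eu-west-1"
      table: "The Table"
```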
1.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
1.3.1.2. Procedure for using the cluster CLI
- Save the aws-ddb-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:
oc apply -f aws-ddb-sink-binding.yaml
1.3.1.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind channel:mychannel aws-ddb-sink -p "sink.region=eu-west-1" -p "sink.table=The Table"
This command creates the KameletBinding in the current namespace on the cluster.
1.3.2. Kafka Sink
You can use the aws-ddb-sink Kamelet as a Kafka sink by binding it to a Kafka topic.
aws-ddb-sink-binding.yaml
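A minimal sketch of this binding file, reconstructed from the kamel bind example later in this section, might look as follows (the topic name and property values are placeholders):

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: aws-ddb-sink-binding
spec:
  source:
    # Placeholder Strimzi-managed Kafka topic that feeds the sink
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: aws-ddb-sink
    properties:
      region: "eu-west-1"
      table: "The Table"
```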
1.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
1.3.2.2. Procedure for using the cluster CLI
- Save the aws-ddb-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:
oc apply -f aws-ddb-sink-binding.yaml
1.3.2.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic aws-ddb-sink -p "sink.region=eu-west-1" -p "sink.table=The Table"
This command creates the KameletBinding in the current namespace on the cluster.
1.4. Kamelet source file
Chapter 2. Avro Deserialize Action
Deserialize payload to Avro
2.1. Configuration Options
The following table summarizes the configuration options available for the avro-deserialize-action Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| schema * | Schema | The Avro schema to use during deserialization (as single-line, using JSON format) | string | | '{"type": "record", "namespace": "com.example", "name": "FullName", "fields": [{"name": "first", "type": "string"},{"name": "last", "type": "string"}]}' |
| validate | Validate | Indicates if the content must be validated against the schema | boolean | | |
Fields marked with an asterisk (*) are mandatory.
2.2. Dependencies
At runtime, the avro-deserialize-action Kamelet relies upon the presence of the following dependencies:
- github:openshift-integration.kamelet-catalog:camel-kamelets-utils:kamelet-catalog-1.6-SNAPSHOT
- camel:kamelet
- camel:core
- camel:jackson-avro
2.3. Usage
This section describes how you can use the avro-deserialize-action.
2.3.1. Knative Action
You can use the avro-deserialize-action Kamelet as an intermediate step in a Knative binding.
avro-deserialize-action-binding.yaml
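A minimal sketch of this binding file, reconstructed from the kamel bind example later in this section, might look as follows (the timer message, schemas, and channel name mirror that example and are placeholders):

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: avro-deserialize-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: timer-source
    properties:
      message: '{"first":"Ada","last":"Lovelace"}'
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1alpha1
        name: json-deserialize-action
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1alpha1
        name: avro-serialize-action
      properties:
        schema: '{"type": "record", "namespace": "com.example", "name": "FullName", "fields": [{"name": "first", "type": "string"},{"name": "last", "type": "string"}]}'
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1alpha1
        name: avro-deserialize-action
      properties:
        schema: '{"type": "record", "namespace": "com.example", "name": "FullName", "fields": [{"name": "first", "type": "string"},{"name": "last", "type": "string"}]}'
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1alpha1
        name: json-serialize-action
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
```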
2.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
2.3.1.2. Procedure for using the cluster CLI
- Save the avro-deserialize-action-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the action by using the following command:
oc apply -f avro-deserialize-action-binding.yaml
2.3.1.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:
kamel bind --name avro-deserialize-action-binding timer-source?message='{"first":"Ada","last":"Lovelace"}' --step json-deserialize-action --step avro-serialize-action -p step-1.schema='{"type": "record", "namespace": "com.example", "name": "FullName", "fields": [{"name": "first", "type": "string"},{"name": "last", "type": "string"}]}' --step avro-deserialize-action -p step-2.schema='{"type": "record", "namespace": "com.example", "name": "FullName", "fields": [{"name": "first", "type": "string"},{"name": "last", "type": "string"}]}' --step json-serialize-action channel:mychannel
This command creates the KameletBinding in the current namespace on the cluster.
2.3.2. Kafka Action
You can use the avro-deserialize-action Kamelet as an intermediate step in a Kafka binding.
avro-deserialize-action-binding.yaml
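A minimal sketch of this binding file, reconstructed from the kamel bind example later in this section, is the same as the Knative variant except for the sink, which targets a Kafka topic (the topic name is a placeholder):

```yaml
  # Only the sink section differs from the Knative binding above
  sink:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
```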
2.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
2.3.2.2. Procedure for using the cluster CLI
- Save the avro-deserialize-action-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the action by using the following command:
oc apply -f avro-deserialize-action-binding.yaml
2.3.2.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:
kamel bind --name avro-deserialize-action-binding timer-source?message='{"first":"Ada","last":"Lovelace"}' --step json-deserialize-action --step avro-serialize-action -p step-1.schema='{"type": "record", "namespace": "com.example", "name": "FullName", "fields": [{"name": "first", "type": "string"},{"name": "last", "type": "string"}]}' --step avro-deserialize-action -p step-2.schema='{"type": "record", "namespace": "com.example", "name": "FullName", "fields": [{"name": "first", "type": "string"},{"name": "last", "type": "string"}]}' --step json-serialize-action kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
This command creates the KameletBinding in the current namespace on the cluster.
2.4. Kamelet source file
Chapter 3. Avro Serialize Action
Serialize payload to Avro
3.1. Configuration Options
The following table summarizes the configuration options available for the avro-serialize-action Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| schema * | Schema | The Avro schema to use during serialization (as single-line, using JSON format) | string | | '{"type": "record", "namespace": "com.example", "name": "FullName", "fields": [{"name": "first", "type": "string"},{"name": "last", "type": "string"}]}' |
| validate | Validate | Indicates if the content must be validated against the schema | boolean | | |
Fields marked with an asterisk (*) are mandatory.
3.2. Dependencies
At runtime, the avro-serialize-action Kamelet relies upon the presence of the following dependencies:
- github:openshift-integration.kamelet-catalog:camel-kamelets-utils:kamelet-catalog-1.6-SNAPSHOT
- camel:kamelet
- camel:core
- camel:jackson-avro
3.3. Usage
This section describes how you can use the avro-serialize-action.
3.3.1. Knative Action
You can use the avro-serialize-action Kamelet as an intermediate step in a Knative binding.
avro-serialize-action-binding.yaml
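A minimal sketch of this binding file, reconstructed from the kamel bind example later in this section, might look as follows (the timer message, schema, and channel name mirror that example and are placeholders):

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: avro-serialize-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: timer-source
    properties:
      message: '{"first":"Ada","last":"Lovelace"}'
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1alpha1
        name: json-deserialize-action
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1alpha1
        name: avro-serialize-action
      properties:
        schema: '{"type": "record", "namespace": "com.example", "name": "FullName", "fields": [{"name": "first", "type": "string"},{"name": "last", "type": "string"}]}'
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
```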
3.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
3.3.1.2. Procedure for using the cluster CLI
- Save the avro-serialize-action-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the action by using the following command:
oc apply -f avro-serialize-action-binding.yaml
3.3.1.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:
kamel bind --name avro-serialize-action-binding timer-source?message='{"first":"Ada","last":"Lovelace"}' --step json-deserialize-action --step avro-serialize-action -p step-1.schema='{"type": "record", "namespace": "com.example", "name": "FullName", "fields": [{"name": "first", "type": "string"},{"name": "last", "type": "string"}]}' channel:mychannel
This command creates the KameletBinding in the current namespace on the cluster.
3.3.2. Kafka Action
You can use the avro-serialize-action Kamelet as an intermediate step in a Kafka binding.
avro-serialize-action-binding.yaml
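A minimal sketch of this binding file, reconstructed from the kamel bind example later in this section, is the same as the Knative variant except for the sink, which targets a Kafka topic (the topic name is a placeholder):

```yaml
  # Only the sink section differs from the Knative binding above
  sink:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
```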
3.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
3.3.2.2. Procedure for using the cluster CLI
- Save the avro-serialize-action-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the action by using the following command:
oc apply -f avro-serialize-action-binding.yaml
3.3.2.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:
kamel bind --name avro-serialize-action-binding timer-source?message='{"first":"Ada","last":"Lovelace"}' --step json-deserialize-action --step avro-serialize-action -p step-1.schema='{"type": "record", "namespace": "com.example", "name": "FullName", "fields": [{"name": "first", "type": "string"},{"name": "last", "type": "string"}]}' kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
This command creates the KameletBinding in the current namespace on the cluster.
3.4. Kamelet source file
Chapter 4. AWS Kinesis Sink
Send data to AWS Kinesis.
The Kamelet expects the following header:
- partition / ce-partition: sets the Kinesis partition key
If the header is not set, the exchange ID is used.
The Kamelet is also able to recognize the following header:
- sequence-number / ce-sequencenumber: sets the sequence number
This header is optional.
4.1. Configuration Options
The following table summarizes the configuration options available for the aws-kinesis-sink Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| accessKey * | Access Key | The access key obtained from AWS | string | | |
| region * | AWS Region | The AWS region to connect to | string | | "eu-west-1" |
| secretKey * | Secret Key | The secret key obtained from AWS | string | | |
| stream * | Stream Name | The Kinesis stream that you want to access (needs to be created in advance) | string | | |
Fields marked with an asterisk (*) are mandatory.
4.2. Dependencies
At runtime, the aws-kinesis-sink Kamelet relies upon the presence of the following dependencies:
- camel:aws2-kinesis
- camel:kamelet
4.3. Usage
This section describes how you can use the aws-kinesis-sink.
4.3.1. Knative Sink
You can use the aws-kinesis-sink Kamelet as a Knative sink by binding it to a Knative object.
aws-kinesis-sink-binding.yaml
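A minimal sketch of this binding file, reconstructed from the kamel bind example later in this section, might look as follows (the channel name and property values are placeholders):

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: aws-kinesis-sink-binding
spec:
  source:
    # Placeholder Knative channel that feeds the sink
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: aws-kinesis-sink
    properties:
      accessKey: "The Access Key"
      region: "eu-west-1"
      secretKey: "The Secret Key"
      stream: "The Stream Name"
```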
4.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
4.3.1.2. Procedure for using the cluster CLI
- Save the aws-kinesis-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:
oc apply -f aws-kinesis-sink-binding.yaml
4.3.1.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind channel:mychannel aws-kinesis-sink -p "sink.accessKey=The Access Key" -p "sink.region=eu-west-1" -p "sink.secretKey=The Secret Key" -p "sink.stream=The Stream Name"
This command creates the KameletBinding in the current namespace on the cluster.
4.3.2. Kafka Sink
You can use the aws-kinesis-sink Kamelet as a Kafka sink by binding it to a Kafka topic.
aws-kinesis-sink-binding.yaml
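A minimal sketch of this binding file, reconstructed from the kamel bind example later in this section, might look as follows (the topic name and property values are placeholders):

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: aws-kinesis-sink-binding
spec:
  source:
    # Placeholder Strimzi-managed Kafka topic that feeds the sink
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: aws-kinesis-sink
    properties:
      accessKey: "The Access Key"
      region: "eu-west-1"
      secretKey: "The Secret Key"
      stream: "The Stream Name"
```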
4.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
4.3.2.2. Procedure for using the cluster CLI
- Save the aws-kinesis-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:
oc apply -f aws-kinesis-sink-binding.yaml
4.3.2.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic aws-kinesis-sink -p "sink.accessKey=The Access Key" -p "sink.region=eu-west-1" -p "sink.secretKey=The Secret Key" -p "sink.stream=The Stream Name"
This command creates the KameletBinding in the current namespace on the cluster.
4.4. Kamelet source file
Chapter 5. AWS Kinesis Source
Receive data from AWS Kinesis.
5.1. Configuration Options
The following table summarizes the configuration options available for the aws-kinesis-source Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| accessKey * | Access Key | The access key obtained from AWS | string | | |
| region * | AWS Region | The AWS region to connect to | string | | "eu-west-1" |
| secretKey * | Secret Key | The secret key obtained from AWS | string | | |
| stream * | Stream Name | The Kinesis stream that you want to access (needs to be created in advance) | string | | |
Fields marked with an asterisk (*) are mandatory.
5.2. Dependencies
At runtime, the aws-kinesis-source Kamelet relies upon the presence of the following dependencies:
- camel:gson
- camel:kamelet
- camel:aws2-kinesis
5.3. Usage
This section describes how you can use the aws-kinesis-source.
5.3.1. Knative Source
You can use the aws-kinesis-source Kamelet as a Knative source by binding it to a Knative object.
aws-kinesis-source-binding.yaml
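A minimal sketch of this binding file, reconstructed from the kamel bind example later in this section, might look as follows (the channel name and property values are placeholders):

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: aws-kinesis-source-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: aws-kinesis-source
    properties:
      accessKey: "The Access Key"
      region: "eu-west-1"
      secretKey: "The Secret Key"
      stream: "The Stream Name"
  sink:
    # Placeholder Knative channel that receives the data
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
```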
5.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
5.3.1.2. Procedure for using the cluster CLI
- Save the aws-kinesis-source-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the source by using the following command:
oc apply -f aws-kinesis-source-binding.yaml
5.3.1.3. Procedure for using the Kamel CLI
Configure and run the source by using the following command:
kamel bind aws-kinesis-source -p "source.accessKey=The Access Key" -p "source.region=eu-west-1" -p "source.secretKey=The Secret Key" -p "source.stream=The Stream Name" channel:mychannel
This command creates the KameletBinding in the current namespace on the cluster.
5.3.2. Kafka Source
You can use the aws-kinesis-source Kamelet as a Kafka source by binding it to a Kafka topic.
aws-kinesis-source-binding.yaml
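A minimal sketch of this binding file, reconstructed from the kamel bind example later in this section, might look as follows (the topic name and property values are placeholders):

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: aws-kinesis-source-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: aws-kinesis-source
    properties:
      accessKey: "The Access Key"
      region: "eu-west-1"
      secretKey: "The Secret Key"
      stream: "The Stream Name"
  sink:
    # Placeholder Strimzi-managed Kafka topic that receives the data
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
```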
5.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
5.3.2.2. Procedure for using the cluster CLI
- Save the aws-kinesis-source-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the source by using the following command:
oc apply -f aws-kinesis-source-binding.yaml
5.3.2.3. Procedure for using the Kamel CLI
Configure and run the source by using the following command:
kamel bind aws-kinesis-source -p "source.accessKey=The Access Key" -p "source.region=eu-west-1" -p "source.secretKey=The Secret Key" -p "source.stream=The Stream Name" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
This command creates the KameletBinding in the current namespace on the cluster.
5.4. Kamelet source file
Chapter 6. AWS Lambda Sink
Send a payload to an AWS Lambda function
6.1. Configuration Options
The following table summarizes the configuration options available for the aws-lambda-sink Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| accessKey * | Access Key | The access key obtained from AWS | string | | |
| function * | Function Name | The Lambda Function name | string | | |
| region * | AWS Region | The AWS region to connect to | string | | "eu-west-1" |
| secretKey * | Secret Key | The secret key obtained from AWS | string | | |
Fields marked with an asterisk (*) are mandatory.
6.2. Dependencies
At runtime, the aws-lambda-sink Kamelet relies upon the presence of the following dependencies:
- camel:kamelet
- camel:aws2-lambda
6.3. Usage
This section describes how you can use the aws-lambda-sink.
6.3.1. Knative Sink
You can use the aws-lambda-sink Kamelet as a Knative sink by binding it to a Knative object.
aws-lambda-sink-binding.yaml
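A minimal sketch of this binding file, reconstructed from the kamel bind example later in this section, might look as follows (the channel name and property values are placeholders):

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: aws-lambda-sink-binding
spec:
  source:
    # Placeholder Knative channel that feeds the sink
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: aws-lambda-sink
    properties:
      accessKey: "The Access Key"
      function: "The Function Name"
      region: "eu-west-1"
      secretKey: "The Secret Key"
```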
6.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
6.3.1.2. Procedure for using the cluster CLI
- Save the aws-lambda-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:
oc apply -f aws-lambda-sink-binding.yaml
6.3.1.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind channel:mychannel aws-lambda-sink -p "sink.accessKey=The Access Key" -p "sink.function=The Function Name" -p "sink.region=eu-west-1" -p "sink.secretKey=The Secret Key"
This command creates the KameletBinding in the current namespace on the cluster.
6.3.2. Kafka Sink
You can use the aws-lambda-sink Kamelet as a Kafka sink by binding it to a Kafka topic.
aws-lambda-sink-binding.yaml
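A minimal sketch of this binding file, reconstructed from the kamel bind example later in this section, might look as follows (the topic name and property values are placeholders):

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: aws-lambda-sink-binding
spec:
  source:
    # Placeholder Strimzi-managed Kafka topic that feeds the sink
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: aws-lambda-sink
    properties:
      accessKey: "The Access Key"
      function: "The Function Name"
      region: "eu-west-1"
      secretKey: "The Secret Key"
```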
6.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
6.3.2.2. Procedure for using the cluster CLI
- Save the aws-lambda-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:
oc apply -f aws-lambda-sink-binding.yaml
6.3.2.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic aws-lambda-sink -p "sink.accessKey=The Access Key" -p "sink.function=The Function Name" -p "sink.region=eu-west-1" -p "sink.secretKey=The Secret Key"
This command creates the KameletBinding in the current namespace on the cluster.
6.4. Kamelet source file
Chapter 7. AWS Redshift Sink
Send data to an AWS Redshift Database.
This Kamelet expects a JSON object as the body. The mapping between the JSON fields and the query parameters is done by key, so for a query such as:
'INSERT INTO accounts (username,city) VALUES (:#username,:#city)'
the Kamelet needs to receive input such as:
'{ "username":"oscerd", "city":"Rome"}'
7.1. Configuration Options
The following table summarizes the configuration options available for the aws-redshift-sink Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| databaseName * | Database Name | The name of the database to connect to | string | | |
| password * | Password | The password to use for accessing a secured AWS Redshift Database | string | | |
| query * | Query | The query to execute against the AWS Redshift Database | string | | "INSERT INTO accounts (username,city) VALUES (:#username,:#city)" |
| serverName * | Server Name | The server name for the data source | string | | "localhost" |
| username * | Username | The username to use for accessing a secured AWS Redshift Database | string | | |
| serverPort | Server Port | The server port for the data source | string | | |
Fields marked with an asterisk (*) are mandatory.
7.2. Dependencies
At runtime, the aws-redshift-sink Kamelet relies upon the presence of the following dependencies:
- camel:jackson
- camel:kamelet
- camel:sql
- mvn:com.amazon.redshift:redshift-jdbc42:2.1.0.5
- mvn:org.apache.commons:commons-dbcp2:2.7.0
7.3. Usage
This section describes how you can use the aws-redshift-sink.
7.3.1. Knative Sink
You can use the aws-redshift-sink Kamelet as a Knative sink by binding it to a Knative object.
aws-redshift-sink-binding.yaml
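A minimal sketch of this binding file, reconstructed from the kamel bind example later in this section, might look as follows (the channel name and property values are placeholders):

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: aws-redshift-sink-binding
spec:
  source:
    # Placeholder Knative channel that feeds the sink
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: aws-redshift-sink
    properties:
      databaseName: "The Database Name"
      password: "The Password"
      query: "INSERT INTO accounts (username,city) VALUES (:#username,:#city)"
      serverName: "localhost"
      username: "The Username"
```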
7.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
7.3.1.2. Procedure for using the cluster CLI
- Save the aws-redshift-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:
oc apply -f aws-redshift-sink-binding.yaml
7.3.1.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind channel:mychannel aws-redshift-sink -p "sink.databaseName=The Database Name" -p "sink.password=The Password" -p "sink.query=INSERT INTO accounts (username,city) VALUES (:#username,:#city)" -p "sink.serverName=localhost" -p "sink.username=The Username"
This command creates the KameletBinding in the current namespace on the cluster.
7.3.2. Kafka Sink
You can use the aws-redshift-sink Kamelet as a Kafka sink by binding it to a Kafka topic.
aws-redshift-sink-binding.yaml
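A minimal sketch of this binding file, reconstructed from the kamel bind example later in this section, might look as follows (the topic name and property values are placeholders):

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: aws-redshift-sink-binding
spec:
  source:
    # Placeholder Strimzi-managed Kafka topic that feeds the sink
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: aws-redshift-sink
    properties:
      databaseName: "The Database Name"
      password: "The Password"
      query: "INSERT INTO accounts (username,city) VALUES (:#username,:#city)"
      serverName: "localhost"
      username: "The Username"
```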
7.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
7.3.2.2. Procedure for using the cluster CLI
- Save the aws-redshift-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:
oc apply -f aws-redshift-sink-binding.yaml
7.3.2.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic aws-redshift-sink -p "sink.databaseName=The Database Name" -p "sink.password=The Password" -p "sink.query=INSERT INTO accounts (username,city) VALUES (:#username,:#city)" -p "sink.serverName=localhost" -p "sink.username=The Username"
This command creates the KameletBinding in the current namespace on the cluster.
7.4. Kamelet source file
Chapter 8. AWS SNS Sink
Send a message to an AWS SNS topic.
8.1. Configuration Options
The following table summarizes the configuration options available for the aws-sns-sink Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| accessKey * | Access Key | The access key obtained from AWS | string | | |
| region * | AWS Region | The AWS region to connect to | string | | "eu-west-1" |
| secretKey * | Secret Key | The secret key obtained from AWS | string | | |
| topicNameOrArn * | Topic Name | The SNS topic name or ARN | string | | |
| autoCreateTopic | Autocreate Topic | Whether to automatically create the SNS topic | boolean | | |
Fields marked with an asterisk (*) are mandatory.
8.2. Dependencies
At runtime, the aws-sns-sink Kamelet relies upon the presence of the following dependencies:
- camel:kamelet
- camel:aws2-sns
8.3. Usage
This section describes how you can use the aws-sns-sink.
8.3.1. Knative Sink
You can use the aws-sns-sink Kamelet as a Knative sink by binding it to a Knative object.
aws-sns-sink-binding.yaml
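A minimal sketch of this binding file, reconstructed from the kamel bind example later in this section, might look as follows (the channel name and property values are placeholders):

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: aws-sns-sink-binding
spec:
  source:
    # Placeholder Knative channel that feeds the sink
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: aws-sns-sink
    properties:
      accessKey: "The Access Key"
      region: "eu-west-1"
      secretKey: "The Secret Key"
      topicNameOrArn: "The Topic Name"
```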
8.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
8.3.1.2. Procedure for using the cluster CLI
- Save the aws-sns-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:
oc apply -f aws-sns-sink-binding.yaml
8.3.1.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind channel:mychannel aws-sns-sink -p "sink.accessKey=The Access Key" -p "sink.region=eu-west-1" -p "sink.secretKey=The Secret Key" -p "sink.topicNameOrArn=The Topic Name"
This command creates the KameletBinding in the current namespace on the cluster.
8.3.2. Kafka Sink
You can use the aws-sns-sink Kamelet as a Kafka sink by binding it to a Kafka topic.
aws-sns-sink-binding.yaml
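A minimal sketch of this binding file, reconstructed from the kamel bind example later in this section, might look as follows (the topic name and property values are placeholders):

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: aws-sns-sink-binding
spec:
  source:
    # Placeholder Strimzi-managed Kafka topic that feeds the sink
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: aws-sns-sink
    properties:
      accessKey: "The Access Key"
      region: "eu-west-1"
      secretKey: "The Secret Key"
      topicNameOrArn: "The Topic Name"
```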
8.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
8.3.2.2. Procedure for using the cluster CLI
- Save the aws-sns-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:
oc apply -f aws-sns-sink-binding.yaml
8.3.2.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic aws-sns-sink -p "sink.accessKey=The Access Key" -p "sink.region=eu-west-1" -p "sink.secretKey=The Secret Key" -p "sink.topicNameOrArn=The Topic Name"
This command creates the KameletBinding in the current namespace on the cluster.
8.4. Kamelet source file
Chapter 9. AWS SQS Sink
Send a message to an AWS SQS queue.
9.1. Configuration Options
The following table summarizes the configuration options available for the aws-sqs-sink Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| accessKey * | Access Key | The access key obtained from AWS | string | | |
| queueNameOrArn * | Queue Name | The SQS queue name or ARN | string | | |
| region * | AWS Region | The AWS region to connect to | string | | "eu-west-1" |
| secretKey * | Secret Key | The secret key obtained from AWS | string | | |
| autoCreateQueue | Autocreate Queue | Whether to automatically create the SQS queue | boolean | | |
Fields marked with an asterisk (*) are mandatory.
9.2. Dependencies
At runtime, the aws-sqs-sink Kamelet relies upon the presence of the following dependencies:
- camel:aws2-sqs
- camel:core
- camel:kamelet
9.3. Usage
This section describes how you can use the aws-sqs-sink.
9.3.1. Knative Sink
You can use the aws-sqs-sink Kamelet as a Knative sink by binding it to a Knative object.
aws-sqs-sink-binding.yaml
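A minimal sketch of this binding file, reconstructed from the kamel bind example later in this section, might look as follows (the channel name and property values are placeholders):

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: aws-sqs-sink-binding
spec:
  source:
    # Placeholder Knative channel that feeds the sink
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: aws-sqs-sink
    properties:
      accessKey: "The Access Key"
      queueNameOrArn: "The Queue Name"
      region: "eu-west-1"
      secretKey: "The Secret Key"
```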
9.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
9.3.1.2. Procedure for using the cluster CLI
- Save the aws-sqs-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:
oc apply -f aws-sqs-sink-binding.yaml
9.3.1.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind channel:mychannel aws-sqs-sink -p "sink.accessKey=The Access Key" -p "sink.queueNameOrArn=The Queue Name" -p "sink.region=eu-west-1" -p "sink.secretKey=The Secret Key"
This command creates the KameletBinding in the current namespace on the cluster.
9.3.2. Kafka Sink
You can use the aws-sqs-sink Kamelet as a Kafka sink by binding it to a Kafka topic.
aws-sqs-sink-binding.yaml
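A minimal sketch of this binding file, reconstructed from the kamel bind example later in this section, might look as follows (the topic name and property values are placeholders):

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: aws-sqs-sink-binding
spec:
  source:
    # Placeholder Strimzi-managed Kafka topic that feeds the sink
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: aws-sqs-sink
    properties:
      accessKey: "The Access Key"
      queueNameOrArn: "The Queue Name"
      region: "eu-west-1"
      secretKey: "The Secret Key"
```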
9.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
9.3.2.2. Procedure for using the cluster CLI
- Save the aws-sqs-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:
oc apply -f aws-sqs-sink-binding.yaml
9.3.2.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic aws-sqs-sink -p "sink.accessKey=The Access Key" -p "sink.queueNameOrArn=The Queue Name" -p "sink.region=eu-west-1" -p "sink.secretKey=The Secret Key"
This command creates the KameletBinding in the current namespace on the cluster.
9.4. Kamelet source file
Chapter 10. AWS SQS Source
Receive data from AWS SQS.
10.1. Configuration Options
The following table summarizes the configuration options available for the aws-sqs-source Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| accessKey * | Access Key | The access key obtained from AWS | string | | |
| queueNameOrArn * | Queue Name | The SQS queue name or ARN | string | | |
| region * | AWS Region | The AWS region to connect to | string | | "eu-west-1" |
| secretKey * | Secret Key | The secret key obtained from AWS | string | | |
| autoCreateQueue | Autocreate Queue | Whether to automatically create the SQS queue | boolean | | |
| deleteAfterRead | Auto-delete Messages | Delete messages after consuming them | boolean | | |
Fields marked with an asterisk (*) are mandatory.
10.2. Dependencies
At runtime, the aws-sqs-source Kamelet relies upon the presence of the following dependencies:
- camel:aws2-sqs
- camel:core
- camel:kamelet
- camel:jackson
10.3. Usage
This section describes how you can use the aws-sqs-source.
10.3.1. Knative Source
You can use the aws-sqs-source Kamelet as a Knative source by binding it to a Knative object.
aws-sqs-source-binding.yaml
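A minimal sketch of this binding file, reconstructed from the kamel bind example later in this section, might look as follows (the channel name and property values are placeholders):

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: aws-sqs-source-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: aws-sqs-source
    properties:
      accessKey: "The Access Key"
      queueNameOrArn: "The Queue Name"
      region: "eu-west-1"
      secretKey: "The Secret Key"
  sink:
    # Placeholder Knative channel that receives the data
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
```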
10.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
10.3.1.2. Procedure for using the cluster CLI
- Save the aws-sqs-source-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the source by using the following command:
oc apply -f aws-sqs-source-binding.yaml
10.3.1.3. Procedure for using the Kamel CLI
Configure and run the source by using the following command:
kamel bind aws-sqs-source -p "source.accessKey=The Access Key" -p "source.queueNameOrArn=The Queue Name" -p "source.region=eu-west-1" -p "source.secretKey=The Secret Key" channel:mychannel
This command creates the KameletBinding in the current namespace on the cluster.
10.3.2. Kafka Source
You can use the aws-sqs-source Kamelet as a Kafka source by binding it to a Kafka topic.
aws-sqs-source-binding.yaml
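A minimal sketch of this binding file, reconstructed from the kamel bind example later in this section, might look as follows (the topic name and property values are placeholders):

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: aws-sqs-source-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: aws-sqs-source
    properties:
      accessKey: "The Access Key"
      queueNameOrArn: "The Queue Name"
      region: "eu-west-1"
      secretKey: "The Secret Key"
  sink:
    # Placeholder Strimzi-managed Kafka topic that receives the data
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
```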
10.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
10.3.2.2. Procedure for using the cluster CLI
- Save the aws-sqs-source-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the source by using the following command:
oc apply -f aws-sqs-source-binding.yaml
10.3.2.3. Procedure for using the Kamel CLI
Configure and run the source by using the following command:
kamel bind aws-sqs-source -p "source.accessKey=The Access Key" -p "source.queueNameOrArn=The Queue Name" -p "source.region=eu-west-1" -p "source.secretKey=The Secret Key" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
This command creates the KameletBinding in the current namespace on the cluster.
10.4. Kamelet source file
Chapter 11. AWS 2 Simple Queue Service FIFO sink
Send a message to an AWS SQS FIFO queue.
11.1. Configuration Options
The following table summarizes the configuration options available for the aws-sqs-fifo-sink Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| accessKey * | Access Key | The access key obtained from AWS | string | | |
| queueNameOrArn * | Queue Name | The SQS queue name or ARN | string | | |
| region * | AWS Region | The AWS region to connect to | string | | "eu-west-1" |
| secretKey * | Secret Key | The secret key obtained from AWS | string | | |
| autoCreateQueue | Autocreate Queue | Whether to automatically create the SQS queue | boolean | | |
| contentBasedDeduplication | Content-Based Deduplication | Use content-based deduplication (should be enabled in the SQS FIFO queue first) | boolean | | |
Fields marked with an asterisk (*) are mandatory.
11.2. Dependencies
At runtime, the aws-sqs-fifo-sink Kamelet relies upon the presence of the following dependencies:
- camel:aws2-sqs
- camel:core
- camel:kamelet
11.3. Usage
This section describes how you can use the aws-sqs-fifo-sink.
11.3.1. Knative Sink
You can use the aws-sqs-fifo-sink Kamelet as a Knative sink by binding it to a Knative object.
aws-sqs-fifo-sink-binding.yaml
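A minimal sketch of this binding file, reconstructed from the kamel bind example later in this section, might look as follows (the channel name and property values are placeholders):

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: aws-sqs-fifo-sink-binding
spec:
  source:
    # Placeholder Knative channel that feeds the sink
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: aws-sqs-fifo-sink
    properties:
      accessKey: "The Access Key"
      queueNameOrArn: "The Queue Name"
      region: "eu-west-1"
      secretKey: "The Secret Key"
```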
11.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
11.3.1.2. Procedure for using the cluster CLI
- Save the aws-sqs-fifo-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:
oc apply -f aws-sqs-fifo-sink-binding.yaml
11.3.1.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind channel:mychannel aws-sqs-fifo-sink -p "sink.accessKey=The Access Key" -p "sink.queueNameOrArn=The Queue Name" -p "sink.region=eu-west-1" -p "sink.secretKey=The Secret Key"
This command creates the KameletBinding in the current namespace on the cluster.
11.3.2. Kafka Sink
You can use the aws-sqs-fifo-sink Kamelet as a Kafka sink by binding it to a Kafka topic.
aws-sqs-fifo-sink-binding.yaml
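A minimal sketch of this binding file, reconstructed from the kamel bind example later in this section, might look as follows (the topic name and property values are placeholders):

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: aws-sqs-fifo-sink-binding
spec:
  source:
    # Placeholder Strimzi-managed Kafka topic that feeds the sink
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: aws-sqs-fifo-sink
    properties:
      accessKey: "The Access Key"
      queueNameOrArn: "The Queue Name"
      region: "eu-west-1"
      secretKey: "The Secret Key"
```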
11.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
11.3.2.2. Procedure for using the cluster CLI
- Save the aws-sqs-fifo-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:
oc apply -f aws-sqs-fifo-sink-binding.yaml
11.3.2.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic aws-sqs-fifo-sink -p "sink.accessKey=The Access Key" -p "sink.queueNameOrArn=The Queue Name" -p "sink.region=eu-west-1" -p "sink.secretKey=The Secret Key"
This command creates the KameletBinding in the current namespace on the cluster.
11.4. Kamelet source file
Chapter 12. AWS S3 Sink
Upload data to AWS S3.
The Kamelet expects the following header to be set:
- file / ce-file: the name of the file to upload
If the header is not set, the exchange ID is used as the file name.
12.1. Configuration Options
The following table summarizes the configuration options available for the aws-s3-sink Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| accessKey * | Access Key | The access key obtained from AWS. | string | | |
| bucketNameOrArn * | Bucket Name | The S3 Bucket name or ARN. | string | | |
| region * | AWS Region | The AWS region to connect to. | string | | "eu-west-1" |
| secretKey * | Secret Key | The secret key obtained from AWS. | string | | |
| autoCreateBucket | Autocreate Bucket | Whether to automatically create the S3 bucket. | boolean | | |
Fields marked with an asterisk (*) are mandatory.
12.2. Dependencies
At runtime, the aws-s3-sink Kamelet relies upon the presence of the following dependencies:
- camel:aws2-s3
- camel:kamelet
12.3. Usage
This section describes how you can use the aws-s3-sink.
12.3.1. Knative Sink
You can use the aws-s3-sink Kamelet as a Knative sink by binding it to a Knative object.
aws-s3-sink-binding.yaml
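A minimal sketch of this binding file, reconstructed from the kamel bind example later in this section, might look as follows (the channel name and property values are placeholders):

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: aws-s3-sink-binding
spec:
  source:
    # Placeholder Knative channel that feeds the sink
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: aws-s3-sink
    properties:
      accessKey: "The Access Key"
      bucketNameOrArn: "The Bucket Name"
      region: "eu-west-1"
      secretKey: "The Secret Key"
```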
12.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
12.3.1.2. Procedure for using the cluster CLI
- Save the aws-s3-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:
oc apply -f aws-s3-sink-binding.yaml
12.3.1.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind channel:mychannel aws-s3-sink -p "sink.accessKey=The Access Key" -p "sink.bucketNameOrArn=The Bucket Name" -p "sink.region=eu-west-1" -p "sink.secretKey=The Secret Key"
This command creates the KameletBinding in the current namespace on the cluster.
12.3.2. Kafka Sink
You can use the aws-s3-sink Kamelet as a Kafka sink by binding it to a Kafka topic.
aws-s3-sink-binding.yaml
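A minimal sketch of this binding file, reconstructed from the kamel bind example later in this section, might look as follows (the topic name and property values are placeholders):

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: aws-s3-sink-binding
spec:
  source:
    # Placeholder Strimzi-managed Kafka topic that feeds the sink
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: aws-s3-sink
    properties:
      accessKey: "The Access Key"
      bucketNameOrArn: "The Bucket Name"
      region: "eu-west-1"
      secretKey: "The Secret Key"
```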
12.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
12.3.2.2. Procedure for using the cluster CLI
- Save the aws-s3-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:
oc apply -f aws-s3-sink-binding.yaml
12.3.2.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic aws-s3-sink -p "sink.accessKey=The Access Key" -p "sink.bucketNameOrArn=The Bucket Name" -p "sink.region=eu-west-1" -p "sink.secretKey=The Secret Key"
This command creates the KameletBinding in the current namespace on the cluster.
12.4. Kamelet source file
Chapter 13. AWS S3 Source
Receive data from AWS S3.
13.1. Configuration Options
The following table summarizes the configuration options available for the aws-s3-source Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| accessKey * | Access Key | The access key obtained from AWS | string | | |
| bucketNameOrArn * | Bucket Name | The S3 Bucket name or ARN | string | | |
| region * | AWS Region | The AWS region to connect to | string | | "eu-west-1" |
| secretKey * | Secret Key | The secret key obtained from AWS | string | | |
| autoCreateBucket | Autocreate Bucket | Whether to automatically create the S3 bucket | boolean | | |
| deleteAfterRead | Auto-delete Objects | Delete objects after consuming them | boolean | | |
Fields marked with an asterisk (*) are mandatory.
13.2. Dependencies
At runtime, the aws-s3-source Kamelet relies upon the presence of the following dependencies:
- camel:kamelet
- camel:aws2-s3
13.3. Usage
This section describes how you can use the aws-s3-source.
13.3.1. Knative Source
You can use the aws-s3-source Kamelet as a Knative source by binding it to a Knative object.
aws-s3-source-binding.yaml
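A minimal sketch of this binding file, reconstructed from the kamel bind example later in this section, might look as follows (the channel name and property values are placeholders):

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: aws-s3-source-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: aws-s3-source
    properties:
      accessKey: "The Access Key"
      bucketNameOrArn: "The Bucket Name"
      region: "eu-west-1"
      secretKey: "The Secret Key"
  sink:
    # Placeholder Knative channel that receives the data
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
```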
13.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
13.3.1.2. Procedure for using the cluster CLI
- Save the aws-s3-source-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the source by using the following command:
oc apply -f aws-s3-source-binding.yaml
13.3.1.3. Procedure for using the Kamel CLI
Configure and run the source by using the following command:
kamel bind aws-s3-source -p "source.accessKey=The Access Key" -p "source.bucketNameOrArn=The Bucket Name" -p "source.region=eu-west-1" -p "source.secretKey=The Secret Key" channel:mychannel
This command creates the KameletBinding in the current namespace on the cluster.
13.3.2. Kafka Source
You can use the aws-s3-source Kamelet as a Kafka source by binding it to a Kafka topic.
aws-s3-source-binding.yaml
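A minimal sketch of this binding file, reconstructed from the kamel bind example later in this section, might look as follows (the topic name and property values are placeholders):

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: aws-s3-source-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: aws-s3-source
    properties:
      accessKey: "The Access Key"
      bucketNameOrArn: "The Bucket Name"
      region: "eu-west-1"
      secretKey: "The Secret Key"
  sink:
    # Placeholder Strimzi-managed Kafka topic that receives the data
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
```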
13.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
13.3.2.2. Procedure for using the cluster CLI
- Save the aws-s3-source-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the source by using the following command:
oc apply -f aws-s3-source-binding.yaml
13.3.2.3. Procedure for using the Kamel CLI
Configure and run the source by using the following command:
kamel bind aws-s3-source -p "source.accessKey=The Access Key" -p "source.bucketNameOrArn=The Bucket Name" -p "source.region=eu-west-1" -p "source.secretKey=The Secret Key" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
This command creates the KameletBinding in the current namespace on the cluster.
13.4. Kamelet source file
Chapter 14. AWS S3 Streaming Upload Sink
Upload data to AWS S3 in streaming upload mode.
14.1. Configuration Options
The following table summarizes the configuration options available for the aws-s3-streaming-upload-sink Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| accessKey * | Access Key | The access key obtained from AWS. | string | ||
| bucketNameOrArn * | Bucket Name | The S3 Bucket name or ARN. | string | ||
| keyName * | Key Name | The key name for saving an element in the bucket. In streaming upload mode, with the default configuration, this is the base name for the progressively created files. | string | ||
| region * | AWS Region | The AWS region to connect to. | string | | "eu-west-1" |
| secretKey * | Secret Key | The secret key obtained from AWS. | string | ||
| autoCreateBucket | Autocreate Bucket | Specifies whether to automatically create the S3 bucket bucketName. | boolean | ||
| batchMessageNumber | Batch Message Number | The number of messages composing a batch in streaming upload mode. | int | ||
| batchSize | Batch Size | The batch size (in bytes) in streaming upload mode. | int | ||
| namingStrategy | Naming Strategy | The naming strategy to use in streaming upload mode. The value can be one of: progressive, random. | string | ||
| restartingPolicy | Restarting Policy | The restarting policy to use in streaming upload mode. The value can be one of: override, lastPart. | string | ||
| streamingUploadMode | Streaming Upload Mode | Specifies whether to enable streaming upload mode. | boolean | ||
Fields marked with an asterisk (*) are mandatory.
14.2. Dependencies
At runtime, the aws-s3-streaming-upload-sink Kamelet relies upon the presence of the following dependencies:
- camel:aws2-s3
- camel:kamelet
14.3. Usage
This section describes how you can use the aws-s3-streaming-upload-sink.
14.3.1. Knative Sink
You can use the aws-s3-streaming-upload-sink Kamelet as a Knative sink by binding it to a Knative object.
aws-s3-streaming-upload-sink-binding.yaml
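A minimal sketch of what aws-s3-streaming-upload-sink-binding.yaml might contain for the Knative case, assuming the standard KameletBinding layout (property values are placeholders):

apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: aws-s3-streaming-upload-sink-binding
spec:
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: aws-s3-streaming-upload-sink
    properties:
      accessKey: "The Access Key" # placeholder
      bucketNameOrArn: "The Bucket Name" # placeholder
      keyName: "The Key Name" # placeholder
      region: "eu-west-1"
      secretKey: "The Secret Key" # placeholder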
14.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
14.3.1.2. Procedure for using the cluster CLI
- Save the aws-s3-streaming-upload-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:
oc apply -f aws-s3-streaming-upload-sink-binding.yaml
14.3.1.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind channel:mychannel aws-s3-streaming-upload-sink -p "sink.accessKey=The Access Key" -p "sink.bucketNameOrArn=The Bucket Name" -p "sink.keyName=The Key Name" -p "sink.region=eu-west-1" -p "sink.secretKey=The Secret Key"
This command creates the KameletBinding in the current namespace on the cluster.
14.3.2. Kafka Sink
You can use the aws-s3-streaming-upload-sink Kamelet as a Kafka sink by binding it to a Kafka topic.
aws-s3-streaming-upload-sink-binding.yaml
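For the Kafka case, the binding might keep the same layout as the Knative sketch above, with the Knative channel in the source section replaced by the Kafka topic (same assumptions):

  source:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic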
14.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
14.3.2.2. Procedure for using the cluster CLI
- Save the aws-s3-streaming-upload-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:
oc apply -f aws-s3-streaming-upload-sink-binding.yaml
14.3.2.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic aws-s3-streaming-upload-sink -p "sink.accessKey=The Access Key" -p "sink.bucketNameOrArn=The Bucket Name" -p "sink.keyName=The Key Name" -p "sink.region=eu-west-1" -p "sink.secretKey=The Secret Key"
This command creates the KameletBinding in the current namespace on the cluster.
14.4. Kamelet source file
Chapter 15. Cassandra Sink
Send data to a Cassandra Cluster.
This Kamelet expects a JSON array as the message body. The content of the JSON array is used as input for the CQL prepared statement set in the query parameter.
15.1. Configuration Options
The following table summarizes the configuration options available for the cassandra-sink Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| connectionHost * | Connection Host | The hostname(s) of the Cassandra server(s). Separate multiple hosts with a comma. | string | | "localhost" |
| connectionPort * | Connection Port | The port number of the Cassandra server(s). | string | | 9042 |
| keyspace * | Keyspace | The keyspace to use. | string | | "customers" |
| password * | Password | The password to use for accessing a secured Cassandra Cluster | string | ||
| query * | Query | The query to execute against the Cassandra cluster table | string | ||
| username * | Username | The username to use for accessing a secured Cassandra Cluster | string | ||
| consistencyLevel | Consistency Level | The consistency level to use. The value can be one of: ANY, ONE, TWO, THREE, QUORUM, ALL, LOCAL_QUORUM, EACH_QUORUM, SERIAL, LOCAL_SERIAL, LOCAL_ONE. | string | ||
Fields marked with an asterisk (*) are mandatory.
15.2. Dependencies
At runtime, the cassandra-sink Kamelet relies upon the presence of the following dependencies:
- camel:jackson
- camel:kamelet
- camel:cassandraql
15.3. Usage
This section describes how you can use the cassandra-sink.
15.3.1. Knative Sink
You can use the cassandra-sink Kamelet as a Knative sink by binding it to a Knative object.
cassandra-sink-binding.yaml
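A minimal sketch of what cassandra-sink-binding.yaml might contain for the Knative case, assuming the standard KameletBinding layout (property values are placeholders):

apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: cassandra-sink-binding
spec:
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: cassandra-sink
    properties:
      connectionHost: "localhost"
      connectionPort: 9042
      keyspace: "customers"
      password: "The Password" # placeholder
      query: "The Query" # placeholder
      username: "The Username" # placeholder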
15.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
15.3.1.2. Procedure for using the cluster CLI
- Save the cassandra-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:
oc apply -f cassandra-sink-binding.yaml
15.3.1.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind channel:mychannel cassandra-sink -p "sink.connectionHost=localhost" -p sink.connectionPort=9042 -p "sink.keyspace=customers" -p "sink.password=The Password" -p "sink.query=The Query" -p "sink.username=The Username"
This command creates the KameletBinding in the current namespace on the cluster.
15.3.2. Kafka Sink
You can use the cassandra-sink Kamelet as a Kafka sink by binding it to a Kafka topic.
cassandra-sink-binding.yaml
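For the Kafka case, the binding might keep the same layout as the Knative sketch above, with the Knative channel in the source section replaced by the Kafka topic (same assumptions):

  source:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic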
15.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
15.3.2.2. Procedure for using the cluster CLI
- Save the cassandra-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:
oc apply -f cassandra-sink-binding.yaml
15.3.2.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic cassandra-sink -p "sink.connectionHost=localhost" -p sink.connectionPort=9042 -p "sink.keyspace=customers" -p "sink.password=The Password" -p "sink.query=The Query" -p "sink.username=The Username"
This command creates the KameletBinding in the current namespace on the cluster.
15.4. Kamelet source file
Chapter 16. Cassandra Source
Query a Cassandra cluster table.
16.1. Configuration Options
The following table summarizes the configuration options available for the cassandra-source Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| connectionHost * | Connection Host | The hostname(s) of the Cassandra server(s). Separate multiple hosts with a comma. | string | | "localhost" |
| connectionPort * | Connection Port | The port number of the Cassandra server(s). | string | | 9042 |
| keyspace * | Keyspace | The keyspace to use. | string | | "customers" |
| password * | Password | The password to use for accessing a secured Cassandra Cluster | string | ||
| query * | Query | The query to execute against the Cassandra cluster table | string | ||
| username * | Username | The username to use for accessing a secured Cassandra Cluster | string | ||
| consistencyLevel | Consistency Level | The consistency level to use. The value can be one of: ANY, ONE, TWO, THREE, QUORUM, ALL, LOCAL_QUORUM, EACH_QUORUM, SERIAL, LOCAL_SERIAL, LOCAL_ONE. | string | ||
| resultStrategy | Result Strategy | The strategy to convert the result set of the query. Possible values are ALL, ONE, LIMIT_10, LIMIT_100… | string | ||
Fields marked with an asterisk (*) are mandatory.
16.2. Dependencies
At runtime, the cassandra-source Kamelet relies upon the presence of the following dependencies:
- camel:jackson
- camel:kamelet
- camel:cassandraql
16.3. Usage
This section describes how you can use the cassandra-source.
16.3.1. Knative Source
You can use the cassandra-source Kamelet as a Knative source by binding it to a Knative object.
cassandra-source-binding.yaml
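A minimal sketch of what cassandra-source-binding.yaml might contain for the Knative case, assuming the standard KameletBinding layout (property values are placeholders):

apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: cassandra-source-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: cassandra-source
    properties:
      connectionHost: "localhost"
      connectionPort: 9042
      keyspace: "customers"
      password: "The Password" # placeholder
      query: "The Query" # placeholder
      username: "The Username" # placeholder
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel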
16.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
16.3.1.2. Procedure for using the cluster CLI
- Save the cassandra-source-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the source by using the following command:
oc apply -f cassandra-source-binding.yaml
16.3.1.3. Procedure for using the Kamel CLI
Configure and run the source by using the following command:
kamel bind cassandra-source -p "source.connectionHost=localhost" -p source.connectionPort=9042 -p "source.keyspace=customers" -p "source.password=The Password" -p "source.query=The Query" -p "source.username=The Username" channel:mychannel
This command creates the KameletBinding in the current namespace on the cluster.
16.3.2. Kafka Source
You can use the cassandra-source Kamelet as a Kafka source by binding it to a Kafka topic.
cassandra-source-binding.yaml
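For the Kafka case, the binding might keep the same layout as the Knative sketch above, with the Knative channel in the sink section replaced by the Kafka topic (same assumptions):

  sink:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic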
16.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
16.3.2.2. Procedure for using the cluster CLI
- Save the cassandra-source-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the source by using the following command:
oc apply -f cassandra-source-binding.yaml
16.3.2.3. Procedure for using the Kamel CLI
Configure and run the source by using the following command:
kamel bind cassandra-source -p "source.connectionHost=localhost" -p source.connectionPort=9042 -p "source.keyspace=customers" -p "source.password=The Password" -p "source.query=The Query" -p "source.username=The Username" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
This command creates the KameletBinding in the current namespace on the cluster.
16.4. Kamelet source file
Chapter 17. Extract Field Action
Extract a field from the body.
17.1. Configuration Options
The following table summarizes the configuration options available for the extract-field-action Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| field * | Field | The name of the field to extract from the body | string | ||
Fields marked with an asterisk (*) are mandatory.
17.2. Dependencies
At runtime, the extract-field-action Kamelet relies upon the presence of the following dependencies:
- github:openshift-integration.kamelet-catalog:camel-kamelets-utils:kamelet-catalog-1.6-SNAPSHOT
- camel:kamelet
- camel:core
- camel:jackson
17.3. Usage
This section describes how you can use the extract-field-action.
17.3.1. Knative Action
You can use the extract-field-action Kamelet as an intermediate step in a Knative binding.
extract-field-action-binding.yaml
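A minimal sketch of what extract-field-action-binding.yaml might contain for the Knative case, assuming the standard KameletBinding layout with a steps section (property values are placeholders):

apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: extract-field-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: timer-source
    properties:
      message: "Hello"
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1alpha1
        name: extract-field-action
      properties:
        field: "The Field" # placeholder
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel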
17.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
17.3.1.2. Procedure for using the cluster CLI
- Save the extract-field-action-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the action by using the following command:
oc apply -f extract-field-action-binding.yaml
17.3.1.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:
kamel bind timer-source?message=Hello --step extract-field-action -p "step-0.field=The Field" channel:mychannel
This command creates the KameletBinding in the current namespace on the cluster.
17.3.2. Kafka Action
You can use the extract-field-action Kamelet as an intermediate step in a Kafka binding.
extract-field-action-binding.yaml
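For the Kafka case, the binding might keep the same layout as the Knative sketch above, with the Knative channel in the sink section replaced by the Kafka topic (same assumptions):

  sink:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic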
17.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
17.3.2.2. Procedure for using the cluster CLI
- Save the extract-field-action-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the action by using the following command:
oc apply -f extract-field-action-binding.yaml
17.3.2.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:
kamel bind timer-source?message=Hello --step extract-field-action -p "step-0.field=The Field" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
This command creates the KameletBinding in the current namespace on the cluster.
17.4. Kamelet source file
Chapter 18. FTP Sink
Send data to an FTP Server.
The Kamelet expects the following headers to be set:
- file / ce-file: the name of the file to upload
If the header is not set, the exchange ID is used as the file name.
18.1. Configuration Options
The following table summarizes the configuration options available for the ftp-sink Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| connectionHost * | Connection Host | Hostname of the FTP server | string | ||
| connectionPort * | Connection Port | Port of the FTP server | string | ||
| directoryName * | Directory Name | The starting directory | string | ||
| password * | Password | The password to access the FTP server | string | ||
| username * | Username | The username to access the FTP server | string | ||
| fileExist | File Existence | How to behave if the file already exists. The value can be one of: Override, Append, Fail, Ignore. | string | ||
| passiveMode | Passive Mode | Specifies whether to use passive mode connections. | boolean | ||
Fields marked with an asterisk (*) are mandatory.
18.2. Dependencies
At runtime, the ftp-sink Kamelet relies upon the presence of the following dependencies:
- camel:ftp
- camel:core
- camel:kamelet
18.3. Usage
This section describes how you can use the ftp-sink.
18.3.1. Knative Sink
You can use the ftp-sink Kamelet as a Knative sink by binding it to a Knative object.
ftp-sink-binding.yaml
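A minimal sketch of what ftp-sink-binding.yaml might contain for the Knative case, assuming the standard KameletBinding layout (property values are placeholders):

apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: ftp-sink-binding
spec:
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: ftp-sink
    properties:
      connectionHost: "The Connection Host" # placeholder
      directoryName: "The Directory Name" # placeholder
      password: "The Password" # placeholder
      username: "The Username" # placeholder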
18.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
18.3.1.2. Procedure for using the cluster CLI
- Save the ftp-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:
oc apply -f ftp-sink-binding.yaml
18.3.1.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind channel:mychannel ftp-sink -p "sink.connectionHost=The Connection Host" -p "sink.directoryName=The Directory Name" -p "sink.password=The Password" -p "sink.username=The Username"
This command creates the KameletBinding in the current namespace on the cluster.
18.3.2. Kafka Sink
You can use the ftp-sink Kamelet as a Kafka sink by binding it to a Kafka topic.
ftp-sink-binding.yaml
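For the Kafka case, the binding might keep the same layout as the Knative sketch above, with the Knative channel in the source section replaced by the Kafka topic (same assumptions):

  source:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic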
18.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
18.3.2.2. Procedure for using the cluster CLI
- Save the ftp-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:
oc apply -f ftp-sink-binding.yaml
18.3.2.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic ftp-sink -p "sink.connectionHost=The Connection Host" -p "sink.directoryName=The Directory Name" -p "sink.password=The Password" -p "sink.username=The Username"
This command creates the KameletBinding in the current namespace on the cluster.
18.4. Kamelet source file
Chapter 19. FTP Source
Receive data from an FTP Server.
19.1. Configuration Options
The following table summarizes the configuration options available for the ftp-source Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| connectionHost * | Connection Host | Hostname of the FTP server | string | ||
| connectionPort * | Connection Port | Port of the FTP server | string | ||
| directoryName * | Directory Name | The starting directory | string | ||
| password * | Password | The password to access the FTP server | string | ||
| username * | Username | The username to access the FTP server | string | ||
| idempotent | Idempotency | Specifies whether to skip files that have already been processed. | boolean | ||
| passiveMode | Passive Mode | Specifies whether to use passive mode connections. | boolean | ||
| recursive | Recursive | If the path is a directory, look for files in all sub-directories as well. | boolean | ||
Fields marked with an asterisk (*) are mandatory.
19.2. Dependencies
At runtime, the ftp-source Kamelet relies upon the presence of the following dependencies:
- camel:ftp
- camel:core
- camel:kamelet
19.3. Usage
This section describes how you can use the ftp-source.
19.3.1. Knative Source
You can use the ftp-source Kamelet as a Knative source by binding it to a Knative object.
ftp-source-binding.yaml
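A minimal sketch of what ftp-source-binding.yaml might contain for the Knative case, assuming the standard KameletBinding layout (property values are placeholders):

apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: ftp-source-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: ftp-source
    properties:
      connectionHost: "The Connection Host" # placeholder
      directoryName: "The Directory Name" # placeholder
      password: "The Password" # placeholder
      username: "The Username" # placeholder
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel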
19.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
19.3.1.2. Procedure for using the cluster CLI
- Save the ftp-source-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the source by using the following command:
oc apply -f ftp-source-binding.yaml
19.3.1.3. Procedure for using the Kamel CLI
Configure and run the source by using the following command:
kamel bind ftp-source -p "source.connectionHost=The Connection Host" -p "source.directoryName=The Directory Name" -p "source.password=The Password" -p "source.username=The Username" channel:mychannel
This command creates the KameletBinding in the current namespace on the cluster.
19.3.2. Kafka Source
You can use the ftp-source Kamelet as a Kafka source by binding it to a Kafka topic.
ftp-source-binding.yaml
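For the Kafka case, the binding might keep the same layout as the Knative sketch above, with the Knative channel in the sink section replaced by the Kafka topic (same assumptions):

  sink:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic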
19.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
19.3.2.2. Procedure for using the cluster CLI
- Save the ftp-source-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the source by using the following command:
oc apply -f ftp-source-binding.yaml
19.3.2.3. Procedure for using the Kamel CLI
Configure and run the source by using the following command:
kamel bind ftp-source -p "source.connectionHost=The Connection Host" -p "source.directoryName=The Directory Name" -p "source.password=The Password" -p "source.username=The Username" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
This command creates the KameletBinding in the current namespace on the cluster.
19.4. Kamelet source file
Chapter 20. Has Header Filter Action
Filter based on the presence of one header.
20.1. Configuration Options
The following table summarizes the configuration options available for the has-header-filter-action Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| name * | Header Name | The header name to evaluate. The header name must be passed by the source Kamelet. For Knative only, if you are using Cloud Events, you must include the CloudEvent (ce-) prefix in the header name. | string | ||
Fields marked with an asterisk (*) are mandatory.
20.2. Dependencies
At runtime, the has-header-filter-action Kamelet relies upon the presence of the following dependencies:
- camel:core
- camel:kamelet
20.3. Usage
This section describes how you can use the has-header-filter-action.
20.3.1. Knative Action
You can use the has-header-filter-action Kamelet as an intermediate step in a Knative binding.
has-header-filter-action-binding.yaml
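A minimal sketch of what has-header-filter-action-binding.yaml might contain for the Knative case, assuming the standard KameletBinding layout with a steps section (values mirror the example command below):

apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: has-header-filter-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: timer-source
    properties:
      message: "Hello"
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1alpha1
        name: insert-header-action
      properties:
        name: "my-header"
        value: "my-value"
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1alpha1
        name: has-header-filter-action
      properties:
        name: "my-header"
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel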
20.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
20.3.1.2. Procedure for using the cluster CLI
- Save the has-header-filter-action-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the action by using the following command:
oc apply -f has-header-filter-action-binding.yaml
20.3.1.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:
kamel bind --name has-header-filter-action-binding timer-source?message="Hello" --step insert-header-action -p "step-0.name=my-header" -p "step-0.value=my-value" --step has-header-filter-action -p "step-1.name=my-header" channel:mychannel
This command creates the KameletBinding in the current namespace on the cluster.
20.3.2. Kafka Action
You can use the has-header-filter-action Kamelet as an intermediate step in a Kafka binding.
has-header-filter-action-binding.yaml
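For the Kafka case, the binding might keep the same layout as the Knative sketch above, with the Knative channel in the sink section replaced by the Kafka topic (same assumptions):

  sink:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic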
20.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
20.3.2.2. Procedure for using the cluster CLI
- Save the has-header-filter-action-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the action by using the following command:
oc apply -f has-header-filter-action-binding.yaml
20.3.2.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:
kamel bind --name has-header-filter-action-binding timer-source?message="Hello" --step insert-header-action -p "step-0.name=my-header" -p "step-0.value=my-value" --step has-header-filter-action -p "step-1.name=my-header" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
This command creates the KameletBinding in the current namespace on the cluster.
20.4. Kamelet source file
Chapter 21. Hoist Field Action
Wrap data in a single field.
21.1. Configuration Options
The following table summarizes the configuration options available for the hoist-field-action Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| field * | Field | The name of the field that will contain the event | string | ||
Fields marked with an asterisk (*) are mandatory.
21.2. Dependencies
At runtime, the hoist-field-action Kamelet relies upon the presence of the following dependencies:
- github:openshift-integration.kamelet-catalog:camel-kamelets-utils:kamelet-catalog-1.6-SNAPSHOT
- camel:core
- camel:jackson
- camel:kamelet
21.3. Usage
This section describes how you can use the hoist-field-action.
21.3.1. Knative Action
You can use the hoist-field-action Kamelet as an intermediate step in a Knative binding.
hoist-field-action-binding.yaml
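A minimal sketch of what hoist-field-action-binding.yaml might contain for the Knative case, assuming the standard KameletBinding layout with a steps section (property values are placeholders):

apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: hoist-field-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: timer-source
    properties:
      message: "Hello"
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1alpha1
        name: hoist-field-action
      properties:
        field: "The Field" # placeholder
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel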
21.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
21.3.1.2. Procedure for using the cluster CLI
- Save the hoist-field-action-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the action by using the following command:
oc apply -f hoist-field-action-binding.yaml
21.3.1.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:
kamel bind timer-source?message=Hello --step hoist-field-action -p "step-0.field=The Field" channel:mychannel
This command creates the KameletBinding in the current namespace on the cluster.
21.3.2. Kafka Action
You can use the hoist-field-action Kamelet as an intermediate step in a Kafka binding.
hoist-field-action-binding.yaml
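For the Kafka case, the binding might keep the same layout as the Knative sketch above, with the Knative channel in the sink section replaced by the Kafka topic (same assumptions):

  sink:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic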
21.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
21.3.2.2. Procedure for using the cluster CLI
- Save the hoist-field-action-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the action by using the following command:
oc apply -f hoist-field-action-binding.yaml
21.3.2.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:
kamel bind timer-source?message=Hello --step hoist-field-action -p "step-0.field=The Field" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
This command creates the KameletBinding in the current namespace on the cluster.
21.4. Kamelet source file
Chapter 22. HTTP Sink
Forwards an event to an HTTP endpoint.
22.1. Configuration Options
The following table summarizes the configuration options available for the http-sink Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| url * | URL | The URL to send data to | string | | "https://my-service/path" |
| method | Method | The HTTP method to use | string | ||
Fields marked with an asterisk (*) are mandatory.
22.2. Dependencies
At runtime, the http-sink Kamelet relies upon the presence of the following dependencies:
- camel:http
- camel:kamelet
- camel:core
22.3. Usage
This section describes how you can use the http-sink.
22.3.1. Knative Sink
You can use the http-sink Kamelet as a Knative sink by binding it to a Knative object.
http-sink-binding.yaml
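A minimal sketch of what http-sink-binding.yaml might contain for the Knative case, assuming the standard KameletBinding layout (the URL is a placeholder):

apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: http-sink-binding
spec:
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: http-sink
    properties:
      url: "https://my-service/path" # placeholder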
22.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
22.3.1.2. Procedure for using the cluster CLI
- Save the http-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:
oc apply -f http-sink-binding.yaml
22.3.1.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind channel:mychannel http-sink -p "sink.url=https://my-service/path"
This command creates the KameletBinding in the current namespace on the cluster.
22.3.2. Kafka Sink
You can use the http-sink Kamelet as a Kafka sink by binding it to a Kafka topic.
http-sink-binding.yaml
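For the Kafka case, the binding might keep the same layout as the Knative sketch above, with the Knative channel in the source section replaced by the Kafka topic (same assumptions):

  source:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic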
22.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
22.3.2.2. Procedure for using the cluster CLI
- Save the http-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:
oc apply -f http-sink-binding.yaml
22.3.2.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic http-sink -p "sink.url=https://my-service/path"
This command creates the KameletBinding in the current namespace on the cluster.
22.4. Kamelet source file
Chapter 23. Insert Field Action
Adds a custom field with a constant value to the message in transit.
23.1. Configuration Options
The following table summarizes the configuration options available for the insert-field-action Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| field * | Field | The name of the field to be added | string | ||
| value * | Value | The value of the field | string | ||
Fields marked with an asterisk (*) are mandatory.
23.2. Dependencies
At runtime, the insert-field-action Kamelet relies upon the presence of the following dependencies:
- github:openshift-integration.kamelet-catalog:camel-kamelets-utils:kamelet-catalog-1.6-SNAPSHOT
- camel:core
- camel:jackson
- camel:kamelet
23.3. Usage
This section describes how you can use the insert-field-action.
23.3.1. Knative Action
You can use the insert-field-action Kamelet as an intermediate step in a Knative binding.
insert-field-action-binding.yaml
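A minimal sketch of what insert-field-action-binding.yaml might contain for the Knative case, assuming the standard KameletBinding layout with a steps section (values mirror the example command below; field and value are placeholders):

apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: insert-field-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: timer-source
    properties:
      message: '{"foo":"John"}'
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1alpha1
        name: json-deserialize-action
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1alpha1
        name: insert-field-action
      properties:
        field: "The Field" # placeholder
        value: "The Value" # placeholder
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel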
23.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
23.3.1.2. Procedure for using the cluster CLI
- Save the insert-field-action-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the action by using the following command:
oc apply -f insert-field-action-binding.yaml
23.3.1.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:
kamel bind --name insert-field-action-binding timer-source?message='{"foo":"John"}' --step json-deserialize-action --step insert-field-action -p step-1.field='The Field' -p step-1.value='The Value' channel:mychannel
This command creates the KameletBinding in the current namespace on the cluster.
23.3.2. Kafka Action
You can use the insert-field-action Kamelet as an intermediate step in a Kafka binding.
insert-field-action-binding.yaml
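For the Kafka case, the binding might keep the same layout as the Knative sketch above, with the Knative channel in the sink section replaced by the Kafka topic (same assumptions):

  sink:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic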
23.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
23.3.2.2. Procedure for using the cluster CLI
- Save the insert-field-action-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the action by using the following command:
oc apply -f insert-field-action-binding.yaml
23.3.2.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:
kamel bind --name insert-field-action-binding timer-source?message='{"foo":"John"}' --step json-deserialize-action --step insert-field-action -p step-1.field='The Field' -p step-1.value='The Value' kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
This command creates the KameletBinding in the current namespace on the cluster.
23.4. Kamelet source file
Chapter 24. Insert Header Action
Adds a header with a constant value to the message in transit.
24.1. Configuration Options
The following table summarizes the configuration options available for the insert-header-action Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| name * | Name | The name of the header to be added. For Knative only, the name of the header requires a CloudEvent (ce-) prefix. | string | ||
| value * | Value | The value of the header | string | ||
Fields marked with an asterisk (*) are mandatory.
24.2. Dependencies
At runtime, the insert-header-action Kamelet relies upon the presence of the following dependencies:
- camel:core
- camel:kamelet
24.3. Usage
This section describes how you can use the insert-header-action.
24.3.1. Knative Action
You can use the insert-header-action Kamelet as an intermediate step in a Knative binding.
insert-header-action-binding.yaml
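A minimal sketch of what insert-header-action-binding.yaml might contain for the Knative case, assuming the standard KameletBinding layout with a steps section (name and value are placeholders):

apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: insert-header-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: timer-source
    properties:
      message: "Hello"
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1alpha1
        name: insert-header-action
      properties:
        name: "The Name" # placeholder
        value: "The Value" # placeholder
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel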
24.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
24.3.1.2. Procedure for using the cluster CLI
- Save the insert-header-action-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the action by using the following command:
oc apply -f insert-header-action-binding.yaml
24.3.1.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:
kamel bind timer-source?message=Hello --step insert-header-action -p "step-0.name=The Name" -p "step-0.value=The Value" channel:mychannel
This command creates the KameletBinding in the current namespace on the cluster.
24.3.2. Kafka Action
You can use the insert-header-action Kamelet as an intermediate step in a Kafka binding.
insert-header-action-binding.yaml
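For the Kafka case, the binding might keep the same layout as the Knative sketch above, with the Knative channel in the sink section replaced by the Kafka topic (same assumptions):

  sink:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic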
24.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
24.3.2.2. Procedure for using the cluster CLI
- Save the insert-header-action-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the action by using the following command:
oc apply -f insert-header-action-binding.yaml
24.3.2.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:
kamel bind timer-source?message=Hello --step insert-header-action -p "step-0.name=The Name" -p "step-0.value=The Value" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
This command creates the KameletBinding in the current namespace on the cluster.
24.4. Kamelet source file
Chapter 25. Is Tombstone Filter Action
Filter based on the presence or absence of a message body.
25.1. Configuration Options
The is-tombstone-filter-action Kamelet does not specify any configuration option.
25.2. Dependencies
At runtime, the is-tombstone-filter-action Kamelet relies upon the presence of the following dependencies:
- camel:core
- camel:kamelet
25.3. Usage
This section describes how you can use the is-tombstone-filter-action.
25.3.1. Knative Action
You can use the is-tombstone-filter-action Kamelet as an intermediate step in a Knative binding.
is-tombstone-filter-action-binding.yaml
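A minimal sketch of what is-tombstone-filter-action-binding.yaml might contain for the Knative case, assuming the standard KameletBinding layout with a steps section (this action takes no properties):

apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: is-tombstone-filter-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: timer-source
    properties:
      message: "Hello"
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1alpha1
        name: is-tombstone-filter-action
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel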
25.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
25.3.1.2. Procedure for using the cluster CLI
- Save the is-tombstone-filter-action-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the action by using the following command:
oc apply -f is-tombstone-filter-action-binding.yaml
25.3.1.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:
kamel bind timer-source?message=Hello --step is-tombstone-filter-action channel:mychannel
This command creates the KameletBinding in the current namespace on the cluster.
25.3.2. Kafka Action
You can use the is-tombstone-filter-action Kamelet as an intermediate step in a Kafka binding.
is-tombstone-filter-action-binding.yaml
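For the Kafka case, the binding might keep the same layout as the Knative sketch above, with the Knative channel in the sink section replaced by the Kafka topic (same assumptions):

  sink:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic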
25.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
25.3.2.2. Procedure for using the cluster CLI
- Save the is-tombstone-filter-action-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the action by using the following command:
oc apply -f is-tombstone-filter-action-binding.yaml
25.3.2.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:
kamel bind timer-source?message=Hello --step is-tombstone-filter-action kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
This command creates the KameletBinding in the current namespace on the cluster.
25.4. Kamelet source file
Chapter 26. Jira Source
Receive notifications about new issues from Jira.
26.1. Configuration Options
The following table summarizes the configuration options available for the jira-source Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| jiraUrl * | Jira URL | The URL of your instance of Jira | string | | "http://my_jira.com:8081" |
| password * | Password | The password to access Jira | string | ||
| username * | Username | The username to access Jira | string | ||
| jql | JQL | A query to filter issues | string | ||
Fields marked with an asterisk (*) are mandatory.
26.2. Dependencies
At runtime, the jira-source Kamelet relies upon the presence of the following dependencies:
- camel:jackson
- camel:kamelet
- camel:jira
26.3. Usage
This section describes how you can use the jira-source.
26.3.1. Knative Source
You can use the jira-source Kamelet as a Knative source by binding it to a Knative object.
jira-source-binding.yaml
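A minimal sketch of what jira-source-binding.yaml might contain for the Knative case, assuming the standard KameletBinding layout (property values are placeholders):

apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: jira-source-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: jira-source
    properties:
      jiraUrl: "http://my_jira.com:8081" # placeholder
      password: "The Password" # placeholder
      username: "The Username" # placeholder
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel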
26.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
26.3.1.2. Procedure for using the cluster CLI
- Save the jira-source-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the source by using the following command:
oc apply -f jira-source-binding.yaml
26.3.1.3. Procedure for using the Kamel CLI
Configure and run the source by using the following command:
kamel bind jira-source -p "source.jiraUrl=http://my_jira.com:8081" -p "source.password=The Password" -p "source.username=The Username" channel:mychannel
This command creates the KameletBinding in the current namespace on the cluster.
26.3.2. Kafka Source
You can use the jira-source Kamelet as a Kafka source by binding it to a Kafka topic.
jira-source-binding.yaml
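For the Kafka case, the binding might keep the same layout as the Knative sketch above, with the Knative channel in the sink section replaced by the Kafka topic (same assumptions):

  sink:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic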
26.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
26.3.2.2. Procedure for using the cluster CLI
- Save the jira-source-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the source by using the following command:
oc apply -f jira-source-binding.yaml
26.3.2.3. Procedure for using the Kamel CLI
Configure and run the source by using the following command:
kamel bind jira-source -p "source.jiraUrl=http://my_jira.com:8081" -p "source.password=The Password" -p "source.username=The Username" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
This command creates the KameletBinding in the current namespace on the cluster.
26.4. Kamelet source file
Chapter 27. JMS - AMQP 1.0 Kamelet Sink
A Kamelet that can produce events to any AMQP 1.0 compliant message broker by using the Apache Qpid JMS client.
27.1. Configuration Options
The following table summarizes the configuration options available for the jms-amqp-10-sink Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| destinationName * | Destination Name | The JMS destination name | string | ||
| remoteURI * | Broker URL | The JMS URL | string | | "amqp://my-host:31616" |
| destinationType | Destination Type | The JMS destination type (that is, queue or topic) | string | ||
Fields marked with an asterisk (*) are mandatory.
27.2. Dependencies
At runtime, the jms-amqp-10-sink Kamelet relies upon the presence of the following dependencies:
- camel:jms
- camel:kamelet
- mvn:org.apache.qpid:qpid-jms-client:0.55.0
27.3. Usage
This section describes how you can use the jms-amqp-10-sink.
27.3.1. Knative Sink
You can use the jms-amqp-10-sink Kamelet as a Knative sink by binding it to a Knative object.
jms-amqp-10-sink-binding.yaml
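A minimal sketch of what jms-amqp-10-sink-binding.yaml might contain for the Knative case, assuming the standard KameletBinding layout (property values are placeholders):

apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: jms-amqp-10-sink-binding
spec:
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: jms-amqp-10-sink
    properties:
      destinationName: "The Destination Name" # placeholder
      remoteURI: "amqp://my-host:31616" # placeholder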
27.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
27.3.1.2. Procedure for using the cluster CLI
- Save the jms-amqp-10-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:
oc apply -f jms-amqp-10-sink-binding.yaml
27.3.1.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind channel:mychannel jms-amqp-10-sink -p "sink.destinationName=The Destination Name" -p "sink.remoteURI=amqp://my-host:31616"
This command creates the KameletBinding in the current namespace on the cluster.
27.3.2. Kafka Sink
You can use the jms-amqp-10-sink Kamelet as a Kafka sink by binding it to a Kafka topic.
jms-amqp-10-sink-binding.yaml
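For the Kafka case, the binding might keep the same layout as the Knative sketch above, with the Knative channel in the source section replaced by the Kafka topic (same assumptions):

  source:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic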
27.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
27.3.2.2. Procedure for using the cluster CLI
- Save the jms-amqp-10-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:
oc apply -f jms-amqp-10-sink-binding.yaml
27.3.2.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic jms-amqp-10-sink -p "sink.destinationName=The Destination Name" -p "sink.remoteURI=amqp://my-host:31616"
This command creates the KameletBinding in the current namespace on the cluster.
27.4. Kamelet source file
Chapter 28. JMS - AMQP 1.0 Kamelet Source
A Kamelet that can consume events from any AMQP 1.0 compliant message broker by using the Apache Qpid JMS client.
28.1. Configuration Options
The following table summarizes the configuration options available for the jms-amqp-10-source Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| destinationName * | Destination Name | The JMS destination name | string | ||
| remoteURI * | Broker URL | The JMS URL | string | | "amqp://my-host:31616" |
| destinationType | Destination Type | The JMS destination type (that is, queue or topic) | string | ||
Fields marked with an asterisk (*) are mandatory.
28.2. Dependencies Copy linkLink copied to clipboard!
At runtime, the jms-amqp-10-source Kamelet relies upon the presence of the following dependencies:
- camel:jms
- camel:kamelet
- mvn:org.apache.qpid:qpid-jms-client:0.55.0
28.3. Usage
This section describes how you can use the jms-amqp-10-source.
28.3.1. Knative Source
You can use the jms-amqp-10-source Kamelet as a Knative source by binding it to a Knative object.
jms-amqp-10-source-binding.yaml
28.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
28.3.1.2. Procedure for using the cluster CLI
- Save the jms-amqp-10-source-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the source by using the following command:
oc apply -f jms-amqp-10-source-binding.yaml
28.3.1.3. Procedure for using the Kamel CLI
Configure and run the source by using the following command:
kamel bind jms-amqp-10-source -p "source.destinationName=The Destination Name" -p "source.remoteURI=amqp://my-host:31616" channel:mychannel
This command creates the KameletBinding in the current namespace on the cluster.
28.3.2. Kafka Source
You can use the jms-amqp-10-source Kamelet as a Kafka source by binding it to a Kafka topic.
jms-amqp-10-source-binding.yaml
28.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
28.3.2.2. Procedure for using the cluster CLI
- Save the jms-amqp-10-source-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the source by using the following command:
oc apply -f jms-amqp-10-source-binding.yaml
28.3.2.3. Procedure for using the Kamel CLI
Configure and run the source by using the following command:
kamel bind jms-amqp-10-source -p "source.destinationName=The Destination Name" -p "source.remoteURI=amqp://my-host:31616" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
This command creates the KameletBinding in the current namespace on the cluster.
28.4. Kamelet source file
Chapter 29. JMS - IBM MQ Kamelet Sink
A Kamelet that can produce events to an IBM MQ message queue using JMS.
29.1. Configuration Options
The following table summarizes the configuration options available for the jms-ibm-mq-sink Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| channel * | IBM MQ Channel | Name of the IBM MQ Channel | string | ||
| destinationName * | Destination Name | The destination name | string | ||
| password * | Password | Password to authenticate to IBM MQ server | string | ||
| queueManager * | IBM MQ Queue Manager | Name of the IBM MQ Queue Manager | string | ||
| serverName * | IBM MQ Server name | IBM MQ Server name or address | string | ||
| serverPort * | IBM MQ Server Port | IBM MQ Server port | integer | 1414 | |
| username * | Username | Username to authenticate to IBM MQ server | string | ||
| clientId | IBM MQ Client ID | Name of the IBM MQ Client ID | string | ||
| destinationType | Destination Type | The JMS destination type (queue or topic) | string | "queue" | |
Fields marked with an asterisk (*) are mandatory.
29.2. Dependencies
At runtime, the jms-ibm-mq-sink Kamelet relies upon the presence of the following dependencies:
- camel:jms
- camel:kamelet
- mvn:com.ibm.mq:com.ibm.mq.allclient:9.2.5.0
29.3. Usage
This section describes how you can use the jms-ibm-mq-sink.
29.3.1. Knative Sink
You can use the jms-ibm-mq-sink Kamelet as a Knative sink by binding it to a Knative object.
jms-ibm-mq-sink-binding.yaml
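A minimal sketch of this binding file, assuming the standard KameletBinding layout and reusing the illustrative connection values from the Kamel CLI example below:
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: jms-ibm-mq-sink-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: timer-source
    properties:
      message: Hello IBM MQ!
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: jms-ibm-mq-sink
    properties:
      # same illustrative values used in the CLI example; replace with your own
      serverName: 10.103.41.245
      serverPort: 1414
      destinationType: queue
      destinationName: DEV.QUEUE.1
      queueManager: QM1
      channel: DEV.APP.SVRCONN
      username: app
      password: passw0rd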
29.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
29.3.1.2. Procedure for using the cluster CLI
- Save the jms-ibm-mq-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:
oc apply -f jms-ibm-mq-sink-binding.yaml
29.3.1.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind --name jms-ibm-mq-sink-binding timer-source?message="Hello IBM MQ!" 'jms-ibm-mq-sink?serverName=10.103.41.245&serverPort=1414&destinationType=queue&destinationName=DEV.QUEUE.1&queueManager=QM1&channel=DEV.APP.SVRCONN&username=app&password=passw0rd'
This command creates the KameletBinding in the current namespace on the cluster.
29.3.2. Kafka Sink
You can use the jms-ibm-mq-sink Kamelet as a Kafka sink by binding it to a Kafka topic.
jms-ibm-mq-sink-binding.yaml
29.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
29.3.2.2. Procedure for using the cluster CLI
- Save the jms-ibm-mq-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:
oc apply -f jms-ibm-mq-sink-binding.yaml
29.3.2.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind --name jms-ibm-mq-sink-binding timer-source?message="Hello IBM MQ!" 'jms-ibm-mq-sink?serverName=10.103.41.245&serverPort=1414&destinationType=queue&destinationName=DEV.QUEUE.1&queueManager=QM1&channel=DEV.APP.SVRCONN&username=app&password=passw0rd'
This command creates the KameletBinding in the current namespace on the cluster.
29.4. Kamelet source file
Chapter 30. JMS - IBM MQ Kamelet Source
A Kamelet that can read events from an IBM MQ message queue using JMS.
30.1. Configuration Options
The following table summarizes the configuration options available for the jms-ibm-mq-source Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| channel * | IBM MQ Channel | Name of the IBM MQ Channel | string | ||
| destinationName * | Destination Name | The destination name | string | ||
| password * | Password | Password to authenticate to IBM MQ server | string | ||
| queueManager * | IBM MQ Queue Manager | Name of the IBM MQ Queue Manager | string | ||
| serverName * | IBM MQ Server name | IBM MQ Server name or address | string | ||
| serverPort * | IBM MQ Server Port | IBM MQ Server port | integer | 1414 | |
| username * | Username | Username to authenticate to IBM MQ server | string | ||
| clientId | IBM MQ Client ID | Name of the IBM MQ Client ID | string | ||
| destinationType | Destination Type | The JMS destination type (queue or topic) | string | "queue" | |
Fields marked with an asterisk (*) are mandatory.
30.2. Dependencies
At runtime, the jms-ibm-mq-source Kamelet relies upon the presence of the following dependencies:
- camel:jms
- camel:kamelet
- mvn:com.ibm.mq:com.ibm.mq.allclient:9.2.5.0
30.3. Usage
This section describes how you can use the jms-ibm-mq-source.
30.3.1. Knative Source
You can use the jms-ibm-mq-source Kamelet as a Knative source by binding it to a Knative object.
jms-ibm-mq-source-binding.yaml
30.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
30.3.1.2. Procedure for using the cluster CLI
- Save the jms-ibm-mq-source-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the source by using the following command:
oc apply -f jms-ibm-mq-source-binding.yaml
30.3.1.3. Procedure for using the Kamel CLI
Configure and run the source by using the following command:
kamel bind --name jms-ibm-mq-source-binding 'jms-ibm-mq-source?serverName=10.103.41.245&serverPort=1414&destinationType=queue&destinationName=DEV.QUEUE.1&queueManager=QM1&channel=DEV.APP.SVRCONN&username=app&password=passw0rd' channel:mychannel
This command creates the KameletBinding in the current namespace on the cluster.
30.3.2. Kafka Source
You can use the jms-ibm-mq-source Kamelet as a Kafka source by binding it to a Kafka topic.
jms-ibm-mq-source-binding.yaml
30.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
30.3.2.2. Procedure for using the cluster CLI
- Save the jms-ibm-mq-source-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the source by using the following command:
oc apply -f jms-ibm-mq-source-binding.yaml
30.3.2.3. Procedure for using the Kamel CLI
Configure and run the source by using the following command:
kamel bind --name jms-ibm-mq-source-binding 'jms-ibm-mq-source?serverName=10.103.41.245&serverPort=1414&destinationType=queue&destinationName=DEV.QUEUE.1&queueManager=QM1&channel=DEV.APP.SVRCONN&username=app&password=passw0rd' kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
This command creates the KameletBinding in the current namespace on the cluster.
30.4. Kamelet source file
Chapter 31. Json Deserialize Action
Deserialize payload to JSON.
31.1. Configuration Options
The json-deserialize-action Kamelet does not specify any configuration option.
31.2. Dependencies
At runtime, the json-deserialize-action Kamelet relies upon the presence of the following dependencies:
- camel:kamelet
- camel:core
- camel:jackson
31.3. Usage
This section describes how you can use the json-deserialize-action.
31.3.1. Knative Action
You can use the json-deserialize-action Kamelet as an intermediate step in a Knative binding.
json-deserialize-action-binding.yaml
31.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
31.3.1.2. Procedure for using the cluster CLI
- Save the json-deserialize-action-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the action by using the following command:
oc apply -f json-deserialize-action-binding.yaml
31.3.1.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:
kamel bind timer-source?message=Hello --step json-deserialize-action channel:mychannel
This command creates the KameletBinding in the current namespace on the cluster.
31.3.2. Kafka Action
You can use the json-deserialize-action Kamelet as an intermediate step in a Kafka binding.
json-deserialize-action-binding.yaml
31.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
31.3.2.2. Procedure for using the cluster CLI
- Save the json-deserialize-action-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the action by using the following command:
oc apply -f json-deserialize-action-binding.yaml
31.3.2.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:
kamel bind timer-source?message=Hello --step json-deserialize-action kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
This command creates the KameletBinding in the current namespace on the cluster.
31.4. Kamelet source file
Chapter 32. Json Serialize Action
Serialize payload to JSON.
32.1. Configuration Options
The json-serialize-action Kamelet does not specify any configuration option.
32.2. Dependencies
At runtime, the json-serialize-action Kamelet relies upon the presence of the following dependencies:
- camel:kamelet
- camel:core
- camel:jackson
32.3. Usage
This section describes how you can use the json-serialize-action.
32.3.1. Knative Action
You can use the json-serialize-action Kamelet as an intermediate step in a Knative binding.
json-serialize-action-binding.yaml
32.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
32.3.1.2. Procedure for using the cluster CLI
- Save the json-serialize-action-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the action by using the following command:
oc apply -f json-serialize-action-binding.yaml
32.3.1.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:
kamel bind timer-source?message=Hello --step json-serialize-action channel:mychannel
This command creates the KameletBinding in the current namespace on the cluster.
32.3.2. Kafka Action
You can use the json-serialize-action Kamelet as an intermediate step in a Kafka binding.
json-serialize-action-binding.yaml
32.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
32.3.2.2. Procedure for using the cluster CLI
- Save the json-serialize-action-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the action by using the following command:
oc apply -f json-serialize-action-binding.yaml
32.3.2.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:
kamel bind timer-source?message=Hello --step json-serialize-action kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
This command creates the KameletBinding in the current namespace on the cluster.
32.4. Kamelet source file
Chapter 33. Kafka Sink
Send data to Kafka topics.
The Kamelet can use the following message headers when they are set:
- key/ce-key: used as the message key
- partition-key/ce-partitionkey: used as the message partition key
Both headers are optional; an example of setting the key header is shown below.
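For example, to set the key header on each message before it reaches the sink, you can add an intermediate step to the binding. A sketch, assuming the insert-header-action Kamelet from this catalog and a hypothetical key value:
spec:
  # ...source omitted...
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1alpha1
        name: insert-header-action
      properties:
        name: key            # interpreted by kafka-sink as the message key
        value: my-record-key # hypothetical value
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: kafka-sink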
33.1. Configuration Options
The following table summarizes the configuration options available for the kafka-sink Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| bootstrapServers * | Brokers | Comma separated list of Kafka Broker URLs | string | ||
| password * | Password | Password to authenticate to Kafka | string | ||
| topic * | Topic Names | Comma separated list of Kafka topic names | string | ||
| user * | Username | Username to authenticate to Kafka | string | ||
| saslMechanism | SASL Mechanism | The Simple Authentication and Security Layer (SASL) Mechanism used. | string | "PLAIN" | |
| securityProtocol | Security Protocol | Protocol used to communicate with brokers. SASL_PLAINTEXT, PLAINTEXT, SASL_SSL and SSL are supported | string | "SASL_SSL" | |
Fields marked with an asterisk (*) are mandatory.
33.2. Dependencies
At runtime, the kafka-sink Kamelet relies upon the presence of the following dependencies:
- camel:kafka
- camel:kamelet
33.3. Usage
This section describes how you can use the kafka-sink.
33.3.1. Knative Sink
You can use the kafka-sink Kamelet as a Knative sink by binding it to a Knative object.
kafka-sink-binding.yaml
33.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
33.3.1.2. Procedure for using the cluster CLI
- Save the kafka-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:
oc apply -f kafka-sink-binding.yaml
33.3.1.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind channel:mychannel kafka-sink -p "sink.bootstrapServers=The Brokers" -p "sink.password=The Password" -p "sink.topic=The Topic Names" -p "sink.user=The Username"
This command creates the KameletBinding in the current namespace on the cluster.
33.3.2. Kafka Sink
You can use the kafka-sink Kamelet as a Kafka sink by binding it to a Kafka topic.
kafka-sink-binding.yaml
33.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
33.3.2.2. Procedure for using the cluster CLI
- Save the kafka-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:
oc apply -f kafka-sink-binding.yaml
33.3.2.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic kafka-sink -p "sink.bootstrapServers=The Brokers" -p "sink.password=The Password" -p "sink.topic=The Topic Names" -p "sink.user=The Username"
This command creates the KameletBinding in the current namespace on the cluster.
33.4. Kamelet source file
Chapter 34. Kafka Source
Receive data from Kafka topics.
34.1. Configuration Options
The following table summarizes the configuration options available for the kafka-source Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| topic * | Topic Names | Comma separated list of Kafka topic names | string | ||
| bootstrapServers * | Brokers | Comma separated list of Kafka Broker URLs | string | ||
| securityProtocol | Security Protocol | Protocol used to communicate with brokers. SASL_PLAINTEXT, PLAINTEXT, SASL_SSL and SSL are supported | string | "SASL_SSL" | |
| saslMechanism | SASL Mechanism | The Simple Authentication and Security Layer (SASL) Mechanism used. | string | "PLAIN" | |
| user * | Username | Username to authenticate to Kafka | string | ||
| password * | Password | Password to authenticate to Kafka | string | ||
| autoCommitEnable | Auto Commit Enable | If true, periodically commit to ZooKeeper the offset of messages already fetched by the consumer. | boolean | true | |
| allowManualCommit | Allow Manual Commit | Whether to allow doing manual commits | boolean | false | |
| autoOffsetReset | Auto Offset Reset | What to do when there is no initial offset. There are 3 enums and the value can be one of latest, earliest, none | string | "latest" | |
| pollOnError | Poll On Error Behavior | What to do if Kafka throws an exception while polling for new messages. There are 5 enums and the value can be one of DISCARD, ERROR_HANDLER, RECONNECT, RETRY, STOP | string | "ERROR_HANDLER" | |
| deserializeHeaders | Automatically Deserialize Headers | When enabled the Kamelet source will deserialize all message headers to String representation. The default is false. | boolean | true | |
Fields marked with an asterisk (*) are mandatory.
34.2. Dependencies
At runtime, the kafka-source Kamelet relies upon the presence of the following dependencies:
- camel:kafka
- camel:kamelet
- camel:core
34.3. Usage
This section describes how you can use the kafka-source.
34.3.1. Knative Source
You can use the kafka-source Kamelet as a Knative source by binding it to a Knative object.
kafka-source-binding.yaml
34.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
34.3.1.2. Procedure for using the cluster CLI
- Save the kafka-source-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the source by using the following command:
oc apply -f kafka-source-binding.yaml
34.3.1.3. Procedure for using the Kamel CLI
Configure and run the source by using the following command:
kamel bind kafka-source -p "source.bootstrapServers=The Brokers" -p "source.password=The Password" -p "source.topic=The Topic Names" -p "source.user=The Username" channel:mychannel
This command creates the KameletBinding in the current namespace on the cluster.
34.3.2. Kafka Source
You can use the kafka-source Kamelet as a Kafka source by binding it to a Kafka topic.
kafka-source-binding.yaml
34.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
34.3.2.2. Procedure for using the cluster CLI
- Save the kafka-source-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the source by using the following command:
oc apply -f kafka-source-binding.yaml
34.3.2.3. Procedure for using the Kamel CLI
Configure and run the source by using the following command:
kamel bind kafka-source -p "source.bootstrapServers=The Brokers" -p "source.password=The Password" -p "source.topic=The Topic Names" -p "source.user=The Username" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
This command creates the KameletBinding in the current namespace on the cluster.
34.4. Kamelet source file
Chapter 35. Kafka Topic Name Matches Filter Action
Filter events based on whether the Kafka topic name matches a regular expression.
35.1. Configuration Options
The following table summarizes the configuration options available for the topic-name-matches-filter-action Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| regex * | Regex | The Regex to Evaluate against the Kafka topic name | string | ||
Fields marked with an asterisk (*) are mandatory.
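A sketch of how the filter appears as an intermediate step in a binding, with a hypothetical pattern that passes only topics whose names start with my-topic-:
steps:
  - ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: topic-name-matches-filter-action
    properties:
      regex: my-topic-.*   # hypothetical pattern; non-matching events are filtered out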
35.2. Dependencies
At runtime, the topic-name-matches-filter-action Kamelet relies upon the presence of the following dependencies:
- camel:core
- camel:kamelet
35.3. Usage
This section describes how you can use the topic-name-matches-filter-action.
35.3.1. Kafka Action
You can use the topic-name-matches-filter-action Kamelet as an intermediate step in a Kafka binding.
topic-name-matches-filter-action-binding.yaml
35.3.1.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
35.3.1.2. Procedure for using the cluster CLI
- Save the topic-name-matches-filter-action-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the action by using the following command:
oc apply -f topic-name-matches-filter-action-binding.yaml
35.3.1.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:
kamel bind timer-source?message=Hello --step topic-name-matches-filter-action -p "step-0.regex=The Regex" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
This command creates the KameletBinding in the current namespace on the cluster.
35.4. Kamelet source file
Chapter 36. Log Sink
A sink that logs all data that it receives, useful for debugging purposes.
36.1. Configuration Options
The following table summarizes the configuration options available for the log-sink Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| showHeaders | Show Headers | Show the headers received | boolean | false | |
| showStreams | Show Streams | Show the stream bodies (they may not be available in following steps) | boolean | false | |
Fields marked with an asterisk (*) are mandatory.
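To print headers as well as bodies while debugging, enable showHeaders on the sink. A minimal sketch of the sink side of a binding:
sink:
  ref:
    kind: Kamelet
    apiVersion: camel.apache.org/v1alpha1
    name: log-sink
  properties:
    showHeaders: true   # also log message headers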
36.2. Dependencies
At runtime, the log-sink Kamelet relies upon the presence of the following dependencies:
- camel:kamelet
- camel:log
36.3. Usage
This section describes how you can use the log-sink.
36.3.1. Knative Sink
You can use the log-sink Kamelet as a Knative sink by binding it to a Knative object.
log-sink-binding.yaml
36.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
36.3.1.2. Procedure for using the cluster CLI
- Save the log-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:
oc apply -f log-sink-binding.yaml
36.3.1.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind channel:mychannel log-sink
This command creates the KameletBinding in the current namespace on the cluster.
36.3.2. Kafka Sink
You can use the log-sink Kamelet as a Kafka sink by binding it to a Kafka topic.
log-sink-binding.yaml
36.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
36.3.2.2. Procedure for using the cluster CLI
- Save the log-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:
oc apply -f log-sink-binding.yaml
36.3.2.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic log-sink
This command creates the KameletBinding in the current namespace on the cluster.
36.4. Kamelet source file
Chapter 37. MariaDB Sink
Send data to a MariaDB Database.
This Kamelet expects a JSON payload as the message body. The mapping between the JSON fields and parameters is done by key, so if you have the following query:
'INSERT INTO accounts (username,city) VALUES (:#username,:#city)'
The Kamelet needs to receive as input something like:
'{ "username":"oscerd", "city":"Rome"}'
37.1. Configuration Options
The following table summarizes the configuration options available for the mariadb-sink Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| databaseName * | Database Name | The Database Name we are pointing to | string | ||
| password * | Password | The password to use for accessing a secured MariaDB Database | string | ||
| query * | Query | The Query to execute against the MariaDB Database | string | | "INSERT INTO accounts (username,city) VALUES (:#username,:#city)" |
| serverName * | Server Name | Server Name for the data source | string | | "localhost" |
| username * | Username | The username to use for accessing a secured MariaDB Database | string | ||
| serverPort | Server Port | Server Port for the data source | string | 3306 | |
Fields marked with an asterisk (*) are mandatory.
37.2. Dependencies
At runtime, the mariadb-sink Kamelet relies upon the presence of the following dependencies:
- camel:jackson
- camel:kamelet
- camel:sql
- mvn:org.apache.commons:commons-dbcp2:2.7.0.redhat-00001
- mvn:org.mariadb.jdbc:mariadb-java-client
37.3. Usage
This section describes how you can use the mariadb-sink.
37.3.1. Knative Sink
You can use the mariadb-sink Kamelet as a Knative sink by binding it to a Knative object.
mariadb-sink-binding.yaml
37.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
37.3.1.2. Procedure for using the cluster CLI
- Save the mariadb-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:
oc apply -f mariadb-sink-binding.yaml
37.3.1.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind channel:mychannel mariadb-sink -p "sink.databaseName=The Database Name" -p "sink.password=The Password" -p "sink.query=INSERT INTO accounts (username,city) VALUES (:#username,:#city)" -p "sink.serverName=localhost" -p "sink.username=The Username"
This command creates the KameletBinding in the current namespace on the cluster.
37.3.2. Kafka Sink
You can use the mariadb-sink Kamelet as a Kafka sink by binding it to a Kafka topic.
mariadb-sink-binding.yaml
37.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
37.3.2.2. Procedure for using the cluster CLI
- Save the mariadb-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:
oc apply -f mariadb-sink-binding.yaml
37.3.2.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic mariadb-sink -p "sink.databaseName=The Database Name" -p "sink.password=The Password" -p "sink.query=INSERT INTO accounts (username,city) VALUES (:#username,:#city)" -p "sink.serverName=localhost" -p "sink.username=The Username"
This command creates the KameletBinding in the current namespace on the cluster.
37.4. Kamelet source file
Chapter 38. Mask Fields Action
Mask fields with a constant value in the message in transit.
38.1. Configuration Options
The following table summarizes the configuration options available for the mask-field-action Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| fields * | Fields | Comma separated list of fields to mask | string | ||
| replacement * | Replacement | Replacement for the fields to be masked | string | ||
Fields marked with an asterisk (*) are mandatory.
38.2. Dependencies
At runtime, the mask-field-action Kamelet relies upon the presence of the following dependencies:
- github:openshift-integration.kamelet-catalog:camel-kamelets-utils:kamelet-catalog-1.6-SNAPSHOT
- camel:jackson
- camel:kamelet
- camel:core
38.3. Usage
This section describes how you can use the mask-field-action.
38.3.1. Knative Action
You can use the mask-field-action Kamelet as an intermediate step in a Knative binding.
mask-field-action-binding.yaml
38.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
38.3.1.2. Procedure for using the cluster CLI
- Save the mask-field-action-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the action by using the following command:
oc apply -f mask-field-action-binding.yaml
38.3.1.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:
kamel bind timer-source?message=Hello --step mask-field-action -p "step-0.fields=The Fields" -p "step-0.replacement=The Replacement" channel:mychannel
This command creates the KameletBinding in the current namespace on the cluster.
38.3.2. Kafka Action
You can use the mask-field-action Kamelet as an intermediate step in a Kafka binding.
mask-field-action-binding.yaml
38.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
38.3.2.2. Procedure for using the cluster CLI
- Save the mask-field-action-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the action by using the following command:
oc apply -f mask-field-action-binding.yaml
38.3.2.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:
kamel bind timer-source?message=Hello --step mask-field-action -p "step-0.fields=The Fields" -p "step-0.replacement=The Replacement" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
This command creates the KameletBinding in the current namespace on the cluster.
38.4. Kamelet source file
Chapter 39. Message Timestamp Router Action
Update the topic field as a function of the original topic name and the record’s timestamp field.
39.1. Configuration Options
The following table summarizes the configuration options available for the message-timestamp-router-action Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| timestampKeys * | Timestamp Keys | Comma separated list of Timestamp keys. The timestamp is taken from the first found field. | string | ||
| timestampFormat | Timestamp Format | Format string for the timestamp that is compatible with java.text.SimpleDateFormat. | string | "yyyyMMdd" | |
| timestampKeyFormat | Timestamp Keys Format | Format of the timestamp keys. Possible values are 'timestamp' or any format string for the timestamp that is compatible with java.text.SimpleDateFormat. In case of 'timestamp' the field will be evaluated as milliseconds since 1970, so as a UNIX Timestamp. | string | "timestamp" | |
| topicFormat | Topic Format | Format string which can contain '$[topic]' and '$[timestamp]' as placeholders for the topic and timestamp, respectively. | string | ||
Fields marked with an asterisk (*) are mandatory.
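A sketch of the action as an intermediate step in a binding, routing each record to a per-day topic (the field names are hypothetical):
steps:
  - ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: message-timestamp-router-action
    properties:
      timestampKeys: createdAt,updatedAt   # the first field found supplies the timestamp
      timestampFormat: yyyyMMdd
      topicFormat: $[topic]-$[timestamp]   # e.g. orders-20230101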
39.2. Dependencies
At runtime, the message-timestamp-router-action Kamelet relies upon the presence of the following dependencies:
- mvn:org.apache.camel.kamelets:camel-kamelets-utils:1.0.0.fuse-800048-redhat-00001
- camel:jackson
- camel:kamelet
- camel:core
39.3. Usage
This section describes how you can use the message-timestamp-router-action.
39.3.1. Knative Action
You can use the message-timestamp-router-action Kamelet as an intermediate step in a Knative binding.
message-timestamp-router-action-binding.yaml
39.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
39.3.1.2. Procedure for using the cluster CLI
- Save the message-timestamp-router-action-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the action by using the following command:
oc apply -f message-timestamp-router-action-binding.yaml
39.3.1.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:
kamel bind timer-source?message=Hello --step message-timestamp-router-action -p "step-0.timestampKeys=The Timestamp Keys" channel:mychannel
This command creates the KameletBinding in the current namespace on the cluster.
39.3.2. Kafka Action
You can use the message-timestamp-router-action Kamelet as an intermediate step in a Kafka binding.
message-timestamp-router-action-binding.yaml
39.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
39.3.2.2. Procedure for using the cluster CLI
- Save the message-timestamp-router-action-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the action by using the following command:
oc apply -f message-timestamp-router-action-binding.yaml
39.3.2.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:
kamel bind timer-source?message=Hello --step message-timestamp-router-action -p "step-0.timestampKeys=The Timestamp Keys" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
This command creates the KameletBinding in the current namespace on the cluster.
39.4. Kamelet source file
Chapter 40. MongoDB Sink
Send documents to MongoDB.
This Kamelet expects a JSON payload as the message body.
Properties you can set as headers:
- db-upsert/ce-dbupsert: whether the database should create the element if it does not exist. Boolean value.
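For example, to enable upserts for every document, you can set the db-upsert header in an intermediate step. A sketch, assuming the insert-header-action Kamelet from this catalog:
steps:
  - ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: insert-header-action
    properties:
      name: db-upsert
      value: "true"   # mongodb-sink creates the document if it does not already exist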
40.1. Configuration Options
The following table summarizes the configuration options available for the mongodb-sink Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| collection * | MongoDB Collection | Sets the name of the MongoDB collection to bind to this endpoint. | string | ||
| database * | MongoDB Database | Sets the name of the MongoDB database to target. | string | ||
| hosts * | MongoDB Hosts | Comma separated list of MongoDB Host Addresses in host:port format. | string | ||
| createCollection | Collection | Create collection during initialisation if it doesn’t exist. | boolean | false | |
| password | MongoDB Password | User password for accessing MongoDB. | string | ||
| username | MongoDB Username | Username for accessing MongoDB. | string | ||
| writeConcern | Write Concern | Configure the level of acknowledgment requested from MongoDB for write operations, possible values are ACKNOWLEDGED, W1, W2, W3, UNACKNOWLEDGED, JOURNALED, MAJORITY. | string | ||
Fields marked with an asterisk (*) are mandatory.
40.2. Dependencies
At runtime, the mongodb-sink Kamelet relies upon the presence of the following dependencies:
- camel:kamelet
- camel:mongodb
- camel:jackson
40.3. Usage
This section describes how you can use the mongodb-sink.
40.3.1. Knative Sink
You can use the mongodb-sink Kamelet as a Knative sink by binding it to a Knative object.
mongodb-sink-binding.yaml
40.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
40.3.1.2. Procedure for using the cluster CLI
- Save the mongodb-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:
oc apply -f mongodb-sink-binding.yaml
40.3.1.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind channel:mychannel mongodb-sink -p "sink.collection=The MongoDB Collection" -p "sink.database=The MongoDB Database" -p "sink.hosts=The MongoDB Hosts"
This command creates the KameletBinding in the current namespace on the cluster.
40.3.2. Kafka Sink
You can use the mongodb-sink Kamelet as a Kafka sink by binding it to a Kafka topic.
mongodb-sink-binding.yaml
40.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
40.3.2.2. Procedure for using the cluster CLI
- Save the mongodb-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:
oc apply -f mongodb-sink-binding.yaml
40.3.2.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic mongodb-sink -p "sink.collection=The MongoDB Collection" -p "sink.database=The MongoDB Database" -p "sink.hosts=The MongoDB Hosts"
This command creates the KameletBinding in the current namespace on the cluster.
40.4. Kamelet source file
Chapter 41. MongoDB Source
Consume documents from MongoDB.
If the persistentTailTracking option is enabled, the consumer keeps track of the last consumed message and, on the next restart, consumption resumes from that message. When persistentTailTracking is enabled, you must also provide the tailTrackIncreasingField (it is otherwise optional).
If the persistentTailTracking option is not enabled, the consumer consumes the whole collection and then waits idle for new documents to consume.
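A sketch of the source side of a binding with persistent tail tracking enabled (the increasing field name is hypothetical):
source:
  ref:
    kind: Kamelet
    apiVersion: camel.apache.org/v1alpha1
    name: mongodb-source
  properties:
    hosts: The MongoDB Hosts
    database: The MongoDB Database
    collection: The MongoDB Collection
    username: The MongoDB Username
    password: The MongoDB Password
    persistentTailTracking: true
    tailTrackIncreasingField: createdAt   # hypothetical monotonically increasing field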
41.1. Configuration Options
The following table summarizes the configuration options available for the mongodb-source Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| collection * | MongoDB Collection | Sets the name of the MongoDB collection to bind to this endpoint. | string | ||
| database * | MongoDB Database | Sets the name of the MongoDB database to target. | string | ||
| hosts * | MongoDB Hosts | Comma separated list of MongoDB Host Addresses in host:port format. | string | ||
| password * | MongoDB Password | User password for accessing MongoDB. | string | ||
| username * | MongoDB Username | Username for accessing MongoDB. The username must be present in the MongoDB’s authentication database (authenticationDatabase). By default, the MongoDB authenticationDatabase is 'admin'. | string | ||
| persistentTailTracking | MongoDB Persistent Tail Tracking | Enable persistent tail tracking, which is a mechanism to keep track of the last consumed message across system restarts. The next time the system is up, the endpoint will recover the cursor from the point where it last stopped slurping records. | boolean | false | |
| tailTrackIncreasingField | MongoDB Tail Track Increasing Field | Correlation field in the incoming record which is of increasing nature and will be used to position the tailing cursor every time it is generated. | string | ||
Fields marked with an asterisk (*) are mandatory.
41.2. Dependencies
At runtime, the mongodb-source Kamelet relies upon the presence of the following dependencies:
- camel:kamelet
- camel:mongodb
- camel:jackson
41.3. Usage
This section describes how you can use the mongodb-source.
41.3.1. Knative Source
You can use the mongodb-source Kamelet as a Knative source by binding it to a Knative object.
mongodb-source-binding.yaml
41.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
41.3.1.2. Procedure for using the cluster CLI
- Save the mongodb-source-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the source by using the following command:
oc apply -f mongodb-source-binding.yaml
41.3.1.3. Procedure for using the Kamel CLI
Configure and run the source by using the following command:
kamel bind mongodb-source -p "source.collection=The MongoDB Collection" -p "source.database=The MongoDB Database" -p "source.hosts=The MongoDB Hosts" -p "source.password=The MongoDB Password" -p "source.username=The MongoDB Username" channel:mychannel
This command creates the KameletBinding in the current namespace on the cluster.
41.3.2. Kafka Source
You can use the mongodb-source Kamelet as a Kafka source by binding it to a Kafka topic.
mongodb-source-binding.yaml
41.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
41.3.2.2. Procedure for using the cluster CLI
- Save the mongodb-source-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the source by using the following command:
oc apply -f mongodb-source-binding.yaml
41.3.2.3. Procedure for using the Kamel CLI
Configure and run the source by using the following command:
kamel bind mongodb-source -p "source.collection=The MongoDB Collection" -p "source.database=The MongoDB Database" -p "source.hosts=The MongoDB Hosts" -p "source.password=The MongoDB Password" -p "source.username=The MongoDB Username" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
This command creates the KameletBinding in the current namespace on the cluster.
41.4. Kamelet source file
Chapter 42. MySQL Sink
Send data to a MySQL Database.
This Kamelet expects a JSON payload as the message body. The mapping between the JSON fields and parameters is done by key, so if you have the following query:
'INSERT INTO accounts (username,city) VALUES (:#username,:#city)'
The Kamelet needs to receive as input something like:
'{ "username":"oscerd", "city":"Rome"}'
42.1. Configuration Options
The following table summarizes the configuration options available for the mysql-sink Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| databaseName * | Database Name | The Database Name we are pointing to | string | ||
| password * | Password | The password to use for accessing a secured MySQL Database | string | ||
| query * | Query | The Query to execute against the MySQL Database | string | | "INSERT INTO accounts (username,city) VALUES (:#username,:#city)" |
| serverName * | Server Name | Server Name for the data source | string | | "localhost" |
| username * | Username | The username to use for accessing a secured MySQL Database | string | ||
| serverPort | Server Port | Server Port for the data source | string | 3306 | |
Fields marked with an asterisk (*) are mandatory.
42.2. Dependencies
At runtime, the mysql-sink Kamelet relies upon the presence of the following dependencies:
- camel:jackson
- camel:kamelet
- camel:sql
- mvn:org.apache.commons:commons-dbcp2:2.7.0.redhat-00001
- mvn:mysql:mysql-connector-java
42.3. Usage
This section describes how you can use the mysql-sink.
42.3.1. Knative Sink
You can use the mysql-sink Kamelet as a Knative sink by binding it to a Knative object.
mysql-sink-binding.yaml
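The contents of the binding file are not reproduced here; the following is a minimal sketch of what mysql-sink-binding.yaml might look like, assuming the standard KameletBinding layout and reusing the placeholder values from the kamel bind command below:

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: mysql-sink-binding
spec:
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel   # placeholder Knative channel
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: mysql-sink
    properties:
      # All property values below are placeholders; replace them with your own.
      databaseName: "The Database Name"
      password: "The Password"
      query: "INSERT INTO accounts (username,city) VALUES (:#username,:#city)"
      serverName: "localhost"
      username: "The Username"
```

For the Kafka variant of this binding, replace the Channel reference with a KafkaTopic reference (kind: KafkaTopic, apiVersion: kafka.strimzi.io/v1beta1, name: my-topic).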
42.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
42.3.1.2. Procedure for using the cluster CLI
- Save the mysql-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:
oc apply -f mysql-sink-binding.yaml
42.3.1.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind channel:mychannel mysql-sink -p "sink.databaseName=The Database Name" -p "sink.password=The Password" -p "sink.query=INSERT INTO accounts (username,city) VALUES (:#username,:#city)" -p "sink.serverName=localhost" -p "sink.username=The Username"
This command creates the KameletBinding in the current namespace on the cluster.
42.3.2. Kafka Sink
You can use the mysql-sink Kamelet as a Kafka sink by binding it to a Kafka topic.
mysql-sink-binding.yaml
42.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
42.3.2.2. Procedure for using the cluster CLI
- Save the mysql-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:
oc apply -f mysql-sink-binding.yaml
42.3.2.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic mysql-sink -p "sink.databaseName=The Database Name" -p "sink.password=The Password" -p "sink.query=INSERT INTO accounts (username,city) VALUES (:#username,:#city)" -p "sink.serverName=localhost" -p "sink.username=The Username"
This command creates the KameletBinding in the current namespace on the cluster.
42.4. Kamelet source file
Chapter 43. PostgreSQL Sink
Send data to a PostgreSQL Database.
This Kamelet expects a JSON payload as the body. The mapping between the JSON fields and the query parameters is done by key, so if you have the following query:
'INSERT INTO accounts (username,city) VALUES (:#username,:#city)'
The Kamelet needs to receive input such as:
'{ "username":"oscerd", "city":"Rome"}'
43.1. Configuration Options
The following table summarizes the configuration options available for the postgresql-sink Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| databaseName * | Database Name | The name of the database we are pointing to | string | | |
| password * | Password | The password to use for accessing a secured PostgreSQL Database | string | | |
| query * | Query | The query to execute against the PostgreSQL Database | string | | "INSERT INTO accounts (username,city) VALUES (:#username,:#city)" |
| serverName * | Server Name | Server Name for the data source | string | | "localhost" |
| username * | Username | The username to use for accessing a secured PostgreSQL Database | string | | |
| serverPort | Server Port | Server Port for the data source | string | | |
Fields marked with an asterisk (*) are mandatory.
43.2. Dependencies
At runtime, the postgresql-sink Kamelet relies upon the presence of the following dependencies:
- camel:jackson
- camel:kamelet
- camel:sql
- mvn:org.postgresql:postgresql
- mvn:org.apache.commons:commons-dbcp2:2.7.0.redhat-00001
43.3. Usage
This section describes how you can use the postgresql-sink.
43.3.1. Knative Sink
You can use the postgresql-sink Kamelet as a Knative sink by binding it to a Knative object.
postgresql-sink-binding.yaml
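The contents of the binding file are not reproduced here; the following is a minimal sketch of what postgresql-sink-binding.yaml might look like, assuming the standard KameletBinding layout and reusing the placeholder values from the kamel bind command below:

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: postgresql-sink-binding
spec:
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel   # placeholder Knative channel
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: postgresql-sink
    properties:
      # All property values below are placeholders; replace them with your own.
      databaseName: "The Database Name"
      password: "The Password"
      query: "INSERT INTO accounts (username,city) VALUES (:#username,:#city)"
      serverName: "localhost"
      username: "The Username"
```

For the Kafka variant of this binding, replace the Channel reference with a KafkaTopic reference (kind: KafkaTopic, apiVersion: kafka.strimzi.io/v1beta1, name: my-topic).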
43.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
43.3.1.2. Procedure for using the cluster CLI
- Save the postgresql-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:
oc apply -f postgresql-sink-binding.yaml
43.3.1.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind channel:mychannel postgresql-sink -p "sink.databaseName=The Database Name" -p "sink.password=The Password" -p "sink.query=INSERT INTO accounts (username,city) VALUES (:#username,:#city)" -p "sink.serverName=localhost" -p "sink.username=The Username"
This command creates the KameletBinding in the current namespace on the cluster.
43.3.2. Kafka Sink
You can use the postgresql-sink Kamelet as a Kafka sink by binding it to a Kafka topic.
postgresql-sink-binding.yaml
43.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
43.3.2.2. Procedure for using the cluster CLI
- Save the postgresql-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:
oc apply -f postgresql-sink-binding.yaml
43.3.2.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic postgresql-sink -p "sink.databaseName=The Database Name" -p "sink.password=The Password" -p "sink.query=INSERT INTO accounts (username,city) VALUES (:#username,:#city)" -p "sink.serverName=localhost" -p "sink.username=The Username"
This command creates the KameletBinding in the current namespace on the cluster.
43.4. Kamelet source file
Chapter 44. Predicate Filter Action
Filter based on a JsonPath Expression.
44.1. Configuration Options
The following table summarizes the configuration options available for the predicate-filter-action Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| expression * | Expression | The JsonPath expression to evaluate, without the outer parentheses. Since this is a filter, the expression is negated internally: if, for example, the foo field equals John, the message goes ahead; otherwise it is filtered out. | string | | "@.foo =~ /.*John/" |
Fields marked with an asterisk (*) are mandatory.
44.2. Dependencies
At runtime, the predicate-filter-action Kamelet relies upon the presence of the following dependencies:
- camel:core
- camel:kamelet
- camel:jsonpath
44.3. Usage
This section describes how you can use the predicate-filter-action.
44.3.1. Knative Action
You can use the predicate-filter-action Kamelet as an intermediate step in a Knative binding.
predicate-filter-action-binding.yaml
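The contents of the binding file are not reproduced here; the following is a minimal sketch of what predicate-filter-action-binding.yaml might look like, assuming the standard KameletBinding layout with the action as an intermediate step, matching the kamel bind command below:

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: predicate-filter-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: timer-source
    properties:
      message: Hello
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1alpha1
        name: predicate-filter-action
      properties:
        expression: "@.foo =~ /.*John/"
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel   # placeholder Knative channel
```

For the Kafka variant of this binding, replace the Channel reference with a KafkaTopic reference (kind: KafkaTopic, apiVersion: kafka.strimzi.io/v1beta1, name: my-topic).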
44.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
44.3.1.2. Procedure for using the cluster CLI
- Save the predicate-filter-action-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the action by using the following command:
oc apply -f predicate-filter-action-binding.yaml
44.3.1.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:
kamel bind timer-source?message=Hello --step predicate-filter-action -p "step-0.expression=@.foo =~ /.*John/" channel:mychannel
This command creates the KameletBinding in the current namespace on the cluster.
44.3.2. Kafka Action
You can use the predicate-filter-action Kamelet as an intermediate step in a Kafka binding.
predicate-filter-action-binding.yaml
44.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
44.3.2.2. Procedure for using the cluster CLI
- Save the predicate-filter-action-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the action by using the following command:
oc apply -f predicate-filter-action-binding.yaml
44.3.2.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:
kamel bind timer-source?message=Hello --step predicate-filter-action -p "step-0.expression=@.foo =~ /.*John/" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
This command creates the KameletBinding in the current namespace on the cluster.
44.4. Kamelet source file
Chapter 45. Protobuf Deserialize Action
Deserialize payload from Protobuf.
45.1. Configuration Options
The following table summarizes the configuration options available for the protobuf-deserialize-action Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| schema * | Schema | The Protobuf schema to use during deserialization (as single-line) | string | | "message Person { required string first = 1; required string last = 2; }" |
Fields marked with an asterisk (*) are mandatory.
45.2. Dependencies
At runtime, the protobuf-deserialize-action Kamelet relies upon the presence of the following dependencies:
- github:openshift-integration.kamelet-catalog:camel-kamelets-utils:kamelet-catalog-1.6-SNAPSHOT
- camel:kamelet
- camel:core
- camel:jackson-protobuf
45.3. Usage
This section describes how you can use the protobuf-deserialize-action.
45.3.1. Knative Action
You can use the protobuf-deserialize-action Kamelet as an intermediate step in a Knative binding.
protobuf-deserialize-action-binding.yaml
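The contents of the binding file are not reproduced here; the following is a minimal sketch of what protobuf-deserialize-action-binding.yaml might look like, assuming the standard KameletBinding layout and mirroring the three-step pipeline in the kamel bind command below:

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: protobuf-deserialize-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: timer-source
    properties:
      message: '{"first":"John","last":"Doe"}'
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1alpha1
        name: json-deserialize-action
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1alpha1
        name: protobuf-serialize-action
      properties:
        schema: 'message Person { required string first = 1; required string last = 2; }'
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1alpha1
        name: protobuf-deserialize-action
      properties:
        schema: 'message Person { required string first = 1; required string last = 2; }'
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel   # placeholder Knative channel
```

For the Kafka variant of this binding, replace the Channel reference with a KafkaTopic reference (kind: KafkaTopic, apiVersion: kafka.strimzi.io/v1beta1, name: my-topic).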
45.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
45.3.1.2. Procedure for using the cluster CLI
- Save the protobuf-deserialize-action-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the action by using the following command:
oc apply -f protobuf-deserialize-action-binding.yaml
45.3.1.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:
kamel bind --name protobuf-deserialize-action-binding timer-source?message='{"first":"John","last":"Doe"}' --step json-deserialize-action --step protobuf-serialize-action -p step-1.schema='message Person { required string first = 1; required string last = 2; }' --step protobuf-deserialize-action -p step-2.schema='message Person { required string first = 1; required string last = 2; }' channel:mychannel
This command creates the KameletBinding in the current namespace on the cluster.
45.3.2. Kafka Action
You can use the protobuf-deserialize-action Kamelet as an intermediate step in a Kafka binding.
protobuf-deserialize-action-binding.yaml
45.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
45.3.2.2. Procedure for using the cluster CLI
- Save the protobuf-deserialize-action-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the action by using the following command:
oc apply -f protobuf-deserialize-action-binding.yaml
45.3.2.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:
kamel bind --name protobuf-deserialize-action-binding timer-source?message='{"first":"John","last":"Doe"}' --step json-deserialize-action --step protobuf-serialize-action -p step-1.schema='message Person { required string first = 1; required string last = 2; }' --step protobuf-deserialize-action -p step-2.schema='message Person { required string first = 1; required string last = 2; }' kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
This command creates the KameletBinding in the current namespace on the cluster.
45.4. Kamelet source file
Chapter 46. Protobuf Serialize Action
Serialize payload to Protobuf.
46.1. Configuration Options
The following table summarizes the configuration options available for the protobuf-serialize-action Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| schema * | Schema | The Protobuf schema to use during serialization (as single-line) | string | | "message Person { required string first = 1; required string last = 2; }" |
Fields marked with an asterisk (*) are mandatory.
46.2. Dependencies
At runtime, the protobuf-serialize-action Kamelet relies upon the presence of the following dependencies:
- github:openshift-integration.kamelet-catalog:camel-kamelets-utils:kamelet-catalog-1.6-SNAPSHOT
- camel:kamelet
- camel:core
- camel:jackson-protobuf
46.3. Usage
This section describes how you can use the protobuf-serialize-action.
46.3.1. Knative Action
You can use the protobuf-serialize-action Kamelet as an intermediate step in a Knative binding.
protobuf-serialize-action-binding.yaml
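The contents of the binding file are not reproduced here; the following is a minimal sketch of what protobuf-serialize-action-binding.yaml might look like, assuming the standard KameletBinding layout and mirroring the two-step pipeline in the kamel bind command below:

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: protobuf-serialize-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: timer-source
    properties:
      message: '{"first":"John","last":"Doe"}'
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1alpha1
        name: json-deserialize-action
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1alpha1
        name: protobuf-serialize-action
      properties:
        schema: 'message Person { required string first = 1; required string last = 2; }'
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel   # placeholder Knative channel
```

For the Kafka variant of this binding, replace the Channel reference with a KafkaTopic reference (kind: KafkaTopic, apiVersion: kafka.strimzi.io/v1beta1, name: my-topic).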
46.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
46.3.1.2. Procedure for using the cluster CLI
- Save the protobuf-serialize-action-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the action by using the following command:
oc apply -f protobuf-serialize-action-binding.yaml
46.3.1.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:
kamel bind --name protobuf-serialize-action-binding timer-source?message='{"first":"John","last":"Doe"}' --step json-deserialize-action --step protobuf-serialize-action -p step-1.schema='message Person { required string first = 1; required string last = 2; }' channel:mychannel
This command creates the KameletBinding in the current namespace on the cluster.
46.3.2. Kafka Action
You can use the protobuf-serialize-action Kamelet as an intermediate step in a Kafka binding.
protobuf-serialize-action-binding.yaml
46.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
46.3.2.2. Procedure for using the cluster CLI
- Save the protobuf-serialize-action-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the action by using the following command:
oc apply -f protobuf-serialize-action-binding.yaml
46.3.2.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:
kamel bind --name protobuf-serialize-action-binding timer-source?message='{"first":"John","last":"Doe"}' --step json-deserialize-action --step protobuf-serialize-action -p step-1.schema='message Person { required string first = 1; required string last = 2; }' kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
This command creates the KameletBinding in the current namespace on the cluster.
46.4. Kamelet source file
Chapter 47. Regex Router Action
Update the destination using the configured regular expression and replacement string.
47.1. Configuration Options
The following table summarizes the configuration options available for the regex-router-action Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| regex * | Regex | Regular Expression for destination | string | | |
| replacement * | Replacement | Replacement when matching | string | | |
Fields marked with an asterisk (*) are mandatory.
47.2. Dependencies
At runtime, the regex-router-action Kamelet relies upon the presence of the following dependencies:
- github:openshift-integration.kamelet-catalog:camel-kamelets-utils:kamelet-catalog-1.6-SNAPSHOT
- camel:kamelet
- camel:core
47.3. Usage
This section describes how you can use the regex-router-action.
47.3.1. Knative Action
You can use the regex-router-action Kamelet as an intermediate step in a Knative binding.
regex-router-action-binding.yaml
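The contents of the binding file are not reproduced here; the following is a minimal sketch of what regex-router-action-binding.yaml might look like, assuming the standard KameletBinding layout and reusing the placeholder values from the kamel bind command below:

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: regex-router-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: timer-source
    properties:
      message: Hello
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1alpha1
        name: regex-router-action
      properties:
        # Placeholder values; replace them with your own.
        regex: "The Regex"
        replacement: "The Replacement"
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel   # placeholder Knative channel
```

For the Kafka variant of this binding, replace the Channel reference with a KafkaTopic reference (kind: KafkaTopic, apiVersion: kafka.strimzi.io/v1beta1, name: my-topic).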
47.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
47.3.1.2. Procedure for using the cluster CLI
- Save the regex-router-action-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the action by using the following command:
oc apply -f regex-router-action-binding.yaml
47.3.1.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:
kamel bind timer-source?message=Hello --step regex-router-action -p "step-0.regex=The Regex" -p "step-0.replacement=The Replacement" channel:mychannel
This command creates the KameletBinding in the current namespace on the cluster.
47.3.2. Kafka Action
You can use the regex-router-action Kamelet as an intermediate step in a Kafka binding.
regex-router-action-binding.yaml
47.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
47.3.2.2. Procedure for using the cluster CLI
- Save the regex-router-action-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the action by using the following command:
oc apply -f regex-router-action-binding.yaml
47.3.2.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:
kamel bind timer-source?message=Hello --step regex-router-action -p "step-0.regex=The Regex" -p "step-0.replacement=The Replacement" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
This command creates the KameletBinding in the current namespace on the cluster.
47.4. Kamelet source file
Chapter 48. Replace Field Action
Replace a field with a different key in the message in transit.
- The required parameter 'renames' is a comma-separated list of colon-delimited renaming pairs, for example 'foo:bar,abc:xyz', and it represents the field rename mappings.
- The optional parameter 'enabled' represents the fields to include. If specified, only the named fields are included in the resulting message.
- The optional parameter 'disabled' represents the fields to exclude. If specified, the listed fields are excluded from the resulting message. This takes precedence over the 'enabled' parameter.
- The default value of the 'enabled' parameter is 'all', so all the fields of the payload are included.
- The default value of the 'disabled' parameter is 'none', so no fields of the payload are excluded.
48.1. Configuration Options
The following table summarizes the configuration options available for the replace-field-action Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| renames * | Renames | Comma-separated list of colon-delimited field rename mappings | string | | "foo:bar,c1:c2" |
| disabled | Disabled | Comma-separated list of fields to be disabled | string | "none" | |
| enabled | Enabled | Comma-separated list of fields to be enabled | string | "all" | |
Fields marked with an asterisk (*) are mandatory.
48.2. Dependencies
At runtime, the replace-field-action Kamelet relies upon the presence of the following dependencies:
- github:openshift-integration.kamelet-catalog:camel-kamelets-utils:kamelet-catalog-1.6-SNAPSHOT
- camel:core
- camel:jackson
- camel:kamelet
48.3. Usage
This section describes how you can use the replace-field-action.
48.3.1. Knative Action
You can use the replace-field-action Kamelet as an intermediate step in a Knative binding.
replace-field-action-binding.yaml
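The contents of the binding file are not reproduced here; the following is a minimal sketch of what replace-field-action-binding.yaml might look like, assuming the standard KameletBinding layout and reusing the values from the kamel bind command below:

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: replace-field-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: timer-source
    properties:
      message: Hello
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1alpha1
        name: replace-field-action
      properties:
        renames: "foo:bar,c1:c2"   # old:new rename pairs
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel   # placeholder Knative channel
```

For the Kafka variant of this binding, replace the Channel reference with a KafkaTopic reference (kind: KafkaTopic, apiVersion: kafka.strimzi.io/v1beta1, name: my-topic).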
48.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
48.3.1.2. Procedure for using the cluster CLI
- Save the replace-field-action-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the action by using the following command:
oc apply -f replace-field-action-binding.yaml
48.3.1.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:
kamel bind timer-source?message=Hello --step replace-field-action -p "step-0.renames=foo:bar,c1:c2" channel:mychannel
This command creates the KameletBinding in the current namespace on the cluster.
48.3.2. Kafka Action
You can use the replace-field-action Kamelet as an intermediate step in a Kafka binding.
replace-field-action-binding.yaml
48.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
48.3.2.2. Procedure for using the cluster CLI
- Save the replace-field-action-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the action by using the following command:
oc apply -f replace-field-action-binding.yaml
48.3.2.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:
kamel bind timer-source?message=Hello --step replace-field-action -p "step-0.renames=foo:bar,c1:c2" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
This command creates the KameletBinding in the current namespace on the cluster.
48.4. Kamelet source file
Chapter 49. Salesforce Source
Receive updates from Salesforce.
49.1. Configuration Options
The following table summarizes the configuration options available for the salesforce-source Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| clientId * | Consumer Key | The Salesforce application consumer key | string | | |
| clientSecret * | Consumer Secret | The Salesforce application consumer secret | string | | |
| password * | Password | The Salesforce user password | string | | |
| query * | Query | The query to execute on Salesforce | string | | "SELECT Id, Name, Email, Phone FROM Contact" |
| topicName * | Topic Name | The name of the topic/channel to use | string | | "ContactTopic" |
| userName * | Username | The Salesforce username | string | | |
| loginUrl | Login URL | The Salesforce instance login URL | string | | |
Fields marked with an asterisk (*) are mandatory.
49.2. Dependencies
At runtime, the salesforce-source Kamelet relies upon the presence of the following dependencies:
- camel:jackson
- camel:salesforce
- mvn:org.apache.camel.k:camel-k-kamelet-reify
- camel:kamelet
49.3. Usage
This section describes how you can use the salesforce-source.
49.3.1. Knative Source
You can use the salesforce-source Kamelet as a Knative source by binding it to a Knative object.
salesforce-source-binding.yaml
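The contents of the binding file are not reproduced here; the following is a minimal sketch of what salesforce-source-binding.yaml might look like, assuming the standard KameletBinding layout and reusing the placeholder values from the kamel bind command below:

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: salesforce-source-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: salesforce-source
    properties:
      # All property values below are placeholders; replace them with your own.
      clientId: "The Consumer Key"
      clientSecret: "The Consumer Secret"
      password: "The Password"
      query: "SELECT Id, Name, Email, Phone FROM Contact"
      topicName: "ContactTopic"
      userName: "The Username"
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel   # placeholder Knative channel
```

For the Kafka variant of this binding, replace the Channel reference with a KafkaTopic reference (kind: KafkaTopic, apiVersion: kafka.strimzi.io/v1beta1, name: my-topic).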
49.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
49.3.1.2. Procedure for using the cluster CLI
- Save the salesforce-source-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the source by using the following command:
oc apply -f salesforce-source-binding.yaml
49.3.1.3. Procedure for using the Kamel CLI
Configure and run the source by using the following command:
kamel bind salesforce-source -p "source.clientId=The Consumer Key" -p "source.clientSecret=The Consumer Secret" -p "source.password=The Password" -p "source.query=SELECT Id, Name, Email, Phone FROM Contact" -p "source.topicName=ContactTopic" -p "source.userName=The Username" channel:mychannel
This command creates the KameletBinding in the current namespace on the cluster.
49.3.2. Kafka Source
You can use the salesforce-source Kamelet as a Kafka source by binding it to a Kafka topic.
salesforce-source-binding.yaml
49.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
49.3.2.2. Procedure for using the cluster CLI
- Save the salesforce-source-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the source by using the following command:
oc apply -f salesforce-source-binding.yaml
49.3.2.3. Procedure for using the Kamel CLI
Configure and run the source by using the following command:
kamel bind salesforce-source -p "source.clientId=The Consumer Key" -p "source.clientSecret=The Consumer Secret" -p "source.password=The Password" -p "source.query=SELECT Id, Name, Email, Phone FROM Contact" -p "source.topicName=ContactTopic" -p "source.userName=The Username" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
This command creates the KameletBinding in the current namespace on the cluster.
49.4. Kamelet source file
Chapter 50. Salesforce Create Sink
Creates an object in Salesforce. The body of the message must contain the JSON of the Salesforce object.
Example body: { "Phone": "555", "Name": "Antonia", "LastName": "Garcia" }
50.1. Configuration Options
The following table summarizes the configuration options available for the salesforce-create-sink Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| clientId * | Consumer Key | The Salesforce application consumer key | string | | |
| clientSecret * | Consumer Secret | The Salesforce application consumer secret | string | | |
| password * | Password | The Salesforce user password | string | | |
| userName * | Username | The Salesforce username | string | | |
| loginUrl | Login URL | The Salesforce instance login URL | string | | |
| sObjectName | Object Name | Type of the object | string | | |
Fields marked with an asterisk (*) are mandatory.
50.2. Dependencies
At runtime, the salesforce-create-sink Kamelet relies upon the presence of the following dependencies:
- camel:salesforce
- camel:kamelet
50.3. Usage
This section describes how you can use the salesforce-create-sink.
50.3.1. Knative Sink
You can use the salesforce-create-sink Kamelet as a Knative sink by binding it to a Knative object.
salesforce-create-sink-binding.yaml
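The contents of the binding file are not reproduced here; the following is a minimal sketch of what salesforce-create-sink-binding.yaml might look like, assuming the standard KameletBinding layout and reusing the placeholder values from the kamel bind command below:

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: salesforce-create-sink-binding
spec:
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel   # placeholder Knative channel
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: salesforce-create-sink
    properties:
      # All property values below are placeholders; replace them with your own.
      clientId: "The Consumer Key"
      clientSecret: "The Consumer Secret"
      password: "The Password"
      userName: "The Username"
```

For the Kafka variant of this binding, replace the Channel reference with a KafkaTopic reference (kind: KafkaTopic, apiVersion: kafka.strimzi.io/v1beta1, name: my-topic).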
50.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
50.3.1.2. Procedure for using the cluster CLI
- Save the salesforce-create-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:
oc apply -f salesforce-create-sink-binding.yaml
50.3.1.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind channel:mychannel salesforce-create-sink -p "sink.clientId=The Consumer Key" -p "sink.clientSecret=The Consumer Secret" -p "sink.password=The Password" -p "sink.userName=The Username"
This command creates the KameletBinding in the current namespace on the cluster.
50.3.2. Kafka Sink
You can use the salesforce-create-sink Kamelet as a Kafka sink by binding it to a Kafka topic.
salesforce-create-sink-binding.yaml
50.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
50.3.2.2. Procedure for using the cluster CLI
- Save the salesforce-create-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:
oc apply -f salesforce-create-sink-binding.yaml
50.3.2.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic salesforce-create-sink -p "sink.clientId=The Consumer Key" -p "sink.clientSecret=The Consumer Secret" -p "sink.password=The Password" -p "sink.userName=The Username"
This command creates the KameletBinding in the current namespace on the cluster.
50.4. Kamelet source file
Chapter 51. Salesforce Delete Sink
Removes an object from Salesforce. The body received must be a JSON containing two keys: sObjectId and sObjectName.
Example body: { "sObjectId": "XXXXX0", "sObjectName": "Contact" }
51.1. Configuration Options
The following table summarizes the configuration options available for the salesforce-delete-sink Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| clientId * | Consumer Key | The Salesforce application consumer key | string | | |
| clientSecret * | Consumer Secret | The Salesforce application consumer secret | string | | |
| password * | Password | The Salesforce user password | string | | |
| userName * | Username | The Salesforce username | string | | |
| loginUrl | Login URL | The Salesforce instance login URL | string | | |
Fields marked with an asterisk (*) are mandatory.
51.2. Dependencies
At runtime, the salesforce-delete-sink Kamelet relies upon the presence of the following dependencies:
- camel:salesforce
- camel:kamelet
- camel:core
- camel:jsonpath
51.3. Usage
This section describes how you can use the salesforce-delete-sink.
51.3.1. Knative Sink
You can use the salesforce-delete-sink Kamelet as a Knative sink by binding it to a Knative object.
salesforce-delete-sink-binding.yaml
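The contents of the binding file are not reproduced here; the following is a minimal sketch of what salesforce-delete-sink-binding.yaml might look like, assuming the standard KameletBinding layout and reusing the placeholder values from the kamel bind command below:

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: salesforce-delete-sink-binding
spec:
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel   # placeholder Knative channel
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: salesforce-delete-sink
    properties:
      # All property values below are placeholders; replace them with your own.
      clientId: "The Consumer Key"
      clientSecret: "The Consumer Secret"
      password: "The Password"
      userName: "The Username"
```

For the Kafka variant of this binding, replace the Channel reference with a KafkaTopic reference (kind: KafkaTopic, apiVersion: kafka.strimzi.io/v1beta1, name: my-topic).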
51.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
51.3.1.2. Procedure for using the cluster CLI
- Save the salesforce-delete-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:
oc apply -f salesforce-delete-sink-binding.yaml
51.3.1.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind channel:mychannel salesforce-delete-sink -p "sink.clientId=The Consumer Key" -p "sink.clientSecret=The Consumer Secret" -p "sink.password=The Password" -p "sink.userName=The Username"
This command creates the KameletBinding in the current namespace on the cluster.
51.3.2. Kafka Sink
You can use the salesforce-delete-sink Kamelet as a Kafka sink by binding it to a Kafka topic.
salesforce-delete-sink-binding.yaml
51.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
51.3.2.2. Procedure for using the cluster CLI
- Save the salesforce-delete-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:
oc apply -f salesforce-delete-sink-binding.yaml
51.3.2.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic salesforce-delete-sink -p "sink.clientId=The Consumer Key" -p "sink.clientSecret=The Consumer Secret" -p "sink.password=The Password" -p "sink.userName=The Username"
This command creates the KameletBinding in the current namespace on the cluster.
51.4. Kamelet source file
Chapter 52. Salesforce Update Sink
Updates an object in Salesforce. The body received must contain a JSON key-value pair for each property to update, and sObjectName and sObjectId must be provided as parameters.
Example of key-value pair: { "Phone": "1234567890", "Name": "Antonia" }
52.1. Configuration Options
The following table summarizes the configuration options available for the salesforce-update-sink Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| clientId * | Consumer Key | The Salesforce application consumer key | string | | |
| clientSecret * | Consumer Secret | The Salesforce application consumer secret | string | | |
| password * | Password | The Salesforce user password | string | | |
| sObjectId * | Object Id | Id of the object. Only required if using key-value pair. | string | | |
| sObjectName * | Object Name | Type of the object. Only required if using key-value pair. | string | | "Contact" |
| userName * | Username | The Salesforce username | string | | |
| loginUrl | Login URL | The Salesforce instance login URL | string | | |
Fields marked with an asterisk (*) are mandatory.
52.2. Dependencies
At runtime, the salesforce-update-sink Kamelet relies upon the presence of the following dependencies:
- camel:salesforce
- camel:kamelet
52.3. Usage
This section describes how you can use the salesforce-update-sink.
52.3.1. Knative Sink
You can use the salesforce-update-sink Kamelet as a Knative sink by binding it to a Knative object.
salesforce-update-sink-binding.yaml
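The contents of the binding file are not reproduced here; the following is a minimal sketch of what salesforce-update-sink-binding.yaml might look like, assuming the standard KameletBinding layout and reusing the placeholder values from the kamel bind command below:

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: salesforce-update-sink-binding
spec:
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel   # placeholder Knative channel
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: salesforce-update-sink
    properties:
      # All property values below are placeholders; replace them with your own.
      clientId: "The Consumer Key"
      clientSecret: "The Consumer Secret"
      password: "The Password"
      sObjectId: "The Object Id"
      sObjectName: "Contact"
      userName: "The Username"
```

For the Kafka variant of this binding, replace the Channel reference with a KafkaTopic reference (kind: KafkaTopic, apiVersion: kafka.strimzi.io/v1beta1, name: my-topic).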
52.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
52.3.1.2. Procedure for using the cluster CLI
- Save the salesforce-update-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:
oc apply -f salesforce-update-sink-binding.yaml
52.3.1.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind channel:mychannel salesforce-update-sink -p "sink.clientId=The Consumer Key" -p "sink.clientSecret=The Consumer Secret" -p "sink.password=The Password" -p "sink.sObjectId=The Object Id" -p "sink.sObjectName=Contact" -p "sink.userName=The Username"
This command creates the KameletBinding in the current namespace on the cluster.
52.3.2. Kafka Sink
You can use the salesforce-update-sink Kamelet as a Kafka sink by binding it to a Kafka topic.
salesforce-update-sink-binding.yaml
52.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
52.3.2.2. Procedure for using the cluster CLI
- Save the salesforce-update-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:
oc apply -f salesforce-update-sink-binding.yaml
52.3.2.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic salesforce-update-sink -p "sink.clientId=The Consumer Key" -p "sink.clientSecret=The Consumer Secret" -p "sink.password=The Password" -p "sink.sObjectId=The Object Id" -p "sink.sObjectName=Contact" -p "sink.userName=The Username"
This command creates the KameletBinding in the current namespace on the cluster.
52.4. Kamelet source file
Chapter 53. SFTP Sink
Send data to an SFTP Server.
The Kamelet expects the following headers to be set:
- file / ce-file: the name of the file to upload
If the header is not set, the exchange ID is used as the file name.
53.1. Configuration Options
The following table summarizes the configuration options available for the sftp-sink Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| connectionHost * | Connection Host | Hostname of the SFTP server | string | | |
| connectionPort * | Connection Port | Port of the SFTP server | string | | |
| directoryName * | Directory Name | The starting directory | string | | |
| password * | Password | The password to access the SFTP server | string | | |
| username * | Username | The username to access the SFTP server | string | | |
| fileExist | File Existence | How to behave if the file already exists. One of Override, Append, Fail, or Ignore | string | | |
| passiveMode | Passive Mode | Sets passive mode connection | boolean | | |
Fields marked with an asterisk (*) are mandatory.
53.2. Dependencies
At runtime, the sftp-sink Kamelet relies upon the presence of the following dependencies:
- camel:ftp
- camel:core
- camel:kamelet
53.3. Usage
This section describes how you can use the sftp-sink.
53.3.1. Knative Sink
You can use the sftp-sink Kamelet as a Knative sink by binding it to a Knative object.
sftp-sink-binding.yaml
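The contents of the binding file are not reproduced here; the following is a minimal sketch of what sftp-sink-binding.yaml might look like, assuming the standard KameletBinding layout and reusing the placeholder values from the kamel bind command below:

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: sftp-sink-binding
spec:
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel   # placeholder Knative channel
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: sftp-sink
    properties:
      # All property values below are placeholders; replace them with your own.
      connectionHost: "The Connection Host"
      directoryName: "The Directory Name"
      password: "The Password"
      username: "The Username"
```

For the Kafka variant of this binding, replace the Channel reference with a KafkaTopic reference (kind: KafkaTopic, apiVersion: kafka.strimzi.io/v1beta1, name: my-topic).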
53.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
53.3.1.2. Procedure for using the cluster CLI
- Save the sftp-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:
oc apply -f sftp-sink-binding.yaml
53.3.1.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind channel:mychannel sftp-sink -p "sink.connectionHost=The Connection Host" -p "sink.directoryName=The Directory Name" -p "sink.password=The Password" -p "sink.username=The Username"
This command creates the KameletBinding in the current namespace on the cluster.
53.3.2. Kafka Sink
You can use the sftp-sink Kamelet as a Kafka sink by binding it to a Kafka topic.
sftp-sink-binding.yaml
53.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
53.3.2.2. Procedure for using the cluster CLI
- Save the sftp-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:
oc apply -f sftp-sink-binding.yaml
53.3.2.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic sftp-sink -p "sink.connectionHost=The Connection Host" -p "sink.directoryName=The Directory Name" -p "sink.password=The Password" -p "sink.username=The Username"
This command creates the KameletBinding in the current namespace on the cluster.
53.4. Kamelet source file
Chapter 54. SFTP Source
Receive data from an SFTP Server.
54.1. Configuration Options
The following table summarizes the configuration options available for the sftp-source Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| connectionHost * | Connection Host | Hostname of the SFTP server | string | | |
| connectionPort * | Connection Port | Port of the SFTP server | string | | |
| directoryName * | Directory Name | The starting directory | string | | |
| password * | Password | The password to access the SFTP server | string | | |
| username * | Username | The username to access the SFTP server | string | | |
| idempotent | Idempotency | Skip already-processed files. | boolean | | |
| passiveMode | Passive Mode | Sets passive mode connection | boolean | | |
| recursive | Recursive | If a directory, look for files in all the sub-directories as well. | boolean | | |
Fields marked with an asterisk (*) are mandatory.
54.2. Dependencies
At runtime, the sftp-source Kamelet relies upon the presence of the following dependencies:
- camel:ftp
- camel:core
- camel:kamelet
54.3. Usage
This section describes how you can use the sftp-source.
54.3.1. Knative Source
You can use the sftp-source Kamelet as a Knative source by binding it to a Knative object.
sftp-source-binding.yaml
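The contents of the binding file are not reproduced here; the following is a minimal sketch of what sftp-source-binding.yaml might look like, assuming the standard KameletBinding layout and reusing the placeholder values from the kamel bind command below:

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: sftp-source-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: sftp-source
    properties:
      # All property values below are placeholders; replace them with your own.
      connectionHost: "The Connection Host"
      directoryName: "The Directory Name"
      password: "The Password"
      username: "The Username"
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel   # placeholder Knative channel
```

For the Kafka variant of this binding, replace the Channel reference with a KafkaTopic reference (kind: KafkaTopic, apiVersion: kafka.strimzi.io/v1beta1, name: my-topic).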
54.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
54.3.1.2. Procedure for using the cluster CLI
- Save the sftp-source-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the source by using the following command:
oc apply -f sftp-source-binding.yaml
54.3.1.3. Procedure for using the Kamel CLI
Configure and run the source by using the following command:
kamel bind sftp-source -p "source.connectionHost=The Connection Host" -p "source.directoryName=The Directory Name" -p "source.password=The Password" -p "source.username=The Username" channel:mychannel
This command creates the KameletBinding in the current namespace on the cluster.
54.3.2. Kafka Source
You can use the sftp-source Kamelet as a Kafka source by binding it to a Kafka topic.
sftp-source-binding.yaml
54.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
54.3.2.2. Procedure for using the cluster CLI
- Save the sftp-source-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the source by using the following command:
oc apply -f sftp-source-binding.yaml
54.3.2.3. Procedure for using the Kamel CLI
Configure and run the source by using the following command:
kamel bind sftp-source -p "source.connectionHost=The Connection Host" -p "source.directoryName=The Directory Name" -p "source.password=The Password" -p "source.username=The Username" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
This command creates the KameletBinding in the current namespace on the cluster.
54.4. Kamelet source file
Chapter 55. Slack Source
Receive messages from a Slack channel.
55.1. Configuration Options
The following table summarizes the configuration options available for the slack-source Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| channel * | Channel | The Slack channel to receive messages from | string | | "#myroom" |
| token * | Token | The token to access Slack. A Slack app is needed. This app needs to have channels:history and channels:read permissions. The Bot User OAuth Access Token is the kind of token needed. | string | | |
Fields marked with an asterisk (*) are mandatory.
55.2. Dependencies
At runtime, the slack-source Kamelet relies upon the presence of the following dependencies:
- camel:kamelet
- camel:slack
- camel:jackson
55.3. Usage
This section describes how you can use the slack-source.
55.3.1. Knative Source
You can use the slack-source Kamelet as a Knative source by binding it to a Knative object.
slack-source-binding.yaml
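For example, a minimal binding might look as follows (a sketch; the placeholder property values match the procedures below and must be replaced with your own Slack configuration):
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: slack-source-binding
spec:
  source: # the slack-source Kamelet that produces the data
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: slack-source
    properties:
      channel: "#myroom"
      token: "The Token"
  sink: # the Knative channel that receives the data
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel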
55.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
55.3.1.2. Procedure for using the cluster CLI
- Save the slack-source-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the source by using the following command:
oc apply -f slack-source-binding.yaml
55.3.1.3. Procedure for using the Kamel CLI
Configure and run the source by using the following command:
kamel bind slack-source -p "source.channel=#myroom" -p "source.token=The Token" channel:mychannel
This command creates the KameletBinding in the current namespace on the cluster.
55.3.2. Kafka Source
You can use the slack-source Kamelet as a Kafka source by binding it to a Kafka topic.
slack-source-binding.yaml
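For example, a minimal binding might look as follows (a sketch; the placeholder property values match the procedures below and must be replaced with your own Slack configuration):
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: slack-source-binding
spec:
  source: # the slack-source Kamelet that produces the data
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: slack-source
    properties:
      channel: "#myroom"
      token: "The Token"
  sink: # the Kafka topic that receives the data
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic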
55.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
55.3.2.2. Procedure for using the cluster CLI
- Save the slack-source-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the source by using the following command:
oc apply -f slack-source-binding.yaml
55.3.2.3. Procedure for using the Kamel CLI
Configure and run the source by using the following command:
kamel bind slack-source -p "source.channel=#myroom" -p "source.token=The Token" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
This command creates the KameletBinding in the current namespace on the cluster.
55.4. Kamelet source file
Chapter 56. Microsoft SQL Server Sink
Send data to a Microsoft SQL Server Database.
This Kamelet expects a JSON object as the body. The mapping between the JSON fields and the query parameters is done by key, so if you have the following query:
'INSERT INTO accounts (username,city) VALUES (:#username,:#city)'
The Kamelet needs to receive input such as:
'{ "username":"oscerd", "city":"Rome"}'
56.1. Configuration Options
The following table summarizes the configuration options available for the sqlserver-sink Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| databaseName * | Database Name | The name of the database to connect to | string | | |
| password * | Password | The password to use for accessing a secured SQL Server Database | string | | |
| query * | Query | The query to execute against the SQL Server Database | string | | "INSERT INTO accounts (username,city) VALUES (:#username,:#city)" |
| serverName * | Server Name | The server name for the data source | string | | "localhost" |
| username * | Username | The username to use for accessing a secured SQL Server Database | string | | |
| serverPort | Server Port | The server port for the data source | string | "1433" | |
Fields marked with an asterisk (*) are mandatory.
56.2. Dependencies
At runtime, the sqlserver-sink Kamelet relies upon the presence of the following dependencies:
- camel:jackson
- camel:kamelet
- camel:sql
- mvn:org.apache.commons:commons-dbcp2:2.7.0.redhat-00001
- mvn:com.microsoft.sqlserver:mssql-jdbc:9.2.1.jre11
56.3. Usage
This section describes how you can use the sqlserver-sink.
56.3.1. Knative Sink
You can use the sqlserver-sink Kamelet as a Knative sink by binding it to a Knative object.
sqlserver-sink-binding.yaml
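For example, a minimal binding might look as follows (a sketch; the placeholder property values match the procedures below and must be replaced with your own database details):
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: sqlserver-sink-binding
spec:
  source: # the Knative channel that provides the data
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  sink: # the sqlserver-sink Kamelet that writes to the database
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: sqlserver-sink
    properties:
      databaseName: "The Database Name"
      password: "The Password"
      query: "INSERT INTO accounts (username,city) VALUES (:#username,:#city)"
      serverName: "localhost"
      username: "The Username"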
56.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
56.3.1.2. Procedure for using the cluster CLI
- Save the sqlserver-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:
oc apply -f sqlserver-sink-binding.yaml
56.3.1.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind channel:mychannel sqlserver-sink -p "sink.databaseName=The Database Name" -p "sink.password=The Password" -p "sink.query=INSERT INTO accounts (username,city) VALUES (:#username,:#city)" -p "sink.serverName=localhost" -p "sink.username=The Username"
This command creates the KameletBinding in the current namespace on the cluster.
56.3.2. Kafka Sink
You can use the sqlserver-sink Kamelet as a Kafka sink by binding it to a Kafka topic.
sqlserver-sink-binding.yaml
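For example, a minimal binding might look as follows (a sketch; the placeholder property values match the procedures below and must be replaced with your own database details):
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: sqlserver-sink-binding
spec:
  source: # the Kafka topic that provides the data
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
  sink: # the sqlserver-sink Kamelet that writes to the database
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: sqlserver-sink
    properties:
      databaseName: "The Database Name"
      password: "The Password"
      query: "INSERT INTO accounts (username,city) VALUES (:#username,:#city)"
      serverName: "localhost"
      username: "The Username"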
56.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
56.3.2.2. Procedure for using the cluster CLI
- Save the sqlserver-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:
oc apply -f sqlserver-sink-binding.yaml
56.3.2.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic sqlserver-sink -p "sink.databaseName=The Database Name" -p "sink.password=The Password" -p "sink.query=INSERT INTO accounts (username,city) VALUES (:#username,:#city)" -p "sink.serverName=localhost" -p "sink.username=The Username"
This command creates the KameletBinding in the current namespace on the cluster.
56.4. Kamelet source file
Chapter 57. Telegram Source
Receive all messages that people send to your Telegram bot.
To create a bot, contact the @botfather account using the Telegram app.
The source attaches the following headers to the messages:
- chat-id / ce-chatid: the ID of the chat where the message comes from
57.1. Configuration Options
The following table summarizes the configuration options available for the telegram-source Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| authorizationToken * | Token | The token to access your bot on Telegram. You can obtain it from the Telegram @botfather. | string | | |
Fields marked with an asterisk (*) are mandatory.
57.2. Dependencies
At runtime, the telegram-source Kamelet relies upon the presence of the following dependencies:
- camel:jackson
- camel:kamelet
- camel:telegram
- camel:core
57.3. Usage
This section describes how you can use the telegram-source.
57.3.1. Knative Source
You can use the telegram-source Kamelet as a Knative source by binding it to a Knative object.
telegram-source-binding.yaml
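For example, a minimal binding might look as follows (a sketch; the placeholder property value matches the procedures below and must be replaced with your bot token):
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: telegram-source-binding
spec:
  source: # the telegram-source Kamelet that produces the data
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: telegram-source
    properties:
      authorizationToken: "The Token"
  sink: # the Knative channel that receives the data
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel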
57.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
57.3.1.2. Procedure for using the cluster CLI
- Save the telegram-source-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the source by using the following command:
oc apply -f telegram-source-binding.yaml
57.3.1.3. Procedure for using the Kamel CLI
Configure and run the source by using the following command:
kamel bind telegram-source -p "source.authorizationToken=The Token" channel:mychannel
This command creates the KameletBinding in the current namespace on the cluster.
57.3.2. Kafka Source
You can use the telegram-source Kamelet as a Kafka source by binding it to a Kafka topic.
telegram-source-binding.yaml
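For example, a minimal binding might look as follows (a sketch; the placeholder property value matches the procedures below and must be replaced with your bot token):
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: telegram-source-binding
spec:
  source: # the telegram-source Kamelet that produces the data
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: telegram-source
    properties:
      authorizationToken: "The Token"
  sink: # the Kafka topic that receives the data
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic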
57.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
57.3.2.2. Procedure for using the cluster CLI
- Save the telegram-source-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the source by using the following command:
oc apply -f telegram-source-binding.yaml
57.3.2.3. Procedure for using the Kamel CLI
Configure and run the source by using the following command:
kamel bind telegram-source -p "source.authorizationToken=The Token" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
This command creates the KameletBinding in the current namespace on the cluster.
57.4. Kamelet source file
Chapter 58. Throttle Action
The Throttle action allows you to ensure that a specific sink does not get overloaded.
58.1. Configuration Options
The following table summarizes the configuration options available for the throttle-action Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| messages * | Messages Number | The maximum number of messages to send in the given time period | integer | | 10 |
| timePeriod | Time Period | The time period, in milliseconds, during which the maximum number of messages applies | string | "1000" | |
Fields marked with an asterisk (*) are mandatory.
58.2. Dependencies
At runtime, the throttle-action Kamelet relies upon the presence of the following dependencies:
- camel:core
- camel:kamelet
58.3. Usage
This section describes how you can use the throttle-action.
58.3.1. Knative Action
You can use the throttle-action Kamelet as an intermediate step in a Knative binding.
throttle-action-binding.yaml
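For example, a minimal binding might look as follows (a sketch; the timer-source message and the messages value match the procedures below and should be adjusted to your needs):
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: throttle-action-binding
spec:
  source: # a timer-source Kamelet used here as a sample producer
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: timer-source
    properties:
      message: "Hello"
  steps: # the throttle-action applied between source and sink
  - ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: throttle-action
    properties:
      messages: 10
  sink: # the Knative channel that receives the data
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel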
58.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
58.3.1.2. Procedure for using the cluster CLI
- Save the throttle-action-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the action by using the following command:
oc apply -f throttle-action-binding.yaml
58.3.1.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:
kamel bind timer-source?message=Hello --step throttle-action -p "step-0.messages=10" channel:mychannel
This command creates the KameletBinding in the current namespace on the cluster.
58.3.2. Kafka Action
You can use the throttle-action Kamelet as an intermediate step in a Kafka binding.
throttle-action-binding.yaml
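For example, a minimal binding might look as follows (a sketch; the timer-source message and the messages value match the procedures below and should be adjusted to your needs):
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: throttle-action-binding
spec:
  source: # a timer-source Kamelet used here as a sample producer
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: timer-source
    properties:
      message: "Hello"
  steps: # the throttle-action applied between source and sink
  - ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: throttle-action
    properties:
      messages: 1
  sink: # the Kafka topic that receives the data
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic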
58.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
58.3.2.2. Procedure for using the cluster CLI
- Save the throttle-action-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the action by using the following command:
oc apply -f throttle-action-binding.yaml
58.3.2.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:
kamel bind timer-source?message=Hello --step throttle-action -p "step-0.messages=1" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
This command creates the KameletBinding in the current namespace on the cluster.
58.4. Kamelet source file
Chapter 59. Timer Source
Produces periodic events with a custom payload.
59.1. Configuration Options
The following table summarizes the configuration options available for the timer-source Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| message * | Message | The message to generate | string | | "hello world" |
| contentType | Content Type | The content type of the message being generated | string | "text/plain" | |
| period | Period | The interval between two events, in milliseconds | integer | 1000 | |
Fields marked with an asterisk (*) are mandatory.
59.2. Dependencies
At runtime, the timer-source Kamelet relies upon the presence of the following dependencies:
- camel:core
- camel:timer
- camel:kamelet
59.3. Usage
This section describes how you can use the timer-source.
59.3.1. Knative Source
You can use the timer-source Kamelet as a Knative source by binding it to a Knative object.
timer-source-binding.yaml
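For example, a minimal binding might look as follows (a sketch; the message value matches the procedures below and should be replaced with your own payload):
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: timer-source-binding
spec:
  source: # the timer-source Kamelet that produces periodic events
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: timer-source
    properties:
      message: "hello world"
  sink: # the Knative channel that receives the data
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel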
59.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
59.3.1.2. Procedure for using the cluster CLI
- Save the timer-source-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the source by using the following command:
oc apply -f timer-source-binding.yaml
59.3.1.3. Procedure for using the Kamel CLI
Configure and run the source by using the following command:
kamel bind timer-source -p "source.message=hello world" channel:mychannel
This command creates the KameletBinding in the current namespace on the cluster.
59.3.2. Kafka Source
You can use the timer-source Kamelet as a Kafka source by binding it to a Kafka topic.
timer-source-binding.yaml
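For example, a minimal binding might look as follows (a sketch; the message value matches the procedures below and should be replaced with your own payload):
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: timer-source-binding
spec:
  source: # the timer-source Kamelet that produces periodic events
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: timer-source
    properties:
      message: "hello world"
  sink: # the Kafka topic that receives the data
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic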
59.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
59.3.2.2. Procedure for using the cluster CLI
- Save the timer-source-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the source by using the following command:
oc apply -f timer-source-binding.yaml
59.3.2.3. Procedure for using the Kamel CLI
Configure and run the source by using the following command:
kamel bind timer-source -p "source.message=hello world" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
This command creates the KameletBinding in the current namespace on the cluster.
59.4. Kamelet source file
Chapter 60. Timestamp Router Action
Update the topic field as a function of the original topic name and the record timestamp.
60.1. Configuration Options
The following table summarizes the configuration options available for the timestamp-router-action Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| timestampFormat | Timestamp Format | Format string for the timestamp that is compatible with java.text.SimpleDateFormat | string | "yyyyMMdd" | |
| timestampHeaderName | Timestamp Header Name | The name of the header containing a timestamp | string | "kafka.TIMESTAMP" | |
| topicFormat | Topic Format | Format string which can contain '$[topic]' and '$[timestamp]' as placeholders for the topic and timestamp, respectively | string | "topic-$[timestamp]" | |
Fields marked with an asterisk (*) are mandatory.
60.2. Dependencies
At runtime, the timestamp-router-action Kamelet relies upon the presence of the following dependencies:
- github:openshift-integration.kamelet-catalog:camel-kamelets-utils:kamelet-catalog-1.6-SNAPSHOT
- camel:kamelet
- camel:core
60.3. Usage
This section describes how you can use the timestamp-router-action.
60.3.1. Knative Action
You can use the timestamp-router-action Kamelet as an intermediate step in a Knative binding.
timestamp-router-action-binding.yaml
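For example, a minimal binding might look as follows (a sketch; the action is used with its defaults here, matching the procedures below, and the timer-source is just a sample producer):
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: timestamp-router-action-binding
spec:
  source: # a timer-source Kamelet used here as a sample producer
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: timer-source
    properties:
      message: "Hello"
  steps: # the timestamp-router-action applied between source and sink
  - ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: timestamp-router-action
  sink: # the Knative channel that receives the data
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel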
60.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
60.3.1.2. Procedure for using the cluster CLI
- Save the timestamp-router-action-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the action by using the following command:
oc apply -f timestamp-router-action-binding.yaml
60.3.1.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:
kamel bind timer-source?message=Hello --step timestamp-router-action channel:mychannel
This command creates the KameletBinding in the current namespace on the cluster.
60.3.2. Kafka Action
You can use the timestamp-router-action Kamelet as an intermediate step in a Kafka binding.
timestamp-router-action-binding.yaml
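For example, a minimal binding might look as follows (a sketch; the action is used with its defaults here, matching the procedures below, and the timer-source is just a sample producer):
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: timestamp-router-action-binding
spec:
  source: # a timer-source Kamelet used here as a sample producer
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: timer-source
    properties:
      message: "Hello"
  steps: # the timestamp-router-action applied between source and sink
  - ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: timestamp-router-action
  sink: # the Kafka topic that receives the data
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic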
60.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
60.3.2.2. Procedure for using the cluster CLI
- Save the timestamp-router-action-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the action by using the following command:
oc apply -f timestamp-router-action-binding.yaml
60.3.2.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:
kamel bind timer-source?message=Hello --step timestamp-router-action kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
This command creates the KameletBinding in the current namespace on the cluster.
60.4. Kamelet source file
Chapter 61. Value to Key Action
Replace the Kafka record key with a new key formed from a subset of fields in the message body.
61.1. Configuration Options
The following table summarizes the configuration options available for the value-to-key-action Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| fields * | Fields | Comma-separated list of fields to be used to form the new key | string | | |
Fields marked with an asterisk (*) are mandatory.
61.2. Dependencies
At runtime, the value-to-key-action Kamelet relies upon the presence of the following dependencies:
- github:openshift-integration.kamelet-catalog:camel-kamelets-utils:kamelet-catalog-1.6-SNAPSHOT
- camel:core
- camel:jackson
- camel:kamelet
61.3. Usage
This section describes how you can use the value-to-key-action.
61.3.1. Knative Action
You can use the value-to-key-action Kamelet as an intermediate step in a Knative binding.
value-to-key-action-binding.yaml
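For example, a minimal binding might look as follows (a sketch; the fields placeholder matches the procedures below and must be replaced with a real comma-separated field list):
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: value-to-key-action-binding
spec:
  source: # a timer-source Kamelet used here as a sample producer
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: timer-source
    properties:
      message: "Hello"
  steps: # the value-to-key-action applied between source and sink
  - ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: value-to-key-action
    properties:
      fields: "The Fields"
  sink: # the Knative channel that receives the data
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel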
61.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
61.3.1.2. Procedure for using the cluster CLI
- Save the value-to-key-action-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the action by using the following command:
oc apply -f value-to-key-action-binding.yaml
61.3.1.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:
kamel bind timer-source?message=Hello --step value-to-key-action -p "step-0.fields=The Fields" channel:mychannel
This command creates the KameletBinding in the current namespace on the cluster.
61.3.2. Kafka Action
You can use the value-to-key-action Kamelet as an intermediate step in a Kafka binding.
value-to-key-action-binding.yaml
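For example, a minimal binding might look as follows (a sketch; the fields placeholder matches the procedures below and must be replaced with a real comma-separated field list):
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: value-to-key-action-binding
spec:
  source: # a timer-source Kamelet used here as a sample producer
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: timer-source
    properties:
      message: "Hello"
  steps: # the value-to-key-action applied between source and sink
  - ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: value-to-key-action
    properties:
      fields: "The Fields"
  sink: # the Kafka topic that receives the data
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic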
61.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
61.3.2.2. Procedure for using the cluster CLI
- Save the value-to-key-action-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the action by using the following command:
oc apply -f value-to-key-action-binding.yaml
61.3.2.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:
kamel bind timer-source?message=Hello --step value-to-key-action -p "step-0.fields=The Fields" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
This command creates the KameletBinding in the current namespace on the cluster.