Kamelets Reference
Preface
Making open source more inclusive
Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.
Chapter 1. Supported Kamelets
The following Kamelets are supported in Camel K 1.6:
- Avro Deserialize action
- Avro Serialize action
- AWS 2 Kinesis sink
- AWS 2 Kinesis source
- AWS 2 Lambda sink
- AWS 2 Simple Notification System sink
- AWS 2 Simple Queue Service sink
- AWS 2 Simple Queue Service source
- AWS 2 Simple Queue Service FIFO sink
- AWS 2 S3 sink
- AWS 2 S3 source
- AWS 2 S3 streaming upload sink
- Cassandra sink (Technology Preview)
- Cassandra source (Technology Preview)
- ElasticSearch Index sink (Technology Preview)
- Extract Field action
- FTP sink
- FTP source
- Has Header Filter action
- Hoist Field action
- HTTP sink
- Insert Field action
- Insert Header action
- Is Tombstone Filter action
- Jira source (Technology Preview)
- JMS sink
- JMS source
- JSON Deserialize action
- JSON Serialize action
- Kafka sink
- Kafka source
- Kafka Topic name filter action (Kafka only)
- Log sink (for development and testing purposes)
- MariaDB sink
- Mask Fields action
- Message TimeStamp action
- MongoDB sink
- MongoDB source
- MySQL sink
- PostgreSQL sink
- Predicate filter action
- Protobuf Deserialize action
- Protobuf Serialize action
- Regex Router action
- Replace Field action
- Salesforce source
- SFTP sink
- SFTP source
- Slack source
- SQL Server sink
- Telegram source (Technology Preview)
- Timer source (for development and testing purposes)
- TimeStamp Router action
- Value to Key action
Chapter 2. Avro Deserialize Action
Deserialize payload to Avro
2.1. Configuration Options
The following table summarizes the configuration options available for the avro-deserialize-action Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| schema * | Schema | The Avro schema to use during deserialization (as single-line, using JSON format) | string | | |
| validate | Validate | Indicates if the content must be validated against the schema | boolean | | |
Fields marked with an asterisk (*) are mandatory.
2.2. Dependencies
At runtime, the avro-deserialize-action Kamelet relies upon the presence of the following dependencies:
- mvn:org.apache.camel.kamelets:camel-kamelets-utils:1.0.0.fuse-800048-redhat-00001
- camel:kamelet
- camel:core
- camel:jackson-avro
2.3. Usage
This section describes how you can use the avro-deserialize-action.
2.3.1. Knative Action
You can use the avro-deserialize-action Kamelet as an intermediate step in a Knative binding.
avro-deserialize-action-binding.yaml
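The YAML listing for this binding was lost in extraction. The following is a minimal sketch of what the file might contain, mirroring the Kamel CLI example in this section (the timer message and schema values are illustrative):

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: avro-deserialize-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: timer-source
    properties:
      message: '{"first":"Ada","last":"Lovelace"}'
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1alpha1
        name: json-deserialize-action
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1alpha1
        name: avro-serialize-action
      properties:
        schema: '{"type":"record","namespace":"com.example","name":"FullName","fields":[{"name":"first","type":"string"},{"name":"last","type":"string"}]}'
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1alpha1
        name: avro-deserialize-action
      properties:
        schema: '{"type":"record","namespace":"com.example","name":"FullName","fields":[{"name":"first","type":"string"},{"name":"last","type":"string"}]}'
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
```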
2.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
2.3.1.2. Procedure for using the cluster CLI
- Save the avro-deserialize-action-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the action by using the following command:

  oc apply -f avro-deserialize-action-binding.yaml
2.3.1.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:

kamel bind "timer-source?message={\"first\":\"Ada\",\"last\":\"Lovelace\"}" --step json-deserialize-action --step avro-serialize-action -p "step-1.schema={\"type\":\"record\", \"namespace\": \"com.example\", \"name\": \"FullName\", \"fields\": [{\"name\": \"first\", \"type\": \"string\"},{\"name\": \"last\", \"type\": \"string\"}]}" --step avro-deserialize-action -p "step-2.schema={\"type\":\"record\", \"namespace\": \"com.example\", \"name\": \"FullName\", \"fields\": [{\"name\": \"first\", \"type\": \"string\"},{\"name\": \"last\", \"type\": \"string\"}]}" --step json-serialize-action channel:mychannel
This command creates the KameletBinding in the current namespace on the cluster.
2.3.2. Kafka Action
You can use the avro-deserialize-action Kamelet as an intermediate step in a Kafka binding.
avro-deserialize-action-binding.yaml
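The YAML listing for the Kafka variant of this binding is missing from this page. A minimal sketch, differing from the Knative variant only in the sink (values are illustrative):

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: avro-deserialize-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: timer-source
    properties:
      message: '{"first":"Ada","last":"Lovelace"}'
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1alpha1
        name: json-deserialize-action
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1alpha1
        name: avro-serialize-action
      properties:
        schema: '{"type":"record","namespace":"com.example","name":"FullName","fields":[{"name":"first","type":"string"},{"name":"last","type":"string"}]}'
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1alpha1
        name: avro-deserialize-action
      properties:
        schema: '{"type":"record","namespace":"com.example","name":"FullName","fields":[{"name":"first","type":"string"},{"name":"last","type":"string"}]}'
  sink:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
```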
2.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed in the OpenShift cluster you’re connected to.
2.3.2.2. Procedure for using the cluster CLI
- Save the avro-deserialize-action-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the action by using the following command:

  oc apply -f avro-deserialize-action-binding.yaml
2.3.2.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:

kamel bind "timer-source?message={\"first\":\"Ada\",\"last\":\"Lovelace\"}" --step json-deserialize-action --step avro-serialize-action -p "step-1.schema={\"type\":\"record\", \"namespace\": \"com.example\", \"name\": \"FullName\", \"fields\": [{\"name\": \"first\", \"type\": \"string\"},{\"name\": \"last\", \"type\": \"string\"}]}" --step avro-deserialize-action -p "step-2.schema={\"type\":\"record\", \"namespace\": \"com.example\", \"name\": \"FullName\", \"fields\": [{\"name\": \"first\", \"type\": \"string\"},{\"name\": \"last\", \"type\": \"string\"}]}" --step json-serialize-action kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
This command creates the KameletBinding in the current namespace on the cluster.
2.4. Kamelet source file
Chapter 3. Avro Serialize Action
Serialize payload to Avro
3.1. Configuration Options
The following table summarizes the configuration options available for the avro-serialize-action Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| schema * | Schema | The Avro schema to use during serialization (as single-line, using JSON format) | string | | |
| validate | Validate | Indicates if the content must be validated against the schema | boolean | | |
Fields marked with an asterisk (*) are mandatory.
3.2. Dependencies
At runtime, the avro-serialize-action Kamelet relies upon the presence of the following dependencies:
- mvn:org.apache.camel.kamelets:camel-kamelets-utils:1.0.0.fuse-800048-redhat-00001
- camel:kamelet
- camel:core
- camel:jackson-avro
3.3. Usage
This section describes how you can use the avro-serialize-action.
3.3.1. Knative Action
You can use the avro-serialize-action Kamelet as an intermediate step in a Knative binding.
avro-serialize-action-binding.yaml
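The YAML listing for this binding did not survive extraction. A minimal sketch of what it might contain, mirroring the Kamel CLI example in this section (the timer message and schema are illustrative):

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: avro-serialize-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: timer-source
    properties:
      message: '{"first":"Ada","last":"Lovelace"}'
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1alpha1
        name: json-deserialize-action
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1alpha1
        name: avro-serialize-action
      properties:
        schema: '{"type":"record","namespace":"com.example","name":"FullName","fields":[{"name":"first","type":"string"},{"name":"last","type":"string"}]}'
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
```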
3.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
3.3.1.2. Procedure for using the cluster CLI
- Save the avro-serialize-action-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the action by using the following command:

  oc apply -f avro-serialize-action-binding.yaml
3.3.1.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:

kamel bind "timer-source?message={\"first\":\"Ada\",\"last\":\"Lovelace\"}" --step json-deserialize-action --step avro-serialize-action -p "step-1.schema={\"type\":\"record\", \"namespace\": \"com.example\", \"name\": \"FullName\", \"fields\": [{\"name\": \"first\", \"type\": \"string\"},{\"name\": \"last\", \"type\": \"string\"}]}" channel:mychannel
This command creates the KameletBinding in the current namespace on the cluster.
3.3.2. Kafka Action
You can use the avro-serialize-action Kamelet as an intermediate step in a Kafka binding.
avro-serialize-action-binding.yaml
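The Kafka variant of this binding file is missing from this page. A minimal sketch, differing from the Knative variant only in the sink (values are illustrative):

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: avro-serialize-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: timer-source
    properties:
      message: '{"first":"Ada","last":"Lovelace"}'
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1alpha1
        name: json-deserialize-action
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1alpha1
        name: avro-serialize-action
      properties:
        schema: '{"type":"record","namespace":"com.example","name":"FullName","fields":[{"name":"first","type":"string"},{"name":"last","type":"string"}]}'
  sink:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
```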
3.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed in the OpenShift cluster you’re connected to.
3.3.2.2. Procedure for using the cluster CLI
- Save the avro-serialize-action-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the action by using the following command:

  oc apply -f avro-serialize-action-binding.yaml
3.3.2.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:

kamel bind "timer-source?message={\"first\":\"Ada\",\"last\":\"Lovelace\"}" --step json-deserialize-action --step avro-serialize-action -p "step-1.schema={\"type\":\"record\", \"namespace\": \"com.example\", \"name\": \"FullName\", \"fields\": [{\"name\": \"first\", \"type\": \"string\"},{\"name\": \"last\", \"type\": \"string\"}]}" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
This command creates the KameletBinding in the current namespace on the cluster.
3.4. Kamelet source file
Chapter 4. AWS Kinesis Sink
Send data to AWS Kinesis.
The Kamelet expects the following header:
- partition / ce-partition: sets the Kinesis partition key
If this header is not set, the exchange ID is used instead.
The Kamelet also recognizes the following optional header:
- sequence-number / ce-sequencenumber: sets the sequence number
4.1. Configuration Options
The following table summarizes the configuration options available for the aws-kinesis-sink Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| accessKey * | Access Key | The access key obtained from AWS | string | | |
| region * | AWS Region | The AWS region to connect to | string | | "eu-west-1" |
| secretKey * | Secret Key | The secret key obtained from AWS | string | | |
| stream * | Stream Name | The Kinesis stream that you want to access (needs to be created in advance) | string | | |
Fields marked with an asterisk (*) are mandatory.
4.2. Dependencies
At runtime, the aws-kinesis-sink Kamelet relies upon the presence of the following dependencies:
- camel:aws2-kinesis
- camel:kamelet
4.3. Usage
This section describes how you can use the aws-kinesis-sink.
4.3.1. Knative Sink
You can use the aws-kinesis-sink Kamelet as a Knative sink by binding it to a Knative object.
aws-kinesis-sink-binding.yaml
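The YAML listing for this binding is missing from this page. A minimal sketch, using the same placeholder property values as the Kamel CLI example in this section:

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: aws-kinesis-sink-binding
spec:
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: aws-kinesis-sink
    properties:
      accessKey: "The Access Key"
      region: "eu-west-1"
      secretKey: "The Secret Key"
      stream: "The Stream Name"
```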
4.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
4.3.1.2. Procedure for using the cluster CLI
- Save the aws-kinesis-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:

  oc apply -f aws-kinesis-sink-binding.yaml
4.3.1.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:

kamel bind channel:mychannel aws-kinesis-sink -p "sink.accessKey=The Access Key" -p "sink.region=eu-west-1" -p "sink.secretKey=The Secret Key" -p "sink.stream=The Stream Name"
This command creates the KameletBinding in the current namespace on the cluster.
4.3.2. Kafka Sink
You can use the aws-kinesis-sink Kamelet as a Kafka sink by binding it to a Kafka topic.
aws-kinesis-sink-binding.yaml
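The Kafka variant of this binding file is missing; a minimal sketch, differing from the Knative variant only in the source (values are illustrative):

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: aws-kinesis-sink-binding
spec:
  source:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: aws-kinesis-sink
    properties:
      accessKey: "The Access Key"
      region: "eu-west-1"
      secretKey: "The Secret Key"
      stream: "The Stream Name"
```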
4.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed in the OpenShift cluster you’re connected to.
4.3.2.2. Procedure for using the cluster CLI
- Save the aws-kinesis-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:

  oc apply -f aws-kinesis-sink-binding.yaml
4.3.2.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:

kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic aws-kinesis-sink -p "sink.accessKey=The Access Key" -p "sink.region=eu-west-1" -p "sink.secretKey=The Secret Key" -p "sink.stream=The Stream Name"
This command creates the KameletBinding in the current namespace on the cluster.
4.4. Kamelet source file
Chapter 5. AWS Kinesis Source
Receive data from AWS Kinesis.
5.1. Configuration Options
The following table summarizes the configuration options available for the aws-kinesis-source Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| accessKey * | Access Key | The access key obtained from AWS | string | | |
| region * | AWS Region | The AWS region to connect to | string | | "eu-west-1" |
| secretKey * | Secret Key | The secret key obtained from AWS | string | | |
| stream * | Stream Name | The Kinesis stream that you want to access (needs to be created in advance) | string | | |
Fields marked with an asterisk (*) are mandatory.
5.2. Dependencies
At runtime, the aws-kinesis-source Kamelet relies upon the presence of the following dependencies:
- camel:gson
- camel:kamelet
- camel:aws2-kinesis
5.3. Usage
This section describes how you can use the aws-kinesis-source.
5.3.1. Knative Source
You can use the aws-kinesis-source Kamelet as a Knative source by binding it to a Knative object.
aws-kinesis-source-binding.yaml
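The YAML listing for this binding is missing from this page. A minimal sketch, using the same placeholder property values as the Kamel CLI example in this section:

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: aws-kinesis-source-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: aws-kinesis-source
    properties:
      accessKey: "The Access Key"
      region: "eu-west-1"
      secretKey: "The Secret Key"
      stream: "The Stream Name"
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
```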
5.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
5.3.1.2. Procedure for using the cluster CLI
- Save the aws-kinesis-source-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the source by using the following command:

  oc apply -f aws-kinesis-source-binding.yaml
5.3.1.3. Procedure for using the Kamel CLI
Configure and run the source by using the following command:

kamel bind aws-kinesis-source -p "source.accessKey=The Access Key" -p "source.region=eu-west-1" -p "source.secretKey=The Secret Key" -p "source.stream=The Stream Name" channel:mychannel
This command creates the KameletBinding in the current namespace on the cluster.
5.3.2. Kafka Source
You can use the aws-kinesis-source Kamelet as a Kafka source by binding it to a Kafka topic.
aws-kinesis-source-binding.yaml
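The Kafka variant of this binding file is missing; a minimal sketch, differing from the Knative variant only in the sink (values are illustrative):

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: aws-kinesis-source-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: aws-kinesis-source
    properties:
      accessKey: "The Access Key"
      region: "eu-west-1"
      secretKey: "The Secret Key"
      stream: "The Stream Name"
  sink:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
```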
5.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed in the OpenShift cluster you’re connected to.
5.3.2.2. Procedure for using the cluster CLI
- Save the aws-kinesis-source-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the source by using the following command:

  oc apply -f aws-kinesis-source-binding.yaml
5.3.2.3. Procedure for using the Kamel CLI
Configure and run the source by using the following command:

kamel bind aws-kinesis-source -p "source.accessKey=The Access Key" -p "source.region=eu-west-1" -p "source.secretKey=The Secret Key" -p "source.stream=The Stream Name" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
This command creates the KameletBinding in the current namespace on the cluster.
5.4. Kamelet source file
Chapter 6. AWS Lambda Sink
Send a payload to an AWS Lambda function
6.1. Configuration Options
The following table summarizes the configuration options available for the aws-lambda-sink Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| accessKey * | Access Key | The access key obtained from AWS | string | | |
| function * | Function Name | The Lambda Function name | string | | |
| region * | AWS Region | The AWS region to connect to | string | | "eu-west-1" |
| secretKey * | Secret Key | The secret key obtained from AWS | string | | |
Fields marked with an asterisk (*) are mandatory.
6.2. Dependencies
At runtime, the aws-lambda-sink Kamelet relies upon the presence of the following dependencies:
- camel:kamelet
- camel:aws2-lambda
6.3. Usage
This section describes how you can use the aws-lambda-sink.
6.3.1. Knative Sink
You can use the aws-lambda-sink Kamelet as a Knative sink by binding it to a Knative object.
aws-lambda-sink-binding.yaml
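The YAML listing for this binding is missing from this page. A minimal sketch, using the same placeholder property values as the Kamel CLI example in this section:

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: aws-lambda-sink-binding
spec:
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: aws-lambda-sink
    properties:
      accessKey: "The Access Key"
      function: "The Function Name"
      region: "eu-west-1"
      secretKey: "The Secret Key"
```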
6.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
6.3.1.2. Procedure for using the cluster CLI
- Save the aws-lambda-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:

  oc apply -f aws-lambda-sink-binding.yaml
6.3.1.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:

kamel bind channel:mychannel aws-lambda-sink -p "sink.accessKey=The Access Key" -p "sink.function=The Function Name" -p "sink.region=eu-west-1" -p "sink.secretKey=The Secret Key"
This command creates the KameletBinding in the current namespace on the cluster.
6.3.2. Kafka Sink
You can use the aws-lambda-sink Kamelet as a Kafka sink by binding it to a Kafka topic.
aws-lambda-sink-binding.yaml
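The Kafka variant of this binding file is missing; a minimal sketch, differing from the Knative variant only in the source (values are illustrative):

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: aws-lambda-sink-binding
spec:
  source:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: aws-lambda-sink
    properties:
      accessKey: "The Access Key"
      function: "The Function Name"
      region: "eu-west-1"
      secretKey: "The Secret Key"
```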
6.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed in the OpenShift cluster you’re connected to.
6.3.2.2. Procedure for using the cluster CLI
- Save the aws-lambda-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:

  oc apply -f aws-lambda-sink-binding.yaml
6.3.2.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:

kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic aws-lambda-sink -p "sink.accessKey=The Access Key" -p "sink.function=The Function Name" -p "sink.region=eu-west-1" -p "sink.secretKey=The Secret Key"
This command creates the KameletBinding in the current namespace on the cluster.
6.4. Kamelet source file
Chapter 7. AWS SNS Sink
Send a message to an AWS SNS topic.
7.1. Configuration Options
The following table summarizes the configuration options available for the aws-sns-sink Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| accessKey * | Access Key | The access key obtained from AWS | string | | |
| region * | AWS Region | The AWS region to connect to | string | | "eu-west-1" |
| secretKey * | Secret Key | The secret key obtained from AWS | string | | |
| topicNameOrArn * | Topic Name | The SNS topic name or ARN | string | | |
| autoCreateTopic | Autocreate Topic | Whether to automatically create the SNS topic | boolean | | |
Fields marked with an asterisk (*) are mandatory.
7.2. Dependencies
At runtime, the aws-sns-sink Kamelet relies upon the presence of the following dependencies:
- camel:kamelet
- camel:aws2-sns
7.3. Usage
This section describes how you can use the aws-sns-sink.
7.3.1. Knative Sink
You can use the aws-sns-sink Kamelet as a Knative sink by binding it to a Knative object.
aws-sns-sink-binding.yaml
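The YAML listing for this binding is missing from this page. A minimal sketch, using the same placeholder property values as the Kamel CLI example in this section:

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: aws-sns-sink-binding
spec:
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: aws-sns-sink
    properties:
      accessKey: "The Access Key"
      region: "eu-west-1"
      secretKey: "The Secret Key"
      topicNameOrArn: "The Topic Name"
```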
7.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
7.3.1.2. Procedure for using the cluster CLI
- Save the aws-sns-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:

  oc apply -f aws-sns-sink-binding.yaml
7.3.1.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:

kamel bind channel:mychannel aws-sns-sink -p "sink.accessKey=The Access Key" -p "sink.region=eu-west-1" -p "sink.secretKey=The Secret Key" -p "sink.topicNameOrArn=The Topic Name"
This command creates the KameletBinding in the current namespace on the cluster.
7.3.2. Kafka Sink
You can use the aws-sns-sink Kamelet as a Kafka sink by binding it to a Kafka topic.
aws-sns-sink-binding.yaml
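The Kafka variant of this binding file is missing; a minimal sketch, differing from the Knative variant only in the source (values are illustrative):

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: aws-sns-sink-binding
spec:
  source:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: aws-sns-sink
    properties:
      accessKey: "The Access Key"
      region: "eu-west-1"
      secretKey: "The Secret Key"
      topicNameOrArn: "The Topic Name"
```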
7.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed in the OpenShift cluster you’re connected to.
7.3.2.2. Procedure for using the cluster CLI
- Save the aws-sns-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:

  oc apply -f aws-sns-sink-binding.yaml
7.3.2.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:

kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic aws-sns-sink -p "sink.accessKey=The Access Key" -p "sink.region=eu-west-1" -p "sink.secretKey=The Secret Key" -p "sink.topicNameOrArn=The Topic Name"
This command creates the KameletBinding in the current namespace on the cluster.
7.4. Kamelet source file
Chapter 8. AWS SQS Sink
Send a message to an AWS SQS queue.
8.1. Configuration Options
The following table summarizes the configuration options available for the aws-sqs-sink Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| accessKey * | Access Key | The access key obtained from AWS | string | | |
| queueNameOrArn * | Queue Name | The SQS queue name or ARN | string | | |
| region * | AWS Region | The AWS region to connect to | string | | "eu-west-1" |
| secretKey * | Secret Key | The secret key obtained from AWS | string | | |
| autoCreateQueue | Autocreate Queue | Whether to automatically create the SQS queue | boolean | | |
Fields marked with an asterisk (*) are mandatory.
8.2. Dependencies
At runtime, the aws-sqs-sink Kamelet relies upon the presence of the following dependencies:
- camel:aws2-sqs
- camel:core
- camel:kamelet
8.3. Usage
This section describes how you can use the aws-sqs-sink.
8.3.1. Knative Sink
You can use the aws-sqs-sink Kamelet as a Knative sink by binding it to a Knative object.
aws-sqs-sink-binding.yaml
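The YAML listing for this binding is missing from this page. A minimal sketch, using the same placeholder property values as the Kamel CLI example in this section:

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: aws-sqs-sink-binding
spec:
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: aws-sqs-sink
    properties:
      accessKey: "The Access Key"
      queueNameOrArn: "The Queue Name"
      region: "eu-west-1"
      secretKey: "The Secret Key"
```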
8.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
8.3.1.2. Procedure for using the cluster CLI
- Save the aws-sqs-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:

  oc apply -f aws-sqs-sink-binding.yaml
8.3.1.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:

kamel bind channel:mychannel aws-sqs-sink -p "sink.accessKey=The Access Key" -p "sink.queueNameOrArn=The Queue Name" -p "sink.region=eu-west-1" -p "sink.secretKey=The Secret Key"
This command creates the KameletBinding in the current namespace on the cluster.
8.3.2. Kafka Sink
You can use the aws-sqs-sink Kamelet as a Kafka sink by binding it to a Kafka topic.
aws-sqs-sink-binding.yaml
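The Kafka variant of this binding file is missing; a minimal sketch, differing from the Knative variant only in the source (values are illustrative):

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: aws-sqs-sink-binding
spec:
  source:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: aws-sqs-sink
    properties:
      accessKey: "The Access Key"
      queueNameOrArn: "The Queue Name"
      region: "eu-west-1"
      secretKey: "The Secret Key"
```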
8.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed in the OpenShift cluster you’re connected to.
8.3.2.2. Procedure for using the cluster CLI
- Save the aws-sqs-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:

  oc apply -f aws-sqs-sink-binding.yaml
8.3.2.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:

kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic aws-sqs-sink -p "sink.accessKey=The Access Key" -p "sink.queueNameOrArn=The Queue Name" -p "sink.region=eu-west-1" -p "sink.secretKey=The Secret Key"
This command creates the KameletBinding in the current namespace on the cluster.
8.4. Kamelet source file
Chapter 9. AWS SQS Source
Receive data from AWS SQS.
9.1. Configuration Options
The following table summarizes the configuration options available for the aws-sqs-source Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| accessKey * | Access Key | The access key obtained from AWS | string | | |
| queueNameOrArn * | Queue Name | The SQS queue name or ARN | string | | |
| region * | AWS Region | The AWS region to connect to | string | | "eu-west-1" |
| secretKey * | Secret Key | The secret key obtained from AWS | string | | |
| autoCreateQueue | Autocreate Queue | Whether to automatically create the SQS queue | boolean | | |
| deleteAfterRead | Auto-delete Messages | Delete messages after consuming them | boolean | | |
Fields marked with an asterisk (*) are mandatory.
9.2. Dependencies
At runtime, the aws-sqs-source Kamelet relies upon the presence of the following dependencies:
- camel:aws2-sqs
- camel:core
- camel:kamelet
- camel:jackson
9.3. Usage
This section describes how you can use the aws-sqs-source.
9.3.1. Knative Source
You can use the aws-sqs-source Kamelet as a Knative source by binding it to a Knative object.
aws-sqs-source-binding.yaml
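The contents of the binding file are not reproduced here. As a rough sketch, based on the configuration options in Section 9.1, aws-sqs-source-binding.yaml might look like the following (the property values are placeholders that you replace with your own configuration):

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: aws-sqs-source-binding
spec:
  # The aws-sqs-source Kamelet produces the events
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: aws-sqs-source
    properties:
      accessKey: "The Access Key"
      queueNameOrArn: "The Queue Name"
      region: "eu-west-1"
      secretKey: "The Secret Key"
  # Events are delivered to a Knative channel
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
```

For the Kafka variant in Section 9.3.2, the Channel reference under sink is replaced by a Kafka topic reference (apiVersion: kafka.strimzi.io/v1beta1, kind: KafkaTopic, name: my-topic).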
9.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
9.3.1.2. Procedure for using the cluster CLI
- Save the aws-sqs-source-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the source by using the following command:

  oc apply -f aws-sqs-source-binding.yaml
9.3.1.3. Procedure for using the Kamel CLI
Configure and run the source by using the following command:

kamel bind aws-sqs-source -p "source.accessKey=The Access Key" -p "source.queueNameOrArn=The Queue Name" -p "source.region=eu-west-1" -p "source.secretKey=The Secret Key" channel:mychannel
This command creates the KameletBinding in the current namespace on the cluster.
9.3.2. Kafka Source
You can use the aws-sqs-source Kamelet as a Kafka source by binding it to a Kafka topic.
aws-sqs-source-binding.yaml
9.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
9.3.2.2. Procedure for using the cluster CLI
- Save the aws-sqs-source-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the source by using the following command:

  oc apply -f aws-sqs-source-binding.yaml
9.3.2.3. Procedure for using the Kamel CLI
Configure and run the source by using the following command:

kamel bind aws-sqs-source -p "source.accessKey=The Access Key" -p "source.queueNameOrArn=The Queue Name" -p "source.region=eu-west-1" -p "source.secretKey=The Secret Key" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
This command creates the KameletBinding in the current namespace on the cluster.
9.4. Kamelet source file
Chapter 10. AWS 2 Simple Queue Service FIFO sink
Send messages to an AWS SQS FIFO queue.
10.1. Configuration Options
The following table summarizes the configuration options available for the aws-sqs-fifo-sink Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| accessKey * | Access Key | The access key obtained from AWS | string | | |
| queueNameOrArn * | Queue Name | The SQS Queue name or ARN | string | | |
| region * | AWS Region | The AWS region to connect to | string | | eu-west-1 |
| secretKey * | Secret Key | The secret key obtained from AWS | string | | |
| autoCreateQueue | Autocreate Queue | Whether to automatically create the SQS queue | boolean | false | |
| contentBasedDeduplication | Content-Based Deduplication | Use content-based deduplication (must be enabled on the SQS FIFO queue first) | boolean | false | |
Fields marked with an asterisk (*) are mandatory.
10.2. Dependencies
At runtime, the aws-sqs-fifo-sink Kamelet relies upon the presence of the following dependencies:
- camel:aws2-sqs
- camel:core
- camel:kamelet
10.3. Usage
This section describes how you can use the aws-sqs-fifo-sink.
10.3.1. Knative Sink
You can use the aws-sqs-fifo-sink Kamelet as a Knative sink by binding it to a Knative object.
aws-sqs-fifo-sink-binding.yaml
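The contents of the binding file are not reproduced here. As a rough sketch, based on the configuration options in Section 10.1, aws-sqs-fifo-sink-binding.yaml might look like the following (the property values are placeholders that you replace with your own configuration):

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: aws-sqs-fifo-sink-binding
spec:
  # Events are read from a Knative channel
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  # The aws-sqs-fifo-sink Kamelet sends them to the FIFO queue
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: aws-sqs-fifo-sink
    properties:
      accessKey: "The Access Key"
      queueNameOrArn: "The Queue Name"
      region: "eu-west-1"
      secretKey: "The Secret Key"
```

For the Kafka variant in Section 10.3.2, the Channel reference under source is replaced by a Kafka topic reference (apiVersion: kafka.strimzi.io/v1beta1, kind: KafkaTopic, name: my-topic).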
10.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
10.3.1.2. Procedure for using the cluster CLI
- Save the aws-sqs-fifo-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command:

  oc apply -f aws-sqs-fifo-sink-binding.yaml
10.3.1.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:

kamel bind channel:mychannel aws-sqs-fifo-sink -p "sink.accessKey=The Access Key" -p "sink.queueNameOrArn=The Queue Name" -p "sink.region=eu-west-1" -p "sink.secretKey=The Secret Key"
This command creates the KameletBinding in the current namespace on the cluster.
10.3.2. Kafka Sink
You can use the aws-sqs-fifo-sink Kamelet as a Kafka sink by binding it to a Kafka topic.
aws-sqs-fifo-sink-binding.yaml
10.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
10.3.2.2. Procedure for using the cluster CLI
- Save the aws-sqs-fifo-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command:

  oc apply -f aws-sqs-fifo-sink-binding.yaml
10.3.2.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:

kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic aws-sqs-fifo-sink -p "sink.accessKey=The Access Key" -p "sink.queueNameOrArn=The Queue Name" -p "sink.region=eu-west-1" -p "sink.secretKey=The Secret Key"
This command creates the KameletBinding in the current namespace on the cluster.
10.4. Kamelet source file
Chapter 11. AWS S3 Sink
Upload data to AWS S3.
The Kamelet expects the following headers to be set:
- file / ce-file: the name of the file to upload. If the header is not set, the exchange ID is used as the file name.
11.1. Configuration Options
The following table summarizes the configuration options available for the aws-s3-sink Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| accessKey * | Access Key | The access key obtained from AWS. | string | | |
| bucketNameOrArn * | Bucket Name | The S3 Bucket name or ARN. | string | | |
| region * | AWS Region | The AWS region to connect to. | string | | eu-west-1 |
| secretKey * | Secret Key | The secret key obtained from AWS. | string | | |
| autoCreateBucket | Autocreate Bucket | Whether to automatically create the S3 bucket. | boolean | false | |
Fields marked with an asterisk (*) are mandatory.
11.2. Dependencies
At runtime, the aws-s3-sink Kamelet relies upon the presence of the following dependencies:
- camel:aws2-s3
- camel:kamelet
11.3. Usage
This section describes how you can use the aws-s3-sink.
11.3.1. Knative Sink
You can use the aws-s3-sink Kamelet as a Knative sink by binding it to a Knative object.
aws-s3-sink-binding.yaml
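The contents of the binding file are not reproduced here. As a rough sketch, based on the configuration options in Section 11.1, aws-s3-sink-binding.yaml might look like the following (the property values are placeholders that you replace with your own configuration):

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: aws-s3-sink-binding
spec:
  # Events are read from a Knative channel
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  # The aws-s3-sink Kamelet uploads them to the S3 bucket
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: aws-s3-sink
    properties:
      accessKey: "The Access Key"
      bucketNameOrArn: "The Bucket Name"
      region: "eu-west-1"
      secretKey: "The Secret Key"
```

For the Kafka variant in Section 11.3.2, the Channel reference under source is replaced by a Kafka topic reference (apiVersion: kafka.strimzi.io/v1beta1, kind: KafkaTopic, name: my-topic).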
11.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
11.3.1.2. Procedure for using the cluster CLI
- Save the aws-s3-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command:

  oc apply -f aws-s3-sink-binding.yaml
11.3.1.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:

kamel bind channel:mychannel aws-s3-sink -p "sink.accessKey=The Access Key" -p "sink.bucketNameOrArn=The Bucket Name" -p "sink.region=eu-west-1" -p "sink.secretKey=The Secret Key"
This command creates the KameletBinding in the current namespace on the cluster.
11.3.2. Kafka Sink
You can use the aws-s3-sink Kamelet as a Kafka sink by binding it to a Kafka topic.
aws-s3-sink-binding.yaml
11.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
11.3.2.2. Procedure for using the cluster CLI
- Save the aws-s3-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command:

  oc apply -f aws-s3-sink-binding.yaml
11.3.2.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:

kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic aws-s3-sink -p "sink.accessKey=The Access Key" -p "sink.bucketNameOrArn=The Bucket Name" -p "sink.region=eu-west-1" -p "sink.secretKey=The Secret Key"
This command creates the KameletBinding in the current namespace on the cluster.
11.4. Kamelet source file
Chapter 12. AWS S3 Source
Receive data from AWS S3.
12.1. Configuration Options
The following table summarizes the configuration options available for the aws-s3-source Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| accessKey * | Access Key | The access key obtained from AWS | string | | |
| bucketNameOrArn * | Bucket Name | The S3 Bucket name or ARN | string | | |
| region * | AWS Region | The AWS region to connect to | string | | eu-west-1 |
| secretKey * | Secret Key | The secret key obtained from AWS | string | | |
| autoCreateBucket | Autocreate Bucket | Whether to automatically create the S3 bucket | boolean | false | |
| deleteAfterRead | Auto-delete Objects | Delete objects after consuming them | boolean | true | |
Fields marked with an asterisk (*) are mandatory.
12.2. Dependencies
At runtime, the aws-s3-source Kamelet relies upon the presence of the following dependencies:
- camel:kamelet
- camel:aws2-s3
12.3. Usage
This section describes how you can use the aws-s3-source.
12.3.1. Knative Source
You can use the aws-s3-source Kamelet as a Knative source by binding it to a Knative object.
aws-s3-source-binding.yaml
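The contents of the binding file are not reproduced here. As a rough sketch, based on the configuration options in Section 12.1, aws-s3-source-binding.yaml might look like the following (the property values are placeholders that you replace with your own configuration):

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: aws-s3-source-binding
spec:
  # The aws-s3-source Kamelet produces the events
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: aws-s3-source
    properties:
      accessKey: "The Access Key"
      bucketNameOrArn: "The Bucket Name"
      region: "eu-west-1"
      secretKey: "The Secret Key"
  # Events are delivered to a Knative channel
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
```

For the Kafka variant in Section 12.3.2, the Channel reference under sink is replaced by a Kafka topic reference (apiVersion: kafka.strimzi.io/v1beta1, kind: KafkaTopic, name: my-topic).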
12.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
12.3.1.2. Procedure for using the cluster CLI
- Save the aws-s3-source-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the source by using the following command:

  oc apply -f aws-s3-source-binding.yaml
12.3.1.3. Procedure for using the Kamel CLI
Configure and run the source by using the following command:

kamel bind aws-s3-source -p "source.accessKey=The Access Key" -p "source.bucketNameOrArn=The Bucket Name" -p "source.region=eu-west-1" -p "source.secretKey=The Secret Key" channel:mychannel
This command creates the KameletBinding in the current namespace on the cluster.
12.3.2. Kafka Source
You can use the aws-s3-source Kamelet as a Kafka source by binding it to a Kafka topic.
aws-s3-source-binding.yaml
12.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
12.3.2.2. Procedure for using the cluster CLI
- Save the aws-s3-source-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the source by using the following command:

  oc apply -f aws-s3-source-binding.yaml
12.3.2.3. Procedure for using the Kamel CLI
Configure and run the source by using the following command:

kamel bind aws-s3-source -p "source.accessKey=The Access Key" -p "source.bucketNameOrArn=The Bucket Name" -p "source.region=eu-west-1" -p "source.secretKey=The Secret Key" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
This command creates the KameletBinding in the current namespace on the cluster.
12.4. Kamelet source file
Chapter 13. AWS S3 Streaming upload Sink
Upload data to AWS S3 in streaming upload mode.
13.1. Configuration Options
The following table summarizes the configuration options available for the aws-s3-streaming-upload-sink Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| accessKey * | Access Key | The access key obtained from AWS. | string | | |
| bucketNameOrArn * | Bucket Name | The S3 Bucket name or ARN. | string | | |
| keyName * | Key Name | The key name for saving an element in the bucket. In streaming upload mode, with the default configuration, this is the base name for the progressively created files. | string | | |
| region * | AWS Region | The AWS region to connect to. | string | | eu-west-1 |
| secretKey * | Secret Key | The secret key obtained from AWS. | string | | |
| autoCreateBucket | Autocreate Bucket | Whether to automatically create the S3 bucket. | boolean | false | |
| batchMessageNumber | Batch Message Number | The number of messages composing a batch in streaming upload mode | int | 10 | |
| batchSize | Batch Size | The batch size (in bytes) in streaming upload mode | int | 1000000 | |
| namingStrategy | Naming Strategy | The naming strategy to use in streaming upload mode. One of: progressive, random | string | progressive | |
| restartingPolicy | Restarting Policy | The restarting policy to use in streaming upload mode. One of: override, lastPart | string | lastPart | |
| streamingUploadMode | Streaming Upload Mode | Whether to enable streaming upload mode | boolean | false | |
Fields marked with an asterisk (*) are mandatory.
13.2. Dependencies
At runtime, the aws-s3-streaming-upload-sink Kamelet relies upon the presence of the following dependencies:
- camel:aws2-s3
- camel:kamelet
13.3. Usage
This section describes how you can use the aws-s3-streaming-upload-sink.
13.3.1. Knative Sink
You can use the aws-s3-streaming-upload-sink Kamelet as a Knative sink by binding it to a Knative object.
aws-s3-streaming-upload-sink-binding.yaml
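The contents of the binding file are not reproduced here. As a rough sketch, based on the configuration options in Section 13.1, aws-s3-streaming-upload-sink-binding.yaml might look like the following (the property values are placeholders that you replace with your own configuration):

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: aws-s3-streaming-upload-sink-binding
spec:
  # Events are read from a Knative channel
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  # The aws-s3-streaming-upload-sink Kamelet uploads them to S3
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: aws-s3-streaming-upload-sink
    properties:
      accessKey: "The Access Key"
      bucketNameOrArn: "The Bucket Name"
      keyName: "The Key Name"
      region: "eu-west-1"
      secretKey: "The Secret Key"
```

For the Kafka variant in Section 13.3.2, the Channel reference under source is replaced by a Kafka topic reference (apiVersion: kafka.strimzi.io/v1beta1, kind: KafkaTopic, name: my-topic).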
13.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
13.3.1.2. Procedure for using the cluster CLI
- Save the aws-s3-streaming-upload-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command:

  oc apply -f aws-s3-streaming-upload-sink-binding.yaml
13.3.1.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:

kamel bind channel:mychannel aws-s3-streaming-upload-sink -p "sink.accessKey=The Access Key" -p "sink.bucketNameOrArn=The Bucket Name" -p "sink.keyName=The Key Name" -p "sink.region=eu-west-1" -p "sink.secretKey=The Secret Key"
This command creates the KameletBinding in the current namespace on the cluster.
13.3.2. Kafka Sink
You can use the aws-s3-streaming-upload-sink Kamelet as a Kafka sink by binding it to a Kafka topic.
aws-s3-streaming-upload-sink-binding.yaml
13.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
13.3.2.2. Procedure for using the cluster CLI
- Save the aws-s3-streaming-upload-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command:

  oc apply -f aws-s3-streaming-upload-sink-binding.yaml
13.3.2.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:

kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic aws-s3-streaming-upload-sink -p "sink.accessKey=The Access Key" -p "sink.bucketNameOrArn=The Bucket Name" -p "sink.keyName=The Key Name" -p "sink.region=eu-west-1" -p "sink.secretKey=The Secret Key"
This command creates the KameletBinding in the current namespace on the cluster.
13.4. Kamelet source file
Chapter 14. Cassandra Sink
The Cassandra Sink Kamelet is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production.
These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview.
Send data to a Cassandra Cluster.
This Kamelet expects the body to be a JSON array. The content of the JSON array is used as input for the CQL prepared statement set in the preparedStatement parameter.
14.1. Configuration Options
The following table summarizes the configuration options available for the cassandra-sink Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| connectionHost * | Connection Host | Hostname(s) of the Cassandra server(s). Multiple hosts can be separated by commas. | string | | localhost |
| connectionPort * | Connection Port | Port number of the Cassandra server(s) | string | | 9042 |
| keyspace * | Keyspace | Keyspace to use | string | | customers |
| password * | Password | The password to use for accessing a secured Cassandra Cluster | string | | |
| preparedStatement * | Prepared Statement | The prepared statement to execute against the Cassandra cluster table | string | | |
| username * | Username | The username to use for accessing a secured Cassandra Cluster | string | | |
| consistencyLevel | Consistency Level | Consistency level to use. The value can be one of ANY, ONE, TWO, THREE, QUORUM, ALL, LOCAL_QUORUM, EACH_QUORUM, SERIAL, LOCAL_SERIAL, LOCAL_ONE | string | | |
Fields marked with an asterisk (*) are mandatory.
14.2. Dependencies
At runtime, the cassandra-sink Kamelet relies upon the presence of the following dependencies:
- camel:jackson
- camel:kamelet
- camel:cassandraql
14.3. Usage
This section describes how you can use the cassandra-sink.
14.3.1. Knative Sink
You can use the cassandra-sink Kamelet as a Knative sink by binding it to a Knative object.
cassandra-sink-binding.yaml
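The contents of the binding file are not reproduced here. As a rough sketch, based on the configuration options in Section 14.1, cassandra-sink-binding.yaml might look like the following (the property values are placeholders that you replace with your own configuration):

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: cassandra-sink-binding
spec:
  # Events are read from a Knative channel
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  # The cassandra-sink Kamelet writes them to the Cassandra cluster
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: cassandra-sink
    properties:
      connectionHost: "localhost"
      connectionPort: 9042
      keyspace: "customers"
      password: "The Password"
      preparedStatement: "The Prepared Statement"
      username: "The Username"
```

For the Kafka variant in Section 14.3.2, the Channel reference under source is replaced by a Kafka topic reference (apiVersion: kafka.strimzi.io/v1beta1, kind: KafkaTopic, name: my-topic).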
14.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
14.3.1.2. Procedure for using the cluster CLI
- Save the cassandra-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command:

  oc apply -f cassandra-sink-binding.yaml
14.3.1.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:

kamel bind channel:mychannel cassandra-sink -p "sink.connectionHost=localhost" -p sink.connectionPort=9042 -p "sink.keyspace=customers" -p "sink.password=The Password" -p "sink.preparedStatement=The Prepared Statement" -p "sink.username=The Username"
This command creates the KameletBinding in the current namespace on the cluster.
14.3.2. Kafka Sink
You can use the cassandra-sink Kamelet as a Kafka sink by binding it to a Kafka topic.
cassandra-sink-binding.yaml
14.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
14.3.2.2. Procedure for using the cluster CLI
- Save the cassandra-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command:

  oc apply -f cassandra-sink-binding.yaml
14.3.2.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:

kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic cassandra-sink -p "sink.connectionHost=localhost" -p sink.connectionPort=9042 -p "sink.keyspace=customers" -p "sink.password=The Password" -p "sink.preparedStatement=The Prepared Statement" -p "sink.username=The Username"
This command creates the KameletBinding in the current namespace on the cluster.
14.4. Kamelet source file
Chapter 15. Cassandra Source
The Cassandra Source Kamelet is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production.
These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview.
Query a Cassandra cluster table.
15.1. Configuration Options
The following table summarizes the configuration options available for the cassandra-source Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| connectionHost * | Connection Host | Hostname(s) of the Cassandra server(s). Multiple hosts can be separated by commas. | string | | localhost |
| connectionPort * | Connection Port | Port number of the Cassandra server(s) | string | | 9042 |
| keyspace * | Keyspace | Keyspace to use | string | | customers |
| password * | Password | The password to use for accessing a secured Cassandra Cluster | string | | |
| query * | Query | The query to execute against the Cassandra cluster table | string | | |
| username * | Username | The username to use for accessing a secured Cassandra Cluster | string | | |
| consistencyLevel | Consistency Level | Consistency level to use. The value can be one of ANY, ONE, TWO, THREE, QUORUM, ALL, LOCAL_QUORUM, EACH_QUORUM, SERIAL, LOCAL_SERIAL, LOCAL_ONE | string | | |
| resultStrategy | Result Strategy | The strategy to convert the result set of the query. Possible values are ALL, ONE, LIMIT_10, LIMIT_100… | string | | |
Fields marked with an asterisk (*) are mandatory.
15.2. Dependencies
At runtime, the cassandra-source Kamelet relies upon the presence of the following dependencies:
- camel:jackson
- camel:kamelet
- camel:cassandraql
15.3. Usage
This section describes how you can use the cassandra-source.
15.3.1. Knative Source
You can use the cassandra-source Kamelet as a Knative source by binding it to a Knative object.
cassandra-source-binding.yaml
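The contents of the binding file are not reproduced here. As a rough sketch, based on the configuration options in Section 15.1, cassandra-source-binding.yaml might look like the following (the property values are placeholders that you replace with your own configuration):

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: cassandra-source-binding
spec:
  # The cassandra-source Kamelet queries the cluster and produces events
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: cassandra-source
    properties:
      connectionHost: "localhost"
      connectionPort: 9042
      keyspace: "customers"
      password: "The Password"
      query: "The Query"
      username: "The Username"
  # Query results are delivered to a Knative channel
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
```

For the Kafka variant in Section 15.3.2, the Channel reference under sink is replaced by a Kafka topic reference (apiVersion: kafka.strimzi.io/v1beta1, kind: KafkaTopic, name: my-topic).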
15.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
15.3.1.2. Procedure for using the cluster CLI
- Save the cassandra-source-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the source by using the following command:

  oc apply -f cassandra-source-binding.yaml
15.3.1.3. Procedure for using the Kamel CLI
Configure and run the source by using the following command:

kamel bind cassandra-source -p "source.connectionHost=localhost" -p source.connectionPort=9042 -p "source.keyspace=customers" -p "source.password=The Password" -p "source.query=The Query" -p "source.username=The Username" channel:mychannel
This command creates the KameletBinding in the current namespace on the cluster.
15.3.2. Kafka Source
You can use the cassandra-source Kamelet as a Kafka source by binding it to a Kafka topic.
cassandra-source-binding.yaml
15.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
15.3.2.2. Procedure for using the cluster CLI
- Save the cassandra-source-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the source by using the following command:

  oc apply -f cassandra-source-binding.yaml
15.3.2.3. Procedure for using the Kamel CLI
Configure and run the source by using the following command:

kamel bind cassandra-source -p "source.connectionHost=localhost" -p source.connectionPort=9042 -p "source.keyspace=customers" -p "source.password=The Password" -p "source.query=The Query" -p "source.username=The Username" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
This command creates the KameletBinding in the current namespace on the cluster.
15.4. Kamelet source file
Chapter 16. ElasticSearch Index Sink
The ElasticSearch Index Kamelet is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production.
These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview.
This sink stores documents into ElasticSearch.
Input data must be in JSON format appropriate to the index used.
The Kamelet expects the following headers:
- indexId / ce-indexid: the index ID for Elasticsearch. If the header is not set, the index ID is generated by the Elasticsearch cluster.
- indexName / ce-indexname: the index name for Elasticsearch. If the header is not set, camel-k-index-es is used as the index name.
16.1. Configuration Options
The following table summarizes the configuration options available for the elasticsearch-index-sink Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| clusterName * | ElasticSearch Cluster Name | Name of the cluster. | string | | quickstart |
| hostAddresses * | Host Addresses | Comma-separated list of remote transport addresses to use, in ip:port format. | string | | quickstart-es-http:9200 |
| enableSSL | Enable SSL | Specifies whether to connect using SSL. | boolean | true | |
| indexName | Index in ElasticSearch | The name of the index to act against. | string | | |
| password | Password | Password to connect to ElasticSearch. | string | | |
| user | Username | Username to connect to ElasticSearch. | string | | |
Fields marked with an asterisk (*) are mandatory.
16.2. Dependencies
At runtime, the elasticsearch-index-sink Kamelet relies upon the presence of the following dependencies:
- camel:jackson
- camel:kamelet
- mvn:org.apache.camel.k:camel-k-kamelet-reify
- camel:elasticsearch-rest
- camel:gson
- camel:bean
16.3. Usage
This section describes how you can use the elasticsearch-index-sink.
16.3.1. Knative Sink
You can use the elasticsearch-index-sink Kamelet as a Knative sink by binding it to a Knative object.
elasticsearch-index-sink-binding.yaml
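A minimal sketch of this binding file, following the standard KameletBinding layout and assuming a Knative channel named mychannel and the quickstart values shown in the Kamel CLI procedure:

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: elasticsearch-index-sink-binding
spec:
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel   # assumed channel name
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: elasticsearch-index-sink
    properties:
      clusterName: "quickstart"
      hostAddresses: "quickstart-es-http:9200"
```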
16.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
16.3.1.2. Procedure for using the cluster CLI
- Save the elasticsearch-index-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:

oc apply -f elasticsearch-index-sink-binding.yaml
16.3.1.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:

kamel bind channel:mychannel elasticsearch-index-sink -p "sink.clusterName=quickstart" -p "sink.hostAddresses=quickstart-es-http:9200"
This command creates the KameletBinding in the current namespace on the cluster.
16.3.2. Kafka Sink
You can use the elasticsearch-index-sink Kamelet as a Kafka sink by binding it to a Kafka topic.
elasticsearch-index-sink-binding.yaml
16.3.2.1. Prerequisites
Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure that you have "Red Hat Integration - Camel K" installed on the OpenShift cluster that you're connected to.
16.3.2.2. Procedure for using the cluster CLI
- Save the elasticsearch-index-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:

oc apply -f elasticsearch-index-sink-binding.yaml
16.3.2.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:

kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic elasticsearch-index-sink -p "sink.clusterName=quickstart" -p "sink.hostAddresses=quickstart-es-http:9200"
This command creates the KameletBinding in the current namespace on the cluster.
16.4. Kamelet source file
Chapter 17. Extract Field Action
Extract a field from the body
17.1. Configuration Options
The following table summarizes the configuration options available for the extract-field-action Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| field * | Field | The name of the field to extract from the message body | string | | |
Fields marked with an asterisk (*) are mandatory.
17.2. Dependencies
At runtime, the extract-field-action Kamelet relies upon the presence of the following dependencies:
- mvn:org.apache.camel.kamelets:camel-kamelets-utils:1.0.0.fuse-800048-redhat-00001
- camel:kamelet
- camel:core
- camel:jackson
17.3. Usage
This section describes how you can use the extract-field-action.
17.3.1. Knative Action
You can use the extract-field-action Kamelet as an intermediate step in a Knative binding.
extract-field-action-binding.yaml
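A minimal sketch of this binding, assuming a timer-source feeding a Knative channel named mychannel; the field value is the same placeholder used in the Kamel CLI procedure:

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: extract-field-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: timer-source
    properties:
      message: "Hello"
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1alpha1
        name: extract-field-action
      properties:
        field: "The Field"   # placeholder value
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel   # assumed channel name
```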
17.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
17.3.1.2. Procedure for using the cluster CLI
- Save the extract-field-action-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the action by using the following command:

oc apply -f extract-field-action-binding.yaml
17.3.1.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:

kamel bind timer-source?message=Hello --step extract-field-action -p "step-0.field=The Field" channel:mychannel
This command creates the KameletBinding in the current namespace on the cluster.
17.3.2. Kafka Action
You can use the extract-field-action Kamelet as an intermediate step in a Kafka binding.
extract-field-action-binding.yaml
17.3.2.1. Prerequisites
Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure that you have "Red Hat Integration - Camel K" installed on the OpenShift cluster that you're connected to.
17.3.2.2. Procedure for using the cluster CLI
- Save the extract-field-action-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the action by using the following command:

oc apply -f extract-field-action-binding.yaml
17.3.2.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:

kamel bind timer-source?message=Hello --step extract-field-action -p "step-0.field=The Field" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
This command creates the KameletBinding in the current namespace on the cluster.
17.4. Kamelet source file
Chapter 18. FTP Sink
Send data to an FTP Server.
The Kamelet expects the following headers to be set:
- file / ce-file: the file name to upload. If this header is not set, the exchange ID is used as the file name.
18.1. Configuration Options
The following table summarizes the configuration options available for the ftp-sink Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| connectionHost * | Connection Host | Hostname of the FTP server | string | | |
| connectionPort * | Connection Port | Port of the FTP server | string | | |
| directoryName * | Directory Name | The starting directory | string | | |
| password * | Password | The password to access the FTP server | string | | |
| username * | Username | The username to access the FTP server | string | | |
| fileExist | File Existence | Specifies how to behave if the file already exists. The value can be one of Override, Append, Fail, or Ignore | string | | |
| passiveMode | Passive Mode | Specifies whether to use passive mode connections | boolean | | |
Fields marked with an asterisk (*) are mandatory.
18.2. Dependencies
At runtime, the ftp-sink Kamelet relies upon the presence of the following dependencies:
- camel:ftp
- camel:core
- camel:kamelet
18.3. Usage
This section describes how you can use the ftp-sink.
18.3.1. Knative Sink
You can use the ftp-sink Kamelet as a Knative sink by binding it to a Knative object.
ftp-sink-binding.yaml
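A minimal sketch of this binding file, assuming a Knative channel named mychannel; the property values are the same placeholders used in the Kamel CLI procedure:

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: ftp-sink-binding
spec:
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel   # assumed channel name
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: ftp-sink
    properties:
      connectionHost: "The Connection Host"
      directoryName: "The Directory Name"
      password: "The Password"
      username: "The Username"
```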
18.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
18.3.1.2. Procedure for using the cluster CLI
- Save the ftp-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:

oc apply -f ftp-sink-binding.yaml
18.3.1.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:

kamel bind channel:mychannel ftp-sink -p "sink.connectionHost=The Connection Host" -p "sink.directoryName=The Directory Name" -p "sink.password=The Password" -p "sink.username=The Username"
This command creates the KameletBinding in the current namespace on the cluster.
18.3.2. Kafka Sink
You can use the ftp-sink Kamelet as a Kafka sink by binding it to a Kafka topic.
ftp-sink-binding.yaml
18.3.2.1. Prerequisites
Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure that you have "Red Hat Integration - Camel K" installed on the OpenShift cluster that you're connected to.
18.3.2.2. Procedure for using the cluster CLI
- Save the ftp-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:

oc apply -f ftp-sink-binding.yaml
18.3.2.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:

kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic ftp-sink -p "sink.connectionHost=The Connection Host" -p "sink.directoryName=The Directory Name" -p "sink.password=The Password" -p "sink.username=The Username"
This command creates the KameletBinding in the current namespace on the cluster.
18.4. Kamelet source file
Chapter 19. FTP Source
Receive data from an FTP Server.
19.1. Configuration Options
The following table summarizes the configuration options available for the ftp-source Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| connectionHost * | Connection Host | Hostname of the FTP server | string | | |
| connectionPort * | Connection Port | Port of the FTP server | string | | |
| directoryName * | Directory Name | The starting directory | string | | |
| password * | Password | The password to access the FTP server | string | | |
| username * | Username | The username to access the FTP server | string | | |
| idempotent | Idempotency | Skip already-processed files. | boolean | | |
| passiveMode | Passive Mode | Specifies whether to use passive mode connections | boolean | | |
| recursive | Recursive | If a directory, look for files in all sub-directories as well. | boolean | | |
Fields marked with an asterisk (*) are mandatory.
19.2. Dependencies
At runtime, the ftp-source Kamelet relies upon the presence of the following dependencies:
- camel:ftp
- camel:core
- camel:kamelet
19.3. Usage
This section describes how you can use the ftp-source.
19.3.1. Knative Source
You can use the ftp-source Kamelet as a Knative source by binding it to a Knative object.
ftp-source-binding.yaml
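A minimal sketch of this binding file, assuming a Knative channel named mychannel; the property values are the same placeholders used in the Kamel CLI procedure:

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: ftp-source-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: ftp-source
    properties:
      connectionHost: "The Connection Host"
      directoryName: "The Directory Name"
      password: "The Password"
      username: "The Username"
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel   # assumed channel name
```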
19.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
19.3.1.2. Procedure for using the cluster CLI
- Save the ftp-source-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the source by using the following command:

oc apply -f ftp-source-binding.yaml
19.3.1.3. Procedure for using the Kamel CLI
Configure and run the source by using the following command:

kamel bind ftp-source -p "source.connectionHost=The Connection Host" -p "source.directoryName=The Directory Name" -p "source.password=The Password" -p "source.username=The Username" channel:mychannel
This command creates the KameletBinding in the current namespace on the cluster.
19.3.2. Kafka Source
You can use the ftp-source Kamelet as a Kafka source by binding it to a Kafka topic.
ftp-source-binding.yaml
19.3.2.1. Prerequisites
Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure that you have "Red Hat Integration - Camel K" installed on the OpenShift cluster that you're connected to.
19.3.2.2. Procedure for using the cluster CLI
- Save the ftp-source-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the source by using the following command:

oc apply -f ftp-source-binding.yaml
19.3.2.3. Procedure for using the Kamel CLI
Configure and run the source by using the following command:

kamel bind ftp-source -p "source.connectionHost=The Connection Host" -p "source.directoryName=The Directory Name" -p "source.password=The Password" -p "source.username=The Username" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
This command creates the KameletBinding in the current namespace on the cluster.
19.4. Kamelet source file
Chapter 20. Has Header Filter Action
Filter based on the presence of one header
20.1. Configuration Options
The following table summarizes the configuration options available for the has-header-filter-action Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| name * | Header Name | The header name to evaluate. The header name must be passed by the source Kamelet. For Knative only, the name of the header requires a CloudEvent (ce-) prefix. | string | | |
Fields marked with an asterisk (*) are mandatory.
20.2. Dependencies
At runtime, the has-header-filter-action Kamelet relies upon the presence of the following dependencies:
- camel:core
- camel:kamelet
20.3. Usage
This section describes how you can use the has-header-filter-action.
20.3.1. Knative Action
You can use the has-header-filter-action Kamelet as an intermediate step in a Knative binding. For this example, the Knative mychannel provides a message header named ce-foo. The CloudEvents (ce-) prefix for the header name is required. The example in the Insert Header action shows how to add a message header to data.
has-header-filter-action-binding.yaml
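A minimal sketch of this binding, assuming the mychannel channel, the ce-foo header name, and the log-sink Kamelet used in the Kamel CLI procedure:

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: has-header-filter-action-binding
spec:
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel   # assumed channel name
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1alpha1
        name: has-header-filter-action
      properties:
        name: "ce-foo"   # CloudEvents (ce-) prefix required for Knative
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: log-sink
```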
20.3.1.1. Prerequisites
- Make sure you have "Red Hat Integration - Camel K" installed on the OpenShift cluster that you're connected to.
- The source Kamelet in the Kamelet Binding must pass a header with the name that you specify in the has-header-filter-action Kamelet's name property.
20.3.1.2. Procedure for using the cluster CLI
- Save the has-header-filter-action-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the action by using the following command:

oc apply -f has-header-filter-action-binding.yaml
20.3.1.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:

kamel bind channel:mychannel --step has-header-filter-action -p "step-0.name=ce-foo" log-sink
This command creates the KameletBinding in the current namespace on the cluster.
20.3.2. Kafka Action
You can use the has-header-filter-action Kamelet as an intermediate step in a Kafka binding. For this example, the kafka-source Kamelet provides a message header named foo. The example in the Insert Header action shows how to add a message header to data.
has-header-filter-action-binding.yaml
20.3.2.1. Prerequisites
- Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace.
- Make sure that you have "Red Hat Integration - Camel K" installed on the OpenShift cluster that you're connected to.
20.3.2.2. Procedure for using the cluster CLI
- Save the has-header-filter-action-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the action by using the following command:

oc apply -f has-header-filter-action-binding.yaml
20.3.2.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:

kamel bind kafka-source -p "source.bootstrapServers=my-cluster-kafka-bootstrap.myproject.svc:9092" -p "source.password=XXX" -p "source.topic=my-topic" -p "source.user=XXX" -p "source.securityProtocol=PLAINTEXT" --step has-header-filter-action -p "step-0.name=foo" log-sink
This command creates the KameletBinding in the current namespace on the cluster.
20.4. Kamelet source file
Chapter 21. Hoist Field Action
Wrap data in a single field
21.1. Configuration Options
The following table summarizes the configuration options available for the hoist-field-action Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| field * | Field | The name of the field that will contain the event | string | | |
Fields marked with an asterisk (*) are mandatory.
21.2. Dependencies
At runtime, the hoist-field-action Kamelet relies upon the presence of the following dependencies:
- mvn:org.apache.camel.kamelets:camel-kamelets-utils:1.0.0.fuse-800048-redhat-00001
- camel:core
- camel:jackson
- camel:kamelet
21.3. Usage
This section describes how you can use the hoist-field-action.
21.3.1. Knative Action
You can use the hoist-field-action Kamelet as an intermediate step in a Knative binding.
hoist-field-action-binding.yaml
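A minimal sketch of this binding, assuming a timer-source feeding a Knative channel named mychannel; the field value is the same placeholder used in the Kamel CLI procedure:

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: hoist-field-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: timer-source
    properties:
      message: "Hello"
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1alpha1
        name: hoist-field-action
      properties:
        field: "The Field"   # placeholder value
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel   # assumed channel name
```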
21.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
21.3.1.2. Procedure for using the cluster CLI
- Save the hoist-field-action-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the action by using the following command:

oc apply -f hoist-field-action-binding.yaml
21.3.1.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:

kamel bind timer-source?message=Hello --step hoist-field-action -p "step-0.field=The Field" channel:mychannel
This command creates the KameletBinding in the current namespace on the cluster.
21.3.2. Kafka Action
You can use the hoist-field-action Kamelet as an intermediate step in a Kafka binding.
hoist-field-action-binding.yaml
21.3.2.1. Prerequisites
Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure that you have "Red Hat Integration - Camel K" installed on the OpenShift cluster that you're connected to.
21.3.2.2. Procedure for using the cluster CLI
- Save the hoist-field-action-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the action by using the following command:

oc apply -f hoist-field-action-binding.yaml
21.3.2.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:

kamel bind timer-source?message=Hello --step hoist-field-action -p "step-0.field=The Field" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
This command creates the KameletBinding in the current namespace on the cluster.
21.4. Kamelet source file
Chapter 22. HTTP Sink
Forwards an event to an HTTP endpoint.
22.1. Configuration Options
The following table summarizes the configuration options available for the http-sink Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| url * | URL | The URL to send data to | string | | |
| method | Method | The HTTP method to use | string | | |
Fields marked with an asterisk (*) are mandatory.
22.2. Dependencies
At runtime, the http-sink Kamelet relies upon the presence of the following dependencies:
- camel:http
- camel:kamelet
- camel:core
22.3. Usage
This section describes how you can use the http-sink.
22.3.1. Knative Sink
You can use the http-sink Kamelet as a Knative sink by binding it to a Knative object.
http-sink-binding.yaml
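A minimal sketch of this binding file, assuming a Knative channel named mychannel and the example URL from the Kamel CLI procedure:

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: http-sink-binding
spec:
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel   # assumed channel name
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: http-sink
    properties:
      url: "https://my-service/path"
```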
22.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
22.3.1.2. Procedure for using the cluster CLI
- Save the http-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:

oc apply -f http-sink-binding.yaml
22.3.1.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:

kamel bind channel:mychannel http-sink -p "sink.url=https://my-service/path"
This command creates the KameletBinding in the current namespace on the cluster.
22.3.2. Kafka Sink
You can use the http-sink Kamelet as a Kafka sink by binding it to a Kafka topic.
http-sink-binding.yaml
22.3.2.1. Prerequisites
Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure that you have "Red Hat Integration - Camel K" installed on the OpenShift cluster that you're connected to.
22.3.2.2. Procedure for using the cluster CLI
- Save the http-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:

oc apply -f http-sink-binding.yaml
22.3.2.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:

kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic http-sink -p "sink.url=https://my-service/path"
This command creates the KameletBinding in the current namespace on the cluster.
22.4. Kamelet source file
Chapter 23. Insert Field Action
Adds a custom field with a constant value to the message in transit
23.1. Configuration Options
The following table summarizes the configuration options available for the insert-field-action Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| field * | Field | The name of the field to be added | string | | |
| value * | Value | The value of the field | string | | |
Fields marked with an asterisk (*) are mandatory.
23.2. Dependencies
At runtime, the insert-field-action Kamelet relies upon the presence of the following dependencies:
- mvn:org.apache.camel.kamelets:camel-kamelets-utils:1.0.0.fuse-800048-redhat-00001
- camel:core
- camel:jackson
- camel:kamelet
23.3. Usage
This section describes how you can use the insert-field-action.
23.3.1. Knative Action
You can use the insert-field-action Kamelet as an intermediate step in a Knative binding.
insert-field-action-binding.yaml
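A minimal sketch of this binding, assuming a timer-source feeding a Knative channel named mychannel; the field and value are the same placeholders used in the Kamel CLI procedure:

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: insert-field-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: timer-source
    properties:
      message: "Hello"
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1alpha1
        name: insert-field-action
      properties:
        field: "The Field"   # placeholder value
        value: "The Value"   # placeholder value
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel   # assumed channel name
```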
23.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
23.3.1.2. Procedure for using the cluster CLI
- Save the insert-field-action-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the action by using the following command:

oc apply -f insert-field-action-binding.yaml
23.3.1.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:

kamel bind timer-source?message=Hello --step insert-field-action -p "step-0.field=The Field" -p "step-0.value=The Value" channel:mychannel
This command creates the KameletBinding in the current namespace on the cluster.
23.3.2. Kafka Action
You can use the insert-field-action Kamelet as an intermediate step in a Kafka binding.
insert-field-action-binding.yaml
23.3.2.1. Prerequisites
Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure that you have "Red Hat Integration - Camel K" installed on the OpenShift cluster that you're connected to.
23.3.2.2. Procedure for using the cluster CLI
- Save the insert-field-action-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the action by using the following command:

oc apply -f insert-field-action-binding.yaml
23.3.2.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:

kamel bind timer-source?message=Hello --step insert-field-action -p "step-0.field=The Field" -p "step-0.value=The Value" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
This command creates the KameletBinding in the current namespace on the cluster.
23.4. Kamelet source file
Chapter 24. Insert Header Action
Adds a header with a constant value to the message in transit.
24.1. Configuration Options
The following table summarizes the configuration options available for the insert-header-action Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| name * | Name | The name of the header to add. For Knative only, the name of the header requires a CloudEvent (ce-) prefix. | string | | |
| value * | Value | The value of the header | string | | |
Fields marked with an asterisk (*) are mandatory.
24.2. Dependencies
At runtime, the insert-header-action Kamelet relies upon the presence of the following dependencies:
- camel:core
- camel:kamelet
24.3. Usage
This section describes how you can use the insert-header-action.
24.3.1. Knative Action
You can use the insert-header-action Kamelet as an intermediate step in a Knative binding. The following example adds the ce-foo header to the data coming from the timer-source Kamelet. The CloudEvents (ce-) prefix for the header name is required.
insert-header-action-binding.yaml
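A minimal sketch of this binding, assuming a timer-source feeding a Knative channel named mychannel, with the ce-foo header name and the placeholder value from the Kamel CLI procedure:

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: insert-header-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: timer-source
    properties:
      message: "Hello"
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1alpha1
        name: insert-header-action
      properties:
        name: "ce-foo"       # CloudEvents (ce-) prefix required for Knative
        value: "The Value"   # placeholder value
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel   # assumed channel name
```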
24.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
24.3.1.2. Procedure for using the cluster CLI
- Save the insert-header-action-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the action by using the following command:

oc apply -f insert-header-action-binding.yaml
24.3.1.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:

kamel bind timer-source?message=Hello --step insert-header-action -p "step-0.name=ce-foo" -p "step-0.value=The Value" channel:mychannel
This command creates the KameletBinding in the current namespace on the cluster.
24.3.2. Kafka Action
You can use the insert-header-action Kamelet as an intermediate step in a Kafka binding. The following example adds the foo header to the data coming from the timer-source Kamelet.
insert-header-action-binding.yaml
24.3.2.1. Prerequisites
- Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace.
- Make sure that you have "Red Hat Integration - Camel K" installed on the OpenShift cluster that you're connected to.
24.3.2.2. Procedure for using the cluster CLI
- Save the insert-header-action-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the action by using the following command:

oc apply -f insert-header-action-binding.yaml
24.3.2.3. Procedure for using the Kamel CLI 링크 복사링크가 클립보드에 복사되었습니다!
Configure and run the action by using the following command:
kamel bind timer-source?message=Hello --step insert-header-action -p "step-0.name=foo" -p "step-0.value=The Value" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
kamel bind timer-source?message=Hello --step insert-header-action -p "step-0.name=foo" -p "step-0.value=The Value" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
This command creates the KameletBinding in the current namespace on the cluster.
24.4. Kamelet source file
Chapter 25. Is Tombstone Filter Action
Filter messages based on whether the body is present or not.
25.1. Configuration Options
The is-tombstone-filter-action Kamelet does not specify any configuration option.
25.2. Dependencies
At runtime, the is-tombstone-filter-action Kamelet relies upon the presence of the following dependencies:
- camel:core
- camel:kamelet
25.3. Usage
This section describes how you can use the is-tombstone-filter-action.
25.3.1. Knative Action
You can use the is-tombstone-filter-action Kamelet as an intermediate step in a Knative binding.
is-tombstone-filter-action-binding.yaml
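The binding file itself is not shown above. A minimal sketch of what it typically contains (resource names and the Knative channel apiVersion are illustrative) is:

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: is-tombstone-filter-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: timer-source
    properties:
      message: "Hello"
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1alpha1
        name: is-tombstone-filter-action
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
```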
25.3.1.1. Prerequisite
Make sure that you have "Red Hat Integration - Camel K" installed on the OpenShift cluster that you’re connected to.
25.3.1.2. Procedure for using the cluster CLI
- Save the is-tombstone-filter-action-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the action by using the following command:
oc apply -f is-tombstone-filter-action-binding.yaml
25.3.1.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:
kamel bind timer-source?message=Hello --step is-tombstone-filter-action channel:mychannel
This command creates the KameletBinding in the current namespace on the cluster.
25.3.2. Kafka Action
You can use the is-tombstone-filter-action Kamelet as an intermediate step in a Kafka binding.
is-tombstone-filter-action-binding.yaml
25.3.2.1. Prerequisites
- Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace.
- Make sure that you have "Red Hat Integration - Camel K" installed on the OpenShift cluster that you’re connected to.
25.3.2.2. Procedure for using the cluster CLI
- Save the is-tombstone-filter-action-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the action by using the following command:
oc apply -f is-tombstone-filter-action-binding.yaml
25.3.2.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:
kamel bind timer-source?message=Hello --step is-tombstone-filter-action kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
This command creates the KameletBinding in the current namespace on the cluster.
25.4. Kamelet source file
Chapter 26. Jira Source
The Jira Source Kamelet is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production.
These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview.
Receive notifications about new issues from Jira.
26.1. Configuration Options
The following table summarizes the configuration options available for the jira-source Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| jiraUrl * | Jira URL | The URL of your instance of Jira | string | | "http://my_jira.com:8081" |
| password * | Password | The password to access Jira | string | | |
| username * | Username | The username to access Jira | string | | |
| jql | JQL | A query to filter issues | string | | |
Fields marked with an asterisk (*) are mandatory.
26.2. Dependencies
At runtime, the jira-source Kamelet relies upon the presence of the following dependencies:
- camel:jackson
- camel:kamelet
- camel:jira
26.3. Usage
This section describes how you can use the jira-source.
26.3.1. Knative Source
You can use the jira-source Kamelet as a Knative source by binding it to a Knative object.
jira-source-binding.yaml
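The binding file is not reproduced above. An illustrative sketch of its typical contents (the metadata name, property values, and Knative channel apiVersion are assumptions based on the commands below) is:

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: jira-source-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: jira-source
    properties:
      jiraUrl: "http://my_jira.com:8081"
      username: "The Username"
      password: "The Password"
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
```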
26.3.1.1. Prerequisite
Make sure that you have "Red Hat Integration - Camel K" installed on the OpenShift cluster that you’re connected to.
26.3.1.2. Procedure for using the cluster CLI
- Save the jira-source-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the source by using the following command:
oc apply -f jira-source-binding.yaml
26.3.1.3. Procedure for using the Kamel CLI
Configure and run the source by using the following command:
kamel bind jira-source -p "source.jiraUrl=http://my_jira.com:8081" -p "source.password=The Password" -p "source.username=The Username" channel:mychannel
This command creates the KameletBinding in the current namespace on the cluster.
26.3.2. Kafka Source
You can use the jira-source Kamelet as a Kafka source by binding it to a Kafka topic.
jira-source-binding.yaml
26.3.2.1. Prerequisites
- Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace.
- Make sure that you have "Red Hat Integration - Camel K" installed on the OpenShift cluster that you’re connected to.
26.3.2.2. Procedure for using the cluster CLI
- Save the jira-source-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the source by using the following command:
oc apply -f jira-source-binding.yaml
26.3.2.3. Procedure for using the Kamel CLI
Configure and run the source by using the following command:
kamel bind jira-source -p "source.jiraUrl=http://my_jira.com:8081" -p "source.password=The Password" -p "source.username=The Username" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
This command creates the KameletBinding in the current namespace on the cluster.
26.4. Kamelet source file
Chapter 27. JMS - AMQP 1.0 Kamelet Sink
A Kamelet that can produce events to any AMQP 1.0 compliant message broker by using the Apache Qpid JMS client.
27.1. Configuration Options
The following table summarizes the configuration options available for the jms-amqp-10-sink Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| destinationName * | Destination Name | The JMS destination name | string | | |
| remoteURI * | Broker URL | The JMS URL | string | | "amqp://my-host:31616" |
| destinationType | Destination Type | The JMS destination type (queue or topic) | string | | |
Fields marked with an asterisk (*) are mandatory.
27.2. Dependencies
At runtime, the jms-amqp-10-sink Kamelet relies upon the presence of the following dependencies:
- camel:jms
- camel:kamelet
- mvn:org.apache.qpid:qpid-jms-client:0.55.0
27.3. Usage
This section describes how you can use the jms-amqp-10-sink.
27.3.1. Knative Sink
You can use the jms-amqp-10-sink Kamelet as a Knative sink by binding it to a Knative object.
jms-amqp-10-sink-binding.yaml
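The binding file itself is not shown above. A minimal sketch of what it typically contains (resource names and the Knative channel apiVersion are illustrative) is:

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: jms-amqp-10-sink-binding
spec:
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: jms-amqp-10-sink
    properties:
      destinationName: "The Destination Name"
      remoteURI: "amqp://my-host:31616"
```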
27.3.1.1. Prerequisite
Make sure that you have "Red Hat Integration - Camel K" installed on the OpenShift cluster that you’re connected to.
27.3.1.2. Procedure for using the cluster CLI
- Save the jms-amqp-10-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:
oc apply -f jms-amqp-10-sink-binding.yaml
27.3.1.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind channel:mychannel jms-amqp-10-sink -p "sink.destinationName=The Destination Name" -p "sink.remoteURI=amqp://my-host:31616"
This command creates the KameletBinding in the current namespace on the cluster.
27.3.2. Kafka Sink
You can use the jms-amqp-10-sink Kamelet as a Kafka sink by binding it to a Kafka topic.
jms-amqp-10-sink-binding.yaml
27.3.2.1. Prerequisites
- Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace.
- Make sure that you have "Red Hat Integration - Camel K" installed on the OpenShift cluster that you’re connected to.
27.3.2.2. Procedure for using the cluster CLI
- Save the jms-amqp-10-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:
oc apply -f jms-amqp-10-sink-binding.yaml
27.3.2.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic jms-amqp-10-sink -p "sink.destinationName=The Destination Name" -p "sink.remoteURI=amqp://my-host:31616"
This command creates the KameletBinding in the current namespace on the cluster.
27.4. Kamelet source file
Chapter 28. JMS - AMQP 1.0 Kamelet Source
A Kamelet that can consume events from any AMQP 1.0 compliant message broker by using the Apache Qpid JMS client.
28.1. Configuration Options
The following table summarizes the configuration options available for the jms-amqp-10-source Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| destinationName * | Destination Name | The JMS destination name | string | | |
| remoteURI * | Broker URL | The JMS URL | string | | "amqp://my-host:31616" |
| destinationType | Destination Type | The JMS destination type (queue or topic) | string | | |
Fields marked with an asterisk (*) are mandatory.
28.2. Dependencies
At runtime, the jms-amqp-10-source Kamelet relies upon the presence of the following dependencies:
- camel:jms
- camel:kamelet
- mvn:org.apache.qpid:qpid-jms-client:0.55.0
28.3. Usage
This section describes how you can use the jms-amqp-10-source.
28.3.1. Knative Source
You can use the jms-amqp-10-source Kamelet as a Knative source by binding it to a Knative object.
jms-amqp-10-source-binding.yaml
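The binding file is not reproduced above. An illustrative sketch of its typical contents (resource names and the Knative channel apiVersion are assumptions based on the commands below) is:

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: jms-amqp-10-source-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: jms-amqp-10-source
    properties:
      destinationName: "The Destination Name"
      remoteURI: "amqp://my-host:31616"
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
```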
28.3.1.1. Prerequisite
Make sure that you have "Red Hat Integration - Camel K" installed on the OpenShift cluster that you’re connected to.
28.3.1.2. Procedure for using the cluster CLI
- Save the jms-amqp-10-source-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the source by using the following command:
oc apply -f jms-amqp-10-source-binding.yaml
28.3.1.3. Procedure for using the Kamel CLI
Configure and run the source by using the following command:
kamel bind jms-amqp-10-source -p "source.destinationName=The Destination Name" -p "source.remoteURI=amqp://my-host:31616" channel:mychannel
This command creates the KameletBinding in the current namespace on the cluster.
28.3.2. Kafka Source
You can use the jms-amqp-10-source Kamelet as a Kafka source by binding it to a Kafka topic.
jms-amqp-10-source-binding.yaml
28.3.2.1. Prerequisites
- Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace.
- Make sure that you have "Red Hat Integration - Camel K" installed on the OpenShift cluster that you’re connected to.
28.3.2.2. Procedure for using the cluster CLI
- Save the jms-amqp-10-source-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the source by using the following command:
oc apply -f jms-amqp-10-source-binding.yaml
28.3.2.3. Procedure for using the Kamel CLI
Configure and run the source by using the following command:
kamel bind jms-amqp-10-source -p "source.destinationName=The Destination Name" -p "source.remoteURI=amqp://my-host:31616" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
This command creates the KameletBinding in the current namespace on the cluster.
28.4. Kamelet source file
Chapter 29. JSON Deserialize Action
Deserialize payload to JSON.
29.1. Configuration Options
The json-deserialize-action Kamelet does not specify any configuration option.
29.2. Dependencies
At runtime, the json-deserialize-action Kamelet relies upon the presence of the following dependencies:
- camel:kamelet
- camel:core
- camel:jackson
29.3. Usage
This section describes how you can use the json-deserialize-action.
29.3.1. Knative Action
You can use the json-deserialize-action Kamelet as an intermediate step in a Knative binding.
json-deserialize-action-binding.yaml
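The binding file itself is not shown above. A minimal sketch of what it typically contains (resource names and the Knative channel apiVersion are illustrative) is:

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: json-deserialize-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: timer-source
    properties:
      message: "Hello"
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1alpha1
        name: json-deserialize-action
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
```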
29.3.1.1. Prerequisite
Make sure that you have "Red Hat Integration - Camel K" installed on the OpenShift cluster that you’re connected to.
29.3.1.2. Procedure for using the cluster CLI
- Save the json-deserialize-action-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the action by using the following command:
oc apply -f json-deserialize-action-binding.yaml
29.3.1.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:
kamel bind timer-source?message=Hello --step json-deserialize-action channel:mychannel
This command creates the KameletBinding in the current namespace on the cluster.
29.3.2. Kafka Action
You can use the json-deserialize-action Kamelet as an intermediate step in a Kafka binding.
json-deserialize-action-binding.yaml
29.3.2.1. Prerequisites
- Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace.
- Make sure that you have "Red Hat Integration - Camel K" installed on the OpenShift cluster that you’re connected to.
29.3.2.2. Procedure for using the cluster CLI
- Save the json-deserialize-action-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the action by using the following command:
oc apply -f json-deserialize-action-binding.yaml
29.3.2.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:
kamel bind timer-source?message=Hello --step json-deserialize-action kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
This command creates the KameletBinding in the current namespace on the cluster.
29.4. Kamelet source file
Chapter 30. JSON Serialize Action
Serialize payload to JSON.
30.1. Configuration Options
The json-serialize-action Kamelet does not specify any configuration option.
30.2. Dependencies
At runtime, the json-serialize-action Kamelet relies upon the presence of the following dependencies:
- camel:kamelet
- camel:core
- camel:jackson
30.3. Usage
This section describes how you can use the json-serialize-action.
30.3.1. Knative Action
You can use the json-serialize-action Kamelet as an intermediate step in a Knative binding.
json-serialize-action-binding.yaml
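The binding file is not reproduced above. An illustrative sketch of its typical contents (resource names and the Knative channel apiVersion are assumptions) is:

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: json-serialize-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: timer-source
    properties:
      message: "Hello"
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1alpha1
        name: json-serialize-action
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
```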
30.3.1.1. Prerequisite
Make sure that you have "Red Hat Integration - Camel K" installed on the OpenShift cluster that you’re connected to.
30.3.1.2. Procedure for using the cluster CLI
- Save the json-serialize-action-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the action by using the following command:
oc apply -f json-serialize-action-binding.yaml
30.3.1.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:
kamel bind timer-source?message=Hello --step json-serialize-action channel:mychannel
This command creates the KameletBinding in the current namespace on the cluster.
30.3.2. Kafka Action
You can use the json-serialize-action Kamelet as an intermediate step in a Kafka binding.
json-serialize-action-binding.yaml
30.3.2.1. Prerequisites
- Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace.
- Make sure that you have "Red Hat Integration - Camel K" installed on the OpenShift cluster that you’re connected to.
30.3.2.2. Procedure for using the cluster CLI
- Save the json-serialize-action-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the action by using the following command:
oc apply -f json-serialize-action-binding.yaml
30.3.2.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:
kamel bind timer-source?message=Hello --step json-serialize-action kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
This command creates the KameletBinding in the current namespace on the cluster.
30.4. Kamelet source file
Chapter 31. Kafka Sink
Send data to Kafka topics.
The Kamelet is able to use the following headers, when they are set:
- key / ce-key: as the message key
- partition-key / ce-partitionkey: as the message partition key
Both headers are optional.
31.1. Configuration Options
The following table summarizes the configuration options available for the kafka-sink Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| bootstrapServers * | Brokers | Comma-separated list of Kafka broker URLs | string | | |
| password * | Password | Password to authenticate to Kafka | string | | |
| topic * | Topic Names | Comma-separated list of Kafka topic names | string | | |
| user * | Username | Username to authenticate to Kafka | string | | |
| saslMechanism | SASL Mechanism | The Simple Authentication and Security Layer (SASL) mechanism used | string | | |
| securityProtocol | Security Protocol | Protocol used to communicate with brokers. SASL_PLAINTEXT, PLAINTEXT, SASL_SSL, and SSL are supported | string | | |
Fields marked with an asterisk (*) are mandatory.
31.2. Dependencies
At runtime, the kafka-sink Kamelet relies upon the presence of the following dependencies:
- camel:kafka
- camel:kamelet
31.3. Usage
This section describes how you can use the kafka-sink.
31.3.1. Knative Sink
You can use the kafka-sink Kamelet as a Knative sink by binding it to a Knative object.
kafka-sink-binding.yaml
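The binding file itself is not shown above. A minimal sketch of what it typically contains (resource names, property values, and the Knative channel apiVersion are illustrative) is:

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: kafka-sink-binding
spec:
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: kafka-sink
    properties:
      bootstrapServers: "The Brokers"
      topic: "The Topic Names"
      user: "The Username"
      password: "The Password"
```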
31.3.1.1. Prerequisite
Make sure that you have "Red Hat Integration - Camel K" installed on the OpenShift cluster that you’re connected to.
31.3.1.2. Procedure for using the cluster CLI
- Save the kafka-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:
oc apply -f kafka-sink-binding.yaml
31.3.1.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind channel:mychannel kafka-sink -p "sink.bootstrapServers=The Brokers" -p "sink.password=The Password" -p "sink.topic=The Topic Names" -p "sink.user=The Username"
This command creates the KameletBinding in the current namespace on the cluster.
31.3.2. Kafka Sink
You can use the kafka-sink Kamelet as a Kafka sink by binding it to a Kafka topic.
kafka-sink-binding.yaml
31.3.2.1. Prerequisites
- Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace.
- Make sure that you have "Red Hat Integration - Camel K" installed on the OpenShift cluster that you’re connected to.
31.3.2.2. Procedure for using the cluster CLI
- Save the kafka-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:
oc apply -f kafka-sink-binding.yaml
31.3.2.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic kafka-sink -p "sink.bootstrapServers=The Brokers" -p "sink.password=The Password" -p "sink.topic=The Topic Names" -p "sink.user=The Username"
This command creates the KameletBinding in the current namespace on the cluster.
31.4. Kamelet source file
Chapter 32. Kafka Source
Receive data from Kafka topics.
32.1. Configuration Options
The following table summarizes the configuration options available for the kafka-source Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| topic * | Topic Names | Comma-separated list of Kafka topic names | string | | |
| bootstrapServers * | Brokers | Comma-separated list of Kafka broker URLs | string | | |
| securityProtocol | Security Protocol | Protocol used to communicate with brokers. SASL_PLAINTEXT, PLAINTEXT, SASL_SSL, and SSL are supported | string | | |
| saslMechanism | SASL Mechanism | The Simple Authentication and Security Layer (SASL) mechanism used | string | | |
| user * | Username | Username to authenticate to Kafka | string | | |
| password * | Password | Password to authenticate to Kafka | string | | |
| autoCommitEnable | Auto Commit Enable | If true, periodically commit to ZooKeeper the offset of messages already fetched by the consumer | boolean | | |
| allowManualCommit | Allow Manual Commit | Whether to allow manual commits | boolean | | |
| autoOffsetReset | Auto Offset Reset | What to do when there is no initial offset. The value can be one of latest, earliest, or none | string | | |
| pollOnError | Poll On Error Behavior | What to do if Kafka throws an exception while polling for new messages. The value can be one of DISCARD, ERROR_HANDLER, RECONNECT, RETRY, or STOP | string | | |
| deserializeHeaders | Automatically Deserialize Headers | When enabled, the Kamelet source deserializes all message headers to their String representation | boolean | | |
Fields marked with an asterisk (*) are mandatory.
32.2. Dependencies
At runtime, the kafka-source Kamelet relies upon the presence of the following dependencies:
- camel:kafka
- camel:kamelet
- camel:core
32.3. Usage
This section describes how you can use the kafka-source.
32.3.1. Knative Source
You can use the kafka-source Kamelet as a Knative source by binding it to a Knative object.
kafka-source-binding.yaml
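The binding file is not reproduced above. An illustrative sketch of its typical contents (resource names, property values, and the Knative channel apiVersion are assumptions) is:

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: kafka-source-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: kafka-source
    properties:
      bootstrapServers: "The Brokers"
      topic: "The Topic Names"
      user: "The Username"
      password: "The Password"
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
```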
32.3.1.1. Prerequisite
Make sure that you have "Red Hat Integration - Camel K" installed on the OpenShift cluster that you’re connected to.
32.3.1.2. Procedure for using the cluster CLI
- Save the kafka-source-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the source by using the following command:
oc apply -f kafka-source-binding.yaml
32.3.1.3. Procedure for using the Kamel CLI
Configure and run the source by using the following command:
kamel bind kafka-source -p "source.bootstrapServers=The Brokers" -p "source.password=The Password" -p "source.topic=The Topic Names" -p "source.user=The Username" channel:mychannel
This command creates the KameletBinding in the current namespace on the cluster.
32.3.2. Kafka Source
You can use the kafka-source Kamelet as a Kafka source by binding it to a Kafka topic.
kafka-source-binding.yaml
32.3.2.1. Prerequisites
- Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace.
- Make sure that you have "Red Hat Integration - Camel K" installed on the OpenShift cluster that you’re connected to.
32.3.2.2. Procedure for using the cluster CLI
- Save the kafka-source-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the source by using the following command:
oc apply -f kafka-source-binding.yaml
32.3.2.3. Procedure for using the Kamel CLI
Configure and run the source by using the following command:
kamel bind kafka-source -p "source.bootstrapServers=The Brokers" -p "source.password=The Password" -p "source.topic=The Topic Names" -p "source.user=The Username" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
This command creates the KameletBinding in the current namespace on the cluster.
32.4. Kamelet source file
Chapter 33. Kafka Topic Name Matches Filter Action
Filter messages based on the Kafka topic name matched against a regular expression.
33.1. Configuration Options
The following table summarizes the configuration options available for the topic-name-matches-filter-action Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| regex * | Regex | The regex to evaluate against the Kafka topic name | string | | |
Fields marked with an asterisk (*) are mandatory.
33.2. Dependencies
At runtime, the topic-name-matches-filter-action Kamelet relies upon the presence of the following dependencies:
- camel:core
- camel:kamelet
33.3. Usage
This section describes how you can use the topic-name-matches-filter-action.
33.3.1. Knative Action
You can use the topic-name-matches-filter-action Kamelet as an intermediate step in a Knative binding.
topic-name-matches-filter-action-binding.yaml
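The binding file itself is not shown above. A minimal sketch of what it typically contains (resource names, the regex value, and the Knative channel apiVersion are illustrative) is:

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: topic-name-matches-filter-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: timer-source
    properties:
      message: "Hello"
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1alpha1
        name: topic-name-matches-filter-action
      properties:
        regex: "The Regex"
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
```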
33.3.1.1. Prerequisite
Make sure that you have "Red Hat Integration - Camel K" installed on the OpenShift cluster that you’re connected to.
33.3.1.2. Procedure for using the cluster CLI
- Save the topic-name-matches-filter-action-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the action by using the following command:
oc apply -f topic-name-matches-filter-action-binding.yaml
33.3.1.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:
kamel bind timer-source?message=Hello --step topic-name-matches-filter-action -p "step-0.regex=The Regex" channel:mychannel
This command creates the KameletBinding in the current namespace on the cluster.
33.3.2. Kafka Action
You can use the topic-name-matches-filter-action Kamelet as an intermediate step in a Kafka binding.
topic-name-matches-filter-action-binding.yaml
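The body of this binding file is not reproduced above. A minimal sketch of what the Kafka variant might contain, assembled from the kamel bind command shown later in this section (the KameletBinding v1alpha1 API is assumed, and the message, regex, and topic name are placeholder values to replace with your own), is:

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: topic-name-matches-filter-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: timer-source
    properties:
      message: "Hello"
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1alpha1
        name: topic-name-matches-filter-action
      properties:
        regex: "The Regex"
  sink:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
```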
33.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure that "Red Hat Integration - Camel K" is installed in the OpenShift cluster you’re connected to.
33.3.2.2. Procedure for using the cluster CLI
- Save the topic-name-matches-filter-action-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the action by using the following command:
oc apply -f topic-name-matches-filter-action-binding.yaml
33.3.2.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:
kamel bind timer-source?message=Hello --step topic-name-matches-filter-action -p "step-0.regex=The Regex" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
This command creates the KameletBinding in the current namespace on the cluster.
33.4. Kamelet source file
Chapter 34. Log Sink
A sink that logs all data that it receives, useful for debugging purposes.
34.1. Configuration Options
The following table summarizes the configuration options available for the log-sink Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| showHeaders | Show Headers | Show the headers received | boolean | false | |
| showStreams | Show Streams | Show the stream bodies (they may not be available in following steps) | boolean | false | |
Fields marked with an asterisk (*) are mandatory.
34.2. Dependencies
At runtime, the log-sink Kamelet relies upon the presence of the following dependencies:
- camel:kamelet
- camel:log
34.3. Usage
This section describes how you can use the log-sink.
34.3.1. Knative Sink
You can use the log-sink Kamelet as a Knative sink by binding it to a Knative object.
log-sink-binding.yaml
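The body of this binding file is not reproduced above. A minimal sketch of what it might contain, assembled from the kamel bind command shown later in this section (the KameletBinding v1alpha1 API is assumed, and the channel name is a placeholder to replace with your own), is:

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: log-sink-binding
spec:
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: log-sink
```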
34.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
34.3.1.2. Procedure for using the cluster CLI
- Save the log-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:
oc apply -f log-sink-binding.yaml
34.3.1.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind channel:mychannel log-sink
This command creates the KameletBinding in the current namespace on the cluster.
34.3.2. Kafka Sink
You can use the log-sink Kamelet as a Kafka sink by binding it to a Kafka topic.
log-sink-binding.yaml
34.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure that "Red Hat Integration - Camel K" is installed in the OpenShift cluster you’re connected to.
34.3.2.2. Procedure for using the cluster CLI
- Save the log-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:
oc apply -f log-sink-binding.yaml
34.3.2.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic log-sink
This command creates the KameletBinding in the current namespace on the cluster.
34.4. Kamelet source file
Chapter 35. MariaDB Sink
Send data to a MariaDB Database.
This Kamelet expects a JSON body. The mapping between the JSON fields and the query parameters is done by key, so if you have the following query:
'INSERT INTO accounts (username,city) VALUES (:#username,:#city)'
The Kamelet needs to receive as input something like:
'{ "username":"oscerd", "city":"Rome"}'
35.1. Configuration Options
The following table summarizes the configuration options available for the mariadb-sink Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| databaseName * | Database Name | The name of the database to connect to | string | | |
| password * | Password | The password to use for accessing a secured MariaDB Database | string | | |
| query * | Query | The Query to execute against the MariaDB Database | string | | "INSERT INTO accounts (username,city) VALUES (:#username,:#city)" |
| serverName * | Server Name | Server Name for the data source | string | | "localhost" |
| username * | Username | The username to use for accessing a secured MariaDB Database | string | | |
| serverPort | Server Port | Server Port for the data source | string | 3306 | |
Fields marked with an asterisk (*) are mandatory.
35.2. Dependencies
At runtime, the mariadb-sink Kamelet relies upon the presence of the following dependencies:
- camel:jackson
- camel:kamelet
- camel:sql
- mvn:org.apache.commons:commons-dbcp2:2.7.0
- mvn:org.mariadb.jdbc:mariadb-java-client
35.3. Usage
This section describes how you can use the mariadb-sink.
35.3.1. Knative Sink
You can use the mariadb-sink Kamelet as a Knative sink by binding it to a Knative object.
mariadb-sink-binding.yaml
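The body of this binding file is not reproduced above. A minimal sketch of what it might contain, assembled from the kamel bind command shown later in this section (the KameletBinding v1alpha1 API is assumed, and the property values and channel name are placeholders to replace with your own), is:

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: mariadb-sink-binding
spec:
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: mariadb-sink
    properties:
      databaseName: "The Database Name"
      password: "The Password"
      query: "INSERT INTO accounts (username,city) VALUES (:#username,:#city)"
      serverName: "localhost"
      username: "The Username"
```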
35.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
35.3.1.2. Procedure for using the cluster CLI
- Save the mariadb-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:
oc apply -f mariadb-sink-binding.yaml
35.3.1.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind channel:mychannel mariadb-sink -p "sink.databaseName=The Database Name" -p "sink.password=The Password" -p "sink.query=INSERT INTO accounts (username,city) VALUES (:#username,:#city)" -p "sink.serverName=localhost" -p "sink.username=The Username"
This command creates the KameletBinding in the current namespace on the cluster.
35.3.2. Kafka Sink
You can use the mariadb-sink Kamelet as a Kafka sink by binding it to a Kafka topic.
mariadb-sink-binding.yaml
35.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure that "Red Hat Integration - Camel K" is installed in the OpenShift cluster you’re connected to.
35.3.2.2. Procedure for using the cluster CLI
- Save the mariadb-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:
oc apply -f mariadb-sink-binding.yaml
35.3.2.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic mariadb-sink -p "sink.databaseName=The Database Name" -p "sink.password=The Password" -p "sink.query=INSERT INTO accounts (username,city) VALUES (:#username,:#city)" -p "sink.serverName=localhost" -p "sink.username=The Username"
This command creates the KameletBinding in the current namespace on the cluster.
35.4. Kamelet source file
Chapter 36. Mask Fields Action
Mask fields in the message in transit with a constant value.
36.1. Configuration Options
The following table summarizes the configuration options available for the mask-field-action Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| fields * | Fields | Comma separated list of fields to mask | string | ||
| replacement * | Replacement | Replacement for the fields to be masked | string |
Fields marked with an asterisk (*) are mandatory.
36.2. Dependencies
At runtime, the mask-field-action Kamelet relies upon the presence of the following dependencies:
- mvn:org.apache.camel.kamelets:camel-kamelets-utils:1.0.0.fuse-800048-redhat-00001
- camel:jackson
- camel:kamelet
- camel:core
36.3. Usage
This section describes how you can use the mask-field-action.
36.3.1. Knative Action
You can use the mask-field-action Kamelet as an intermediate step in a Knative binding.
mask-field-action-binding.yaml
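The body of this binding file is not reproduced above. A minimal sketch of what it might contain, assembled from the kamel bind command shown later in this section (the KameletBinding v1alpha1 API is assumed, and the message, fields, replacement, and channel name are placeholders to replace with your own), is:

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: mask-field-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: timer-source
    properties:
      message: "Hello"
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1alpha1
        name: mask-field-action
      properties:
        fields: "The Fields"
        replacement: "The Replacement"
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
```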
36.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
36.3.1.2. Procedure for using the cluster CLI
- Save the mask-field-action-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the action by using the following command:
oc apply -f mask-field-action-binding.yaml
36.3.1.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:
kamel bind timer-source?message=Hello --step mask-field-action -p "step-0.fields=The Fields" -p "step-0.replacement=The Replacement" channel:mychannel
This command creates the KameletBinding in the current namespace on the cluster.
36.3.2. Kafka Action
You can use the mask-field-action Kamelet as an intermediate step in a Kafka binding.
mask-field-action-binding.yaml
36.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure that "Red Hat Integration - Camel K" is installed in the OpenShift cluster you’re connected to.
36.3.2.2. Procedure for using the cluster CLI
- Save the mask-field-action-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the action by using the following command:
oc apply -f mask-field-action-binding.yaml
36.3.2.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:
kamel bind timer-source?message=Hello --step mask-field-action -p "step-0.fields=The Fields" -p "step-0.replacement=The Replacement" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
This command creates the KameletBinding in the current namespace on the cluster.
36.4. Kamelet source file
Chapter 37. Message Timestamp Router Action
Update the topic field as a function of the original topic name and the record’s timestamp field.
37.1. Configuration Options
The following table summarizes the configuration options available for the message-timestamp-router-action Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| timestampKeys * | Timestamp Keys | Comma separated list of Timestamp keys. The timestamp is taken from the first found field. | string | | |
| timestampFormat | Timestamp Format | Format string for the timestamp that is compatible with java.text.SimpleDateFormat. | string | "yyyyMMdd" | |
| timestampKeyFormat | Timestamp Keys Format | Format of the timestamp keys. Possible values are 'timestamp' or any format string for the timestamp that is compatible with java.text.SimpleDateFormat. In case of 'timestamp' the field will be evaluated as milliseconds since 1970, so as a UNIX Timestamp. | string | "timestamp" | |
| topicFormat | Topic Format | Format string which can contain '$[topic]' and '$[timestamp]' as placeholders for the topic and timestamp, respectively. | string | "topic-timestamp" | |
Fields marked with an asterisk (*) are mandatory.
37.2. Dependencies
At runtime, the message-timestamp-router-action Kamelet relies upon the presence of the following dependencies:
- mvn:org.apache.camel.kamelets:camel-kamelets-utils:1.0.0.fuse-800048-redhat-00001
- camel:jackson
- camel:kamelet
- camel:core
37.3. Usage
This section describes how you can use the message-timestamp-router-action.
37.3.1. Knative Action
You can use the message-timestamp-router-action Kamelet as an intermediate step in a Knative binding.
message-timestamp-router-action-binding.yaml
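The body of this binding file is not reproduced above. A minimal sketch of what it might contain, assembled from the kamel bind command shown later in this section (the KameletBinding v1alpha1 API is assumed, and the message, timestamp keys, and channel name are placeholders to replace with your own), is:

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: message-timestamp-router-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: timer-source
    properties:
      message: "Hello"
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1alpha1
        name: message-timestamp-router-action
      properties:
        timestampKeys: "The Timestamp Keys"
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
```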
37.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
37.3.1.2. Procedure for using the cluster CLI
- Save the message-timestamp-router-action-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the action by using the following command:
oc apply -f message-timestamp-router-action-binding.yaml
37.3.1.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:
kamel bind timer-source?message=Hello --step message-timestamp-router-action -p "step-0.timestampKeys=The Timestamp Keys" channel:mychannel
This command creates the KameletBinding in the current namespace on the cluster.
37.3.2. Kafka Action
You can use the message-timestamp-router-action Kamelet as an intermediate step in a Kafka binding.
message-timestamp-router-action-binding.yaml
37.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure that "Red Hat Integration - Camel K" is installed in the OpenShift cluster you’re connected to.
37.3.2.2. Procedure for using the cluster CLI
- Save the message-timestamp-router-action-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the action by using the following command:
oc apply -f message-timestamp-router-action-binding.yaml
37.3.2.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:
kamel bind timer-source?message=Hello --step message-timestamp-router-action -p "step-0.timestampKeys=The Timestamp Keys" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
This command creates the KameletBinding in the current namespace on the cluster.
37.4. Kamelet source file
Chapter 38. MongoDB Sink
Send documents to MongoDB.
This Kamelet expects a JSON body.
Properties you can set as headers:
- db-upsert / ce-dbupsert: whether the database should create the element if it does not exist. Boolean value.
38.1. Configuration Options
The following table summarizes the configuration options available for the mongodb-sink Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| collection * | MongoDB Collection | Sets the name of the MongoDB collection to bind to this endpoint. | string | | |
| database * | MongoDB Database | Sets the name of the MongoDB database to target. | string | | |
| hosts * | MongoDB Hosts | Comma separated list of MongoDB Host Addresses in host:port format. | string | | |
| createCollection | Collection | Create collection during initialisation if it doesn’t exist. | boolean | false | |
| password | MongoDB Password | User password for accessing MongoDB. | string | | |
| username | MongoDB Username | Username for accessing MongoDB. | string | | |
| writeConcern | Write Concern | Configure the level of acknowledgment requested from MongoDB for write operations; possible values are ACKNOWLEDGED, W1, W2, W3, UNACKNOWLEDGED, JOURNALED, MAJORITY. | string | | |
Fields marked with an asterisk (*) are mandatory.
38.2. Dependencies
At runtime, the mongodb-sink Kamelet relies upon the presence of the following dependencies:
- camel:kamelet
- camel:mongodb
- camel:jackson
38.3. Usage
This section describes how you can use the mongodb-sink.
38.3.1. Knative Sink
You can use the mongodb-sink Kamelet as a Knative sink by binding it to a Knative object.
mongodb-sink-binding.yaml
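The body of this binding file is not reproduced above. A minimal sketch of what it might contain, assembled from the kamel bind command shown later in this section (the KameletBinding v1alpha1 API is assumed, and the property values and channel name are placeholders to replace with your own), is:

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: mongodb-sink-binding
spec:
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: mongodb-sink
    properties:
      collection: "The MongoDB Collection"
      database: "The MongoDB Database"
      hosts: "The MongoDB Hosts"
```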
38.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
38.3.1.2. Procedure for using the cluster CLI
- Save the mongodb-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:
oc apply -f mongodb-sink-binding.yaml
38.3.1.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind channel:mychannel mongodb-sink -p "sink.collection=The MongoDB Collection" -p "sink.database=The MongoDB Database" -p "sink.hosts=The MongoDB Hosts"
This command creates the KameletBinding in the current namespace on the cluster.
38.3.2. Kafka Sink
You can use the mongodb-sink Kamelet as a Kafka sink by binding it to a Kafka topic.
mongodb-sink-binding.yaml
38.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure that "Red Hat Integration - Camel K" is installed in the OpenShift cluster you’re connected to.
38.3.2.2. Procedure for using the cluster CLI
- Save the mongodb-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:
oc apply -f mongodb-sink-binding.yaml
38.3.2.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic mongodb-sink -p "sink.collection=The MongoDB Collection" -p "sink.database=The MongoDB Database" -p "sink.hosts=The MongoDB Hosts"
This command creates the KameletBinding in the current namespace on the cluster.
38.4. Kamelet source file
Chapter 39. MongoDB Source
Consume documents from MongoDB.
If the persistentTailTracking option is enabled, the consumer keeps track of the last consumed message and, on the next restart, consumption resumes from that message. When persistentTailTracking is enabled, the tailTrackIncreasingField must be provided (by default it is optional).
If the persistentTailTracking option is not enabled, the consumer consumes the whole collection and then waits idle for new documents to consume.
39.1. Configuration Options
The following table summarizes the configuration options available for the mongodb-source Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| collection * | MongoDB Collection | Sets the name of the MongoDB collection to bind to this endpoint. | string | | |
| database * | MongoDB Database | Sets the name of the MongoDB database to target. | string | | |
| hosts * | MongoDB Hosts | Comma separated list of MongoDB Host Addresses in host:port format. | string | | |
| password * | MongoDB Password | User password for accessing MongoDB. | string | | |
| username * | MongoDB Username | Username for accessing MongoDB. The username must be present in the MongoDB’s authentication database (authenticationDatabase). By default, the MongoDB authenticationDatabase is 'admin'. | string | | |
| persistentTailTracking | MongoDB Persistent Tail Tracking | Enable persistent tail tracking, which is a mechanism to keep track of the last consumed message across system restarts. The next time the system is up, the endpoint will recover the cursor from the point where it last stopped slurping records. | boolean | false | |
| tailTrackIncreasingField | MongoDB Tail Track Increasing Field | Correlation field in the incoming record which is of increasing nature and will be used to position the tailing cursor every time it is generated. | string | | |
Fields marked with an asterisk (*) are mandatory.
39.2. Dependencies
At runtime, the mongodb-source Kamelet relies upon the presence of the following dependencies:
- camel:kamelet
- camel:mongodb
- camel:jackson
39.3. Usage
This section describes how you can use the mongodb-source.
39.3.1. Knative Source
You can use the mongodb-source Kamelet as a Knative source by binding it to a Knative object.
mongodb-source-binding.yaml
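The body of this binding file is not reproduced above. A minimal sketch of what it might contain, assembled from the kamel bind command shown later in this section (the KameletBinding v1alpha1 API is assumed, and the property values and channel name are placeholders to replace with your own), is:

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: mongodb-source-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: mongodb-source
    properties:
      collection: "The MongoDB Collection"
      database: "The MongoDB Database"
      hosts: "The MongoDB Hosts"
      password: "The MongoDB Password"
      username: "The MongoDB Username"
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
```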
39.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
39.3.1.2. Procedure for using the cluster CLI
- Save the mongodb-source-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the source by using the following command:
oc apply -f mongodb-source-binding.yaml
39.3.1.3. Procedure for using the Kamel CLI
Configure and run the source by using the following command:
kamel bind mongodb-source -p "source.collection=The MongoDB Collection" -p "source.database=The MongoDB Database" -p "source.hosts=The MongoDB Hosts" -p "source.password=The MongoDB Password" -p "source.username=The MongoDB Username" channel:mychannel
This command creates the KameletBinding in the current namespace on the cluster.
39.3.2. Kafka Source
You can use the mongodb-source Kamelet as a Kafka source by binding it to a Kafka topic.
mongodb-source-binding.yaml
39.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure that "Red Hat Integration - Camel K" is installed in the OpenShift cluster you’re connected to.
39.3.2.2. Procedure for using the cluster CLI
- Save the mongodb-source-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the source by using the following command:
oc apply -f mongodb-source-binding.yaml
39.3.2.3. Procedure for using the Kamel CLI
Configure and run the source by using the following command:
kamel bind mongodb-source -p "source.collection=The MongoDB Collection" -p "source.database=The MongoDB Database" -p "source.hosts=The MongoDB Hosts" -p "source.password=The MongoDB Password" -p "source.username=The MongoDB Username" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
This command creates the KameletBinding in the current namespace on the cluster.
39.4. Kamelet source file
Chapter 40. MySQL Sink
Send data to a MySQL Database.
This Kamelet expects a JSON body. The mapping between the JSON fields and the query parameters is done by key, so if you have the following query:
'INSERT INTO accounts (username,city) VALUES (:#username,:#city)'
The Kamelet needs to receive as input something like:
'{ "username":"oscerd", "city":"Rome"}'
40.1. Configuration Options
The following table summarizes the configuration options available for the mysql-sink Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| databaseName * | Database Name | The name of the database to connect to | string | | |
| password * | Password | The password to use for accessing a secured MySQL Database | string | | |
| query * | Query | The Query to execute against the MySQL Database | string | | "INSERT INTO accounts (username,city) VALUES (:#username,:#city)" |
| serverName * | Server Name | Server Name for the data source | string | | "localhost" |
| username * | Username | The username to use for accessing a secured MySQL Database | string | | |
| serverPort | Server Port | Server Port for the data source | string | 3306 | |
Fields marked with an asterisk (*) are mandatory.
40.2. Dependencies
At runtime, the mysql-sink Kamelet relies upon the presence of the following dependencies:
- camel:jackson
- camel:kamelet
- camel:sql
- mvn:org.apache.commons:commons-dbcp2:2.7.0
- mvn:mysql:mysql-connector-java
40.3. Usage
This section describes how you can use the mysql-sink.
40.3.1. Knative Sink
You can use the mysql-sink Kamelet as a Knative sink by binding it to a Knative object.
mysql-sink-binding.yaml
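The body of this binding file is not reproduced above. A minimal sketch of what it might contain, assembled from the kamel bind command shown later in this section (the KameletBinding v1alpha1 API is assumed, and the property values and channel name are placeholders to replace with your own), is:

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: mysql-sink-binding
spec:
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: mysql-sink
    properties:
      databaseName: "The Database Name"
      password: "The Password"
      query: "INSERT INTO accounts (username,city) VALUES (:#username,:#city)"
      serverName: "localhost"
      username: "The Username"
```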
40.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
40.3.1.2. Procedure for using the cluster CLI
- Save the mysql-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:
oc apply -f mysql-sink-binding.yaml
40.3.1.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind channel:mychannel mysql-sink -p "sink.databaseName=The Database Name" -p "sink.password=The Password" -p "sink.query=INSERT INTO accounts (username,city) VALUES (:#username,:#city)" -p "sink.serverName=localhost" -p "sink.username=The Username"
This command creates the KameletBinding in the current namespace on the cluster.
40.3.2. Kafka Sink
You can use the mysql-sink Kamelet as a Kafka sink by binding it to a Kafka topic.
mysql-sink-binding.yaml
40.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure that "Red Hat Integration - Camel K" is installed in the OpenShift cluster you’re connected to.
40.3.2.2. Procedure for using the cluster CLI
- Save the mysql-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:
oc apply -f mysql-sink-binding.yaml
40.3.2.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic mysql-sink -p "sink.databaseName=The Database Name" -p "sink.password=The Password" -p "sink.query=INSERT INTO accounts (username,city) VALUES (:#username,:#city)" -p "sink.serverName=localhost" -p "sink.username=The Username"
This command creates the KameletBinding in the current namespace on the cluster.
40.4. Kamelet source file
Chapter 41. PostgreSQL Sink
Send data to a PostgreSQL Database.
This Kamelet expects a JSON body. The mapping between the JSON fields and the query parameters is done by key, so if you have the following query:
'INSERT INTO accounts (username,city) VALUES (:#username,:#city)'
The Kamelet needs to receive as input something like:
'{ "username":"oscerd", "city":"Rome"}'
41.1. Configuration Options
The following table summarizes the configuration options available for the postgresql-sink Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| databaseName * | Database Name | The name of the database to connect to | string | | |
| password * | Password | The password to use for accessing a secured PostgreSQL Database | string | | |
| query * | Query | The Query to execute against the PostgreSQL Database | string | | "INSERT INTO accounts (username,city) VALUES (:#username,:#city)" |
| serverName * | Server Name | Server Name for the data source | string | | "localhost" |
| username * | Username | The username to use for accessing a secured PostgreSQL Database | string | | |
| serverPort | Server Port | Server Port for the data source | string | 5432 | |
Fields marked with an asterisk (*) are mandatory.
41.2. Dependencies
At runtime, the postgresql-sink Kamelet relies upon the presence of the following dependencies:
- camel:jackson
- camel:kamelet
- camel:sql
- mvn:org.postgresql:postgresql
- mvn:org.apache.commons:commons-dbcp2:2.7.0
41.3. Usage
This section describes how you can use the postgresql-sink.
41.3.1. Knative Sink
You can use the postgresql-sink Kamelet as a Knative sink by binding it to a Knative object.
postgresql-sink-binding.yaml
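The contents of the referenced binding file are not reproduced above. A minimal sketch, assuming a Knative channel named mychannel and the placeholder property values used in this chapter, looks like:

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: postgresql-sink-binding
spec:
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: postgresql-sink
    properties:
      databaseName: "The Database Name"
      password: "The Password"
      query: "INSERT INTO accounts (username,city) VALUES (:#username,:#city)"
      serverName: "localhost"
      username: "The Username"
```

For the Kafka procedure later in this chapter, the same file is used with the source ref pointing at a KafkaTopic instead of a Channel.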
41.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed in the OpenShift cluster you’re connected to.
41.3.1.2. Procedure for using the cluster CLI
- Save the postgresql-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command:
oc apply -f postgresql-sink-binding.yaml
41.3.1.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind channel:mychannel postgresql-sink -p "sink.databaseName=The Database Name" -p "sink.password=The Password" -p "sink.query=INSERT INTO accounts (username,city) VALUES (:#username,:#city)" -p "sink.serverName=localhost" -p "sink.username=The Username"
This command creates the KameletBinding in the current namespace on the cluster.
41.3.2. Kafka Sink
You can use the postgresql-sink Kamelet as a Kafka sink by binding it to a Kafka topic.
postgresql-sink-binding.yaml
41.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure that "Red Hat Integration - Camel K" is installed in the OpenShift cluster you’re connected to.
41.3.2.2. Procedure for using the cluster CLI
- Save the postgresql-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command:
oc apply -f postgresql-sink-binding.yaml
41.3.2.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic postgresql-sink -p "sink.databaseName=The Database Name" -p "sink.password=The Password" -p "sink.query=INSERT INTO accounts (username,city) VALUES (:#username,:#city)" -p "sink.serverName=localhost" -p "sink.username=The Username"
This command creates the KameletBinding in the current namespace on the cluster.
41.4. Kamelet source file
Chapter 42. Predicate Filter Action
Filter based on a JsonPath Expression
42.1. Configuration Options
The following table summarizes the configuration options available for the predicate-filter-action Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| expression * | Expression | The JsonPath Expression to evaluate, without the outer parentheses. Since this is a filter, the expression acts as a predicate: if, as in the example, the foo field equals John, the message goes ahead; otherwise it is filtered out. | string | | |
Fields marked with an asterisk (*) are mandatory.
42.2. Dependencies
At runtime, the predicate-filter-action Kamelet relies upon the presence of the following dependencies:
- camel:core
- camel:kamelet
- camel:jsonpath
42.3. Usage
This section describes how you can use the predicate-filter-action.
42.3.1. Knative Action
You can use the predicate-filter-action Kamelet as an intermediate step in a Knative binding.
predicate-filter-action-binding.yaml
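The contents of the referenced binding file are not reproduced above. A minimal sketch, assuming the timer-source and Knative channel mychannel used by the kamel bind example in this chapter, looks like:

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: predicate-filter-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: timer-source
    properties:
      message: "Hello"
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1alpha1
        name: predicate-filter-action
      properties:
        expression: "@.foo =~ /.*John/"
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
```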
42.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed in the OpenShift cluster you’re connected to.
42.3.1.2. Procedure for using the cluster CLI
- Save the predicate-filter-action-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the action by using the following command:
oc apply -f predicate-filter-action-binding.yaml
42.3.1.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:
kamel bind timer-source?message=Hello --step predicate-filter-action -p "step-0.expression=@.foo =~ /.*John/" channel:mychannel
This command creates the KameletBinding in the current namespace on the cluster.
42.3.2. Kafka Action
You can use the predicate-filter-action Kamelet as an intermediate step in a Kafka binding.
predicate-filter-action-binding.yaml
42.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure that "Red Hat Integration - Camel K" is installed in the OpenShift cluster you’re connected to.
42.3.2.2. Procedure for using the cluster CLI
- Save the predicate-filter-action-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the action by using the following command:
oc apply -f predicate-filter-action-binding.yaml
42.3.2.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:
kamel bind timer-source?message=Hello --step predicate-filter-action -p "step-0.expression=@.foo =~ /.*John/" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
This command creates the KameletBinding in the current namespace on the cluster.
42.4. Kamelet source file
Chapter 43. Protobuf Deserialize Action
Deserialize payload to Protobuf
43.1. Configuration Options
The following table summarizes the configuration options available for the protobuf-deserialize-action Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| schema * | Schema | The Protobuf schema to use during deserialization (as a single line) | string | | |
Fields marked with an asterisk (*) are mandatory.
43.2. Dependencies
At runtime, the protobuf-deserialize-action Kamelet relies upon the presence of the following dependencies:
- mvn:org.apache.camel.kamelets:camel-kamelets-utils:1.0.0.fuse-800048-redhat-00001
- camel:kamelet
- camel:core
- camel:jackson-protobuf
43.3. Usage
This section describes how you can use the protobuf-deserialize-action.
43.3.1. Knative Action
You can use the protobuf-deserialize-action Kamelet as an intermediate step in a Knative binding.
protobuf-deserialize-action-binding.yaml
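The contents of the referenced binding file are not reproduced above. A minimal sketch, mirroring the round-trip pipeline (JSON → Protobuf → JSON) of the kamel bind example in this chapter, with the same placeholder schema, looks like:

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: protobuf-deserialize-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: timer-source
    properties:
      message: '{"first":"Ada","last":"Lovelace"}'
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1alpha1
        name: json-deserialize-action
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1alpha1
        name: protobuf-serialize-action
      properties:
        schema: "message Person { required string first = 1; required string last = 2; }"
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1alpha1
        name: protobuf-deserialize-action
      properties:
        schema: "message Person { required string first = 1; required string last = 2; }"
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
```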
43.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed in the OpenShift cluster you’re connected to.
43.3.1.2. Procedure for using the cluster CLI
- Save the protobuf-deserialize-action-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the action by using the following command:
oc apply -f protobuf-deserialize-action-binding.yaml
43.3.1.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:
kamel bind "timer-source?message={\"first\":\"Ada\",\"last\":\"Lovelace\"}" --step json-deserialize-action --step protobuf-serialize-action -p "step-1.schema=message Person { required string first = 1; required string last = 2; }" --step protobuf-deserialize-action -p "step-2.schema=message Person { required string first = 1; required string last = 2; }" --step json-serialize-action channel:mychannel
This command creates the KameletBinding in the current namespace on the cluster.
43.3.2. Kafka Action
You can use the protobuf-deserialize-action Kamelet as an intermediate step in a Kafka binding.
protobuf-deserialize-action-binding.yaml
43.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure that "Red Hat Integration - Camel K" is installed in the OpenShift cluster you’re connected to.
43.3.2.2. Procedure for using the cluster CLI
- Save the protobuf-deserialize-action-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the action by using the following command:
oc apply -f protobuf-deserialize-action-binding.yaml
43.3.2.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:
kamel bind "timer-source?message={\"first\":\"Ada\",\"last\":\"Lovelace\"}" --step json-deserialize-action --step protobuf-serialize-action -p "step-1.schema=message Person { required string first = 1; required string last = 2; }" --step protobuf-deserialize-action -p "step-2.schema=message Person { required string first = 1; required string last = 2; }" --step json-serialize-action kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
This command creates the KameletBinding in the current namespace on the cluster.
43.4. Kamelet source file
Chapter 44. Protobuf Serialize Action
Serialize payload to Protobuf
44.1. Configuration Options
The following table summarizes the configuration options available for the protobuf-serialize-action Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| schema * | Schema | The Protobuf schema to use during serialization (as a single line) | string | | |
Fields marked with an asterisk (*) are mandatory.
44.2. Dependencies
At runtime, the protobuf-serialize-action Kamelet relies upon the presence of the following dependencies:
- mvn:org.apache.camel.kamelets:camel-kamelets-utils:1.0.0.fuse-800048-redhat-00001
- camel:kamelet
- camel:core
- camel:jackson-protobuf
44.3. Usage
This section describes how you can use the protobuf-serialize-action.
44.3.1. Knative Action
You can use the protobuf-serialize-action Kamelet as an intermediate step in a Knative binding.
protobuf-serialize-action-binding.yaml
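The contents of the referenced binding file are not reproduced above. A minimal sketch, matching the kamel bind example in this chapter (placeholder schema, Knative channel mychannel), looks like:

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: protobuf-serialize-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: timer-source
    properties:
      message: '{"first":"Ada","last":"Lovelace"}'
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1alpha1
        name: json-deserialize-action
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1alpha1
        name: protobuf-serialize-action
      properties:
        schema: "message Person { required string first = 1; required string last = 2; }"
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
```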
44.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed in the OpenShift cluster you’re connected to.
44.3.1.2. Procedure for using the cluster CLI
- Save the protobuf-serialize-action-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the action by using the following command:
oc apply -f protobuf-serialize-action-binding.yaml
44.3.1.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:
kamel bind "timer-source?message={\"first\":\"Ada\",\"last\":\"Lovelace\"}" --step json-deserialize-action --step protobuf-serialize-action -p "step-1.schema=message Person { required string first = 1; required string last = 2; }" channel:mychannel
This command creates the KameletBinding in the current namespace on the cluster.
44.3.2. Kafka Action
You can use the protobuf-serialize-action Kamelet as an intermediate step in a Kafka binding.
protobuf-serialize-action-binding.yaml
44.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure that "Red Hat Integration - Camel K" is installed in the OpenShift cluster you’re connected to.
44.3.2.2. Procedure for using the cluster CLI
- Save the protobuf-serialize-action-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the action by using the following command:
oc apply -f protobuf-serialize-action-binding.yaml
44.3.2.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:
kamel bind "timer-source?message={\"first\":\"Ada\",\"last\":\"Lovelace\"}" --step json-deserialize-action --step protobuf-serialize-action -p "step-1.schema=message Person { required string first = 1; required string last = 2; }" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
This command creates the KameletBinding in the current namespace on the cluster.
44.4. Kamelet source file
Chapter 45. Regex Router Action
Update the destination using the configured regular expression and replacement string
45.1. Configuration Options
The following table summarizes the configuration options available for the regex-router-action Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| regex * | Regex | Regular Expression for destination | string | | |
| replacement * | Replacement | Replacement when matching | string | | |
Fields marked with an asterisk (*) are mandatory.
45.2. Dependencies
At runtime, the regex-router-action Kamelet relies upon the presence of the following dependencies:
- mvn:org.apache.camel.kamelets:camel-kamelets-utils:1.0.0.fuse-800048-redhat-00001
- camel:kamelet
- camel:core
45.3. Usage
This section describes how you can use the regex-router-action.
45.3.1. Knative Action
You can use the regex-router-action Kamelet as an intermediate step in a Knative binding.
regex-router-action-binding.yaml
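The contents of the referenced binding file are not reproduced above. A minimal sketch, using the same placeholder property values as the kamel bind example in this chapter, looks like:

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: regex-router-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: timer-source
    properties:
      message: "Hello"
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1alpha1
        name: regex-router-action
      properties:
        regex: "The Regex"
        replacement: "The Replacement"
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
```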
45.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed in the OpenShift cluster you’re connected to.
45.3.1.2. Procedure for using the cluster CLI
- Save the regex-router-action-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the action by using the following command:
oc apply -f regex-router-action-binding.yaml
45.3.1.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:
kamel bind timer-source?message=Hello --step regex-router-action -p "step-0.regex=The Regex" -p "step-0.replacement=The Replacement" channel:mychannel
This command creates the KameletBinding in the current namespace on the cluster.
45.3.2. Kafka Action
You can use the regex-router-action Kamelet as an intermediate step in a Kafka binding.
regex-router-action-binding.yaml
45.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure that "Red Hat Integration - Camel K" is installed in the OpenShift cluster you’re connected to.
45.3.2.2. Procedure for using the cluster CLI
- Save the regex-router-action-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the action by using the following command:
oc apply -f regex-router-action-binding.yaml
45.3.2.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:
kamel bind timer-source?message=Hello --step regex-router-action -p "step-0.regex=The Regex" -p "step-0.replacement=The Replacement" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
This command creates the KameletBinding in the current namespace on the cluster.
45.4. Kamelet source file
Chapter 46. Replace Field Action
Replace a field with a different key in the message in transit
46.1. Configuration Options
The following table summarizes the configuration options available for the replace-field-action Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| disabled * | Disabled | Comma-separated list of fields to be disabled | string | | |
| enabled * | Enabled | Comma-separated list of fields to be enabled | string | | |
| renames * | Renames | Comma-separated list of field:newName pairs to be renamed | string | | |
Fields marked with an asterisk (*) are mandatory.
46.2. Dependencies
At runtime, the replace-field-action Kamelet relies upon the presence of the following dependencies:
- mvn:org.apache.camel.kamelets:camel-kamelets-utils:1.0.0.fuse-800048-redhat-00001
- camel:core
- camel:jackson
- camel:kamelet
46.3. Usage
This section describes how you can use the replace-field-action.
46.3.1. Knative Action
You can use the replace-field-action Kamelet as an intermediate step in a Knative binding.
replace-field-action-binding.yaml
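The contents of the referenced binding file are not reproduced above. A minimal sketch, using the same placeholder property values as the kamel bind example in this chapter, looks like:

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: replace-field-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: timer-source
    properties:
      message: "Hello"
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1alpha1
        name: replace-field-action
      properties:
        disabled: "The Disabled"
        enabled: "The Enabled"
        renames: "foo:bar,c1:c2"
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
```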
46.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed in the OpenShift cluster you’re connected to.
46.3.1.2. Procedure for using the cluster CLI
- Save the replace-field-action-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the action by using the following command:
oc apply -f replace-field-action-binding.yaml
46.3.1.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:
kamel bind timer-source?message=Hello --step replace-field-action -p "step-0.disabled=The Disabled" -p "step-0.enabled=The Enabled" -p "step-0.renames=foo:bar,c1:c2" channel:mychannel
This command creates the KameletBinding in the current namespace on the cluster.
46.3.2. Kafka Action
You can use the replace-field-action Kamelet as an intermediate step in a Kafka binding.
replace-field-action-binding.yaml
46.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure that "Red Hat Integration - Camel K" is installed in the OpenShift cluster you’re connected to.
46.3.2.2. Procedure for using the cluster CLI
- Save the replace-field-action-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the action by using the following command:
oc apply -f replace-field-action-binding.yaml
46.3.2.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:
kamel bind timer-source?message=Hello --step replace-field-action -p "step-0.disabled=The Disabled" -p "step-0.enabled=The Enabled" -p "step-0.renames=foo:bar,c1:c2" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
This command creates the KameletBinding in the current namespace on the cluster.
46.4. Kamelet source file
Chapter 47. Salesforce Source
Receive updates from Salesforce.
47.1. Configuration Options
The following table summarizes the configuration options available for the salesforce-source Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| clientId * | Consumer Key | The Salesforce application consumer key | string | | |
| clientSecret * | Consumer Secret | The Salesforce application consumer secret | string | | |
| password * | Password | The Salesforce user password | string | | |
| query * | Query | The query to execute on Salesforce | string | | |
| topicName * | Topic Name | The name of the topic/channel to use | string | | |
| userName * | Username | The Salesforce username | string | | |
| loginUrl | Login URL | The Salesforce instance login URL | string | | |
Fields marked with an asterisk (*) are mandatory.
47.2. Dependencies
At runtime, the salesforce-source Kamelet relies upon the presence of the following dependencies:
- camel:jackson
- camel:salesforce
- mvn:org.apache.camel.k:camel-k-kamelet-reify
- camel:kamelet
47.3. Usage
This section describes how you can use the salesforce-source.
47.3.1. Knative Source
You can use the salesforce-source Kamelet as a Knative source by binding it to a Knative object.
salesforce-source-binding.yaml
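The contents of the referenced binding file are not reproduced above. A minimal sketch, using the same placeholder property values as the kamel bind example in this chapter, looks like:

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: salesforce-source-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: salesforce-source
    properties:
      clientId: "The Consumer Key"
      clientSecret: "The Consumer Secret"
      password: "The Password"
      query: "SELECT Id, Name, Email, Phone FROM Contact"
      topicName: "ContactTopic"
      userName: "The Username"
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
```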
47.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed in the OpenShift cluster you’re connected to.
47.3.1.2. Procedure for using the cluster CLI
- Save the salesforce-source-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the source by using the following command:
oc apply -f salesforce-source-binding.yaml
47.3.1.3. Procedure for using the Kamel CLI
Configure and run the source by using the following command:
kamel bind salesforce-source -p "source.clientId=The Consumer Key" -p "source.clientSecret=The Consumer Secret" -p "source.password=The Password" -p "source.query=SELECT Id, Name, Email, Phone FROM Contact" -p "source.topicName=ContactTopic" -p "source.userName=The Username" channel:mychannel
This command creates the KameletBinding in the current namespace on the cluster.
47.3.2. Kafka Source
You can use the salesforce-source Kamelet as a Kafka source by binding it to a Kafka topic.
salesforce-source-binding.yaml
47.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure that "Red Hat Integration - Camel K" is installed in the OpenShift cluster you’re connected to.
47.3.2.2. Procedure for using the cluster CLI
- Save the salesforce-source-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the source by using the following command:
oc apply -f salesforce-source-binding.yaml
47.3.2.3. Procedure for using the Kamel CLI
Configure and run the source by using the following command:
kamel bind salesforce-source -p "source.clientId=The Consumer Key" -p "source.clientSecret=The Consumer Secret" -p "source.password=The Password" -p "source.query=SELECT Id, Name, Email, Phone FROM Contact" -p "source.topicName=ContactTopic" -p "source.userName=The Username" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
This command creates the KameletBinding in the current namespace on the cluster.
47.4. Kamelet source file
Chapter 48. SFTP Sink
Send data to an SFTP Server.
The Kamelet expects the following headers to be set:
- file / ce-file: the name of the file to upload
If the header is not set, the exchange ID is used as the file name.
48.1. Configuration Options
The following table summarizes the configuration options available for the sftp-sink Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| connectionHost * | Connection Host | Hostname of the SFTP server | string | | |
| connectionPort * | Connection Port | Port of the SFTP server | string | | |
| directoryName * | Directory Name | The starting directory | string | | |
| password * | Password | The password to access the SFTP server | string | | |
| username * | Username | The username to access the SFTP server | string | | |
| fileExist | File Existence | How to behave if the file already exists. There are 4 enums and the value can be one of Override, Append, Fail, or Ignore | string | | |
| passiveMode | Passive Mode | Sets passive mode connection | boolean | | |
Fields marked with an asterisk (*) are mandatory.
48.2. Dependencies
At runtime, the sftp-sink Kamelet relies upon the presence of the following dependencies:
- camel:ftp
- camel:core
- camel:kamelet
48.3. Usage
This section describes how you can use the sftp-sink.
48.3.1. Knative Sink
You can use the sftp-sink Kamelet as a Knative sink by binding it to a Knative object.
sftp-sink-binding.yaml
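The contents of the referenced binding file are not reproduced above. A minimal sketch, using the same placeholder property values as the kamel bind example in this chapter, looks like:

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: sftp-sink-binding
spec:
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: sftp-sink
    properties:
      connectionHost: "The Connection Host"
      directoryName: "The Directory Name"
      password: "The Password"
      username: "The Username"
```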
48.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed in the OpenShift cluster you’re connected to.
48.3.1.2. Procedure for using the cluster CLI
- Save the sftp-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command:
oc apply -f sftp-sink-binding.yaml
48.3.1.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind channel:mychannel sftp-sink -p "sink.connectionHost=The Connection Host" -p "sink.directoryName=The Directory Name" -p "sink.password=The Password" -p "sink.username=The Username"
This command creates the KameletBinding in the current namespace on the cluster.
48.3.2. Kafka Sink
You can use the sftp-sink Kamelet as a Kafka sink by binding it to a Kafka topic.
sftp-sink-binding.yaml
48.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure that "Red Hat Integration - Camel K" is installed in the OpenShift cluster you’re connected to.
48.3.2.2. Procedure for using the cluster CLI
- Save the sftp-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command:
oc apply -f sftp-sink-binding.yaml
48.3.2.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic sftp-sink -p "sink.connectionHost=The Connection Host" -p "sink.directoryName=The Directory Name" -p "sink.password=The Password" -p "sink.username=The Username"
This command creates the KameletBinding in the current namespace on the cluster.
48.4. Kamelet source file
Chapter 49. SFTP Source
Receive data from an SFTP Server.
49.1. Configuration Options
The following table summarizes the configuration options available for the sftp-source Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| connectionHost * | Connection Host | Hostname of the SFTP server | string | | |
| connectionPort * | Connection Port | Port of the SFTP server | string | | |
| directoryName * | Directory Name | The starting directory | string | | |
| password * | Password | The password to access the SFTP server | string | | |
| username * | Username | The username to access the SFTP server | string | | |
| idempotent | Idempotency | Skip already processed files. | boolean | | |
| passiveMode | Passive Mode | Sets passive mode connection | boolean | | |
| recursive | Recursive | If a directory, look for files in all the sub-directories as well. | boolean | | |
Fields marked with an asterisk (*) are mandatory.
49.2. Dependencies
At runtime, the sftp-source Kamelet relies upon the presence of the following dependencies:
- camel:ftp
- camel:core
- camel:kamelet
49.3. Usage
This section describes how you can use the sftp-source.
49.3.1. Knative Source
You can use the sftp-source Kamelet as a Knative source by binding it to a Knative object.
sftp-source-binding.yaml
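The content of the binding file is not shown in this excerpt. A minimal sketch of what `sftp-source-binding.yaml` might contain, assuming the `camel.apache.org/v1alpha1` KameletBinding API, an InMemoryChannel as the Knative object, and the placeholder property values used in the commands below:

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: sftp-source-binding
spec:
  # Poll files from the SFTP server via the sftp-source Kamelet
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: sftp-source
    properties:
      connectionHost: "The Connection Host"
      directoryName: "The Directory Name"
      password: "The Password"
      username: "The Username"
  # Deliver each file's content to a Knative channel
  sink:
    ref:
      kind: InMemoryChannel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
```

Replace the placeholder property values with the details of your SFTP server before applying the file.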
49.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
49.3.1.2. Procedure for using the cluster CLI
- Save the sftp-source-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the source by using the following command:
oc apply -f sftp-source-binding.yaml
49.3.1.3. Procedure for using the Kamel CLI
Configure and run the source by using the following command:
kamel bind sftp-source -p "source.connectionHost=The Connection Host" -p "source.directoryName=The Directory Name" -p "source.password=The Password" -p "source.username=The Username" channel:mychannel
This command creates the KameletBinding in the current namespace on the cluster.
49.3.2. Kafka Source
You can use the sftp-source Kamelet as a Kafka source by binding it to a Kafka topic.
sftp-source-binding.yaml
49.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure that you have "Red Hat Integration - Camel K" installed in the OpenShift cluster that you’re connected to.
49.3.2.2. Procedure for using the cluster CLI
- Save the sftp-source-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the source by using the following command:
oc apply -f sftp-source-binding.yaml
49.3.2.3. Procedure for using the Kamel CLI
Configure and run the source by using the following command:
kamel bind sftp-source -p "source.connectionHost=The Connection Host" -p "source.directoryName=The Directory Name" -p "source.password=The Password" -p "source.username=The Username" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
This command creates the KameletBinding in the current namespace on the cluster.
49.4. Kamelet source file
Chapter 50. Slack Source
Receive messages from a Slack channel.
50.1. Configuration Options
The following table summarizes the configuration options available for the slack-source Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| channel * | Channel | The Slack channel to receive messages from | string | | |
| token * | Token | The token to access Slack. A Slack app is needed. This app needs to have channels:history and channels:read permissions. The Bot User OAuth Access Token is the kind of token needed. | string | | |
Fields marked with an asterisk (*) are mandatory.
50.2. Dependencies
At runtime, the slack-source Kamelet relies upon the presence of the following dependencies:
- camel:kamelet
- camel:slack
- camel:jackson
50.3. Usage
This section describes how you can use the slack-source.
50.3.1. Knative Source
You can use the slack-source Kamelet as a Knative source by binding it to a Knative object.
slack-source-binding.yaml
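The content of the binding file is not shown in this excerpt. A minimal sketch of what `slack-source-binding.yaml` might contain, assuming the `camel.apache.org/v1alpha1` KameletBinding API, an InMemoryChannel as the Knative object, and the placeholder values used in the commands below:

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: slack-source-binding
spec:
  # Receive messages from the Slack channel via the slack-source Kamelet
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: slack-source
    properties:
      channel: "#myroom"
      token: "The Token"
  # Deliver each Slack message to a Knative channel
  sink:
    ref:
      kind: InMemoryChannel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
```

Replace `#myroom` and the token placeholder with your Slack channel and Bot User OAuth Access Token before applying the file.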
50.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
50.3.1.2. Procedure for using the cluster CLI
- Save the slack-source-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the source by using the following command:
oc apply -f slack-source-binding.yaml
50.3.1.3. Procedure for using the Kamel CLI
Configure and run the source by using the following command:
kamel bind slack-source -p "source.channel=#myroom" -p "source.token=The Token" channel:mychannel
This command creates the KameletBinding in the current namespace on the cluster.
50.3.2. Kafka Source
You can use the slack-source Kamelet as a Kafka source by binding it to a Kafka topic.
slack-source-binding.yaml
50.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure that you have "Red Hat Integration - Camel K" installed in the OpenShift cluster that you’re connected to.
50.3.2.2. Procedure for using the cluster CLI
- Save the slack-source-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the source by using the following command:
oc apply -f slack-source-binding.yaml
50.3.2.3. Procedure for using the Kamel CLI
Configure and run the source by using the following command:
kamel bind slack-source -p "source.channel=#myroom" -p "source.token=The Token" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
This command creates the KameletBinding in the current namespace on the cluster.
50.4. Kamelet source file
Chapter 51. Microsoft SQL Server Sink
Send data to a Microsoft SQL Server Database.
This Kamelet expects a JSON body. The mapping between the JSON fields and the query parameters is done by key. For example, if you have the following query:
'INSERT INTO accounts (username,city) VALUES (:#username,:#city)'
the Kamelet needs to receive as input something like:
'{ "username":"oscerd", "city":"Rome"}'
51.1. Configuration Options
The following table summarizes the configuration options available for the sqlserver-sink Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| databaseName * | Database Name | The name of the database to connect to | string | | |
| password * | Password | The password to use for accessing a secured SQL Server Database | string | | |
| query * | Query | The Query to execute against the SQL Server Database | string | | |
| serverName * | Server Name | Server Name for the data source | string | | |
| username * | Username | The username to use for accessing a secured SQL Server Database | string | | |
| serverPort | Server Port | Server Port for the data source | string | | |
Fields marked with an asterisk (*) are mandatory.
51.2. Dependencies
At runtime, the sqlserver-sink Kamelet relies upon the presence of the following dependencies:
- camel:jackson
- camel:kamelet
- camel:sql
- mvn:org.apache.commons:commons-dbcp2:2.7.0
- mvn:com.microsoft.sqlserver:mssql-jdbc:9.2.1.jre11
51.3. Usage
This section describes how you can use the sqlserver-sink.
51.3.1. Knative Sink
You can use the sqlserver-sink Kamelet as a Knative sink by binding it to a Knative object.
sqlserver-sink-binding.yaml
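The content of the binding file is not shown in this excerpt. A minimal sketch of what `sqlserver-sink-binding.yaml` might contain, assuming the `camel.apache.org/v1alpha1` KameletBinding API, an InMemoryChannel as the Knative object, and the placeholder values used in the commands below:

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: sqlserver-sink-binding
spec:
  # Receive JSON payloads from a Knative channel
  source:
    ref:
      kind: InMemoryChannel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  # Insert each payload into SQL Server via the sqlserver-sink Kamelet
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: sqlserver-sink
    properties:
      databaseName: "The Database Name"
      password: "The Password"
      query: "INSERT INTO accounts (username,city) VALUES (:#username,:#city)"
      serverName: "localhost"
      username: "The Username"
```

Replace the placeholder property values with the details of your SQL Server database before applying the file.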
51.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
51.3.1.2. Procedure for using the cluster CLI
- Save the sqlserver-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command:
oc apply -f sqlserver-sink-binding.yaml
51.3.1.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind channel:mychannel sqlserver-sink -p "sink.databaseName=The Database Name" -p "sink.password=The Password" -p "sink.query=INSERT INTO accounts (username,city) VALUES (:#username,:#city)" -p "sink.serverName=localhost" -p "sink.username=The Username"
This command creates the KameletBinding in the current namespace on the cluster.
51.3.2. Kafka Sink
You can use the sqlserver-sink Kamelet as a Kafka sink by binding it to a Kafka topic.
sqlserver-sink-binding.yaml
51.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure that you have "Red Hat Integration - Camel K" installed in the OpenShift cluster that you’re connected to.
51.3.2.2. Procedure for using the cluster CLI
- Save the sqlserver-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command:
oc apply -f sqlserver-sink-binding.yaml
51.3.2.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic sqlserver-sink -p "sink.databaseName=The Database Name" -p "sink.password=The Password" -p "sink.query=INSERT INTO accounts (username,city) VALUES (:#username,:#city)" -p "sink.serverName=localhost" -p "sink.username=The Username"
This command creates the KameletBinding in the current namespace on the cluster.
51.4. Kamelet source file
Chapter 52. Telegram Source
The Telegram Source Kamelet is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production.
These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview.
Receive all messages that people send to your Telegram bot.
To create a bot, contact the @botfather account using the Telegram app.
The source attaches the following headers to the messages:
- chat-id / ce-chatid: the ID of the chat where the message comes from
52.1. Configuration Options
The following table summarizes the configuration options available for the telegram-source Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| authorizationToken * | Token | The token to access your bot on Telegram. You can obtain it from the Telegram @botfather. | string | | |
Fields marked with an asterisk (*) are mandatory.
52.2. Dependencies
At runtime, the telegram-source Kamelet relies upon the presence of the following dependencies:
- camel:jackson
- camel:kamelet
- camel:telegram
- camel:core
52.3. Usage
This section describes how you can use the telegram-source.
52.3.1. Knative Source
You can use the telegram-source Kamelet as a Knative source by binding it to a Knative object.
telegram-source-binding.yaml
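The content of the binding file is not shown in this excerpt. A minimal sketch of what `telegram-source-binding.yaml` might contain, assuming the `camel.apache.org/v1alpha1` KameletBinding API, an InMemoryChannel as the Knative object, and the placeholder token used in the commands below:

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: telegram-source-binding
spec:
  # Receive messages sent to your Telegram bot via the telegram-source Kamelet
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: telegram-source
    properties:
      authorizationToken: "The Token"
  # Deliver each Telegram message to a Knative channel
  sink:
    ref:
      kind: InMemoryChannel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
```

Replace the token placeholder with the bot token obtained from @botfather before applying the file.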
52.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
52.3.1.2. Procedure for using the cluster CLI
- Save the telegram-source-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the source by using the following command:
oc apply -f telegram-source-binding.yaml
52.3.1.3. Procedure for using the Kamel CLI
Configure and run the source by using the following command:
kamel bind telegram-source -p "source.authorizationToken=The Token" channel:mychannel
This command creates the KameletBinding in the current namespace on the cluster.
52.3.2. Kafka Source
You can use the telegram-source Kamelet as a Kafka source by binding it to a Kafka topic.
telegram-source-binding.yaml
52.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure that you have "Red Hat Integration - Camel K" installed in the OpenShift cluster that you’re connected to.
52.3.2.2. Procedure for using the cluster CLI
- Save the telegram-source-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the source by using the following command:
oc apply -f telegram-source-binding.yaml
52.3.2.3. Procedure for using the Kamel CLI
Configure and run the source by using the following command:
kamel bind telegram-source -p "source.authorizationToken=The Token" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
This command creates the KameletBinding in the current namespace on the cluster.
52.4. Kamelet source file
Chapter 53. Timer Source
Produces periodic events with a custom payload.
53.1. Configuration Options
The following table summarizes the configuration options available for the timer-source Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| message * | Message | The message to generate | string | | |
| contentType | Content Type | The content type of the message being generated | string | | |
| period | Period | The interval between two events in milliseconds | integer | | |
Fields marked with an asterisk (*) are mandatory.
53.2. Dependencies
At runtime, the timer-source Kamelet relies upon the presence of the following dependencies:
- camel:core
- camel:timer
- camel:kamelet
53.3. Usage
This section describes how you can use the timer-source.
53.3.1. Knative Source
You can use the timer-source Kamelet as a Knative source by binding it to a Knative object.
timer-source-binding.yaml
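The content of the binding file is not shown in this excerpt. A minimal sketch of what `timer-source-binding.yaml` might contain, assuming the `camel.apache.org/v1alpha1` KameletBinding API, an InMemoryChannel as the Knative object, and the message used in the commands below:

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: timer-source-binding
spec:
  # Emit a periodic event with a fixed payload via the timer-source Kamelet
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: timer-source
    properties:
      message: "hello world"
  # Deliver each event to a Knative channel
  sink:
    ref:
      kind: InMemoryChannel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
```

Because this Kamelet is intended for development and testing, the payload is typically a fixed string like the one shown here.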
53.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
53.3.1.2. Procedure for using the cluster CLI
- Save the timer-source-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the source by using the following command:
oc apply -f timer-source-binding.yaml
53.3.1.3. Procedure for using the Kamel CLI
Configure and run the source by using the following command:
kamel bind timer-source -p "source.message=hello world" channel:mychannel
This command creates the KameletBinding in the current namespace on the cluster.
53.3.2. Kafka Source
You can use the timer-source Kamelet as a Kafka source by binding it to a Kafka topic.
timer-source-binding.yaml
53.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure that you have "Red Hat Integration - Camel K" installed in the OpenShift cluster that you’re connected to.
53.3.2.2. Procedure for using the cluster CLI
- Save the timer-source-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the source by using the following command:
oc apply -f timer-source-binding.yaml
53.3.2.3. Procedure for using the Kamel CLI
Configure and run the source by using the following command:
kamel bind timer-source -p "source.message=hello world" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
This command creates the KameletBinding in the current namespace on the cluster.
53.4. Kamelet source file
Chapter 54. Timestamp Router Action
Update the topic field as a function of the original topic name and the record timestamp.
54.1. Configuration Options
The following table summarizes the configuration options available for the timestamp-router-action Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| timestampFormat | Timestamp Format | Format string for the timestamp that is compatible with java.text.SimpleDateFormat. | string | | |
| timestampHeaderName | Timestamp Header Name | The name of the header containing a timestamp | string | | |
| topicFormat | Topic Format | Format string which can contain '$[topic]' and '$[timestamp]' as placeholders for the topic and timestamp, respectively. | string | | |
Fields marked with an asterisk (*) are mandatory.
54.2. Dependencies
At runtime, the timestamp-router-action Kamelet relies upon the presence of the following dependencies:
- mvn:org.apache.camel.kamelets:camel-kamelets-utils:1.0.0.fuse-800048-redhat-00001
- camel:kamelet
- camel:core
54.3. Usage
This section describes how you can use the timestamp-router-action.
54.3.1. Knative Action
You can use the timestamp-router-action Kamelet as an intermediate step in a Knative binding.
timestamp-router-action-binding.yaml
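The content of the binding file is not shown in this excerpt. A minimal sketch of what `timestamp-router-action-binding.yaml` might contain, assuming the `camel.apache.org/v1alpha1` KameletBinding API and an InMemoryChannel as the Knative object; the action is placed in the `steps` section between the source and the sink:

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: timestamp-router-action-binding
spec:
  # A timer-source is used here only to generate test messages
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: timer-source
    properties:
      message: "Hello"
  # The action runs as an intermediate step on each exchange
  steps:
  - ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: timestamp-router-action
  sink:
    ref:
      kind: InMemoryChannel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
```

The optional properties from the table above (timestampFormat, timestampHeaderName, topicFormat) can be set under a `properties:` key on the step.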
54.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
54.3.1.2. Procedure for using the cluster CLI
- Save the timestamp-router-action-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the action by using the following command:
oc apply -f timestamp-router-action-binding.yaml
54.3.1.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:
kamel bind timer-source?message=Hello --step timestamp-router-action channel:mychannel
This command creates the KameletBinding in the current namespace on the cluster.
54.3.2. Kafka Action
You can use the timestamp-router-action Kamelet as an intermediate step in a Kafka binding.
timestamp-router-action-binding.yaml
54.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure that you have "Red Hat Integration - Camel K" installed in the OpenShift cluster that you’re connected to.
54.3.2.2. Procedure for using the cluster CLI
- Save the timestamp-router-action-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the action by using the following command:
oc apply -f timestamp-router-action-binding.yaml
54.3.2.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:
kamel bind timer-source?message=Hello --step timestamp-router-action kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
This command creates the KameletBinding in the current namespace on the cluster.
54.4. Kamelet source file
Chapter 55. Value to Key Action
Replace the Kafka record key with a new key formed from a subset of fields in the body.
55.1. Configuration Options
The following table summarizes the configuration options available for the value-to-key-action Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| fields * | Fields | Comma-separated list of fields to be used to form the new key | string | | |
Fields marked with an asterisk (*) are mandatory.
55.2. Dependencies
At runtime, the value-to-key-action Kamelet relies upon the presence of the following dependencies:
- mvn:org.apache.camel.kamelets:camel-kamelets-utils:1.0.0.fuse-800048-redhat-00001
- camel:core
- camel:jackson
- camel:kamelet
55.3. Usage
This section describes how you can use the value-to-key-action.
55.3.1. Knative Action
You can use the value-to-key-action Kamelet as an intermediate step in a Knative binding.
value-to-key-action-binding.yaml
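The content of the binding file is not shown in this excerpt. A minimal sketch of what `value-to-key-action-binding.yaml` might contain, assuming the `camel.apache.org/v1alpha1` KameletBinding API and an InMemoryChannel as the Knative object; the mandatory `fields` property is set on the step, matching the placeholder used in the commands below:

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: value-to-key-action-binding
spec:
  # A timer-source is used here only to generate test messages
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: timer-source
    properties:
      message: "Hello"
  # Build the new record key from the listed body fields
  steps:
  - ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: value-to-key-action
    properties:
      fields: "The Fields"
  sink:
    ref:
      kind: InMemoryChannel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
```

Replace the `fields` placeholder with a comma-separated list of field names from your message body before applying the file.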
55.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you’re connected to.
55.3.1.2. Procedure for using the cluster CLI
- Save the value-to-key-action-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the action by using the following command:
oc apply -f value-to-key-action-binding.yaml
55.3.1.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:
kamel bind timer-source?message=Hello --step value-to-key-action -p "step-0.fields=The Fields" channel:mychannel
This command creates the KameletBinding in the current namespace on the cluster.
55.3.2. Kafka Action
You can use the value-to-key-action Kamelet as an intermediate step in a Kafka binding.
value-to-key-action-binding.yaml
55.3.2.1. Prerequisites
Ensure that you’ve installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure that you have "Red Hat Integration - Camel K" installed in the OpenShift cluster that you’re connected to.
55.3.2.2. Procedure for using the cluster CLI
- Save the value-to-key-action-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the action by using the following command:
oc apply -f value-to-key-action-binding.yaml
55.3.2.3. Procedure for using the Kamel CLI
Configure and run the action by using the following command:
kamel bind timer-source?message=Hello --step value-to-key-action -p "step-0.fields=The Fields" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
This command creates the KameletBinding in the current namespace on the cluster.