Kamelets Reference for Red Hat build of Apache Camel for Quarkus


Red Hat build of Apache Camel 4.10

Kamelets Reference for Red Hat build of Apache Camel for Quarkus

Abstract

Kamelets provide an alternative approach to application integration. Instead of using Camel components directly, you can configure Kamelets (opinionated route templates) to create connections.

Preface

Providing feedback on Red Hat build of Apache Camel documentation

To report an error or to suggest an improvement to the documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, you are prompted to create one.

Procedure

  1. Click the following link to create a ticket.
  2. Enter a brief description of the issue in the Summary.
  3. Provide a detailed description of the issue or the enhancement in the Description. Include the URL to the section of the documentation where the issue occurs.
  4. Click Submit to create the issue and route it to the appropriate documentation team.

1. AWS DynamoDB Sink

Send data to an AWS DynamoDB service. The sent data inserts, updates, or deletes an item in the specified AWS DynamoDB table.

The access key and secret key are the basic method for authenticating to the AWS DynamoDB service. Because the Kamelet also provides the useDefaultCredentialsProvider option, these parameters are optional.

When the default credentials provider is used, the AWS DynamoDB client loads the credentials through that provider instead of using static credentials. For this reason, this Kamelet does not make the access key and secret key mandatory parameters.
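
For example, a binding that relies on the default credentials provider instead of static keys might look like the following minimal sketch (the channel name and table name are illustrative):

apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: aws-ddb-sink-default-credentials-binding
spec:
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: aws-ddb-sink
    properties:
      region: "eu-west-1"
      table: "The Table"
      # No accessKey/secretKey: credentials are resolved by the default credentials provider
      useDefaultCredentialsProvider: true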

This Kamelet expects a JSON body. The mapping between the JSON fields and the table attribute values is done by key, so for the following input:

{"username":"oscerd", "city":"Rome"}

the Kamelet inserts or updates an item in the specified AWS DynamoDB table and sets the attributes username and city, respectively. Note that the JSON object must include the primary key values that define the item.
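
For example, to delete items instead of inserting them, you can set the operation property. The following is a minimal sketch; the timer-source message and the assumption that username is the table's primary key are illustrative:

apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: aws-ddb-sink-delete-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: timer-source
    properties:
      # The body must carry the primary key of the item to delete
      message: '{"username":"oscerd"}'
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: aws-ddb-sink
    properties:
      region: "eu-west-1"
      table: "The Table"
      operation: "DeleteItem"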

1.1. Configuration Options

The following table summarizes the configuration options available for the aws-ddb-sink Kamelet:

|===
| Property| Name| Description| Type| Default| Example

| region *| AWS Region| The AWS region to connect to| string| | "eu-west-1"
| table *| Table| The name of the DynamoDB table| string| |
| accessKey| Access Key| The access key obtained from AWS| string| |
| operation| Operation| The operation to perform (one of PutItem, UpdateItem, DeleteItem)| string| "PutItem"| "PutItem"
| overrideEndpoint| Endpoint Overwrite| Set the need for overriding the endpoint URI. This option must be used in combination with the uriEndpointOverride setting.| boolean| false|
| secretKey| Secret Key| The secret key obtained from AWS| string| |
| uriEndpointOverride| Overwrite Endpoint URI| Set the overriding endpoint URI. This option must be used in combination with the overrideEndpoint option.| string| |
| useDefaultCredentialsProvider| Default Credentials Provider| Set whether the DynamoDB client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in.| boolean| false|
| useProfileCredentialsProvider| Profile Credentials Provider| Set whether the DynamoDB client should expect to load credentials through a profile credentials provider.| boolean| false|
| useSessionCredentials| Session Credentials| Set whether the DynamoDB client should expect to use Session Credentials. This is useful when the user needs to assume an IAM role for doing operations in DynamoDB.| boolean| false|
| profileCredentialsName| Profile Credentials Name| If using a profile credentials provider, this parameter sets the profile name.| string| |
| sessionToken| Session Token| Amazon AWS Session Token used when the user needs to assume an IAM role.| string (password format)| |
|===

* = Fields marked with an asterisk are mandatory.

1.2. Dependencies

1.3. Usage

1.3.1. Knative Sink

You can use the aws-ddb-sink Kamelet as a Knative sink by binding it to a Knative object.

aws-ddb-sink-binding.yaml

apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: aws-ddb-sink-binding
spec:
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: aws-ddb-sink
    properties:
      region: "eu-west-1"
      table: "The Table"


1.3.2. Kafka Sink

You can use the aws-ddb-sink Kamelet as a Kafka sink by binding it to a Kafka topic.

aws-ddb-sink-binding.yaml

apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: aws-ddb-sink-binding
spec:
  source:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: aws-ddb-sink
    properties:
      region: "eu-west-1"
      table: "The Table"

1.4. Kamelets source file

https://github.com/apache/camel-kamelets/blob/4.10.x/kamelets/aws-ddb-sink.kamelet.yaml

2. Avro Deserialize Action

Deserialize payload to Avro.

2.1. Configuration Options

The following table summarizes the configuration options available for the avro-deserialize-action Kamelet:

|===
| Property| Name| Description| Type| Default| Example

| schema *| Schema| The Avro schema to use during serialization (using JSON format)| string| | "{\"type\": \"record\", \"namespace\": \"com.example\", \"name\": \"FullName\", \"fields\": [{\"name\": \"first\", \"type\": \"string\"},{\"name\": \"last\", \"type\": \"string\"}]}"
| validate| Validate| Indicates if the content must be validated against the schema| boolean| true|
|===

* = Fields marked with an asterisk are mandatory.

2.2. Dependencies

2.3. Usage

2.3.1. Knative Action

You can use the avro-deserialize-action Kamelet as an intermediate step in a Knative binding.

avro-deserialize-action-binding.yaml

apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: avro-deserialize-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: timer-source
    properties:
      message: '{"first":"Ada","last":"Lovelace"}'
  steps:
  - ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: json-deserialize-action
  - ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: avro-serialize-action
    properties:
      schema: "{\"type\": \"record\", \"namespace\": \"com.example\", \"name\": \"FullName\", \"fields\": [{\"name\": \"first\", \"type\": \"string\"},{\"name\": \"last\", \"type\": \"string\"}]}"
  - ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: avro-deserialize-action
    properties:
      schema: "{\"type\": \"record\", \"namespace\": \"com.example\", \"name\": \"FullName\", \"fields\": [{\"name\": \"first\", \"type\": \"string\"},{\"name\": \"last\", \"type\": \"string\"}]}"
  - ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: json-serialize-action
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel

2.3.2. Kafka Action

You can use the avro-deserialize-action Kamelet as an intermediate step in a Kafka binding.

avro-deserialize-action-binding.yaml

apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: avro-deserialize-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: timer-source
    properties:
      message: '{"first":"Ada","last":"Lovelace"}'
  steps:
  - ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: json-deserialize-action
  - ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: avro-serialize-action
    properties:
      schema: "{\"type\": \"record\", \"namespace\": \"com.example\", \"name\": \"FullName\", \"fields\": [{\"name\": \"first\", \"type\": \"string\"},{\"name\": \"last\", \"type\": \"string\"}]}"
  - ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: avro-deserialize-action
    properties:
      schema: "{\"type\": \"record\", \"namespace\": \"com.example\", \"name\": \"FullName\", \"fields\": [{\"name\": \"first\", \"type\": \"string\"},{\"name\": \"last\", \"type\": \"string\"}]}"
  - ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: json-serialize-action
  sink:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic

2.4. Kamelets source file

https://github.com/apache/camel-kamelets/blob/4.10.x/kamelets/avro-deserialize-action.kamelet.yaml

3. Avro Serialize Action

Serialize payload to Avro.

3.1. Configuration Options

The following table summarizes the configuration options available for the avro-serialize-action Kamelet:

|===
| Property| Name| Description| Type| Default| Example

| schema *| Schema| The Avro schema to use during serialization (using JSON format)| string| | "{\"type\": \"record\", \"namespace\": \"com.example\", \"name\": \"FullName\", \"fields\": [{\"name\": \"first\", \"type\": \"string\"},{\"name\": \"last\", \"type\": \"string\"}]}"
| validate| Validate| Indicates if the content must be validated against the schema| boolean| true|
|===

* = Fields marked with an asterisk are mandatory.

3.2. Dependencies

3.3. Usage

3.3.1. Knative Action

You can use the avro-serialize-action Kamelet as an intermediate step in a Knative binding.

avro-serialize-action-binding.yaml

apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: avro-serialize-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: timer-source
    properties:
      message: '{"first":"Ada","last":"Lovelace"}'
  steps:
  - ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: json-deserialize-action
  - ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: avro-serialize-action
    properties:
      schema: "{\"type\": \"record\", \"namespace\": \"com.example\", \"name\": \"FullName\", \"fields\": [{\"name\": \"first\", \"type\": \"string\"},{\"name\": \"last\", \"type\": \"string\"}]}"
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel

3.3.2. Kafka Action

You can use the avro-serialize-action Kamelet as an intermediate step in a Kafka binding.

avro-serialize-action-binding.yaml

apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: avro-serialize-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: timer-source
    properties:
      message: '{"first":"Ada","last":"Lovelace"}'
  steps:
  - ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: json-deserialize-action
  - ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: avro-serialize-action
    properties:
      schema: "{\"type\": \"record\", \"namespace\": \"com.example\", \"name\": \"FullName\", \"fields\": [{\"name\": \"first\", \"type\": \"string\"},{\"name\": \"last\", \"type\": \"string\"}]}"
  sink:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic

3.4. Kamelets source file

https://github.com/apache/camel-kamelets/blob/4.10.x/kamelets/avro-serialize-action.kamelet.yaml

4. AWS Kinesis Sink

Send data to AWS Kinesis.

The Kamelet expects the following header to be set:

  • partition / ce-partition: to set the Kinesis partition key

If the header is not set, the exchange ID is used.

The Kamelet is also able to recognize the following header:

  • sequence-number / ce-sequencenumber: to set the sequence number

This header is optional.
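
For example, you can set the partition key explicitly with an intermediate step before the sink. The following minimal sketch assumes that the insert-header-action Kamelet (with its name and value properties) is available in your installation; the header value is illustrative:

apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: aws-kinesis-sink-partition-binding
spec:
  source:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
  steps:
  - ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: insert-header-action
    properties:
      # Sets the Kinesis partition key for every exchange
      name: "partition"
      value: "my-partition-key"
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: aws-kinesis-sink
    properties:
      accessKey: "The Access Key"
      region: "eu-west-1"
      secretKey: "The Secret Key"
      stream: "The Stream Name"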

4.1. Configuration Options

The following table summarizes the configuration options available for the aws-kinesis-sink Kamelet:

|===
| Property| Name| Description| Type| Default| Example

| region *| AWS Region| The AWS region to connect to| string| | "eu-west-1"
| stream *| Stream Name| The Kinesis stream that you want to access (needs to be created in advance)| string| |
| accessKey| Access Key| The access key obtained from AWS| string| |
| secretKey| Secret Key| The secret key obtained from AWS| string| |
| useDefaultCredentialsProvider| Default Credentials Provider| If true, the Kinesis client loads credentials through a default credentials provider. If false, it uses the basic authentication method (access key and secret key).| boolean| false|
| useProfileCredentialsProvider| Profile Credentials Provider| Set whether the Kinesis client should expect to load credentials through a profile credentials provider.| boolean| false|
| useSessionCredentials| Session Credentials| Set whether the Kinesis client should expect to use Session Credentials. This is useful when the user needs to assume an IAM role for doing operations in Kinesis.| boolean| false|
| profileCredentialsName| Profile Credentials Name| If using a profile credentials provider, this parameter sets the profile name.| string| |
| sessionToken| Session Token| Amazon AWS Session Token used when the user needs to assume an IAM role.| string (password format)| |
| uriEndpointOverride| Overwrite Endpoint URI| The overriding endpoint URI. To use this option, you must also select the overrideEndpoint option.| string| |
| overrideEndpoint| Endpoint Overwrite| Select this option to override the endpoint URI. To use this option, you must also provide a URI for the uriEndpointOverride option.| boolean| false|
|===

* = Fields marked with an asterisk are mandatory.

4.2. Dependencies

4.3. Usage

4.3.1. Knative Sink

You can use the aws-kinesis-sink Kamelet as a Knative sink by binding it to a Knative object.

aws-kinesis-sink-binding.yaml

apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: aws-kinesis-sink-binding
spec:
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: aws-kinesis-sink
    properties:
      accessKey: "The Access Key"
      region: "eu-west-1"
      secretKey: "The Secret Key"
      stream: "The Stream Name"

4.3.2. Kafka Sink

You can use the aws-kinesis-sink Kamelet as a Kafka sink by binding it to a Kafka topic.

aws-kinesis-sink-binding.yaml

apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: aws-kinesis-sink-binding
spec:
  source:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: aws-kinesis-sink
    properties:
      accessKey: "The Access Key"
      region: "eu-west-1"
      secretKey: "The Secret Key"
      stream: "The Stream Name"


4.4. Kamelets source file

https://github.com/apache/camel-kamelets/blob/4.10.x/kamelets/aws-kinesis-sink.kamelet.yaml

:leveloffset: 1
:leveloffset: +1

[id="aws-kinesis-source"]
= AWS Kinesis Source

Receive data from AWS Kinesis.

[id="aws_kinesis_source_configuration_options"]
== Configuration Options

The following table summarizes the configuration options available for the `aws-kinesis-source` Kamelet:

[width="100%",cols="2,^2,3,^2,^2,^3",options="header"]
|===
| Property| Name| Description| Type| Default| Example

| *region {empty}* *| AWS Region| The AWS region to connect to| `string` | | `"eu-west-1"`
| *stream {empty}* *| Stream Name| The Kinesis stream that you want to access (needs to be created in advance)| `string` | |

| accessKey | Access Key| The access key obtained from AWS| `string` | |
| secretKey | Secret Key| The secret key obtained from AWS| `string` | |
| useDefaultCredentialsProvider |  Default Credentials Provider |  If true, the Kinesis client loads credentials through a default credentials provider. If false, it uses the basic authentication method (access key and secret key). |  `boolean` |  `false` |
| useProfileCredentialsProvider |  Profile Credentials Provider |  Set whether the Kinesis client should expect to load credentials through a profile credentials provider. |  `boolean` |  `false` |
| useSessionCredentials |  Session Credentials |  Set whether the Kinesis client should expect to use Session Credentials. This is useful in situations in which the user needs to assume an IAM role for doing operations in Kinesis. |  `boolean` |  `false` |
| profileCredentialsName |  Profile Credentials Name |  If using a profile credentials provider, this parameter sets the profile name. |  `string` |  |
| sessionToken |  Session Token |  Amazon AWS Session Token used when the user needs to assume an IAM role. |  `string` (_password format_) |  |
| uriEndpointOverride |  Overwrite Endpoint URI |  The overriding endpoint URI. To use this option, you must also select the `overrideEndpoint` option. |  `string` |  |
| overrideEndpoint |  Endpoint Overwrite |  Select this option to override the endpoint URI. To use this option, you must also provide a URI for the `uriEndpointOverride` option. |  `boolean` |  `false` |
| delay |  Delay |  The number of milliseconds before the next poll of the selected stream. |  `integer` |  `500` |
| asyncClient |  Async Client |  Set it to true to use a KinesisAsyncClient instance. |  `boolean` |  `false` |
| useKclConsumers |  KCL Consumer |  Set it to true to use a KCL Consumer. |  `boolean` |  `false` |
|===

*{empty}** = Fields marked with an asterisk are *mandatory*.


[id="aws_kinesis_source_dependencies"]

== Dependencies



[id="aws_kinesis_source_usage"]
== Usage





:leveloffset: +1

[id="aws_kinesis_source_knative_source"]
=== Knative Source

You can use the `aws-kinesis-source` Kamelet as a Knative source by binding it to a Knative object.

.aws-kinesis-source-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: aws-kinesis-source-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: aws-kinesis-source
    properties:
      accessKey: "The Access Key"
      region: "eu-west-1"
      secretKey: "The Secret Key"
      stream: "The Stream Name"
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
----

[id="aws_kinesis_source_kafka_source"]
=== Kafka Source

You can use the `aws-kinesis-source` Kamelet as a Kafka source by binding it to a Kafka topic.

.aws-kinesis-source-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: aws-kinesis-source-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: aws-kinesis-source
    properties:
      accessKey: "The Access Key"
      region: "eu-west-1"
      secretKey: "The Secret Key"
      stream: "The Stream Name"
  sink:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
----

:leveloffset: 3


[id="aws_kinesis_source_kamelets_source_file"]
== Kamelets source file

link:{kamelets-source-url}aws-kinesis-source.kamelet.yaml[]



:leveloffset: 3
:leveloffset: +1

[id="aws-lambda-sink"]
= AWS Lambda Sink

Send a payload to an AWS Lambda function

[id="aws_lambda_sink_configuration_options"]
== Configuration Options

The following table summarizes the configuration options available for the `aws-lambda-sink` Kamelet:

[width="100%",cols="2,^2,3,^2,^2,^3",options="header"]
|===
| Property| Name| Description| Type| Default| Example

| *function {empty}* *| Function Name| The Lambda Function name| `string` | |
| *region {empty}* *| AWS Region| The AWS region to connect to| `string` | | `"eu-west-1"`

| accessKey | Access Key| The access key obtained from AWS| `string` | |
| secretKey | Secret Key| The secret key obtained from AWS| `string` | |
| useDefaultCredentialsProvider |  Default Credentials Provider |  If true, the Lambda client loads credentials through a default credentials provider. If false, it uses the basic authentication method (access key and secret key). |  `boolean` |  `false` |
| useProfileCredentialsProvider |  Profile Credentials Provider |  Set whether the Lambda client should expect to load credentials through a profile credentials provider. |  `boolean` |  `false` |
| useSessionCredentials |  Session Credentials |  Set whether the Lambda client should expect to use Session Credentials. This is useful in situations in which the user needs to assume an IAM role for doing operations in Lambda. |  `boolean` |  `false` |
| profileCredentialsName |  Profile Credentials Name |  If using a profile credentials provider, this parameter sets the profile name. |  `string` |  |
| sessionToken | Session Token |Amazon AWS Session Token used when the user needs to assume an IAM role. | `string` (_password format_) | |

|===

*{empty}** = Fields marked with an asterisk are *mandatory*.


[id="aws_lambda_sink_dependencies"]

== Dependencies



[id="aws_lambda_sink_usage"]
== Usage





:leveloffset: +1

[id="aws_lambda_sink_knative_sink"]
=== Knative Sink

You can use the `aws-lambda-sink` Kamelet as a Knative sink by binding it to a Knative object.

.aws-lambda-sink-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: aws-lambda-sink-binding
spec:
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: aws-lambda-sink
    properties:
      accessKey: "The Access Key"
      function: "The Function Name"
      region: "eu-west-1"
      secretKey: "The Secret Key"
----

[id="aws_lambda_sink_kafka_sink"]
=== Kafka Sink

You can use the `aws-lambda-sink` Kamelet as a Kafka sink by binding it to a Kafka topic.

.aws-lambda-sink-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: aws-lambda-sink-binding
spec:
  source:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: aws-lambda-sink
    properties:
      accessKey: "The Access Key"
      function: "The Function Name"
      region: "eu-west-1"
      secretKey: "The Secret Key"
----

:leveloffset: 3


[id="aws_lambda_sink_kamelets_source_file"]
== Kamelets source file

link:{kamelets-source-url}aws-lambda-sink.kamelet.yaml[]



:leveloffset: 3
:leveloffset: +1

[id="aws-redshift-sink"]
= AWS Redshift Sink

Send data to an AWS Redshift Database.

This Kamelet expects a JSON payload as the body. The mapping between the JSON fields and the query parameters is done by key, so if you have the following query:

`INSERT INTO accounts (username,city) VALUES (:#username,:#city)`

The Kamelet needs to receive as input something like:

`{ "username":"oscerd", "city":"Rome"}`
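
Putting the two together, a binding whose body carries those fields might look like the following minimal sketch (the `timer-source` message and the connection values are illustrative):

[source,yaml]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: aws-redshift-sink-example-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: timer-source
    properties:
      # Each JSON field is mapped to the query parameter with the same name
      message: '{ "username":"oscerd", "city":"Rome"}'
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: aws-redshift-sink
    properties:
      serverName: "localhost"
      databaseName: "The Database Name"
      username: "The Username"
      password: "The Password"
      query: "INSERT INTO accounts (username,city) VALUES (:#username,:#city)"
----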

[id="aws_redshift_sink_configuration_options"]
== Configuration Options

The following table summarizes the configuration options available for the `aws-redshift-sink` Kamelet:

[width="100%",cols="2,^2,3,^2,^2,^3",options="header"]
|===
| Property| Name| Description| Type| Default| Example

| *databaseName {empty}* *| Database Name| The name of the database we are pointing to| `string` | |
| *query {empty}* *| Query| The Query to execute against the AWS Redshift Database| `string` | | `"INSERT INTO accounts (username,city) VALUES (:#username,:#city)"`
| *serverName {empty}* *| Server Name| Server Name for the data source| `string` | | `"localhost"`

| username | Username| The username to use for accessing a secured AWS Redshift Database| `string` | |
| password | Password| The password to use for accessing a secured AWS Redshift Database| `string` | |
| serverPort| Server Port| Server Port for the data source| string| `5439`|
|===

*{empty}** = Fields marked with an asterisk are *mandatory*.


[id="aws_redshift_sink_dependencies"]

== Dependencies



[id="aws_redshift_sink_usage"]
== Usage


:leveloffset: +1

[id="aws_redshift_sink_knative_sink"]
=== Knative Sink

You can use the `aws-redshift-sink` Kamelet as a Knative sink by binding it to a Knative object.

.aws-redshift-sink-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: aws-redshift-sink-binding
spec:
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: aws-redshift-sink
    properties:
      databaseName: "The Database Name"
      password: "The Password"
      query: "INSERT INTO accounts (username,city) VALUES (:#username,:#city)"
      serverName: "localhost"
      username: "The Username"
----

[id="aws_redshift_sink_kafka_sink"]
=== Kafka Sink

You can use the `aws-redshift-sink` Kamelet as a Kafka sink by binding it to a Kafka topic.

.aws-redshift-sink-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: aws-redshift-sink-binding
spec:
  source:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: aws-redshift-sink
    properties:
      databaseName: "The Database Name"
      password: "The Password"
      query: "INSERT INTO accounts (username,city) VALUES (:#username,:#city)"
      serverName: "localhost"
      username: "The Username"
----

:leveloffset: 3


[id="aws_redshift_sink_kamelets_source_file"]
== Kamelets source file

link:{kamelets-source-url}aws-redshift-sink.kamelet.yaml[]



:leveloffset: 3
:leveloffset: +1

[id="aws-sns-sink"]
= AWS SNS Sink

Send message to an AWS SNS Topic

[id="aws_sns_sink_configuration_options"]
== Configuration Options

The following table summarizes the configuration options available for the `aws-sns-sink` Kamelet:

[width="100%",cols="2,^2,3,^2,^2,^3",options="header"]
|===
| Property| Name| Description| Type| Default| Example

| *region {empty}* *| AWS Region| The AWS region to connect to| `string` | | `"eu-west-1"`
| *topicNameOrArn {empty}* *| Topic Name| The SNS topic name or ARN| `string` | |

| accessKey | Access Key| The access key obtained from AWS| `string` | |
| secretKey | Secret Key| The secret key obtained from AWS| `string` | |
| autoCreateTopic| Autocreate Topic| Setting the autocreation of the SNS topic.| `boolean` | `false`|
| useDefaultCredentialsProvider |  Default Credentials Provider |  If true, the SNS client loads credentials through a default credentials provider. If false, it uses the basic authentication method (access key and secret key). |  `boolean` |  `false` |
| useProfileCredentialsProvider |  Profile Credentials Provider |  Set whether the SNS client should expect to load credentials through a profile credentials provider. |  `boolean` |  `false` |
| useSessionCredentials |  Session Credentials |  Set whether the SNS client should expect to use Session Credentials. This is useful in situations in which the user needs to assume an IAM role for doing operations in SNS. |  `boolean` |  `false` |
| profileCredentialsName |  Profile Credentials Name |  If using a profile credentials provider, this parameter sets the profile name. |  `string` |  |
| sessionToken |  Session Token |  Amazon AWS Session Token used when the user needs to assume an IAM role. |  `string` (_password format_) |  |
| uriEndpointOverride |  Overwrite Endpoint URI |  The overriding endpoint URI. To use this option, you must also select the `overrideEndpoint` option. |  `string` |  |
| overrideEndpoint |  Endpoint Overwrite |  Select this option to override the endpoint URI. To use this option, you must also provide a URI for the `uriEndpointOverride` option. |  `boolean` |  `false` |
|===

*{empty}** = Fields marked with an asterisk are *mandatory*.


[id="aws_sns_sink_dependencies"]

== Dependencies



[id="aws_sns_sink_usage"]
== Usage





:leveloffset: +1

[id="aws_sns_sink_knative_sink"]
=== Knative Sink

You can use the `aws-sns-sink` Kamelet as a Knative sink by binding it to a Knative object.

.aws-sns-sink-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: aws-sns-sink-binding
spec:
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: aws-sns-sink
    properties:
      accessKey: "The Access Key"
      region: "eu-west-1"
      secretKey: "The Secret Key"
      topicNameOrArn: "The Topic Name"
----

[id="aws_sns_sink_kafka_sink"]
=== Kafka Sink

You can use the `aws-sns-sink` Kamelet as a Kafka sink by binding it to a Kafka topic.

.aws-sns-sink-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: aws-sns-sink-binding
spec:
  source:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: aws-sns-sink
    properties:
      accessKey: "The Access Key"
      region: "eu-west-1"
      secretKey: "The Secret Key"
      topicNameOrArn: "The Topic Name"
----

:leveloffset: 3


[id="aws_sns_sink_kamelets_source_file"]
== Kamelets source file

link:{kamelets-source-url}aws-sns-sink.kamelet.yaml[]



:leveloffset: 3
:leveloffset: +1

[id="aws-sqs-sink"]
= AWS SQS Sink

Send message to an AWS SQS Queue

[id="aws_sqs_sink_configuration_options"]
== Configuration Options

The following table summarizes the configuration options available for the `aws-sqs-sink` Kamelet:

[width="100%",cols="2,^2,3,^2,^2,^3",options="header"]
|===
| Property| Name| Description| Type| Default| Example

| *queueNameOrArn {empty}* *| Queue Name| The SQS Queue name or ARN| `string` | |
| *region {empty}* *| AWS Region| The AWS region to connect to| `string` | | `"eu-west-1"`

| accessKey | Access Key| The access key obtained from AWS| `string` | |
| secretKey | Secret Key| The secret key obtained from AWS| `string` | |
| autoCreateQueue| Autocreate Queue| Setting the autocreation of the SQS queue.| `boolean` | `false`|
| amazonAWSHost |  AWS Host |  The hostname of the Amazon AWS cloud. |  `string` |  `amazonaws.com` |
| protocol |  Protocol |  The underlying protocol used to communicate with SQS. |  `string` |  `https` |  `http` or `https`
| useDefaultCredentialsProvider |  Default Credentials Provider |  If true, the SQS client loads credentials through a default credentials provider. If false, it uses the basic authentication method (access key and secret key). |  `boolean` |  `false` |
| useProfileCredentialsProvider |  Profile Credentials Provider |  Set whether the SQS client should expect to load credentials through a profile credentials provider. |  `boolean` |  `false` |
| useSessionCredentials |  Session Credentials |  Set whether the SQS client should expect to use Session Credentials. This is useful in situations in which the user needs to assume an IAM role for doing operations in SQS. |  `boolean` |  `false` |
| profileCredentialsName |  Profile Credentials Name |  If using a profile credentials provider, this parameter sets the profile name. |  `string` |  |
| sessionToken |  Session Token |  Amazon AWS Session Token used when the user needs to assume an IAM role. |  `string` (_password format_) |  |
| uriEndpointOverride |  Overwrite Endpoint URI |  The overriding endpoint URI. To use this option, you must also select the `overrideEndpoint` option. |  `string` |  |
| overrideEndpoint |  Endpoint Overwrite |  Select this option to override the endpoint URI. To use this option, you must also provide a URI for the `uriEndpointOverride` option. |  `boolean` |  `false` |
|===

*{empty}** = Fields marked with an asterisk are *mandatory*.


[id="aws_sqs_sink_dependencies"]

== Dependencies



[id="aws_sqs_sink_usage"]
== Usage





:leveloffset: +1

[id="aws_sqs_sink_knative_sink"]
=== Knative Sink

You can use the `aws-sqs-sink` Kamelet as a Knative sink by binding it to a Knative object.

.aws-sqs-sink-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: aws-sqs-sink-binding
spec:
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: aws-sqs-sink
    properties:
      accessKey: "The Access Key"
      queueNameOrArn: "The Queue Name"
      region: "eu-west-1"
      secretKey: "The Secret Key"
----

[id="aws_sqs_sink_kafka_sink"]
=== Kafka Sink

You can use the `aws-sqs-sink` Kamelet as a Kafka sink by binding it to a Kafka topic.

.aws-sqs-sink-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: aws-sqs-sink-binding
spec:
  source:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: aws-sqs-sink
    properties:
      accessKey: "The Access Key"
      queueNameOrArn: "The Queue Name"
      region: "eu-west-1"
      secretKey: "The Secret Key"
----

:leveloffset: 3


[id="aws_sqs_sink_kamelets_source_file"]
== Kamelets source file

link:{kamelets-source-url}aws-sqs-sink.kamelet.yaml[]



:leveloffset: 3
:leveloffset: +1

[id="aws-sqs-source"]
= AWS SQS Source

Receive data from AWS SQS.

[id="aws_sqs_source_configuration_options"]
== Configuration Options

The following table summarizes the configuration options available for the `aws-sqs-source` Kamelet:

[width="100%",cols="2,^2,3,^2,^2,^3",options="header"]
|===
| Property| Name| Description| Type| Default| Example

| *queueNameOrArn {empty}* *| Queue Name| The SQS Queue name or ARN| `string` | |
| *region {empty}* *| AWS Region| The AWS region to connect to| `string` | | `"eu-west-1"`

| accessKey | Access Key| The access key obtained from AWS| `string` | |
| secretKey | Secret Key| The secret key obtained from AWS| `string` | |
| autoCreateQueue| Autocreate Queue| Setting the autocreation of the SQS queue.| `boolean` | `false`|
| deleteAfterRead| Auto-delete Messages| Delete messages after consuming them| `boolean` | `true`|
| amazonAWSHost |  AWS Host |  The hostname of the Amazon AWS cloud. |  `string` |  `amazonaws.com` |
| protocol |  Protocol |  The underlying protocol used to communicate with SQS |  `string` |  `https` |  `http` or `https`
| queueURL |  Queue URL |  The full SQS Queue URL (required if using KEDA) |  `string` |  |
| useDefaultCredentialsProvider |  Default Credentials Provider |  If true, the SQS client loads credentials through a default credentials provider. If false, it uses the basic authentication method (access key and secret key). |  `boolean` |  `false` |
| useProfileCredentialsProvider |  Profile Credentials Provider |  Set whether the SQS client should expect to load credentials through a profile credentials provider. |  `boolean` |  `false` |
| useSessionCredentials |  Session Credentials |  Set whether the SQS client should expect to use Session Credentials. This is useful in situations in which the user needs to assume an IAM role for doing operations in SQS. |  `boolean` |  `false` |
| profileCredentialsName |  Profile Credentials Name |  If using a profile credentials provider, this parameter sets the profile name. |  `string` |  |
| sessionToken |  Session Token |  Amazon AWS Session Token used when the user needs to assume an IAM role. |  `string` (_password format_) |  |
| uriEndpointOverride |  Overwrite Endpoint URI |  The overriding endpoint URI. To use this option, you must also select the `overrideEndpoint` option. |  `string` |  |
| overrideEndpoint |  Endpoint Overwrite |  Select this option to override the endpoint URI. To use this option, you must also provide a URI for the `uriEndpointOverride` option. |  `boolean` |  `false` |
| delay |  Delay |  The number of milliseconds before the next poll of the selected stream |  `integer` |  `500` |
| greedy |  Greedy Scheduler |  If greedy is enabled, then the polling happens immediately again, if the previous run polled 1 or more messages. |  `boolean` |  `false` |
| maxMessagesPerPoll |  Max Messages Per Poll |  The maximum number of messages to return. Amazon SQS never returns more messages than this value (however, fewer messages might be returned). Valid values are 1 to 10. |  `integer` |  `1` |
| waitTimeSeconds |  Wait Time Seconds |  The duration (in seconds) for which the call waits for a message to arrive in the queue before returning. If a message is available, the call returns sooner than WaitTimeSeconds. If no messages are available and the wait time expires, the call does not return a message list. The minimum value is 0. |  `integer` |  |
| visibilityTimeout |  Visibility Timeout |  The duration (in seconds) that the received messages are hidden from subsequent retrieve requests after being retrieved by a ReceiveMessage request. The minimum value is 0. |  `integer` |  |
|===

*{empty}** = Fields marked with an asterisk are *mandatory*.


[id="aws_sqs_source_dependencies"]

== Dependencies



[id="aws_sqs_source_usage"]
== Usage





:leveloffset: +1

[id="aws_sqs_source_knative_source"]
=== Knative Source

You can use the `aws-sqs-source` Kamelet as a Knative source by binding it to a Knative object.

.aws-sqs-source-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: aws-sqs-source-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: aws-sqs-source
    properties:
      accessKey: "The Access Key"
      queueNameOrArn: "The Queue Name"
      region: "eu-west-1"
      secretKey: "The Secret Key"
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
----

[id="aws_sqs_source_kafka_source"]
=== Kafka Source

You can use the `aws-sqs-source` Kamelet as a Kafka source by binding it to a Kafka topic.

.aws-sqs-source-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: aws-sqs-source-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: aws-sqs-source
    properties:
      accessKey: "The Access Key"
      queueNameOrArn: "The Queue Name"
      region: "eu-west-1"
      secretKey: "The Secret Key"
  sink:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
----

:leveloffset: 3


[id="aws_sqs_source_kamelets_source_file"]
== Kamelets source file

link:{kamelets-source-url}aws-sqs-source.kamelet.yaml[]



:leveloffset: 3
:leveloffset: +1

[id="aws-sqs-fifo-sink"]
= AWS SQS FIFO Sink

Send message to an AWS SQS FIFO Queue

[id="aws_sqs_fifo_sink_configuration_options"]
== Configuration Options

The following table summarizes the configuration options available for the `aws-sqs-fifo-sink` Kamelet:

[width="100%",cols="2,^2,3,^2,^2,^3",options="header"]
|===
| Property| Name| Description| Type| Default| Example

| *queueNameOrArn {empty}* *| Queue Name| The SQS Queue name or ARN| `string` | |
| *region {empty}* *| AWS Region| The AWS region to connect to| `string` | | `"eu-west-1"`

| accessKey | Access Key| The access key obtained from AWS| `string` | |
| secretKey | Secret Key| The secret key obtained from AWS| `string` | |
| autoCreateQueue| Autocreate Queue| Setting the autocreation of the SQS queue.| `boolean` | `false`|
| contentBasedDeduplication| Content-Based Deduplication| Use content-based deduplication (should be enabled in the SQS FIFO queue first)| `boolean` | `false`|
| amazonAWSHost |  AWS Host |  The hostname of the Amazon AWS cloud.  |  string | `amazonaws.com` |
| protocol |  Protocol |  The underlying protocol used to communicate with SQS | string | `https` | `http` or `https`
| useDefaultCredentialsProvider |  Default Credentials Provider |  Set whether the SQS client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in. |  boolean | `false` |
| useProfileCredentialsProvider |  Profile Credentials Provider |  Set whether the SQS client should expect to load credentials through a profile credentials provider. |  boolean | `false` |
| useSessionCredentials |  Session Credentials |  Set whether the SQS client should expect to use Session Credentials. This is useful in situations in which the user needs to assume an IAM role for doing operations in SQS. |  boolean | `false` |
| profileCredentialsName |  Profile Credentials Name |  If using a profile credentials provider, this parameter sets the profile name. |  string |  |
| sessionToken | Session Token | Amazon AWS Session Token used when the user needs to assume an IAM role. | string (_password format_) | |
| uriEndpointOverride |  Overwrite Endpoint URI | The overriding endpoint URI. To use this option, you must also select the `overrideEndpoint` option. | string  | |
| overrideEndpoint |  Endpoint Overwrite |  Select this option to override the endpoint URI. To use this option, you must also provide a URI for the `uriEndpointOverride` option. |  boolean | `false` |


|===

*{empty}** = Fields marked with an asterisk are *mandatory*.


[id="aws_sqs_fifo_sink_dependencies"]

== Dependencies



[id="aws_sqs_fifo_sink_usage"]
== Usage





:leveloffset: +1

[id="aws_sqs_fifo_sink_knative_sink"]
=== Knative Sink

You can use the `aws-sqs-fifo-sink` Kamelet as a Knative sink by binding it to a Knative object.

.aws-sqs-fifo-sink-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: aws-sqs-fifo-sink-binding
spec:
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: aws-sqs-fifo-sink
    properties:
      accessKey: "The Access Key"
      queueNameOrArn: "The Queue Name"
      region: "eu-west-1"
      secretKey: "The Secret Key"
----

[id="aws_sqs_fifo_sink_kafka_sink"]
=== Kafka Sink

You can use the `aws-sqs-fifo-sink` Kamelet as a Kafka sink by binding it to a Kafka topic.

.aws-sqs-fifo-sink-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: aws-sqs-fifo-sink-binding
spec:
  source:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: aws-sqs-fifo-sink
    properties:
      accessKey: "The Access Key"
      queueNameOrArn: "The Queue Name"
      region: "eu-west-1"
      secretKey: "The Secret Key"
----

:leveloffset: 3



[id="aws_sqs_fifo_sink_kamelets_source_file"]
== Kamelets source file

link:{kamelets-source-url}aws-sqs-fifo-sink.kamelet.yaml[]



:leveloffset: 3
:leveloffset: +1

[id="aws-s3-sink"]
= AWS S3 Sink

Upload data to AWS S3.

The Kamelet expects the following headers to be set:

- `file` / `ce-file`: as the file name to upload

If the header is not set, the exchange ID is used as the file name.
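
For example, you can set the `file` header with an intermediate step before the sink. The following minimal sketch assumes that the `insert-header-action` Kamelet (with its `name` and `value` properties) is available in your installation; the file name is illustrative:

[source,yaml]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: aws-s3-sink-filename-binding
spec:
  source:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
  steps:
  - ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: insert-header-action
    properties:
      # Sets the name of the object uploaded to the bucket
      name: "file"
      value: "my-object.txt"
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: aws-s3-sink
    properties:
      accessKey: "The Access Key"
      bucketNameOrArn: "The Bucket Name"
      region: "eu-west-1"
      secretKey: "The Secret Key"
----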

[id="aws_s3_sink_configuration_options"]
== Configuration Options

The following table summarizes the configuration options available for the `aws-s3-sink` Kamelet:

[width="100%",cols="2,^2,3,^2,^2,^3",options="header"]
|===
| Property| Name| Description| Type| Default| Example

| *bucketNameOrArn {empty}* *| Bucket Name| The S3 Bucket name or ARN.| `string` | |
| *region {empty}* *| AWS Region| The AWS region to connect to.| `string` | | `"eu-west-1"`

| accessKey| Access Key| The access key obtained from AWS.| `string` | |
| secretKey | Secret Key| The secret key obtained from AWS.| `string` | |
| autoCreateBucket| Autocreate Bucket| Setting the autocreation of the S3 bucket bucketName.| `boolean` | `false`|
| useDefaultCredentialsProvider |  Default Credentials Provider |  If true, the S3 client loads credentials through a default credentials provider. If false, it uses the basic authentication method (access key and secret key). |  boolean | `false` |
| useProfileCredentialsProvider |  Profile Credentials Provider |  Set whether the S3 client should expect to load credentials through a profile credentials provider. |  boolean | `false` |
| useSessionCredentials |  Session Credentials |  Set whether the S3 client should expect to use Session Credentials. This is useful in situations in which the user needs to assume an IAM role for doing operations in S3. |  boolean | `false` |
| profileCredentialsName |  Profile Credentials Name |  If using a profile credentials provider, this parameter sets the profile name. |  string | |

| sessionToken | Session Token | Amazon AWS Session Token used when the user needs to assume an IAM role. | string (_password format_) | |
| uriEndpointOverride |  Overwrite Endpoint URI | The overriding endpoint URI. To use this option, you must also select the `overrideEndpoint` option. | string |  |
| overrideEndpoint |  Endpoint Overwrite |  Select this option to override the endpoint URI. To use this option, you must also provide a URI for the `uriEndpointOverride` option. |  boolean | `false` |
| forcePathStyle |  Force Path Style |  Forces path style when accessing AWS S3 buckets. |  boolean | `false` |
| keyName | Key Name | The key name for saving an element in the bucket. | string | |

|===

*{empty}** = Fields marked with an asterisk are *mandatory*.


[id="aws_s3_sink_dependencies"]

== Dependencies



[id="aws_s3_sink_usage"]
== Usage





:leveloffset: +1

[id="aws_s3_sink_knative_sink"]
=== Knative Sink

You can use the `aws-s3-sink` Kamelet as a Knative sink by binding it to a Knative object.

.aws-s3-sink-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: aws-s3-sink-binding
spec:
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: aws-s3-sink
    properties:
      accessKey: "The Access Key"
      bucketNameOrArn: "The Bucket Name"
      region: "eu-west-1"
      secretKey: "The Secret Key"
----

[id="aws_s3_sink_kafka_sink"]
=== Kafka Sink

You can use the `aws-s3-sink` Kamelet as a Kafka sink by binding it to a Kafka topic.

.aws-s3-sink-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: aws-s3-sink-binding
spec:
  source:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: aws-s3-sink
    properties:
      accessKey: "The Access Key"
      bucketNameOrArn: "The Bucket Name"
      region: "eu-west-1"
      secretKey: "The Secret Key"
----

:leveloffset: 3



[id="aws_s3_sink_kamelets_source_file"]
== Kamelets source file

link:{kamelets-source-url}aws-s3-sink.kamelet.yaml[]



:leveloffset: 3
:leveloffset: +1

[id="aws-s3-source"]
= AWS S3 Source

Receive data from AWS S3.

[id="aws_s3_source_configuration_options"]
== Configuration Options

The following table summarizes the configuration options available for the `aws-s3-source` Kamelet:

[width="100%",cols="2,^2,3,^2,^2,^3",options="header"]
|===
| Property| Name| Description| Type| Default| Example

| *bucketNameOrArn {empty}* *| Bucket Name| The S3 Bucket name or ARN| `string` | |
| *region {empty}* *| AWS Region| The AWS region to connect to| `string` | | `"eu-west-1"`

| accessKey | Access Key| The access key obtained from AWS| `string` | |
| secretKey | Secret Key| The secret key obtained from AWS| `string` | |
| autoCreateBucket| Autocreate Bucket| Setting the autocreation of the S3 bucket bucketName.| `boolean` | `false` |
| deleteAfterRead| Auto-delete Objects| Delete objects after consuming them| `boolean` | `true` |

| moveAfterRead |  Move Objects After Delete |  Move objects from S3 bucket to a different bucket after they have been retrieved. |  boolean | `false` |
| destinationBucket | Destination Bucket |  Define the destination bucket where an object must be moved when moveAfterRead is set to true. |  string | |
| destinationBucketPrefix | Destination Bucket Prefix |Define the destination bucket prefix to use when an object must be moved, and moveAfterRead is set to true. | string | |
| destinationBucketSuffix |  Destination Bucket Suffix |  Define the destination bucket suffix to use when an object must be moved, and moveAfterRead is set to true. |  string | |
| prefix |  Prefix |  The AWS S3 bucket prefix to consider while searching. |  string | `{folder/}` |
| ignoreBody |  Ignore Body |  If true, the S3 Object body is ignored. Setting this to true overrides any behavior defined by the `includeBody` option. If false, the S3 object is put in the body. |  boolean | `false` |
| useDefaultCredentialsProvider |  Default Credentials Provider |  If true, the S3 client loads credentials through a default credentials provider. If false, it uses the basic authentication method (access key and secret key). |  boolean | `false` |
| useProfileCredentialsProvider |  Profile Credentials Provider |  Set whether the S3 client should expect to load credentials through a profile credentials provider. |  boolean | `false` |
| useSessionCredentials |  Session Credentials |  Set whether the S3 client should expect to use Session Credentials. This is useful in situations in which the user needs to assume an IAM role for doing operations in S3. |  boolean | `false` |
| profileCredentialsName |  Profile Credentials Name |  If using a profile credentials provider, this parameter sets the profile name. |  string |  |
| sessionToken | Session Token  | Amazon AWS Session Token used when the user needs to assume an IAM role. | string (_password format_) | |
| uriEndpointOverride |  Overwrite Endpoint URI | The overriding endpoint URI. To use this option, you must also select the `overrideEndpoint` option. | string | |
| overrideEndpoint |  Endpoint Overwrite |  Select this option to override the endpoint URI. To use this option, you must also provide a URI for the `uriEndpointOverride` option. |  boolean | `false` |
| forcePathStyle |  Force Path Style |  Forces path style when accessing AWS S3 buckets. |  boolean | `false` |
| delay |  Delay |  The number of milliseconds before the next poll of the selected bucket. |  integer | `500` |
| maxMessagesPerPoll |  Max Messages Per Poll |  The maximum number of messages to poll at each polling. The default value is 10. Use 0 or a negative number to set it as unlimited. |  integer | `10` |

|===

*{empty}** = Fields marked with an asterisk are *mandatory*.


[id="aws_s3_source_dependencies"]

== Dependencies



[id="aws_s3_source_usage"]
== Usage





:leveloffset: +1

[id="aws_s3_source_knative_source"]
=== Knative Source

You can use the `aws-s3-source` Kamelet as a Knative source by binding it to a Knative object.

.aws-s3-source-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: aws-s3-source-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: aws-s3-source
    properties:
      accessKey: "The Access Key"
      bucketNameOrArn: "The Bucket Name"
      region: "eu-west-1"
      secretKey: "The Secret Key"
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
----

[id="aws_s3_source_kafka_source"]
=== Kafka Source

You can use the `aws-s3-source` Kamelet as a Kafka source by binding it to a Kafka topic.

.aws-s3-source-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: aws-s3-source-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: aws-s3-source
    properties:
      accessKey: "The Access Key"
      bucketNameOrArn: "The Bucket Name"
      region: "eu-west-1"
      secretKey: "The Secret Key"
  sink:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
----

:leveloffset: 3


[id="aws_s3_source_kamelets_source_file"]
== Kamelets source file

link:{kamelets-source-url}aws-s3-source.kamelet.yaml[]



:leveloffset: 3
:leveloffset: +1

[id="aws-s3-streaming-upload-sink"]

= AWS S3 Streaming upload Sink

Upload data to AWS S3 in streaming upload mode.

[id="aws_s3_streaming_upload_sink_configuration_options"]
== Configuration Options

The following table summarizes the configuration options available for the `aws-s3-streaming-upload-sink` Kamelet:

[width="100%",cols="2,^2,3,^2,^2,^3",options="header"]
|===
| Property| Name| Description| Type| Default| Example

| accessKey | Access Key| The access key obtained from AWS.| `string` | |
| *bucketNameOrArn {empty}* *| Bucket Name| The S3 Bucket name or ARN.| `string` | |
| *keyName {empty}* *| Key Name| Setting the key name for an element in the bucket through endpoint parameter. In Streaming Upload, with the default configuration, this is the base for the progressive creation of files.| `string` | |
| *region {empty}* *| AWS Region| The AWS region to connect to.| `string` | | `"eu-west-1"`
| secretKey | Secret Key| The secret key obtained from AWS.| `string` | |
| autoCreateBucket| Autocreate Bucket| Specifies to automatically create the S3 bucket.| `boolean` | `false`|
| restartingPolicy |  Restarting Policy |  The restarting policy to use in streaming upload mode. Possible values are `override` and `lastPart`. |  string | `"lastPart"` |
| batchMessageNumber |  Batch Message Number |  The number of messages composing a batch in streaming upload mode. |  integer | `10` |
| batchSize |  Batch Size |  The batch size (in bytes) in streaming upload mode. |  integer | `1000000` |
| streamingUploadTimeout |  Streaming Upload Timeout |  The timeout to complete an upload when streaming upload mode is enabled. |  integer | |
| namingStrategy | Naming Strategy  | The naming strategy to use in streaming upload mode. Possible values are `progressive` and `random`. | string  | `"progressive"`  |

| useDefaultCredentialsProvider | Default Credentials Provider | Set whether the S3 client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in. | boolean | `false` |
| useProfileCredentialsProvider |  Profile Credentials Provider |  Set whether the S3 client should expect to load credentials through a profile credentials provider. |  boolean | `false` |
| useSessionCredentials |  Session Credentials |  Set whether the S3 client should expect to use session credentials. This is useful in situations in which the user needs to assume an IAM role for performing operations in S3. |  boolean | `false` |
| profileCredentialsName |  Profile Credentials Name |  If using a profile credentials provider, this parameter sets the profile name. |  string | |
| sessionToken | Session Token  | The Amazon AWS Session Token used when the user needs to assume an IAM role. | `string` (password format) | |
| uriEndpointOverride | Overwrite Endpoint URI | The overriding endpoint URI. To use this option, you must also select the `overrideEndpoint` option. | string | |
| overrideEndpoint |  Endpoint Overwrite |  Select this option to override the endpoint URI. To use this option, you must also provide a URI for the `uriEndpointOverride` option. |  boolean | `false` |
| forcePathStyle |  Force Path Style |  Forces path style when accessing AWS S3 buckets. |  boolean | `false` |

|===

*{empty}** = Fields marked with an asterisk are *mandatory*.
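
For example, the following sketch shows how the streaming upload batching can be tuned in the Pipe `properties` block. The values are illustrative only, not recommendations:

[source,yaml,subs="attributes+"]
----
    properties:
      bucketNameOrArn: "The Bucket Name"
      keyName: "The Key Name"
      region: "eu-west-1"
      # Illustrative values: batches of roughly 50 messages or 5 MB,
      # progressively named parts, restarting from the last part.
      batchMessageNumber: 50
      batchSize: 5242880
      namingStrategy: "progressive"
      restartingPolicy: "lastPart"
----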


[id="aws_s3_streaming_upload_sink_dependencies"]

== Dependencies



[id="aws_s3_streaming_upload_sink_usage"]
== Usage




:leveloffset: +1

[id="aws_s3_streaming_upload_sink_knative_sink"]
=== Knative Sink

You can use the `aws-s3-streaming-upload-sink` Kamelet as a Knative sink by binding it to a Knative object.

.aws-s3-streaming-upload-sink-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: aws-s3-streaming-upload-sink-binding
spec:
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: aws-s3-streaming-upload-sink
    properties:
      accessKey: "The Access Key"
      bucketNameOrArn: "The Bucket Name"
      keyName: "The Key Name"
      region: "eu-west-1"
      secretKey: "The Secret Key"
----

[id="aws_s3_streaming_upload_sink_kafka_sink"]
=== Kafka Sink

You can use the `aws-s3-streaming-upload-sink` Kamelet as a Kafka sink by binding it to a Kafka topic.

.aws-s3-streaming-upload-sink-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: aws-s3-streaming-upload-sink-binding
spec:
  source:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: aws-s3-streaming-upload-sink
    properties:
      accessKey: "The Access Key"
      bucketNameOrArn: "The Bucket Name"
      keyName: "The Key Name"
      region: "eu-west-1"
      secretKey: "The Secret Key"
----

:leveloffset: 3


[id="aws_s3_streaming_upload_sink_kamelets_source_file"]
== Kamelets source file

link:{kamelets-source-url}aws-s3-streaming-upload-sink.kamelet.yaml[]



:leveloffset: 3
////
:leveloffset: +1

[id="azure-servicebus-sink"]
= Azure Servicebus Sink

*Provided by: "Red Hat"*

Send Messages to Azure Servicebus.

[id="azure_servicebus_sink_configuration_options"]
== Configuration Options

The following table summarizes the configuration options available for the `azure-servicebus-sink` Kamelet:

[width="100%",cols="2,^2,3,^2,^2,^3",options="header"]
|===
| Property| Name| Description| Type| Default| Example

| *connectionString {empty}* *| Connection String| Connection String for Azure Servicebus instance| `string` | |
| *topicOrQueueName {empty}* *| Topic Or Queue Name| Topic Or Queue Name for the Azure Servicebus instance| `string` | |
|===

*{empty}** = Fields marked with an asterisk are *mandatory*.


[id="azure_servicebus_sink_dependencies"]

== Dependencies



[id="azure_servicebus_sink_usage"]
== Usage




:leveloffset: +1

[id="azure_servicebus_sink_knative_sink"]
=== Knative Sink

You can use the `azure-servicebus-sink` Kamelet as a Knative sink by binding it to a Knative object.

.azure-servicebus-sink-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: azure-servicebus-sink-binding
spec:
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: azure-servicebus-sink
    properties:
      connectionString: "The Connection String"
      topicOrQueueName: "The Topic Or Queue Name"
----

[id="azure_servicebus_sink_prerequisite"]
==== Prerequisites
Ensure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you are connected to.

[id="azure_servicebus_sink_procedure_for_using_the_cluster_cli_kafka"]
==== Procedure for using the cluster CLI with Knative

. Save the `azure-servicebus-sink-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the sink by using the following command:
+
[source,bash,subs="attributes+"]
----
oc apply -f azure-servicebus-sink-binding.yaml
----

[id="azure_servicebus_sink_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Knative

Configure and run the sink by using the following command:

[source,bash,subs="attributes+"]
----
kamel bind channel:mychannel azure-servicebus-sink -p "sink.connectionString=The Connection String" -p "sink.topicOrQueueName=The Topic Or Queue Name"
----

This command creates the KameletBinding in the current namespace on the cluster.

[id="azure_servicebus_sink_kafka_sink"]
=== Kafka Sink

You can use the `azure-servicebus-sink` Kamelet as a Kafka sink by binding it to a Kafka topic.

.azure-servicebus-sink-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: azure-servicebus-sink-binding
spec:
  source:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: azure-servicebus-sink
    properties:
      connectionString: "The Connection String"
      topicOrQueueName: "The Topic Or Queue Name"
----

[id="azure_servicebus_sink_prerequisites"]
==== Prerequisites

Ensure that you have installed the *AMQ Streams* operator in your OpenShift cluster and created a topic named `my-topic` in the current namespace.
Also ensure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you are connected to.

[id="azure_servicebus_sink_procedure_for_using_the_cluster_cli"]
==== Procedure for using the cluster CLI

. Save the `azure-servicebus-sink-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the sink by using the following command:
+
[source,bash,subs="attributes+"]
----
oc apply -f azure-servicebus-sink-binding.yaml
----

[id="azure_servicebus_sink_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Kafka

Configure and run the sink by using the following command:

[source,bash,subs="attributes+"]
----
kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic azure-servicebus-sink -p "sink.connectionString=The Connection String" -p "sink.topicOrQueueName=The Topic Or Queue Name"
----

This command creates the KameletBinding in the current namespace on the cluster.

:leveloffset: 3


[id="azure_servicebus_sink_kamelets_source_file"]
== Kamelets source file

link:{kamelets-source-url}azure-servicebus-sink.kamelet.yaml[]



:leveloffset: 3
:leveloffset: +1


[id="azure-servicebus-source"]
= Azure Servicebus Source

*Provided by: "Red Hat"*

Consume Messages from Azure Servicebus.

The subscription name parameter must be set when consuming from a topic.

[id="azure_servicebus_source_configuration_options"]
== Configuration Options

The following table summarizes the configuration options available for the `azure-servicebus-source` Kamelet.

[width="100%",cols="2,^2,3,^2,^2,^3",options="header"]
|===
| Property| Name| Description| Type| Default| Example

| *connectionString {empty}* *| Connection String| Connection String for Azure Servicebus instance| `string` | |
| *topicOrQueueName {empty}* *| Topic Or Queue Name| Topic Or Queue Name for the Azure Servicebus instance| `string` | |
| serviceBusReceiveMode| Servicebus Receive Mode| Sets the receive mode for the receiver| string| `"PEEK_LOCK"`|
| subscriptionName| Subscription Name| Sets the name of the subscription in the topic to listen to. This parameter is mandatory in case of topic.| `string` | |
|===

*{empty}** = Fields marked with an asterisk are *mandatory*.


[id="azure_servicebus_source_dependencies"]
== Dependencies

At runtime, the `azure-servicebus-source` Kamelet relies upon the presence of the following dependencies.

- `camel:azure-servicebus`
- `camel:kamelet`
- `camel:core`

[id="azure_servicebus_source_usage"]
== Usage




:leveloffset: +1

[id="azure_servicebus_source_knative_source"]
=== Knative Source

You can use the `azure-servicebus-source` Kamelet as a Knative source by binding it to a Knative object.

.azure-servicebus-source-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: azure-servicebus-source-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: azure-servicebus-source
    properties:
      connectionString: "The Connection String"
      topicOrQueueName: "The Topic Or Queue Name"
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
----

[id="azure_servicebus_source_prerequisite"]
==== Prerequisites
Ensure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you are connected to.

[id="azure_servicebus_source_procedure_for_using_the_cluster_cli_kafka"]
==== Procedure for using the cluster CLI with Knative

. Save the `azure-servicebus-source-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the source by using the following command:
+
[source,bash,subs="attributes+"]
----
oc apply -f azure-servicebus-source-binding.yaml
----

[id="azure_servicebus_source_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Knative

Configure and run the source by using the following command:

[source,bash,subs="attributes+"]
----
kamel bind azure-servicebus-source -p "source.connectionString=The Connection String" -p "source.topicOrQueueName=The Topic Or Queue Name" channel:mychannel
----

This command creates the KameletBinding in the current namespace on the cluster.

[id="azure_servicebus_source_kafka_source"]
=== Kafka Source

You can use the `azure-servicebus-source` Kamelet as a Kafka source by binding it to a Kafka topic.

.azure-servicebus-source-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: azure-servicebus-source-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: azure-servicebus-source
    properties:
      connectionString: "The Connection String"
      topicOrQueueName: "The Topic Or Queue Name"
  sink:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
----

[id="azure_servicebus_source_prerequisites"]
==== Prerequisites

Ensure that you have installed the *AMQ Streams* operator in your OpenShift cluster and created a topic named `my-topic` in the current namespace.
Also ensure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you are connected to.

[id="azure_servicebus_source_procedure_for_using_the_cluster_cli"]
==== Procedure for using the cluster CLI

. Save the `azure-servicebus-source-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the source by using the following command.
+
[source,bash,subs="attributes+"]
----
oc apply -f azure-servicebus-source-binding.yaml
----

[id="azure_servicebus_source_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Kafka

Configure and run the source by using the following command.

[source,bash,subs="attributes+"]
----
kamel bind azure-servicebus-source -p "source.connectionString=The Connection String" -p "source.topicOrQueueName=The Topic Or Queue Name" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
----

This command creates the KameletBinding in the current namespace on the cluster.

:leveloffset: 3


[id="azure_servicebus_source_kamelets_source_file"]
== Kamelets source file

link:{kamelets-source-url}azure-servicebus-source.kamelet.yaml[]



:leveloffset: 3
:leveloffset: +1

[id="azure-storage-blob-sink"]
= Azure Storage Blob Sink

Upload data to Azure Storage Blob.

[IMPORTANT]
====
The Azure Storage Blob Sink Kamelet is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production.

These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview.
====

The Kamelet expects the following headers to be set:

- `file` / `ce-file`: as the file name to upload

If the header is not set,  the exchange ID is used as file name.

[id="azure_storage_blob_sink_configuration_options"]
== Configuration Options

The following table summarizes the configuration options available for the `azure-storage-blob-sink` Kamelet:

[width="100%",cols="2,^2,3,^2,^2,^3",options="header"]
|===
| Property| Name| Description| Type| Default| Example

| accessKey | Access Key| The Azure Storage Blob access Key.| `string` | |
| *accountName {empty}* *| Account Name| The Azure Storage Blob account name.| `string` | |
| *containerName {empty}* *| Container Name| The Azure Storage Blob container name.| `string` | |
|credentialType| Credential Type| Determines the credential strategy to adopt. Possible values are SHARED_ACCOUNT_KEY, SHARED_KEY_CREDENTIAL and AZURE_IDENTITY| string| `"SHARED_ACCOUNT_KEY"`|
| operation| Operation Name| The operation to perform.| string| `"uploadBlockBlob"`|
|===

*{empty}** = Fields marked with an asterisk are *mandatory*.


[id="azure_storage_blob_sink_dependencies"]

== Dependencies



[id="azure_storage_blob_sink_usage"]
== Usage




:leveloffset: +1

[id="azure_storage_blob_sink_knative_sink"]
=== Knative Sink

You can use the `azure-storage-blob-sink` Kamelet as a Knative sink by binding it to a Knative object.

.azure-storage-blob-sink-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: azure-storage-blob-sink-binding
spec:
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: azure-storage-blob-sink
    properties:
      accessKey: "The Access Key"
      accountName: "The Account Name"
      containerName: "The Container Name"
----

[id="azure_storage_blob_sink_kafka_sink"]
=== Kafka Sink

You can use the `azure-storage-blob-sink` Kamelet as a Kafka sink by binding it to a Kafka topic.

.azure-storage-blob-sink-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: azure-storage-blob-sink-binding
spec:
  source:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: azure-storage-blob-sink
    properties:
      accessKey: "The Access Key"
      accountName: "The Account Name"
      containerName: "The Container Name"
----

[NOTE]
====
If you use this Kamelet in a URI definition, you must enclose the `accessKey` value in `RAW()`.

For example:

[source,java]
----
to("kamelet:azure-storage-blob-sink?accessKey=RAW(ACCESS_KEY)&...");
----

====

:leveloffset: 3


[id="azure_storage_blob_sink_kamelets_source_file"]
== Kamelets source file

link:{kamelets-source-url}azure-storage-blob-sink.kamelet.yaml[]



:leveloffset: 3
:leveloffset: +1

[id="azure-storage-blob-source"]
= Azure Storage Blob Source

Consume Files from Azure Storage Blob.

[IMPORTANT]
====
The Azure Storage Blob Source Kamelet is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production.

These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview.
====

[id="azure_storage_blob_source_configuration_options"]
== Configuration Options

The following table summarizes the configuration options available for the `azure-storage-blob-source` Kamelet:

[width="100%",cols="2,^2,3,^2,^2,^3",options="header"]
|===
| Property| Name| Description| Type| Default| Example

| accessKey | Access Key| The Azure Storage Blob access Key.| `string` | |
| *accountName {empty}* *| Account Name| The Azure Storage Blob account name.| `string` | |
| *containerName {empty}* *| Container Name| The Azure Storage Blob container name.| `string` | |
| *period {empty}* *| Period Between Polls| The interval between fetches to the Azure Storage Container in milliseconds| integer| `10000`|
| credentialType| Credential Type| Determines the credential strategy to adopt. Possible values are SHARED_ACCOUNT_KEY, SHARED_KEY_CREDENTIAL and AZURE_IDENTITY| string| `"SHARED_ACCOUNT_KEY"`|
|===

*{empty}** = Fields marked with an asterisk are *mandatory*.


[id="azure_storage_blob_source_dependencies"]

== Dependencies



[id="azure_storage_blob_source_usage"]
== Usage




:leveloffset: +1

[id="azure_storage_blob_source_knative_source"]
=== Knative Source

You can use the `azure-storage-blob-source` Kamelet as a Knative source by binding it to a Knative object.

.azure-storage-blob-source-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: azure-storage-blob-source-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: azure-storage-blob-source
    properties:
      accessKey: "The Access Key"
      accountName: "The Account Name"
      containerName: "The Container Name"
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
----

[id="azure_storage_blob_source_kafka_source"]
=== Kafka Source

You can use the `azure-storage-blob-source` Kamelet as a Kafka source by binding it to a Kafka topic.

.azure-storage-blob-source-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: azure-storage-blob-source-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: azure-storage-blob-source
    properties:
      accessKey: "The Access Key"
      accountName: "The Account Name"
      containerName: "The Container Name"
  sink:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
----

:leveloffset: 3


[id="azure_storage_blob_source_kamelets_source_file"]
== Kamelets source file

link:{kamelets-source-url}azure-storage-blob-source.kamelet.yaml[]



:leveloffset: 3
:leveloffset: +1



= Azure Storage Queue Sink

Send Messages to Azure Storage queues.


[IMPORTANT]
====
The Azure Storage Queue Sink Kamelet is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production.

These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview.
====

The Kamelet recognizes the following header:

- `expiration` / `ce-expiration`: the time to live of the message in the queue.

If the header is not set, the default of 7 days is used.

The value must be an ISO-8601 duration in the form PnDTnHnMn.nS, for example PT20.345S (20.345 seconds) or P2D (2 days).

[id="azure_storage_queue_sink_configuration_options"]
== Configuration Options

The following table summarizes the configuration options available for the `azure-storage-queue-sink` Kamelet:

[width="100%",cols="2,^2,3,^2,^2,^3",options="header"]
|===
| Property| Name| Description| Type| Default| Example

| accessKey | Access Key| The Azure Storage Queue access Key.| `string` | |
| *accountName {empty}* *| Account Name| The Azure Storage Queue account name.| `string` | |
| *queueName {empty}* *| Queue Name| The Azure Storage Queue name.| `string` | |
|===

*{empty}** = Fields marked with an asterisk are *mandatory*.


[id="azure_storage_queue_sink_dependencies"]

== Dependencies



[id="azure_storage_queue_sink_usage"]
== Usage




:leveloffset: +1

[id="azure_storage_queue_sink_knative_sink"]
=== Knative Sink

You can use the `azure-storage-queue-sink` Kamelet as a Knative sink by binding it to a Knative object.

.azure-storage-queue-sink-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: azure-storage-queue-sink-binding
spec:
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: azure-storage-queue-sink
    properties:
      accessKey: "The Access Key"
      accountName: "The Account Name"
      queueName: "The Queue Name"
----

[id="azure_storage_queue_sink_kafka_sink"]
=== Kafka Sink

You can use the `azure-storage-queue-sink` Kamelet as a Kafka sink by binding it to a Kafka topic.

.azure-storage-queue-sink-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: azure-storage-queue-sink-binding
spec:
  source:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: azure-storage-queue-sink
    properties:
      accessKey: "The Access Key"
      accountName: "The Account Name"
      queueName: "The Queue Name"
----

:leveloffset: 3


[id="azure_storage_queue_sink_kamelets_source_file"]
== Kamelets source file

link:{kamelets-source-url}azure-storage-queue-sink.kamelet.yaml[]



:leveloffset: 3
:leveloffset: +1



= Azure Storage Queue Source

Receive Messages from Azure Storage queues.

[IMPORTANT]
====
The Azure Storage Queue Source Kamelet is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production.

These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview.
====

[id="azure_storage_queue_source_configuration_options"]
== Configuration Options

The following table summarizes the configuration options available for the `azure-storage-queue-source` Kamelet:

[width="100%",cols="2,^2,3,^2,^2,^3",options="header"]
|===
| Property| Name| Description| Type| Default| Example

| accessKey | Access Key| The Azure Storage Queue access Key.| `string` | |
| *accountName {empty}* *| Account Name| The Azure Storage Queue account name.| `string` | |
| *queueName {empty}* *| Queue Name| The Azure Storage Queue name.| `string` | |
| maxMessages| Maximum Messages| The maximum number of messages to get. If fewer messages than requested exist in the queue, all available messages are returned. By default, 1 message is retrieved; the allowed range is 1 to 32 messages.| `integer` | `1`|
|===

*{empty}** = Fields marked with an asterisk are *mandatory*.


[id="azure_storage_queue_source_dependencies"]

== Dependencies



[id="azure_storage_queue_source_usage"]
== Usage




:leveloffset: +1

[id="azure_storage_queue_source_knative_source"]
=== Knative Source

You can use the `azure-storage-queue-source` Kamelet as a Knative source by binding it to a Knative object.

.azure-storage-queue-source-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: azure-storage-queue-source-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: azure-storage-queue-source
    properties:
      accessKey: "The Access Key"
      accountName: "The Account Name"
      queueName: "The Queue Name"
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
----

[id="azure_storage_queue_source_kafka_source"]
=== Kafka Source

You can use the `azure-storage-queue-source` Kamelet as a Kafka source by binding it to a Kafka topic.

.azure-storage-queue-source-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: azure-storage-queue-source-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: azure-storage-queue-source
    properties:
      accessKey: "The Access Key"
      accountName: "The Account Name"
      queueName: "The Queue Name"
  sink:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
----

:leveloffset: 3


[id="azure_storage_queue_source_kamelets_source_file"]
== Kamelets source file

link:{kamelets-source-url}azure-storage-queue-source.kamelet.yaml[]



:leveloffset: 3
////
:leveloffset: +1

[id="cassandra-sink"]
= Cassandra Sink

Send data to a Cassandra Cluster.

This Kamelet expects the body as JSON Array. The content of the JSON Array is used as input for the CQL Prepared Statement set in the query parameter.
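
For example, assuming a hypothetical `customers(id, name)` table, a prepared statement with two bind parameters expects a two-element JSON Array as the message body:

[source,yaml,subs="attributes+"]
----
    properties:
      connectionHost: "localhost"
      connectionPort: 9042
      keyspace: "customers"
      # Hypothetical prepared statement; the JSON Array in the message body,
      # for example ["1", "John"], supplies the bind parameters in order.
      query: "INSERT INTO customers (id, name) VALUES (?, ?)"
----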

[id="cassandra_sink_configuration_options"]
== Configuration Options

The following table summarizes the configuration options available for the `cassandra-sink` Kamelet:

[width="100%",cols="2,^2,3,^2,^2,^3",options="header"]
|===
| Property| Name| Description| Type| Default| Example

| *connectionHost {empty}* *| Connection Host| The hostname(s) of the Cassandra server(s). Separate multiple hosts with a comma.| `string` | | `"localhost"`
| *connectionPort {empty}* *| Connection Port| The port number of the Cassandra server(s).| `string` | | `9042`
| *keyspace {empty}* *| Keyspace| The keyspace to use.| `string` | | `"customers"`
| *query {empty}* *| Query| The query to execute against the Cassandra cluster table| `string` | |

| password | Password| The password to use for accessing a secured Cassandra Cluster| `string` | |
| username | Username| The username to use for accessing a secured Cassandra Cluster| `string` | |
| consistencyLevel| Consistency Level| Consistency level to use. The value can be one of ANY, ONE, TWO, THREE, QUORUM, ALL, LOCAL_QUORUM, EACH_QUORUM, SERIAL, LOCAL_SERIAL, LOCAL_ONE| string| `"ANY"`|
| prepareStatements |  Prepare Statements |  If true, specifies to use PreparedStatements as the query. If false, specifies to use regular Statements as the query. |  boolean | `true` |
| extraTypeCodecs | Extra Type Codecs | To use a specific comma separated list of Extra Type codecs. | string | | `"BLOB_TO_ARRAY"`
| jsonPayload |  JSON Payload |  Whether to transform the payload into JSON. |  boolean | `true` |

|===

*{empty}** = Fields marked with an asterisk are *mandatory*.


[id="cassandra_sink_dependencies"]

== Dependencies



[id="cassandra_sink_usage"]
== Usage

:leveloffset: +1

[id="cassandra_sink_knative_sink"]
=== Knative Sink

You can use the `cassandra-sink` Kamelet as a Knative sink by binding it to a Knative object.

.cassandra-sink-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: cassandra-sink-binding
spec:
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: cassandra-sink
    properties:
      connectionHost: "localhost"
      connectionPort: 9042
      keyspace: "customers"
      password: "The Password"
      query: "The Query"
      username: "The Username"
----

[id="cassandra_sink_kafka_sink"]
=== Kafka Sink

You can use the `cassandra-sink` Kamelet as a Kafka sink by binding it to a Kafka topic.

.cassandra-sink-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: cassandra-sink-binding
spec:
  source:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: cassandra-sink
    properties:
      connectionHost: "localhost"
      connectionPort: 9042
      keyspace: "customers"
      password: "The Password"
      query: "The Query"
      username: "The Username"
----

:leveloffset: 3


[id="cassandra_sink_kamelets_source_file"]
== Kamelets source file

link:{kamelets-source-url}cassandra-sink.kamelet.yaml[]



:leveloffset: 3
:leveloffset: +1

[id="cassandra-source"]
= Cassandra Source

Query a Cassandra cluster table.

[id="cassandra_source_configuration_options"]
== Configuration Options

The following table summarizes the configuration options available for the `cassandra-source` Kamelet:

[width="100%",cols="2,^2,3,^2,^2,^3",options="header"]
|===
| Property| Name| Description| Type| Default| Example

| *connectionHost {empty}* *| Connection Host| The hostname(s) of the Cassandra server(s). Separate multiple hosts with a comma.| `string` | | `"localhost"`
| *connectionPort {empty}* *| Connection Port| The port number of the Cassandra server(s).| `string` | | `9042`
| *keyspace {empty}* *| Keyspace| The keyspace to use.| `string` | | `"customers"`
| *query {empty}* *| Query| The query to execute against the Cassandra cluster table| `string` | |

| password | Password| The password to use for accessing a secured Cassandra Cluster| `string` | |
| username | Username| The username to use for accessing a secured Cassandra Cluster| `string` | |
| consistencyLevel| Consistency Level| Consistency level to use. The value can be one of ANY, ONE, TWO, THREE, QUORUM, ALL, LOCAL_QUORUM, EACH_QUORUM, SERIAL, LOCAL_SERIAL, LOCAL_ONE| string| `"QUORUM"`|
| resultStrategy| Result Strategy| The strategy to convert the result set of the query. Possible values are ALL, ONE, LIMIT_10, LIMIT_100...| string| `"ALL"`|
| extraTypeCodecs |  Extra Type Codecs |  To use a specific comma separated list of Extra Type codecs.  |  string | | `"BLOB_TO_ARRAY"`

|===

*{empty}** = Fields marked with an asterisk are *mandatory*.
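
As an illustrative sketch, a source that runs a hypothetical query against the `customers` keyspace at QUORUM consistency and returns the full result set might set:

[source,yaml,subs="attributes+"]
----
    properties:
      connectionHost: "localhost"
      connectionPort: 9042
      keyspace: "customers"
      # Hypothetical query; resultStrategy controls how the result set is converted.
      query: "SELECT id, name FROM customers"
      consistencyLevel: "QUORUM"
      resultStrategy: "ALL"
----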


[id="cassandra_source_dependencies"]

== Dependencies



[id="cassandra_source_usage"]
== Usage

:leveloffset: +1

[id="cassandra_source_knative_source"]
=== Knative Source

You can use the `cassandra-source` Kamelet as a Knative source by binding it to a Knative object.

.cassandra-source-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: cassandra-source-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: cassandra-source
    properties:
      connectionHost: "localhost"
      connectionPort: 9042
      keyspace: "customers"
      password: "The Password"
      query: "The Query"
      username: "The Username"
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
----

[id="cassandra_source_kafka_source"]
=== Kafka Source

You can use the `cassandra-source` Kamelet as a Kafka source by binding it to a Kafka topic.

.cassandra-source-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: cassandra-source-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: cassandra-source
    properties:
      connectionHost: "localhost"
      connectionPort: 9042
      keyspace: "customers"
      password: "The Password"
      query: "The Query"
      username: "The Username"
  sink:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
----

:leveloffset: 3


[id="cassandra_source_kamelets_source_file"]
== Kamelets source file

link:{kamelets-source-url}cassandra-source.kamelet.yaml[]



:leveloffset: 3
:leveloffset: +1

[id="ceph-sink"]
= Ceph Sink

Upload data to a Ceph Bucket managed by an Object Storage Gateway.

In the header, you can optionally set the `file` / `ce-file` property to specify the name of the file to upload.

If you do not set the property in the header, the Kamelet uses the exchange ID for the file name.

[id="ceph_sink_configuration_options"]
== Configuration Options

The following table summarizes the configuration options available for the `ceph-sink` Kamelet:

[width="100%",cols="2,^2,3,^2,^2,^3",options="header"]
|===
| Property| Name| Description| Type| Default| Example

| *bucketName {empty}* *| Bucket Name| The Ceph Bucket name.| `string` | |
| *cephUrl {empty}* *| Ceph Url Address| Set the Ceph Object Storage Address Url.| `string` | | `"http://ceph-storage-address.com"`
| *zoneGroup {empty}* *| Bucket Zone Group| The bucket zone group.| `string` | |

| accessKey | Access Key| The access key.| `string` | |
| secretKey | Secret Key| The secret key.| `string` | |
| autoCreateBucket| Autocreate Bucket| Specifies to automatically create the bucket.| `boolean` | `false`|
| keyName| Key Name| The key name for saving an element in the bucket.| `string` | |

|===

*{empty}** = Fields marked with an asterisk are *mandatory*.
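
For example, the optional properties can enable automatic bucket creation and a fixed object key. The following is a sketch only, reusing the placeholder values from the table above:

[source,yaml,subs="attributes+"]
----
    properties:
      bucketName: "The Bucket Name"
      cephUrl: "http://ceph-storage-address.com"
      zoneGroup: "The Bucket Zone Group"
      accessKey: "The Access Key"
      secretKey: "The Secret Key"
      # Illustrative optional settings
      autoCreateBucket: true
      keyName: "The Key Name"
----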

[id="ceph_sink_dependencies"]

== Dependencies



[id="ceph_sink_usage"]
== Usage




:leveloffset: +1

[id="ceph_sink_knative_sink"]
=== Knative Sink

You can use the `ceph-sink` Kamelet as a Knative sink by binding it to a Knative object.

.ceph-sink-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: ceph-sink-binding
spec:
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: ceph-sink
    properties:
      accessKey: "The Access Key"
      bucketName: "The Bucket Name"
      cephUrl: "http://ceph-storage-address.com"
      secretKey: "The Secret Key"
      zoneGroup: "The Bucket Zone Group"
----

[id="ceph_sink_kafka_sink"]
=== Kafka Sink

You can use the `ceph-sink` Kamelet as a Kafka sink by binding it to a Kafka topic.

.ceph-sink-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: ceph-sink-binding
spec:
  source:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: ceph-sink
    properties:
      accessKey: "The Access Key"
      bucketName: "The Bucket Name"
      cephUrl: "http://ceph-storage-address.com"
      secretKey: "The Secret Key"
      zoneGroup: "The Bucket Zone Group"
----

:leveloffset: 3


[id="ceph_sink_kamelets_source_file"]
== Kamelets source file

link:{kamelets-source-url}ceph-sink.kamelet.yaml[]

:leveloffset: 3
:leveloffset: +1

[id="ceph-source"]
= Ceph Source

Receive data from a Ceph Bucket managed by an Object Storage Gateway.

[id="ceph_source_configuration_options"]
== Configuration Options

The following table summarizes the configuration options available for the `ceph-source` Kamelet:

[width="100%",cols="2,^2,3,^2,^2,^3",options="header"]
|===
| Property| Name| Description| Type| Default| Example

| *bucketName {empty}* *| Bucket Name| The Ceph Bucket name.| `string` | |
| *cephUrl {empty}* *| Ceph Url Address| Set the Ceph Object Storage Address Url.| `string` | | `"http://ceph-storage-address.com"`
| *zoneGroup {empty}* *| Bucket Zone Group| The bucket zone group.| `string` | |

| accessKey | Access Key| The access key.| `string` | |
| secretKey | Secret Key| The secret key.| `string` | |
| autoCreateBucket| Autocreate Bucket| Specifies to automatically create the bucket.| `boolean` | `false`|
| delay| Delay| The number of milliseconds before the next poll of the selected bucket.| integer| `500`|
| deleteAfterRead| Auto-delete Objects| Specifies to delete objects after consuming them.| `boolean` | `true`|
| ignoreBody| Ignore Body| If true, the Object body is ignored. Setting this to true overrides any behavior defined by the `includeBody` option. If false, the object is put in the body.| `boolean` | `false`|
| includeBody| Include Body| If true, the exchange is consumed and put into the body and closed. If false, the Object stream is put raw into the body and the headers are set with the object metadata.| `boolean` | `true`|
| prefix| Prefix| The bucket prefix to consider while searching.| `string` | | `"folder/"`
|===

*{empty}** = Fields marked with an asterisk are *mandatory*.
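
For example, the optional polling properties can be combined as in the following sketch. The values are illustrative only:

[source,yaml,subs="attributes+"]
----
    properties:
      bucketName: "The Bucket Name"
      cephUrl: "http://ceph-storage-address.com"
      zoneGroup: "The Bucket Zone Group"
      accessKey: "The Access Key"
      secretKey: "The Secret Key"
      # Illustrative optional settings: poll every 500 ms, consider only
      # objects under "folder/", and keep objects after consuming them.
      delay: 500
      prefix: "folder/"
      deleteAfterRead: false
----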

[id="ceph_source_dependencies"]

== Dependencies



[id="ceph_source_usage"]
== Usage




:leveloffset: +1

[id="ceph_source_knative_source"]
=== Knative Source

You can use the `ceph-source` Kamelet as a Knative source by binding it to a Knative object.

.ceph-source-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: ceph-source-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: ceph-source
    properties:
      accessKey: "The Access Key"
      bucketName: "The Bucket Name"
      cephUrl: "http://ceph-storage-address.com"
      secretKey: "The Secret Key"
      zoneGroup: "The Bucket Zone Group"
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
----

[id="ceph_source_kafka_source"]
=== Kafka Source

You can use the `ceph-source` Kamelet as a Kafka source by binding it to a Kafka topic.

.ceph-source-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: ceph-source-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: ceph-source
    properties:
      accessKey: "The Access Key"
      bucketName: "The Bucket Name"
      cephUrl: "http://ceph-storage-address.com"
      secretKey: "The Secret Key"
      zoneGroup: "The Bucket Zone Group"
  sink:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
----

:leveloffset: 3


[id="ceph_source_kamelets_source_file"]
== Kamelets source file

link:{kamelets-source-url}ceph-source.kamelet.yaml[]

:leveloffset: 3
// include::../../../modules/camel/kamelets-reference/kamelets/elasticsearch-index-sink.adoc[leveloffset=+1]
:leveloffset: +1

[id="extract-field-action"]
= Extract Field Action

Extract a field from the message body.

[id="extract_field_action_configuration_options"]
== Configuration Options

The following table summarizes the configuration options available for the `extract-field-action` Kamelet:

[width="100%",cols="2,^2,3,^2,^2,^3",options="header"]
|===
| Property| Name| Description| Type| Default| Example

| *field {empty}* *| Field| The name of the field to extract from the body| `string` | |

| headerOutput |  Header Output |  If enabled, the action stores the extracted field in a header named `CamelKameletsExtractFieldName`. |  `boolean` |  `false` |
| headerOutputName |  Header Output Name |  A custom name for the header containing the extracted field. |  `string` |  `"none"` |
| strictHeaderCheck |  Strict Header Check |  If enabled, the action checks whether the header output name (custom or default) has already been used in the exchange. If it has, the extracted field is stored in the message body; if not, the extracted field is stored in the selected header (custom or default). |  `boolean` |  `false` |
| trimField |  Trim Field |  If enabled, the action returns the raw extracted field. |  `boolean` |  `false` |
|===

*{empty}** = Fields marked with an asterisk are *mandatory*.
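
As a minimal sketch, the following intermediate step extracts the `username` field from a JSON body. The extracted value replaces the message body unless header output is enabled:

[source,yaml,subs="attributes+"]
----
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1
        name: extract-field-action
      properties:
        # For a body whose username field holds "oscerd", the message body
        # becomes "oscerd"; with headerOutput enabled, the value is stored
        # in the CamelKameletsExtractFieldName header instead.
        field: "username"
----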


[id="extract_field_action_dependencies"]

== Dependencies



[id="extract_field_action_usage"]
== Usage




:leveloffset: +1

[id="extract_field_action_knative_action"]
=== Knative Action

You can use the `extract-field-action` Kamelet as an intermediate step in a Knative binding.

.extract-field-action-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: extract-field-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: timer-source
    properties:
      message: "Hello"
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1
        name: extract-field-action
      properties:
        field: "The Field"
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
----

[id="extract_field_action_kafka_action"]
=== Kafka Action

You can use the `extract-field-action` Kamelet as an intermediate step in a Kafka binding.

.extract-field-action-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: extract-field-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: timer-source
    properties:
      message: "Hello"
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1
        name: extract-field-action
      properties:
        field: "The Field"
  sink:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
----

:leveloffset: 3


[id="extract_field_action_kamelets_source_file"]
== Kamelets source file

link:{kamelets-source-url}extract-field-action.kamelet.yaml[]



:leveloffset: 3
:leveloffset: +1

[id="ftp-sink"]
= FTP Sink

Send data to an FTP Server.

The Kamelet expects the following headers to be set:

- `file` / `ce-file`: as the file name to upload

If the header is not set, the exchange ID is used as the file name.
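
For example, the file name can be set with an intermediate step before the sink. The following sketch assumes the `insert-header-action` Kamelet is available in your catalog, and the file name is illustrative:

[source,yaml,subs="attributes+"]
----
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1
        name: insert-header-action
      properties:
        # The ftp-sink Kamelet uses the "file" header as the upload file name.
        name: "file"
        value: "orders.csv"
----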

[id="ftp_sink_configuration_options"]
== Configuration Options

The following table summarizes the configuration options available for the `ftp-sink` Kamelet:

[width="100%",cols="2,^2,3,^2,^2,^3",options="header"]
|===
| Property| Name| Description| Type| Default| Example

| *connectionHost {empty}* *| Connection Host| Hostname of the FTP server| `string` | |
| *connectionPort {empty}* *| Connection Port| Port of the FTP server| string| `21`|
| *directoryName {empty}* *| Directory Name| The starting directory| `string` | |

| password | Password| The password to access the FTP server| `string` | |
| username | Username| The username to access the FTP server| `string` | |
| fileExist| File Existence| How to behave if the file already exists. Possible values are Override, Append, Fail, and Ignore.| string| `"Override"`|
| passiveMode| Passive Mode| Sets passive mode connection| `boolean` | `false`|
| binary |  Binary |  Specifies the file transfer mode, BINARY or ASCII. Default is ASCII (false). |  boolean | `false` |
| autoCreate |  Autocreate Missing Directories |  Automatically create starting directory. |  boolean | `true` |
| delete |  Delete |  If true, the file is deleted after it is processed successfully. |  boolean | `false` |

|===

*{empty}** = Fields marked with an asterisk are *mandatory*.


[id="ftp_sink_dependencies"]

== Dependencies



[id="ftp_sink_usage"]
== Usage




:leveloffset: +1

[id="ftp_sink_knative_sink"]
=== Knative Sink

You can use the `ftp-sink` Kamelet as a Knative sink by binding it to a Knative object.

.ftp-sink-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: ftp-sink-binding
spec:
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: ftp-sink
    properties:
      connectionHost: "The Connection Host"
      directoryName: "The Directory Name"
      password: "The Password"
      username: "The Username"
----

[id="ftp_sink_kafka_sink"]
=== Kafka Sink

You can use the `ftp-sink` Kamelet as a Kafka sink by binding it to a Kafka topic.

.ftp-sink-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: ftp-sink-binding
spec:
  source:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: ftp-sink
    properties:
      connectionHost: "The Connection Host"
      directoryName: "The Directory Name"
      password: "The Password"
      username: "The Username"
----

:leveloffset: 3


[id="ftp_sink_kamelets_source_file"]
== Kamelets source file

link:{kamelets-source-url}ftp-sink.kamelet.yaml[]



:leveloffset: 3
:leveloffset: +1

[id="ftp-source"]
= FTP Source

Receive data from an FTP Server.

[id="ftp_source_configuration_options"]
== Configuration Options

The following table summarizes the configuration options available for the `ftp-source` Kamelet:

[width="100%",cols="2,^2,3,^2,^2,^3",options="header"]
|===
| Property| Name| Description| Type| Default| Example

| *connectionHost {empty}* *| Connection Host| Hostname of the FTP server| `string` | |
| *connectionPort {empty}* *| Connection Port| Port of the FTP server| string| `21`|
| *directoryName {empty}* *| Directory Name| The starting directory| `string` | |

| password | Password| The password to access the FTP server| `string` | |
| username | Username| The username to access the FTP server| `string` | |
| idempotent| Idempotency| Skip already processed files.| `boolean` | `true`|
| passiveMode| Passive Mode| Sets passive mode connection| `boolean` | `false`|
| recursive| Recursive| If a directory, looks for files in all the subdirectories as well.| `boolean` | `false`|
| binary |  Binary |  Specifies the file transfer mode, BINARY or ASCII. Default is ASCII (false). |  boolean | `false` |
| autoCreate |  Autocreate Missing Directories |  Automatically create starting directory. |  boolean | `true` |
| delete |  Delete |  If true, the file is deleted after it is processed successfully. |  boolean | `false` |

|===

*{empty}** = Fields marked with an asterisk are *mandatory*.


[id="ftp_source_dependencies"]

== Dependencies



[id="ftp_source_usage"]
== Usage




:leveloffset: +1

[id="ftp_source_knative_source"]
=== Knative Source

You can use the `ftp-source` Kamelet as a Knative source by binding it to a Knative object.

.ftp-source-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: ftp-source-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: ftp-source
    properties:
      connectionHost: "The Connection Host"
      directoryName: "The Directory Name"
      password: "The Password"
      username: "The Username"
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
----

[id="ftp_source_kafka_source"]
=== Kafka Source

You can use the `ftp-source` Kamelet as a Kafka source by binding it to a Kafka topic.

.ftp-source-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: ftp-source-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: ftp-source
    properties:
      connectionHost: "The Connection Host"
      directoryName: "The Directory Name"
      password: "The Password"
      username: "The Username"
  sink:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
----

:leveloffset: 3



[id="ftp_source_kamelets_source_file"]
== Kamelets source file

link:{kamelets-source-url}ftp-source.kamelet.yaml[]



:leveloffset: 3
:leveloffset: +1

[id="ftps-sink"]
= FTPS Sink

Send data to an FTPS Server.

The Kamelet expects the following headers to be set:

- `file` / `ce-file`: as the file name to upload

If the header is not set, the exchange ID is used as the file name.

[id="ftps_sink_configuration_options"]
== Configuration Options

The following table summarizes the configuration options available for the `ftps-sink` Kamelet:

[width="100%",cols="2,^2,3,^2,^2,^3",options="header"]
|===
| Property| Name| Description| Type| Default| Example

| *connectionHost {empty}* *| Connection Host| Hostname of the FTP server| `string` | |
| *connectionPort {empty}* *| Connection Port| Port of the FTP server| string| `21`|
| *directoryName {empty}* *| Directory Name| The starting directory| `string` | |

| password | Password| The password to access the FTP server| `string` | |
| username | Username| The username to access the FTP server| `string` | |
| fileExist| File Existence| How to behave if the file already exists. One of: Override, Append, Fail, or Ignore.| `string` | `"Override"`|
| passiveMode| Passive Mode| Sets passive mode connection| `boolean` | `false`|
| binary |  Binary |  Specifies the file transfer mode, BINARY or ASCII. Default is ASCII (false). |  boolean | `false` |
| autoCreate |  Autocreate Missing Directories |  Automatically create starting directory. |  boolean | `true` |
| delete |  Delete |  If true, the file is deleted after it is processed successfully. |  boolean | `false` |

|===

*{empty}** = Fields marked with an asterisk are *mandatory*.


[id="ftps_sink_dependencies"]

== Dependencies



[id="ftps_sink_usage"]
== Usage




:leveloffset: +1

[id="ftps_sink_knative_sink"]
=== Knative Sink

You can use the `ftps-sink` Kamelet as a Knative sink by binding it to a Knative object.

.ftps-sink-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: ftps-sink-binding
spec:
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: ftps-sink
    properties:
      connectionHost: "The Connection Host"
      directoryName: "The Directory Name"
      password: "The Password"
      username: "The Username"
----

[id="ftps_sink_kafka_sink"]
=== Kafka Sink

You can use the `ftps-sink` Kamelet as a Kafka sink by binding it to a Kafka topic.

.ftps-sink-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: ftps-sink-binding
spec:
  source:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: ftps-sink
    properties:
      connectionHost: "The Connection Host"
      directoryName: "The Directory Name"
      password: "The Password"
      username: "The Username"
----

:leveloffset: 3


[id="ftps_sink_kamelets_source_file"]
== Kamelets source file

link:{kamelets-source-url}ftps-sink.kamelet.yaml[]



:leveloffset: 3
:leveloffset: +1

[id="ftps-source"]
= FTPS Source

Receive data from an FTPS Server.

[id="ftps_source_configuration_options"]
== Configuration Options

The following table summarizes the configuration options available for the `ftps-source` Kamelet:

[width="100%",cols="2,^2,3,^2,^2,^3",options="header"]
|===
| Property| Name| Description| Type| Default| Example

| *connectionHost {empty}* *| Connection Host| Hostname of the FTPS server| `string` | |
| *connectionPort {empty}* *| Connection Port| Port of the FTPS server| string| `21`|
| *directoryName {empty}* *| Directory Name| The starting directory| `string` | |

| password | Password| The password to access the FTPS server| `string` | |
| username | Username| The username to access the FTPS server| `string` | |
| idempotent| Idempotency| Skip already processed files.| `boolean` | `true`|
| passiveMode| Passive Mode| Sets passive mode connection| `boolean` | `false`|
| recursive| Recursive| If a directory, looks for files in all the subdirectories as well.| `boolean` | `false`|
| binary |  Binary |  Specifies the file transfer mode, BINARY or ASCII. Default is ASCII (false). |  boolean | `false` |
| autoCreate |  Autocreate Missing Directories |  Automatically create starting directory. |  boolean | `true` |
| delete |  Delete |  If true, the file is deleted after it is processed successfully. |  boolean | `false` |
|===

*{empty}** = Fields marked with an asterisk are *mandatory*.


[id="ftps_source_dependencies"]

== Dependencies



[id="ftps_source_usage"]
== Usage




:leveloffset: +1

[id="ftps_source_knative_source"]
=== Knative Source

You can use the `ftps-source` Kamelet as a Knative source by binding it to a Knative object.

.ftps-source-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: ftps-source-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: ftps-source
    properties:
      connectionHost: "The Connection Host"
      directoryName: "The Directory Name"
      password: "The Password"
      username: "The Username"
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
----

[id="ftps_source_kafka_source"]
=== Kafka Source

You can use the `ftps-source` Kamelet as a Kafka source by binding it to a Kafka topic.

.ftps-source-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: ftps-source-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: ftps-source
    properties:
      connectionHost: "The Connection Host"
      directoryName: "The Directory Name"
      password: "The Password"
      username: "The Username"
  sink:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
----

:leveloffset: 3



[id="ftps_source_kamelets_source_file"]
== Kamelets source file

link:{kamelets-source-url}ftps-source.kamelet.yaml[]



:leveloffset: 3
:leveloffset: +1

[id="has-header-filter-action"]
= Has Header Filter Action

Filter messages based on the presence of a single header.

[id="has_header_filter_action_configuration_options"]
== Configuration Options

The following table summarizes the configuration options available for the `has-header-filter-action` Kamelet:

[width="100%",cols="2,^2,3,^2,^2,^3",options="header"]
|===
| Property| Name| Description| Type| Default| Example

| *name {empty}* *| Header Name| The header name to evaluate. The header name must be passed by the source Kamelet. For Knative only, if you are using Cloud Events, you must include the CloudEvent (ce-) prefix in the header name.| `string` | | `"headerName"`
| value | Header Value | An optional value to compare the header against.| `string` | | `"headerValue"`

|===

*{empty}** = Fields marked with an asterisk are *mandatory*.


[id="has_header_filter_action_dependencies"]

== Dependencies



[id="has_header_filter_action_usage"]
== Usage




:leveloffset: +1

[id="has_header_filter_action_knative_action"]
=== Knative Action

You can use the `has-header-filter-action` Kamelet as an intermediate step in a Knative binding.

.has-header-filter-action-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: has-header-filter-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: timer-source
    properties:
      message: "Hello"
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1
        name: insert-header-action
      properties:
        name: "my-header"
        value: "my-value"
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1
        name: has-header-filter-action
      properties:
        name: "my-header"
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
----

[id="has_header_filter_action_kafka_action"]
=== Kafka Action

You can use the `has-header-filter-action` Kamelet as an intermediate step in a Kafka binding.

.has-header-filter-action-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: has-header-filter-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: timer-source
    properties:
      message: "Hello"
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1
        name: insert-header-action
      properties:
        name: "my-header"
        value: "my-value"
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1
        name: has-header-filter-action
      properties:
        name: "my-header"
  sink:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
----

:leveloffset: 3


[id="has_header_filter_action_kamelets_source_file"]
== Kamelets source file

link:{kamelets-source-url}has-header-filter-action.kamelet.yaml[]



:leveloffset: 3
:leveloffset: +1

[id="hoist-field-action"]
= Hoist Field Action

Wrap data in a single field
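
For example, assuming JSON content and `field` set to `wrapper` (an illustrative name), an incoming body such as `{"name": "Joe"}` is wrapped into a single field, producing roughly `{"wrapper": {"name": "Joe"}}`.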

[id="hoist_field_action_configuration_options"]
== Configuration Options

The following table summarizes the configuration options available for the `hoist-field-action` Kamelet:

[width="100%",cols="2,^2,3,^2,^2,^3",options="header"]
|===
| Property| Name| Description| Type| Default| Example

| *field {empty}* *| Field| The name of the field to contain the event| `string` | |

|===

*{empty}** = Fields marked with an asterisk are *mandatory*.


[id="hoist_field_action_dependencies"]

== Dependencies



[id="hoist_field_action_usage"]
== Usage




:leveloffset: +1

[id="hoist_field_action_knative_action"]
=== Knative Action

You can use the `hoist-field-action` Kamelet as an intermediate step in a Knative binding.

.hoist-field-action-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: hoist-field-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: timer-source
    properties:
      message: "Hello"
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1
        name: hoist-field-action
      properties:
        field: "The Field"
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
----

[id="hoist_field_action_kafka_action"]
=== Kafka Action

You can use the `hoist-field-action` Kamelet as an intermediate step in a Kafka binding.

.hoist-field-action-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: hoist-field-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: timer-source
    properties:
      message: "Hello"
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1
        name: hoist-field-action
      properties:
        field: "The Field"
  sink:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
----

:leveloffset: 3



[id="hoist_field_action_kamelets_source_file"]
== Kamelets source file

link:{kamelets-source-url}hoist-field-action.kamelet.yaml[]



:leveloffset: 3
:leveloffset: +1

[id="http-sink"]
= HTTP Sink

Forwards an event to an HTTP endpoint.

[id="http_sink_configuration_options"]
== Configuration Options

The following table summarizes the configuration options available for the `http-sink` Kamelet:

[width="100%",cols="2,^2,3,^2,^2,^3",options="header"]
|===
| Property| Name| Description| Type| Default| Example

| *url {empty}* *| URL| The URL to send data to| `string` | | `"https://my-service/path"`

| method| Method| The HTTP method to use| string| `"POST"`|
|===

*{empty}** = Fields marked with an asterisk are *mandatory*.
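
The `method` property defaults to `POST`; you can override it together with the URL in the binding properties. The following is a minimal sketch of the sink part of a Pipe (the URL is a placeholder and `PUT` is only an illustrative choice):

[source,yaml]
----
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: http-sink
    properties:
      url: "https://my-service/path"
      method: "PUT"   # illustrative override of the default POST
----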


[id="http_sink_dependencies"]

== Dependencies



[id="http_sink_usage"]
== Usage




:leveloffset: +1

[id="http_sink_knative_sink"]
=== Knative Sink

You can use the `http-sink` Kamelet as a Knative sink by binding it to a Knative object.

.http-sink-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: http-sink-binding
spec:
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: http-sink
    properties:
      url: "https://my-service/path"
----

[id="http_sink_kafka_sink"]
=== Kafka Sink

You can use the `http-sink` Kamelet as a Kafka sink by binding it to a Kafka topic.

.http-sink-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: http-sink-binding
spec:
  source:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: http-sink
    properties:
      url: "https://my-service/path"
----

:leveloffset: 3


[id="http_sink_kamelets_source_file"]
== Kamelets source file

link:{kamelets-source-url}http-sink.kamelet.yaml[]



:leveloffset: 3
:leveloffset: +1

[id="insert-field-action"]
= Insert Field Action

Adds a custom field with a constant value to the message in transit
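
For example, assuming a JSON body, with `field` set to `quote` and `value` set to `Hello` (both illustrative), an incoming body such as `{"foo": "John"}` becomes roughly `{"foo": "John", "quote": "Hello"}`. This is why the bindings below first run the `json-deserialize-action` Kamelet, so that the message is available as structured data when the field is added.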

[id="insert_field_action_configuration_options"]
== Configuration Options

The following table summarizes the configuration options available for the `insert-field-action` Kamelet:

[width="100%",cols="2,^2,3,^2,^2,^3",options="header"]
|===
| Property| Name| Description| Type| Default| Example

| *field {empty}* *| Field| The name of the field to be added| `string` | |
| *value {empty}* *| Value| The value of the field| `string` | |
|===

*{empty}** = Fields marked with an asterisk are *mandatory*.


[id="insert_field_action_dependencies"]

== Dependencies



[id="insert_field_action_usage"]
== Usage




:leveloffset: +1

[id="insert_field_action_knative_action"]
=== Knative Action

You can use the `insert-field-action` Kamelet as an intermediate step in a Knative binding.

.insert-field-action-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: insert-field-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: timer-source
    properties:
      message: '{"foo":"John"}'
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1
        name: json-deserialize-action
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1
        name: insert-field-action
      properties:
        field: "The Field"
        value: "The Value"
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
----

[id="insert_field_action_kafka_action"]
=== Kafka Action

You can use the `insert-field-action` Kamelet as an intermediate step in a Kafka binding.

.insert-field-action-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: insert-field-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: timer-source
    properties:
      message: '{"foo":"John"}'
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1
        name: json-deserialize-action
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1
        name: insert-field-action
      properties:
        field: "The Field"
        value: "The Value"
  sink:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
----

:leveloffset: 3


[id="insert_field_action_kamelets_source_file"]
== Kamelets source file

link:{kamelets-source-url}insert-field-action.kamelet.yaml[]



:leveloffset: 3
:leveloffset: +1

[id="insert-header-action"]
= Insert Header Action

Adds a header with a constant value to the message in transit.

[id="insert_header_action_configuration_options"]
== Configuration Options

The following table summarizes the configuration options available for the `insert-header-action` Kamelet:

[width="100%",cols="2,^2,3,^2,^2,^3",options="header"]
|===
| Property| Name| Description| Type| Default| Example

| *name {empty}* *| Name| The name of the header to be added. For Knative only, the name of the header requires a CloudEvent (ce-) prefix.| `string` | |
| *value {empty}* *| Value| The value of the header| `string` | |
|===

*{empty}** = Fields marked with an asterisk are *mandatory*.


[id="insert_header_action_dependencies"]

== Dependencies



[id="insert_header_action_usage"]
== Usage




:leveloffset: +1

[id="insert_header_action_knative_action"]
=== Knative Action

You must use the `insert-header-action` Kamelet as an intermediate step in a Knative binding.

.insert-header-action-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: insert-header-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: timer-source
    properties:
      message: "Hello"
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1
        name: insert-header-action
      properties:
        name: "The Name"
        value: "The Value"
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
----

[id="insert_header_action_kafka_action"]
=== Kafka Action

You must use the `insert-header-action` Kamelet as an intermediate step in a Kafka binding.

.insert-header-action-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: insert-header-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: timer-source
    properties:
      message: "Hello"
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1
        name: insert-header-action
      properties:
        name: "The Name"
        value: "The Value"
  sink:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
----

:leveloffset: 3


[id="insert_header_action_kamelets_source_file"]
== Kamelets source file

link:{kamelets-source-url}insert-header-action.kamelet.yaml[]



:leveloffset: 3
:leveloffset: +1

[id="is-tombstone-filter-action"]
= Is Tombstone Filter Action

Filter messages based on whether the body is present or not.

[id="is_tombstone_filter_action_configuration_options"]
== Configuration Options

The `is-tombstone-filter-action` Kamelet does not specify any configuration option.


[id="is_tombstone_filter_action_dependencies"]

== Dependencies



[id="is_tombstone_filter_action_usage"]
== Usage




:leveloffset: +1

[id="is_tombstone_filter_action_knative_action"]
=== Knative Action

You can use the `is-tombstone-filter-action` Kamelet as an intermediate step in a Knative binding.

.is-tombstone-filter-action-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: is-tombstone-filter-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: timer-source
    properties:
      message: "Hello"
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1
        name: is-tombstone-filter-action
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
----

[id="is_tombstone_filter_action_kafka_action"]
=== Kafka Action

You can use the `is-tombstone-filter-action` Kamelet as an intermediate step in a Kafka binding.

.is-tombstone-filter-action-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: is-tombstone-filter-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: timer-source
    properties:
      message: "Hello"
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1
        name: is-tombstone-filter-action
  sink:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
----

:leveloffset: 3


[id="is_tombstone_filter_action_kamelets_source_file"]
== Kamelets source file

link:{kamelets-source-url}is-tombstone-filter-action.kamelet.yaml[]



:leveloffset: 3
:leveloffset: +1

[id="jira-add-comment-sink"]
= Jira Add Comment Sink

Add a new comment to an existing issue in Jira.

The Kamelet expects the following headers to be set:

* `issueKey` / `ce-issueKey`: as the issue code.

The comment is set in the body of the message.
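
A quick way to try the Kamelet is with the Kamel CLI, following the same pattern as the other Jira sinks in this guide. The command below is a sketch only: the timer message, the issue key `MYP-167`, and the Jira URL and credentials are illustrative placeholders that you must replace with your own values.

[source,bash]
----
kamel bind --name jira-add-comment-sink-binding timer-source?message="The new comment"\&period=60000 --step insert-header-action -p step-0.name=issueKey -p step-0.value=MYP-167 jira-add-comment-sink?jiraUrl="jira url"\&username="username"\&password="password"
----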

[id="jira_add_comment_sink_configuration_options"]
== Configuration Options

The following table summarizes the configuration options available for the `jira-add-comment-sink` Kamelet:

[width="100%",cols="2,^2,3,^2,^2,^3",options="header"]
|===
| Property| Name| Description| Type| Default| Example

| *jiraUrl {empty}* *| Jira URL| The URL of your instance of Jira| `string` | | `"http://my_jira.com:8081"`
| password | Password| The password or the API Token to access Jira| `string` | |
| username | Username| The username to access Jira| `string` | |
| personal-token | Personal Token | The personal access token used to access Jira (as an alternative to username and password)| `string` | |

|===

*{empty}** = Fields marked with an asterisk are *mandatory*.

[id="jira_add_comment_sink_dependencies"]

== Dependencies



[id="jira_add_comment_sink_usage"]
== Usage




:leveloffset: +1

[id="jira_add_comment_sink_knative_sink"]
=== Knative Sink

You can use the `jira-add-comment-sink` Kamelet as a Knative sink by binding it to a Knative object.

.jira-add-comment-sink-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: jira-add-comment-sink-binding
spec:
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1
        name: insert-header-action
      properties:
        name: "issueKey"
        value: "MYP-167"
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: jira-add-comment-sink
    properties:
      jiraUrl: "jira server url"
      username: "username"
      password: "password"
----

[id="jira_add_comment_sink_kafka_sink"]
=== Kafka Sink

You can use the `jira-add-comment-sink` Kamelet as a Kafka sink by binding it to a Kafka topic.

.jira-add-comment-sink-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: jira-add-comment-sink-binding
spec:
  source:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1
        name: insert-header-action
      properties:
        name: "issueKey"
        value: "MYP-167"
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: jira-add-comment-sink
    properties:
      jiraUrl: "jira server url"
      username: "username"
      password: "password"
----

:leveloffset: 3


[id="jira_add_comment_sink_kamelets_source_file"]
== Kamelets source file

link:{kamelets-source-url}jira-add-comment-sink.kamelet.yaml[]

:leveloffset: 3
:leveloffset: +1

[id="jira-add-issue-sink"]
= Jira Add Issue Sink

Add a new issue to Jira.

The Kamelet expects the following headers to be set:

* `projectKey` / `ce-projectKey`: as the Jira project key.

* `issueTypeName` / `ce-issueTypeName`: as the name of the issue type (example: Bug, Enhancement).

* `issueSummary` / `ce-issueSummary`: as the title or summary of the issue.

* `issueAssignee` / `ce-issueAssignee`: as the user assigned to the issue (Optional).

* `issuePriorityName` / `ce-issuePriorityName`: as the priority name of the issue (example: Critical, Blocker, Trivial) (Optional).

* `issueComponents` / `ce-issueComponents`: as a list of strings with valid component names (Optional).

* `issueDescription` / `ce-issueDescription`: as the issue description (Optional).

The issue description can be set from the body of the message or from the `issueDescription`/`ce-issueDescription` header; however, the body takes precedence.
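
If you want to provide the description as a header rather than in the body, you can add one more `insert-header-action` step alongside the ones shown in the bindings below. This is a sketch only; the description text is an illustrative placeholder, and a non-empty message body still takes precedence:

[source,yaml]
----
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1
        name: insert-header-action
      properties:
        name: "issueDescription"
        value: "Illustrative description used when the message body is empty"
----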

[id="jira_add_issue_sink_configuration_options"]
== Configuration Options

The following table summarizes the configuration options available for the `jira-add-issue-sink` Kamelet:

[width="100%",cols="2,^2,3,^2,^2,^3",options="header"]
|===
| Property| Name| Description| Type| Default| Example

| *jiraUrl {empty}* *| Jira URL| The URL of your instance of Jira| `string` | | `"http://my_jira.com:8081"`
| password | Password| The password or the API Token to access Jira| `string` | |
| username | Username| The username to access Jira| `string` | |
| personal-token | Personal Token | The personal access token used to access Jira (as an alternative to username and password)| `string` | |

|===

*{empty}** = Fields marked with an asterisk are *mandatory*.

[id="jira_add_issue_sink_dependencies"]

== Dependencies



[id="jira_add_issue_sink_usage"]
== Usage




:leveloffset: +1

[id="jira_add_issue_sink_knative_sink"]
=== Knative Sink

You can use the `jira-add-issue-sink` Kamelet as a Knative sink by binding it to a Knative object.

.jira-add-issue-sink-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: jira-add-issue-sink-binding
spec:
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1
        name: insert-header-action
      properties:
        name: "projectKey"
        value: "MYP"
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1
        name: insert-header-action
      properties:
        name: "issueTypeName"
        value: "Bug"
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1
        name: insert-header-action
      properties:
        name: "issueSummary"
        value: "The issue summary"
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1
        name: insert-header-action
      properties:
        name: "issuePriorityName"
        value: "Low"
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: jira-add-issue-sink
    properties:
      jiraUrl: "jira server url"
      username: "username"
      password: "password"
----

[id="jira_add_issue_sink_prerequisite"]
==== Prerequisites
Make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="jira_add_issue_sink_procedure_for_using_the_cluster_cli"]
==== Procedure for using the cluster CLI

. Save the `jira-add-issue-sink-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the sink by using the following command:
+
[source,bash,subs="attributes+"]
----
oc apply -f jira-add-issue-sink-binding.yaml
----

[id="jira_add_issue_sink_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Knative

Configure and run the sink by using the following command:

[source,bash,subs="attributes+"]
----
kamel bind --name jira-add-issue-sink-binding timer-source?message="The new comment"\&period=60000 --step insert-header-action -p step-0.name=projectKey -p step-0.value=MYP --step insert-header-action -p step-1.name=issueTypeName -p step-1.value=Bug --step insert-header-action -p step-2.name=issueSummary -p step-2.value="This is a bug" --step insert-header-action -p step-3.name=issuePriorityName -p step-3.value=Low jira-add-issue-sink?jiraUrl="jira url"\&username="username"\&password="password"
----

This command creates the Pipe in the current namespace on the cluster.

[id="jira_add_issue_sink_kafka_sink"]
=== Kafka Sink

You can use the `jira-add-issue-sink` Kamelet as a Kafka sink by binding it to a Kafka topic.

.jira-add-issue-sink-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: jira-add-issue-sink-binding
spec:
  source:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1
        name: insert-header-action
      properties:
        name: "projectKey"
        value: "MYP"
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1
        name: insert-header-action
      properties:
        name: "issueTypeName"
        value: "Bug"
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1
        name: insert-header-action
      properties:
        name: "issueSummary"
        value: "The issue summary"
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1
        name: insert-header-action
      properties:
        name: "issuePriorityName"
        value: "Low"
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: jira-add-issue-sink
    properties:
      jiraUrl: "jira server url"
      username: "username"
      password: "password"
----

[id="jira_add_issue_sink_prerequisites"]
==== Prerequisites

Ensure that you've installed the *AMQ Streams* operator in your OpenShift cluster and created a topic named `my-topic` in the current namespace.
Also make sure that *"Red Hat Integration - Camel K"* is installed in the OpenShift cluster that you are connected to.

[id="jira_add_issue_sink_procedure_for_using_the_cluster_cli"]
==== Procedure for using the cluster CLI

. Save the `jira-add-issue-sink-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the sink by using the following command:
+
[source,bash,subs="attributes+"]
----
oc apply -f jira-add-issue-sink-binding.yaml
----

[id="jira_add_issue_sink_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Knative

Configure and run the sink by using the following command:

[source,bash,subs="attributes+"]
----
kamel bind --name jira-add-issue-sink-binding timer-source?message="The new comment"\&period=60000 --step insert-header-action -p step-0.name=projectKey -p step-0.value=MYP --step insert-header-action -p step-1.name=issueTypeName -p step-1.value=Bug --step insert-header-action -p step-2.name=issueSummary -p step-2.value="This is a bug" --step insert-header-action -p step-3.name=issuePriorityName -p step-3.value=Low jira-add-issue-sink?jiraUrl="jira url"\&username="username"\&password="password"
----

This command creates the Pipe in the current namespace on the cluster.

:leveloffset: 3


[id="jira_add_issue_sink_kamelets_source_file"]
== Kamelets source file

link:{kamelets-source-url}jira-add-issue-sink.kamelet.yaml[]

:leveloffset: 3
:leveloffset: +1

[id="jira-transition-issue-sink"]
= Jira Transition Issue Sink

Sets a new status (transition) on an existing issue in Jira.

The Kamelet expects the following headers to be set:

* `issueKey` / `ce-issueKey`: as the issue unique code.

* `issueTransitionId` / `ce-issueTransitionId`: as the new status (transition) code. You should carefully check the project workflow as each transition may have conditions to check before the transition is made.

The comment of the transition is set in the body of the message.

[id="jira_transition_issue_sink_configuration_options"]
== Configuration Options

The following table summarizes the configuration options available for the `jira-transition-issue-sink` Kamelet:

[width="100%",cols="2,^2,3,^2,^2,^3",options="header"]
|===
| Property| Name| Description| Type| Default| Example

| *jiraUrl {empty}* *| Jira URL| The URL of your instance of Jira| `string` | | `"http://my_jira.com:8081"`
| password | Password| The password or the API Token to access Jira| `string` | |
| username | Username| The username to access Jira| `string` | |
| personal-token | Personal Token | The personal access token used to access Jira (as an alternative to username and password)| `string` | |

|===

*{empty}** = Fields marked with an asterisk are *mandatory*.

[id="jira_transition_issue_sink_dependencies"]

== Dependencies



[id="jira_transition_issue_sink_usage"]
== Usage




:leveloffset: +1

[id="jira_transition_issue_sink_knative_sink"]
=== Knative Sink

You can use the `jira-transition-issue-sink` Kamelet as a Knative sink by binding it to a Knative object.

.jira-transition-issue-sink-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: jira-transition-issue-sink-binding
spec:
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1
        name: insert-header-action
      properties:
        name: "issueTransitionId"
        value: 701
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1
        name: insert-header-action
      properties:
        name: "issueKey"
        value: "MYP-162"
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: jira-transition-issue-sink
    properties:
      jiraUrl: "jira server url"
      username: "username"
      password: "password"
----

[id="jira_transition_issue_sink_prerequisite"]
==== Prerequisites
Make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="jira_transition_issue_sink_procedure_for_using_the_cluster_cli"]
==== Procedure for using the cluster CLI

. Save the `jira-transition-issue-sink-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the sink by using the following command:
+
[source,bash,subs="attributes+"]
----
oc apply -f jira-transition-issue-sink-binding.yaml
----

[id="jira_transition_issue_sink_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Knative

Configure and run the sink by using the following command:

[source,bash,subs="attributes+"]
----
kamel bind --name jira-transition-issue-sink-binding timer-source?message="The new comment 123"\&period=60000 --step insert-header-action -p step-0.name=issueKey -p step-0.value=MYP-170 --step insert-header-action -p step-1.name=issueTransitionId -p step-1.value=5 jira-transition-issue-sink?jiraUrl="jira url"\&username="username"\&password="password"
----

This command creates the Pipe in the current namespace on the cluster.

[id="jira_transition_issue_sink_kafka_sink"]
=== Kafka Sink

You can use the `jira-transition-issue-sink` Kamelet as a Kafka sink by binding it to a Kafka topic.

.jira-transition-issue-sink-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: jira-transition-issue-sink-binding
spec:
  source:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1
        name: insert-header-action
      properties:
        name: "issueTransitionId"
        value: 701
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1
        name: insert-header-action
      properties:
        name: "issueKey"
        value: "MYP-162"
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: jira-transition-issue-sink
    properties:
      jiraUrl: "jira server url"
      username: "username"
      password: "password"
----

[id="jira_transition_issue_sink_prerequisites"]
==== Prerequisites

Ensure that you've installed the *AMQ Streams* operator in your OpenShift cluster and created a topic named `my-topic` in the current namespace.
Also make sure that *"Red Hat Integration - Camel K"* is installed in the OpenShift cluster that you are connected to.

[id="jira_transition_issue_sink_procedure_for_using_the_cluster_cli"]
==== Procedure for using the cluster CLI

. Save the `jira-transition-issue-sink-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the sink by using the following command:
+
[source,bash,subs="attributes+"]
----
oc apply -f jira-transition-issue-sink-binding.yaml
----

[id="jira_transition_issue_sink_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Knative

Configure and run the sink by using the following command:

[source,bash,subs="attributes+"]
----
kamel bind --name jira-transition-issue-sink-binding timer-source?message="The new comment 123"\&period=60000 --step insert-header-action -p step-0.name=issueKey -p step-0.value=MYP-170 --step insert-header-action -p step-1.name=issueTransitionId -p step-1.value=5 jira-transition-issue-sink?jiraUrl="jira url"\&username="username"\&password="password"
----

This command creates the Pipe in the current namespace on the cluster.

:leveloffset: 3


[id="jira_transition_issue_sink_kamelets_source_file"]
== Kamelets source file

link:{kamelets-source-url}jira-transition-issue-sink.kamelet.yaml[]

:leveloffset: 3
:leveloffset: +1

[id="jira-update-issue-sink"]
= Jira Update Issue Sink

Update fields of an existing issue in Jira. The Kamelet expects the following headers to be set:

* `issueKey` / `ce-issueKey`: as the issue code in Jira.

* `issueTypeName` / `ce-issueTypeName`: as the name of the issue type (example: Bug, Enhancement).

* `issueSummary` / `ce-issueSummary`: as the title or summary of the issue.

* `issueAssignee` / `ce-issueAssignee`: as the user assigned to the issue (Optional).

* `issuePriorityName` / `ce-issuePriorityName`: as the priority name of the issue (example: Critical, Blocker, Trivial) (Optional).

* `issueComponents` / `ce-issueComponents`: as a list of strings with valid component names (Optional).

* `issueDescription` / `ce-issueDescription`: as the issue description (Optional).

The issue description can be set from the body of the message or from the `issueDescription`/`ce-issueDescription` header; however, the body takes precedence.

[id="jira_update_issue_sink_configuration_options"]
== Configuration Options

The following table summarizes the configuration options available for the `jira-update-issue-sink` Kamelet:

[width="100%",cols="2,^2,3,^2,^2,^3",options="header"]
|===
| Property| Name| Description| Type| Default| Example

| *jiraUrl {empty}* *| Jira URL| The URL of your instance of Jira| `string` | | `"http://my_jira.com:8081"`
| password | Password| The password or the API Token to access Jira| `string` | |
| username | Username| The username to access Jira| `string` | |
| personal-token | Personal Token | The personal access token used to access Jira (as an alternative to username and password)| `string` | |

|===

*{empty}** = Fields marked with an asterisk are *mandatory*.

[id="jira_update_issue_sink_dependencies"]

== Dependencies



[id="jira_update_issue_sink_usage"]
== Usage




:leveloffset: +1

[id="jira_update_issue_sink_knative_sink"]
=== Knative Sink

You can use the `jira-update-issue-sink` Kamelet as a Knative sink by binding it to a Knative object.

.jira-update-issue-sink-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: jira-update-issue-sink-binding
spec:
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1
        name: insert-header-action
      properties:
        name: "issueKey"
        value: "MYP-163"
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1
        name: insert-header-action
      properties:
        name: "issueTypeName"
        value: "Bug"
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1
        name: insert-header-action
      properties:
        name: "issueSummary"
        value: "The issue summary"
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1
        name: insert-header-action
      properties:
        name: "issuePriorityName"
        value: "Low"
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: jira-update-issue-sink
    properties:
      jiraUrl: "jira server url"
      username: "username"
      password: "password"
----

[id="jira_update_issue_sink_prerequisite"]
==== Prerequisites
Make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="jira_update_issue_sink_procedure_for_using_the_cluster_cli"]
==== Procedure for using the cluster CLI

. Save the `jira-update-issue-sink-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the sink by using the following command:
+
[source,bash,subs="attributes+"]
----
oc apply -f jira-update-issue-sink-binding.yaml
----

[id="jira_update_issue_sink_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Knative

Configure and run the sink by using the following command:

[source,bash,subs="attributes+"]
----
kamel bind --name jira-update-issue-sink-binding timer-source?message="The new comment"\&period=60000 --step insert-header-action -p step-0.name=issueKey -p step-0.value=MYP-170 --step insert-header-action -p step-1.name=issueTypeName -p step-1.value=Story --step insert-header-action -p step-2.name=issueSummary -p step-2.value="This is a story 123" --step insert-header-action -p step-3.name=issuePriorityName -p step-3.value=Highest jira-update-issue-sink?jiraUrl="jira url"\&username="username"\&password="password"
----

This command creates the Pipe in the current namespace on the cluster.

[id="jira_update_issue_sink_kafka_sink"]
=== Kafka Sink

You can use the `jira-update-issue-sink` Kamelet as a Kafka sink by binding it to a Kafka topic.

.jira-update-issue-sink-binding.yaml
[source,yaml,subs="attributes+"]
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: jira-update-issue-sink-binding
spec:
  source:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1
        name: insert-header-action
      properties:
        name: "issueKey"
        value: "MYP-163"
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1
        name: insert-header-action
      properties:
        name: "issueTypeName"
        value: "Bug"
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1
        name: insert-header-action
      properties:
        name: "issueSummary"
        value: "The issue summary"
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1
        name: insert-header-action
      properties:
        name: "issuePriorityName"
        value: "Low"
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: jira-update-issue-sink
    properties:
      jiraUrl: "jira server url"
      username: "username"
      password: "password"

[id="jira_update_issue_sink_prerequisites"]
==== Prerequisites

Ensure that you've installed the *AMQ Streams* operator in your OpenShift cluster and created a topic named `my-topic` in the current namespace.
Also make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="jira_update_issue_sink_procedure_for_using_the_cluster_cli"]
==== Procedure for using the cluster CLI

. Save the `jira-update-issue-sink-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the sink by using the following command:
+
[source,bash,subs="attributes+"]

oc apply -f jira-update-issue-sink-binding.yaml

[id="jira_update_issue_sink_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Kafka

Configure and run the sink by using the following command:

[source,bash,subs="attributes+"]

kamel bind --name jira-update-issue-sink-binding timer-source?message="The new comment"\&period=60000 --step insert-header-action -p step-0.name=issueKey -p step-0.value=MYP-170 --step insert-header-action -p step-1.name=issueTypeName -p step-1.value=Story --step insert-header-action -p step-2.name=issueSummary -p step-2.value="This is a story 123" --step insert-header-action -p step-3.name=issuePriorityName -p step-3.value=Highest jira-update-issue-sink?jiraUrl="jira url"\&username="username"\&password="password"

This command creates the KameletBinding in the current namespace on the cluster.

:leveloffset: 3


[id="jira_update_issue_sink_kamelets_source_file"]
== Kamelets source file

link:{kamelets-source-url}jira-update-issue-sink.kamelet.yaml[]

:leveloffset: 3
:leveloffset: +1

[id="jira-source"]
= Jira Source

Receive notifications about new issues from Jira.

[id="jira_source_configuration_options"]
== Configuration Options

The following table summarizes the configuration options available for the `jira-source` Kamelet:

[width="100%",cols="2,^2,3,^2,^2,^3",options="header"]
|===
| Property| Name| Description| Type| Default| Example

| *jiraUrl {empty}* *| Jira URL| The URL of your instance of Jira| `string` | | `"http://my_jira.com:8081"`

| password | Password| The password to access Jira| `string` | |
| username | Username| The username to access Jira| `string` | |
| jql| JQL| A query to filter issues| `string` | | `"project=MyProject"`
| personal-token | Personal Token | The personal access token to authenticate to Jira | `string` | |

|===

*{empty}** = Fields marked with an asterisk are *mandatory*.
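
For example, you can combine `jiraUrl` with a `jql` filter so that the source emits only the issues you are interested in. The following fragment is a minimal sketch of the `source` part of a Pipe; the Jira URL, the credentials, and the project key `MYP` are placeholder values.

[source,yaml,subs="attributes+"]

  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: jira-source
    properties:
      jiraUrl: "http://my_jira.com:8081"
      username: "The Username"
      password: "The Password"
      jql: "project=MYP AND status=Open"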


[id="jira_source_dependencies"]

== Dependencies



[id="jira_source_usage"]
== Usage




:leveloffset: +1

[id="jira_source_knative_source"]
=== Knative Source

You can use the `jira-source` Kamelet as a Knative source by binding it to a Knative object.

.jira-source-binding.yaml
[source,yaml,subs="attributes+"]
Copy to Clipboard

apiVersion: camel.apache.org/v1 kind: Pipe metadata: name: jira-source-binding spec: source: ref: ref: Kamelet apiVersion: camel.apache.org/v1 name: jira-source properties: jiraUrl: "http://my_jira.com:8081" password: "The Password" username: "The Username" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel

[id="jira_source_prerequisite"]
==== Prerequisites
Make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="jira_source_procedure_for_using_the_cluster_cli"]
==== Procedure for using the cluster CLI

. Save the `jira-source-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the source by using the following command:
+
[source,bash,subs="attributes+"]
Copy to Clipboard

oc apply -f jira-source-binding.yaml

[id="jira_source_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Knative

Configure and run the source by using the following command:

[source,bash,subs="attributes+"]
Copy to Clipboard

kamel bind jira-source -p "source.jiraUrl=http://my_jira.com:8081" -p "source.password=The Password" -p "source.username=The Username" channel:mychannel

This command creates the KameletBinding in the current namespace on the cluster.

:leveloffset: 3

:leveloffset: +1

[id="jira_source_knative_source"]
=== Knative Source

You can use the `jira-source` Kamelet as a Knative source by binding it to a Knative object.

.jira-source-binding.yaml
[source,yaml,subs="attributes+"]
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: jira-source-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: jira-source
    properties:
      jiraUrl: "http://my_jira.com:8081"
      password: "The Password"
      username: "The Username"
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel

[id="jira_source_prerequisite"]
==== Prerequisites
Make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="jira_source_procedure_for_using_the_cluster_cli"]
==== Procedure for using the cluster CLI

. Save the `jira-source-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the source by using the following command:
+
[source,bash,subs="attributes+"]

oc apply -f jira-source-binding.yaml

[id="jira_source_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Knative

Configure and run the source by using the following command:

[source,bash,subs="attributes+"]

kamel bind jira-source -p "source.jiraUrl=http://my_jira.com:8081" -p "source.password=The Password" -p "source.username=The Username" channel:mychannel

This command creates the KameletBinding in the current namespace on the cluster.

[id="jira_source_kafka_source"]
=== Kafka Source

You can use the `jira-source` Kamelet as a Kafka source by binding it to a Kafka topic.

.jira-source-binding.yaml
[source,yaml,subs="attributes+"]
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: jira-source-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: jira-source
    properties:
      jiraUrl: "http://my_jira.com:8081"
      password: "The Password"
      username: "The Username"
  sink:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic

[id="jira_source_prerequisites"]
==== Prerequisites

Ensure that you've installed the *AMQ Streams* operator in your OpenShift cluster and created a topic named `my-topic` in the current namespace.
Also make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="jira_source_procedure_for_using_the_cluster_cli"]
==== Procedure for using the cluster CLI

. Save the `jira-source-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the source by using the following command:
+
[source,bash,subs="attributes+"]

oc apply -f jira-source-binding.yaml

[id="jira_source_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Kafka

Configure and run the source by using the following command:

[source,bash,subs="attributes+"]

kamel bind jira-source -p "source.jiraUrl=http://my_jira.com:8081" -p "source.password=The Password" -p "source.username=The Username" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic

This command creates the KameletBinding in the current namespace on the cluster.

:leveloffset: 3


[id="jira_source_kamelets_source_file"]
== Kamelets source file

link:{kamelets-source-url}jira-source.kamelet.yaml[]



:leveloffset: 3
:leveloffset: +1

[id="jms-sink"]
= JMS - AMQP 1.0 Kamelet Sink

A Kamelet that can produce events to any AMQP 1.0 compliant message broker by using the Apache Qpid JMS client.

[id="jms_amqp_10_sink_configuration_options"]
== Configuration Options

The following table summarizes the configuration options available for the `jms-amqp-10-sink` Kamelet:

[width="100%",cols="2,^2,3,^2,^2,^3",options="header"]
|===
| Property| Name| Description| Type| Default| Example

| *destinationName {empty}* *| Destination Name| The JMS destination name| `string` | |
| *remoteURI {empty}* *| Broker URL| The JMS URL| `string` | | `"amqp://my-host:31616"`

| destinationType| Destination Type| The JMS destination type (queue or topic)| `string`| `"queue"`|
|===

*{empty}** = Fields marked with an asterisk are *mandatory*.
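
By default, the Kamelet produces to a queue. To publish to an AMQP topic instead, set `destinationType` to `topic`. The following `sink` fragment is a minimal sketch; the destination name and broker URL are placeholder values.

[source,yaml,subs="attributes+"]

  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: jms-amqp-10-sink
    properties:
      destinationName: "my-topic-name"
      destinationType: "topic"
      remoteURI: "amqp://my-host:31616"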


[id="jms_amqp_10_sink_dependencies"]

== Dependencies



[id="jms_amqp_10_sink_usage"]
== Usage




:leveloffset: +1

[id="jms_amqp_10_sink_knative_sink"]
=== Knative Sink

You can use the `jms-amqp-10-sink` Kamelet as a Knative sink by binding it to a Knative object.

.jms-amqp-10-sink-binding.yaml
[source,yaml,subs="attributes+"]
Copy to Clipboard

apiVersion: camel.apache.org/v1 kind: Pipe metadata: name: jms-amqp-10-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1 name: jms-amqp-10-sink properties: destinationName: "The Destination Name" remoteURI: "amqp://my-host:31616"

[id="jms_amqp_10_sink_prerequisite"]
==== Prerequisites
Make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="jms_amqp_10_sink_procedure_for_using_the_cluster_cli"]
==== Procedure for using the cluster CLI

. Save the `jms-amqp-10-sink-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the sink by using the following command:
+
[source,bash,subs="attributes+"]
Copy to Clipboard

oc apply -f jms-amqp-10-sink-binding.yaml

[id="jms_amqp_10_sink_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Knative

Configure and run the sink by using the following command:

[source,bash,subs="attributes+"]
Copy to Clipboard

kamel bind channel:mychannel jms-amqp-10-sink -p "sink.destinationName=The Destination Name" -p "sink.remoteURI=amqp://my-host:31616"

This command creates the KameletBinding in the current namespace on the cluster.

:leveloffset: 3

:leveloffset: +1

[id="jms_amqp_10_sink_knative_sink"]
=== Knative Sink

You can use the `jms-amqp-10-sink` Kamelet as a Knative sink by binding it to a Knative object.

.jms-amqp-10-sink-binding.yaml
[source,yaml,subs="attributes+"]
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: jms-amqp-10-sink-binding
spec:
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: jms-amqp-10-sink
    properties:
      destinationName: "The Destination Name"
      remoteURI: "amqp://my-host:31616"

[id="jms_amqp_10_sink_prerequisite"]
==== Prerequisites
Make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="jms_amqp_10_sink_procedure_for_using_the_cluster_cli"]
==== Procedure for using the cluster CLI

. Save the `jms-amqp-10-sink-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the sink by using the following command:
+
[source,bash,subs="attributes+"]

oc apply -f jms-amqp-10-sink-binding.yaml

[id="jms_amqp_10_sink_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Knative

Configure and run the sink by using the following command:

[source,bash,subs="attributes+"]

kamel bind channel:mychannel jms-amqp-10-sink -p "sink.destinationName=The Destination Name" -p "sink.remoteURI=amqp://my-host:31616"

This command creates the KameletBinding in the current namespace on the cluster.

[id="jms_amqp_10_sink_kafka_sink"]
=== Kafka Sink

You can use the `jms-amqp-10-sink` Kamelet as a Kafka sink by binding it to a Kafka topic.

.jms-amqp-10-sink-binding.yaml
[source,yaml,subs="attributes+"]
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: jms-amqp-10-sink-binding
spec:
  source:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: jms-amqp-10-sink
    properties:
      destinationName: "The Destination Name"
      remoteURI: "amqp://my-host:31616"

[id="jms_amqp_10_sink_prerequisites"]
==== Prerequisites

Ensure that you've installed the *AMQ Streams* operator in your OpenShift cluster and created a topic named `my-topic` in the current namespace.
Also make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="jms_amqp_10_sink_procedure_for_using_the_cluster_cli"]
==== Procedure for using the cluster CLI

. Save the `jms-amqp-10-sink-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the sink by using the following command:
+
[source,bash,subs="attributes+"]

oc apply -f jms-amqp-10-sink-binding.yaml

[id="jms_amqp_10_sink_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Kafka

Configure and run the sink by using the following command:

[source,bash,subs="attributes+"]

kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic jms-amqp-10-sink -p "sink.destinationName=The Destination Name" -p "sink.remoteURI=amqp://my-host:31616"

This command creates the KameletBinding in the current namespace on the cluster.

:leveloffset: 3


[id="jms_amqp_10_sink_kamelets_source_file"]
== Kamelets source file

link:{kamelets-source-url}jms-amqp-10-sink.kamelet.yaml[]



:leveloffset: 3
:leveloffset: +1

[id="jms-source"]
= JMS - AMQP 1.0 Source

A Kamelet that can consume events from any AMQP 1.0 compliant message broker by using the Apache Qpid JMS client.

[id="jms_amqp_10_source_configuration_options"]
== Configuration Options

The following table summarizes the configuration options available for the `jms-amqp-10-source` Kamelet:

[width="100%",cols="2,^2,3,^2,^2,^3",options="header"]
|===
| Property| Name| Description| Type| Default| Example

| *destinationName {empty}* *| Destination Name| The JMS destination name| `string` | |
| *remoteURI {empty}* *| Broker URL| The JMS URL| `string` | | `"amqp://my-host:31616"`

| destinationType| Destination Type| The JMS destination type (queue or topic)| `string`| `"queue"`|
|===

*{empty}** = Fields marked with an asterisk are *mandatory*.


[id="jms_amqp_10_source_dependencies"]

== Dependencies



[id="jms_amqp_10_source_usage"]
== Usage




:leveloffset: +1

[id="jms_amqp_10_source_knative_source"]
=== Knative Source

You can use the `jms-amqp-10-source` Kamelet as a Knative source by binding it to a Knative object.

.jms-amqp-10-source-binding.yaml
[source,yaml,subs="attributes+"]
Copy to Clipboard

apiVersion: camel.apache.org/v1 kind: Pipe metadata: name: jms-amqp-10-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1 name: jms-amqp-10-source properties: destinationName: "The Destination Name" remoteURI: "amqp://my-host:31616" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel

[id="jms_amqp_10_source_prerequisite"]
==== Prerequisites
Make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="jms_amqp_10_source_procedure_for_using_the_cluster_cli_kafka"]
==== Procedure for using the cluster CLI with Knative

. Save the `jms-amqp-10-source-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the source by using the following command:
+
[source,bash,subs="attributes+"]
Copy to Clipboard

oc apply -f jms-amqp-10-source-binding.yaml

[id="jms_amqp_10_source_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Knative

Configure and run the source by using the following command:

[source,bash,subs="attributes+"]
Copy to Clipboard

kamel bind jms-amqp-10-source -p "source.destinationName=The Destination Name" -p "source.remoteURI=amqp://my-host:31616" channel:mychannel

This command creates the KameletBinding in the current namespace on the cluster.

:leveloffset: 3

:leveloffset: +1

[id="jms_amqp_10_source_knative_source"]
=== Knative Source

You can use the `jms-amqp-10-source` Kamelet as a Knative source by binding it to a Knative object.

.jms-amqp-10-source-binding.yaml
[source,yaml,subs="attributes+"]
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: jms-amqp-10-source-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: jms-amqp-10-source
    properties:
      destinationName: "The Destination Name"
      remoteURI: "amqp://my-host:31616"
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel

[id="jms_amqp_10_source_prerequisite"]
==== Prerequisites
Make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="jms_amqp_10_source_procedure_for_using_the_cluster_cli_kafka"]
==== Procedure for using the cluster CLI with Knative

. Save the `jms-amqp-10-source-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the source by using the following command:
+
[source,bash,subs="attributes+"]

oc apply -f jms-amqp-10-source-binding.yaml

[id="jms_amqp_10_source_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Knative

Configure and run the source by using the following command:

[source,bash,subs="attributes+"]

kamel bind jms-amqp-10-source -p "source.destinationName=The Destination Name" -p "source.remoteURI=amqp://my-host:31616" channel:mychannel

This command creates the KameletBinding in the current namespace on the cluster.

[id="jms_amqp_10_source_kafka_source"]
=== Kafka Source

You can use the `jms-amqp-10-source` Kamelet as a Kafka source by binding it to a Kafka topic.

.jms-amqp-10-source-binding.yaml
[source,yaml,subs="attributes+"]
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: jms-amqp-10-source-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: jms-amqp-10-source
    properties:
      destinationName: "The Destination Name"
      remoteURI: "amqp://my-host:31616"
  sink:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic

[id="jms_amqp_10_source_prerequisites"]
==== Prerequisites

Ensure that you've installed the *AMQ Streams* operator in your OpenShift cluster and created a topic named `my-topic` in the current namespace.
Also make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="jms_amqp_10_source_procedure_for_using_the_cluster_cli_kafka"]
==== Procedure for using the cluster CLI with Kafka

. Save the `jms-amqp-10-source-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the source by using the following command:
+
[source,bash,subs="attributes+"]

oc apply -f jms-amqp-10-source-binding.yaml

[id="jms_amqp_10_source_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Kafka

Configure and run the source by using the following command:

[source,bash,subs="attributes+"]

kamel bind jms-amqp-10-source -p "source.destinationName=The Destination Name" -p "source.remoteURI=amqp://my-host:31616" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic

This command creates the KameletBinding in the current namespace on the cluster.

:leveloffset: 3


[id="jms_amqp_10_source_kamelets_source_file"]
== Kamelets source file

link:{kamelets-source-url}jms-amqp-10-source.kamelet.yaml[]



:leveloffset: 3
:leveloffset: +1

[id="jms-ibm-mq-sink"]
= JMS - IBM MQ Kamelet Sink

A Kamelet that can produce events to an IBM MQ message queue using JMS.

[id="jms_ibm_mq_sink_configuration_options"]
== Configuration Options

The following table summarizes the configuration options available for the `jms-ibm-mq-sink` Kamelet:

[width="100%",cols="2,^2,3,^2,^2,^3",options="header"]
|===
| Property| Name| Description| Type| Default| Example

| *channel {empty}* *| IBM MQ Channel| Name of the IBM MQ Channel| `string` | |
| *destinationName {empty}* *| Destination Name| The destination name| `string` | |
| *queueManager {empty}* *| IBM MQ Queue Manager| Name of the IBM MQ Queue Manager| `string` | |
| *serverName {empty}* *| IBM MQ Server name| IBM MQ Server name or address| `string` | |
| *serverPort {empty}* *| IBM MQ Server Port| IBM MQ Server port| integer| `1414`|

| password | Password| Password to authenticate to IBM MQ server| `string` | |
| username | Username| Username to authenticate to IBM MQ server| `string` | |
| clientId| IBM MQ Client ID| Name of the IBM MQ Client ID| `string` | |
| destinationType| Destination Type| The JMS destination type (queue or topic)| string| `"queue"`|

|===

*{empty}** = Fields marked with an asterisk are *mandatory*.


[id="jms_ibm_mq_sink_dependencies"]

== Dependencies



[id="jms_ibm_mq_sink_usage"]
== Usage




:leveloffset: +1

[id="jms_ibm_mq_sink_knative_sink"]
=== Knative Sink

You can use the `jms-ibm-mq-sink` Kamelet as a Knative sink by binding it to a Knative object.

.jms-ibm-mq-sink-binding.yaml
[source,yaml,subs="attributes+"]
Copy to Clipboard

apiVersion: camel.apache.org/v1 kind: Pipe metadata: name: jms-ibm-mq-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel properties: serverName: "10.103.41.245" serverPort: "1414" destinationType: "queue" destinationName: "DEV.QUEUE.1" queueManager:: QM1 チャネル:DEV.APP.SVRCONN ユーザー名:アプリパスワード:passw0rd

[id="jms_ibm_mq_sink_prerequisite"]
==== Prerequisites
Make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="jms_ibm_mq_sink_procedure_for_using_the_cluster_cli_kafka"]
==== Procedure for using the cluster CLI with Knative

. Save the `jms-ibm-mq-sink-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the sink by using the following command:
+
[source,bash,subs="attributes+"]
Copy to Clipboard

oc apply -f jms-ibm-mq-sink-binding.yaml

[id="jms_ibm_mq_sink_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Knative

Configure and run the sink by using the following command:

[source,bash,subs="attributes+"]
Copy to Clipboard

kamel bind --name jms-ibm-mq-sink-binding timer-source?message="Hello IBM MQ!" 'jms-ibm-mq-sink?serverName=10.103.41.245&serverPort=1414&destinationType=queue&destinationName=DEV.QUEUE.1&queueManager=QM1&channel=DEV.APP.SVRCONN&username=app&password=passw0rd'

This command creates the KameletBinding in the current namespace on the cluster.

:leveloffset: 3

:leveloffset: +1

[id="jms_ibm_mq_sink_knative_sink"]
=== Knative Sink

You can use the `jms-ibm-mq-sink` Kamelet as a Knative sink by binding it to a Knative object.

.jms-ibm-mq-sink-binding.yaml
[source,yaml,subs="attributes+"]
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: jms-ibm-mq-sink-binding
spec:
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: jms-ibm-mq-sink
    properties:
      serverName: "10.103.41.245"
      serverPort: "1414"
      destinationType: "queue"
      destinationName: "DEV.QUEUE.1"
      queueManager: "QM1"
      channel: "DEV.APP.SVRCONN"
      username: "app"
      password: "passw0rd"

[id="jms_ibm_mq_sink_prerequisite"]
==== Prerequisites
Make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="jms_ibm_mq_sink_procedure_for_using_the_cluster_cli_kafka"]
==== Procedure for using the cluster CLI with Knative

. Save the `jms-ibm-mq-sink-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the sink by using the following command:
+
[source,bash,subs="attributes+"]

oc apply -f jms-ibm-mq-sink-binding.yaml

[id="jms_ibm_mq_sink_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Knative

Configure and run the sink by using the following command:

[source,bash,subs="attributes+"]

kamel bind --name jms-ibm-mq-sink-binding timer-source?message="Hello IBM MQ!" 'jms-ibm-mq-sink?serverName=10.103.41.245&serverPort=1414&destinationType=queue&destinationName=DEV.QUEUE.1&queueManager=QM1&channel=DEV.APP.SVRCONN&username=app&password=passw0rd'

This command creates the KameletBinding in the current namespace on the cluster.

[id="jms_ibm_mq_sink_kafka_sink"]
=== Kafka Sink

You can use the `jms-ibm-mq-sink` Kamelet as a Kafka sink by binding it to a Kafka topic.

.jms-ibm-mq-sink-binding.yaml
[source,yaml,subs="attributes+"]
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: jms-ibm-mq-sink-binding
spec:
  source:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: jms-ibm-mq-sink
    properties:
      serverName: "10.103.41.245"
      serverPort: "1414"
      destinationType: "queue"
      destinationName: "DEV.QUEUE.1"
      queueManager: "QM1"
      channel: "DEV.APP.SVRCONN"
      username: "app"
      password: "passw0rd"

[id="jms_ibm_mq_sink_prerequisites"]
==== Prerequisites

Ensure that you've installed the *AMQ Streams* operator in your OpenShift cluster and created a topic named `my-topic` in the current namespace.
Also make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="jms_ibm_mq_sink_procedure_for_using_the_cluster_cli_kafka"]
==== Procedure for using the cluster CLI with Kafka

. Save the `jms-ibm-mq-sink-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the sink by using the following command:
+
[source,bash,subs="attributes+"]

oc apply -f jms-ibm-mq-sink-binding.yaml

[id="jms_ibm_mq_sink_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Kafka

Configure and run the sink by using the following command:

[source,bash,subs="attributes+"]

kamel bind --name jms-ibm-mq-sink-binding timer-source?message="Hello IBM MQ!" 'jms-ibm-mq-sink?serverName=10.103.41.245&serverPort=1414&destinationType=queue&destinationName=DEV.QUEUE.1&queueManager=QM1&channel=DEV.APP.SVRCONN&username=app&password=passw0rd'

This command creates the KameletBinding in the current namespace on the cluster.

:leveloffset: 3


[id="jms_ibm_mq_sink_kamelets_source_file"]
== Kamelets source file

link:{kamelets-source-url}jms-ibm-mq-sink.kamelet.yaml[]



:leveloffset: 3
:leveloffset: +1

[id="jms-ibm-mq-source"]
= JMS - IBM MQ Source

A Kamelet that can read events from an IBM MQ message queue using JMS.

[id="jms_ibm_mq_source_configuration_options"]
== Configuration Options

The following table summarizes the configuration options available for the `jms-ibm-mq-source` Kamelet:

[width="100%",cols="2,^2,3,^2,^2,^3",options="header"]
|===
| Property| Name| Description| Type| Default| Example

| *channel {empty}* *| IBM MQ Channel| Name of the IBM MQ Channel| `string` | |
| *destinationName {empty}* *| Destination Name| The destination name| `string` | |
| *queueManager {empty}* *| IBM MQ Queue Manager| Name of the IBM MQ Queue Manager| `string` | |
| *serverName {empty}* *| IBM MQ Server name| IBM MQ Server name or address| `string` | |
| *serverPort {empty}* *| IBM MQ Server Port| IBM MQ Server port| integer| `1414`|

| password | Password| Password to authenticate to IBM MQ server| `string` | |
| username | Username| Username to authenticate to IBM MQ server| `string` | |
| clientId| IBM MQ Client ID| Name of the IBM MQ Client ID| `string` | |
| destinationType| Destination Type| The JMS destination type (queue or topic)| string| `"queue"`|

| sslCipherSuite | CipherSuite | CipherSuite to use for enabling TLS | string ||

|===

*{empty}** = Fields marked with an asterisk are *mandatory*.
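
To connect over TLS, set `sslCipherSuite` to a cipher suite that is enabled on the IBM MQ channel. The following `source` fragment is a minimal sketch; the server address, credentials, and cipher suite name are placeholder values that depend on your queue manager configuration.

[source,yaml,subs="attributes+"]

  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: jms-ibm-mq-source
    properties:
      serverName: "10.103.41.245"
      serverPort: "1414"
      queueManager: "QM1"
      channel: "DEV.APP.SVRCONN"
      destinationName: "DEV.QUEUE.1"
      username: "app"
      password: "passw0rd"
      sslCipherSuite: "TLS_RSA_WITH_AES_128_CBC_SHA256"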


[id="jms_ibm_mq_source_dependencies"]

== Dependencies



[id="jms_ibm_mq_source_usage"]
== Usage




:leveloffset: +1

[id="jms_ibm_mq_source_knative_source"]
=== Knative Source

You can use the `jms-ibm-mq-source` Kamelet as a Knative source by binding it to a Knative object.

.jms-ibm-mq-source-binding.yaml
[source,yaml,subs="attributes+"]
Copy to Clipboard

apiVersion: camel.apache.org/v1 kind: Pipe metadata: name: jms-ibm-mq-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1 name: jms-ibm-mq-source properties: serverName: "10.103.41.245" serverPort: "1414" destinationType: "queue" destinationName: "DEV.QUEUE.1" queueManager:: QM1 チャネル:DEV.APP.SVRCONN ユーザー名:アプリパスワード:passw0rd sink: ref: kind Channel apiVersion: messaging.knative.dev/v1 name: mychannel

[id="jms_ibm_mq_source_prerequisite"]
==== Prerequisites
Make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="jms_ibm_mq_source_procedure_for_using_the_cluster_cli"]
==== Procedure for using the cluster CLI

. Save the `jms-ibm-mq-source-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the source by using the following command:
+
[source,bash,subs="attributes+"]
Copy to Clipboard

oc apply -f jms-ibm-mq-source-binding.yaml

[id="jms_ibm_mq_source_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Knative

Configure and run the source by using the following command:

[source,bash,subs="attributes+"]
Copy to Clipboard

kamel bind --name jms-ibm-mq-source-binding 'jms-ibm-mq-source?serverName=10.103.41.245&serverPort=1414&destinationType=DEV.QUEUE.1&queueManager=QM1&channel=DEV.APP.SVRCONN&username=app&password=passw0rd' channel:mychannel

This command creates the KameletBinding in the current namespace on the cluster.

:leveloffset: 3

:leveloffset: +1

[id="jms_ibm_mq_source_knative_source"]
=== Knative Source

You can use the `jms-ibm-mq-source` Kamelet as a Knative source by binding it to a Knative object.

.jms-ibm-mq-source-binding.yaml
[source,yaml,subs="attributes+"]
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: jms-ibm-mq-source-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: jms-ibm-mq-source
    properties:
      serverName: "10.103.41.245"
      serverPort: "1414"
      destinationType: "queue"
      destinationName: "DEV.QUEUE.1"
      queueManager: "QM1"
      channel: "DEV.APP.SVRCONN"
      username: "app"
      password: "passw0rd"
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel

[id="jms_ibm_mq_source_prerequisite"]
==== Prerequisites
Make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="jms_ibm_mq_source_procedure_for_using_the_cluster_cli"]
==== Procedure for using the cluster CLI

. Save the `jms-ibm-mq-source-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the source by using the following command:
+
[source,bash,subs="attributes+"]

oc apply -f jms-ibm-mq-source-binding.yaml

[id="jms_ibm_mq_source_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Knative

Configure and run the source by using the following command:

[source,bash,subs="attributes+"]

kamel bind --name jms-ibm-mq-source-binding 'jms-ibm-mq-source?serverName=10.103.41.245&serverPort=1414&destinationType=queue&destinationName=DEV.QUEUE.1&queueManager=QM1&channel=DEV.APP.SVRCONN&username=app&password=passw0rd' channel:mychannel

This command creates the KameletBinding in the current namespace on the cluster.

[id="jms_ibm_mq_source_kafka_source"]
=== Kafka Source

You can use the `jms-ibm-mq-source` Kamelet as a Kafka source by binding it to a Kafka topic.

.jms-ibm-mq-source-binding.yaml
[source,yaml,subs="attributes+"]
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: jms-ibm-mq-source-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: jms-ibm-mq-source
    properties:
      serverName: "10.103.41.245"
      serverPort: "1414"
      destinationType: "queue"
      destinationName: "DEV.QUEUE.1"
      queueManager: "QM1"
      channel: "DEV.APP.SVRCONN"
      username: "app"
      password: "passw0rd"
  sink:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic

[id="jms_ibm_mq_source_prerequisites"]
==== Prerequisites

Ensure that you've installed the *AMQ Streams* operator in your OpenShift cluster and created a topic named `my-topic` in the current namespace.
Also make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="jms_ibm_mq_source_procedure_for_using_the_cluster_cli_kafka"]
==== Procedure for using the cluster CLI with Kafka

. Save the `jms-ibm-mq-source-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the source by using the following command:
+
[source,bash,subs="attributes+"]

oc apply -f jms-ibm-mq-source-binding.yaml

[id="jms_ibm_mq_source_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Kafka

Configure and run the source by using the following command:

[source,bash,subs="attributes+"]

kamel bind --name jms-ibm-mq-source-binding 'jms-ibm-mq-source?serverName=10.103.41.245&serverPort=1414&destinationType=queue&destinationName=DEV.QUEUE.1&queueManager=QM1&channel=DEV.APP.SVRCONN&username=app&password=passw0rd' kafka.strimzi.io/v1beta1:KafkaTopic:my-topic

This command creates the KameletBinding in the current namespace on the cluster.

:leveloffset: 3



[id="jms_ibm_mq_source_kamelets_source_file"]
== Kamelets source file

link:{kamelets-source-url}jms-ibm-mq-source.kamelet.yaml[]



:leveloffset: 3
:leveloffset: +1

[id="jslt-action"]
= JSLT Action

Apply a JSLT query or transformation on JSON.

[id="jslt_action_configuration_options"]
== Configuration Options

The following table summarizes the configuration options available for the `jslt-action` Kamelet:

[width="100%",cols="2,^2,3,^2,^2,^3",options="header"]
|===
| Property| Name| Description| Type| Default| Example

| *template {empty}* *| Template| The inline template for JSLT Transformation| `string` | | `"file://template.json"`

|===

*{empty}** = Fields marked with an asterisk are *mandatory*.
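
As the property description indicates, `template` can also carry the JSLT expression itself as an inline string instead of a `file://` reference. The following `steps` fragment is a minimal sketch; the expression, which copies the input field `foo` into an output field named `greeting`, is a hypothetical example.

[source,yaml,subs="attributes+"]

  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1
        name: jslt-action
      properties:
        template: '{ "greeting" : .foo }'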


[id="jslt_action_dependencies"]

== Dependencies



[id="jslt_action_usage"]
== Usage




:leveloffset: +1

[id="jslt_action_knative_action"]
=== Knative Action

You can use the `jslt-action` Kamelet as an intermediate step in a Knative binding.

.jslt-action-binding.yaml
[source,yaml,subs="attributes+"]
Copy to Clipboard

apiVersion: camel.apache.org/v1 kind: Pipe metadata: name: jslt-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1 name: timer-source properties: message: {"foo" : "bar"} steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1 name: jslt-action properties: template: "file://template.json" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel

[id="jslt_action_prerequisite"]
==== Prerequisites
Make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you are connected to.

[id="jslt_action_procedure_for_using_the_cluster_cli"]
==== Procedure for using the cluster CLI

. Save the `jslt-action-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the action by using the following command:
+
[source,bash,subs="attributes+"]
Copy to Clipboard

oc apply -f jslt-action-binding.yaml

[id="jslt_action_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Knative

Configure and run the action by using the following command:

[source,bash,subs="attributes+"]
Copy to Clipboard

kamel bind timer-source?message=Hello --step jslt-action -p "step-0.template=file://template.json" channel:mychannel

This command creates the KameletBinding in the current namespace on the cluster.


If the template points to a file that is not in the current directory, and if `file://` or `classpath://` is used, supply the transformation using the secret or the configmap.


To view examples, see https://github.com/apache/camel-k-examples/blob/main/generic-examples/http/PlatformHttpsServer.java[with secret] and https://github.com/apache/camel-k-examples/blob/main/generic-examples/traits/jvm/Classpath.java[with configmap].
For details about necessary traits, see https://camel.apache.org/camel-k/1.11.x/traits/mount.html[Mount trait] and https://camel.apache.org/camel-k/1.11.x/traits/jvm.html[JVM classpath trait].




:leveloffset: 3

:leveloffset: +1

[id="jslt_action_knative_action"]
=== Knative Action

You can use the `jslt-action` Kamelet as an intermediate step in a Knative binding.

.jslt-action-binding.yaml
[source,yaml,subs="attributes+"]
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: jslt-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: timer-source
    properties:
      message: '{"foo" : "bar"}'
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1
        name: jslt-action
      properties:
        template: "file://template.json"
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel

[id="jslt_action_prerequisite"]
==== Prerequisites
Make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you are connected to.

[id="jslt_action_procedure_for_using_the_cluster_cli"]
==== Procedure for using the cluster CLI

. Save the `jslt-action-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the action by using the following command:
+
[source,bash,subs="attributes+"]

oc apply -f jslt-action-binding.yaml

[id="jslt_action_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Knative

Configure and run the action by using the following command:

[source,bash,subs="attributes+"]

kamel bind timer-source?message=Hello --step jslt-action -p "step-0.template=file://template.json" channel:mychannel

This command creates the KameletBinding in the current namespace on the cluster.


If the template points to a file that is not in the current directory and you use the `file://` or `classpath://` prefix, supply the transformation by using a secret or a configmap.


To view examples, see https://github.com/apache/camel-k-examples/blob/main/generic-examples/http/PlatformHttpsServer.java[with secret] and https://github.com/apache/camel-k-examples/blob/main/generic-examples/traits/jvm/Classpath.java[with configmap].
For details about necessary traits, see https://camel.apache.org/camel-k/1.11.x/traits/mount.html[Mount trait] and https://camel.apache.org/camel-k/1.11.x/traits/jvm.html[JVM classpath trait].




[id="jslt_action_kafka_action"]
=== Kafka Action

You can use the `jslt-action` Kamelet as an intermediate step in a Kafka binding.

.jslt-action-binding.yaml
[source,yaml,subs="attributes+"]
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: jslt-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: timer-source
    properties:
      message: '{"foo" : "bar"}'
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1
        name: jslt-action
      properties:
        template: "file://template.json"
  sink:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic

[id="jslt_action_prerequisites"]
==== Prerequisites

Ensure that you have installed the *AMQ Streams* operator in your OpenShift cluster and created a topic named `my-topic` in the current namespace.
Also, you must have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you are connected to.

[id="jslt_action_procedure_for_using_the_cluster_cli"]
==== Procedure for using the cluster CLI

. Save the `jslt-action-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the action by using the following command:
+
[source,bash,subs="attributes+"]

oc apply -f jslt-action-binding.yaml

[id="jslt_action_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Kafka

Configure and run the action by using the following command:

[source,bash,subs="attributes+"]

kamel bind timer-source?message=Hello --step jslt-action -p "step-0.template=file://template.json" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic

This command creates the KameletBinding in the current namespace on the cluster.

:leveloffset: 3


[id="jslt_action_kamelets_source_file"]
== Kamelets source file

link:{kamelets-source-url}jslt-action.kamelet.yaml[]



:leveloffset: 3
:leveloffset: +1

[id="json-deserialize-action"]
= Json Deserialize Action

Deserialize payload to JSON

[id="json_deserialize_action_configuration_options"]
== Configuration Options

The `json-deserialize-action` Kamelet does not specify any configuration option.


[id="json_deserialize_action_dependencies"]

== Dependencies



[id="json_deserialize_action_usage"]
== Usage




:leveloffset: +1

[id="json_deserialize_action_knative_action"]
=== Knative Action

You can use the `json-deserialize-action` Kamelet as an intermediate step in a Knative binding.

.json-deserialize-action-binding.yaml
[source,yaml,subs="attributes+"]
Copy to Clipboard

apiVersion: camel.apache.org/v1 kind: Pipe metadata: name: json-deserialize-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1 name: timer-source properties: message: "Hello" steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1 name: json-deserialize-action sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel

[id="json_deserialize_action_prerequisite"]
==== Prerequisites
Make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="json_deserialize_action_procedure_for_using_the_cluster_cli_kafka"]
==== Procedure for using the cluster CLI with Knative

. Save the `json-deserialize-action-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the action by using the following command:
+
[source,bash,subs="attributes+"]
Copy to Clipboard

oc apply -f json-deserialize-action-binding.yaml

[id="json_deserialize_action_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Knative

Configure and run the action by using the following command:

[source,bash,subs="attributes+"]
Copy to Clipboard

kamel bind timer-source?message=Hello --step json-deserialize-action channel:mychannel

This command creates the KameletBinding in the current namespace on the cluster.

:leveloffset: 3

:leveloffset: +1

[id="json_deserialize_action_knative_action"]
=== Knative Action

You can use the `json-deserialize-action` Kamelet as an intermediate step in a Knative binding.

.json-deserialize-action-binding.yaml
[source,yaml,subs="attributes+"]
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: json-deserialize-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: timer-source
    properties:
      message: "Hello"
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1
        name: json-deserialize-action
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel

[id="json_deserialize_action_prerequisite"]
==== Prerequisites
Make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="json_deserialize_action_procedure_for_using_the_cluster_cli_kafka"]
==== Procedure for using the cluster CLI with Knative

. Save the `json-deserialize-action-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the action by using the following command:
+
[source,bash,subs="attributes+"]

oc apply -f json-deserialize-action-binding.yaml

[id="json_deserialize_action_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Knative

Configure and run the action by using the following command:

[source,bash,subs="attributes+"]

kamel bind timer-source?message=Hello --step json-deserialize-action channel:mychannel

This command creates the KameletBinding in the current namespace on the cluster.

[id="json_deserialize_action_kafka_action"]
=== Kafka Action

You can use the `json-deserialize-action` Kamelet as an intermediate step in a Kafka binding.

.json-deserialize-action-binding.yaml
[source,yaml,subs="attributes+"]
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: json-deserialize-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: timer-source
    properties:
      message: "Hello"
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1
        name: json-deserialize-action
  sink:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic

[id="json_deserialize_action_prerequisites"]
==== Prerequisites

Ensure that you've installed the *AMQ Streams* operator in your OpenShift cluster and created a topic named `my-topic` in the current namespace.
Also make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="json_deserialize_action_procedure_for_using_the_cluster_cli"]
==== Procedure for using the cluster CLI

. Save the `json-deserialize-action-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the action by using the following command:
+
[source,bash,subs="attributes+"]

oc apply -f json-deserialize-action-binding.yaml

[id="json_deserialize_action_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Kafka

Configure and run the action by using the following command:

[source,bash,subs="attributes+"]

kamel bind timer-source?message=Hello --step json-deserialize-action kafka.strimzi.io/v1beta1:KafkaTopic:my-topic

This command creates the KameletBinding in the current namespace on the cluster.

:leveloffset: 3


[id="json_deserialize_action_kamelets_source_file"]
== Kamelets source file

link:{kamelets-source-url}json-deserialize-action.kamelet.yaml[]



:leveloffset: 3
:leveloffset: +1

[id="json-serialize-action"]
= Json Serialize Action

Serialize payload to JSON

[id="json_serialize_action_configuration_options"]
== Configuration Options

The `json-serialize-action` Kamelet does not specify any configuration option.


[id="json_serialize_action_dependencies"]
== Dependencies
id="json_serialize_action_usage"]
== Usage




:leveloffset: +1

[id="json_serialize_action_knative_action"]
=== Knative Action

You can use the `json-serialize-action` Kamelet as an intermediate step in a Knative binding.

.json-serialize-action-binding.yaml
[source,yaml,subs="attributes+"]
Copy to Clipboard

apiVersion: camel.apache.org/v1 kind: Pipe metadata: name: json-serialize-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1 name: timer-source properties: message: "Hello" steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1 name: json-serialize-action sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel

[id="json_serialize_action_prerequisite"]
==== Prerequisites
Make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="json_serialize_action_procedure_for_using_the_cluster_cli"]
==== Procedure for using the cluster CLI

. Save the `json-serialize-action-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the action by using the following command:
+
[source,bash,subs="attributes+"]
Copy to Clipboard

oc apply -f json-serialize-action-binding.yaml

[id="json_serialize_action_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Knative

Configure and run the action by using the following command:

[source,bash,subs="attributes+"]
Copy to Clipboard

kamel bind timer-source?message=Hello --step json-serialize-action channel:mychannel

This command creates the KameletBinding in the current namespace on the cluster.

:leveloffset: 3

:leveloffset: +1

[id="json_serialize_action_knative_action"]
=== Knative Action

You can use the `json-serialize-action` Kamelet as an intermediate step in a Knative binding.

.json-serialize-action-binding.yaml
[source,yaml,subs="attributes+"]
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: json-serialize-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: timer-source
    properties:
      message: "Hello"
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1
        name: json-serialize-action
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel

[id="json_serialize_action_prerequisite"]
==== Prerequisites
Make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="json_serialize_action_procedure_for_using_the_cluster_cli"]
==== Procedure for using the cluster CLI

. Save the `json-serialize-action-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the action by using the following command:
+
[source,bash,subs="attributes+"]

oc apply -f json-serialize-action-binding.yaml

[id="json_serialize_action_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Knative

Configure and run the action by using the following command:

[source,bash,subs="attributes+"]

kamel bind timer-source?message=Hello --step json-serialize-action channel:mychannel

This command creates the KameletBinding in the current namespace on the cluster.

[id="json_serialize_action_kafka_action"]
=== Kafka Action

You can use the `json-serialize-action` Kamelet as an intermediate step in a Kafka binding.

.json-serialize-action-binding.yaml
[source,yaml,subs="attributes+"]
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: json-serialize-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: timer-source
    properties:
      message: "Hello"
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1
        name: json-serialize-action
  sink:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic

[id="json_serialize_action_prerequisites"]
==== Prerequisites

Ensure that you've installed the *AMQ Streams* operator in your OpenShift cluster and created a topic named `my-topic` in the current namespace.
Also make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="json_serialize_action_procedure_for_using_the_cluster_cli"]
==== Procedure for using the cluster CLI

. Save the `json-serialize-action-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the action by using the following command:
+
[source,bash,subs="attributes+"]

oc apply -f json-serialize-action-binding.yaml

[id="json_serialize_action_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Kafka

Configure and run the action by using the following command:

[source,bash,subs="attributes+"]

kamel bind timer-source?message=Hello --step json-serialize-action kafka.strimzi.io/v1beta1:KafkaTopic:my-topic

This command creates the KameletBinding in the current namespace on the cluster.

:leveloffset: 3


[id="json_serialize_action_kamelets_source_file"]
== Kamelets source file

link:{kamelets-source-url}json-serialize-action.kamelet.yaml[]



:leveloffset: 3
// include::../../../modules/camel/kamelets-reference/kamelets/kafka-batch-manual-commit-action.adoc[leveloffset=+1]
// include::../../../modules/camel/kamelets-reference/kamelets/kafka-batch-source.adoc[leveloffset=+1]
// include::../../../modules/camel/kamelets-reference/kamelets/kafka-manual-commit-action.adoc[leveloffset=+1]
:leveloffset: +1

[id="kafka-sink"]
= Kafka Sink

Send data to Kafka topics.

The Kamelet recognizes the following headers:

- `key` / `ce-key`: the message key

- `partition-key` / `ce-partitionkey`: the message partition key

Both headers are optional.
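
For example, you can set the message key in an intermediate step so that records are routed to a predictable partition. The following `steps` fragment is a minimal sketch that uses the `insert-header-action` Kamelet to set the `key` header before the message reaches the `kafka-sink` Kamelet; the key value is a placeholder.

[source,yaml,subs="attributes+"]

  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1
        name: insert-header-action
      properties:
        name: "key"
        value: "my-record-key"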

[id="kafka_sink_configuration_options"]
== Configuration Options

The following table summarizes the configuration options available for the `kafka-sink` Kamelet:

[width="100%",cols="2,^2,3,^2,^2,^3",options="header"]
|===
| Property| Name| Description| Type| Default| Example

| *bootstrapServers {empty}* *| Brokers| Comma separated list of Kafka Broker URLs| `string` | |
| *topic {empty}* *| Topic Names| Comma separated list of Kafka topic names| `string` | |
| *user {empty}* *| Username| Username to authenticate to Kafka| `string` | |

| password | Password| Password to authenticate to Kafka| `string` | |
| saslMechanism| SASL Mechanism| The Simple Authentication and Security Layer (SASL) Mechanism used.| string| `"PLAIN"`|
| securityProtocol| Security Protocol| Protocol used to communicate with brokers. SASL_PLAINTEXT, PLAINTEXT, SASL_SSL and SSL are supported| string| `"SASL_SSL"`|

|===

*{empty}** = Fields marked with an asterisk are *mandatory*.
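
When the brokers require authentication, combine `user` and `password` with `securityProtocol` and `saslMechanism`. The following `sink` fragment is a minimal sketch for a SASL_SSL listener with the default PLAIN mechanism; the bootstrap address, topic, and credentials are placeholder values.

[source,yaml,subs="attributes+"]

  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: kafka-sink
    properties:
      bootstrapServers: "my-cluster-kafka-bootstrap:9093"
      topic: "my-topic"
      user: "my-user"
      password: "my-password"
      securityProtocol: "SASL_SSL"
      saslMechanism: "PLAIN"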


[id="kafka_sink_dependencies"]
== Dependencies

At runtime, the `kafka-sink` Kamelet relies upon the presence of the following dependencies:

- `camel:kafka`
- `camel:kamelet`
- `camel:core`

[id="kafka_sink_usage"]
== Usage




:leveloffset: +1

[id="kafka_sink_knative_sink"]
=== Knative Sink

You can use the `kafka-sink` Kamelet as a Knative sink by binding it to a Knative object.

.kafka-sink-binding.yaml
[source,yaml,subs="attributes+"]
Copy to Clipboard

apiVersion: camel.apache.org/v1 kind: Pipe metadata: name: kafka-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1 name: kafka-sink properties: bootstrapServers: "The Brokers" password: "The Topic Names" user: "The Topic Names" user: "The Username" topic: "The Topic Names"

[id="kafka_sink_prerequisite"]
==== Prerequisites
Make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="kafka_sink_procedure_for_using_the_cluster_cli_kafka"]
==== Procedure for using the cluster CLI with Knative

. Save the `kafka-sink-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the sink by using the following command:
+
[source,bash,subs="attributes+"]
Copy to Clipboard

oc apply -f kafka-sink-binding.yaml

[id="kafka_sink_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Knative

Configure and run the sink by using the following command:

[source,bash,subs="attributes+"]
Copy to Clipboard

kamel bind channel:mychannel kafka-sink -p "sink.bootstrapServers=The Brokers" -p "sink.password=The Password" -p "sink.topic=The Topic Names" -p "sink.user=The Username"

This command creates the KameletBinding in the current namespace on the cluster.

:leveloffset: 3

:leveloffset: +1

[id="kafka_sink_knative_sink"]
=== Knative Sink

You can use the `kafka-sink` Kamelet as a Knative sink by binding it to a Knative object.

.kafka-sink-binding.yaml
[source,yaml,subs="attributes+"]
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: kafka-sink-binding
spec:
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: kafka-sink
    properties:
      bootstrapServers: "The Brokers"
      password: "The Password"
      topic: "The Topic Names"
      user: "The Username"

[id="kafka_sink_prerequisite"]
==== Prerequisites
Make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="kafka_sink_procedure_for_using_the_cluster_cli_kafka"]
==== Procedure for using the cluster CLI with Knative

. Save the `kafka-sink-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the sink by using the following command:
+
[source,bash,subs="attributes+"]
----
oc apply -f kafka-sink-binding.yaml
----

[id="kafka_sink_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Knative

Configure and run the sink by using the following command:

[source,bash,subs="attributes+"]
----
kamel bind channel:mychannel kafka-sink -p "sink.bootstrapServers=The Brokers" -p "sink.password=The Password" -p "sink.topic=The Topic Names" -p "sink.user=The Username"
----

This command creates the KameletBinding in the current namespace on the cluster.

[id="kafka_sink_kafka_sink"]
=== Kafka Sink

You can use the `kafka-sink` Kamelet as a Kafka sink by binding it to a Kafka topic.

.kafka-sink-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: kafka-sink-binding
spec:
  source:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: kafka-sink
    properties:
      bootstrapServers: "The Brokers"
      password: "The Password"
      topic: "The Topic Names"
      user: "The Username"
----

[id="kafka_sink_prerequisites"]
==== Prerequisites

Ensure that you've installed the *AMQ Streams* operator in your OpenShift cluster and created a topic named `my-topic` in the current namespace.
Make also sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="kafka_sink_procedure_for_using_the_cluster_cli_kafka"]
==== Procedure for using the cluster CLI with Kafka

. Save the `kafka-sink-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the sink by using the following command:
+
[source,bash,subs="attributes+"]
----
oc apply -f kafka-sink-binding.yaml
----

[id="kafka_sink_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Kafka

Configure and run the sink by using the following command:

[source,bash,subs="attributes+"]
----
kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic kafka-sink -p "sink.bootstrapServers=The Brokers" -p "sink.password=The Password" -p "sink.topic=The Topic Names" -p "sink.user=The Username"
----

This command creates the KameletBinding in the current namespace on the cluster.

:leveloffset: 3


[id="kafka_sink_kamelets_source_file"]
== Kamelets source file

link:{kamelets-source-url}kafka-sink.kamelet.yaml[]



:leveloffset: 3
:leveloffset: +1

[id="kafka-source"]
= Kafka Source

Receive data from Kafka topics.

[id="kafka_source_configuration_options"]
== Configuration Options

The following table summarizes the configuration options available for the `kafka-source` Kamelet:

[width="100%",cols="2,^2,3,^2,^2,^3",options="header"]
|===
| Property| Name| Description| Type| Default| Example

| *topic {empty}* *| Topic Names| Comma separated list of Kafka topic names| `string` | |
| *bootstrapServers {empty}* *| Brokers| Comma separated list of Kafka Broker URLs| `string` | |
| *user {empty}* *| Username| Username to authenticate to Kafka| `string` | |
| *password {empty}* *| Password| Password to authenticate to Kafka| `string` | |

| securityProtocol| Security Protocol| Protocol used to communicate with brokers. SASL_PLAINTEXT, PLAINTEXT, SASL_SSL and SSL are supported| `string` | `"SASL_SSL"`|
| saslMechanism| SASL Mechanism| The Simple Authentication and Security Layer (SASL) Mechanism used.| `string` | `"PLAIN"`|
| autoCommitEnable| Auto Commit Enable| If true, periodically commit to ZooKeeper the offset of messages already fetched by the consumer.| `boolean` | `true`|
| allowManualCommit| Allow Manual Commit| Whether to allow doing manual commits| `boolean` | `false`|
| autoOffsetReset| Auto Offset Reset| What to do when there is no initial offset. There are 3 enums and the value can be one of latest, earliest, none| `string` | `"latest"`|
| pollOnError| Poll On Error Behavior| What to do if Kafka throws an exception while polling for new messages. There are 5 enums and the value can be one of DISCARD, ERROR_HANDLER, RECONNECT, RETRY, STOP| `string` | `"ERROR_HANDLER"`|
| deserializeHeaders| Automatically Deserialize Headers| When enabled, the Kamelet source deserializes all message headers to their String representation.| `boolean` | `true`|
| consumerGroup| Consumer Group | A string that uniquely identifies the group of consumers to which this source belongs | `string` | `"my-group-id"`|
| topicIsPattern |  Topic Is Pattern |  Whether the topic is a pattern (regular expression). This can be used to subscribe to a dynamic number of topics matching the pattern. | `boolean` | `false` |

|===

*{empty}** = Fields marked with an asterisk are *mandatory*.
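
As a non-normative sketch, the optional consumer settings are supplied as additional Kamelet properties alongside the mandatory ones; the broker, topic, credential, and group values below are illustrative placeholders:

[source,yaml]
----
# Source half of a Pipe; all values are illustrative placeholders.
source:
  ref:
    kind: Kamelet
    apiVersion: camel.apache.org/v1
    name: kafka-source
  properties:
    bootstrapServers: "my-cluster-kafka-bootstrap:9092"
    topic: "orders"
    user: "alice"
    password: "secret"
    consumerGroup: "orders-consumers"   # optional
    autoOffsetReset: "earliest"         # optional
----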


[id="kafka_source_dependencies"]
== Dependencies

At runtime, the `kafka-source` Kamelet relies upon the presence of the following dependencies:

- `mvn:org.apache.camel.kamelets:camel-kamelets-utils`
- `camel:kafka`
- `camel:kamelet`
- `camel:core`

[id="kafka_source_usage"]
== Usage





:leveloffset: +1

[id="kafka_source_knative_source"]
=== Knative Source

You can use the `kafka-source` Kamelet as a Knative source by binding it to a Knative object.

.kafka-source-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: kafka-source-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: kafka-source
    properties:
      bootstrapServers: "The Brokers"
      password: "The Password"
      topic: "The Topic Names"
      user: "The Username"
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
----

[id="kafka_source_prerequisite"]
==== Prerequisites
Make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="kafka_source_procedure_for_using_the_cluster_cli_kafka"]
==== Procedure for using the cluster CLI with Knative

. Save the `kafka-source-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the source by using the following command:
+
[source,bash,subs="attributes+"]
----
oc apply -f kafka-source-binding.yaml
----

[id="kafka_source_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Knative

Configure and run the source by using the following command:

[source,bash,subs="attributes+"]
----
kamel bind kafka-source -p "source.bootstrapServers=The Brokers" -p "source.password=The Password" -p "source.topic=The Topic Names" -p "source.user=The Username" channel:mychannel
----

This command creates the KameletBinding in the current namespace on the cluster.

[id="kafka_source_kafka_source"]
=== Kafka Source

You can use the `kafka-source` Kamelet as a Kafka source by binding it to a Kafka topic.

.kafka-source-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: kafka-source-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: kafka-source
    properties:
      bootstrapServers: "The Brokers"
      password: "The Password"
      topic: "The Topic Names"
      user: "The Username"
  sink:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
----

[id="kafka_source_prerequisites"]
==== Prerequisites

Ensure that you've installed the *AMQ Streams* operator in your OpenShift cluster and created a topic named `my-topic` in the current namespace.
Make also sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="kafka_source_procedure_for_using_the_cluster_cli_kafka"]
==== Procedure for using the cluster CLI with Kafka

. Save the `kafka-source-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the source by using the following command:
+
[source,bash,subs="attributes+"]
----
oc apply -f kafka-source-binding.yaml
----

[id="kafka_source_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Kafka

Configure and run the source by using the following command:

[source,bash,subs="attributes+"]
----
kamel bind kafka-source -p "source.bootstrapServers=The Brokers" -p "source.password=The Password" -p "source.topic=The Topic Names" -p "source.user=The Username" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
----

This command creates the KameletBinding in the current namespace on the cluster.

:leveloffset: 3


[id="kafka_source_kamelets_source_file"]
== Kamelets source file

link:{kamelets-source-url}kafka-source.kamelet.yaml[]



:leveloffset: 3
:leveloffset: +1

[id="kafka-topic-name-matches-filter-action"]
= Kafka Topic Name Matches Filter Action

Filter messages based on the Kafka topic name compared to a regular expression.

[id="kafka_topic_name_matches_filter_action_configuration_options"]
== Configuration Options

The following table summarizes the configuration options available for the `topic-name-matches-filter-action` Kamelet:

[width="100%",cols="2,^2,3,^2,^2,^3",options="header"]
|===
| Property| Name| Description| Type| Default| Example

| *regex {empty}* *| Regex| The Regex to Evaluate against the Kafka topic name| `string` | |
|===

*{empty}** = Fields marked with an asterisk are *mandatory*.
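
As an illustrative, non-normative sketch, a concrete regular expression such as `orders-.*` lets only exchanges whose Kafka topic name starts with `orders-` pass through the filter:

[source,yaml]
----
# Intermediate step of a Pipe; the regular expression is an illustrative placeholder.
steps:
  - ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: topic-name-matches-filter-action
    properties:
      regex: "orders-.*"
----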


[id="kafka_topic_name_matches_filter_action_dependencies"]

== Dependencies



[id="kafka_topic_name_matches_filter_action_usage"]
== Usage





:leveloffset: +1


This section describes how you can use the `topic-name-matches-filter-action`.

[id="kafka_topic_name_matches_filter_action_kafka_action"]
=== Kafka Action

You can use the `topic-name-matches-filter-action` Kamelet as an intermediate step in a Kafka binding.

.topic-name-matches-filter-action-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: topic-name-matches-filter-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: timer-source
    properties:
      message: "Hello"
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1
        name: topic-name-matches-filter-action
      properties:
        regex: "The Regex"
  sink:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
----

[id="kafka_topic_name_matches_filter_action_prerequisites"]
==== Prerequisites

Ensure that you've installed the *AMQ Streams* operator in your OpenShift cluster and created a topic named `my-topic` in the current namespace.
Make also sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="kafka_topic_name_matches_filter_action_procedure_for_using_the_cluster_cli"]
==== Procedure for using the cluster CLI

. Save the `topic-name-matches-filter-action-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the action by using the following command:
+
[source,bash,subs="attributes+"]
----
oc apply -f topic-name-matches-filter-action-binding.yaml
----

[id="kafka_topic_name_matches_filter_action_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Kafka

Configure and run the action by using the following command:

[source,bash,subs="attributes+"]
----
kamel bind timer-source?message=Hello --step topic-name-matches-filter-action -p "step-0.regex=The Regex" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
----

This command creates the KameletBinding in the current namespace on the cluster.

:leveloffset: 3


[id="kafka_topic_name_matches_filter_action_kamelets_source_file"]
== Kamelets source file

link:{kamelets-source-url}topic-name-matches-filter-action.kamelet.yaml[]



:leveloffset: 3
:leveloffset: +1

[id="log-sink"]
= Log Sink

A sink that logs all data that it receives, useful for debugging purposes.

[id="log_sink_configuration_options"]
== Configuration Options

The following table summarizes the configuration options available for the `log-sink` Kamelet:

[width="100%",cols="2,^2,3,^2,^2,^3",options="header"]
|===
| Property| Name| Description| Type| Default| Example


| loggerName |  Logger Name |  Name of the logging category to use |  `string` |  `"log-sink"` |
| level |  Log Level |  Logging level to use |  `string` |  `"INFO"` |  `"TRACE"`
| logMask |  Log Mask |  Mask sensitive information like password or passphrase in the log |  `boolean` |  `false` |
| marker |  Marker |  An optional Marker name to use |  `string` |  |
| multiline |  Multiline |  If enabled, each piece of information is output on a new line |  `boolean` |  `false` |
| showAllProperties |  Show All Properties |  Show all of the exchange properties (both internal and custom) |  `boolean` |  `false` |
| showBody |  Show Body |  Show the message body |  `boolean` |  `true` |
| showBodyType |  Show Body Type |  Show the body Java type |  `boolean` |  `true` |
| showExchangePattern |  Show Exchange Pattern |  Shows the Message Exchange Pattern (or MEP for short) |  `boolean` |  `true` |
| showHeaders |  Show Headers |  Show the headers received |  `boolean` |  `false` |
| showProperties |  Show Properties |  Show the exchange properties (only custom). Use showAllProperties to show both internal and custom properties. |  `boolean` |  `false` |
| showStreams |  Show Streams |  Show the stream bodies (they may not be available in following steps) |  `boolean` |  `false` |

| showCachedStreams |  Show Cached Streams |  Whether Camel should show cached stream bodies or not. |  `boolean` |  `true` |
|===

*{empty}** = Fields marked with an asterisk are *mandatory*.
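
For instance, a non-normative sketch of a more verbose `log-sink` configuration (the property values are illustrative choices, not defaults):

[source,yaml]
----
# Sink half of a Pipe; property values are illustrative.
sink:
  ref:
    kind: Kamelet
    apiVersion: camel.apache.org/v1
    name: log-sink
  properties:
    level: "DEBUG"
    showHeaders: true
    showProperties: true
    multiline: true
----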


[id="log_sink_dependencies"]

== Dependencies



[id="log_sink_usage"]
== Usage





:leveloffset: +1

[id="log_sink_knative_sink"]
=== Knative Sink

You can use the `log-sink` Kamelet as a Knative sink by binding it to a Knative object.

.log-sink-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: log-sink-binding
spec:
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: log-sink
----

[id="log_sink_prerequisite"]
==== Prerequisites
Make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="log_sink_procedure_for_using_the_cluster_cli_kafka"]
==== Procedure for using the cluster CLI with Knative

. Save the `log-sink-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the sink by using the following command:
+
[source,bash,subs="attributes+"]
----
oc apply -f log-sink-binding.yaml
----

[id="log_sink_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Knative

Configure and run the sink by using the following command:

[source,bash,subs="attributes+"]
----
kamel bind channel:mychannel log-sink
----

This command creates the KameletBinding in the current namespace on the cluster.

[id="log_sink_kafka_sink"]
=== Kafka Sink

You can use the `log-sink` Kamelet as a Kafka sink by binding it to a Kafka topic.

.log-sink-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: log-sink-binding
spec:
  source:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: log-sink
----

[id="log_sink_prerequisites"]
==== Prerequisites

Ensure that you've installed the *AMQ Streams* operator in your OpenShift cluster and created a topic named `my-topic` in the current namespace.
Make also sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="log_sink_procedure_for_using_the_cluster_cli_kafka"]
==== Procedure for using the cluster CLI with Kafka

. Save the `log-sink-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the sink by using the following command:
+
[source,bash,subs="attributes+"]
----
oc apply -f log-sink-binding.yaml
----

[id="log_sink_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Kafka

Configure and run the sink by using the following command:

[source,bash,subs="attributes+"]
----
kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic log-sink
----

This command creates the KameletBinding in the current namespace on the cluster.

:leveloffset: 3


[id="log_sink_kamelets_source_file"]
== Kamelets source file

link:{kamelets-source-url}log-sink.kamelet.yaml[]



:leveloffset: 3
:leveloffset: +1

[id="mariadb-sink"]
= MariaDB Sink

Send data to a MariaDB Database.

This Kamelet expects a JSON object as the body. The mapping between the JSON fields and the query parameters is done by key, so for a query such as:

`INSERT INTO accounts (username,city) VALUES (:#username,:#city)`

the Kamelet needs to receive an input such as:

`{"username":"oscerd", "city":"Rome"}`
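
As a non-normative sketch, the named parameters in the query (`:#username`, `:#city`) are resolved from the JSON keys of the incoming body; the connection values below are illustrative placeholders:

[source,yaml]
----
# Sink half of a Pipe; connection values are illustrative placeholders.
sink:
  ref:
    kind: Kamelet
    apiVersion: camel.apache.org/v1
    name: mariadb-sink
  properties:
    serverName: "mariadb.example.svc"
    serverPort: "3306"
    databaseName: "accounts-db"
    username: "app-user"
    password: "app-password"
    query: "INSERT INTO accounts (username,city) VALUES (:#username,:#city)"
----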

[id="mariadb_sink_configuration_options"]
== Configuration Options

The following table summarizes the configuration options available for the `mariadb-sink` Kamelet:

[width="100%",cols="2,^2,3,^2,^2,^3",options="header"]
|===
| Property| Name| Description| Type| Default| Example

| *databaseName {empty}* *| Database Name| The name of the database to point to| `string` | |
| *query {empty}* *| Query| The Query to execute against the MariaDB Database| `string` | | `"INSERT INTO accounts (username,city) VALUES (:#username,:#city)"`
| *serverName {empty}* *| Server Name| Server Name for the data source| `string` | | `"localhost"`

| password | Password| The password to use for accessing a secured MariaDB Database| `string` | |
| username | Username| The username to use for accessing a secured MariaDB Database| `string` | |
| serverPort| Server Port| Server Port for the data source| string| `3306`|

|===

*{empty}** = Fields marked with an asterisk are *mandatory*.


[id="mariadb_sink_dependencies"]

== Dependencies



[id="mariadb_sink_usage"]
== Usage


:leveloffset: +1

[id="mariadb_sink_knative_sink"]
=== Knative Sink

You can use the `mariadb-sink` Kamelet as a Knative sink by binding it to a Knative object.

.mariadb-sink-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: mariadb-sink-binding
spec:
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: mariadb-sink
    properties:
      databaseName: "The Database Name"
      password: "The Password"
      query: "INSERT INTO accounts (username,city) VALUES (:#username,:#city)"
      serverName: "localhost"
      username: "The Username"
----

[id="mariadb_sink_prerequisite"]
==== Prerequisites
Make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="mariadb_sink_procedure_for_using_the_cluster_cli_kafka"]
==== Procedure for using the cluster CLI with Knative

. Save the `mariadb-sink-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the sink by using the following command:
+
[source,bash,subs="attributes+"]
----
oc apply -f mariadb-sink-binding.yaml
----

[id="mariadb_sink_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Knative

Configure and run the sink by using the following command:

[source,bash,subs="attributes+"]
----
kamel bind channel:mychannel mariadb-sink -p "sink.databaseName=The Database Name" -p "sink.password=The Password" -p "sink.query=INSERT INTO accounts (username,city) VALUES (:#username,:#city)" -p "sink.serverName=localhost" -p "sink.username=The Username"
----

This command creates the KameletBinding in the current namespace on the cluster.

[id="mariadb_sink_kafka_sink"]
=== Kafka Sink

You can use the `mariadb-sink` Kamelet as a Kafka sink by binding it to a Kafka topic.

.mariadb-sink-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: mariadb-sink-binding
spec:
  source:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: mariadb-sink
    properties:
      databaseName: "The Database Name"
      password: "The Password"
      query: "INSERT INTO accounts (username,city) VALUES (:#username,:#city)"
      serverName: "localhost"
      username: "The Username"
----

[id="mariadb_sink_prerequisites"]
==== Prerequisites

Ensure that you've installed the *AMQ Streams* operator in your OpenShift cluster and created a topic named `my-topic` in the current namespace.
Make also sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="mariadb_sink_procedure_for_using_the_cluster_cli_kafka"]
==== Procedure for using the cluster CLI with Kafka

. Save the `mariadb-sink-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the sink by using the following command:
+
[source,bash,subs="attributes+"]
----
oc apply -f mariadb-sink-binding.yaml
----

[id="mariadb_sink_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Kafka

Configure and run the sink by using the following command:

[source,bash,subs="attributes+"]
----
kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic mariadb-sink -p "sink.databaseName=The Database Name" -p "sink.password=The Password" -p "sink.query=INSERT INTO accounts (username,city) VALUES (:#username,:#city)" -p "sink.serverName=localhost" -p "sink.username=The Username"
----

This command creates the KameletBinding in the current namespace on the cluster.

:leveloffset: 3


[id="mariadb_sink_kamelets_source_file"]
== Kamelets source file

link:{kamelets-source-url}mariadb-sink.kamelet.yaml[]



:leveloffset: 3
:leveloffset: +1

[id="mask-field-action"]
= Mask Fields Action

Mask fields with a constant value in the message in transit

[id="mask_field_action_configuration_options"]
== Configuration Options

The following table summarizes the configuration options available for the `mask-field-action` Kamelet:

[width="100%",cols="2,^2,3,^2,^2,^3",options="header"]
|===
| Property| Name| Description| Type| Default| Example

| *fields {empty}* *| Fields| Comma separated list of fields to mask| `string` | |
| *replacement {empty}* *| Replacement| Replacement for the fields to be masked| `string` | |

|===

*{empty}** = Fields marked with an asterisk are *mandatory*.
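
As an illustrative, non-normative sketch (the field names and replacement value are assumptions), the following intermediate step masks two fields of an incoming JSON message:

[source,yaml]
----
# Intermediate step of a Pipe; field names and replacement are illustrative.
steps:
  - ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: mask-field-action
    properties:
      fields: "password,creditCardNumber"
      replacement: "****"
----

With this configuration, a body such as `{"user":"oscerd", "password":"secret"}` would be forwarded as `{"user":"oscerd", "password":"****"}`.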


[id="mask_field_action_dependencies"]

== Dependencies



[id="mask_field_action_usage"]
== Usage





:leveloffset: +1

[id="mask_field_action_knative_action"]
=== Knative Action

You can use the `mask-field-action` Kamelet as an intermediate step in a Knative binding.

.mask-field-action-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: mask-field-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: timer-source
    properties:
      message: "Hello"
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1
        name: mask-field-action
      properties:
        fields: "The Fields"
        replacement: "The Replacement"
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
----

[id="mask_field_action_prerequisite"]
==== Prerequisites
Make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="mask_field_action_procedure_for_using_the_cluster_cli_kafka"]
==== Procedure for using the cluster CLI with Knative

. Save the `mask-field-action-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the action by using the following command:
+
[source,bash,subs="attributes+"]
----
oc apply -f mask-field-action-binding.yaml
----

[id="mask_field_action_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Knative

Configure and run the action by using the following command:

[source,bash,subs="attributes+"]
----
kamel bind timer-source?message=Hello --step mask-field-action -p "step-0.fields=The Fields" -p "step-0.replacement=The Replacement" channel:mychannel
----

This command creates the KameletBinding in the current namespace on the cluster.

[id="mask_field_action_kafka_action"]
=== Kafka Action

You can use the `mask-field-action` Kamelet as an intermediate step in a Kafka binding.

.mask-field-action-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: mask-field-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: timer-source
    properties:
      message: "Hello"
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1
        name: mask-field-action
      properties:
        fields: "The Fields"
        replacement: "The Replacement"
  sink:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
----

[id="mask_field_action_prerequisites"]
==== Prerequisites

Ensure that you've installed the *AMQ Streams* operator in your OpenShift cluster and created a topic named `my-topic` in the current namespace.
Make also sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="mask_field_action_procedure_for_using_the_cluster_cli_kafka"]
==== Procedure for using the cluster CLI with Kafka

. Save the `mask-field-action-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the action by using the following command:
+
[source,bash,subs="attributes+"]
----
oc apply -f mask-field-action-binding.yaml
----

[id="mask_field_action_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Kafka

Configure and run the action by using the following command:

[source,bash,subs="attributes+"]
----
kamel bind timer-source?message=Hello --step mask-field-action -p "step-0.fields=The Fields" -p "step-0.replacement=The Replacement" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
----

This command creates the KameletBinding in the current namespace on the cluster.

:leveloffset: 3



[id="mask_field_action_kamelets_source_file"]
== Kamelets source file

link:{kamelets-source-url}mask-field-action.kamelet.yaml[]



:leveloffset: 3
:leveloffset: +1

[id="message-timestamp-router-action"]
= Message Timestamp Router Action

Update the topic field as a function of the original topic name and the record's timestamp field.

[id="message_timestamp_router_action_configuration_options"]
== Configuration Options

The following table summarizes the configuration options available for the `message-timestamp-router-action` Kamelet:

[width="100%",cols="2,^2,3,^2,^2,^3",options="header"]
|===
| Property| Name| Description| Type| Default| Example

| *timestampKeys {empty}* *| Timestamp Keys| Comma separated list of Timestamp keys. The timestamp is taken from the first found field.| `string` | |
| timestampFormat| Timestamp Format| Format string for the timestamp that is compatible with java.text.SimpleDateFormat.| string| `"yyyyMMdd"`|
| timestampKeyFormat| Timestamp Keys Format| Format of the timestamp keys. Possible values are `{timestamp}` or any format string for the timestamp that is compatible with java.text.SimpleDateFormat. In case of `{timestamp}` the field is evaluated as milliseconds since 1970, so as a UNIX Timestamp.| string| `"timestamp"`|
| topicFormat| Topic Format| Format string which can contain `{$[topic]}` and `{$[timestamp]}` as placeholders for the topic and timestamp, respectively.| string| `"topic-$[timestamp]"`|
|===

*{empty}** = Fields marked with an asterisk are *mandatory*.
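
As a non-normative sketch (the key names and formats are illustrative assumptions), a record read from topic `orders` whose `created_at` field falls on 12 June 2024 would be routed to a topic named `orders-20240612`:

[source,yaml]
----
# Intermediate step of a Pipe; key names and formats are illustrative.
steps:
  - ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: message-timestamp-router-action
    properties:
      timestampKeys: "created_at,updated_at"
      timestampFormat: "yyyyMMdd"
      topicFormat: "$[topic]-$[timestamp]"
----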


[id="message_timestamp_router_action_dependencies"]

== Dependencies



[id="message_timestamp_router_action_usage"]
== Usage





:leveloffset: +1

[id="message_timestamp_router_action_knative_action"]
=== Knative Action

You can use the `message-timestamp-router-action` Kamelet as an intermediate step in a Knative binding.

.message-timestamp-router-action-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: message-timestamp-router-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: timer-source
    properties:
      message: "Hello"
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1
        name: message-timestamp-router-action
      properties:
        timestampKeys: "The Timestamp Keys"
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
----

[id="message_timestamp_router_action_prerequisite"]
==== Prerequisites
Make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="message_timestamp_router_action_procedure_for_using_the_cluster_cli_kafka"]
==== Procedure for using the cluster CLI with Knative

. Save the `message-timestamp-router-action-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the action by using the following command:
+
[source,bash,subs="attributes+"]
----
oc apply -f message-timestamp-router-action-binding.yaml
----

[id="message_timestamp_router_action_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Knative

Configure and run the action by using the following command:

[source,bash,subs="attributes+"]
----
kamel bind timer-source?message=Hello --step message-timestamp-router-action -p "step-0.timestampKeys=The Timestamp Keys" channel:mychannel
----

This command creates the KameletBinding in the current namespace on the cluster.

[id="message_timestamp_router_action_kafka_action"]
=== Kafka Action

You can use the `message-timestamp-router-action` Kamelet as an intermediate step in a Kafka binding.

.message-timestamp-router-action-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: message-timestamp-router-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: timer-source
    properties:
      message: "Hello"
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1
        name: message-timestamp-router-action
      properties:
        timestampKeys: "The Timestamp Keys"
  sink:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
----

[id="message_timestamp_router_action_prerequisites"]
==== Prerequisites

Ensure that you've installed the *AMQ Streams* operator in your OpenShift cluster and created a topic named `my-topic` in the current namespace.
Make also sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="message_timestamp_router_action_procedure_for_using_the_cluster_cli_kafka"]
==== Procedure for using the cluster CLI with Kafka

. Save the `message-timestamp-router-action-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the action by using the following command:
+
[source,bash,subs="attributes+"]
----
oc apply -f message-timestamp-router-action-binding.yaml
----

[id="message_timestamp_router_action_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Kafka

Configure and run the action by using the following command:

[source,bash,subs="attributes+"]
----
kamel bind timer-source?message=Hello --step message-timestamp-router-action -p "step-0.timestampKeys=The Timestamp Keys" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
----

This command creates the KameletBinding in the current namespace on the cluster.

:leveloffset: 3


[id="message_timestamp_router_action_kamelets_source_file"]
== Kamelets source file

link:{kamelets-source-url}message-timestamp-router-action.kamelet.yaml[]



:leveloffset: 3
:leveloffset: +1

[id="mongodb-sink"]
= MongoDB Sink

Send documents to MongoDB.

This Kamelet expects a JSON object as the body.

Properties you can set as headers:

- `db-upsert` / `ce-dbupsert`: whether the database should create the element if it does not exist (Boolean value).
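
As a non-normative sketch in Camel YAML DSL (the body, database, and connection values are illustrative assumptions), the `db-upsert` header can be set before the exchange reaches the `mongodb-sink` Kamelet:

[source,yaml]
----
# Illustrative sketch only: request an upsert by setting the db-upsert header.
- from:
    uri: "timer:tick?period=10000"
    steps:
      - setBody:
          constant: '{"_id": "42", "username": "oscerd", "city": "Rome"}'
      - setHeader:
          name: db-upsert
          constant: "true"
      - to: "kamelet:mongodb-sink?hosts=mongodb:27017&database=mydb&collection=accounts"
----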

[id="mongodb_sink_configuration_options"]
== Configuration Options

The following table summarizes the configuration options available for the `mongodb-sink` Kamelet:

[width="100%",cols="2,^2,3,^2,^2,^3",options="header"]
|===
| Property| Name| Description| Type| Default| Example

| *collection {empty}* *| MongoDB Collection| Sets the name of the MongoDB collection to bind to this endpoint.| `string` | |
| *database {empty}* *| MongoDB Database| Sets the name of the MongoDB database to target.| `string` | |
| *hosts {empty}* *| MongoDB Hosts| Comma separated list of MongoDB Host Addresses in host:port format.| `string` | |

| createCollection| Collection| Create collection during initialisation if it doesn't exist.| `boolean` | `false`|
| password| MongoDB Password| User password for accessing MongoDB.| `string` | |
| username| MongoDB Username| Username for accessing MongoDB.| `string` | |
| writeConcern| Write Concern| Configure the level of acknowledgment requested from MongoDB for write operations, possible values are ACKNOWLEDGED, W1, W2, W3, UNACKNOWLEDGED, JOURNALED, MAJORITY.| `string` | |

| ssl |  Enable SSL for MongoDB Connection |  Whether to enable an SSL connection to MongoDB |  `boolean` |  `true` |
| sslValidationEnabled |  Enable SSL Certificate Validation and Host Name Checks |  IMPORTANT: this should be disabled only in a test environment, since it can pose security issues. |  `boolean` |  `true` |
|===

*{empty}** = Fields marked with an asterisk are *mandatory*.


[id="mongodb_sink_dependencies"]

== Dependencies



[id="mongodb_sink_usage"]
== Usage





:leveloffset: +1

[id="mongodb_sink_knative_sink"]
=== Knative Sink

You can use the `mongodb-sink` Kamelet as a Knative sink by binding it to a Knative object.

.mongodb-sink-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: mongodb-sink-binding
spec:
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: mongodb-sink
    properties:
      collection: "The MongoDB Collection"
      database: "The MongoDB Database"
      hosts: "The MongoDB Hosts"
----

[id="mongodb_sink_prerequisite"]
==== Prerequisites
Make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="mongodb_sink_procedure_for_using_the_cluster_cli_kafka"]
==== Procedure for using the cluster CLI with Knative

. Save the `mongodb-sink-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the sink by using the following command:
+
[source,bash,subs="attributes+"]
----
oc apply -f mongodb-sink-binding.yaml
----

[id="mongodb_sink_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Knative

Configure and run the sink by using the following command:

[source,bash,subs="attributes+"]
----
kamel bind channel:mychannel mongodb-sink -p "sink.collection=The MongoDB Collection" -p "sink.database=The MongoDB Database" -p "sink.hosts=The MongoDB Hosts"
----

This command creates the KameletBinding in the current namespace on the cluster.

[id="mongodb_sink_kafka_sink"]
=== Kafka Sink

You can use the `mongodb-sink` Kamelet as a Kafka sink by binding it to a Kafka topic.

.mongodb-sink-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: mongodb-sink-binding
spec:
  source:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: mongodb-sink
    properties:
      collection: "The MongoDB Collection"
      database: "The MongoDB Database"
      hosts: "The MongoDB Hosts"
----

[id="mongodb_sink_prerequisites"]
==== Prerequisites

Ensure that you've installed the *AMQ Streams* operator in your OpenShift cluster and created a topic named `my-topic` in the current namespace.
Make also sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="mongodb_sink_procedure_for_using_the_cluster_cli_kafka"]
==== Procedure for using the cluster CLI with Kafka

. Save the `mongodb-sink-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the sink by using the following command:
+
[source,bash,subs="attributes+"]
----
oc apply -f mongodb-sink-binding.yaml
----

[id="mongodb_sink_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Kafka

Configure and run the sink by using the following command:

[source,bash,subs="attributes+"]
----
kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic mongodb-sink -p "sink.collection=The MongoDB Collection" -p "sink.database=The MongoDB Database" -p "sink.hosts=The MongoDB Hosts"
----

This command creates the KameletBinding in the current namespace on the cluster.

:leveloffset: 3


[id="mongodb_sink_kamelets_source_file"]
== Kamelets source file

link:{kamelets-source-url}mongodb-sink.kamelet.yaml[]



:leveloffset: 3
:leveloffset: +1

[id="mongodb-source"]
= MongoDB Source

Consume documents from MongoDB.

If the persistentTailTracking option is enabled, the consumer keeps track of the last consumed message and, on the next restart, consumption resumes from that message. When persistentTailTracking is enabled, the tailTrackIncreasingField must be provided (by default it is optional).

If the persistentTailTracking option is not enabled, the consumer consumes the whole collection and then waits idle for new documents to consume.
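
As a non-normative sketch (the connection values and field name are illustrative assumptions), persistent tail tracking is enabled by supplying both properties on the source side of a Pipe:

[source,yaml]
----
# Source half of a Pipe; connection values are illustrative placeholders.
source:
  ref:
    kind: Kamelet
    apiVersion: camel.apache.org/v1
    name: mongodb-source
  properties:
    hosts: "mongodb:27017"
    database: "mydb"
    collection: "events"
    persistentTailTracking: true
    tailTrackIncreasingField: "insertedAt"   # must be provided when persistent tail tracking is enabled
----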

[id="mongodb_source_configuration_options"]
== Configuration Options

The following table summarizes the configuration options available for the `mongodb-source` Kamelet:

[width="100%",cols="2,^2,3,^2,^2,^3",options="header"]
|===
| Property| Name| Description| Type| Default| Example

| *collection {empty}* *| MongoDB Collection| Sets the name of the MongoDB collection to bind to this endpoint.| `string` | |
| *database {empty}* *| MongoDB Database| Sets the name of the MongoDB database to target.| `string` | |
| *hosts {empty}* *| MongoDB Hosts| Comma separated list of MongoDB Host Addresses in host:port format.| `string` | |

| password | MongoDB Password| User password for accessing MongoDB.| `string` | |
| username | MongoDB Username| Username for accessing MongoDB. The username must be present in MongoDB's authentication database (authenticationDatabase). By default, the MongoDB authenticationDatabase is 'admin'.| `string` | |
| persistentTailTracking| MongoDB Persistent Tail Tracking| Enable persistent tail tracking, which is a mechanism to keep track of the last consumed message across system restarts. The next time the system is up, the endpoint recovers the cursor from the point where it last stopped slurping records.| `boolean` | `false`|
| tailTrackIncreasingField| MongoDB Tail Track Increasing Field| Correlation field in the incoming record which is of increasing nature and is used to position the tailing cursor every time it is generated.| `string` | |

| ssl |  Enable SSL for MongoDB Connection |  Whether to enable an SSL connection to MongoDB |  `boolean` |  `true` |
| sslValidationEnabled |  Enable SSL Certificate Validation and Host Name Checks |  IMPORTANT: this should be disabled only in a test environment, since it can pose security issues. |  `boolean` |  `true` |
|===

*{empty}** = Fields marked with an asterisk are *mandatory*.


[id="mongodb_source_dependencies"]

== Dependencies



[id="mongodb_source_usage"]
== Usage





:leveloffset: +1

[id="mongodb_source_knative_source"]
=== Knative Source

You can use the `mongodb-source` Kamelet as a Knative source by binding it to a Knative object.

.mongodb-source-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: mongodb-source-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: mongodb-source
    properties:
      collection: "The MongoDB Collection"
      database: "The MongoDB Database"
      hosts: "The MongoDB Hosts"
      password: "The MongoDB Password"
      username: "The MongoDB Username"
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
----

[id="mongodb_source_prerequisite"]
==== Prerequisites
Make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="mongodb_source_procedure_for_using_the_cluster_cli_kafka"]
==== Procedure for using the cluster CLI with Knative

. Save the `mongodb-source-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the source by using the following command:
+
[source,bash,subs="attributes+"]
----
oc apply -f mongodb-source-binding.yaml
----

[id="mongodb_source_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Knative

Configure and run the source by using the following command:

[source,bash,subs="attributes+"]
----
kamel bind mongodb-source -p "source.collection=The MongoDB Collection" -p "source.database=The MongoDB Database" -p "source.hosts=The MongoDB Hosts" -p "source.password=The MongoDB Password" -p "source.username=The MongoDB Username" channel:mychannel
----

This command creates the KameletBinding in the current namespace on the cluster.

[id="mongodb_source_kafka_source"]
=== Kafka Source

You can use the `mongodb-source` Kamelet as a Kafka source by binding it to a Kafka topic.

.mongodb-source-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: mongodb-source-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: mongodb-source
    properties:
      collection: "The MongoDB Collection"
      database: "The MongoDB Database"
      hosts: "The MongoDB Hosts"
      password: "The MongoDB Password"
      username: "The MongoDB Username"
  sink:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
----

[id="mongodb_source_prerequisites"]
==== Prerequisites

Ensure that you've installed the *AMQ Streams* operator in your OpenShift cluster and created a topic named `my-topic` in the current namespace.
Make also sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="mongodb_source_procedure_for_using_the_cluster_cli_kafka"]
==== Procedure for using the cluster CLI with Kafka

. Save the `mongodb-source-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the source by using the following command:
+
[source,bash,subs="attributes+"]
----
oc apply -f mongodb-source-binding.yaml
----

[id="mongodb_source_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Kafka

Configure and run the source by using the following command:

[source,bash,subs="attributes+"]
----
kamel bind mongodb-source -p "source.collection=The MongoDB Collection" -p "source.database=The MongoDB Database" -p "source.hosts=The MongoDB Hosts" -p "source.password=The MongoDB Password" -p "source.username=The MongoDB Username" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
----

This command creates the KameletBinding in the current namespace on the cluster.

:leveloffset: 3


[id="mongodb_source_kamelets_source_file"]
== Kamelets source file

link:{kamelets-source-url}mongodb-source.kamelet.yaml[]



:leveloffset: 3
:leveloffset: +1

[id="mysql-sink"]
= MySQL Sink

Send data to a MySQL Database.

This Kamelet expects a JSON as body. The mapping between the JSON fields and parameters is done by key, so if you have the following query:

`{INSERT INTO accounts (username,city) VALUES (:#username,:#city)}`

The Kamelet needs to receive as input something like:

`{"username":"oscerd", "city":"Rome"}`

[id="mysql_sink_configuration_options"]
== Configuration Options

The following table summarizes the configuration options available for the `mysql-sink` Kamelet:

[width="100%",cols="2,^2,3,^2,^2,^3",options="header"]
|===
| Property| Name| Description| Type| Default| Example

| *databaseName {empty}* *| Database Name| The name of the database to connect to| `string` | |
| *query {empty}* *| Query| The Query to execute against the MySQL Database| `string` | | `"INSERT INTO accounts (username,city) VALUES (:#username,:#city)"`
| *serverName {empty}* *| Server Name| The server name for the data source| `string` | | `"localhost"`

| password | Password| The password to use for accessing a secured MySQL Database| `string` | |
| username | Username| The username to use for accessing a secured MySQL Database| `string` | |
| serverPort| Server Port| The server port for the data source| `string`| `3306`|

|===

*{empty}** = Fields marked with an asterisk are *mandatory*.


[id="mysql_sink_dependencies"]

== Dependencies



[id="mysql_sink_usage"]
== Usage

:leveloffset: +1


////
=(.*?)usage
////




:leveloffset: 3

:leveloffset: +1


////
=(.*?)usage
////




:leveloffset: 3

:leveloffset: +1


////
=(.*?)usage
////




:leveloffset: 3

:leveloffset: +1

[id="mysql_sink_knative_sink"]
=== Knative Sink

You can use the `mysql-sink` Kamelet as a Knative sink by binding it to a Knative object.

.mysql-sink-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: mysql-sink-binding
spec:
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: mysql-sink
    properties:
      databaseName: "The Database Name"
      password: "The Password"
      query: "INSERT INTO accounts (username,city) VALUES (:#username,:#city)"
      serverName: "localhost"
      username: "The Username"
----

[id="mysql_sink_prerequisite"]
==== Prerequisites
Make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="mysql_sink_procedure_for_using_the_cluster_cli_kafka"]
==== Procedure for using the cluster CLI with Knative

. Save the `mysql-sink-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the sink by using the following command:
+
[source,bash,subs="attributes+"]
oc apply -f mysql-sink-binding.yaml

[id="mysql_sink_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Knative

Configure and run the sink by using the following command:

[source,bash,subs="attributes+"]
kamel bind channel:mychannel mysql-sink -p "sink.databaseName=The Database Name" -p "sink.password=The Password" -p "sink.query=INSERT INTO accounts (username,city) VALUES (:#username,:#city) " -p "sink.serverName=localhost" -p "sink.username=The Username"

This command creates the KameletBinding in the current namespace on the cluster.

:leveloffset: 3

:leveloffset: +1

[id="mysql_sink_knative_sink"]
=== Knative Sink

You can use the `mysql-sink` Kamelet as a Knative sink by binding it to a Knative object.

.mysql-sink-binding.yaml
[source,yaml,subs="attributes+"]
Copy to Clipboard

apiVersion: camel.apache.org/v1 kind: Pipe metadata: name: mysql-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1 name: mysql-sink properties: databaseName: "The Password" query: "The Password" query: "INSERT INTO accounts (username,city) VALUES (:#username,:#city) " serverName: "localhost" username: "The Username"

[id="mysql_sink_prerequisite"]
==== Prerequisites
Make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="mysql_sink_procedure_for_using_the_cluster_cli_kafka"]
==== Procedure for using the cluster CLI with Knative

. Save the `mysql-sink-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the sink by using the following command:
+
[source,bash,subs="attributes+"]
Copy to Clipboard

oc apply -f mysql-sink-binding.yaml

[id="mysql_sink_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Knative

Configure and run the sink by using the following command:

[source,bash,subs="attributes+"]
Copy to Clipboard

kamel bind channel:mychannel mysql-sink -p "sink.databaseName=The Database Name" -p "sink.password=The Password" -p "sink.query=INSERT INTO accounts (username,city) VALUES (:#username,:#city) " -p "sink.serverName=localhost" -p "sink.username=The Username"

This command creates the KameletBinding in the current namespace on the cluster.

[id="mysql_sink_kafka_sink"]
=== Kafka Sink

You can use the `mysql-sink` Kamelet as a Kafka sink by binding it to a Kafka topic.

.mysql-sink-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: mysql-sink-binding
spec:
  source:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: mysql-sink
    properties:
      databaseName: "The Database Name"
      password: "The Password"
      query: "INSERT INTO accounts (username,city) VALUES (:#username,:#city)"
      serverName: "localhost"
      username: "The Username"
----

[id="mysql_sink_prerequisites"]
==== Prerequisites

Ensure that you've installed the *AMQ Streams* operator in your OpenShift cluster and created a topic named `my-topic` in the current namespace.
Also make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="mysql_sink_procedure_for_using_the_cluster_cli_kafka"]
==== Procedure for using the cluster CLI with Kafka

. Save the `mysql-sink-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the sink by using the following command:
+
[source,bash,subs="attributes+"]
oc apply -f mysql-sink-binding.yaml

[id="mysql_sink_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Kafka

Configure and run the sink by using the following command:

[source,bash,subs="attributes+"]
kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic mysql-sink -p "sink.databaseName=The Database Name" -p "sink.password=The Password" -p "sink.query=INSERT INTO accounts (username,city) VALUES (:#username,:#city) " -p "sink.serverName=localhost" -p "sink.username=The Username"

This command creates the KameletBinding in the current namespace on the cluster.

:leveloffset: 3


[id="mysql_sink_kamelets_source_file"]
== Kamelets source file

link:{kamelets-source-url}mysql-sink.kamelet.yaml[]



:leveloffset: 3
:leveloffset: +1

[id="postgres-sql-sink"]
= PostgreSQL Sink

Send data to a PostgreSQL Database.

This Kamelet expects a JSON document as the body. The mapping between the JSON fields and parameters is done by key, so if you have the following query:

`{INSERT INTO accounts (username,city) VALUES (:#username,:#city)}`

The Kamelet needs to receive as input something like:

`{"username":"oscerd", "city":"Rome"}`

[id="postgresql_sink_configuration_options"]
== Configuration Options

The following table summarizes the configuration options available for the `postgresql-sink` Kamelet:

[width="100%",cols="2,^2,3,^2,^2,^3",options="header"]
|===
| Property| Name| Description| Type| Default| Example

| *databaseName {empty}* *| Database Name| The name of the database to connect to| `string` | |
| *query {empty}* *| Query| The Query to execute against the PostgreSQL Database| `string` | | `"INSERT INTO accounts (username,city) VALUES (:#username,:#city)"`
| *serverName {empty}* *| Server Name| The server name for the data source| `string` | | `"localhost"`

| password | Password| The password to use for accessing a secured PostgreSQL Database| `string` | |
| username | Username| The username to use for accessing a secured PostgreSQL Database| `string` | |
| serverPort| Server Port| The server port for the data source| `string`| `5432`|

|===

*{empty}** = Fields marked with an asterisk are *mandatory*.


[id="postgresql_sink_dependencies"]

== Dependencies



[id="postgresql_sink_usage"]
== Usage

:leveloffset: +1


////
=(.*?)usage
////




:leveloffset: 3

:leveloffset: +1


////
=(.*?)usage
////




:leveloffset: 3

:leveloffset: +1


////
=(.*?)usage
////




:leveloffset: 3

:leveloffset: +1

[id="postgresql_sink_knative_sink"]
=== Knative Sink

You can use the `postgresql-sink` Kamelet as a Knative sink by binding it to a Knative object.

.postgresql-sink-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: postgresql-sink-binding
spec:
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: postgresql-sink
    properties:
      databaseName: "The Database Name"
      password: "The Password"
      query: "INSERT INTO accounts (username,city) VALUES (:#username,:#city)"
      serverName: "localhost"
      username: "The Username"
----

[id="postgresql_sink_prerequisite"]
==== Prerequisites
Make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="postgresql_sink_procedure_for_using_the_cluster_cli_kafka"]
==== Procedure for using the cluster CLI with Knative

. Save the `postgresql-sink-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the sink by using the following command:
+
[source,bash,subs="attributes+"]
oc apply -f postgresql-sink-binding.yaml

[id="postgresql_sink_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Knative

Configure and run the sink by using the following command:

[source,bash,subs="attributes+"]
kamel bind channel:mychannel postgresql-sink -p "sink.databaseName=The Database Name" -p "sink.password=The Password" -p "sink.query=INSERT INTO accounts (username,city) VALUES (:#username,:#city) " -p "sink.serverName=localhost" -p "sink.username=The Username"

This command creates the KameletBinding in the current namespace on the cluster.

:leveloffset: 3

:leveloffset: +1

[id="postgresql_sink_knative_sink"]
=== Knative Sink

You can use the `postgresql-sink` Kamelet as a Knative sink by binding it to a Knative object.

.postgresql-sink-binding.yaml
[source,yaml,subs="attributes+"]
Copy to Clipboard

apiVersion: camel.apache.org/v1 kind: Pipe metadata: name: postgresql-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1 name: postgresql-sink properties: databaseName: "The Password" query: "The Password" query: "INSERT INTO accounts (username,city) VALUES (:#username,:#city) " serverName: "localhost" username: "The Username"

[id="postgresql_sink_prerequisite"]
==== Prerequisites
Make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="postgresql_sink_procedure_for_using_the_cluster_cli_kafka"]
==== Procedure for using the cluster CLI with Knative

. Save the `postgresql-sink-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the sink by using the following command:
+
[source,bash,subs="attributes+"]
Copy to Clipboard

oc apply -f postgresql-sink-binding.yaml

[id="postgresql_sink_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Knative

Configure and run the sink by using the following command:

[source,bash,subs="attributes+"]
Copy to Clipboard

kamel bind channel:mychannel postgresql-sink -p "sink.databaseName=The Database Name" -p "sink.password=The Password" -p "sink.query=INSERT INTO accounts (username,city) VALUES (:#username,:#city) " -p "sink.serverName=localhost" -p "sink.username=The Username"

This command creates the KameletBinding in the current namespace on the cluster.

[id="postgresql_sink_kafka_sink"]
=== Kafka Sink

You can use the `postgresql-sink` Kamelet as a Kafka sink by binding it to a Kafka topic.

.postgresql-sink-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: postgresql-sink-binding
spec:
  source:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: postgresql-sink
    properties:
      databaseName: "The Database Name"
      password: "The Password"
      query: "INSERT INTO accounts (username,city) VALUES (:#username,:#city)"
      serverName: "localhost"
      username: "The Username"
----

[id="postgresql_sink_prerequisites"]
==== Prerequisites

Ensure that you've installed the *AMQ Streams* operator in your OpenShift cluster and created a topic named `my-topic` in the current namespace.
Also make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="postgresql_sink_procedure_for_using_the_cluster_cli_kafka"]
==== Procedure for using the cluster CLI with Kafka

. Save the `postgresql-sink-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the sink by using the following command:
+
[source,bash,subs="attributes+"]
oc apply -f postgresql-sink-binding.yaml

[id="postgresql_sink_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Kafka

Configure and run the sink by using the following command:

[source,bash,subs="attributes+"]
kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic postgresql-sink -p "sink.databaseName=The Database Name" -p "sink.password=The Password" -p "sink.query=INSERT INTO accounts (username,city) VALUES (:#username,:#city) " -p "sink.serverName=localhost" -p "sink.username=The Username"

This command creates the KameletBinding in the current namespace on the cluster.

:leveloffset: 3


[id="postgresql_sink_kamelets_source_file"]
== Kamelets source file

link:{kamelets-source-url}postgresql-sink.kamelet.yaml[]



:leveloffset: 3
:leveloffset: +1

[id="predicate-filter-action"]
= Predicate Filter Action

Filter based on a JsonPath Expression

[id="predicate_filter_action_configuration_options"]
== Configuration Options

The following table summarizes the configuration options available for the `predicate-filter-action` Kamelet:

[width="100%",cols="2,^2,3,^2,^2,^3",options="header"]
|===
| Property| Name| Description| Type| Default| Example

| *expression {empty}* *| Expression| The JsonPath expression to evaluate, without the outer parentheses. Because this Kamelet acts as a filter, the expression is applied as a negation: if, as in the example, the `foo` field equals John, the message goes ahead; otherwise it is filtered out.| `string` | | `"@.foo =~ /.*John/"`
|===

*{empty}** = Fields marked with an asterisk are *mandatory*.
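
For example (illustrative message bodies, not taken from this document), with the expression `@.foo =~ /.*John/` a body of `{"foo": "Paul John"}` matches and the message continues along the route, while `{"foo": "Jane Doe"}` does not match and is filtered out.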


[id="predicate_filter_action_dependencies"]

== Dependencies



[id="predicate_filter_action_usage"]
== Usage




:leveloffset: +1

[id="predicate_filter_action_knative_action"]
=== Knative Action

You can use the `predicate-filter-action` Kamelet as an intermediate step in a Knative binding.

.predicate-filter-action-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: predicate-filter-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: timer-source
    properties:
      message: "Hello"
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1
        name: predicate-filter-action
      properties:
        expression: "@.foo =~ /.*John/"
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
----

[id="predicate_filter_action_prerequisite"]
==== Prerequisites
Make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="predicate_filter_action_procedure_for_using_the_cluster_cli_kafka"]
==== Procedure for using the cluster CLI with Knative

. Save the `predicate-filter-action-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the action by using the following command:
+
[source,bash,subs="attributes+"]
oc apply -f predicate-filter-action-binding.yaml

[id="predicate_filter_action_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Knative

Configure and run the action by using the following command:

[source,bash,subs="attributes+"]
kamel bind timer-source?message=Hello --step predicate-filter-action -p "step-0.expression=@.foo =~ /.*John/" channel:mychannel

This command creates the KameletBinding in the current namespace on the cluster.

:leveloffset: 3

:leveloffset: +1

[id="predicate_filter_action_knative_action"]
=== Knative Action

You can use the `predicate-filter-action` Kamelet as an intermediate step in a Knative binding.

.predicate-filter-action-binding.yaml
[source,yaml,subs="attributes+"]
Copy to Clipboard

apiVersion: camel.apache.org/v1 kind: Pipe metadata: name: predicate-filter-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1 name: timer-source properties: message: "Hello" steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1 name: predicate-filter-action properties: expression: "@.foo =~ /.*John/" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel

[id="predicate_filter_action_prerequisite"]
==== Prerequisites
Make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="predicate_filter_action_procedure_for_using_the_cluster_cli_kafka"]
==== Procedure for using the cluster CLI with Knative

. Save the `predicate-filter-action-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the action by using the following command:
+
[source,bash,subs="attributes+"]
Copy to Clipboard

oc apply -f predicate-filter-action-binding.yaml

[id="predicate_filter_action_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Knative

Configure and run the action by using the following command:

[source,bash,subs="attributes+"]
Copy to Clipboard

kamel bind timer-source?message=Hello --step predicate-filter-action -p "step-0.expression=@.foo =~ /.*John/" channel:mychannel

This command creates the KameletBinding in the current namespace on the cluster.

[id="predicate_filter_action_kafka_action"]
=== Kafka Action

You can use the `predicate-filter-action` Kamelet as an intermediate step in a Kafka binding.

.predicate-filter-action-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: predicate-filter-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: timer-source
    properties:
      message: "Hello"
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1
        name: predicate-filter-action
      properties:
        expression: "@.foo =~ /.*John/"
  sink:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
----

[id="predicate_filter_action_prerequisites"]
==== Prerequisites

Ensure that you've installed the *AMQ Streams* operator in your OpenShift cluster and created a topic named `my-topic` in the current namespace.
Also make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="predicate_filter_action_procedure_for_using_the_cluster_cli_kafka"]
==== Procedure for using the cluster CLI with Kafka

. Save the `predicate-filter-action-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the action by using the following command:
+
[source,bash,subs="attributes+"]
oc apply -f predicate-filter-action-binding.yaml

[id="predicate_filter_action_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Kafka

Configure and run the action by using the following command:

[source,bash,subs="attributes+"]
kamel bind timer-source?message=Hello --step predicate-filter-action -p "step-0.expression=@.foo =~ /.*John/" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic

This command creates the KameletBinding in the current namespace on the cluster.

:leveloffset: 3


[id="predicate_filter_action_kamelets_source_file"]
== Kamelets source file

link:{kamelets-source-url}predicate-filter-action.kamelet.yaml[]



:leveloffset: 3
:leveloffset: +1

[id="protobuf-deserialize-action"]
= Protobuf Deserialize Action

Deserialize payload to Protobuf

[id="protobuf_deserialize_action_configuration_options"]
== Configuration Options

The following table summarizes the configuration options available for the `protobuf-deserialize-action` Kamelet:

[width="100%",cols="2,^2,3,^2,^2,^3",options="header"]
|===
| Property| Name| Description| Type| Default| Example

| *schema {empty}* *| Schema| The Protobuf schema to use during deserialization (as single-line)| `string` | | `"message Person { required string first = 1; required string last = 2; }"`
|===

*{empty}** = Fields marked with an asterisk are *mandatory*.


[id="protobuf_deserialize_action_dependencies"]

== Dependencies



[id="protobuf_deserialize_action_usage"]
== Usage




:leveloffset: +1

[id="protobuf_deserialize_action_knative_action"]
=== Knative Action

You can use the `protobuf-deserialize-action` Kamelet as an intermediate step in a Knative binding.

.protobuf-deserialize-action-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: protobuf-deserialize-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: timer-source
    properties:
      message: '{"first": "John", "last":"Doe"}'
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1
        name: json-deserialize-action
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1
        name: protobuf-serialize-action
      properties:
        schema: "message Person { required string first = 1; required string last = 2; }"
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1
        name: protobuf-deserialize-action
      properties:
        schema: "message Person { required string first = 1; required string last = 2; }"
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
----

[id="protobuf_deserialize_action_prerequisite"]
==== Prerequisites
Make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="protobuf_deserialize_action_procedure_for_using_the_cluster_cli_kafka"]
==== Procedure for using the cluster CLI with Knative

. Save the `protobuf-deserialize-action-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the action by using the following command:
+
[source,bash,subs="attributes+"]
oc apply -f protobuf-deserialize-action-binding.yaml

[id="protobuf_deserialize_action_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Knative

Configure and run the action by using the following command:

[source,bash,subs="attributes+"]
kamel bind --name protobuf-deserialize-action-binding timer-source?message='{"first":"John","last":"Doe"}' --step json-deserialize-action --step protobuf-serialize-action -p step-1.schema='message Person { required string first = 1; required string last = 2; }' --step protobuf-deserialize-action -p step-2.schema='message Person { required string first = 1; required string last = 2; }' channel:mychannel

This command creates the KameletBinding in the current namespace on the cluster.

:leveloffset: 3

:leveloffset: +1

[id="protobuf_deserialize_action_knative_action"]
=== Knative Action

You can use the `protobuf-deserialize-action` Kamelet as an intermediate step in a Knative binding.

.protobuf-deserialize-action-binding.yaml
[source,yaml,subs="attributes+"]
Copy to Clipboard

apiVersion: camel.apache.org/v1 kind: Pipe metadata: name: protobuf-deserialize-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1 name: timer-source properties: message: '{"first": "John", "last":"Doe"}' steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1 name: json-deserialize-action - ref: kind: Kamelet apiVersion: camel.apache.org/v1 name: protobuf-serialize-action properties: schema: "message Person { required string first [id="protobuf_deserialize_action_1;_required_string_last_=2;}"][id="protobuf_deserialize_action_1;_required_string_last_=2;}""] = 1; required string last = 2; }" - ref: kind: kind: Kamelet apiVersion: camel.apache.org/v1 name: protobuf-deserialize-action properties: schema: "message Person { required string first [id="protobuf_deserialize_action_1;_required_string_last_=2;}""] [id="protobuf_deserialize_action_1;_required_string_last_=2;}""] = 1; required string last = 2; }" sink: ref: kind: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel

[id="protobuf_deserialize_action_prerequisite"]
==== Prerequisites
Make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="protobuf_deserialize_action_procedure_for_using_the_cluster_cli_kafka"]
==== Procedure for using the cluster CLI with Knative

. Save the `protobuf-deserialize-action-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the action by using the following command:
+
[source,bash,subs="attributes+"]
Copy to Clipboard

oc apply -f protobuf-deserialize-action-binding.yaml

[id="protobuf_deserialize_action_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Knative

Configure and run the action by using the following command:

[source,bash,subs="attributes+"]
Copy to Clipboard

kamel bind --name protobuf-deserialize-action-binding timer-source?message='{"first":"John","last":"Doe"}' --step json-deserialize-action --step protobuf-serialize-action -p step-1.schema='message Person { required string first [id="protobuf_deserialize_action_1;_required_string_last_=2;}'step_protobuf_deserialize_actionp_step_2.schema='message_person{required_string_first=1; _required_string_last=2;}'_channel:mychannel"]= 1; required string last = 2; }' --step protobuf-deserialize-action -p step-2.schema='message Person { required string first = 1; required string last = 2; }' channel:mychannel

This command creates the KameletBinding in the current namespace on the cluster.

[id="protobuf_deserialize_action_kafka_action"]
=== Kafka Action

You can use the `protobuf-deserialize-action` Kamelet as an intermediate step in a Kafka binding.

.protobuf-deserialize-action-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: protobuf-deserialize-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: timer-source
    properties:
      message: '{"first": "John", "last":"Doe"}'
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1
        name: json-deserialize-action
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1
        name: protobuf-serialize-action
      properties:
        schema: "message Person { required string first = 1; required string last = 2; }"
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1
        name: protobuf-deserialize-action
      properties:
        schema: "message Person { required string first = 1; required string last = 2; }"
  sink:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
----

[id="protobuf_deserialize_action_prerequisites"]
==== Prerequisites

Ensure that you've installed the *AMQ Streams* operator in your OpenShift cluster and created a topic named `my-topic` in the current namespace.
Also make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="protobuf_deserialize_action_procedure_for_using_the_cluster_cli_kafka"]
==== Procedure for using the cluster CLI with Kafka

. Save the `protobuf-deserialize-action-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the action by using the following command:
+
[source,bash,subs="attributes+"]
oc apply -f protobuf-deserialize-action-binding.yaml

[id="protobuf_deserialize_action_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Kafka

Configure and run the action by using the following command:

[source,bash,subs="attributes+"]
kamel bind --name protobuf-deserialize-action-binding timer-source?message='{"first":"John","last":"Doe"}' --step json-deserialize-action --step protobuf-serialize-action -p step-1.schema='message Person { required string first = 1; required string last = 2; }' --step protobuf-deserialize-action -p step-2.schema='message Person { required string first = 1; required string last = 2; }' kafka.strimzi.io/v1beta1:KafkaTopic:my-topic

This command creates the KameletBinding in the current namespace on the cluster.

:leveloffset: 3


[id="protobuf_deserialize_action_kamelets_source_file"]
== Kamelets source file

link:{kamelets-source-url}protobuf-deserialize-action.kamelet.yaml[]



:leveloffset: 3
:leveloffset: +1

[id="protobuf-serialize-action"]
= Protobuf Serialize Action

Serialize payload to Protobuf

[id="protobuf_serialize_action_configuration_options"]
== Configuration Options

The following table summarizes the configuration options available for the `protobuf-serialize-action` Kamelet:

[width="100%",cols="2,^2,3,^2,^2,^3",options="header"]
|===
| Property| Name| Description| Type| Default| Example

| *schema {empty}* *| Schema| The Protobuf schema to use during serialization (as single-line)| `string` | | `"message Person { required string first = 1; required string last = 2; }"`
|===

*{empty}** = Fields marked with an asterisk are *mandatory*.


[id="protobuf_serialize_action_dependencies"]

== Dependencies



[id="protobuf_serialize_action_usage"]
== Usage




:leveloffset: +1

[id="protobuf_serialize_action_knative_action"]
=== Knative Action

You can use the `protobuf-serialize-action` Kamelet as an intermediate step in a Knative binding.

.protobuf-serialize-action-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: protobuf-serialize-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: timer-source
    properties:
      message: '{"first": "John", "last":"Doe"}'
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1
        name: json-deserialize-action
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1
        name: protobuf-serialize-action
      properties:
        schema: "message Person { required string first = 1; required string last = 2; }"
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
----

[id="protobuf_serialize_action_prerequisite"]
==== Prerequisites
Make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="protobuf_serialize_action_procedure_for_using_the_cluster_cli_kafka"]
==== Procedure for using the cluster CLI with Knative

. Save the `protobuf-serialize-action-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the action by using the following command:
+
[source,bash,subs="attributes+"]
oc apply -f protobuf-serialize-action-binding.yaml

[id="protobuf_serialize_action_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Knative

Configure and run the action by using the following command:

[source,bash,subs="attributes+"]
kamel bind --name protobuf-serialize-action-binding timer-source?message='{"first":"John","last":"Doe"}' --step json-deserialize-action --step protobuf-serialize-action -p step-1.schema='message Person { required string first = 1; required string last = 2; }' channel:mychannel

This command creates the KameletBinding in the current namespace on the cluster.

:leveloffset: 3

:leveloffset: +1

[id="protobuf_serialize_action_knative_action"]
=== Knative Action

You can use the `protobuf-serialize-action` Kamelet as an intermediate step in a Knative binding.

.protobuf-serialize-action-binding.yaml
[source,yaml,subs="attributes+"]
Copy to Clipboard

apiVersion: camel.apache.org/v1 kind: Pipe metadata: name: protobuf-serialize-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1 name: timer-source properties: message: '{"first": "John", "last":"Doe"}' steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1 name: json-deserialize-action - ref: kind: Kamelet apiVersion: camel.apache.org/v1 name: protobuf-serialize-action properties: schema: "message Person { required string first [id="protobuf_serialize_action_1;_required_string_last_=2;}""] = 1; required string last = 2; }" sink: ref: kind: type: Channel apiVersion: messaging.knative.dev/v1 name: mychannel

[id="protobuf_serialize_action_prerequisite"]
==== Prerequisites
Make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="protobuf_serialize_action_procedure_for_using_the_cluster_cli_kafka"]
==== Procedure for using the cluster CLI with Knative

. Save the `protobuf-serialize-action-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the action by using the following command:
+
[source,bash,subs="attributes+"]
Copy to Clipboard

oc apply -f protobuf-serialize-action-binding.yaml

[id="protobuf_serialize_action_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Knative

Configure and run the action by using the following command:

[source,bash,subs="attributes+"]
Copy to Clipboard

kamel bind --name protobuf-serialize-action-binding timer-source?message='{"first":"John","last":"Doe"}' --step json-deserialize-action --step protobuf-serialize-action -p step-1.schema='message Person { required string first [id="protobuf_serialize_action_1;_required_string_last_=2;}'_channel:mychannel"] = 1; required string last = 2; }' channel:mychannel)

This command creates the KameletBinding in the current namespace on the cluster.

[id="protobuf_serialize_action_kafka_action"]
=== Kafka Action

You can use the `protobuf-serialize-action` Kamelet as an intermediate step in a Kafka binding.

.protobuf-serialize-action-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: protobuf-serialize-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: timer-source
    properties:
      message: '{"first": "John", "last":"Doe"}'
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1
        name: json-deserialize-action
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1
        name: protobuf-serialize-action
      properties:
        schema: "message Person { required string first = 1; required string last = 2; }"
  sink:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
----

[id="protobuf_serialize_action_prerequisites"]
==== Prerequisites

Ensure that you've installed the *AMQ Streams* operator in your OpenShift cluster and created a topic named `my-topic` in the current namespace.
Also make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="protobuf_serialize_action_procedure_for_using_the_cluster_cli_kafka"]
==== Procedure for using the cluster CLI with Kafka

. Save the `protobuf-serialize-action-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the action by using the following command:
+
[source,bash,subs="attributes+"]
oc apply -f protobuf-serialize-action-binding.yaml

[id="protobuf_serialize_action_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Kafka

Configure and run the action by using the following command:

[source,bash,subs="attributes+"]
kamel bind --name protobuf-serialize-action-binding timer-source?message='{"first":"John","last":"Doe"}' --step json-deserialize-action --step protobuf-serialize-action -p step-1.schema='message Person { required string first = 1; required string last = 2; }' kafka.strimzi.io/v1beta1:KafkaTopic:my-topic

This command creates the KameletBinding in the current namespace on the cluster.

:leveloffset: 3


[id="protobuf_serialize_action_kamelets_source_file"]
== Kamelets source file

link:{kamelets-source-url}protobuf-serialize-action.kamelet.yaml[]



:leveloffset: 3
:leveloffset: +1

[id="regex-router-action"]
= Regex Router Action

Update the destination using the configured regular expression and replacement string

[id="regex_router_action_configuration_options"]
== Configuration Options

The following table summarizes the configuration options available for the `regex-router-action` Kamelet:

[width="100%",cols="2,^2,3,^2,^2,^3",options="header"]
|===
| Property| Name| Description| Type| Default| Example

| *regex {empty}* *| Regex| Regular Expression for destination| `string` | |
| *replacement {empty}* *| Replacement| Replacement when matching| `string` | |
|===

*{empty}** = Fields marked with an asterisk are *mandatory*.
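
For example (a sketch with illustrative values, not taken from this document), step properties such as the following would rewrite a destination name like `orders-europe` to `audit-europe`, assuming Java regular expression replacement semantics where `$1` refers to the first capture group:

[source,yaml]
----
# Hypothetical regex-router-action step properties:
# "orders-(.*)" captures everything after the prefix, and the
# "$1" back-reference reuses it in the new destination name.
properties:
  regex: "orders-(.*)"
  replacement: "audit-$1"
----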


[id="regex_router_action_dependencies"]

== Dependencies



[id="regex_router_action_usage"]
== Usage




:leveloffset: +1

[id="regex_router_action_knative_action"]
=== Knative Action

You can use the `regex-router-action` Kamelet as an intermediate step in a Knative binding.

.regex-router-action-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: regex-router-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: timer-source
    properties:
      message: "Hello"
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1
        name: regex-router-action
      properties:
        regex: "The Regex"
        replacement: "The Replacement"
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
----

[id="regex_router_action_prerequisite"]
==== Prerequisites
Make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="regex_router_action_procedure_for_using_the_cluster_cli_kafka"]
==== Procedure for using the cluster CLI with Knative

. Save the `regex-router-action-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the action by using the following command:
+
[source,bash,subs="attributes+"]
oc apply -f regex-router-action-binding.yaml

[id="regex_router_action_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Knative

Configure and run the action by using the following command:

[source,bash,subs="attributes+"]
kamel bind timer-source?message=Hello --step regex-router-action -p "step-0.regex=The Regex" -p "step-0.replacement=The Replacement" channel:mychannel

This command creates the KameletBinding in the current namespace on the cluster.

:leveloffset: 3

:leveloffset: +1

[id="regex_router_action_knative_action"]
=== Knative Action

You can use the `regex-router-action` Kamelet as an intermediate step in a Knative binding.

.regex-router-action-binding.yaml
[source,yaml,subs="attributes+"]
Copy to Clipboard

apiVersion: camel.apache.org/v1 kind: Pipe metadata: name: regex-router-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1 name: timer-source properties: message: "Hello" steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1 name: regex-router-action properties: regex: "The Regex" replacement: "The Replacement" sink: ref: kind: kind Channel apiVersion: messaging.knative.dev/v1 name: mychannel

[id="regex_router_action_prerequisite"]
==== Prerequisites
Make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="regex_router_action_procedure_for_using_the_cluster_cli_kafka"]
==== Procedure for using the cluster CLI with Knative

. Save the `regex-router-action-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the action by using the following command:
+
[source,bash,subs="attributes+"]
Copy to Clipboard

oc apply -f regex-router-action-binding.yaml

[id="regex_router_action_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Knative

Configure and run the action by using the following command:

[source,bash,subs="attributes+"]
Copy to Clipboard

kamel bind timer-source?message=Hello --step regex-router-action -p "step-0.regex=The Regex" -p "step-0.replacement=The Replacement" channel:mychannel

This command creates the KameletBinding in the current namespace on the cluster.

[id="regex_router_action_kafka_action"]
=== Kafka Action

You can use the `regex-router-action` Kamelet as an intermediate step in a Kafka binding.

.regex-router-action-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: regex-router-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: timer-source
    properties:
      message: "Hello"
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1
        name: regex-router-action
      properties:
        regex: "The Regex"
        replacement: "The Replacement"
  sink:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
----

[id="regex_router_action_prerequisites"]
==== Prerequisites

Ensure that you've installed the *AMQ Streams* operator in your OpenShift cluster and created a topic named `my-topic` in the current namespace.
Also make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="regex_router_action_procedure_for_using_the_cluster_cli_kafka"]
==== Procedure for using the cluster CLI with Kafka

. Save the `regex-router-action-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the action by using the following command:
+
[source,bash,subs="attributes+"]
oc apply -f regex-router-action-binding.yaml

[id="regex_router_action_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Kafka

Configure and run the action by using the following command:

[source,bash,subs="attributes+"]
kamel bind timer-source?message=Hello --step regex-router-action -p "step-0.regex=The Regex" -p "step-0.replacement=The Replacement" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic

This command creates the KameletBinding in the current namespace on the cluster.

:leveloffset: 3


[id="regex_router_action_kamelets_source_file"]
== Kamelets source file

link:{kamelets-source-url}regex-router-action.kamelet.yaml[]



:leveloffset: 3
:leveloffset: +1

[id="replace-field-action"]
= Replace Field Action

Replace a field with a different key in the message in transit.

* The required parameter `{renames}` is a comma-separated list of colon-delimited renaming pairs, for example `{foo:bar,abc:xyz}`, and represents the field rename mappings (see the example after this list).

* The optional parameter `{enabled}` represents the fields to include. If specified, only the named fields are included in the resulting message.

* The optional parameter `{disabled}` represents the fields to exclude. If specified, the listed fields are excluded from the resulting message. This takes precedence over the `{enabled}` parameter.

* The default value of the `{enabled}` parameter is `{all}`, so all fields of the payload are included.

* The default value of the `{disabled}` parameter is `{none}`, so no fields of the payload are excluded.
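
For example (illustrative data, not taken from this document), an input body `{"foo": "John", "c1": "Rome", "age": 26}` processed with `renames: "foo:bar,c1:c2"` becomes `{"bar": "John", "c2": "Rome", "age": 26}`.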

[id="replace_field_action_configuration_options"]
== Configuration Options

The following table summarizes the configuration options available for the `replace-field-action` Kamelet:

[width="100%",cols="2,^2,3,^2,^2,^3",options="header"]
|===
| Property| Name| Description| Type| Default| Example

| *renames {empty}* *| Renames| Comma-separated list of colon-delimited field rename pairs| `string` | | `"foo:bar,c1:c2"`
| disabled | Disabled| Comma-separated list of fields to exclude| `string`| `"none"`|
| enabled | Enabled| Comma-separated list of fields to include| `string`| `"all"`|

|===

*{empty}** = Fields marked with an asterisk are *mandatory*.


[id="replace_field_action_dependencies"]

== Dependencies



[id="replace_field_action_usage"]
== Usage




:leveloffset: +1

[id="replace_field_action_knative_action"]
=== Knative Action

You can use the `replace-field-action` Kamelet as an intermediate step in a Knative binding.

.replace-field-action-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: replace-field-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: timer-source
    properties:
      message: "Hello"
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1
        name: replace-field-action
      properties:
        renames: "foo:bar,c1:c2"
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
----

[id="replace_field_action_prerequisite"]
==== Prerequisites
Make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="replace_field_action_procedure_for_using_the_cluster_cli"]
==== Procedure for using the cluster CLI

. Save the `replace-field-action-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the action by using the following command:
+
[source,bash,subs="attributes+"]
oc apply -f replace-field-action-binding.yaml

[id="replace_field_action_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Knative

Configure and run the action by using the following command:

[source,bash,subs="attributes+"]
kamel bind timer-source?message=Hello --step replace-field-action -p "step-0.renames=foo:bar,c1:c2" channel:mychannel

This command creates the KameletBinding in the current namespace on the cluster.

:leveloffset: 3

:leveloffset: +1

[id="replace_field_action_knative_action"]
=== Knative Action

You can use the `replace-field-action` Kamelet as an intermediate step in a Knative binding.

.replace-field-action-binding.yaml
[source,yaml,subs="attributes+"]
Copy to Clipboard

apiVersion: camel.apache.org/v1 kind: Pipe metadata: name: replace-field-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1 name: timer-source properties: message: "Hello" steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1 name: replace-field-action properties: renames: "foo:bar,c1:c2" sink: ref: kind Channel apiVersion: messaging.knative.dev/v1 name: mychannel

[id="replace_field_action_prerequisite"]
==== Prerequisites
Make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="replace_field_action_procedure_for_using_the_cluster_cli"]
==== Procedure for using the cluster CLI

. Save the `replace-field-action-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the action by using the following command:
+
[source,bash,subs="attributes+"]
Copy to Clipboard

oc apply -f replace-field-action-binding.yaml

[id="replace_field_action_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Knative

Configure and run the action by using the following command:

[source,bash,subs="attributes+"]
Copy to Clipboard

kamel bind timer-source?message=Hello --step replace-field-action -p "step-0.renames=foo:bar,c1:c2" channel:mychannel

This command creates the KameletBinding in the current namespace on the cluster.

[id="replace_field_action_kafka_action"]
=== Kafka Action

You can use the `replace-field-action` Kamelet as an intermediate step in a Kafka binding.

.replace-field-action-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: replace-field-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: timer-source
    properties:
      message: "Hello"
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1
        name: replace-field-action
      properties:
        renames: "foo:bar,c1:c2"
  sink:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
----

[id="replace_field_action_prerequisites"]
==== Prerequisites

Ensure that you've installed the *AMQ Streams* operator in your OpenShift cluster and created a topic named `my-topic` in the current namespace.
Also make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="replace_field_action_procedure_for_using_the_cluster_cli_kafka"]
==== Procedure for using the cluster CLI with Kafka

. Save the `replace-field-action-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the action by using the following command:
+
[source,bash,subs="attributes+"]
oc apply -f replace-field-action-binding.yaml

[id="replace_field_action_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Kafka

Configure and run the action by using the following command:

[source,bash,subs="attributes+"]
kamel bind timer-source?message=Hello --step replace-field-action -p "step-0.renames=foo:bar,c1:c2" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic

This command creates the KameletBinding in the current namespace on the cluster.

:leveloffset: 3


[id="replace_field_action_kamelets_source_file"]
== Kamelets source file

link:{kamelets-source-url}replace-field-action.kamelet.yaml[]



:leveloffset: 3
:leveloffset: +1

[id="salesforce-source"]
= Salesforce Source

Receive updates from Salesforce.

[id="salesforce_source_configuration_options"]
== Configuration Options

The following table summarizes the configuration options available for the `salesforce-source` Kamelet:

[width="100%",cols="2,^2,3,^2,^2,^3",options="header"]
|===
| Property| Name| Description| Type| Default| Example

| *clientId {empty}* *| Consumer Key| The Salesforce application consumer key| `string` | |
| *clientSecret {empty}* *| Consumer Secret| The Salesforce application consumer secret| `string` (_password format_)| |
| *userName {empty}* *| Username| The Salesforce username| `string` | |
| *password {empty}* *| Password| The Salesforce user password| `string` (_password format_)| |
| *query {empty}* *| Query| The query to execute on Salesforce| `string` | | `"SELECT Id, Name, Email, Phone FROM Contact"`
| *topicName {empty}* *| Topic Name| The name of the topic or channel to use| `string` | | `"ContactTopic"`

| loginUrl| Login URL| The Salesforce instance login URL| `string`| `"https://login.salesforce.com"`|
| notifyForFields| Notify For Fields| Notify for fields| `string`| `ALL`| `[ "ALL", "REFERENCED", "SELECT", "WHERE"]`
| notifyForOperationCreate| Notify Operation Create| Notify for create operation| `boolean`| `true`|
| notifyForOperationUpdate| Notify Operation Update| Notify for update operation| `boolean`| `false`|
| notifyForOperationDelete| Notify Operation Delete| Notify for delete operation| `boolean`| `false`|
| notifyForOperationUndelete| Notify Operation Undelete| Notify for undelete operation| `boolean`| `false`|
| operation| Operation| The operation to use| `string`| `subscribe`|
| rawPayload| Raw Payload| Use raw payload String for request and response (either JSON or XML depending on format), instead of DTOs| `boolean`| `false`|
| replayId| Replay Id| The replayId value to use when subscribing to the Streaming API| `long`| |

|===

*{empty}** = Fields marked with an asterisk are *mandatory*.
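
For example, when the source should authenticate against a Salesforce sandbox org rather than production (an assumption for illustration, not something this document covers), the optional `loginUrl` property is the one to override:

[source,yaml]
----
# Hypothetical override of the optional loginUrl property for a sandbox org;
# the mandatory properties (clientId, clientSecret, userName, password,
# query, topicName) are still required alongside it.
properties:
  loginUrl: "https://test.salesforce.com"
----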


[id="salesforce_source_dependencies"]

== Dependencies



[id="salesforce_source_usage"]
== Usage




:leveloffset: +1

[id="salesforce_source_knative_source"]
=== Knative Source

You can use the `salesforce-source` Kamelet as a Knative source by binding it to a Knative object.

.salesforce-source-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: salesforce-source-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: salesforce-source
    properties:
      clientId: "The Consumer Key"
      clientSecret: "The Consumer Secret"
      password: "The Password"
      query: "SELECT Id, Name, Email, Phone FROM Contact"
      topicName: "ContactTopic"
      userName: "The Username"
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
----

[id="salesforce_source_prerequisite"]
==== Prerequisites
Make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="salesforce_source_procedure_for_using_the_cluster_cli_kafka"]
==== Procedure for using the cluster CLI with Knative

. Save the `salesforce-source-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the source by using the following command:
+
[source,bash,subs="attributes+"]
oc apply -f salesforce-source-binding.yaml

[id="salesforce_source_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Knative

Configure and run the source by using the following command:

[source,bash,subs="attributes+"]
kamel bind salesforce-source -p "source.clientId=The Consumer Key" -p "source.clientSecret=The Consumer Secret" -p "source.password=The Password" -p "source.query=SELECT Id, Name, Email, Phone FROM Contact" -p "source.topicName=ContactTopic" -p "source.userName=The Username" channel:mychannel

This command creates the KameletBinding in the current namespace on the cluster.

:leveloffset: 3

:leveloffset: +1

[id="salesforce_source_knative_source"]
=== Knative Source

You can use the `salesforce-source` Kamelet as a Knative source by binding it to a Knative object.

.salesforce-source-binding.yaml
[source,yaml,subs="attributes+"]
Copy to Clipboard

apiVersion: camel.apache.org/v1 kind: Pipe メタデータ:name: salesforce-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1 name: salesforce-source properties: clientId: "The Consumer Key" clientSecret: "The Consumer Secret" password: "The Password" query: "SELECT Id, Name, Email, Phone FROM Contact" topicName: "ContactTopic" userName: "The Username" sink: ref: kind: kind: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel

[id="salesforce_source_prerequisite"]
==== Prerequisites
Make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="salesforce_source_procedure_for_using_the_cluster_cli_kafka"]
==== Procedure for using the cluster CLI with Knative

. Save the `salesforce-source-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the source by using the following command:
+
[source,bash,subs="attributes+"]
----
oc apply -f salesforce-source-binding.yaml
----

[id="salesforce_source_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Knative

Configure and run the source by using the following command:

[source,bash,subs="attributes+"]
----
kamel bind salesforce-source -p "source.clientId=The Consumer Key" -p "source.clientSecret=The Consumer Secret" -p "source.password=The Password" -p "source.query=SELECT Id, Name, Email, Phone FROM Contact" -p "source.topicName=ContactTopic" -p "source.userName=The Username" channel:mychannel
----

This command creates the KameletBinding in the current namespace on the cluster.

[id="salesforce_source_kafka_source"]
=== Kafka Source

You can use the `salesforce-source` Kamelet as a Kafka source by binding it to a Kafka topic.

.salesforce-source-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: salesforce-source-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: salesforce-source
    properties:
      clientId: "The Consumer Key"
      clientSecret: "The Consumer Secret"
      password: "The Password"
      query: "SELECT Id, Name, Email, Phone FROM Contact"
      topicName: "ContactTopic"
      userName: "The Username"
  sink:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
----

[id="salesforce_source_prerequisites"]
==== Prerequisites

Ensure that you've installed the *AMQ Streams* operator in your OpenShift cluster and created a topic named `my-topic` in the current namespace.
Also make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="salesforce_source_procedure_for_using_the_cluster_cli_kafka"]
==== Procedure for using the cluster CLI with Kafka

. Save the `salesforce-source-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the source by using the following command:
+
[source,bash,subs="attributes+"]
----
oc apply -f salesforce-source-binding.yaml
----

[id="salesforce_source_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Kafka

Configure and run the source by using the following command:

[source,bash,subs="attributes+"]
----
kamel bind salesforce-source -p "source.clientId=The Consumer Key" -p "source.clientSecret=The Consumer Secret" -p "source.password=The Password" -p "source.query=SELECT Id, Name, Email, Phone FROM Contact" -p "source.topicName=ContactTopic" -p "source.userName=The Username" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
----

This command creates the KameletBinding in the current namespace on the cluster.

:leveloffset: 3



[id="salesforce_source_kamelets_source_file"]
== Kamelets source file

link:{kamelets-source-url}salesforce-source.kamelet.yaml[]



:leveloffset: 3
:leveloffset: +1

[id="salesforce-sink-create"]

= Salesforce Create Sink

Creates an object in Salesforce. The body of the message must contain
the JSON of the Salesforce object.

Example body: `{ "Phone": "555", "Name": "Antonia", "LastName": "Garcia" }`

[id="salesforce_sink_create_configuration_options"]
== Configuration Options

The following table summarizes the configuration options available for the `salesforce-create-sink` Kamelet:

[width="100%",cols="2,^2,3,^2,^2,^3",options="header"]
|===
| Property| Name| Description| Type| Default| Example

| *clientId {empty}* *| Consumer Key| The Salesforce application consumer key| `string` | |
| *clientSecret {empty}* *| Consumer Secret| The Salesforce application consumer secret| `string` | |
| *password {empty}* * | Password| The Salesforce user password| `string` | |
| *username {empty}* * | Username| The Salesforce username| `string` | |
| loginUrl| Login URL| The Salesforce instance login URL| string| `"https://login.salesforce.com"`|
| sObjectName| Object Name| Type of the object| `string` | | `"Contact"`
|===

*{empty}** = Fields marked with an asterisk are *mandatory*.


[id="salesforce_sink_create_dependencies"]

== Dependencies



[id="salesforce_sink_create_usage"]
== Usage
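
No generated binding examples are included for this Kamelet. The following is a minimal sketch of a Pipe that binds a Knative channel to the `salesforce-create-sink` Kamelet; the property values and the channel name `mychannel` are placeholders, and the property names follow the table above.

[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: salesforce-create-sink-binding
spec:
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: salesforce-create-sink
    properties:
      clientId: "The Consumer Key"
      clientSecret: "The Consumer Secret"
      password: "The Password"
      username: "The Username"
      sObjectName: "Contact"
----

Each message sent to `mychannel` must carry the JSON of the Salesforce object to create, as in the example body above.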







[id="salesforce_sink_create_kamelets_source_file"]
== Kamelets source file

link:{kamelets-source-url}salesforce-create-sink.kamelet.yaml[]

:leveloffset: 3
:leveloffset: +1

[id="salesforce-sink-delete"]

= Salesforce Delete Sink

Removes an object from Salesforce. The body received must be a JSON
containing two keys: `sObjectId` and `sObjectName`.

Example body: `{ "sObjectId": "XXXXX0", "sObjectName": "Contact" }`

[id="salesforce_sink_delete_configuration_options"]
== Configuration Options

The following table summarizes the configuration options available for the `salesforce-delete-sink` Kamelet:

[width="100%",cols="2,^2,3,^2,^2,^3",options="header"]
|===
| Property| Name| Description| Type| Default| Example

| *clientId {empty}* *| Consumer Key| The Salesforce application consumer key| `string` | |
| *clientSecret {empty}* *| Consumer Secret| The Salesforce application consumer secret| `string` | |
| *password {empty}* * | Password| The Salesforce user password| `string` | |
| *username {empty}* * | Username| The Salesforce username| `string` | |
| loginUrl| Login URL| The Salesforce instance login URL| string| `"https://login.salesforce.com"`|
|===

*{empty}** = Fields marked with an asterisk are *mandatory*.


[id="salesforce_sink_delete_dependencies"]

== Dependencies



[id="salesforce_sink_delete_usage"]
== Usage
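
No generated binding examples are included for this Kamelet. The following is a minimal sketch of a Pipe that binds a Knative channel to the `salesforce-delete-sink` Kamelet; the property values and the channel name `mychannel` are placeholders, and the property names follow the table above.

[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: salesforce-delete-sink-binding
spec:
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: salesforce-delete-sink
    properties:
      clientId: "The Consumer Key"
      clientSecret: "The Consumer Secret"
      password: "The Password"
      username: "The Username"
----

Each message sent to `mychannel` must carry the `sObjectId` and `sObjectName` keys, as in the example body above.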







[id="salesforce_sink_delete_kamelets_source_file"]
== Kamelets source file

link:{kamelets-source-url}salesforce-delete-sink.kamelet.yaml[]

:leveloffset: 3
:leveloffset: +1

[id="salesforce-sink-update"]

= Salesforce Update Sink

Updates an object in Salesforce.

The body received must contain a JSON key-value pair for each property to update inside the payload attribute, for example:

`{ "payload": { "Phone": "1234567890", "Name": "Antonia" } }`

The body received must include the `sObjectName` and `sObjectId` properties, for example:

`{ "payload": { "Phone": "1234567890", "Name": "Antonia" }, "sObjectId": "sObjectId", "sObjectName": "sObjectName" }`


[id="salesforce_sink_update_configuration_options"]
== Configuration Options

The following table summarizes the configuration options available for the `salesforce-update-sink` Kamelet:

[width="100%",cols="2,^2,3,^2,^2,^3",options="header"]
|===
| Property| Name| Description| Type| Default| Example

| *clientId {empty}* *| Consumer Key| The Salesforce application consumer key| `string` | |
| *clientSecret {empty}* *| Consumer Secret| The Salesforce application consumer secret| `string` | |
| *password {empty}* * | Password| The Salesforce user password| `string` | |
| *username {empty}* * | Username| The Salesforce username| `string` | |
| loginUrl| Login URL| The Salesforce instance login URL| string| `"https://login.salesforce.com"`|



|===

*{empty}** = Fields marked with an asterisk are *mandatory*.


[id="salesforce_sink_update_dependencies"]

== Dependencies



[id="salesforce_sink_update_usage"]
== Usage




:leveloffset: +1

[id="salesforce_sink_update_knative_sink"]
=== Knative Sink

You can use the `salesforce-update-sink` Kamelet as a Knative sink by binding it to a Knative object.

.salesforce-update-sink-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: salesforce-update-sink-binding
spec:
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: salesforce-update-sink
    properties:
      clientId: "The Consumer Key"
      clientSecret: "The Consumer Secret"
      password: "The Password"
      sObjectId: "The Object Id"
      sObjectName: "Contact"
      userName: "The Username"
----

[id="salesforce_sink_update_prerequisite"]
==== Prerequisites
Make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="salesforce_sink_update_procedure_for_using_the_cluster_cli_kafka"]
==== Procedure for using the cluster CLI with Knative

. Save the `salesforce-update-sink-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the sink by using the following command:
+
[source,bash,subs="attributes+"]
----
oc apply -f salesforce-update-sink-binding.yaml
----

[id="salesforce_sink_update_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Knative

Configure and run the sink by using the following command:

[source,bash,subs="attributes+"]
----
kamel bind channel:mychannel salesforce-update-sink -p "sink.clientId=The Consumer Key" -p "sink.clientSecret=The Consumer Secret" -p "sink.password=The Password" -p "sink.sObjectId=The Object Id" -p "sink.sObjectName=Contact" -p "sink.userName=The Username"
----

This command creates the KameletBinding in the current namespace on the cluster.

[id="salesforce_sink_update_kafka_sink"]
=== Kafka Sink

You can use the `salesforce-update-sink` Kamelet as a Kafka sink by binding it to a Kafka topic.

.salesforce-update-sink-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: salesforce-update-sink-binding
spec:
  source:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: salesforce-update-sink
    properties:
      clientId: "The Consumer Key"
      clientSecret: "The Consumer Secret"
      password: "The Password"
      sObjectId: "The Object Id"
      sObjectName: "Contact"
      userName: "The Username"
----

[id="salesforce_sink_update_prerequisites"]
==== Prerequisites

Ensure that you've installed the *AMQ Streams* operator in your OpenShift cluster and created a topic named `my-topic` in the current namespace.
Also make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="salesforce_sink_update_procedure_for_using_the_cluster_cli_kafka"]
==== Procedure for using the cluster CLI with Kafka

. Save the `salesforce-update-sink-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the sink by using the following command:
+
[source,bash,subs="attributes+"]
----
oc apply -f salesforce-update-sink-binding.yaml
----

[id="salesforce_sink_update_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Kafka

Configure and run the sink by using the following command:

[source,bash,subs="attributes+"]
----
kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic salesforce-update-sink -p "sink.clientId=The Consumer Key" -p "sink.clientSecret=The Consumer Secret" -p "sink.password=The Password" -p "sink.sObjectId=The Object Id" -p "sink.sObjectName=Contact" -p "sink.userName=The Username"
----

This command creates the KameletBinding in the current namespace on the cluster.

:leveloffset: 3


[id="salesforce_sink_update_kamelets_source_file"]
== Kamelets source file

link:{kamelets-source-url}salesforce-update-sink.kamelet.yaml[]

:leveloffset: 3
:leveloffset: +1

[id="sftp-sink"]
= SFTP Sink

Send data to an SFTP Server.

The Kamelet expects the following headers to be set:

- `file` / `ce-file`: as the file name to upload

If the header is not set, the exchange ID is used as the file name. If your source does not set this header, you can add it with an intermediate step, as shown in the sketch below.
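
The following is a minimal sketch of a Pipe that sets the `file` header with the `insert-header-action` Kamelet before uploading. It assumes that Kamelet is available in your catalog; the file name, channel name, and connection values are placeholders.

[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: sftp-sink-with-file-header-binding
spec:
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  steps:
  # Set the "file" header so the upload gets a predictable name
  - ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: insert-header-action
    properties:
      name: "file"
      value: "data.txt"
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: sftp-sink
    properties:
      connectionHost: "The Connection Host"
      directoryName: "The Directory Name"
      password: "The Password"
      username: "The Username"
----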

[id="sftp_sink_configuration_options"]
== Configuration Options

The following table summarizes the configuration options available for the `sftp-sink` Kamelet:

[width="100%",cols="2,^2,3,^2,^2,^3",options="header"]
|===
| Property| Name| Description| Type| Default| Example

| *connectionHost {empty}* * |  Connection Host |  The hostname of the SFTP server |  `string` |  |
| *connectionPort {empty}* * |  Connection Port |  The port of the SFTP server |  `string` |  `22` |
| *directoryName {empty}* *| Directory Name| The starting directory| `string` | |

| username |  Username |  The username to access the SFTP server. |  `string` |  |
| password |  Password |  The password to access the SFTP server. |  `string` (_password format_) |  |
| passiveMode |  Passive Mode |  Specifies whether to use a passive mode connection. |  `boolean` |  `false` |
| fileExist |  File Existence |  How to behave if the file already exists. |  `string` |  `Override` |  `["Override", "Append", "Fail", "Ignore"]`
| binary |  Binary |  Specifies the file transfer mode, BINARY or ASCII. Default is ASCII (false). |  `boolean` |  `false` |
| privateKeyFile |  Private Key File |  Set the private key file so that the SFTP endpoint can do private key verification. |  `string` |  |
| privateKeyPassphrase |  Private Key Passphrase |  Set the private key file passphrase so that the SFTP endpoint can do private key verification. |  `string` |  |
| privateKeyUri |  Private Key URI |  Set the private key file (loaded from classpath by default) so that the SFTP endpoint can do private key verification. |  `string` (_pattern: `^(http\|https\|file\|classpath)://.*`_) |  |
| strictHostKeyChecking |  Strict Host Checking |  Sets whether to use strict host key checking. |  `string` |  `no` |
| useUserKnownHostsFile |  Use User Known Hosts File |  If knownHostsFile has not been explicitly configured, use the known hosts file from System.getProperty(user.home)/.ssh/known_hosts. |  `boolean` |  `true` |
| autoCreate |  Autocreate Missing Directories |  Automatically create the directory the files should be written to. |  `boolean` |  `true` |
|===

*{empty}** = Fields marked with an asterisk are *mandatory*.


[id="sftp_sink_dependencies"]

== Dependencies



[id="sftp_sink_usage"]
== Usage




:leveloffset: +1

[id="sftp_sink_knative_sink"]
=== Knative Sink

You can use the `sftp-sink` Kamelet as a Knative sink by binding it to a Knative object.

.sftp-sink-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: sftp-sink-binding
spec:
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: sftp-sink
    properties:
      connectionHost: "The Connection Host"
      directoryName: "The Directory Name"
      password: "The Password"
      username: "The Username"
----

[id="sftp_sink_prerequisite"]
==== Prerequisites
Make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="sftp_sink_procedure_for_using_the_cluster_cli_kafka"]
==== Procedure for using the cluster CLI with Knative

. Save the `sftp-sink-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the sink by using the following command:
+
[source,bash,subs="attributes+"]
----
oc apply -f sftp-sink-binding.yaml
----

[id="sftp_sink_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Knative

Configure and run the sink by using the following command:

[source,bash,subs="attributes+"]
----
kamel bind channel:mychannel sftp-sink -p "sink.connectionHost=The Connection Host" -p "sink.directoryName=The Directory Name" -p "sink.password=The Password" -p "sink.username=The Username"
----

This command creates the KameletBinding in the current namespace on the cluster.

[id="sftp_sink_kafka_sink"]
=== Kafka Sink

You can use the `sftp-sink` Kamelet as a Kafka sink by binding it to a Kafka topic.

.sftp-sink-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: sftp-sink-binding
spec:
  source:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: sftp-sink
    properties:
      connectionHost: "The Connection Host"
      directoryName: "The Directory Name"
      password: "The Password"
      username: "The Username"
----

[id="sftp_sink_prerequisites"]
==== Prerequisites

Ensure that you've installed the *AMQ Streams* operator in your OpenShift cluster and created a topic named `my-topic` in the current namespace.
Also make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="sftp_sink_procedure_for_using_the_cluster_cli_kafka"]
==== Procedure for using the cluster CLI with Kafka

. Save the `sftp-sink-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the sink by using the following command:
+
[source,bash,subs="attributes+"]
----
oc apply -f sftp-sink-binding.yaml
----

[id="sftp_sink_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Kafka

Configure and run the sink by using the following command:

[source,bash,subs="attributes+"]
----
kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic sftp-sink -p "sink.connectionHost=The Connection Host" -p "sink.directoryName=The Directory Name" -p "sink.password=The Password" -p "sink.username=The Username"
----

This command creates the KameletBinding in the current namespace on the cluster.

:leveloffset: 3



[id="sftp_sink_kamelets_source_file"]
== Kamelets source file

link:{kamelets-source-url}sftp-sink.kamelet.yaml[]



:leveloffset: 3
:leveloffset: +1

[id="sftp-source"]
= SFTP Source

Receive data from an SFTP Server.

[id="sftp_source_configuration_options"]
== Configuration Options

The following table summarizes the configuration options available for the `sftp-source` Kamelet:


[width="100%",cols="2,^2,3,^2,^2,^3",options="header"]
|===
| Property| Name| Description| Type| Default| Example

| *connectionHost {empty}* * |  Connection Host |  The hostname of the SFTP server. |  `string` |  |
| *connectionPort {empty}* * |  Connection Port |  The port of the SFTP server. |  `string` |  `22` |
| *directoryName {empty}* *| Directory Name| The starting directory| `string` | |

| username |  Username |  The username to access the SFTP server. |  `string` |  |
| password |  Password |  The password to access the SFTP server. |  `string` (_password format_) |  |
| passiveMode |  Passive Mode |  Specifies whether to use a passive mode connection. |  `boolean` |  `false` |
| recursive |  Recursive |  If a directory, look for files in all subdirectories as well. |  `boolean` |  `false` |
| idempotent |  Idempotency |  Skip already-processed files. |  `boolean` |  `true` |
| ignoreFileNotFoundOrPermissionError |  Ignore File Not Found Or Permission Error |  Whether to ignore files or directories that do not exist or that cannot be accessed because of a permission error, when listing directories or downloading a file. By default, an exception is thrown when a directory or file does not exist or permissions are insufficient; setting this option to true ignores such errors instead. |  `boolean` |  `false` |
| binary |  Binary |  Specifies the file transfer mode, BINARY or ASCII. Default is ASCII (false). |  `boolean` |  `false` |
| privateKeyFile |  Private Key File |  Set the private key file so that the SFTP endpoint can do private key verification. |  `string` |  |
| privateKeyPassphrase |  Private Key Passphrase |  Set the private key file passphrase so that the SFTP endpoint can do private key verification. |  `string` |  |
| privateKeyUri |  Private Key URI |  Set the private key file (loaded from classpath by default) so that the SFTP endpoint can do private key verification. |  `string (_pattern: "^(http\|https\|file\|classpath)://.*"_)` |  |
| strictHostKeyChecking |  Strict Host Checking |  Sets whether to use strict host key checking. |  `string` |  `no` |
| useUserKnownHostsFile |  Use User Known Hosts File |  If knownHostsFile has not been explicitly configured, use the known hosts file from System.getProperty(user.home)/.ssh/known_hosts. |  `boolean` |  `true` |
| autoCreate |  Autocreate Missing Directories |  Automatically create the starting directory. |  `boolean` |  `true` |
| delete |  Delete |  If true, the file is deleted after it is processed successfully. |  `boolean` |  `false` |
|===

*{empty}** = Fields marked with an asterisk are *mandatory*.


[id="sftp_source_dependencies"]

== Dependencies



[id="sftp_source_usage"]
== Usage




:leveloffset: +1

[id="sftp_source_knative_source"]
=== Knative Source

You can use the `sftp-source` Kamelet as a Knative source by binding it to a Knative object.

.sftp-source-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: sftp-source-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: sftp-source
    properties:
      connectionHost: "The Connection Host"
      directoryName: "The Directory Name"
      password: "The Password"
      username: "The Username"
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
----

[id="sftp_source_prerequisite"]
==== Prerequisites
Make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="sftp_source_procedure_for_using_the_cluster_cli_kafka"]
==== Procedure for using the cluster CLI with Knative

. Save the `sftp-source-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the source by using the following command:
+
[source,bash,subs="attributes+"]
----
oc apply -f sftp-source-binding.yaml
----

[id="sftp_source_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Knative

Configure and run the source by using the following command:

[source,bash,subs="attributes+"]
----
kamel bind sftp-source -p "source.connectionHost=The Connection Host" -p "source.directoryName=The Directory Name" -p "source.password=The Password" -p "source.username=The Username" channel:mychannel
----

This command creates the KameletBinding in the current namespace on the cluster.

[id="sftp_source_kafka_source"]
=== Kafka Source

You can use the `sftp-source` Kamelet as a Kafka source by binding it to a Kafka topic.

.sftp-source-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: sftp-source-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: sftp-source
    properties:
      connectionHost: "The Connection Host"
      directoryName: "The Directory Name"
      password: "The Password"
      username: "The Username"
  sink:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
----

[id="sftp_source_prerequisites"]
==== Prerequisites

Ensure that you've installed the *AMQ Streams* operator in your OpenShift cluster and created a topic named `my-topic` in the current namespace.
Also make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="sftp_source_procedure_for_using_the_cluster_cli_kafka"]
==== Procedure for using the cluster CLI with Kafka

. Save the `sftp-source-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the source by using the following command:
+
[source,bash,subs="attributes+"]
----
oc apply -f sftp-source-binding.yaml
----

[id="sftp_source_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Kafka

Configure and run the source by using the following command:

[source,bash,subs="attributes+"]
----
kamel bind sftp-source -p "source.connectionHost=The Connection Host" -p "source.directoryName=The Directory Name" -p "source.password=The Password" -p "source.username=The Username" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
----

This command creates the KameletBinding in the current namespace on the cluster.

:leveloffset: 3


[id="sftp_source_kamelets_source_file"]
== Kamelets source file

link:{kamelets-source-url}sftp-source.kamelet.yaml[]



:leveloffset: 3
:leveloffset: +1

[id="simple-filter-action"]
= Simple Filter Action

Filter based on a simple expression.


[id="simple_filter_action_configuration_options"]
== Configuration Options

The following table summarizes the configuration options available for
the `simple-filter-action` Kamelet:

[cols=",,,,,",options="header",]
|===
|Property |Name |Description |Type |Default |Example
|**expression**
|Simple Expression |*Required* A simple expression to apply to the
exchange to filter out some exchanges. |string | |
|===


[id="simple_filter_action_dependencies"]
== Dependencies

At runtime, the `simple-filter-action` Kamelet relies on the presence
of the following dependencies:

* camel:core
* camel:kamelet


[id="simple_filter_action_usage"]
== Usage
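
No generated binding examples are included for this Kamelet. The following is a minimal sketch of a Pipe that uses `simple-filter-action` as an intermediate step; the timer source, the expression, and the channel name `mychannel` are illustrative placeholders.

[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: simple-filter-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: timer-source
    properties:
      message: "important event"
  steps:
  # Only exchanges whose body matches the simple expression continue to the sink
  - ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: simple-filter-action
    properties:
      expression: "${body} contains 'important'"
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
----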








[id="simple_filter_action_kamelets_source_file"]
== Kamelets source file

link:{kamelets-source-url}simple-filter-action.kamelet.yaml[]



:leveloffset: 3
:leveloffset: +1

[id="slack-source"]
= Slack Source

Receive messages from a Slack channel.

[id="slack_source_configuration_options"]
== Configuration Options

The following table summarizes the configuration options available for the `slack-source` Kamelet:

[width="100%",cols="2,^2,3,^2,^2,^3",options="header"]
|===
| Property| Name| Description| Type| Default| Example

| *channel {empty}* * |  Channel |  The Slack channel to receive messages from. |  `string` |  |  "#myroom"
| *token {empty}* * |  Token |  The Bot User OAuth Access Token to access Slack. A Slack app that has the following permissions is required: `channels:history`, `groups:history`, `im:history`, `mpim:history`, `channels:read`, `groups:read`, `im:read`, and `mpim:read`. |  `string` (_password format_) |  |

| serverUrl |  Server URL |  The Slack API server endpoint URL. |  `string` |  `"https://slack.com"` |  `"https://slack.com"`
| delay |  Delay |  The delay between polls. If no unit provided, milliseconds is the default. |  `string` |  `"60000"` |  `"60s or 6000 or 1m"`
| naturalOrder |  Natural Order |  Create exchanges in natural order (oldest to newest) or not. |  `boolean` |  `false` |
|===

*{empty}** = Fields marked with an asterisk are *mandatory*.


[id="slack_source_dependencies"]

== Dependencies



[id="slack_source_usage"]
== Usage
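
The generated examples below set only the `channel` and `token` properties. The following is a minimal sketch that also sets `delay`, which accepts either a number of milliseconds or a value with a time unit as described in the table above; all values and the channel name `mychannel` are placeholders.

[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: slack-source-slow-poll-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: slack-source
    properties:
      channel: "#myroom"
      token: "The Token"
      # Poll every 30 seconds instead of the default 60000 ms
      delay: "30s"
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
----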




:leveloffset: +1

[id="slack_source_knative_source"]
=== Knative Source

You can use the `slack-source` Kamelet as a Knative source by binding it to a Knative object.

.slack-source-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: slack-source-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: slack-source
    properties:
      channel: "#myroom"
      token: "The Token"
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
----

[id="slack_source_prerequisite"]
==== Prerequisites
Make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="slack_source_procedure_for_using_the_cluster_cli_kafka"]
==== Procedure for using the cluster CLI with Knative

. Save the `slack-source-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the source by using the following command:
+
[source,bash,subs="attributes+"]
----
oc apply -f slack-source-binding.yaml
----

[id="slack_source_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Knative

Configure and run the source by using the following command:

[source,bash,subs="attributes+"]
----
kamel bind slack-source -p "source.channel=#myroom" -p "source.token=The Token" channel:mychannel
----

This command creates the KameletBinding in the current namespace on the cluster.

[id="slack_source_kafka_source"]
=== Kafka Source

You can use the `slack-source` Kamelet as a Kafka source by binding it to a Kafka topic.

.slack-source-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: slack-source-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: slack-source
    properties:
      channel: "#myroom"
      token: "The Token"
  sink:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
----

[id="slack_source_prerequisites"]
==== Prerequisites

Ensure that you've installed the *AMQ Streams* operator in your OpenShift cluster and created a topic named `my-topic` in the current namespace.
Also make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="slack_source_procedure_for_using_the_cluster_cli"]
==== Procedure for using the cluster CLI with Kafka

. Save the `slack-source-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the source by using the following command:
+
[source,bash,subs="attributes+"]
----
oc apply -f slack-source-binding.yaml
----

[id="slack_source_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Kafka

Configure and run the source by using the following command:

[source,bash,subs="attributes+"]
----
kamel bind slack-source -p "source.channel=#myroom" -p "source.token=The Token" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
----

This command creates the KameletBinding in the current namespace on the cluster.

:leveloffset: 3


[id="slack_source_kamelets_source_file"]
== Kamelets source file

link:{kamelets-source-url}slack-source.kamelet.yaml[]



:leveloffset: 3
// include::../../../modules/camel/kamelets-reference/kamelets/splunk-sink.adoc[leveloffset=+1]
// include::../../../modules/camel/kamelets-reference/kamelets/splunk-source.adoc[leveloffset=+1]
:leveloffset: +1

[id="microsoft-sql-server-sink"]
= Microsoft SQL Server Sink

Send data to a Microsoft SQL Server Database.

This Kamelet expects a JSON payload as the body. The mapping between the JSON fields and the query parameters is done by key, so if you have the following query:

`INSERT INTO accounts (username,city) VALUES (:#username,:#city)`

the Kamelet needs to receive input such as:

`{"username":"oscerd", "city":"Rome"}`

[id="sqlserver_sink_configuration_options"]
== Configuration Options

The following table summarizes the configuration options available for the `sqlserver-sink` Kamelet:

[width="100%",cols="2,^2,3,^2,^2,^3",options="header"]
|===
| Property| Name| Description| Type| Default| Example

| *serverName {empty}* * |  Server Name |  The server name for the data source. |  `string` |  |  `localhost`
| *username {empty}* * |  Username |  The username to access a secured SQL Server Database. |  `string` |  |
| *password {empty}* * |  Password |  The password to access a secured SQL Server Database. |  `string` |  |  `password`
| *query {empty}* * |  Query |  The query to execute against the SQL Server Database. |  `string` |  |  `INSERT INTO accounts (username,city) VALUES (:#username,:#city)`
| *databaseName {empty}* * |  Database Name |  The name of the SQL Server Database. |  `string` |  |

| serverPort |  Server Port |  The server port for the data source. |  `string` |  `1433` |
| encrypt |  Encrypt Connection |  Encrypt the connection to SQL Server. |  `boolean` |  `false` |
| trustServerCertificate |  Trust Server Certificate |  Whether to trust the server certificate. |  `boolean` |  `true` |
|===

*{empty}** = Fields marked with an asterisk are *mandatory*.


[id="sqlserver_sink_dependencies"]

== Dependencies



[id="sqlserver_sink_usage"]
== Usage
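
The generated examples below do not show the connection-security options. The following is a minimal sketch of a sink binding that also sets `encrypt` and `trustServerCertificate` from the table above; the host name, database values, and channel name are placeholders.

[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: sqlserver-sink-encrypted-binding
spec:
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: sqlserver-sink
    properties:
      serverName: "sqlserver.example.com"
      serverPort: "1433"
      databaseName: "The Database Name"
      username: "The Username"
      password: "The Password"
      query: "INSERT INTO accounts (username,city) VALUES (:#username,:#city)"
      # Encrypt the connection and trust the server certificate
      encrypt: true
      trustServerCertificate: true
----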

:leveloffset: +1

[id="sqlserver_sink_knative_sink"]
=== Knative Sink

You can use the `sqlserver-sink` Kamelet as a Knative sink by binding it to a Knative object.

.sqlserver-sink-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: sqlserver-sink-binding
spec:
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: sqlserver-sink
    properties:
      databaseName: "The Database Name"
      password: "The Password"
      query: "INSERT INTO accounts (username,city) VALUES (:#username,:#city)"
      serverName: "localhost"
      username: "The Username"
----

[id="sqlserver_sink_prerequisite"]
==== Prerequisites
Make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="sqlserver_sink_procedure_for_using_the_cluster_cli_kafka"]
==== Procedure for using the cluster CLI with Knative

. Save the `sqlserver-sink-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the sink by using the following command:
+
[source,bash,subs="attributes+"]
----
oc apply -f sqlserver-sink-binding.yaml
----

[id="sqlserver_sink_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Knative

Configure and run the sink by using the following command:

[source,bash,subs="attributes+"]
----
kamel bind channel:mychannel sqlserver-sink -p "sink.databaseName=The Database Name" -p "sink.password=The Password" -p "sink.query=INSERT INTO accounts (username,city) VALUES (:#username,:#city)" -p "sink.serverName=localhost" -p "sink.username=The Username"
----

This command creates the KameletBinding in the current namespace on the cluster.

[id="sqlserver_sink_kafka_sink"]
=== Kafka Sink

You can use the `sqlserver-sink` Kamelet as a Kafka sink by binding it to a Kafka topic.

.sqlserver-sink-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: sqlserver-sink-binding
spec:
  source:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: sqlserver-sink
    properties:
      databaseName: "The Database Name"
      password: "The Password"
      query: "INSERT INTO accounts (username,city) VALUES (:#username,:#city)"
      serverName: "localhost"
      username: "The Username"
----

[id="sqlserver_sink_prerequisites"]
==== Prerequisites

Ensure that you've installed the *AMQ Streams* operator in your OpenShift cluster and created a topic named `my-topic` in the current namespace.
Also make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="sqlserver_sink_procedure_for_using_the_cluster_cli_kafka"]
==== Procedure for using the cluster CLI with Kafka

. Save the `sqlserver-sink-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the sink by using the following command:
+
[source,bash,subs="attributes+"]
----
oc apply -f sqlserver-sink-binding.yaml
----

[id="sqlserver_sink_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Kafka

Configure and run the sink by using the following command:

[source,bash,subs="attributes+"]
----
kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic sqlserver-sink -p "sink.databaseName=The Database Name" -p "sink.password=The Password" -p "sink.query=INSERT INTO accounts (username,city) VALUES (:#username,:#city)" -p "sink.serverName=localhost" -p "sink.username=The Username"
----

This command creates the KameletBinding in the current namespace on the cluster.

:leveloffset: 3


[id="sqlserver_sink_kamelets_source_file"]
== Kamelets source file

link:{kamelets-source-url}sqlserver-sink.kamelet.yaml[]



:leveloffset: 3
:leveloffset: +1

[id="telegram-source"]
= Telegram Source

Receive all messages that people send to your Telegram bot.

To create a bot, contact the @botfather account using the Telegram app.

The source attaches the following headers to the messages:

- `chat-id` / `ce-chatid`: the ID of the chat where the message comes from

[id="telegram_source_configuration_options"]
== Configuration Options

The following table summarizes the configuration options available for the `telegram-source` Kamelet:

[width="100%",cols="2,^2,3,^2,^2,^3",options="header"]
|===
| Property| Name| Description| Type| Default| Example

| *authorizationToken {empty}* *| Token| The token to access your bot on Telegram. You can obtain it from the Telegram @botfather.| `string` | |
|===

*{empty}** = Fields marked with an asterisk are *mandatory*.


[id="telegram_source_dependencies"]

== Dependencies



[id="telegram_source_usage"]
== Usage




:leveloffset: +1

[id="telegram_source_knative_source"]
=== Knative Source

You can use the `telegram-source` Kamelet as a Knative source by binding it to a Knative object.

.telegram-source-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: telegram-source-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: telegram-source
    properties:
      authorizationToken: "The Token"
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
----

[id="telegram_source_prerequisite"]
==== Prerequisites
Make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="telegram_source_procedure_for_using_the_cluster_cli_kafka"]
==== Procedure for using the cluster CLI with Knative

. Save the `telegram-source-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the source by using the following command:
+
[source,bash,subs="attributes+"]
----
oc apply -f telegram-source-binding.yaml
----

[id="telegram_source_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Knative

Configure and run the source by using the following command:

[source,bash,subs="attributes+"]
----
kamel bind telegram-source -p "source.authorizationToken=The Token" channel:mychannel
----

This command creates the KameletBinding in the current namespace on the cluster.

[id="telegram_source_kafka_source"]
=== Kafka Source

You can use the `telegram-source` Kamelet as a Kafka source by binding it to a Kafka topic.

.telegram-source-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: telegram-source-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: telegram-source
    properties:
      authorizationToken: "The Token"
  sink:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
----

[id="telegram_source_prerequisites"]
==== Prerequisites

Ensure that you've installed the *AMQ Streams* operator in your OpenShift cluster and created a topic named `my-topic` in the current namespace.
Also make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="telegram_source_procedure_for_using_the_cluster_cli_kafka"]
==== Procedure for using the cluster CLI with Kafka

. Save the `telegram-source-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the source by using the following command:
+
[source,bash,subs="attributes+"]
----
oc apply -f telegram-source-binding.yaml
----

[id="telegram_source_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Kafka

Configure and run the source by using the following command:

[source,bash,subs="attributes+"]
----
kamel bind telegram-source -p "source.authorizationToken=The Token" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
----

This command creates the KameletBinding in the current namespace on the cluster.

:leveloffset: 3


[id="telegram_source_kamelets_source_file"]
== Kamelets source file

link:{kamelets-source-url}telegram-source.kamelet.yaml[]



:leveloffset: 3
:leveloffset: +1

[id="throttle-action"]


= Throttle Action

The Throttle action allows you to ensure that a specific sink does not get overloaded.

[id="throttle_action_configuration_options"]
== Configuration Options

The following table summarizes the configuration options available for the `throttle-action` Kamelet:

[width="100%",cols="2,^2,3,^2,^2,^3",options="header"]
|===
| Property| Name| Description| Type| Default| Example

| *messages {empty}* *| Messages Number| The number of messages to send in the time period set| integer| | `10`
| timePeriod | Time Period| Sets the time period during which the maximum number of messages is valid, in milliseconds| `string` | `"1000"`|
|===

*{empty}** = Fields marked with an asterisk are *mandatory*.


[id="throttle_action_dependencies"]

== Dependencies



[id="throttle_action_usage"]
== Usage
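
The generated examples below set only the `messages` property. The following is a minimal sketch that also sets `timePeriod` from the table above, allowing at most 10 messages every 5 seconds; the timer source, the values, and the channel name `mychannel` are placeholders.

[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: throttle-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: timer-source
    properties:
      message: "Hello"
  steps:
  # Allow at most 10 messages per 5000 ms window
  - ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: throttle-action
    properties:
      messages: 10
      timePeriod: "5000"
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
----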




:leveloffset: +1

[id="throttle_action_knative_action"]
=== Knative Action

You can use the `throttle-action` Kamelet as an intermediate step in a Knative binding.

.throttle-action-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: throttle-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: timer-source
    properties:
      message: "Hello"
  steps:
  - ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: throttle-action
    properties:
      messages: 1
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
----

[id="throttle_action_prerequisite"]
==== Prerequisites
Make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="throttle_action_procedure_for_using_the_cluster_cli"]
==== Procedure for using the cluster CLI

. Save the `throttle-action-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the action by using the following command:
+
[source,bash,subs="attributes+"]
----
oc apply -f throttle-action-binding.yaml
----

[id="throttle_action_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Knative

Configure and run the action by using the following command:

[source,bash,subs="attributes+"]
----
kamel bind timer-source?message=Hello --step throttle-action -p "step-0.messages=10" channel:mychannel
----

This command creates the KameletBinding in the current namespace on the cluster.

[id="throttle_action_kafka_action"]
=== Kafka Action

You can use the `throttle-action` Kamelet as an intermediate step in a Kafka binding.

.throttle-action-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: throttle-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: timer-source
    properties:
      message: "Hello"
  steps:
  - ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: throttle-action
    properties:
      messages: 1
  sink:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
----

[id="throttle_action_prerequisites"]
==== Prerequisites

Ensure that you've installed the *AMQ Streams* operator in your OpenShift cluster and created a topic named `my-topic` in the current namespace.
Also make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="throttle_action_procedure_for_using_the_cluster_cli"]
==== Procedure for using the cluster CLI

. Save the `throttle-action-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the action by using the following command:
+
[source,bash,subs="attributes+"]
----
oc apply -f throttle-action-binding.yaml
----

[id="throttle_action_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Kafka

Configure and run the action by using the following command:

[source,bash,subs="attributes+"]
----
kamel bind timer-source?message=Hello --step throttle-action -p "step-0.messages=1" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
----

This command creates the KameletBinding in the current namespace on the cluster.

:leveloffset: 3


[id="throttle_action_kamelets_source_file"]
== Kamelets source file

link:{kamelets-source-url}throttle-action.kamelet.yaml[]



:leveloffset: 3
:leveloffset: +1

[id="timer-source"]
= Timer Source

Produces periodic events with a custom payload.

[id="timer_source_configuration_options"]
== Configuration Options

The following table summarizes the configuration options available for the `timer-source` Kamelet:

[width="100%",cols="2,^2,3,^2,^2,^3",options="header"]
|===
| Property| Name| Description| Type| Default| Example

| *message {empty}* *| Message| The message to generate| `string` | | `"hello world"`

| contentType| Content Type| The content type of the message being generated| `string` | `"text/plain"`|
| period| Period| The interval between two events in milliseconds| `integer` | `1000`|
| repeatCount| Repeat Count| Specifies the maximum number of times the timer fires| `integer` | |
|===

*{empty}** = Fields marked with an asterisk are *mandatory*.


[id="timer_source_dependencies"]

== Dependencies



[id="timer_source_usage"]
== Usage




:leveloffset: +1

[id="timer_source_knative_source"]
=== Knative Source

You can use the `timer-source` Kamelet as a Knative source by binding it to a Knative object.

.timer-source-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: timer-source-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: timer-source
    properties:
      message: "hello world"
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
----

[id="timer_source_prerequisite"]
==== Prerequisites
Make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="timer_source_procedure_for_using_the_cluster_cli_kafka"]
==== Procedure for using the cluster CLI with Knative

. Save the `timer-source-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the source by using the following command:
+
[source,bash,subs="attributes+"]
----
oc apply -f timer-source-binding.yaml
----

[id="timer_source_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Knative

Configure and run the source by using the following command:

[source,bash,subs="attributes+"]
----
kamel bind timer-source -p "source.message=hello world" channel:mychannel
----

This command creates the KameletBinding in the current namespace on the cluster.

[id="timer_source_kafka_source"]
=== Kafka Source

You can use the `timer-source` Kamelet as a Kafka source by binding it to a Kafka topic.

.timer-source-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: timer-source-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: timer-source
    properties:
      message: "hello world"
  sink:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
----

[id="timer_source_prerequisites"]
==== Prerequisites

Ensure that you've installed the *AMQ Streams* operator in your OpenShift cluster and created a topic named `my-topic` in the current namespace.
Also make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="timer_source_procedure_for_using_the_cluster_cli_kafka"]
==== Procedure for using the cluster CLI with Kafka

. Save the `timer-source-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the source by using the following command:
+
[source,bash,subs="attributes+"]
----
oc apply -f timer-source-binding.yaml
----

[id="timer_source_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Kafka

Configure and run the source by using the following command:

[source,bash,subs="attributes+"]
----
kamel bind timer-source -p "source.message=hello world" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
----

This command creates the KameletBinding in the current namespace on the cluster.

:leveloffset: 3


[id="timer_source_kamelets_source_file"]
== Kamelets source file

link:{kamelets-source-url}timer-source.kamelet.yaml[]



:leveloffset: 3
:leveloffset: +1

[id="timestamp-router-action"]
= Timestamp Router Action

Updates the topic field as a function of the original topic name and the record timestamp.

[id="timestamp_router_action_configuration_options"]
== Configuration Options

The following table summarizes the configuration options available for the `timestamp-router-action` Kamelet:

[width="100%",cols="2,^2,3,^2,^2,^3",options="header"]
|===
| Property| Name| Description| Type| Default| Example

| timestampFormat| Timestamp Format| Format string for the timestamp that is compatible with java.text.SimpleDateFormat.| string| `"yyyyMMdd"`|
| timestampHeaderName| Timestamp Header Name| The name of the header containing a timestamp| string| `"kafka.TIMESTAMP"`|
| topicFormat| Topic Format| Format string which can contain `$[topic]` and `$[timestamp]` as placeholders for the topic and timestamp, respectively.| string| `"topic-$[timestamp]"`|
|===

*{empty}** = Fields marked with an asterisk are *mandatory*.
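
As a hedged illustration of how these options combine, the following sketch shows a `steps` entry for a Pipe that sets both of them. Assuming a record read from a topic named `orders` (a hypothetical name) whose timestamp formats to `20250307`, the topic field would become `orders-20250307`.

[source,yaml,subs="attributes+"]
----
# Illustrative step configuration for a Pipe; values are examples only.
- ref:
    kind: Kamelet
    apiVersion: camel.apache.org/v1
    name: timestamp-router-action
  properties:
    timestampFormat: "yyyyMMdd"          # SimpleDateFormat pattern for the timestamp
    topicFormat: "$[topic]-$[timestamp]" # original topic name, then formatted timestamp
----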


[id="timestamp_router_action_dependencies"]

== Dependencies



[id="timestamp_router_action_usage"]
== Usage




:leveloffset: +1

[id="timestamp_router_action_knative_action"]
=== Knative Action

You can use the `timestamp-router-action` Kamelet as an intermediate step in a Knative binding.

.timestamp-router-action-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: timestamp-router-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: timer-source
    properties:
      message: "Hello"
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1
        name: timestamp-router-action
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
----

[id="timestamp_router_action_prerequisite"]
==== Prerequisites
Make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="timestamp_router_action_procedure_for_using_the_cluster_cli_kafka"]
==== Procedure for using the cluster CLI with Knative

. Save the `timestamp-router-action-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the action by using the following command:
+
[source,bash,subs="attributes+"]
----
oc apply -f timestamp-router-action-binding.yaml
----

[id="timestamp_router_action_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Knative

Configure and run the action by using the following command:

[source,bash,subs="attributes+"]
----
kamel bind timer-source?message=Hello --step timestamp-router-action channel:mychannel
----

This command creates the KameletBinding in the current namespace on the cluster.

[id="timestamp_router_action_kafka_action"]
=== Kafka Action

You can use the `timestamp-router-action` Kamelet as an intermediate step in a Kafka binding.

.timestamp-router-action-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: timestamp-router-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: timer-source
    properties:
      message: "Hello"
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1
        name: timestamp-router-action
  sink:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
----

[id="timestamp_router_action_prerequisites"]
==== Prerequisites

Ensure that you've installed the *AMQ Streams* operator in your OpenShift cluster and created a topic named `my-topic` in the current namespace.
Also make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="timestamp_router_action_procedure_for_using_the_cluster_cli"]
==== Procedure for using the cluster CLI

. Save the `timestamp-router-action-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the action by using the following command:
+
[source,bash,subs="attributes+"]
----
oc apply -f timestamp-router-action-binding.yaml
----

[id="timestamp_router_action_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Kafka

Configure and run the action by using the following command:

[source,bash,subs="attributes+"]
----
kamel bind timer-source?message=Hello --step timestamp-router-action kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
----

This command creates the KameletBinding in the current namespace on the cluster.

:leveloffset: 3


[id="timestamp_router_action_kamelets_source_file"]
== Kamelets source file

link:{kamelets-source-url}timestamp-router-action.kamelet.yaml[]



:leveloffset: 3
:leveloffset: +1

[id="value-to-key-action"]
= Value to Key Action

Replaces the Kafka record key with a new key formed from a subset of fields in the message body.

[id="value_to_key_action_configuration_options"]
== Configuration Options

The following table summarizes the configuration options available for the `value-to-key-action` Kamelet:

[width="100%",cols="2,^2,3,^2,^2,^3",options="header"]
|===
| Property| Name| Description| Type| Default| Example

| *fields {empty}* *| Fields| Comma-separated list of fields used to form the new key| `string` | |
|===

*{empty}** = Fields marked with an asterisk are *mandatory*.
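
As a minimal sketch, the following `steps` entry for a Pipe shows how the `fields` option might be set. The field names `orderId` and `customerId` are hypothetical; use the keys that actually appear in your record body.

[source,yaml,subs="attributes+"]
----
# Illustrative step configuration: build the new record key from two body fields.
- ref:
    kind: Kamelet
    apiVersion: camel.apache.org/v1
    name: value-to-key-action
  properties:
    fields: "orderId,customerId"   # comma-separated list of body fields
----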


[id="value_to_key_action_dependencies"]

== Dependencies



[id="value_to_key_action_usage"]
== Usage




:leveloffset: +1

[id="value_to_key_action_knative_action"]
=== Knative Action

You can use the `value-to-key-action` Kamelet as an intermediate step in a Knative binding.

.value-to-key-action-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: value-to-key-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: timer-source
    properties:
      message: "Hello"
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1
        name: value-to-key-action
      properties:
        fields: "The Fields"
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
----

[id="value_to_key_action_prerequisite"]
==== Prerequisites
Make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="value_to_key_action_procedure_for_using_the_cluster_cli"]
==== Procedure for using the cluster CLI

. Save the `value-to-key-action-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the action by using the following command:
+
[source,bash,subs="attributes+"]
----
oc apply -f value-to-key-action-binding.yaml
----

[id="value_to_key_action_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Knative

Configure and run the action by using the following command:

[source,bash,subs="attributes+"]
----
kamel bind timer-source?message=Hello --step value-to-key-action -p "step-0.fields=The Fields" channel:mychannel
----

This command creates the KameletBinding in the current namespace on the cluster.

[id="value_to_key_action_kafka_action"]
=== Kafka Action

You can use the `value-to-key-action` Kamelet as an intermediate step in a Kafka binding.

.value-to-key-action-binding.yaml
[source,yaml,subs="attributes+"]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: value-to-key-action-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: timer-source
    properties:
      message: "Hello"
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1
        name: value-to-key-action
      properties:
        fields: "The Fields"
  sink:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
----

[id="value_to_key_action_prerequisites"]
==== Prerequisites

Ensure that you've installed the *AMQ Streams* operator in your OpenShift cluster and created a topic named `my-topic` in the current namespace.
Also make sure you have *"Red Hat Integration - Camel K"* installed into the OpenShift cluster you're connected to.

[id="value_to_key_action_procedure_for_using_the_cluster_cli_kafka"]
==== Procedure for using the cluster CLI with Kafka

. Save the `value-to-key-action-binding.yaml` file to your local drive, and then edit it as needed for your configuration.

. Run the action by using the following command:
+
[source,bash,subs="attributes+"]
----
oc apply -f value-to-key-action-binding.yaml
----

[id="value_to_key_action_procedure_for_using_the_kamel_cli_kafka"]
==== Procedure for using the Kamel CLI with Kafka

Configure and run the action by using the following command:

[source,bash,subs="attributes+"]
----
kamel bind timer-source?message=Hello --step value-to-key-action -p "step-0.fields=The Fields" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
----

This command creates the KameletBinding in the current namespace on the cluster.

:leveloffset: 3


[id="value_to_key_action_kamelets_source_file"]
== Kamelets source file

link:{kamelets-source-url}value-to-key-action.kamelet.yaml[]



:leveloffset: 3



:leveloffset!:

Legal Notice

Copyright © 2025 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.