Chapter 31. Kafka Sink
Send data to Kafka topics.
The Kamelet can use the following message headers when they are set:

- key / ce-key: used as the message key
- partition-key / ce-partitionkey: used as the message partition key

Both headers are optional.
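As an illustration, the following Camel YAML DSL route is a minimal sketch, not taken from this chapter, of how a route could set the key header before sending to the kafka-sink Kamelet; the timer endpoint, topic name, and credential values are placeholder assumptions.

```yaml
# Minimal sketch (assumption): a Camel YAML DSL route that sets the "key"
# header, which the kafka-sink Kamelet then uses as the Kafka message key.
- from:
    uri: "timer:tick?period=5000"
    steps:
      - setHeader:
          name: key
          constant: "my-record-key"
      - setBody:
          constant: "hello from camel"
      # Placeholder broker, topic, and credentials; replace with your own.
      - to: "kamelet:kafka-sink?bootstrapServers=my-broker:9092&topic=my-topic&user=my-user&password=my-password"
```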
31.1. Configuration Options
The following table summarizes the configuration options available for the kafka-sink Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| bootstrapServers * | Brokers | Comma separated list of Kafka Broker URLs | string | | |
| password * | Password | Password to authenticate to Kafka | string | | |
| topic * | Topic Names | Comma separated list of Kafka topic names | string | | |
| user * | Username | Username to authenticate to Kafka | string | | |
| saslMechanism | SASL Mechanism | The Simple Authentication and Security Layer (SASL) Mechanism used. | string | | |
| securityProtocol | Security Protocol | Protocol used to communicate with brokers. SASL_PLAINTEXT, PLAINTEXT, SASL_SSL and SSL are supported | string | | |
Fields marked with an asterisk (*) are mandatory.
31.2. Dependencies
At runtime, the `kafka-sink` Kamelet relies upon the presence of the following dependencies:
- camel:kafka
- camel:kamelet
31.3. Usage
This section describes how you can use the kafka-sink.
31.3.1. Knative Sink
You can use the kafka-sink Kamelet as a Knative sink by binding it to a Knative object.
kafka-sink-binding.yaml
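A minimal sketch of such a binding, assuming a Knative channel named mychannel as the source (matching the Kamel CLI example later in this section); the property values are placeholders that you replace with your own:

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: kafka-sink-binding
spec:
  source:
    # Knative channel used as the event source (assumed name: mychannel)
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  sink:
    # kafka-sink Kamelet with its mandatory properties
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: kafka-sink
    properties:
      bootstrapServers: "The Brokers"
      password: "The Password"
      topic: "The Topic Names"
      user: "The Username"
```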
31.3.1.1. Prerequisite
Make sure you have "Red Hat Integration - Camel K" installed in the OpenShift cluster that you are connected to.
31.3.1.2. Procedure for using the cluster CLI
- Save the kafka-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:

  oc apply -f kafka-sink-binding.yaml
31.3.1.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind channel:mychannel kafka-sink -p "sink.bootstrapServers=The Brokers" -p "sink.password=The Password" -p "sink.topic=The Topic Names" -p "sink.user=The Username"
This command creates the KameletBinding in the current namespace on the cluster.
31.3.2. Kafka Sink
You can use the kafka-sink Kamelet as a Kafka sink by binding it to a Kafka topic.
kafka-sink-binding.yaml
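A minimal sketch of such a binding, assuming a KafkaTopic named my-topic as the source (matching the Kamel CLI example later in this section); the property values are placeholders that you replace with your own:

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: kafka-sink-binding
spec:
  source:
    # Strimzi/AMQ Streams topic used as the source (assumed name: my-topic)
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
  sink:
    # kafka-sink Kamelet with its mandatory properties
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: kafka-sink
    properties:
      bootstrapServers: "The Brokers"
      password: "The Password"
      topic: "The Topic Names"
      user: "The Username"
```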
31.3.2.1. Prerequisites
Ensure that you have installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure that you have "Red Hat Integration - Camel K" installed in the OpenShift cluster that you are connected to.
31.3.2.2. Procedure for using the cluster CLI
- Save the kafka-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:

  oc apply -f kafka-sink-binding.yaml
31.3.2.3. Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic kafka-sink -p "sink.bootstrapServers=The Brokers" -p "sink.password=The Password" -p "sink.topic=The Topic Names" -p "sink.user=The Username"
This command creates the KameletBinding in the current namespace on the cluster.