190.5. Samples
190.5.1. Consuming messages from Kafka
Here is the minimal route you need in order to read messages from Kafka.
from("kafka:test?brokers=localhost:9092") .log("Message received from Kafka : ${body}") .log(" on the topic ${headers[kafka.TOPIC]}") .log(" on the partition ${headers[kafka.PARTITION]}") .log(" with the offset ${headers[kafka.OFFSET]}") .log(" with the key ${headers[kafka.KEY]}")
from("kafka:test?brokers=localhost:9092")
.log("Message received from Kafka : ${body}")
.log(" on the topic ${headers[kafka.TOPIC]}")
.log(" on the partition ${headers[kafka.PARTITION]}")
.log(" with the offset ${headers[kafka.OFFSET]}")
.log(" with the key ${headers[kafka.KEY]}")
If you need to consume messages from multiple topics, you can use a comma-separated list of topic names:
from("kafka:test,test1,test2?brokers=localhost:9092") .log("Message received from Kafka : ${body}") .log(" on the topic ${headers[kafka.TOPIC]}") .log(" on the partition ${headers[kafka.PARTITION]}") .log(" with the offset ${headers[kafka.OFFSET]}") .log(" with the key ${headers[kafka.KEY]}")
from("kafka:test,test1,test2?brokers=localhost:9092")
.log("Message received from Kafka : ${body}")
.log(" on the topic ${headers[kafka.TOPIC]}")
.log(" on the partition ${headers[kafka.PARTITION]}")
.log(" with the offset ${headers[kafka.OFFSET]}")
.log(" with the key ${headers[kafka.KEY]}")
When consuming messages from Kafka, you can also use your own offset management instead of delegating it to Kafka. In order to persist the offsets, the component needs a StateRepository implementation, such as FileStateRepository. This bean should be available in the registry. Here is how to use it:
// Create the repository in which the Kafka offsets will be persisted
FileStateRepository repository = FileStateRepository.fileStateRepository(new File("/path/to/repo.dat"));

// Bind this repository into the Camel registry
JndiRegistry registry = new JndiRegistry();
registry.bind("offsetRepo", repository);

// Configure the camel context
DefaultCamelContext camelContext = new DefaultCamelContext(registry);
camelContext.addRoutes(new RouteBuilder() {
    @Override
    public void configure() throws Exception {
        from("kafka:" + TOPIC + "?brokers=localhost:{{kafkaPort}}" + // Setup the topic and broker address
                "&groupId=A" +                   // The consumer processor group ID
                "&autoOffsetReset=earliest" +    // Ask to start from the beginning if we have unknown offset
                "&offsetRepository=#offsetRepo") // Keep the offsets in the previously configured repository
            .to("mock:result");
    }
});
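The endpoint above uses a {{kafkaPort}} property placeholder. A minimal sketch of making that placeholder resolvable before the context is started (the kafka.properties file name and its kafkaPort entry are just illustrative assumptions):

// Register the properties component so {{kafkaPort}} can be resolved
// (assumes a classpath resource kafka.properties containing e.g. kafkaPort=9092)
PropertiesComponent properties = new PropertiesComponent();
properties.setLocation("classpath:kafka.properties");
camelContext.addComponent("properties", properties);

Note that, per the component documentation, defining an offsetRepository disables the consumer's automatic offset commit, so the configured repository becomes the single source of truth for where consumption resumes.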