Chapter 11. Using the AMQ Streams Kafka Bridge
This chapter provides an overview of the AMQ Streams Kafka Bridge and helps you get started using the REST API.
For the full list of REST API endpoints and descriptions, including example requests and responses, see Kafka Bridge API reference.
11.1. Overview of the AMQ Streams Kafka Bridge
The AMQ Streams Kafka Bridge provides an API for integrating HTTP-based clients with a Kafka cluster running on Red Hat Enterprise Linux. The API enables these clients to produce and consume messages without the requirement to use the native Kafka protocol.
The API has two main resources — consumers and topics — that are exposed and made accessible through endpoints to interact with consumers and producers in your Kafka cluster. The resources relate only to the Kafka Bridge, not the consumers and producers connected directly to Kafka.
You can:
- Send messages to a topic.
- Create and delete consumers.
- Subscribe consumers to topics, so that they start receiving messages from those topics.
- Unsubscribe consumers from topics.
- Assign partitions to consumers.
- Retrieve messages from topics.
- Commit a list of consumer offsets.
- Seek on a partition, so that a consumer starts receiving messages from the first or last offset position, or a given offset position.
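For example, the following curl sketch sends a JSON-formatted message through the bridge. It assumes the Kafka Bridge is listening on localhost:8080 and uses a hypothetical topic named my-topic:

curl -X POST http://localhost:8080/topics/my-topic \
  -H 'Content-Type: application/vnd.kafka.json.v2+json' \
  -d '{
        "records": [
          { "key": "my-key", "value": "my-value" }
        ]
      }'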
Similar to an AMQ Streams installation, you can download the Kafka Bridge files for installation on Red Hat Enterprise Linux.
For more information on configuring the host and port for the KafkaBridge resource, see Section 11.4, “Configuring AMQ Streams Kafka Bridge properties”.
11.2. Requests to the AMQ Streams Kafka Bridge
11.2.1. Authentication and encryption
Authentication and encryption between HTTP clients and the Kafka Bridge are not yet supported. This means that requests sent from clients to the Kafka Bridge are:
- Not encrypted, and must use HTTP rather than HTTPS
- Sent without authentication
You can configure TLS or SASL-based authentication between the Kafka Bridge and your Kafka cluster.
You configure the Kafka Bridge for authentication through its properties file.
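As a minimal sketch, TLS between the Kafka Bridge and the Kafka cluster might be configured with standard Kafka client properties using the kafka. prefix described in Section 11.4. The truststore path and password shown here are placeholders:

kafka.security.protocol=SSL
kafka.ssl.truststore.location=/opt/kafka-bridge/certs/truststore.p12
kafka.ssl.truststore.password=changeit
kafka.ssl.truststore.type=PKCS12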
11.2.2. Data formats and headers
Specify data formats and HTTP headers to ensure valid requests are submitted to the Kafka Bridge.
11.2.2.1. Content-Type headers
API request and response bodies are always encoded as JSON.
When performing consumer operations, POST requests must provide the following Content-Type header:

Content-Type: application/vnd.kafka.v2+json

When performing producer operations, POST requests must provide a Content-Type header specifying the embedded data format of the consumer, either json or binary, as shown in the following table.

Embedded data format    Content-Type header
JSON                    Content-Type: application/vnd.kafka.json.v2+json
Binary                  Content-Type: application/vnd.kafka.binary.v2+json

You set the embedded data format when creating a consumer using the /consumers/groupid endpoint. For more information, see the next section.
11.2.2.2. Embedded data format
The embedded data format is the format of the Kafka messages that are transmitted, over HTTP, from a producer to a consumer using the Kafka Bridge. Two embedded data formats are supported: JSON and binary.
When creating a consumer using the /consumers/groupid endpoint, the POST request body must specify an embedded data format of either JSON or binary. This is specified in the format field, for example:
{
  "name": "my-consumer",
  "format": "binary",    (1)
  ...
}

(1) A binary embedded data format.
The embedded data format specified when creating a consumer must match the data format of the Kafka messages it will consume.
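Putting the Content-Type header and the request body together, a hypothetical curl sketch creates the consumer shown above in a consumer group named my-group, then subscribes it to a topic. The host, group, and topic names are placeholders; see the Kafka Bridge API reference for the full endpoint descriptions:

curl -X POST http://localhost:8080/consumers/my-group \
  -H 'Content-Type: application/vnd.kafka.v2+json' \
  -d '{ "name": "my-consumer", "format": "binary" }'

curl -X POST http://localhost:8080/consumers/my-group/instances/my-consumer/subscription \
  -H 'Content-Type: application/vnd.kafka.v2+json' \
  -d '{ "topics": ["my-topic"] }'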
If you choose to specify a binary embedded data format, subsequent producer requests must provide the binary data in the request body as Base64-encoded strings. For example, when sending messages by making POST requests to the /topics/topicname endpoint, the value must be encoded in Base64:
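For example, a request body might look like the following sketch, where the key is the Base64 encoding of "my-key" and the value is the Base64 encoding of "Hello world":

{
  "records": [
    {
      "key": "bXkta2V5",
      "value": "SGVsbG8gd29ybGQ="
    }
  ]
}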
Producer requests must also provide a Content-Type header that corresponds to the embedded data format, for example, Content-Type: application/vnd.kafka.binary.v2+json.
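A complete producer request for the binary format might then look like this sketch, again assuming the bridge on localhost:8080 and a hypothetical topic named my-topic:

curl -X POST http://localhost:8080/topics/my-topic \
  -H 'Content-Type: application/vnd.kafka.binary.v2+json' \
  -d '{
        "records": [
          { "value": "SGVsbG8gd29ybGQ=" }
        ]
      }'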
11.2.2.3. Accept headers
After creating a consumer, subsequent GET requests must provide an Accept header in the following format:
Accept: application/vnd.kafka.embedded-data-format.v2+json
Where the embedded-data-format is the embedded data format of the consumer: either json or binary.
For example, when retrieving records for a subscribed consumer using an embedded data format of JSON, include this Accept header:
Accept: application/vnd.kafka.json.v2+json
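For instance, a sketch that polls records for a consumer named my-consumer in group my-group, assuming it was created with the json embedded data format and that the bridge is on localhost:8080:

curl -X GET http://localhost:8080/consumers/my-group/instances/my-consumer/records \
  -H 'Accept: application/vnd.kafka.json.v2+json'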
11.3. Downloading an AMQ Streams Archive
A zipped distribution of AMQ Streams is available for download from the Red Hat website. You can download a copy of the distribution by following the steps below.
Procedure
- Download the latest version of the Red Hat AMQ Streams archive from the Customer Portal.
11.4. Configuring AMQ Streams Kafka Bridge properties
This procedure describes how to configure the properties used by the AMQ Streams Kafka Bridge.
You configure the Kafka Bridge, as you would any other Kafka client, using the appropriate prefixes for Kafka-related properties:
- kafka. for general configuration that applies to producers and consumers, such as server connection and security.
- kafka.consumer. for consumer-specific configuration passed only to the consumer.
- kafka.producer. for producer-specific configuration passed only to the producer.
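For illustration, the three prefixes might appear together in the properties file like this sketch, where the values shown are placeholders:

kafka.bootstrap.servers=localhost:9092
kafka.producer.acks=1
kafka.consumer.auto.offset.reset=earliest

Here kafka.bootstrap.servers is shared by the producer and consumer, while the acks and auto.offset.reset settings are passed only to the producer and consumer respectively.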
Prerequisites
Procedure
Edit the application.properties file provided with the AMQ Streams Kafka Bridge installation archive. Use the properties file to specify Kafka and HTTP-related properties.
Configure standard Kafka-related properties, including properties specific to the Kafka consumers and producers.
Use:
- kafka.bootstrap.servers to define the host/port connections to the Kafka cluster
- kafka.producer.acks to provide acknowledgments to the HTTP client
- kafka.consumer.auto.offset.reset to determine how to manage reset of the offset in Kafka

For more information on configuration of Kafka properties, see the Apache Kafka website.
Configure HTTP-related properties to enable HTTP access to the Kafka cluster.
http.enabled=true
http.host=0.0.0.0
http.port=8080
11.5. Installing the AMQ Streams Kafka Bridge

Follow this procedure to install the AMQ Streams Kafka Bridge on Red Hat Enterprise Linux.
Prerequisites
Procedure
- If you have not already done so, unzip the AMQ Streams Kafka Bridge installation archive to any directory.
Run the Kafka Bridge script using the configuration properties as a parameter:
For example:
./bin/kafka_bridge_run.sh --config-file=_path_/configfile.properties

Check to see that the installation was successful in the log.
HTTP-Kafka Bridge started and listening on port 8080
HTTP-Kafka Bridge bootstrap servers localhost:9092
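As a quick reachability check once the bridge is running, you can probe it over HTTP, for example by sending a test message as described in Section 11.2, or, on Strimzi-based bridge versions that expose it (whether your specific release does is an assumption here), by querying the /healthy endpoint:

curl -X GET http://localhost:8080/healthy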
11.6. AMQ Streams Kafka Bridge API resources
For the full list of REST API endpoints and descriptions, including example requests and responses, see Kafka Bridge API reference.