
Chapter 11. Using the AMQ Streams Kafka Bridge


This chapter provides an overview of the AMQ Streams Kafka Bridge and helps you get started using the REST API.

Note

For the full list of REST API endpoints and descriptions, including example requests and responses, see Kafka Bridge API reference.

11.1. Overview of the AMQ Streams Kafka Bridge

The AMQ Streams Kafka Bridge provides an API for integrating HTTP-based clients with a Kafka cluster running on Red Hat Enterprise Linux. The API enables these clients to produce and consume messages without the requirement to use the native Kafka protocol.

The API has two main resources — consumers and topics — that are exposed and made accessible through endpoints to interact with consumers and producers in your Kafka cluster. The resources relate only to the Kafka Bridge, not the consumers and producers connected directly to Kafka.

You can:

  • Send messages to a topic.
  • Create and delete consumers.
  • Subscribe consumers to topics, so that they start receiving messages from those topics (see the example after this list).
  • Unsubscribe consumers from topics.
  • Assign partitions to consumers.
  • Retrieve messages from topics.
  • Commit a list of consumer offsets.
  • Seek on a partition, so that a consumer starts receiving messages from the first or last offset position, or a given offset position.
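
As a sketch of what this looks like in practice, the following curl request subscribes an existing consumer to a topic (the host, port, group, consumer, and topic names are illustrative; the subscription endpoint is described in the Kafka Bridge API reference):

curl -X POST http://localhost:8080/consumers/my-group/instances/my-consumer/subscription \
  -H 'Content-Type: application/vnd.kafka.v2+json' \
  -d '{"topics": ["my-topic"]}'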

Similar to an AMQ Streams installation, you can download the Kafka Bridge files for installation on Red Hat Enterprise Linux.

For more information on configuring the host and port for the Kafka Bridge, see Section 11.4, “Configuring AMQ Streams Kafka Bridge properties”.

11.2. Requests to the AMQ Streams Kafka Bridge

11.2.1. Authentication and encryption

Authentication and encryption between HTTP clients and the Kafka Bridge are not yet supported. This means that requests sent from clients to the Kafka Bridge are:

  • Not encrypted, and must use HTTP rather than HTTPS
  • Sent without authentication

You can configure TLS or SASL-based authentication between the Kafka Bridge and your Kafka cluster.

You configure the Kafka Bridge for authentication through its properties file.
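
For example, TLS connections to the Kafka cluster could be enabled with standard Kafka client properties, prefixed with kafka. as described in Section 11.4 (a minimal sketch; the truststore path and password are placeholder values):

kafka.security.protocol=SSL
kafka.ssl.truststore.location=/opt/kafka-bridge/certs/truststore.p12
kafka.ssl.truststore.type=PKCS12
kafka.ssl.truststore.password=changeit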

11.2.2. Data formats and headers

Specify data formats and HTTP headers to ensure valid requests are submitted to the Kafka Bridge.

11.2.2.1. Content Type headers

API request and response bodies are always encoded as JSON.

  • When performing consumer operations, POST requests must provide the following Content-Type header:

    Content-Type: application/vnd.kafka.v2+json
  • When performing producer operations, POST requests must provide a Content-Type header specifying the embedded data format of the messages produced, either json or binary, as shown in the following table.

    Embedded data format    Content-Type header

    JSON                    Content-Type: application/vnd.kafka.json.v2+json
    Binary                  Content-Type: application/vnd.kafka.binary.v2+json

You set the embedded data format when creating a consumer using the /consumers/groupid endpoint. For more information, see the next section.

11.2.2.2. Embedded data format

The embedded data format is the format of the Kafka messages that are transmitted, over HTTP, from a producer to a consumer using the Kafka Bridge. Two embedded data formats are supported: JSON and binary.

When creating a consumer using the /consumers/groupid endpoint, the POST request body must specify an embedded data format of either JSON or binary. This is specified in the format field, for example:

{
  "name": "my-consumer",
  "format": "binary", (1)
...
}

(1) A binary embedded data format.

The embedded data format specified when creating a consumer must match the data format of the Kafka messages it will consume.
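
A complete consumer-creation request might look like the following curl sketch (the host, port, and group name are illustrative):

curl -X POST http://localhost:8080/consumers/my-group \
  -H 'Content-Type: application/vnd.kafka.v2+json' \
  -d '{
    "name": "my-consumer",
    "format": "binary"
  }'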

If you choose to specify a binary embedded data format, subsequent producer requests must provide the binary data in the request body as Base64-encoded strings. For example, when sending messages by making POST requests to the /topics/topicname endpoint, the value must be encoded in Base64:

{
  "records": [
    {
      "key": "my-key",
      "value": "ZWR3YXJkdGhldGhyZWVsZWdnZWRjYXQ="
    }
  ]
}

Producer requests must also provide a Content-Type header that corresponds to the embedded data format, for example, Content-Type: application/vnd.kafka.binary.v2+json.
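
Putting the body and header together, a complete producer request might look like the following curl sketch (the host, port, and topic name are illustrative):

curl -X POST http://localhost:8080/topics/my-topic \
  -H 'Content-Type: application/vnd.kafka.binary.v2+json' \
  -d '{
    "records": [
      {
        "key": "my-key",
        "value": "ZWR3YXJkdGhldGhyZWVsZWdnZWRjYXQ="
      }
    ]
  }'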

11.2.2.3. Accept headers

After creating a consumer, subsequent GET requests must provide an Accept header in the following format:

Accept: application/vnd.kafka.embedded-data-format.v2+json

Where the embedded-data-format is the embedded data format of the consumer: either json or binary.

For example, when retrieving records for a subscribed consumer using an embedded data format of JSON, include this Accept header:

Accept: application/vnd.kafka.json.v2+json
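
A records request for the same consumer might then look like the following curl sketch (the host, port, group, and consumer names are illustrative; the records endpoint is described in the Kafka Bridge API reference):

curl -X GET http://localhost:8080/consumers/my-group/instances/my-consumer/records \
  -H 'Accept: application/vnd.kafka.json.v2+json'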

11.3. Downloading an AMQ Streams archive

A zipped distribution of AMQ Streams is available for download from the Red Hat website. You can download a copy of the distribution by following the steps below.

Procedure

  • Download the latest version of the Red Hat AMQ Streams archive from the Customer Portal.

11.4. Configuring AMQ Streams Kafka Bridge properties

This procedure describes how to configure the properties used by the AMQ Streams Kafka Bridge.

You configure the Kafka Bridge, as you would any other Kafka client, using the appropriate prefixes for Kafka-related properties:

  • kafka. for general configuration that applies to producers and consumers, such as server connection and security.
  • kafka.consumer. for consumer-specific configuration passed only to the consumer.
  • kafka.producer. for producer-specific configuration passed only to the producer.

Procedure

  1. Edit the application.properties file provided with the AMQ Streams Kafka Bridge installation archive.

    Use the properties file to specify Kafka and HTTP-related properties.

    1. Configure standard Kafka-related properties, including properties specific to the Kafka consumers and producers.

      Use:

      • kafka.bootstrap.servers to define the host/port connections to the Kafka cluster
      • kafka.producer.acks to provide acknowledgments to the HTTP client
      • kafka.consumer.auto.offset.reset to determine how to manage reset of the offset in Kafka

        For more information on the configuration of Kafka properties, see the Apache Kafka website.

    2. Configure HTTP-related properties to enable HTTP access to the Kafka cluster.

      http.enabled=true
      http.host=0.0.0.0
      http.port=8080
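
    Taken together, a minimal application.properties might look like the following sketch (the broker address and the producer and consumer values are illustrative):

      kafka.bootstrap.servers=localhost:9092
      kafka.producer.acks=1
      kafka.consumer.auto.offset.reset=earliest
      http.enabled=true
      http.host=0.0.0.0
      http.port=8080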

11.5. Installing the AMQ Streams Kafka Bridge

Follow this procedure to install the AMQ Streams Kafka Bridge on Red Hat Enterprise Linux.

Procedure

  1. If you have not already done so, unzip the AMQ Streams Kafka Bridge installation archive to any directory.
  2. Run the Kafka Bridge script, passing the configuration properties file as a parameter:

    For example:

    ./bin/kafka_bridge_run.sh --config-file=<path>/configfile.properties
  3. Check the log to confirm that the Kafka Bridge started successfully:

    HTTP-Kafka Bridge started and listening on port 8080
    HTTP-Kafka Bridge bootstrap servers localhost:9092

11.6. AMQ Streams Kafka Bridge API resources

For the full list of REST API endpoints and descriptions, including example requests and responses, see Kafka Bridge API reference.
