Chapter 1. Features
The features added in this release, which were not available in previous releases of AMQ Streams, are outlined below.
1.1. Kafka 2.3.0 support
AMQ Streams now supports Apache Kafka version 2.3.0.
AMQ Streams is based on Kafka 2.3.0. Only Kafka distributions built by Red Hat are supported.
You must upgrade the Cluster Operator to AMQ Streams version 1.3 before you can upgrade brokers and client applications to Kafka 2.3.0. For instructions, see AMQ Streams and Kafka upgrades.
Refer to the Kafka 2.2.1 and Kafka 2.3.0 Release Notes for additional information.
Kafka 2.2.x is supported in AMQ Streams only for upgrade purposes.
For more information on supported versions, see the Red Hat AMQ 7 Component Details Page on the Customer Portal.
1.2. AMQ Streams Kafka Bridge
The AMQ Streams Kafka Bridge moves from a Technology Preview to a generally available component of AMQ Streams.
1.2.1. Overview
The Kafka Bridge provides a RESTful interface to AMQ Streams, offering the advantages of an easy-to-use web API. Client applications can connect to AMQ Streams without needing to interpret the Kafka protocol.
The API has two main resources — consumers and topics — that are exposed through endpoints that allow you to interact with consumers and producers in your Kafka cluster. The resources relate only to the Kafka Bridge, not the consumers and producers connected directly to Kafka.
The Kafka Bridge supports HTTP requests to:
- Send messages to topics.
- Retrieve messages from topics.
- Create and delete consumers.
- Subscribe consumers to topics, so that they start receiving messages from those topics.
- Retrieve a list of topics that a consumer is subscribed to.
- Unsubscribe consumers from topics.
- Assign partitions to consumers.
- Commit consumer offsets.
- Perform seek operations on a partition.
The methods provide JSON responses and HTTP response code error handling. Messages can be sent in JSON or binary formats.
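As an illustration, a single HTTP request is enough to send a message to a topic. This is a sketch, assuming a Kafka Bridge listening on localhost:8080 and an existing topic named my-topic (both example values):

```shell
# Produce a JSON-formatted record to a topic through the Kafka Bridge.
# The content type selects the embedded data format (json here).
curl -X POST http://localhost:8080/topics/my-topic \
  -H 'Content-Type: application/vnd.kafka.json.v2+json' \
  -d '{"records":[{"key":"my-key","value":"hello world"}]}'
```

On success, the bridge responds with a JSON object listing the partition and offset assigned to each record.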
See Kafka Bridge overview and Kafka Bridge configuration. For API documentation, see the Kafka Bridge API reference.
To try out the API from your local machine, see Kafka Bridge quickstart.
1.2.2. API changes since the Technology Preview
The following changes have been made to the Kafka Bridge API since the Technology Preview release:
- The openapi endpoint has been added. This endpoint retrieves the OpenAPI 2.0 specification for the Kafka Bridge in JSON format.
- The subscription endpoint now supports GET requests to retrieve a list of all topics that the consumer is subscribed to.
See the openapi and subscription endpoints in the Kafka Bridge API reference.
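The new GET support on the subscription endpoint can be exercised directly. This sketch assumes a bridge on localhost:8080 and a previously created consumer named my-consumer in group my-group (all example values):

```shell
# Retrieve the list of topics the consumer is currently subscribed to
curl -X GET http://localhost:8080/consumers/my-group/instances/my-consumer/subscription \
  -H 'Accept: application/vnd.kafka.v2+json'
```

The response is a JSON object containing the subscribed topics for that consumer instance.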
Also, there is a breaking change to the data types declared in the OpenAPI Specification (OAS) for the following consumer configuration settings:
| Consumer configuration setting | Previously declared as… | Now declared as… |
|---|---|---|
| | String | Integer |
| | String | Boolean |
| | String | Integer |
The data types for these settings are now validated by the OAS. As a result, if invalid data types are submitted to the /consumers endpoint, the OAS returns a 400 Bad Request code (containing details of the invalid settings). Previously, a general 500 Internal Server Error code was returned.
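As an illustration of the new validation, consider submitting a consumer configuration setting with the wrong data type. The enable.auto.commit setting is used here purely as a hypothetical example of a boolean-typed setting, and localhost:8080 and the consumer group name are example values:

```shell
# The string "true" is expected to fail OAS validation where a JSON
# boolean (true) is required, producing a 400 Bad Request response
# with details of the invalid setting.
curl -X POST http://localhost:8080/consumers/my-group \
  -H 'Content-Type: application/vnd.kafka.v2+json' \
  -d '{"name":"my-consumer","format":"json","enable.auto.commit":"true"}'
```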
1.2.3. 3scale integration with the Kafka Bridge
You can now integrate Red Hat 3scale API Management with the Kafka Bridge.
3scale can secure the Kafka Bridge with TLS and provide authentication and authorization. Integration with 3scale also makes additional features available, such as metrics, rate limiting, and billing.
1.2.4. 3scale service discovery annotations and labels
When the Kafka Bridge is deployed, the service that exposes the REST interface of the Kafka Bridge has the annotations and labels required for discovery by 3scale.
1.2.5. Kafka Bridge quickstart
A new quickstart guide for the Kafka Bridge helps you to try out the API from your local machine. The quickstart provides example curl requests for the most common methods in the API.
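A typical quickstart-style sequence creates a consumer, subscribes it to a topic, and polls for records. This is a sketch, assuming a bridge on localhost:8080 and the example names my-group, my-consumer, and my-topic:

```shell
# 1. Create a consumer in the group "my-group"
curl -X POST http://localhost:8080/consumers/my-group \
  -H 'Content-Type: application/vnd.kafka.v2+json' \
  -d '{"name":"my-consumer","format":"json","auto.offset.reset":"earliest"}'

# 2. Subscribe the consumer to a topic
curl -X POST http://localhost:8080/consumers/my-group/instances/my-consumer/subscription \
  -H 'Content-Type: application/vnd.kafka.v2+json' \
  -d '{"topics":["my-topic"]}'

# 3. Poll for records (the first poll may return an empty array
#    while the consumer joins the group)
curl -X GET http://localhost:8080/consumers/my-group/instances/my-consumer/records \
  -H 'Accept: application/vnd.kafka.json.v2+json'
```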
1.3. Status properties for custom resources
Status properties for AMQ Streams custom resources move from a Technology Preview to a generally available feature.
You can check the current status of a custom resource by querying its status property. A resource’s status publishes information about the resource to users and tools that need the information. The current state and last observedGeneration are available for every resource. Some resources also publish resource-specific information.
Status information is available for the following resources:
- Kafka
- KafkaTopic
- KafkaConnect
- KafkaConnectS2I
- KafkaMirrorMaker
- KafkaUser
- KafkaBridge
To check the status of a resource, use the oc get command and apply a JSONPath expression, for example:
oc get kafkatopic my-topic -o jsonpath='{.status}'
oc get kafkabridge my-bridge -o jsonpath='{.status.observedGeneration}'
See AMQ Streams custom resource status and Checking the status of a custom resource.
1.4. Kafka Exporter support
Kafka Exporter is an open source project to enhance monitoring of Apache Kafka brokers and clients. Kafka Exporter is provided with AMQ Streams for deployment with a Kafka cluster to extract additional metrics data from Kafka brokers related to offsets, consumer groups, consumer lag, and topics.
The metrics data is used, for example, to help identify slow consumers.
Lag data is exposed as Prometheus metrics, which can then be presented in Grafana for analysis.
If you are already using Prometheus and Grafana for monitoring of built-in Kafka metrics, you can configure Prometheus to also scrape the Kafka Exporter Prometheus endpoint.
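Before adding the endpoint to your Prometheus scrape configuration, you can fetch the metrics directly to confirm it is reachable. This is a sketch; the service name my-cluster-kafka-exporter and port 9404 are assumptions that depend on your deployment:

```shell
# Fetch consumer lag metrics from the Kafka Exporter endpoint
curl -s http://my-cluster-kafka-exporter:9404/metrics | grep kafka_consumergroup_lag
```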
See Kafka Exporter.