Chapter 1. Features
AMQ Streams version 1.4 is based on Strimzi 0.17.x.
The features added in this release, which were not available in previous releases of AMQ Streams, are outlined below.
1.1. Kafka 2.4.0 support
AMQ Streams now supports and uses Apache Kafka version 2.4.0. Only Kafka distributions built by Red Hat are supported.
You must upgrade the Cluster Operator to AMQ Streams version 1.4 before you can upgrade brokers and client applications to Kafka 2.4.0. For upgrade instructions, see AMQ Streams and Kafka upgrades.
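The Kafka version used by the brokers is declared in the Kafka custom resource. The following is a minimal sketch; the cluster name is illustrative, and log.message.format.version is kept at the previous version until client applications are upgraded:

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster # illustrative name
spec:
  kafka:
    version: 2.4.0
    config:
      # Keep the message format at the previous version until
      # consumers and producers have been upgraded
      log.message.format.version: "2.3"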
Refer to the Kafka 2.3.0 and Kafka 2.4.0 Release Notes for additional information.
Kafka 2.3.x is supported in AMQ Streams 1.4 only for upgrade purposes.
For more information on supported versions, see the Red Hat AMQ 7 Component Details Page on the Customer Portal.
Changes to the partition rebalance protocol in Kafka 2.4.0
Kafka 2.4.0 adds incremental cooperative rebalancing for consumers and Kafka Streams applications. This is an improved rebalance protocol for implementing partition rebalances according to a defined rebalance strategy. Using the new protocol, consumers keep their assigned partitions during a rebalance and only revoke them at the end of the process if required to achieve cluster balance. This reduces the unavailability of the consumer group or Kafka Streams application during a rebalance.
To take advantage of incremental cooperative rebalancing, you must upgrade consumers and Kafka Streams applications to use the new protocol instead of the old eager rebalance protocol.
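To opt in, a consumer adds the cooperative assignor to its partition assignment strategy. The following consumer properties sketch the two-step rolling upgrade described in the Apache Kafka documentation; listing both assignors first allows consumers on the eager and cooperative protocols to coexist during the upgrade:

# First rolling restart: list the cooperative assignor alongside the old one
partition.assignment.strategy=org.apache.kafka.clients.consumer.CooperativeStickyAssignor,org.apache.kafka.clients.consumer.RangeAssignor
# Second rolling restart: remove the old (eager) assignor
# partition.assignment.strategy=org.apache.kafka.clients.consumer.CooperativeStickyAssignor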
See Upgrading consumers and Kafka Streams applications to cooperative rebalancing and Notable changes in 2.4.0 in the Apache Kafka documentation.
1.1.1. ZooKeeper 3.5.7
Kafka version 2.4.0 requires a new version of ZooKeeper, version 3.5.7.
You do not need to manually upgrade to ZooKeeper 3.5.7; the Cluster Operator performs the ZooKeeper upgrade when it upgrades Kafka brokers. However, you might notice some additional rolling updates during this procedure.
There is a known issue in AMQ Streams 1.4 related to scaling ZooKeeper. For more information, see Chapter 6, Known issues.
1.2. KafkaConnector resources
AMQ Streams now provides Kubernetes-native management of connectors in a Kafka Connect cluster using a new custom resource, named KafkaConnector, and an internal operator.
A KafkaConnector YAML file describes the configuration of a source or sink connector that you deploy to your Kubernetes cluster to either create a new connector instance or manage a running one. As with other Kafka resources, the Cluster Operator updates running connector instances to match the configurations defined in their KafkaConnector resources.
The Installation and Example Files now include an example KafkaConnector resource in examples/connector/source-connector.yaml. Deploy the example YAML file to create a FileStreamSourceConnector that sends each line of the license file to Kafka as a message in a topic named my-topic.
Example KafkaConnector
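The following is a minimal sketch consistent with the example described above; the connector name and the target cluster label are illustrative:

apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaConnector
metadata:
  name: my-source-connector # illustrative name
  labels:
    # Must match the name of the KafkaConnect cluster the connector runs in
    strimzi.io/cluster: my-connect-cluster
spec:
  class: org.apache.kafka.connect.file.FileStreamSourceConnector
  tasksMax: 1
  config:
    # Each line of the file is sent to Kafka as a message
    file: "/opt/kafka/LICENSE"
    topic: my-topic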
1.2.1. Enabling KafkaConnectors
To ensure compatibility with earlier versions of AMQ Streams, KafkaConnectors are disabled by default. They might become the default way to create and manage connectors in future AMQ Streams releases.
To enable KafkaConnectors for an AMQ Streams 1.4 Kafka Connect cluster, add the strimzi.io/use-connector-resources annotation to the KafkaConnect resource. For example:
Example Kafka Connect cluster with KafkaConnectors enabled
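A minimal sketch; the cluster name and bootstrap address are illustrative:

apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaConnect
metadata:
  name: my-connect-cluster # illustrative name
  annotations:
    # Enables management of connectors through KafkaConnector resources
    strimzi.io/use-connector-resources: "true"
spec:
  replicas: 1
  bootstrapServers: my-cluster-kafka-bootstrap:9092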
If KafkaConnectors are enabled, manual changes made directly using the Kafka Connect REST API are reverted by the Cluster Operator.
The Kafka Connect REST API (on port 8083) is still required to restart failed tasks.
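For example, a failed task can be restarted with a request along the following lines; the service address, connector name, and task ID are illustrative:

curl -X POST http://my-connect-cluster-connect-api:8083/connectors/my-source-connector/tasks/0/restart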
See Creating and managing connectors, Deploying a KafkaConnector resource to Kafka Connect, and Enabling KafkaConnector resources.
1.3. Kafka listener certificates
You can now provide your own server certificates and private keys for the following types of listeners:
- TLS listeners
- External listeners with TLS encryption enabled
These user-provided certificates are called Kafka listener certificates.
You can use your organization’s private Certificate Authority (CA) or a public CA to generate and sign your own Kafka listener certificates.
Listener configuration
You configure Kafka listener certificates in the configuration.brokerCertChainAndKey property of the listener.
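The following is a minimal sketch of a Kafka resource fragment for the tls listener; the Secret name and the file names within it are illustrative:

spec:
  kafka:
    listeners:
      tls:
        configuration:
          brokerCertChainAndKey:
            # Secret holding the server certificate chain and private key
            secretName: my-secret
            certificate: my-listener-certificate.crt
            key: my-listener-key.key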
See Kafka listener certificates and Providing your own Kafka listener certificates.
1.4. OAuth 2.0 authentication
Support for OAuth 2.0 authentication moves from a Technology Preview to a generally available component of AMQ Streams.
AMQ Streams supports OAuth 2.0 authentication using the SASL OAUTHBEARER mechanism. With OAuth 2.0 token-based authentication, application clients can access resources on application servers (called 'resource servers') without exposing account credentials. The client presents an access token as a means of authentication; application servers can also use the token to find more information about the level of access granted. The authorization server handles the granting of access and inquiries about access.
In the context of AMQ Streams:
- Kafka brokers act as resource servers
- Kafka clients act as resource clients
The brokers and clients communicate with the OAuth 2.0 authorization server, as necessary, to obtain or validate access tokens.
For a deployment of AMQ Streams, OAuth 2.0 integration provides:
- Server-side OAuth 2.0 support for Kafka brokers
- Client-side OAuth 2.0 support for Kafka MirrorMaker, Kafka Connect, and the Kafka Bridge
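On the broker side, for example, OAuth 2.0 authentication is enabled per listener in the Kafka resource. The following is a minimal sketch using fast local token validation against a JWKS endpoint; the authorization server address and realm are placeholders:

spec:
  kafka:
    listeners:
      tls:
        authentication:
          type: oauth
          # Placeholder authorization server address and realm
          validIssuerUri: https://<auth-server-address>/auth/realms/my-realm
          jwksEndpointUri: https://<auth-server-address>/auth/realms/my-realm/protocol/openid-connect/certs
          userNameClaim: preferred_username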
Red Hat Single Sign-On integration
You can deploy Red Hat Single Sign-On as an authorization server and configure it for integration with AMQ Streams.
You can use Red Hat Single Sign-On to:
- Configure authentication for Kafka brokers
- Configure and authorize clients
- Configure users and roles
- Obtain access and refresh tokens
1.5. Debezium for Change Data Capture integration
Debezium for Change Data Capture is only supported on OpenShift 4.x.
Debezium for Change Data Capture is a distributed platform that monitors databases and creates change event streams. Debezium is built on Apache Kafka and can be deployed and integrated with AMQ Streams. Following a deployment of AMQ Streams, you deploy Debezium as a connector configuration through Kafka Connect. Debezium captures row-level changes to a database table and passes corresponding change events to AMQ Streams on OpenShift. Applications can read these change event streams and access the change events in the order in which they occurred.
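For example, with KafkaConnectors enabled (see Section 1.2), a Debezium MySQL connector could be described with a resource along the following lines; the class is the Debezium MySQL connector, while the connection details and credentials are placeholders:

apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaConnector
metadata:
  name: inventory-connector # illustrative name
  labels:
    strimzi.io/cluster: my-connect-cluster
spec:
  class: io.debezium.connector.mysql.MySqlConnector
  tasksMax: 1
  config:
    # Placeholder connection details for the monitored database
    database.hostname: mysql
    database.port: "3306"
    database.user: debezium
    database.password: dbz
    database.server.id: "184054"
    database.server.name: dbserver1
    # Capture changes only from the inventory database
    database.whitelist: inventory
    # Topic used by the connector to record the database schema history
    database.history.kafka.bootstrap.servers: my-cluster-kafka-bootstrap:9092
    database.history.kafka.topic: schema-changes.inventory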
Debezium has multiple uses, including:
- Data replication
- Updating caches and search indexes
- Simplifying monolithic applications
- Data integration
- Enabling streaming queries
Debezium provides connectors (based on Kafka Connect) for the following common databases:
- MySQL
- PostgreSQL
- SQL Server
- MongoDB
For more information on deploying Debezium with AMQ Streams, refer to the product documentation.