Chapter 1. Features
AMQ Streams version 1.6 is based on Strimzi 0.20.x.
The features added in this release, which were not available in previous releases of AMQ Streams, are outlined below.
To view all the enhancements and bugs that are resolved in this release, see the AMQ Streams Jira project.
1.1. Kafka support in AMQ Streams 1.6.x (Long Term Support on OCP 3.11)
This section describes the versions of Kafka and ZooKeeper that are supported in AMQ Streams 1.6 and the subsequent patch releases.
AMQ Streams 1.6.x is the Long Term Support release for use with OCP 3.11, and is supported only for as long as OpenShift Container Platform 3.11 is supported.
AMQ Streams 1.6.4 and later patch releases are supported on OCP 3.11 only. If you are using OCP 4.x, you must upgrade to AMQ Streams 1.7.x or later.
For information on support dates for AMQ LTS versions, see the Red Hat Knowledgebase solution How long are AMQ LTS releases supported?.
Only Kafka distributions built by Red Hat are supported. Previous versions of Kafka are supported in AMQ Streams 1.6.x only for upgrade purposes.
For more information on supported Kafka versions, see the Red Hat AMQ 7 Component Details Page on the Customer Portal.
1.1.1. Kafka support in AMQ Streams 1.6.6 and 1.6.7
The AMQ Streams 1.6.6 and 1.6.7 releases support Apache Kafka version 2.6.3.
You must upgrade the Cluster Operator before you can upgrade brokers and client applications to Kafka 2.6.3. For upgrade instructions, see AMQ Streams and Kafka upgrades.
Kafka 2.6.3 requires ZooKeeper version 3.5.9, the same version required by Kafka 2.6.2. Therefore, the Cluster Operator does not perform a ZooKeeper upgrade when upgrading from AMQ Streams 1.6.4 or 1.6.5.
Refer to the Kafka 2.6.3 Release Notes for additional information.
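For reference, upgrading the brokers to a new Kafka version involves changing the Kafka version in the Kafka custom resource. The following is a minimal sketch only, not a complete upgrade procedure; the cluster name my-cluster is a placeholder, and the full steps are described in AMQ Streams and Kafka upgrades.

Example Kafka version configuration for a broker upgrade

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster                        # placeholder cluster name
spec:
  kafka:
    version: 2.6.3                        # new Kafka version
    config:
      log.message.format.version: "2.6"   # message format version used by the brokers
      # ...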
1.1.2. Kafka support in AMQ Streams 1.6.4 and 1.6.5
The AMQ Streams 1.6.4 and 1.6.5 releases support Apache Kafka version 2.6.2 and ZooKeeper version 3.5.9.
You must upgrade the Cluster Operator before you can upgrade brokers and client applications to Kafka 2.6.2. For upgrade instructions, see AMQ Streams and Kafka upgrades.
Kafka 2.6.2 requires ZooKeeper version 3.5.9. Therefore, the Cluster Operator performs a ZooKeeper upgrade when upgrading from AMQ Streams 1.6.2, which uses an earlier ZooKeeper version.
Refer to the Kafka 2.6.2 Release Notes for additional information.
1.1.3. Kafka support in AMQ Streams 1.6.0 and 1.6.2
AMQ Streams 1.6.0 and 1.6.2 support Apache Kafka version 2.6.0.
You must upgrade the Cluster Operator before you can upgrade brokers and client applications to Kafka 2.6.0. For upgrade instructions, see AMQ Streams and Kafka upgrades.
Kafka 2.6.0 requires the same ZooKeeper version as Kafka 2.5.x (ZooKeeper version 3.5.7 / 3.5.8). Therefore, the Cluster Operator does not perform a ZooKeeper upgrade when upgrading from AMQ Streams 1.5.
Refer to the Kafka 2.5.0 and Kafka 2.6.0 Release Notes for additional information.
1.2. Container images move to Java 11
AMQ Streams container images move to Java 11 as the Java runtime environment (JRE). The JRE version in the images changes from OpenJDK 8 to OpenJDK 11.
1.3. Cluster Operator logging
Cluster Operator logging is now configured using a ConfigMap that is automatically created when the Cluster Operator is deployed. The ConfigMap is described in the following new YAML file:
install/cluster-operator/050-ConfigMap-strimzi-cluster-operator.yaml
To configure Cluster Operator logging:

1. In the 050-ConfigMap-strimzi-cluster-operator.yaml file, edit the data.log4j2.properties field:

Example Cluster Operator logging configuration

kind: ConfigMap
apiVersion: v1
metadata:
  name: strimzi-cluster-operator
  labels:
    app: strimzi
data:
  log4j2.properties: |
    name = COConfig
    monitorInterval = 30

    appender.console.type = Console
    appender.console.name = STDOUT
    # ...

2. Apply the updated ConfigMap:

oc apply -f install/cluster-operator/050-ConfigMap-strimzi-cluster-operator.yaml
To change the frequency at which logs are reloaded, set a time in seconds in the monitorInterval field (the default reload interval is 30 seconds).

As a result of this change, the STRIMZI_LOG_LEVEL environment variable has been removed from the 060-Deployment-strimzi-cluster-operator.yaml file. Set the log level in the ConfigMap instead.
1.4. OAuth 2.0 authorization
Support for OAuth 2.0 authorization moves out of Technology Preview to a generally available component of AMQ Streams.
If you are using OAuth 2.0 for token-based authentication, you can now also use OAuth 2.0 authorization rules to constrain client access to Kafka brokers.
AMQ Streams supports the use of OAuth 2.0 token-based authorization through Red Hat Single Sign-On Authorization Services, which allows you to manage security policies and permissions centrally.
Security policies and permissions defined in Red Hat Single Sign-On are used to grant access to resources on Kafka brokers. Users and clients are matched against policies that permit access to perform specific actions on Kafka brokers.
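For illustration, OAuth 2.0 authorization is enabled through the authorization property of the Kafka custom resource. The following is a minimal sketch, not a complete configuration; the cluster name, OAuth client ID, and token endpoint address are placeholders for values from your Red Hat Single Sign-On deployment.

Example OAuth 2.0 token-based authorization configuration

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster                 # placeholder cluster name
spec:
  kafka:
    # ...
    authorization:
      type: keycloak
      clientId: kafka              # placeholder OAuth client ID
      tokenEndpointUri: https://<sso-hostname>/auth/realms/<realm-name>/protocol/openid-connect/token
      delegateToKafkaAcls: false   # do not delegate denied requests to ACL-based authorization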
1.5. Open Policy Agent (OPA) integration
Open Policy Agent (OPA) is an open-source policy engine. You can integrate OPA with AMQ Streams to act as a policy-based authorization mechanism for permitting client operations on Kafka brokers.
When a request is made from a client, OPA evaluates the request against policies defined for Kafka access, and then allows or denies the request.
You can define access control for Kafka clusters, consumer groups and topics. For instance, you can define an authorization policy that allows write access from a producer client to a specific broker topic.
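For illustration, OPA authorization is configured through the authorization property of the Kafka custom resource, with a URL pointing to the OPA policy used for authorization decisions. The following is a minimal sketch; the OPA server address and policy path are placeholders.

Example OPA authorization configuration

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster       # placeholder cluster name
spec:
  kafka:
    # ...
    authorization:
      type: opa
      url: http://opa:8181/v1/data/kafka/authz/allow   # placeholder URL of the OPA policy to query
      allowOnError: false                              # deny requests if the OPA server cannot be reached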
See the KafkaAuthorizationOpa schema reference.

Note: Red Hat does not support the OPA server.
1.6. Debezium for change data capture integration
Red Hat Debezium is a distributed change data capture platform. It captures row-level changes in databases, creates change event records, and streams the records to Kafka topics. Debezium is built on Apache Kafka.

You can deploy and integrate Debezium with AMQ Streams. Following a deployment of AMQ Streams, you deploy Debezium as a connector configuration through Kafka Connect. Debezium passes change event records to AMQ Streams on OpenShift. Applications can read these change event streams and access the change events in the order in which they occurred.
Debezium has multiple uses, including:
- Data replication
- Updating caches and search indexes
- Simplifying monolithic applications
- Data integration
- Enabling streaming queries
Debezium provides connectors (based on Kafka Connect) for the following common databases:
- MySQL
- PostgreSQL
- SQL Server
- MongoDB
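For illustration, a Debezium connector can be created as a KafkaConnector custom resource once Kafka Connect is deployed. The following is a minimal sketch for the MySQL connector; the connector name, Kafka Connect cluster name, and database connection details are placeholders, and the full set of required connector properties is described in the Debezium documentation.

Example Debezium MySQL connector configuration

apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaConnector
metadata:
  name: inventory-connector            # placeholder connector name
  labels:
    strimzi.io/cluster: my-connect     # placeholder Kafka Connect cluster name
spec:
  class: io.debezium.connector.mysql.MySqlConnector
  tasksMax: 1
  config:
    database.hostname: mysql           # placeholder database connection details
    database.port: "3306"
    database.user: debezium
    database.password: dbz
    database.server.id: "184054"
    database.server.name: dbserver1    # logical name used as the Kafka topic prefix
    database.include.list: inventory   # databases to capture changes from
    database.history.kafka.bootstrap.servers: my-cluster-kafka-bootstrap:9092
    database.history.kafka.topic: schema-changes.inventory

This sketch assumes the Kafka Connect cluster was deployed with the use of connector resources enabled.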
For more information on deploying Debezium with AMQ Streams, refer to the product documentation.
1.7. Service Registry
You can use Service Registry as a centralized store of service schemas for data streaming. For Kafka, you can use Service Registry to store Apache Avro or JSON schemas.
Service Registry provides a REST API and a Java REST client to register and query the schemas from client applications through server-side endpoints.
Using Service Registry decouples the process of managing schemas from the configuration of client applications. You enable an application to use a schema from the registry by specifying its URL in the client code.
For example, the schemas used to serialize and deserialize messages can be stored in the registry and then referenced from the applications that use them, ensuring that the messages they send and receive are compatible with those schemas.
Kafka client applications can push or pull their schemas from Service Registry at runtime.
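For illustration, a Kafka producer might be configured to use a Service Registry serializer and pull schemas from the registry at runtime. The following properties are a minimal sketch; the bootstrap address and registry URL are placeholders, and the serializer class and property names assume the Service Registry (Apicurio-based) serdes libraries.

Example producer configuration using Service Registry

# Kafka connection and serializer settings (placeholder addresses)
bootstrap.servers=my-cluster-kafka-bootstrap:9092
key.serializer=org.apache.kafka.common.serialization.StringSerializer
value.serializer=io.apicurio.registry.utils.serde.AvroKafkaSerializer
# Location of the Service Registry REST API endpoint
apicurio.registry.url=http://my-registry:8080/api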