Chapter 1. Features
AMQ Streams 2.1 introduces the features described in this section.
AMQ Streams version 2.1 is based on Strimzi 0.28.x.
To view all the enhancements and bugs that are resolved in this release, see the AMQ Streams Jira project.
1.1. OpenShift Container Platform support
AMQ Streams 2.1 is supported on OpenShift Container Platform 4.6 to 4.10.
For more information about the supported platform versions, see the AMQ Streams Supported Configurations.
1.2. Kafka 3.1.0 support
AMQ Streams now supports Apache Kafka version 3.1.0.
AMQ Streams uses Kafka 3.1.0. Only Kafka distributions built by Red Hat are supported.
You must upgrade the Cluster Operator to AMQ Streams version 2.1 before you can upgrade brokers and client applications to Kafka 3.1.0. For upgrade instructions, see Upgrading AMQ Streams.
Refer to the Kafka 3.0.0 and Kafka 3.1.0 Release Notes for additional information.
Kafka 3.0.x is supported only for the purpose of upgrading to AMQ Streams 2.1.
For more information on supported versions, see the AMQ Streams Component Details.
Kafka 3.1.0 requires ZooKeeper version 3.6.3, which is the same version as Kafka 3.0.0. Therefore, the Cluster Operator will not perform a ZooKeeper upgrade when you upgrade from AMQ Streams 2.0 to AMQ Streams 2.1.
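For example, you change the Kafka version for a cluster in the Kafka custom resource. The following is a minimal sketch, assuming a cluster named my-cluster; the inter.broker.protocol.version value is typically raised in a separate step only after all brokers are running the new version, as described in the upgrade documentation.
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster # assumed cluster name
spec:
  kafka:
    version: 3.1.0 # Kafka version used by the brokers
    config:
      inter.broker.protocol.version: "3.1" # raise only after all brokers run 3.1.0
    # ...
  zookeeper:
    # ...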
1.3. Supporting the v1beta2 API version
The v1beta2 API version for all custom resources was introduced with AMQ Streams 1.7. For AMQ Streams 1.8, the v1alpha1 and v1beta1 API versions were removed from all AMQ Streams custom resources apart from KafkaTopic and KafkaUser.
Upgrading the custom resources to v1beta2 prepares AMQ Streams for a move to Kubernetes CRD v1, which is required for Kubernetes v1.22.
If you are upgrading from an AMQ Streams version prior to version 1.7:
- Upgrade to AMQ Streams 1.7
- Convert the custom resources to v1beta2
- Upgrade to AMQ Streams 1.8
You must upgrade your custom resources to use API version v1beta2 before upgrading to AMQ Streams version 2.1.
See Deploying and upgrading AMQ Streams.
1.3.1. Upgrading custom resources to v1beta2
To support the upgrade of custom resources to v1beta2, AMQ Streams provides an API conversion tool, which you can download from the AMQ Streams software downloads page.
You perform the custom resource upgrade in two steps.
Step one: Convert the format of custom resources
Using the API conversion tool, you can convert the format of your custom resources into a format applicable to v1beta2 in one of two ways:
- Converting the YAML files that describe the configuration for AMQ Streams custom resources
- Converting AMQ Streams custom resources directly in the cluster
Alternatively, you can manually convert each custom resource into a format applicable to v1beta2. Instructions for manually converting custom resources are included in the documentation.
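For example, the tool can be run from the command line to convert either a YAML file or a live resource. The following is a sketch only; the file name, resource name, and namespace are placeholders, and the option names are assumptions based on the tool's typical usage, so check the tool's built-in help for the exact syntax.
# Convert the YAML file my-kafka.yaml in place (file name is a placeholder)
bin/api-conversion.sh convert-file --file my-kafka.yaml --in-place

# Convert a custom resource directly in the cluster (names are placeholders)
bin/api-conversion.sh convert-resource --kind Kafka --name my-cluster --namespace my-namespace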
Step two: Upgrade CRDs to v1beta2
Next, using the API conversion tool with the crd-upgrade command, you must set v1beta2 as the storage API version in your CRDs. You cannot perform this step manually.
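For example, after all resources have been converted, the CRDs are upgraded with a single command. The crd-upgrade command is named in this section; the script name below is an assumption based on how the tool is typically invoked.
# Set v1beta2 as the storage API version in the AMQ Streams CRDs
bin/api-conversion.sh crd-upgrade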
For full instructions, see Upgrading AMQ Streams.
1.4. Support for IBM Z and LinuxONE architecture
AMQ Streams 2.1 is enabled to run on IBM Z and LinuxONE s390x architecture.
Support for IBM Z and LinuxONE applies to AMQ Streams running with Kafka 3.1.0 on OpenShift Container Platform 4.10. The Kafka versions shipped with AMQ Streams 2.0 and earlier versions do not contain the s390x binaries.
1.4.1. Requirements for IBM Z and LinuxONE
- OpenShift Container Platform 4.10
- Kafka 3.1.0
1.4.2. Unsupported on IBM Z and LinuxONE
- Kafka 3.0.0 or earlier
- AMQ Streams upgrades and downgrades, because this is the first release on s390x
- AMQ Streams on disconnected OpenShift Container Platform environments
- AMQ Streams OPA integration
1.5. Support for IBM Power architecture
AMQ Streams 2.1 is enabled to run on IBM Power ppc64le architecture.
Support for IBM Power applies to AMQ Streams running with Kafka 3.0.0 and later on OpenShift Container Platform 4.9 and later. The Kafka versions shipped with AMQ Streams 1.8 and earlier versions do not contain the ppc64le binaries.
1.5.1. Requirements for IBM Power
- OpenShift Container Platform 4.9 and later
- Kafka 3.0.0 and later
1.5.2. Unsupported on IBM Power
- Kafka 2.8.0 and earlier
- AMQ Streams on disconnected OpenShift Container Platform environments
1.6. Renewal of custom CA certificates
The Cluster Operator can now detect user-provided custom CA certificates. When you renew your custom certificates, the Cluster Operator will perform a rolling update of ZooKeeper, Kafka, and other components to trust the new CA certificate.
If you are using your own certificates, the Cluster Operator does not renew them automatically. Instead, you need to edit the existing Secret to add the new CA certificate and update the certificate generation annotation value. The annotation is set to a higher incremental value so that the Cluster Operator uses the latest certificate in the renewal process.
Example secret configuration updated with a new CA certificate
apiVersion: v1
kind: Secret
data:
  ca.crt: GCa6LS3RTHeKFiFDGBOUDYFAZ0F... # new CA certificate
metadata:
  annotations:
    strimzi.io/ca-cert-generation: "1" # annotation value incremented from its previous value
  labels:
    strimzi.io/cluster: my-cluster
    strimzi.io/kind: Kafka
  name: my-cluster-cluster-ca-cert
  #...
type: Opaque
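As an illustration of the renewal steps, the Secret can be updated from the command line. The following is a sketch, assuming the new CA certificate is stored in a file named new-ca.crt, the cluster is named my-cluster, and GNU base64 is available; adjust the commands to your environment.
# Replace the ca.crt entry with the base64-encoded new CA certificate
oc patch secret my-cluster-cluster-ca-cert --type=merge \
  -p "{\"data\":{\"ca.crt\":\"$(base64 -w0 new-ca.crt)\"}}"

# Increment the certificate generation annotation so the renewal is picked up
oc annotate secret my-cluster-cluster-ca-cert \
  strimzi.io/ca-cert-generation=1 --overwrite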
1.7. Debezium for change data capture integration
Red Hat Debezium is a distributed change data capture platform. It captures row-level changes in databases, creates change event records, and streams the records to Kafka topics. Debezium is built on Apache Kafka. You can deploy and integrate Debezium with AMQ Streams. Following a deployment of AMQ Streams, you deploy Debezium as a connector configuration through Kafka Connect. Debezium passes change event records to AMQ Streams on OpenShift. Applications can read these change event streams and access the change events in the order in which they occurred.
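For example, with connector resources enabled for your Kafka Connect cluster, a Debezium connector can be declared as a KafkaConnector resource. The following is a minimal sketch, assuming a Kafka Connect cluster named my-connect-cluster whose image already contains the Debezium PostgreSQL connector plugin; the connection settings are illustrative placeholders.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: inventory-connector # assumed connector name
  labels:
    strimzi.io/cluster: my-connect-cluster # assumed Kafka Connect cluster
spec:
  class: io.debezium.connector.postgresql.PostgresConnector
  tasksMax: 1
  config: # illustrative database connection details
    database.hostname: postgres
    database.port: 5432
    database.user: debezium
    database.password: dbz
    database.dbname: inventory
    database.server.name: fulfillment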
Debezium has multiple uses, including:
- Data replication
- Updating caches and search indexes
- Simplifying monolithic applications
- Data integration
- Enabling streaming queries
Debezium provides connectors (based on Kafka Connect) for the following common databases:
- Db2
- MongoDB
- MySQL
- PostgreSQL
- SQL Server
For more information on deploying Debezium with AMQ Streams, refer to the product documentation.
1.8. Service Registry
You can use Service Registry as a centralized store of service schemas for data streaming. For Kafka, you can use Service Registry to store Apache Avro or JSON schemas.
Service Registry provides a REST API and a Java REST client to register and query the schemas from client applications through server-side endpoints.
Using Service Registry decouples the process of managing schemas from the configuration of client applications. You enable an application to use a schema from the registry by specifying its URL in the client code.
For example, the schemas used to serialize and deserialize messages can be stored in the registry and then referenced from the applications that use them, ensuring that the messages they send and receive are compatible with those schemas.
Kafka client applications can push or pull their schemas from Service Registry at runtime.
For more information on using Service Registry with AMQ Streams, refer to the Service Registry documentation.