Chapter 1. Features
AMQ Streams 2.3 introduces the features described in this section.
AMQ Streams 2.3 on OpenShift is based on Kafka 3.3.1 and Strimzi 0.32.x.
To view all the enhancements and bugs that are resolved in this release, see the AMQ Streams Jira project.
1.1. OpenShift Container Platform support
AMQ Streams 2.3 is supported on OpenShift Container Platform 4.8 to 4.12.
For more information about the supported platform versions, see the AMQ Streams Supported Configurations.
1.2. Kafka 3.3.1 support
AMQ Streams now supports Apache Kafka version 3.3.1.
AMQ Streams uses Kafka 3.3.1. Only Kafka distributions built by Red Hat are supported.
You must upgrade the Cluster Operator to AMQ Streams version 2.3 before you can upgrade brokers and client applications to Kafka 3.3.1. For upgrade instructions, see Upgrading AMQ Streams.
Refer to the Kafka 3.3.0 and Kafka 3.3.1 Release Notes for additional information.
Kafka 3.2.x is supported only for the purpose of upgrading to AMQ Streams 2.3.
For more information on supported versions, see the AMQ Streams Component Details.
Kafka 3.3.1 provides access to KRaft mode, where Kafka runs without ZooKeeper by using the Raft protocol. KRaft mode is available as a Developer Preview.
1.3. Supporting the v1beta2 API version
The v1beta2 API version for all custom resources was introduced with AMQ Streams 1.7. For AMQ Streams 1.8, the v1alpha1 and v1beta1 API versions were removed from all AMQ Streams custom resources apart from KafkaTopic and KafkaUser.
Upgrading the custom resources to v1beta2 prepares AMQ Streams for the move to Kubernetes CRD v1, which is required for Kubernetes 1.22.
If you are upgrading from an AMQ Streams version prior to version 1.7:
- Upgrade to AMQ Streams 1.7
- Convert the custom resources to v1beta2
- Upgrade to AMQ Streams 1.8
You must upgrade your custom resources to use API version v1beta2 before upgrading to AMQ Streams version 2.3.
1.3.1. Upgrading custom resources to v1beta2
To support the upgrade of custom resources to v1beta2, AMQ Streams provided the Red Hat AMQ Streams API Conversion Tool with AMQ Streams 1.8. Download the tool from the AMQ Streams 1.8 software downloads page.
You perform the custom resource upgrades in two steps.
Step one: Convert the format of custom resources
Using the API conversion tool, you can convert the format of your custom resources into a format applicable to v1beta2 in one of two ways:
- Converting the YAML files that describe the configuration for AMQ Streams custom resources
- Converting AMQ Streams custom resources directly in the cluster
Alternatively, you can manually convert each custom resource into a format applicable to v1beta2. Instructions for manually converting custom resources are included in the documentation.
Step two: Upgrade CRDs to v1beta2
Next, using the API conversion tool with the crd-upgrade command, you must set v1beta2 as the storage API version in your CRDs. You cannot perform this step manually.
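As an illustration, the two steps might look as follows when run from the unpacked tool directory. The file names and namespace are placeholders, and the exact option names may differ by tool version; check the tool's built-in help for the supported options.

```shell
# Step one: convert the YAML file for a custom resource to v1beta2
bin/api-conversion.sh convert-file --file kafka.yaml --output kafka-v1beta2.yaml

# ...or convert resources of a given kind directly in the cluster
bin/api-conversion.sh convert-resource --kind Kafka --namespace my-namespace

# Step two: set v1beta2 as the storage API version in the CRDs
bin/api-conversion.sh crd-upgrade
```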
For more information, see Upgrading from an AMQ Streams version earlier than 1.7.
1.4. Automatic approval of Cruise Control optimization proposals
When using AMQ Streams with Cruise Control, you can now automate the approval of the optimization proposals it generates. You generate an optimization proposal using the KafkaRebalance custom resource. To enable auto-approval, add the strimzi.io/rebalance-auto-approval: "true" annotation to the KafkaRebalance custom resource before you generate the proposal.
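For example, a KafkaRebalance resource with auto-approval enabled might look like the following sketch (the resource and cluster names are illustrative):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaRebalance
metadata:
  name: my-rebalance
  labels:
    # Must match the name of the Kafka cluster to rebalance
    strimzi.io/cluster: my-cluster
  annotations:
    # Approve the generated proposal automatically
    strimzi.io/rebalance-auto-approval: "true"
spec: {}
```

An empty spec requests a rebalance based on the default optimization goals.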
With manual approval, you make another request when a generated proposal has a ProposalReady status. You approve the proposal by adding the strimzi.io/rebalance: approve annotation to the KafkaRebalance resource in the new request.
With automatic approval, the proposal is generated and approved to complete the rebalance in a single request.
1.5. Support for multiple operations in ACL rule configuration
The KafkaUser custom resource has been updated to make ACL lists easier to manage.
Previously, you configured operations for ACL rules separately for each resource using the operation property.
Old format for configuring ACL rules
authorization:
  type: simple
  acls:
    - resource:
        type: topic
        name: my-topic
      operation: Read
    - resource:
        type: topic
        name: my-topic
      operation: Describe
    - resource:
        type: topic
        name: my-topic
      operation: Write
    - resource:
        type: topic
        name: my-topic
      operation: Create
A new operations property allows you to list multiple ACL operations as a single rule for the same resource.
New format for configuring ACL rules
authorization:
  type: simple
  acls:
    - resource:
        type: topic
        name: my-topic
      operations:
        - Read
        - Describe
        - Create
        - Write
The operation property for the old configuration format is deprecated, but still supported.
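In context, a complete KafkaUser resource using the new operations property might look like the following sketch (the user, cluster, and topic names are illustrative):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: my-user
  labels:
    # Must match the name of the Kafka cluster
    strimzi.io/cluster: my-cluster
spec:
  authentication:
    type: tls
  authorization:
    type: simple
    acls:
      # One rule grants all four operations on the topic
      - resource:
          type: topic
          name: my-topic
        operations:
          - Read
          - Describe
          - Create
          - Write
```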
1.6. New cluster-ip internal listener type
Listeners are used for client connections to Kafka brokers. They are configured in the Kafka resource using .spec.kafka.listeners properties.
A new cluster-ip type of internal listener exposes a Kafka cluster based on per-broker ClusterIP services.
Example cluster-ip listener configuration
#...
spec:
  kafka:
    #...
    listeners:
      - name: external-cluster-ip
        type: cluster-ip
        tls: false
        port: 9096
#...
This is a useful option when you can’t route through the headless service or you wish to incorporate a custom access mechanism. For example, you might use this listener when building your own type of external listener for a specific Ingress controller or the Kubernetes Gateway API.
1.7. Cluster Operator leader election to run multiple replicas
Use leader election to run multiple parallel replicas of the Cluster Operator. One replica is elected as the active leader and operates the deployed resources. The other replicas run in standby mode. Additional replicas provide high availability, safeguarding against disruption caused by major failure. This is especially important since the introduction of StrimziPodSets, whereby AMQ Streams itself handles the creation and management of pods for Kafka clusters, and the Cluster Operator is responsible for restarting the pods.
To enable leader election, the STRIMZI_LEADER_ELECTION_ENABLED environment variable for the Cluster Operator must be set to true (the default). The environment variable is set, along with related environment variables, in the Deployment resource that is used to deploy the Cluster Operator. By default, AMQ Streams runs with a single Cluster Operator replica that is always the leader replica. To add more replicas, you update the spec.replicas value in the Deployment resource.
Deployment configuration for Cluster Operator replicas and leader election
apiVersion: apps/v1
kind: Deployment
metadata:
  name: strimzi-cluster-operator
  labels:
    app: strimzi
spec:
  replicas: 1
  # ...
  template:
    # ...
    spec:
      # ...
      containers:
        - name: strimzi-cluster-operator
          image: registry.redhat.io/amq7/amq-streams-rhel8-operator:2.3.0
          # ...
          env:
            # ...
            - name: STRIMZI_LEADER_ELECTION_ENABLED
              value: "true"
            - name: STRIMZI_LEADER_ELECTION_LEASE_NAME
              value: "strimzi-cluster-operator"
            - name: STRIMZI_LEADER_ELECTION_LEASE_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: STRIMZI_LEADER_ELECTION_IDENTITY
              valueFrom:
                fieldRef:
                  # ...
See Running multiple Cluster Operator replicas with leader election and Leader election environment variables.
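Because leader election is coordinated through a Kubernetes Lease resource, named by STRIMZI_LEADER_ELECTION_LEASE_NAME, you can check which replica currently holds leadership by inspecting that Lease. A sketch, assuming the default lease name and the oc CLI:

```shell
# The holderIdentity field names the pod that is currently the leader
oc get lease strimzi-cluster-operator -o jsonpath='{.spec.holderIdentity}'
```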
1.8. Support for IBM Z and LinuxONE architecture
AMQ Streams 2.3 is enabled to run on IBM Z and LinuxONE s390x architecture.
Support for IBM Z and LinuxONE applies to AMQ Streams running with Kafka on OpenShift Container Platform 4.10 and later.
1.8.1. Requirements for IBM Z and LinuxONE
- OpenShift Container Platform 4.10 and later
1.8.2. Unsupported on IBM Z and LinuxONE
- AMQ Streams on disconnected OpenShift Container Platform environments
- AMQ Streams OPA integration
1.9. Support for IBM Power architecture
AMQ Streams 2.3 is enabled to run on IBM Power ppc64le architecture.
Support for IBM Power applies to AMQ Streams running with Kafka on OpenShift Container Platform 4.9 and later.
1.9.1. Requirements for IBM Power
- OpenShift Container Platform 4.9 and later
1.9.2. Unsupported on IBM Power
- AMQ Streams on disconnected OpenShift Container Platform environments
1.10. Red Hat build of Debezium for change data capture
The Red Hat build of Debezium is a distributed change data capture platform. It captures row-level changes in databases, creates change event records, and streams the records to Kafka topics. Debezium is built on Apache Kafka. You can deploy and integrate the Red Hat build of Debezium with AMQ Streams. Following a deployment of AMQ Streams, you deploy Debezium as a connector configuration through Kafka Connect. Debezium passes change event records to AMQ Streams on OpenShift. Applications can read these change event streams and access the change events in the order in which they occurred.
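As an illustrative sketch only, a Debezium connector deployed through Kafka Connect can be declared as a KafkaConnector resource like the following. The names and connection settings are placeholders, and the exact configuration properties depend on the connector and the Debezium version; see the Debezium documentation for the supported properties.

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: my-mysql-connector
  labels:
    # Must match the name of your KafkaConnect cluster
    strimzi.io/cluster: my-connect-cluster
spec:
  class: io.debezium.connector.mysql.MySqlConnector
  tasksMax: 1
  config:
    # Placeholder connection settings for the source database
    database.hostname: mysql.my-namespace.svc
    database.port: 3306
    database.user: debezium
    database.password: changeme
    # Additional version-specific properties, such as the topic or server
    # name prefix, are described in the Debezium connector documentation
```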
Debezium has multiple uses, including:
- Data replication
- Updating caches and search indexes
- Simplifying monolithic applications
- Data integration
- Enabling streaming queries
Debezium provides connectors (based on Kafka Connect) for the following common databases:
- Db2
- MongoDB
- MySQL
- PostgreSQL
- SQL Server
For more information on deploying Debezium with AMQ Streams, refer to the Red Hat build of Debezium documentation.
1.11. Red Hat build of Apicurio Registry for schema validation
You can use the Red Hat build of Apicurio Registry as a centralized store of service schemas for data streaming. For Kafka, you can use the Red Hat build of Apicurio Registry to store Apache Avro or JSON schemas.
Apicurio Registry provides a REST API and a Java REST client to register and query the schemas from client applications through server-side endpoints.
Using Apicurio Registry decouples the process of managing schemas from the configuration of client applications. You enable an application to use a schema from the registry by specifying its URL in the client code.
For example, schemas to serialize and deserialize messages can be stored in the registry and then referenced from the applications that use them, ensuring that the messages they send and receive are compatible with those schemas.
Kafka client applications can push schemas to or pull schemas from Apicurio Registry at runtime.
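For instance, a Kafka producer that serializes Avro messages against the registry might be configured with properties along these lines. The serializer class and property names follow the Apicurio Registry 2.x Java serdes, and the registry URL is a placeholder; check the Apicurio Registry documentation for your version.

```properties
key.serializer=org.apache.kafka.common.serialization.StringSerializer
# Apicurio Avro serializer resolves and registers schemas via the registry
value.serializer=io.apicurio.registry.serde.avro.AvroKafkaSerializer
# URL of the Apicurio Registry REST API (placeholder)
apicurio.registry.url=http://apicurio-registry.my-namespace.svc:8080/apis/registry/v2
# Automatically register the schema if it is not already present
apicurio.registry.auto-register=true
```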
For more information on using the Red Hat build of Apicurio Registry with AMQ Streams, refer to the Red Hat build of Apicurio Registry documentation.