Chapter 1. Features


AMQ Streams 2.4 introduces the features described in this section.

AMQ Streams 2.4 on OpenShift is based on Apache Kafka 3.4.0 and Strimzi 0.34.x.

Note

To view all the enhancements and bugs that are resolved in this release, see the AMQ Streams Jira project.

1.1. OpenShift Container Platform support

AMQ Streams 2.4 is supported on OpenShift Container Platform 4.10 to 4.13.

For more information about the supported platform versions, see the AMQ Streams Supported Configurations.

1.2. Kafka 3.4.0 support

AMQ Streams now supports and uses Apache Kafka version 3.4.0. Only Kafka distributions built by Red Hat are supported.

You must upgrade the Cluster Operator to AMQ Streams version 2.4 before you can upgrade brokers and client applications to Kafka 3.4.0. For upgrade instructions, see Upgrading AMQ Streams.

Refer to the Kafka 3.4.0 Release Notes for additional information.

Note

Kafka 3.3.x is supported only for the purpose of upgrading to AMQ Streams 2.4.

For more information on supported versions, see the AMQ Streams Component Details.

Note

Kafka 3.4.0 provides access to KRaft mode, where Kafka runs without ZooKeeper by using the Raft protocol. KRaft mode is available as a Developer Preview.
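
As a Developer Preview, KRaft mode is enabled through a feature gate, following the same STRIMZI_FEATURE_GATES pattern shown later in this chapter. The following is a minimal sketch based on the UseKRaft feature gate available in this release:

Enabling the UseKRaft feature gate

env:
  - name: STRIMZI_FEATURE_GATES
    value: +UseKRaft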

1.3. Supporting the v1beta2 API version

The v1beta2 API version for all custom resources was introduced with AMQ Streams 1.7. For AMQ Streams 1.8, v1alpha1 and v1beta1 API versions were removed from all AMQ Streams custom resources apart from KafkaTopic and KafkaUser.

Upgrading the custom resources to v1beta2 prepares AMQ Streams for the move to Kubernetes CRD v1, which is required for Kubernetes v1.22.

If you are upgrading from an AMQ Streams version prior to version 1.7:

  1. Upgrade to AMQ Streams 1.7.
  2. Convert the custom resources to v1beta2.
  3. Upgrade to AMQ Streams 1.8.

Important

You must upgrade your custom resources to use API version v1beta2 before upgrading to AMQ Streams version 2.4.

1.3.1. Upgrading custom resources to v1beta2

To support the upgrade of custom resources to v1beta2, AMQ Streams provides an API conversion tool, which you can download from the AMQ Streams software downloads page.

You perform the custom resource upgrade in two steps.

Step one: Convert the format of custom resources

Using the API conversion tool, you can convert the format of your custom resources into a format applicable to v1beta2 in one of two ways:

  • Converting the YAML files that describe the configuration for AMQ Streams custom resources
  • Converting AMQ Streams custom resources directly in the cluster

Alternatively, you can manually convert each custom resource into a format applicable to v1beta2. Instructions for manually converting custom resources are included in the documentation.
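
For simple resources, the manual conversion is primarily a change of apiVersion, as in the following sketch of a KafkaTopic before and after conversion (the resource names are hypothetical). Some resource types, such as Kafka, also require deprecated properties to be moved, as described in the documentation.

Example manual conversion of a KafkaTopic (sketch)

# Before conversion
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaTopic
metadata:
  name: my-topic
  labels:
    strimzi.io/cluster: my-cluster
spec:
  partitions: 3
  replicas: 2

# After conversion: only the apiVersion changes for this resource type
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic
  labels:
    strimzi.io/cluster: my-cluster
spec:
  partitions: 3
  replicas: 2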

Step two: Upgrade CRDs to v1beta2

Next, using the API conversion tool with the crd-upgrade command, you must set v1beta2 as the storage API version in your CRDs. You cannot perform this step manually.
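
The following sketch illustrates the intended result: after the crd-upgrade step, v1beta2 is marked as the storage version in each CRD. The fragment is abridged, and the exact contents of your CRDs may differ.

Example CRD fragment after the crd-upgrade step (sketch)

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: kafkas.kafka.strimzi.io
spec:
  # ...
  versions:
    - name: v1beta2
      served: true
      storage: true    # v1beta2 is now the storage API version
    - name: v1beta1
      served: true
      storage: false
  # ...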

For more information, see Upgrading from an AMQ Streams version earlier than 1.7.

1.4. New StableConnectIdentities feature gate to manage pods

This release introduces the StableConnectIdentities feature gate, which is disabled by default. The feature gate is at an alpha level of maturity and should be treated as a developer preview.

The StableConnectIdentities feature gate controls the use of StrimziPodSet resources to manage Kafka Connect and Kafka MirrorMaker 2 pods instead of using OpenShift Deployment resources. This helps to minimize the number of rebalances of connector tasks.

To enable the StableConnectIdentities feature gate, specify +StableConnectIdentities as a value for the STRIMZI_FEATURE_GATES environment variable in the Cluster Operator configuration.

Enabling the StableConnectIdentities feature gate

env:
  - name: STRIMZI_FEATURE_GATES
    value: +StableConnectIdentities

See StableConnectIdentities feature gate.

1.5. Improved FIPS support

Federal Information Processing Standards (FIPS) are standards for computer security and interoperability. From this release, AMQ Streams can run on FIPS-enabled OpenShift clusters without special configuration and automatically switches to FIPS mode.

Prior to this release, running AMQ Streams on FIPS-enabled OpenShift clusters was possible only by disabling FIPS mode using the FIPS_MODE environment variable in the deployment configuration for the Cluster Operator. If you are currently running AMQ Streams on a FIPS-enabled OpenShift cluster with FIPS_MODE set to disabled, you can enable FIPS mode when upgrading to AMQ Streams 2.4 to be FIPS-compliant.
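
For reference, the earlier workaround set the FIPS_MODE environment variable in the Cluster Operator deployment, as in this sketch; to enable FIPS mode, remove the variable or set it to enabled.

Example configuration disabling FIPS mode (prior workaround)

env:
  - name: FIPS_MODE
    value: disabled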

Note the following when running AMQ Streams in FIPS mode:

  • SCRAM-SHA-512 passwords need to be at least 32 characters long. If you have a Kafka cluster with custom configuration that uses passwords shorter than 32 characters, you need to update your configuration.
  • If you are using FIPS-enabled OpenShift clusters, you may experience higher memory consumption compared to regular OpenShift clusters. To avoid issues, increase the memory request to at least 512Mi, as shown in the sketch after this list.
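
The following is a minimal sketch of increasing the memory request in the Cluster Operator deployment; the container name is an assumption based on the default installation files.

Example memory request for the Cluster Operator (sketch, abridged Deployment fragment)

spec:
  template:
    spec:
      containers:
        - name: strimzi-cluster-operator   # assumed default container name
          resources:
            requests:
              memory: 512Mi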

1.6. Automatic restarts for connectors

A new configuration property enables automatic restarts of failed connectors and tasks for Kafka Connect and Kafka MirrorMaker 2. If the autoRestart property is enabled, up to seven restart attempts are made, after which you must restart the connector manually.

For Kafka Connect connectors, you configure the autoRestart property in the KafkaConnector custom resource.

Example Kafka Connect configuration with automatic restarts enabled

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: my-source-connector
  labels:
    strimzi.io/cluster: my-connect-cluster
spec:
  class: org.apache.kafka.connect.file.FileStreamSourceConnector
  tasksMax: 2
  autoRestart:
    enabled: true
  config:
    file: "/opt/kafka/LICENSE"
    topic: my-topic
    # ...

For MirrorMaker 2 connectors, you configure the autoRestart property in the KafkaMirrorMaker2 custom resource. You can enable automatic restarts for each of the internal connectors used by MirrorMaker 2: sourceConnector, heartbeatConnector, and checkpointConnector.

Example MirrorMaker 2 configuration with automatic restarts enabled

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaMirrorMaker2
metadata:
  name: my-mm2-cluster
spec:
  mirrors:
  - sourceConnector:
      autoRestart:
        enabled: true
      # ...
    heartbeatConnector:
      autoRestart:
        enabled: true
      # ...
    checkpointConnector:
      autoRestart:
        enabled: true
      # ...

See AutoRestart schema properties.

1.7. Support for IBM Z and LinuxONE architecture

AMQ Streams 2.4 is enabled to run on IBM Z and LinuxONE s390x architecture.

Support for IBM Z and LinuxONE applies to AMQ Streams running with Kafka on OpenShift Container Platform 4.10 and later.

1.7.1. Requirements for IBM Z and LinuxONE

  • OpenShift Container Platform 4.10 and later

1.7.2. Unsupported on IBM Z and LinuxONE

  • AMQ Streams on disconnected OpenShift Container Platform environments
  • AMQ Streams OPA integration
  • FIPS mode

1.8. Support for IBM Power architecture

AMQ Streams 2.4 is enabled to run on IBM Power ppc64le architecture.

Support for IBM Power applies to AMQ Streams running with Kafka on OpenShift Container Platform 4.10 and later.

1.8.1. Requirements for IBM Power

  • OpenShift Container Platform 4.10 and later

1.8.2. Unsupported on IBM Power

  • AMQ Streams on disconnected OpenShift Container Platform environments

1.9. Red Hat build of Debezium for change data capture

The Red Hat build of Debezium is a distributed change data capture platform. It captures row-level changes in databases, creates change event records, and streams the records to Kafka topics. Debezium is built on Apache Kafka.

You can deploy and integrate the Red Hat build of Debezium with AMQ Streams. Following a deployment of AMQ Streams, you deploy Debezium as a connector configuration through Kafka Connect. Debezium passes change event records to AMQ Streams on OpenShift. Applications can read these change event streams and access the change events in the order in which they occurred.
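
For example, a Debezium connector can be configured through the same KafkaConnector resource shown earlier in this chapter. The following sketch assumes the Debezium PostgreSQL connector and hypothetical connection details; configuration property names vary between Debezium versions, so check the Red Hat build of Debezium documentation for your release.

Example Debezium connector configuration (sketch)

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: my-debezium-connector            # hypothetical name
  labels:
    strimzi.io/cluster: my-connect-cluster
spec:
  class: io.debezium.connector.postgresql.PostgresConnector
  tasksMax: 1
  config:
    # hypothetical connection details
    database.hostname: postgres
    database.port: 5432
    database.user: debezium
    database.password: dbz
    database.dbname: inventory
    topic.prefix: inventory              # property names depend on the Debezium version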

Debezium has multiple uses, including:

  • Data replication
  • Updating caches and search indexes
  • Simplifying monolithic applications
  • Data integration
  • Enabling streaming queries

Debezium provides connectors (based on Kafka Connect) for the following common databases:

  • Db2
  • MongoDB
  • MySQL
  • PostgreSQL
  • SQL Server

For more information on deploying Debezium with AMQ Streams, refer to the Red Hat build of Debezium documentation.

1.10. Red Hat build of Apicurio Registry for schema validation

You can use the Red Hat build of Apicurio Registry as a centralized store of service schemas for data streaming. For Kafka, you can use the Red Hat build of Apicurio Registry to store Apache Avro or JSON schemas.

Apicurio Registry provides a REST API and a Java REST client to register and query the schemas from client applications through server-side endpoints.

Using Apicurio Registry decouples the process of managing schemas from the configuration of client applications. You enable an application to use a schema from the registry by specifying the registry URL in the client code.

For example, the schemas used to serialize and deserialize messages can be stored in the registry and then referenced from the applications that use them, ensuring that the messages they send and receive are compatible with those schemas.

Kafka client applications can push or pull their schemas from Apicurio Registry at runtime.
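
As a sketch of a deployment, the Apicurio Registry Operator manages the registry through an ApicurioRegistry custom resource. The following example assumes Kafka-based (kafkasql) storage and a hypothetical bootstrap address; the exact schema depends on the registry version, so check the Red Hat build of Apicurio Registry documentation.

Example ApicurioRegistry custom resource (sketch)

apiVersion: registry.apicur.io/v1
kind: ApicurioRegistry
metadata:
  name: my-registry                      # hypothetical name
spec:
  configuration:
    persistence: kafkasql                # store schemas in a Kafka topic
    kafkasql:
      bootstrapServers: my-cluster-kafka-bootstrap:9092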

For more information on using the Red Hat build of Apicurio Registry with AMQ Streams, refer to the Red Hat build of Apicurio Registry documentation.
