Chapter 2. Enhancements


The enhancements added in this release are outlined below.

2.1. Kafka enhancements

For an overview of the enhancements introduced with the Kafka version supported by this release, see the Apache Kafka release notes.

2.2. Kafka Bridge enhancements

This release includes the following enhancements to the Kafka Bridge component of AMQ Streams.

Retrieve partitions and metadata

The Kafka Bridge now supports the following operations:

  • Retrieve a list of partitions for a given topic:

    GET /topics/{topicname}/partitions
  • Retrieve metadata for a given partition, such as the partition ID, the leader broker, and the number of replicas:

    GET /topics/{topicname}/partitions/{partitionid}

See the Kafka Bridge API reference.
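As a sketch, the request paths for these two operations can be composed as follows; the bridge address, topic name, and partition ID are placeholder values:

```shell
# Placeholder values for a local Kafka Bridge deployment
BRIDGE=http://localhost:8080
TOPIC=my-topic
PARTITION=2

# List the partitions for the topic, for example with:
#   curl -X GET "${BRIDGE}/topics/${TOPIC}/partitions"
echo "GET ${BRIDGE}/topics/${TOPIC}/partitions"

# Retrieve metadata for a single partition, for example with:
#   curl -X GET "${BRIDGE}/topics/${TOPIC}/partitions/${PARTITION}"
echo "GET ${BRIDGE}/topics/${TOPIC}/partitions/${PARTITION}"
```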

Support for Kafka message headers

Messages sent using the Kafka Bridge can now include Kafka message headers.

In a POST request to the /topics endpoint, you can optionally specify headers in the message payload, which is contained in the request body. Message header values must be in binary format and encoded as Base64.
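For example, a header value can be prepared with the standard base64 tool before it is added to the request body; the header value here is illustrative:

```shell
# Encode an illustrative header value as Base64 for use in the
# "headers" array of the request body.
printf '%s' 'my-header-value' | base64
# → bXktaGVhZGVyLXZhbHVl
```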

Example request with Kafka message header

curl -X POST \
  http://localhost:8080/topics/my-topic \
  -H 'content-type: application/vnd.kafka.json.v2+json' \
  -d '{
    "records": [
        {
            "key": "my-key",
            "value": "sales-lead-0001"
            "partition": 2
            "headers": [
              {
                "key": "key1",
                "value": "QXBhY2hlIEthZmthIGlzIHRoZSBib21iIQ=="
              }
            ]
        },
    ]
}'

See Requests to the Kafka Bridge

2.3. OAuth 2.0 authentication and authorization

This release includes the following enhancements to OAuth 2.0 token-based authentication and authorization.

Session re-authentication

OAuth 2.0 authentication in AMQ Streams now supports session re-authentication for Kafka brokers. This defines the maximum duration of an authenticated OAuth 2.0 session between a Kafka client and a Kafka broker. Session re-authentication is supported for both types of token validation: fast local JWT and introspection endpoint.

To configure session re-authentication, use the new maxSecondsWithoutReauthentication option in the OAuth 2.0 configuration for Kafka brokers.

For a specific listener, maxSecondsWithoutReauthentication allows you to:

  • Enable session re-authentication
  • Set the maximum duration, in seconds, of an authenticated session between a Kafka client and a Kafka broker

Example configuration for session re-authentication after 1 hour

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
spec:
  kafka:
    listeners:
      #...
      - name: tls
        port: 9093
        type: internal
        tls: true
        authentication:
          type: oauth
          maxSecondsWithoutReauthentication: 3600
          # ...

An authenticated session is closed if it exceeds the configured maxSecondsWithoutReauthentication, or if the access token expiry time is reached. The client must then obtain a new access token from the authorization server and use it to re-authenticate to the Kafka broker, which establishes a new authenticated session over the existing connection.

Once re-authentication is required, any operation that the client attempts, other than re-authentication, causes the broker to terminate the connection.

See: Session re-authentication for Kafka brokers and Configuring OAuth 2.0 support for Kafka brokers.

JWKS keys refresh interval

When configuring Kafka brokers to use fast local JWT token validation, you can now set the jwksMinRefreshPauseSeconds option in the external listener configuration. This defines the minimum interval between attempts by the broker to refresh JSON Web Key Set (JWKS) public keys issued by the authorization server.

With this release, the Kafka broker will attempt to refresh JWKS keys immediately, without waiting for the regular refresh schedule, if it detects an unknown signing key.

Example configuration for a 2-minute pause between attempts to refresh JWKS keys

    listeners:
      #...
      - name: external2
        port: 9095
        type: loadbalancer
        tls: false
        authentication:
          type: oauth
          validIssuerUri: https://<auth-server-address>/auth/realms/external
          jwksEndpointUri: https://<auth-server-address>/auth/realms/external/protocol/openid-connect/certs
          userNameClaim: preferred_username
          tlsTrustedCertificates:
          - secretName: oauth-server-cert
            certificate: ca.crt
          disableTlsHostnameVerification: true
          jwksExpirySeconds: 360
          jwksRefreshSeconds: 300
          jwksMinRefreshPauseSeconds: 120
          enableECDSA: "true"

The refresh schedule for JWKS keys is set in the jwksRefreshSeconds option. When an unknown signing key is encountered, a JWKS keys refresh is scheduled outside of the refresh schedule. The refresh will not start until the time since the last refresh reaches the interval specified in jwksMinRefreshPauseSeconds.
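To illustrate how the minimum pause interacts with an unscheduled refresh, assume jwksMinRefreshPauseSeconds is 120 and an unknown signing key is detected 60 seconds after the last refresh. This is a sketch of the timing rule with illustrative values, not broker code:

```shell
# Illustrative values: minimum pause and time since the last JWKS refresh
jwks_min_refresh_pause_seconds=120
seconds_since_last_refresh=60   # unknown signing key detected here

# The unscheduled refresh waits until the minimum pause has elapsed
if [ "$seconds_since_last_refresh" -ge "$jwks_min_refresh_pause_seconds" ]; then
  wait_seconds=0
else
  wait_seconds=$(( jwks_min_refresh_pause_seconds - seconds_since_last_refresh ))
fi
echo "unscheduled JWKS refresh starts in ${wait_seconds}s"
# → unscheduled JWKS refresh starts in 60s
```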

jwksMinRefreshPauseSeconds has a default value of 1.

See Configuring OAuth 2.0 support for Kafka brokers.

Refreshing grants from Red Hat Single Sign-On

New configuration options have been added for OAuth 2.0 token-based authorization through Red Hat Single Sign-On. When configuring Kafka brokers, you can now define the following options related to refreshing grants from Red Hat SSO Authorization Services:

  • grantsRefreshPeriodSeconds: The time between two consecutive grants refresh runs. The default value is 60. If set to 0 or less, refreshing of grants is disabled.
  • grantsRefreshPoolSize: The number of threads that can fetch grants for the active session in parallel. The default value is 5.
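As a sketch, these options sit alongside the existing keycloak authorization settings in the Kafka broker configuration; the token endpoint address is a placeholder:

```yaml
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
spec:
  kafka:
    authorization:
      type: keycloak
      tokenEndpointUri: https://<auth-server-address>/auth/realms/external/protocol/openid-connect/token
      clientId: kafka
      grantsRefreshPeriodSeconds: 120
      grantsRefreshPoolSize: 10
    #...
```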

See Using OAuth 2.0 token-based authorization and Configuring OAuth 2.0 authorization support.

Detection of permission changes in Red Hat Single Sign-On

With this release, the keycloak (Red Hat SSO) authorizer regularly checks for changes to permissions for active sessions. Changes to users and to permissions management are now detected in real time.

2.4. Metrics for Kafka Bridge and Cruise Control

You can now add metrics configuration to Kafka Bridge and Cruise Control.

Example metrics files for Kafka Bridge and Cruise Control are provided with AMQ Streams, including:

  • Custom resource YAML files with metrics configuration
  • Grafana dashboard JSON files

With the metrics configuration deployed, and Prometheus and Grafana set up, you can use the example Grafana dashboards for monitoring.
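For example, based on the provided kafka-bridge-metrics.yaml example file, metrics are enabled on the KafkaBridge resource with a single property (the resource name here is illustrative):

```yaml
apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaBridge
metadata:
  name: my-bridge
spec:
  #...
  enableMetrics: true
```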

Example metrics files provided with AMQ Streams

metrics
├── grafana-dashboards
│   ├── strimzi-cruise-control.json
│   ├── strimzi-kafka-bridge.json
│   ├── strimzi-kafka-connect.json
│   ├── strimzi-kafka-exporter.json
│   ├── strimzi-kafka-mirror-maker-2.json
│   ├── strimzi-kafka.json
│   ├── strimzi-operators.json
│   └── strimzi-zookeeper.json
├── grafana-install
│   └── grafana.yaml
├── prometheus-additional-properties
│   └── prometheus-additional.yaml
├── prometheus-alertmanager-config
│   └── alert-manager-config.yaml
├── prometheus-install
│   ├── alert-manager.yaml
│   ├── prometheus-rules.yaml
│   ├── prometheus.yaml
│   └── strimzi-pod-monitor.yaml
├── kafka-bridge-metrics.yaml
├── kafka-connect-metrics.yaml
├── kafka-cruise-control-metrics.yaml
├── kafka-metrics.yaml
└── kafka-mirror-maker-2-metrics.yaml

Table 2.1. Example custom resources with metrics configuration

Component              Custom resource                    Example YAML file
---------------------  ---------------------------------  ---------------------------------
Kafka and ZooKeeper    Kafka                              kafka-metrics.yaml
Kafka Connect          KafkaConnect and KafkaConnectS2I   kafka-connect-metrics.yaml
Kafka MirrorMaker 2.0  KafkaMirrorMaker2                  kafka-mirror-maker-2-metrics.yaml
Kafka Bridge           KafkaBridge                        kafka-bridge-metrics.yaml
Cruise Control         Kafka                              kafka-cruise-control-metrics.yaml

See Introducing Metrics to Kafka.

Note

The Prometheus server is not supported as part of the AMQ Streams distribution. However, the Prometheus endpoint and JMX exporter used to expose the metrics are supported.

2.5. Dynamic updates for logging changes

With this release, changing the logging levels, both inline and external, of most custom resources no longer triggers rolling updates to the Kafka cluster. Logging changes are now applied dynamically (without a restart).

This enhancement applies to the following resources:

  • Kafka clusters
  • Kafka Connect and Kafka Connect S2I
  • Kafka MirrorMaker 2.0
  • Kafka Bridge

It does not apply to Kafka MirrorMaker or Cruise Control.

If you use external logging via a ConfigMap, a rolling update is still triggered when you change a logging appender. For example:

log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
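For reference, a resource points at such a ConfigMap through its logging property; a minimal sketch, assuming a ConfigMap named my-kafka-logging:

```yaml
spec:
  kafka:
    logging:
      type: external
      name: my-kafka-logging  # ConfigMap containing the log4j properties
```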

See External logging and the Deployment configuration chapter of the Using AMQ Streams on OpenShift guide.

2.6. PodMonitors used for metrics scraping

The way that Prometheus metrics are scraped from pods (for Kafka, ZooKeeper, Kafka Connect, and others) has changed in this release.

Metrics are now scraped from pods by PodMonitors only, defined in strimzi-pod-monitor.yaml. Previously, this was performed by ServiceMonitors and PodMonitors. ServiceMonitors have been removed from AMQ Streams in this release.

You need to upgrade your monitoring stack to use PodMonitors as described in Upgrading your monitoring stack to use PodMonitors, below.

As a result of this change, the following elements have been removed from services related to Kafka and ZooKeeper:

  • The tcp-prometheus monitoring port (port 9404)
  • Prometheus annotations

This change applies to the following services:

  • cluster-name-zookeeper-client
  • cluster-name-kafka-brokers

To add a Prometheus annotation, you should now use the template property in the relevant AMQ Streams custom resource, as described in Customizing OpenShift resources.
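For example, annotations can be added through the pod template. This is a sketch using the conventional Prometheus annotation keys, which are shown for illustration only:

```yaml
spec:
  kafka:
    template:
      pod:
        metadata:
          annotations:
            prometheus.io/scrape: "true"
            prometheus.io/port: "9404"
```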

Upgrading your monitoring stack to use PodMonitors

To avoid an interruption to the monitoring of your Kafka cluster, perform the following steps before upgrading to AMQ Streams 1.6.

  1. Using the new AMQ Streams 1.6 installation artifacts, apply the strimzi-pod-monitor.yaml file to your AMQ Streams 1.5 cluster:

    oc apply -f examples/metrics/prometheus-install/strimzi-pod-monitor.yaml
  2. Delete the existing ServiceMonitor resources from your AMQ Streams 1.5 cluster.
  3. Delete the Secret named additional-scrape-configs.
  4. Create a new Secret, also named additional-scrape-configs, from the prometheus-additional.yaml file provided in the AMQ Streams 1.6 installation artifacts.
  5. Check that the Prometheus targets for the Prometheus user interface are up and running again.
  6. Proceed with the upgrade to AMQ Streams 1.6, starting with Upgrading the Cluster Operator.

After completing the upgrade to AMQ Streams 1.6, you can load the example Grafana dashboards for AMQ Streams 1.6.

See Introducing Metrics to Kafka.

2.7. Generic listener configuration

A GenericKafkaListener schema is introduced in this release.

The schema is for the configuration of Kafka listeners in a Kafka resource, and replaces the KafkaListeners schema, which is deprecated.

With the GenericKafkaListener schema, you can configure as many listeners as required, as long as their names and ports are unique. The listeners configuration is defined as an array, but the deprecated format is also supported.

See GenericKafkaListener schema reference

Updating listeners to the new configuration

The KafkaListeners schema uses sub-properties for plain, tls and external listeners, with fixed ports for each. After a Kafka upgrade, you can convert listeners configured using the KafkaListeners schema into the format of the GenericKafkaListener schema.

For example, if you are currently using the following configuration in your Kafka configuration:

Old listener configuration

listeners:
  plain:
    # ...
  tls:
    # ...
  external:
    type: loadbalancer
    # ...

Convert the listeners to the new format as follows:

New listener configuration

listeners:
  #...
  - name: plain
    port: 9092
    type: internal
    tls: false 1
  - name: tls
    port: 9093
    type: internal
    tls: true
  - name: external
    port: 9094
    type: EXTERNAL-LISTENER-TYPE 2
    tls: true

1
The tls property is now required for all listeners.
2
Options: ingress, loadbalancer, nodeport, route.

Make sure to use the exact names and port numbers shown.

For any additional configuration or overrides properties used with the old format, you need to update them to the new format.

Changes introduced to the listener configuration:

  • overrides is merged with the configuration section
  • dnsAnnotations has been renamed annotations
  • preferredAddressType has been renamed preferredNodePortAddressType
  • address has been renamed alternativeNames
  • loadBalancerSourceRanges and externalTrafficPolicy move to the listener configuration from the now deprecated template

All listeners now support configuring the advertised hostname and port.

For example, this configuration:

Old additional listener configuration

listeners:
  external:
    type: loadbalancer
    authentication:
      type: tls
    overrides:
      bootstrap:
        dnsAnnotations:
          #...

Changes to:

New additional listener configuration

listeners:
  #...
  - name: external
    port: 9094
    type: loadbalancer
    tls: true
    authentication:
      type: tls
    configuration:
      bootstrap:
        annotations:
          #...

Important

The name and port numbers shown in the new listener configuration must be used for backwards compatibility. Using any other values will cause renaming of the Kafka listeners and Kubernetes services.

2.8. MirrorMaker 2.0 topic renaming update

The MirrorMaker 2.0 architecture supports bidirectional replication by automatically renaming remote topics to represent the source cluster. The name of the originating cluster is prepended to the name of the topic.
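With the default replication policy, the remote topic name is composed of the source cluster alias, the separator (a period by default), and the original topic name; a sketch with illustrative names:

```shell
# Default remote-topic naming: <source-cluster-alias><separator><topic>
SOURCE_ALIAS=my-cluster-source
SEPARATOR=.
TOPIC=my-topic
echo "${SOURCE_ALIAS}${SEPARATOR}${TOPIC}"
# → my-cluster-source.my-topic
```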

Optionally, you can now override automatic renaming by adding IdentityReplicationPolicy to the source connector configuration. With this configuration applied, topics retain their original names.

apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaMirrorMaker2
metadata:
  name: my-mirror-maker2
spec:
  #...
  mirrors:
  - sourceCluster: "my-cluster-source"
    targetCluster: "my-cluster-target"
    sourceConnector:
      config:
        replication.factor: 1
        offset-syncs.topic.replication.factor: 1
        sync.topic.acls.enabled: "false"
        replication.policy.separator: ""
        replication.policy.class: "io.strimzi.kafka.connect.mirror.IdentityReplicationPolicy" 1
  #...

1
Adds a policy that overrides the automatic renaming of remote topics. Instead of prepending the name with the name of the source cluster, the topic retains its original name.

The override is useful, for example, in an active/passive cluster configuration where you want to make backups or migrate data to another cluster. In either situation, you might not want automatic renaming of remote topics.

See Kafka MirrorMaker 2.0 configuration

2.9. Support for hostAliases

It is now possible to configure hostAliases when customizing a deployment of Kubernetes templates and pods.

Example hostAliases configuration

apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaConnect
#...
spec:
  # ...
  template:
    pod:
      hostAliases:
      - ip: "192.168.1.86"
        hostnames:
        - "my-host-1"
        - "my-host-2"
      #...

If a list of hosts and IP addresses is specified, they are injected into the /etc/hosts file of the pod. This is especially useful for Kafka Connect or MirrorMaker when users also need connections to hosts outside of the cluster.
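With the example configuration above, the entry injected into the pod's /etc/hosts would look something like this (exact formatting may vary):

```
192.168.1.86    my-host-1 my-host-2
```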

See PodTemplate schema reference

2.10. Reconciled resource metric

A new operator metric provides information about the status of a specified resource: whether or not it was reconciled successfully.

Reconciled resource metric definition

strimzi_resource_state{kind="Kafka", name="my-cluster", resource-namespace="my-kafka-namespace"}
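Assuming the metric reports 1 for a successfully reconciled resource and 0 otherwise, an illustrative Prometheus alerting rule (the group and alert names here are hypothetical) might look like:

```yaml
groups:
- name: strimzi-resource-state
  rules:
  - alert: ResourceNotReconciled
    expr: strimzi_resource_state == 0
    for: 5m
    annotations:
      summary: "{{ $labels.kind }}/{{ $labels.name }} has not been reconciled successfully"
```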

2.11. Secret metadata for KafkaUser

You can now use template properties for the Secret created by the User Operator. Using KafkaUserTemplate, you can use labels and annotations to configure metadata that defines how the Secret is generated for the KafkaUser resource.

An example showing the KafkaUserTemplate

apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaUser
metadata:
  name: my-user
  labels:
    strimzi.io/cluster: my-cluster
spec:
  authentication:
    type: tls
  template:
    secret:
      metadata:
        labels:
          label1: value1
        annotations:
          anno1: value1
  # ...

See KafkaUserTemplate schema reference

2.12. Additional tools in container images

The following tools have been added to the AMQ Streams container images:

  • jstack
  • jcmd
  • jmap
  • netstat (net-tools)
  • lsof

2.13. Removal of Kafka Exporter service

The Kafka Exporter service has been removed from AMQ Streams. This service is no longer required because Prometheus now scrapes the Kafka Exporter metrics directly from the Kafka Exporter pods through the PodMonitor declaration.

See Introducing Metrics to Kafka.

Deprecation of the --zookeeper option

The --zookeeper option is deprecated in the following Kafka administrative tools:

  • bin/kafka-configs.sh
  • bin/kafka-leader-election.sh
  • bin/kafka-topics.sh

When using these tools, you should now use the --bootstrap-server option to specify the Kafka broker to connect to. For example:

oc exec BROKER-POD -c kafka -it -- \
  /bin/kafka-topics.sh --bootstrap-server localhost:9092 --list

Although the --zookeeper option still works, it will be removed from all the administrative tools in a future Kafka release. This is part of ongoing work in the Apache Kafka project to remove Kafka’s dependency on ZooKeeper.
