Chapter 4. Enhancements


AMQ Streams 2.5 adds a number of enhancements.

4.1. Kafka 3.5.x enhancements

The AMQ Streams 2.5.x release supports Kafka 3.5.0. Upgrading to the 2.5.2 patch release incorporates the updates and improvements from Kafka 3.5.2.

For an overview of the enhancements introduced with Kafka 3.5.x, refer to the Kafka 3.5.0, Kafka 3.5.1, and Kafka 3.5.2 Release Notes.

4.2. UseStrimziPodSets feature gate moves to GA

The UseStrimziPodSets feature gate has moved to GA, which means it is now permanently enabled and cannot be disabled.

StrimziPodSet resources are now used to manage pods instead of StatefulSet resources. This means that AMQ Streams handles the creation and management of pods instead of OpenShift, providing more control over the functionality.

See UseStrimziPodSets feature gate and Feature gate releases.

4.3. KRaft requires node pool configuration

To deploy a Kafka cluster in KRaft mode, you must now enable the UseKRaft and KafkaNodePools feature gates. KRaft mode is supported only by using KafkaNodePool resources to manage the configuration of Kafka nodes.
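As a minimal sketch, a KafkaNodePool resource for a KRaft deployment might look as follows. The pool name dual-role, the replica count, and the storage settings are assumptions for illustration; the roles property assigns the KRaft controller and broker roles to the nodes in the pool.

Example KafkaNodePool configuration for KRaft mode

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: dual-role # assumed pool name
  labels:
    strimzi.io/cluster: my-cluster # must match the name of the Kafka resource
spec:
  replicas: 3
  roles:
    - controller # KRaft controller role
    - broker
  storage:
    type: jbod
    volumes:
      - id: 0
        type: persistent-claim
        size: 100Gi
        deleteClaim: false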

For more information, see the node pools and KRaft sections of the AMQ Streams documentation.

4.4. OAuth 2.0 support for KRaft mode

KeycloakRBACAuthorizer, the Red Hat Single Sign-On authorizer provided with AMQ Streams, has been replaced with KeycloakAuthorizer. The new authorizer is compatible with AMQ Streams whether you use ZooKeeper cluster management or KRaft mode. As with the previous authorizer, you configure KeycloakAuthorizer on the Kafka brokers to use the Authorization Services REST endpoints provided by Red Hat Single Sign-On. KeycloakRBACAuthorizer can still be used with ZooKeeper cluster management, but you should migrate to the new authorizer.

4.5. OAuth 2.0 configuration properties for grant management

You can now use additional configuration to manage OAuth 2.0 grants from the authorization server.

If you are using Red Hat Single Sign-On for OAuth 2.0 authorization, you can add the following properties to the authorization configuration of your Kafka brokers:

  • grantsMaxIdleTimeSeconds specifies the time in seconds after which an idle grant in the cache can be evicted. The default value is 300.
  • grantsGcPeriodSeconds specifies the time, in seconds, between consecutive runs of a job that cleans stale grants from the cache. The default value is 300.
  • grantsAlwaysLatest controls whether the latest grants are fetched for a new session. When enabled, grants are retrieved from Red Hat Single Sign-On and cached for the user. The default value is false.

Kafka configuration to use OAuth 2.0 authorization

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    authorization:
      type: keycloak
      tokenEndpointUri: https://<auth_server_address>/auth/realms/external/protocol/openid-connect/token
      clientId: kafka
      # ...
      grantsMaxIdleTimeSeconds: 300
      grantsGcPeriodSeconds: 300
      grantsAlwaysLatest: false
    #...

See Configuring OAuth 2.0 authorization support.

4.6. OAuth 2.0 support for JsonPath queries when extracting usernames

To use OAuth 2.0 authentication in a Kafka cluster, you specify listener configuration in the Kafka custom resource with the authentication method oauth. When configuring the listener properties, you can now use a JsonPath query in the userNameClaim and fallbackUserNameClaim properties to extract a username from the authorization server being used. This allows you to extract a username from a token by accessing a specific value within a nested data structure, such as a username contained within a user info structure inside the JSON token.

The following example shows how JsonPath queries are used with the properties when configuring token validation using an introspection endpoint.

Configuring token validation using an introspection endpoint

- name: external
  port: 9094
  type: loadbalancer
  tls: true
  authentication:
    type: oauth
    validIssuerUri: https://<auth-server-address>/auth/realms/external
    introspectionEndpointUri: https://<auth-server-address>/auth/realms/external/protocol/openid-connect/token/introspect
    clientId: kafka-broker
    clientSecret:
      secretName: my-cluster-oauth
      key: clientSecret
    userNameClaim: "['user.info'].['user.id']" 1
    maxSecondsWithoutReauthentication: 3600
    fallbackUserNameClaim: "['client.info'].['client.id']" 2
    fallbackUserNamePrefix: client-account-
    # ...

1
The token claim (or key) that contains the actual username in the token. The username is the principal used to identify the user. The userNameClaim value depends on the authorization server used.
2
An authorization server may not provide a single attribute to identify both regular users and clients. When a client authenticates in its own name, the server might provide a client ID. When a user authenticates using a username and password, to obtain a refresh token or an access token, the server might provide a username attribute in addition to a client ID. Use this fallback option to specify the username claim (attribute) to use if a primary user ID attribute is not available. If required, you can use a JsonPath query to target nested attributes.

See Configuring OAuth 2.0 support for Kafka brokers.

4.7. Added Kafka Exporter support to exclude topics and consumer groups

The Kafka Exporter deployment configuration now includes properties to exclude specified topics and consumer groups from the metrics extracted from Kafka brokers.

You can use the following properties in the Kafka Exporter specification:

  • groupExcludeRegex to exclude specific consumer groups
  • topicExcludeRegex to exclude specific topics

In the following example configuration, the two properties exclude topics and consumer groups that start with the prefix excluded-.

Example configuration for deploying Kafka Exporter

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  # ...
  kafkaExporter:
    image: my-registry.io/my-org/my-exporter-cluster:latest
    groupRegex: ".*"
    topicRegex: ".*"
    groupExcludeRegex: "^excluded-.*"
    topicExcludeRegex: "^excluded-.*"
# ...

See KafkaExporterSpec schema reference.

4.8. Kafka Bridge enhancements for metrics and OpenAPI

The latest release of the Kafka Bridge introduces the following changes:

  • Removes the remote and local labels from HTTP server-related metrics to prevent time series sample growth.
  • No longer counts HTTP server metrics for requests to the /metrics endpoint.
  • Exposes the /metrics endpoint through the OpenAPI specification, providing a standardized interface for metrics access and management.
  • Fixes the OffsetRecordSentList component schema to return record offsets or errors.
  • Fixes the ConsumerRecord component schema to return key and value as objects, not just (JSON) strings.
  • Corrects the HTTP status codes returned by the /ready and /healthy endpoints:

    • Changes the successful response code from 200 to 204, indicating no content in the response for success.
    • Adds the 500 status code to the specification for the failure case, indicating no content in the response for errors.
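To illustrate the corrected status codes, the relevant part of the OpenAPI specification for the health endpoints might look like the following sketch. The paths and descriptions are assumptions based on the changes listed above, not a verbatim excerpt of the Kafka Bridge specification.

Example OpenAPI responses for the health endpoints (sketch)

/healthy:
  get:
    responses:
      '204':
        description: The bridge is healthy; the response has no content.
      '500':
        description: The bridge is not healthy; the response has no content.
/ready:
  get:
    responses:
      '204':
        description: The bridge is ready; the response has no content.
      '500':
        description: The bridge is not ready; the response has no content.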

See Using the AMQ Streams Kafka Bridge.
