Chapter 4. Enhancements


AMQ Streams 2.5 adds a number of enhancements.

4.1. Kafka 3.5.x enhancements

The AMQ Streams 2.5.x release supports Kafka 3.5.0. The 2.5.2 patch release incorporates the updates and improvements from Kafka 3.5.2.

For an overview of the enhancements introduced with Kafka 3.5.x, refer to the Kafka 3.5.0, Kafka 3.5.1, and Kafka 3.5.2 Release Notes.

4.2. OAuth 2.0 support for KRaft mode

KeycloakRBACAuthorizer, the Red Hat Single Sign-On authorizer provided with AMQ Streams, has been replaced by KeycloakAuthorizer. The new authorizer is compatible with AMQ Streams using either ZooKeeper cluster management or KRaft mode. As with the previous authorizer, you configure KeycloakAuthorizer on the Kafka broker to use the REST endpoints for Authorization Services provided by Red Hat Single Sign-On. KeycloakRBACAuthorizer can still be used with ZooKeeper cluster management, but you should migrate to the new authorizer.
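
As a minimal sketch, enabling the new authorizer in the Kafka broker configuration might look like the following. The endpoint address, realm, and client ID are placeholder values; substitute the values for your own Red Hat Single Sign-On deployment.

Example broker configuration for KeycloakAuthorizer

# Enable the new authorizer in place of KeycloakRBACAuthorizer
authorizer.class.name=io.strimzi.kafka.oauth.server.authorizer.KeycloakAuthorizer
# Token endpoint of the Red Hat Single Sign-On realm (placeholder address and realm)
strimzi.authorization.token.endpoint.uri=https://<auth-server-address>/auth/realms/<realm-name>/protocol/openid-connect/token
# OAuth client ID that the broker uses with the authorization server
strimzi.authorization.client.id=kafka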

4.3. OAuth 2.0 configuration properties for grant management

You can now use additional configuration to manage OAuth 2.0 grants from the authorization server.

If you are using Red Hat Single Sign-On for OAuth 2.0 authorization, you can add the following properties to the authorization configuration of your Kafka brokers:

  • strimzi.authorization.grants.max.idle.time.seconds specifies the time in seconds after which an idle grant in the cache can be evicted. The default value is 300.
  • strimzi.authorization.grants.gc.period.seconds specifies the time, in seconds, between consecutive runs of a job that cleans stale grants from the cache. The default value is 300.
  • strimzi.authorization.reuse.grants controls whether the latest grants are fetched for a new session. When disabled, each new session retrieves the latest grants from Red Hat Single Sign-On instead of reusing the grants cached for the user. The default value is true.

Kafka configuration to use OAuth 2.0 authorization

strimzi.authorization.grants.max.idle.time.seconds="300"
strimzi.authorization.grants.gc.period.seconds="300"
strimzi.authorization.reuse.grants="false"

See Configuring OAuth 2.0 authorization support.

4.4. OAuth 2.0 support for JsonPath queries when extracting usernames

To use OAuth 2.0 authentication in a Kafka cluster, you specify listener configuration with an OAUTH authentication mechanism. When configuring the listener properties, it is now possible to use a JsonPath query to extract a username from the token received from the authorization server. You can specify JsonPath queries for the oauth.username.claim and oauth.fallback.username.claim properties in your listener configuration. This allows you to extract a username from a token by accessing a specific value within a nested data structure. For example, the username might be held in a user info data structure nested within the JSON token payload.

The following example shows how JsonPath queries are specified for the properties when configuring token validation using an introspection endpoint.

Configuring token validation using an introspection endpoint

# ...
listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \
  # ...
  oauth.username.claim="['user.info'].['user.id']" \ 1
  oauth.fallback.username.claim="['client.info'].['client.id']" \ 2
  # ...

1. The token claim (or key) that contains the actual username in the token. The username is the principal used to identify the user. The value for oauth.username.claim depends on the authorization server used.
2. An authorization server may not provide a single attribute to identify both regular users and clients. When a client authenticates in its own name, the server might provide a client ID. When a user authenticates using a username and password, to obtain a refresh token or an access token, the server might provide a username attribute in addition to a client ID. Use this fallback option to specify the username claim (attribute) to use if the primary user ID attribute is not available.
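
For illustration, with the queries shown above, a hypothetical token payload such as the following would resolve to the username alice:

Example token payload (illustrative only)

{
  "user.info": {
    "user.id": "alice"
  },
  "client.info": {
    "client.id": "my-client"
  }
}

The bracket notation in the JsonPath queries is what allows keys containing dots, such as user.info, to be addressed within the nested structure.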

See Configuring OAuth 2.0 support for Kafka brokers.

4.5. Kafka Bridge enhancements for metrics and OpenAPI

The latest release of the Kafka Bridge introduces the following changes:

  • Removes the remote and local labels from HTTP server-related metrics to prevent time series sample growth.
  • Excludes requests to the /metrics endpoint from the HTTP server metrics.
  • Exposes the /metrics endpoint through the OpenAPI specification, providing a standardized interface for metrics access and management.
  • Fixes the OffsetRecordSentList component schema to return record offsets or errors.
  • Fixes the ConsumerRecord component schema to return key and value as objects, not just (JSON) strings.
  • Corrects the HTTP status codes returned by the /ready and /healthy endpoints (see the example after this list):

    • Changes the successful response code from 200 to 204, indicating no content in the response for success.
    • Adds the 500 status code to the specification for the failure case, indicating no content in the response for errors.
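
As a usage sketch, you can check the corrected status codes against a running Kafka Bridge instance; the host and port below are placeholders for your deployment:

Checking the readiness endpoint

curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080/ready
# Prints 204 when the bridge is ready; a 500 response indicates failure.
# In both cases the response has no body.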

See Using the AMQ Streams Kafka Bridge.
