Chapter 4. Enhancements
AMQ Streams 2.5 adds a number of enhancements.
4.1. Kafka 3.5.x enhancements
The AMQ Streams 2.5.x release supports Kafka 3.5.0. Upgrading to the 2.5.2 patch release incorporates the updates and improvements from Kafka 3.5.2.
For an overview of the enhancements introduced with Kafka 3.5.x, refer to the Kafka 3.5.0, Kafka 3.5.1, and Kafka 3.5.2 Release Notes.
4.2. OAuth 2.0 support for KRaft mode
KeycloakRBACAuthorizer, the Red Hat Single Sign-On authorizer provided with AMQ Streams, has been replaced with KeycloakAuthorizer. The new authorizer is compatible with AMQ Streams whether the cluster uses ZooKeeper cluster management or KRaft mode. As with the previous authorizer, you configure KeycloakAuthorizer on the Kafka broker to use the REST endpoints for Authorization Services provided by Red Hat Single Sign-On. KeycloakRBACAuthorizer can still be used with ZooKeeper cluster management, but you should migrate to the new authorizer.
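As a sketch, enabling the new authorizer in the Kafka broker configuration might look like the following. The listener setup, hostnames, realm, and client ID are placeholder values, not a complete or definitive configuration:

```properties
# Switch from KeycloakRBACAuthorizer to the new KeycloakAuthorizer
authorizer.class.name=io.strimzi.kafka.oauth.server.authorizer.KeycloakAuthorizer
# Red Hat Single Sign-On Authorization Services settings (placeholder values)
strimzi.authorization.token.endpoint.uri=https://<sso-host>/realms/<realm>/protocol/openid-connect/token
strimzi.authorization.client.id=kafka
```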
4.3. OAuth 2.0 configuration properties for grant management
You can now use additional configuration to manage OAuth 2.0 grants from the authorization server.
If you are using Red Hat Single Sign-On for OAuth 2.0 authorization, you can add the following properties to the authorization configuration of your Kafka brokers:
- strimzi.authorization.grants.max.idle.time.seconds specifies the time in seconds after which an idle grant in the cache can be evicted. The default value is 300.
- strimzi.authorization.grants.gc.period.seconds specifies the time, in seconds, between consecutive runs of a job that cleans stale grants from the cache. The default value is 300.
- strimzi.authorization.reuse.grants controls whether the latest grants are fetched for a new session. When disabled, grants are retrieved from Red Hat Single Sign-On and cached for the user. The default value is true.
Kafka configuration to use OAuth 2.0 authorization
strimzi.authorization.grants.max.idle.time.seconds="300"
strimzi.authorization.grants.gc.period.seconds="300"
strimzi.authorization.reuse.grants="false"
4.4. OAuth 2.0 support for JsonPath queries when extracting usernames
To use OAuth 2.0 authentication in a Kafka cluster, you specify listener configuration with an OAUTH authentication mechanism. When configuring the listener properties, it is now possible to use a JsonPath query to extract a username from the tokens issued by the authorization server. You specify the query as the value of the oauth.username.claim and oauth.fallback.username.claim properties in your listener configuration. This allows you to extract a username from a token by accessing a specific value within a nested data structure, such as a username contained in a user info object within the token's JSON payload.
The following example shows how JsonPath queries are specified for the properties when configuring token validation using an introspection endpoint.
Configuring token validation using an introspection endpoint
# ...
listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \
  # ...
  oauth.username.claim="['user.info'].['user.id']" \ 1
  oauth.fallback.username.claim="['client.info'].['client.id']" \ 2
  # ... ;
1. The token claim (or key) that contains the actual user name in the token. The user name is the principal used to identify the user. The userNameClaim value depends on the authorization server used.
2. An authorization server may not provide a single attribute to identify both regular users and clients. When a client authenticates in its own name, the server might provide a client ID. When a user authenticates using a username and password, to obtain a refresh token or an access token, the server might provide a username attribute in addition to a client ID. Use this fallback option to specify the username claim (attribute) to use if a primary user ID attribute is not available.
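To illustrate how such a query resolves, the following Python sketch walks a hypothetical decoded token payload using plain dictionary traversal. It is not the JsonPath engine used by the OAuth library, and the claim values are invented for the example:

```python
# Hypothetical decoded token payload with nested claim structures.
# Claim names match the queries "['user.info'].['user.id']" and
# "['client.info'].['client.id']"; the values are invented.
token_payload = {
    "user.info": {"user.id": "alice"},
    "client.info": {"client.id": "kafka-producer"},
}

def extract_claim(payload, *keys):
    """Walk nested claims, returning None if any key is missing."""
    value = payload
    for key in keys:
        if not isinstance(value, dict) or key not in value:
            return None
        value = value[key]
    return value

# Primary username claim resolves to the nested user ID.
username = extract_claim(token_payload, "user.info", "user.id")
# Fallback claim resolves to the nested client ID.
fallback = extract_claim(token_payload, "client.info", "client.id")

print(username)  # alice
print(fallback)  # kafka-producer
```

If the primary claim is absent from the token, the fallback claim is used instead, which mirrors the purpose of oauth.fallback.username.claim described above.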
4.5. Kafka Bridge enhancements for metrics and OpenAPI
The latest release of the Kafka Bridge introduces the following changes:
- Removes the remote and local labels from HTTP server-related metrics to prevent time series sample growth.
- Eliminates accounting HTTP server metrics for requests on the /metrics endpoint.
- Exposes the /metrics endpoint through the OpenAPI specification, providing a standardized interface for metrics access and management.
- Fixes the OffsetRecordSentList component schema to return record offsets or errors.
- Fixes the ConsumerRecord component schema to return key and value as objects, not just (JSON) strings.
- Corrects the HTTP status codes returned by the /ready and /healthy endpoints:
  - Changes the successful response code from 200 to 204, indicating no content in the response for success.
  - Adds the 500 status code to the specification for the failure case, indicating no content in the response for errors.