Chapter 3. Enhancements
The enhancements added in this release are outlined below.
3.1. Kafka 2.8.0 enhancements
For an overview of the enhancements introduced with Kafka 2.8.0, refer to the Kafka 2.8.0 Release Notes.
3.2. Kafka Connect build configuration updates
You can use build configuration so that AMQ Streams automatically builds a container image with the connector plugins you require for your data connections.
A dedicated service account is now created with Kafka Connect build pods. The service account is distinct from Kafka Connect itself. Before this release, the build ran under the default service account. Having its own identity is useful when specifying authentication and access.
Kafka Connect build now also works behind proxies if standard HTTP proxies (HTTP_PROXY, HTTPS_PROXY, and NO_PROXY) are set as environment variables for the AMQ Streams deployment.
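As a minimal sketch, assuming the variables are set on the Cluster Operator Deployment (the proxy addresses shown are placeholders):
Example proxy environment variables (sketch)
# Sketch only: proxy variables added to the operator container environment
env:
  - name: HTTP_PROXY
    value: http://proxy.example.com:3128
  - name: HTTPS_PROXY
    value: http://proxy.example.com:3128
  - name: NO_PROXY
    value: internal.example.com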
3.3. Kubernetes Configuration Provider for external configuration data
Use the Kubernetes Configuration Provider plugin to load configuration data from external sources. You can load data from OpenShift Secrets or ConfigMaps.
The provider operates independently of AMQ Streams. It loads the data without needing to restart the Kafka component, even when using a new Secret or ConfigMap.
You can use it to load configuration data for all Kafka components, including producers and consumers. Use it, for example, to provide the credentials for a Kafka Connect instance hosting multiple connectors without disruption.
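As an illustrative sketch, the provider could be registered in the Kafka Connect configuration and then referenced from a connector configuration. The resource names, namespace, and alias below are assumptions; the provider class comes from the Kubernetes Configuration Provider plugin.
Example provider configuration (sketch with illustrative names)
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster
spec:
  # ...
  config:
    # Register the provider under the alias "secrets" (alias is an assumption)
    config.providers: secrets
    config.providers.secrets.class: io.strimzi.kafka.KubernetesSecretConfigProvider
A connector configuration could then reference a Secret value with a placeholder such as ${secrets:my-namespace/my-secret:password}, so the credential itself is never stored in the connector resource.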
3.4. Log filters with markers
If you are using a ConfigMap to configure the (log4j2) logging levels for AMQ Streams Operators, you can now define logging filters to limit what is returned in the log. You add the filter properties to the ConfigMap.
The filters use markers to specify what to include in the log. You specify a kind, namespace, and name for the marker. For example, if a Kafka cluster is failing, you can isolate the logs by specifying the kind as Kafka, and use the namespace and name of the failing cluster. This example shows a marker filter for a Kafka cluster named my-kafka-cluster.
Basic logging filter configuration
appender.console.filter.filter1.type=MarkerFilter
appender.console.filter.filter1.onMatch=ACCEPT
appender.console.filter.filter1.onMismatch=DENY
appender.console.filter.filter1.marker=Kafka(my-namespace/my-kafka-cluster)
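As a sketch, the filter properties sit alongside the existing log4j2 configuration in the Operator's logging ConfigMap; the ConfigMap name and data key below are assumptions based on a typical Cluster Operator deployment.
Example ConfigMap fragment with the marker filter (sketch)
kind: ConfigMap
apiVersion: v1
metadata:
  name: strimzi-cluster-operator   # assumed ConfigMap name
data:
  log4j2.properties: |
    # ... existing logging configuration ...
    appender.console.filter.filter1.type=MarkerFilter
    appender.console.filter.filter1.onMatch=ACCEPT
    appender.console.filter.filter1.onMismatch=DENY
    appender.console.filter.filter1.marker=Kafka(my-namespace/my-kafka-cluster)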
3.5. OAuth 2.0 authentication enhancements
Configure audience and scope
You can now configure the clientAudience and clientScope properties when obtaining a token from the authorization server. The property values are passed to the token endpoint as audience and scope parameters. Both properties are configured in the OAuth 2.0 authentication listener configuration in the Kafka custom resource.
Use these properties in the following scenarios:
- When obtaining an access token for inter-broker authentication
- In the name of a client for OAuth 2.0 over PLAIN client authentication, using a clientId and secret
Specifically, the audience and scope can now be included in the request when the PLAIN callback first exchanges the clientId (as the username) and the secret (as the password) with the authorization server in order to obtain an access token.
These properties affect whether a client can obtain a token and the content of the token. They do not affect token validation rules imposed by the listener.
Example configuration for clientAudience and clientScope properties
  # ...
  authentication:
    type: oauth
    # ...
    clientAudience: AUDIENCE
    clientScope: SCOPE
Authorization servers sometimes provide aud (audience) claims in JWT access tokens. When audience checks are enabled, the Kafka broker rejects tokens that do not contain the broker’s clientId in their aud claims. To enable audience checks, set the checkAudience option to true in the oauth listener configuration. Audience checks are disabled by default.
See Configuring OAuth 2.0 support for Kafka brokers and KafkaListenerAuthenticationOAuth schema reference
Specify audience for Kafka Connect and Kafka Bridge
You can now specify the audience option when configuring OAuth client authentication for Kafka Connect or the Kafka Bridge in their respective custom resources. Previously, only the scope option was supported for these resources.
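For example, a KafkaConnect resource might set the option as follows. This is a sketch; the token endpoint, client credentials, Secret name, and audience value are placeholders.
Example audience configuration for Kafka Connect (sketch)
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster
spec:
  # ...
  authentication:
    type: oauth
    tokenEndpointUri: https://OAUTH-SERVER-ADDRESS/token
    clientId: kafka-connect
    clientSecret:
      secretName: my-connect-oauth   # assumed Secret holding the client secret
      key: clientSecret
    audience: kafka
    scope: read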
See KafkaClientAuthenticationOAuth schema reference
Token endpoint not required with OAuth 2.0 over PLAIN
The tokenEndpointUri option is no longer required when using the "client ID and secret" method for OAuth 2.0 over PLAIN authentication.
Example OAuth 2.0 over PLAIN configuration with token endpoint URI specified
  # ...
  authentication:
    type: oauth
    # ...
    enablePlain: true
    tokenEndpointUri: https://OAUTH-SERVER-ADDRESS/auth/realms/external/protocol/openid-connect/token
If the tokenEndpointUri is not specified, the listener treats the:
- username parameter as the account name
- password parameter as the raw access token, which is passed to the authorization server for validation (the same behavior as for OAUTHBEARER authentication)
The behavior of the "long-lived access token" method for OAuth 2.0 over PLAIN authentication is unchanged. The tokenEndpointUri is not required when using this method.
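For comparison, a sketch of the same listener authentication without the token endpoint, in which case the password parameter is validated as a raw access token:
Example OAuth 2.0 over PLAIN configuration without a token endpoint URI (sketch)
  # ...
  authentication:
    type: oauth
    # ...
    enablePlain: true
    # tokenEndpointUri omitted: the username is the account name and the
    # password is treated as a raw access token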
3.6. User quotas
The handling of user quotas through the User Operator is no longer managed by ZooKeeper. Instead, user quotas are handled through the API.
Additionally, support has been added for Kafka’s mutation rate quota. This quota limits the number of partition mutations allowed per second. The quota prevents Kafka clusters from being overwhelmed by concurrent topic operations.
The number of partition mutations includes the following types of user requests:
- Creating partitions for a new topic
- Adding partitions to an existing topic
- Deleting partitions from a topic
You can configure a mutation rate quota to control the rate at which mutations are accepted for user requests. The rate is accumulated from the number of partitions created or deleted.
Use the controllerMutationRate option to apply the quota to the Kafka user. In this example, 10 partition creation and deletion operations are allowed per second.
Example KafkaUser configuration with user quotas
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: my-user
  labels:
    strimzi.io/cluster: my-cluster
spec:
  # ...
  quotas:
    #...
    controllerMutationRate: 10
See User quotas
3.7. Pause reconciliation of custom resources
You can pause the reconciliation of custom resources managed by AMQ Streams operators to perform fixes or make updates. You can also pause reconciliation of custom resources you are creating. The custom resource is created, but it is ignored.
You add an annotation to the custom resource to pause it.
Example annotation for pausing reconciliation
oc annotate KIND-OF-CUSTOM-RESOURCE NAME-OF-CUSTOM-RESOURCE strimzi.io/pause-reconciliation="true"
It is now possible to pause reconciliation of KafkaTopic custom resources.
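The annotation can also be set directly in the resource metadata, for example on a KafkaTopic. The names below are illustrative.
Example paused KafkaTopic (sketch)
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic
  labels:
    strimzi.io/cluster: my-cluster
  annotations:
    strimzi.io/pause-reconciliation: "true"
spec:
  # ...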
3.8. Kafka Exporter update
The custom version of Kafka Exporter that is provided with AMQ Streams has been updated to version 1.3.1. AMQ Streams includes an example Grafana dashboard for Kafka Exporter in the examples provided (examples/metrics/grafana-dashboards/strimzi-kafka-exporter.json).
3.9. Kafka Connect build uses hashes to name download files
You can configure a KafkaConnect resource to create a custom Kafka Connect container image. Using spec.build configuration automates the process. You configure plugins to specify the implementation artifacts and output to reference the container registry that stores the image. AMQ Streams downloads and adds the connector plugins into the new container image.
The build process now uses the URL hash to name downloaded artifact files. Previously, it used the last segment of the download URL. If your plugin artifact requires a specific name, you can use a new other artifact type and its fileName field.
Example naming of a plugin artifact
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster
spec:
  #...
  build:
    output:
      #...
    plugins:
      - name: my-plugin
        artifacts:
          - type: other
            url: https://my-domain.tld/my-other-file.ext
            sha512sum: 589...ab4
            fileName: name-of-file.ext
  #...