Chapter 2. Enhancements
The enhancements added in this release are outlined below.
2.1. Kafka 3.0.0 enhancements
For an overview of the enhancements introduced with Kafka 3.0.0, refer to the Kafka 3.0.0 Release Notes.
2.2. Disable automatic network policies for listeners
You can now disable the automatic creation of NetworkPolicy resources for listeners and use your own custom network policies instead.
By default, AMQ Streams automatically creates a NetworkPolicy resource for every listener that is enabled on a Kafka broker. The network policy for a listener allows applications in all namespaces to connect; this behavior can be restricted using the networkPolicyPeers configuration.
To disable the auto-created NetworkPolicies, set the STRIMZI_NETWORK_POLICY_GENERATION environment variable to false in the Cluster Operator configuration.
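The STRIMZI_NETWORK_POLICY_GENERATION environment variable is set in the Cluster Operator Deployment. A sketch of the relevant excerpt (the surrounding Deployment fields are omitted):

```yaml
# Excerpt from the Cluster Operator Deployment container spec:
# disable automatic NetworkPolicy creation for listeners
env:
  - name: STRIMZI_NETWORK_POLICY_GENERATION
    value: "false"
```

With generation disabled, you can apply your own NetworkPolicy resources to control access to the listeners.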
2.3. SCRAM users now managed using Kafka Admin API
The User Operator now uses the Kafka Admin API instead of ZooKeeper to manage the credentials of SCRAM-SHA-512 users. The Operator connects directly to Kafka for SCRAM-SHA-512 credentials management.
This change is part of ongoing work in the Apache Kafka project to remove Kafka’s dependency on ZooKeeper.
As a result, some ZooKeeper-related configuration has been deprecated and removed.
2.4. Use Maven coordinates for connector plugins
You can deploy Kafka Connect with build configuration that automatically builds a container image with the connector plugins you require for your data connections. You can now specify the connector plugin artifacts as Maven coordinates.
AMQ Streams supports the following types of artifacts:
- JAR files, which are downloaded and used directly
- TGZ archives, which are downloaded and unpacked
- Maven artifacts, which are identified by Maven coordinates
- Other artifacts, which are downloaded and used directly
The Maven coordinates identify plugin artifacts and dependencies so that they can be located and fetched from a Maven repository.
Example Maven artifact
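A minimal sketch of a Maven artifact in a Kafka Connect build configuration, based on the Strimzi build schema (the registry, plugin name, and coordinates shown are illustrative):

```yaml
# Excerpt from a KafkaConnect resource: build a container image
# that includes a connector plugin fetched by Maven coordinates
spec:
  build:
    output:
      type: docker
      image: my-registry.io/my-org/my-connect-cluster:latest  # illustrative
    plugins:
      - name: my-connector                                    # illustrative
        artifacts:
          - type: maven
            group: org.apache.camel.kafkaconnector            # illustrative coordinates
            artifact: camel-timer-kafka-connector
            version: 0.11.0
```

The group, artifact, and version values together form the Maven coordinates used to resolve the plugin and its dependencies from a Maven repository.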
2.5. Environment Variables Configuration Provider
Use the Environment Variables Configuration Provider plugin to load configuration data from environment variables.
You can use the provider to load configuration data for all Kafka components, including producers and consumers. For example, use it to supply the credentials for a Kafka Connect connector configuration.
The values for the environment variables can be mapped from secrets or config maps. You can use the Environment Variables Configuration Provider, for example, to load certificates or JAAS configuration from environment variables mapped from OpenShift secrets.
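As a sketch of how this fits together (the provider class name follows the Strimzi Environment Variables Configuration Provider; the variable, secret, and key names are illustrative):

```yaml
# Excerpt from a KafkaConnect resource
spec:
  # Enable the provider under the alias "env"
  config:
    config.providers: env
    config.providers.env.class: io.strimzi.kafka.EnvVarConfigProvider
  # Map an environment variable from an OpenShift secret (names are illustrative)
  template:
    connectContainer:
      env:
        - name: AWS_ACCESS_KEY_ID
          valueFrom:
            secretKeyRef:
              name: aws-creds
              key: awsAccessKey
```

A connector configuration can then reference the variable with a placeholder such as ${env:AWS_ACCESS_KEY_ID} instead of embedding the credential directly.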
2.6. Download connector plugin artifacts from insecure servers
You can now download connector plugin artifacts from insecure servers. This applies to JAR, TGZ, and other file types that are downloaded and added to a container image. To allow this, set the insecure property to true when you specify the plugin artifact in the connector build configuration. Setting the property disables all TLS verification, so the artifact is downloaded even if the server is insecure.
Example TGZ plugin artifact allowing insecure download
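A sketch of such an artifact definition (the plugin name and URL are illustrative; the insecure property is described above):

```yaml
# Excerpt from a KafkaConnect build configuration:
# insecure: true disables TLS verification for this download
spec:
  build:
    plugins:
      - name: my-connector                                    # illustrative
        artifacts:
          - type: tgz
            url: http://my-server.example/plugins/my-connector.tgz  # illustrative
            insecure: true
```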
2.7. Pull secret for Kafka Connect builds
You can now specify the container registry secret used to pull base images for Kafka Connect builds on OpenShift. The secret, which contains the credentials for pulling the base image, is configured as a template customization: use the pullSecret property in the buildConfig template of the Kafka Connect spec.
Example template customization for a pull secret
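A minimal sketch of the buildConfig template customization (the secret name is illustrative):

```yaml
# Excerpt from a KafkaConnect resource: registry credentials
# for pulling the build's base image
spec:
  template:
    buildConfig:
      pullSecret: my-registry-credentials   # illustrative secret name
```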
2.8. Specify the volume of the /tmp directory
You can now configure the total local storage size of /tmp, the temporary emptyDir volume. The setting currently applies to single containers running in a pod. The default storage size is 1Mi. The size is configured as a template customization: use the tmpDirSizeLimit property in the pod template of the resource spec.
Example template customization to specify local storage size for /tmp
# ...
template:
  pod:
    tmpDirSizeLimit: "2Mi"
# ...
2.9. Metering
As a cluster administrator, you can use the Metering tool available on OpenShift to analyze what is happening in your cluster. Using Prometheus as a default data source, you can generate reports on pods, namespaces, and most other Kubernetes resources. You can now also use the Metering operator to analyze your installed AMQ Streams components and determine whether you are in compliance with your Red Hat subscription.
To use metering with AMQ Streams, you must first install and configure the Metering operator on OpenShift Container Platform.