
Chapter 2. Enhancements


The enhancements added in this release are outlined below.

2.1. Kafka 3.0.0 enhancements

For an overview of the enhancements introduced with Kafka 3.0.0, refer to the Kafka 3.0.0 Release Notes.

2.2. Disable automatic network policies for listeners

You can now disable the automatic creation of NetworkPolicy resources for listeners and use your own custom network policies instead.

By default, AMQ Streams automatically creates a NetworkPolicy resource for every listener that is enabled on a Kafka broker. For a particular listener, the network policy allows applications in all namespaces to connect. You can restrict access to a listener at the network level to selected applications using the networkPolicyPeers property.
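For example, a listener can be restricted so that only pods carrying a given label may connect. This is a sketch of a Kafka listener configuration; the pod label used in the selector is a hypothetical example:

```yaml
# Sketch of a Kafka listener restricted with networkPolicyPeers.
# The label (app: kafka-client) is an illustrative assumption.
listeners:
  - name: tls
    port: 9093
    type: internal
    tls: true
    networkPolicyPeers:
      - podSelector:
          matchLabels:
            app: kafka-client  # only pods with this label can connect
```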

To disable the auto-created NetworkPolicies, set the STRIMZI_NETWORK_POLICY_GENERATION environment variable to false in the Cluster Operator configuration.
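For example, the variable is set as a standard container environment variable in the Cluster Operator Deployment (a sketch; the surrounding Deployment fields are abbreviated):

```yaml
# Fragment of the Cluster Operator Deployment
spec:
  template:
    spec:
      containers:
        - name: strimzi-cluster-operator
          env:
            - name: STRIMZI_NETWORK_POLICY_GENERATION
              value: "false"  # disable automatic NetworkPolicy creation
```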

See Cluster Operator configuration and Network policies.

2.3. SCRAM users now managed using Kafka Admin API

The User Operator now uses the Kafka Admin API instead of ZooKeeper to manage the credentials of SCRAM-SHA-512 users. The Operator connects directly to Kafka for SCRAM-SHA-512 credentials management.
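The change is transparent in the custom resources: a SCRAM-SHA-512 user is still declared through a KafkaUser resource as before, for example:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: my-user
  labels:
    strimzi.io/cluster: my-cluster  # the Kafka cluster the user belongs to
spec:
  authentication:
    type: scram-sha-512  # credentials now managed through the Kafka Admin API
```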

This change is part of ongoing work in the Apache Kafka project to remove Kafka’s dependency on ZooKeeper.

Some ZooKeeper-related configuration has been deprecated and removed.

See Deprecated features and Using the User Operator.

2.4. Use Maven coordinates for connector plugins

You can deploy Kafka Connect with build configuration that automatically builds a container image with the connector plugins you require for your data connections. You can now specify the connector plugin artifacts as Maven coordinates.

AMQ Streams supports the following types of artifacts:

  • JAR files, which are downloaded and used directly
  • TGZ archives, which are downloaded and unpacked
  • Maven artifacts, which are located using Maven coordinates
  • Other artifacts, which are downloaded and used directly

The Maven coordinates identify plugin artifacts and dependencies so that they can be located and fetched from a Maven repository.

Example Maven artifact

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster
spec:
  #...
  build:
    output:
      #...
    plugins:
      - name: my-plugin
        artifacts:
          - type: maven # (1)
            repository: https://mvnrepository.com # (2)
            group: org.apache.camel.kafkaconnector # (3)
            artifact: camel-kafka-connector # (4)
            version: 0.11.0 # (5)
  #...

1. (Required) Type of artifact.
2. (Optional) Maven repository to download the artifacts from. If you do not specify a repository, the Maven Central repository is used by default.
3. (Required) Maven group ID.
4. (Required) Maven artifact ID.
5. (Required) Maven version number.

See Build schema reference

2.5. Environment Variables Configuration Provider

Use the Environment Variables Configuration Provider plugin to load configuration data from environment variables.

You can use the provider to load configuration data for all Kafka components, including producers and consumers. Use the provider, for example, to provide the credentials for Kafka Connect connector configuration.

The values for the environment variables can be mapped from secrets or config maps. You can use the Environment Variables Configuration Provider, for example, to load certificates or JAAS configuration from environment variables mapped from OpenShift secrets.
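As a sketch, the provider is enabled in the Kafka Connect configuration and an environment variable is mapped from an OpenShift secret. The secret name, key, and variable name below are hypothetical examples:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster
spec:
  # ...
  config:
    # enable the Environment Variables Configuration Provider
    config.providers: env
    config.providers.env.class: io.strimzi.kafka.EnvVarConfigProvider
  externalConfiguration:
    env:
      - name: DB_PASSWORD  # hypothetical variable mapped from a secret
        valueFrom:
          secretKeyRef:
            name: db-creds
            key: password
```

A connector configuration can then reference the mapped value as ${env:DB_PASSWORD} instead of embedding the credential directly.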

See Loading configuration values from external sources

2.6. Download connector plugin artifacts from insecure servers

You can now download connector plugin artifacts from insecure servers. This applies to JAR, TGZ, and other files that are downloaded and added to a container image. To allow this, set the insecure property to true in the artifact specification of the connector build configuration. Setting the property disables all TLS verification for that download, so the artifact is downloaded even when the server cannot be verified as secure.

Example TGZ plugin artifact allowing insecure download

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster
spec:
  #...
  build:
    output:
      #...
    plugins:
      - name: my-plugin
        artifacts:
          - type: tgz
            url: https://my-domain.tld/my-connector-archive.tgz
            sha512sum: 158...jg10
            insecure: true
  #...

See Build schema reference

2.7. Specify a pull secret for Kafka Connect builds

You can now specify the container registry secret used to pull base images for Kafka Connect builds on OpenShift. The secret is configured as a template customization: use the buildConfig template in the Kafka Connect spec to specify the secret with the pullSecret property. The secret contains the credentials for pulling the base image.

Example template customization for a pull secret

# ...
template:
  buildConfig:
    metadata:
      labels:
        label1: value1
        label2: value2
      annotations:
        annotation1: value1
        annotation2: value2
    pullSecret: "<secret_credentials>"
# ...

2.8. Specify the size of the /tmp volume

You can now configure the total local storage size for /tmp, the temporary emptyDir volume. The default storage size is 1Mi. Currently, the setting applies to single containers running in a pod. The size is configured as a template customization: use the pod template in the spec of the resource to set the volume size with the tmpDirSizeLimit property.

Example template customization to specify local storage size for /tmp

# ...
template:
  pod:
    tmpDirSizeLimit: "2Mi"
# ...

2.9. Metering

As a cluster administrator, you can use the Metering tool available on OpenShift to analyze what is happening in your cluster. Using Prometheus as a default data source, you can generate reports on pods, namespaces, and most other Kubernetes resources. You can now use the OpenShift Metering operator to generate reports on your installed AMQ Streams components, for example to determine whether you are in compliance with your Red Hat subscription.

To use metering with AMQ Streams, you must first install and configure the Metering operator on OpenShift Container Platform.
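For illustration, metering reports are requested through Report resources in the Metering operator's namespace. This is a sketch; the query name and time range below are example values:

```yaml
apiVersion: metering.openshift.io/v1
kind: Report
metadata:
  name: pod-memory-usage        # hypothetical report name
  namespace: openshift-metering
spec:
  query: pod-memory-usage       # predefined ReportQuery to run
  reportingStart: '2022-01-01T00:00:00Z'
  reportingEnd: '2022-01-31T23:59:00Z'
  runImmediately: true          # run as soon as the data is available
```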

See Using Metering on AMQ Streams
