Chapter 5. KafkaClusterSpec schema reference

Used in: KafkaSpec

Full list of KafkaClusterSpec schema properties

Configures a Kafka cluster.

5.1. listeners

Use the listeners property to configure the listeners that provide access to Kafka brokers.

Example configuration of a plain (unencrypted) listener without authentication

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
spec:
  kafka:
    # ...
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
    # ...
  zookeeper:
    # ...
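
Listeners can also enable TLS encryption and client authentication. The following example is a minimal sketch assuming mutual TLS (mTLS) authentication is wanted; adjust the authentication type to suit your environment.

Example configuration of a tls (encrypted) listener with mTLS authentication

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
spec:
  kafka:
    # ...
    listeners:
      - name: tls
        port: 9093
        type: internal
        tls: true
        # mTLS: clients authenticate with TLS client certificates
        authentication:
          type: tls
    # ...
  zookeeper:
    # ...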

5.2. config

Use the config property to configure Kafka broker options as keys.

The values can be one of the following JSON types:

  • String
  • Number
  • Boolean

Exceptions

You can specify and configure the options listed in the Apache Kafka documentation.

However, Streams for Apache Kafka takes care of configuring and managing options related to the following, which cannot be changed:

  • Security (encryption, authentication, and authorization)
  • Listener configuration
  • Broker ID configuration
  • Configuration of log data directories
  • Inter-broker communication
  • ZooKeeper connectivity

Properties with the following prefixes cannot be set:

  • advertised.
  • authorizer.
  • broker.
  • controller.
  • cruise.control.metrics.reporter.bootstrap.servers
  • cruise.control.metrics.topic
  • host.name
  • inter.broker.listener.name
  • listener.
  • listeners.
  • log.dir
  • metadata.log.dir
  • node.id
  • password.
  • port
  • process.roles
  • sasl.
  • security.
  • ssl.
  • super.user
  • zookeeper.clientCnxnSocket
  • zookeeper.connect
  • zookeeper.metadata.migration.enable
  • zookeeper.set.acl
  • zookeeper.ssl

If the config property contains an option that cannot be changed, it is disregarded, and a warning message is logged to the Cluster Operator log file. All other supported options are forwarded to Kafka, including the following exceptions to the options configured by Streams for Apache Kafka:

  • Any ssl configuration for supported TLS versions and cipher suites
  • Configuration for the zookeeper.connection.timeout.ms property to set the maximum time allowed for establishing a ZooKeeper connection
  • Cruise Control metrics properties:

    • cruise.control.metrics.topic.num.partitions
    • cruise.control.metrics.topic.replication.factor
    • cruise.control.metrics.topic.retention.ms
    • cruise.control.metrics.topic.auto.create.retries
    • cruise.control.metrics.topic.auto.create.timeout.ms
    • cruise.control.metrics.topic.min.insync.replicas
  • Controller properties:

    • controller.quorum.election.backoff.max.ms
    • controller.quorum.election.timeout.ms
    • controller.quorum.fetch.timeout.ms

Example Kafka broker configuration

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    config:
      num.partitions: 1
      num.recovery.threads.per.data.dir: 1
      default.replication.factor: 3
      offsets.topic.replication.factor: 3
      transaction.state.log.replication.factor: 3
      transaction.state.log.min.isr: 1
      log.retention.hours: 168
      log.segment.bytes: 1073741824
      log.retention.check.interval.ms: 300000
      num.network.threads: 3
      num.io.threads: 8
      socket.send.buffer.bytes: 102400
      socket.receive.buffer.bytes: 102400
      socket.request.max.bytes: 104857600
      group.initial.rebalance.delay.ms: 0
      zookeeper.connection.timeout.ms: 6000
    # ...
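
To illustrate the permitted exceptions, the following sketch (values are illustrative, not recommendations) sets one of the allowed ssl options and one of the allowed Cruise Control metrics options alongside the ZooKeeper connection timeout:

Example configuration using permitted exception options

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    config:
      # Permitted ssl exception: restrict the enabled TLS protocol versions
      ssl.enabled.protocols: TLSv1.3
      # Permitted Cruise Control metrics exception: metrics topic replication
      cruise.control.metrics.topic.replication.factor: 3
      # Permitted ZooKeeper exception: connection timeout
      zookeeper.connection.timeout.ms: 6000
    # ...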

5.3. brokerRackInitImage

When rack awareness is enabled, Kafka broker pods use an init container to collect the labels from the OpenShift cluster nodes. The container image used for this init container can be configured using the brokerRackInitImage property. If the brokerRackInitImage field is not set, the following images are used in order of priority:

  1. Container image specified in STRIMZI_DEFAULT_KAFKA_INIT_IMAGE environment variable in the Cluster Operator configuration.
  2. registry.redhat.io/amq-streams/strimzi-rhel9-operator:2.7.0 container image.

Example brokerRackInitImage configuration

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    rack:
      topologyKey: topology.kubernetes.io/zone
    brokerRackInitImage: my-org/my-image:latest
    # ...

Note

Overriding container images is recommended only in special situations where you need to use a different container registry, for example, because your network does not allow access to the container registry used by Streams for Apache Kafka. In this case, you should either copy the Streams for Apache Kafka images or build them from source. If the configured image is not compatible with Streams for Apache Kafka images, it might not work properly.

5.4. logging

Kafka has its own configurable loggers, which include the following:

  • log4j.logger.org.I0Itec.zkclient.ZkClient
  • log4j.logger.org.apache.zookeeper
  • log4j.logger.kafka
  • log4j.logger.org.apache.kafka
  • log4j.logger.kafka.request.logger
  • log4j.logger.kafka.network.Processor
  • log4j.logger.kafka.server.KafkaApis
  • log4j.logger.kafka.network.RequestChannel$
  • log4j.logger.kafka.controller
  • log4j.logger.kafka.log.LogCleaner
  • log4j.logger.state.change.logger
  • log4j.logger.kafka.authorizer.logger

Kafka uses the Apache log4j logger implementation.

Use the logging property to configure loggers and logger levels.

You can set the log levels by specifying the logger and level directly (inline) or by using a custom (external) ConfigMap. If a ConfigMap is used, you set the logging.valueFrom.configMapKeyRef.name property to the name of the ConfigMap containing the external logging configuration. Inside the ConfigMap, the logging configuration is described using log4j.properties. Both the logging.valueFrom.configMapKeyRef.name and logging.valueFrom.configMapKeyRef.key properties are mandatory.

A ConfigMap using the exact logging configuration specified is created with the custom resource when the Cluster Operator is running, then recreated after each reconciliation. If you do not specify a custom ConfigMap, default logging settings are used. If a specific logger value is not set, upper-level logger settings are inherited for that logger.

For more information about log levels, see Apache logging services.

The following examples show inline and external logging configuration. The inline logging specifies the root logger level. You can also set log levels for specific classes or loggers by adding them to the loggers property.

Inline logging

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
spec:
  # ...
  kafka:
    # ...
    logging:
      type: inline
      loggers:
        kafka.root.logger.level: INFO
        log4j.logger.kafka.coordinator.transaction: TRACE
        log4j.logger.kafka.log.LogCleanerManager: DEBUG
        log4j.logger.kafka.request.logger: DEBUG
        log4j.logger.io.strimzi.kafka.oauth: DEBUG
        log4j.logger.org.openpolicyagents.kafka.OpaAuthorizer: DEBUG
  # ...

Note

Setting a log level to DEBUG may result in a large amount of log output and may have performance implications.

External logging

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
spec:
  # ...
  logging:
    type: external
    valueFrom:
      configMapKeyRef:
        name: custom-config-map
        key: kafka-log4j.properties
  # ...
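
The referenced ConfigMap must contain the Log4j configuration under the specified key. The following is a minimal sketch of such a ConfigMap; the appender and logger settings shown are illustrative, not a recommended configuration.

Example ConfigMap providing external logging configuration

apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-config-map
data:
  kafka-log4j.properties: |
    # Root logger with a console appender
    log4j.rootLogger=INFO, stdout
    log4j.appender.stdout=org.apache.log4j.ConsoleAppender
    log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
    log4j.appender.stdout.layout.ConversionPattern=%d{ISO8601} %p %m (%c) [%t]%n
    # Illustrative logger override
    log4j.logger.kafka.request.logger=DEBUG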

Any available loggers that are not configured have their level set to OFF.

If Kafka was deployed using the Cluster Operator, changes to Kafka logging levels are applied dynamically.

If you use external logging, a rolling update is triggered when logging appenders are changed.

Garbage collector (GC)

Garbage collector logging can also be enabled (or disabled) using the jvmOptions property.
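
For example, the following minimal sketch enables GC logging through the jvmOptions property:

Example of enabling GC logging

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
spec:
  kafka:
    # ...
    jvmOptions:
      # Enable garbage collector logging for the Kafka broker JVMs
      gcLoggingEnabled: true
    # ...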

5.5. KafkaClusterSpec schema properties

Property | Property type | Description

version

string

The Kafka broker version. Defaults to the latest version. Consult the user documentation to understand the process required to upgrade or downgrade the version.

metadataVersion

string

Added in Streams for Apache Kafka 2.7. The KRaft metadata version used by the Kafka cluster. This property is ignored when running in ZooKeeper mode. If the property is not set, it defaults to the metadata version that corresponds to the version property.

replicas

integer

The number of pods in the cluster. This property is required when node pools are not used.

image

string

The container image used for Kafka pods. If the property is not set, the default Kafka image version is determined based on the version configuration. The image names are specifically mapped to corresponding versions in the Cluster Operator configuration. Changing the Kafka image version does not automatically update the image versions for other components, such as Kafka Exporter.

listeners

GenericKafkaListener array

Configures listeners of Kafka brokers.

config

map

Kafka broker config properties with the following prefixes cannot be set: listeners, advertised., broker., listener., host.name, port, inter.broker.listener.name, sasl., ssl., security., password., log.dir, zookeeper.connect, zookeeper.set.acl, zookeeper.ssl, zookeeper.clientCnxnSocket, authorizer., super.user, cruise.control.metrics.topic, cruise.control.metrics.reporter.bootstrap.servers, node.id, process.roles, controller., metadata.log.dir, zookeeper.metadata.migration.enable (with the exception of: zookeeper.connection.timeout.ms, sasl.server.max.receive.size, ssl.cipher.suites, ssl.protocol, ssl.enabled.protocols, ssl.secure.random.implementation, cruise.control.metrics.topic.num.partitions, cruise.control.metrics.topic.replication.factor, cruise.control.metrics.topic.retention.ms, cruise.control.metrics.topic.auto.create.retries, cruise.control.metrics.topic.auto.create.timeout.ms, cruise.control.metrics.topic.min.insync.replicas, controller.quorum.election.backoff.max.ms, controller.quorum.election.timeout.ms, controller.quorum.fetch.timeout.ms).

storage

EphemeralStorage, PersistentClaimStorage, JbodStorage

Storage configuration (disk). Cannot be updated. This property is required when node pools are not used.

authorization

KafkaAuthorizationSimple, KafkaAuthorizationOpa, KafkaAuthorizationKeycloak, KafkaAuthorizationCustom

Authorization configuration for Kafka brokers.

rack

Rack

Configuration of the broker.rack broker config.

brokerRackInitImage

string

The image of the init container used for initializing the broker.rack.

livenessProbe

Probe

Pod liveness checking.

readinessProbe

Probe

Pod readiness checking.

jvmOptions

JvmOptions

JVM Options for pods.

jmxOptions

KafkaJmxOptions

JMX Options for Kafka brokers.

resources

ResourceRequirements

CPU and memory resources to reserve.

metricsConfig

JmxPrometheusExporterMetrics

Metrics configuration.

logging

InlineLogging, ExternalLogging

Logging configuration for Kafka.

template

KafkaClusterTemplate

Template for Kafka cluster resources. The template allows users to specify how the OpenShift resources are generated.

tieredStorage

TieredStorageCustom

Configure the tiered storage feature for Kafka brokers.
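
To show how several of these properties fit together, the following sketch combines replicas, resources, probes, and JVM options in a single Kafka resource. All values are illustrative rather than sizing recommendations, and the version shown assumes the Kafka version shipped with Streams for Apache Kafka 2.7.

Example combining several KafkaClusterSpec properties

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    version: 3.7.0
    replicas: 3
    resources:
      requests:
        memory: 8Gi
        cpu: "2"
      limits:
        memory: 8Gi
        cpu: "4"
    readinessProbe:
      initialDelaySeconds: 15
      timeoutSeconds: 5
    livenessProbe:
      initialDelaySeconds: 15
      timeoutSeconds: 5
    jvmOptions:
      "-Xms": 4096m
      "-Xmx": 4096m
    # ...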
