Chapter 5. KafkaClusterSpec schema reference


Used in: KafkaSpec

For the full list of KafkaClusterSpec schema properties, see Section 5.3.

5.1. Configuration

Configures a Kafka cluster using the Kafka custom resource.

The config properties are one part of the overall configuration for the resource. Use the config properties to pass Kafka broker options as keys with associated values.

Example Kafka configuration

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
  annotations:
    strimzi.io/node-pools: enabled
    strimzi.io/kraft: enabled
spec:
  kafka:
    version: 4.0.0
    metadataVersion: 4.0
    # ...
    config:
      auto.create.topics.enable: "false"
      offsets.topic.replication.factor: 3
      transaction.state.log.replication.factor: 3
      transaction.state.log.min.isr: 2
      default.replication.factor: 3
      min.insync.replicas: 2
# ...

The values can be one of the following JSON types:

  • String
  • Number
  • Boolean
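
For example, quoting determines how a value is passed; a minimal sketch using standard Kafka broker options:

config:
  compression.type: gzip                  # String
  default.replication.factor: 3           # Number
  unclean.leader.election.enable: false   # Boolean
  auto.create.topics.enable: "false"      # Quoted, so passed as the String "false"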

Exceptions

You can specify and configure the options listed in the Apache Kafka documentation.

However, Streams for Apache Kafka takes care of configuring and managing options related to the following, which cannot be changed:

  • Security (encryption, authentication, and authorization)
  • Listener configuration
  • Broker ID configuration
  • Configuration of log data directories
  • Inter-broker communication

Properties with the following prefixes cannot be set:

  • advertised.
  • authorizer.
  • broker.
  • controller
  • cruise.control.metrics.reporter.bootstrap.
  • cruise.control.metrics.topic
  • host.name
  • inter.broker.listener.name
  • listener.
  • listeners.
  • log.dir
  • node.id
  • password.
  • port
  • process.roles
  • sasl.
  • security.
  • ssl.
  • super.user
Note

Streams for Apache Kafka supports only KRaft-based Kafka deployments. As a result, ZooKeeper-related configuration options are not supported.

If the config property contains an option that cannot be changed, it is disregarded, and a warning message is logged to the Cluster Operator log file. All other supported options are forwarded to Kafka, including the following exceptions to the restricted options:

  • Any ssl configuration for supported TLS versions and cipher suites
  • Cruise Control metrics properties:

    • cruise.control.metrics.topic.num.partitions
    • cruise.control.metrics.topic.replication.factor
    • cruise.control.metrics.topic.retention.ms
    • cruise.control.metrics.topic.auto.create.retries
    • cruise.control.metrics.topic.auto.create.timeout.ms
    • cruise.control.metrics.topic.min.insync.replicas
  • Controller properties:

    • controller.quorum.election.backoff.max.ms
    • controller.quorum.election.timeout.ms
    • controller.quorum.fetch.timeout.ms
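
For example, a minimal sketch that sets several of these permitted exceptions; the values are illustrative assumptions, not recommendations:

config:
  ssl.enabled.protocols: TLSv1.3                        # supported TLS version
  ssl.cipher.suites: TLS_AES_256_GCM_SHA384             # cipher suite
  cruise.control.metrics.topic.replication.factor: 3    # Cruise Control metrics property
  controller.quorum.election.timeout.ms: 2000           # controller quorum property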

Rack awareness is enabled using the rack property. When rack awareness is enabled, Kafka broker pods use an init container to collect the labels from the OpenShift cluster nodes. The container image for this init container can be specified using the brokerRackInitImage property. If the brokerRackInitImage field is not provided, the following images are used in order of priority:

  1. Container image specified in the STRIMZI_DEFAULT_KAFKA_INIT_IMAGE environment variable in the Cluster Operator configuration, as shown in the sketch after this list.
  2. The registry.redhat.io/amq-streams/strimzi-rhel9-operator:3.0.1 container image.
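
A minimal sketch of option 1, setting the environment variable on the Cluster Operator Deployment; the image reference is a placeholder:

# Excerpt from the Cluster Operator Deployment
spec:
  template:
    spec:
      containers:
        - name: strimzi-cluster-operator
          env:
            - name: STRIMZI_DEFAULT_KAFKA_INIT_IMAGE
              value: my-registry.example.com/my-org/kafka-init:latest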

Example brokerRackInitImage configuration

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
  annotations:
    strimzi.io/node-pools: enabled
    strimzi.io/kraft: enabled
spec:
  kafka:
    # ...
    rack:
      topologyKey: topology.kubernetes.io/zone
    brokerRackInitImage: my-org/my-image:latest
    # ...

Note

Overriding container images is recommended only in special situations, such as when your network does not allow access to the container registry used by Streams for Apache Kafka. In such cases, you should either copy the Streams for Apache Kafka images or build them from source. Be aware that if the configured image is not compatible with Streams for Apache Kafka images, it might not work properly.

5.2. Logging

Warning

Kafka 3.9 and earlier versions use log4j1 for logging. For log4j1-based configuration examples, refer to the Streams for Apache Kafka 2.9 documentation.

Kafka has its own preconfigured loggers:

Logger           Description                                            Default Level
rootLogger       Default logger for all classes                         INFO
kafka            Logs Kafka node classes                                INFO
orgapachekafka   Logs Kafka library classes                             INFO
requestlogger    Logs client request details                            WARN
requestchannel   Logs request handling in the broker                    WARN
controller       Logs controller activity, such as leadership changes   INFO
logcleaner       Logs log compaction and cleanup processes              INFO
statechange      Logs broker and partition state transitions            INFO
authorizer       Logs access control decisions                          INFO

Kafka uses the Apache log4j2 logger implementation. Use the logging property to configure loggers and logger levels.

You can set log levels using either the inline or external logging configuration types.

Specify loggers and levels directly in the custom resource for inline configuration:

Example inline logging configuration

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
spec:
  # ...
  kafka:
    # ...
    logging:
      type: inline
      loggers:
        rootLogger.level: INFO
        logger.kafka.level: DEBUG
        logger.logcleaner.level: DEBUG
        logger.authorizer.level: TRACE
  # ...

You can define additional loggers by specifying the full class or package name using logger.<name>.name. For example, to configure logging for OAuth components inline:

Example custom inline loggers

# ...
logger.oauth.name: io.strimzi.kafka.oauth (1)
logger.oauth.level: DEBUG (2)

(1) Creates a logger for the io.strimzi.kafka.oauth package.
(2) Sets the logging level for the OAuth package.

Alternatively, you can reference an external ConfigMap containing a complete log4j2.properties file that defines your own log4j2 configuration, including loggers, appenders, and layout configuration:

Example external logging configuration

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
spec:
  # ...
  logging:
    type: external
    valueFrom:
      configMapKeyRef:
        # name and key are mandatory
        name: custom-config-map
        key: log4j2.properties
  # ...
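The referenced ConfigMap must contain the key named in configMapKeyRef. A minimal sketch of such a ConfigMap, assuming the name used above; the appender and layout settings are illustrative:

Example ConfigMap for external logging

apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-config-map
data:
  log4j2.properties: |
    name = KafkaLog4j2Config
    appender.console.type = Console
    appender.console.name = STDOUT
    appender.console.layout.type = PatternLayout
    appender.console.layout.pattern = %d{yyyy-MM-dd HH:mm:ss} %-5p [%t] %c - %m%n
    rootLogger.level = INFO
    rootLogger.appenderRef.console.ref = STDOUT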

Garbage collector (GC)

Garbage collector logging can also be enabled (or disabled) using the jvmOptions property.
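
For example, a minimal sketch that turns on GC logging (the gcLoggingEnabled option defaults to false):

Example GC logging configuration

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
spec:
  kafka:
    # ...
    jvmOptions:
      gcLoggingEnabled: true
    # ...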

5.3. KafkaClusterSpec schema properties

version (string)
  The Kafka broker version. Defaults to the latest version. Consult the user documentation to understand the process required to upgrade or downgrade the version.

metadataVersion (string)
  Added in Streams for Apache Kafka 2.7. The KRaft metadata version used by the Kafka cluster. This property is ignored when running in ZooKeeper mode. If the property is not set, it defaults to the metadata version that corresponds to the version property.

replicas (integer)
  The replicas property has been deprecated. Replicas are now configured in KafkaNodePool resources, and this option is ignored.

image (string)
  The container image used for Kafka pods. If the property is not set, the default Kafka image version is determined based on the version configuration. The image names are specifically mapped to corresponding versions in the Cluster Operator configuration. Changing the Kafka image version does not automatically update the image versions for other components, such as Kafka Exporter.

listeners (GenericKafkaListener array)
  Configures listeners to provide access to Kafka brokers.

config (map)
  Kafka broker config properties with the following prefixes cannot be set: listeners, advertised., broker., listener., host.name, port, inter.broker.listener.name, sasl., ssl., security., password., log.dir, zookeeper.connect, zookeeper.set.acl, zookeeper.ssl, zookeeper.clientCnxnSocket, authorizer., super.user, cruise.control.metrics.topic, cruise.control.metrics.reporter.bootstrap.servers, node.id, process.roles, controller., metadata.log.dir, zookeeper.metadata.migration.enable, client.quota.callback.static.kafka.admin., client.quota.callback.static.produce, client.quota.callback.static.fetch, client.quota.callback.static.storage.per.volume.limit.min.available., client.quota.callback.static.excluded.principal.name.list (with the exception of: zookeeper.connection.timeout.ms, sasl.server.max.receive.size, ssl.cipher.suites, ssl.protocol, ssl.enabled.protocols, ssl.secure.random.implementation, cruise.control.metrics.topic.num.partitions, cruise.control.metrics.topic.replication.factor, cruise.control.metrics.topic.retention.ms, cruise.control.metrics.topic.auto.create.retries, cruise.control.metrics.topic.auto.create.timeout.ms, cruise.control.metrics.topic.min.insync.replicas, controller.quorum.election.backoff.max.ms, controller.quorum.election.timeout.ms, controller.quorum.fetch.timeout.ms).

storage (EphemeralStorage, PersistentClaimStorage, JbodStorage)
  The storage property has been deprecated. Storage is now configured in KafkaNodePool resources, and this option is ignored.

authorization (KafkaAuthorizationSimple, KafkaAuthorizationOpa, KafkaAuthorizationKeycloak, KafkaAuthorizationCustom)
  Authorization configuration for Kafka brokers.

rack (Rack)
  Configuration of the broker.rack broker config.

brokerRackInitImage (string)
  The image of the init container used for initializing the broker.rack.

livenessProbe (Probe)
  Pod liveness checking.

readinessProbe (Probe)
  Pod readiness checking.

jvmOptions (JvmOptions)
  JVM options for pods.

jmxOptions (KafkaJmxOptions)
  JMX options for Kafka brokers.

resources (ResourceRequirements)
  CPU and memory resources to reserve.

metricsConfig (JmxPrometheusExporterMetrics)
  Metrics configuration.

logging (InlineLogging, ExternalLogging)
  Logging configuration for Kafka.

template (KafkaClusterTemplate)
  Template for Kafka cluster resources. The template allows users to specify how the OpenShift resources are generated.

tieredStorage (TieredStorageCustom)
  Configures the tiered storage feature for Kafka brokers.

quotas (QuotasPluginKafka, QuotasPluginStrimzi)
  Quotas plugin configuration for Kafka brokers allows setting quotas for disk usage, produce/fetch rates, and more. Supported plugin types include kafka (default) and strimzi. If not specified, the default kafka quotas plugin is used.
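
For example, a minimal sketch of the strimzi quotas plugin; the field names follow the QuotasPluginStrimzi schema, and the values are illustrative assumptions rather than recommendations:

Example quotas configuration

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
spec:
  kafka:
    # ...
    quotas:
      type: strimzi
      producerByteRate: 1000000               # produce rate limit in bytes per second
      consumerByteRate: 1000000               # fetch rate limit in bytes per second
      minAvailableBytesPerVolume: 100000000   # block producers when free disk space per volume falls below this
    # ...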
