Chapter 5. KafkaClusterSpec schema reference
Used in: KafkaSpec
Full list of KafkaClusterSpec schema properties
Configures a Kafka cluster using the Kafka custom resource.
The config properties are one part of the overall configuration for the resource. Use the config property to set Kafka broker options as keys.
Example Kafka configuration
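A minimal sketch of the config block in a Kafka custom resource (the cluster name and option values are illustrative):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    config:
      num.partitions: 1                  # Number
      default.replication.factor: 3      # Number
      min.insync.replicas: 2             # Number
      auto.create.topics.enable: false   # Boolean
      compression.type: producer         # String
    # ...
```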
The values can be one of the following JSON types:
- String
- Number
- Boolean
Exceptions
You can specify and configure the options listed in the Apache Kafka documentation.
However, Streams for Apache Kafka takes care of configuring and managing options related to the following, which cannot be changed:
- Security (encryption, authentication, and authorization)
- Listener configuration
- Broker ID configuration
- Configuration of log data directories
- Inter-broker communication
Properties with the following prefixes cannot be set:
- advertised.
- authorizer.
- broker.
- controller.
- cruise.control.metrics.reporter.bootstrap.servers
- cruise.control.metrics.topic
- host.name
- inter.broker.listener.name
- listener.
- listeners
- log.dir
- node.id
- password.
- port
- process.roles
- sasl.
- security.
- ssl.
- super.user
Streams for Apache Kafka supports only KRaft-based Kafka deployments. As a result, ZooKeeper-related configuration options are not supported.
If the config property contains an option that cannot be changed, it is disregarded, and a warning message is logged to the Cluster Operator log file. All other supported options are forwarded to Kafka, including the following exceptions to the options configured by Streams for Apache Kafka:
- Any `ssl.` configuration for supported TLS versions and cipher suites
- Cruise Control metrics properties:
  - cruise.control.metrics.topic.num.partitions
  - cruise.control.metrics.topic.replication.factor
  - cruise.control.metrics.topic.retention.ms
  - cruise.control.metrics.topic.auto.create.retries
  - cruise.control.metrics.topic.auto.create.timeout.ms
  - cruise.control.metrics.topic.min.insync.replicas
- Controller properties:
  - controller.quorum.election.backoff.max.ms
  - controller.quorum.election.timeout.ms
  - controller.quorum.fetch.timeout.ms
5.1. Configuring rack awareness and init container images
Rack awareness is enabled using the rack property. When rack awareness is enabled, Kafka broker pods use an init container to collect the labels from the OpenShift cluster nodes. The container image for the init container can be specified using the brokerRackInitImage property. If the brokerRackInitImage field is not provided, the following images are used, in order of priority:
- The container image specified in the STRIMZI_DEFAULT_KAFKA_INIT_IMAGE environment variable in the Cluster Operator configuration.
- The registry.redhat.io/amq-streams/strimzi-rhel9-operator:3.0.1 container image.
Example brokerRackInitImage configuration
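A sketch of overriding the init container image (the registry and image name are illustrative):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    rack:
      topologyKey: topology.kubernetes.io/zone
    brokerRackInitImage: my-org/my-image:latest
    # ...
```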
Overriding container images is recommended only in special situations, such as when your network does not allow access to the container registry used by Streams for Apache Kafka. In such cases, you should either copy the Streams for Apache Kafka images or build them from the source. Be aware that if the configured image is not compatible with Streams for Apache Kafka images, it might not work properly.
5.2. Logging
Kafka 3.9 and earlier versions use log4j1 for logging. For log4j1-based configuration examples, refer to the Streams for Apache Kafka 2.9 documentation.
Kafka has its own preconfigured loggers:
| Logger | Description | Default Level |
|---|---|---|
| rootLogger.level | Default logger for all classes | INFO |
| logger.kafka.level | Logs Kafka node classes | INFO |
| logger.orgapachekafka.level | Logs Kafka library classes | INFO |
| logger.requestlogger.level | Logs client request details | WARN |
| logger.requestchannel.level | Logs request handling in the broker | WARN |
| logger.controller.level | Logs controller activity, such as leadership changes | INFO |
| logger.logcleaner.level | Logs log compaction and cleanup processes | INFO |
| logger.statechange.level | Logs broker and partition state transitions | INFO |
| logger.authorizer.level | Logs access control decisions | INFO |
Kafka uses the Apache log4j2 logger implementation. Use the logging property to configure loggers and logger levels.
You can set log levels using either the inline or external logging configuration types.
Specify loggers and levels directly in the custom resource for inline configuration:
Example inline logging configuration
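A minimal sketch of inline logging, using loggers from the table above (the chosen levels are illustrative):

```yaml
spec:
  kafka:
    # ...
    logging:
      type: inline
      loggers:
        rootLogger.level: INFO
        logger.kafka.level: DEBUG
    # ...
```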
You can define additional loggers by specifying the full class or package name using logger.<name>.name. For example, to configure logging for OAuth components inline:
Example custom inline loggers
# ...
logger.oauth.name: io.strimzi.kafka.oauth
logger.oauth.level: DEBUG
Alternatively, you can reference an external ConfigMap containing a complete log4j2.properties file that defines your own log4j2 configuration, including loggers, appenders, and layout configuration:
Example external logging configuration
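A sketch of external logging that references a ConfigMap (the ConfigMap name is illustrative; the key must point to a complete log4j2.properties file):

```yaml
spec:
  kafka:
    # ...
    logging:
      type: external
      valueFrom:
        configMapKeyRef:
          name: customConfigMap
          key: log4j2.properties
    # ...
```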
Garbage collector (GC)
Garbage collector logging can also be enabled (or disabled) using the jvmOptions property.
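For example, a minimal fragment enabling GC logging through jvmOptions:

```yaml
spec:
  kafka:
    # ...
    jvmOptions:
      gcLoggingEnabled: true
    # ...
```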
5.3. KafkaClusterSpec schema properties
| Property | Property type | Description |
|---|---|---|
| version | string | The Kafka broker version. Defaults to the latest version. Consult the user documentation to understand the process required to upgrade or downgrade the version. |
| metadataVersion | string | Added in Streams for Apache Kafka 2.7. The KRaft metadata version used by the Kafka cluster. This property is ignored when running in ZooKeeper mode. If the property is not set, it defaults to the metadata version that corresponds to the `version` property. |
| replicas | integer | The number of pods in the cluster. This property is required when node pools are not used. |
| image | string | The container image used for Kafka pods. If the property is not set, the default Kafka image version is determined based on the `version` configured in the Cluster Operator configuration. |
| listeners | GenericKafkaListener array | Configures listeners to provide access to Kafka brokers. |
| config | map | Kafka broker config properties with the following prefixes cannot be set: listeners, advertised., broker., listener., host.name, port, inter.broker.listener.name, sasl., ssl., security., password., log.dir, zookeeper.connect, zookeeper.set.acl, zookeeper.ssl, zookeeper.clientCnxnSocket, authorizer., super.user, cruise.control.metrics.topic, cruise.control.metrics.reporter.bootstrap.servers, node.id, process.roles, controller., metadata.log.dir, zookeeper.metadata.migration.enable, client.quota.callback.static.kafka.admin., client.quota.callback.static.produce, client.quota.callback.static.fetch, client.quota.callback.static.storage.per.volume.limit.min.available., client.quota.callback.static.excluded.principal.name.list (with the exception of: zookeeper.connection.timeout.ms, sasl.server.max.receive.size, ssl.cipher.suites, ssl.protocol, ssl.enabled.protocols, ssl.secure.random.implementation, cruise.control.metrics.topic.num.partitions, cruise.control.metrics.topic.replication.factor, cruise.control.metrics.topic.retention.ms, cruise.control.metrics.topic.auto.create.retries, cruise.control.metrics.topic.auto.create.timeout.ms, cruise.control.metrics.topic.min.insync.replicas, controller.quorum.election.backoff.max.ms, controller.quorum.election.timeout.ms, controller.quorum.fetch.timeout.ms). |
| storage | EphemeralStorage, PersistentClaimStorage, JbodStorage | Storage configuration (disk). Cannot be updated. |
| authorization | KafkaAuthorizationSimple, KafkaAuthorizationOpa, KafkaAuthorizationKeycloak, KafkaAuthorizationCustom | Authorization configuration for Kafka brokers. |
| rack | Rack | Configuration of the `broker.rack` broker config. |
| brokerRackInitImage | string | The image of the init container used for initializing the `broker.rack`. |
| livenessProbe | Probe | Pod liveness checking. |
| readinessProbe | Probe | Pod readiness checking. |
| jvmOptions | JvmOptions | JVM Options for pods. |
| jmxOptions | KafkaJmxOptions | JMX Options for Kafka brokers. |
| resources | ResourceRequirements | CPU and memory resources to reserve. |
| metricsConfig | JmxPrometheusExporterMetrics | Metrics configuration. |
| logging | InlineLogging, ExternalLogging | Logging configuration for Kafka. |
| template | KafkaClusterTemplate | Template for Kafka cluster resources. The template allows users to specify how the OpenShift resources are generated. |
| tieredStorage | TieredStorageCustom | Configure the tiered storage feature for Kafka brokers. |
| quotas | QuotasPluginKafka, QuotasPluginStrimzi | Quotas plugin configuration for Kafka brokers. Allows setting quotas for disk usage, produce/fetch rates, and more. Supported plugin types include `kafka` (default) and `strimzi`. If not specified, the default `kafka` quotas plugin is used. |
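The strimzi plugin type can, for example, cap per-client produce and fetch rates and stop producers when free disk space runs low (a sketch; the rates, limit, and excluded user name are illustrative):

```yaml
spec:
  kafka:
    # ...
    quotas:
      type: strimzi
      producerByteRate: 1000000   # bytes per second, per client
      consumerByteRate: 1000000   # bytes per second, per client
      minAvailableBytesPerVolume: 100000000000
      excludedPrincipals:
        - my-admin-user
    # ...
```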