Chapter 5. KafkaClusterSpec schema reference
Used in: KafkaSpec
Full list of KafkaClusterSpec schema properties
Configures a Kafka cluster using the Kafka custom resource.

The config properties are one part of the overall configuration for the resource. Use the config properties to configure Kafka broker options as keys.
Example Kafka configuration
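The original example was not preserved here; a minimal sketch of broker options set as keys under config follows. The cluster name my-cluster and the option values are illustrative assumptions, not recommendations:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster  # illustrative name
spec:
  kafka:
    # ...
    config:
      num.partitions: 1                  # Number
      default.replication.factor: 3      # Number
      min.insync.replicas: 2             # Number
      compression.type: gzip             # String
      auto.create.topics.enable: false   # Boolean
    # ...
```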
The values can be one of the following JSON types:
- String
- Number
- Boolean
Exceptions
You can specify and configure the options listed in the Apache Kafka documentation.
However, Streams for Apache Kafka takes care of configuring and managing options related to the following, which cannot be changed:
- Security (encryption, authentication, and authorization)
- Listener configuration
- Broker ID configuration
- Configuration of log data directories
- Inter-broker communication
- ZooKeeper connectivity
Properties with the following prefixes cannot be set:

- advertised.
- authorizer.
- broker.
- controller
- cruise.control.metrics.reporter.bootstrap.
- cruise.control.metrics.topic
- host.name
- inter.broker.listener.name
- listener.
- listeners.
- log.dir
- password.
- port
- process.roles
- sasl.
- security.
- servers, node.id
- ssl.
- super.user
- zookeeper.clientCnxnSocket
- zookeeper.connect
- zookeeper.set.acl
- zookeeper.ssl
If the config property contains an option that cannot be changed, it is disregarded, and a warning message is logged to the Cluster Operator log file. All other supported options are forwarded to Kafka, including the following exceptions to the options configured by Streams for Apache Kafka:
- Any ssl configuration for supported TLS versions and cipher suites
- Configuration for the zookeeper.connection.timeout.ms property to set the maximum time allowed for establishing a ZooKeeper connection
- Cruise Control metrics properties:
  - cruise.control.metrics.topic.num.partitions
  - cruise.control.metrics.topic.replication.factor
  - cruise.control.metrics.topic.retention.ms
  - cruise.control.metrics.topic.auto.create.retries
  - cruise.control.metrics.topic.auto.create.timeout.ms
  - cruise.control.metrics.topic.min.insync.replicas
- Controller properties:
  - controller.quorum.election.backoff.max.ms
  - controller.quorum.election.timeout.ms
  - controller.quorum.fetch.timeout.ms
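As a sketch of how these exceptions are used, options like the following are passed through to Kafka even though their prefixes are otherwise restricted (the values shown are illustrative):

```yaml
spec:
  kafka:
    # ...
    config:
      zookeeper.connection.timeout.ms: 6000
      cruise.control.metrics.topic.replication.factor: 1
      controller.quorum.fetch.timeout.ms: 2000
    # ...
```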
5.1. Configuring rack awareness and init container images
Rack awareness is enabled using the rack property. When rack awareness is enabled, Kafka broker pods use an init container to collect the labels from the OpenShift cluster nodes. The container image for this init container can be specified using the brokerRackInitImage property. If the brokerRackInitImage field is not provided, the images used are prioritized as follows:
- Container image specified in the STRIMZI_DEFAULT_KAFKA_INIT_IMAGE environment variable in the Cluster Operator configuration.
- registry.redhat.io/amq-streams/strimzi-rhel9-operator:2.8.0 container image.
Example brokerRackInitImage configuration
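The original example was not preserved here; a sketch follows. The topologyKey value is the standard well-known node label, while the registry and image name in brokerRackInitImage are illustrative assumptions:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster  # illustrative name
spec:
  kafka:
    # ...
    rack:
      topologyKey: topology.kubernetes.io/zone
    brokerRackInitImage: my-registry.io/my-org/my-init-image:latest  # illustrative image
    # ...
```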
Overriding container images is recommended only in special situations, such as when your network does not allow access to the container registry used by Streams for Apache Kafka. In such cases, you should either copy the Streams for Apache Kafka images or build them from the source. Be aware that if the configured image is not compatible with Streams for Apache Kafka images, it might not work properly.
5.2. Logging
Kafka has its own configurable loggers, which include the following:

- log4j.logger.org.apache.zookeeper
- log4j.logger.kafka
- log4j.logger.org.apache.kafka
- log4j.logger.kafka.request.logger
- log4j.logger.kafka.network.Processor
- log4j.logger.kafka.server.KafkaApis
- log4j.logger.kafka.network.RequestChannel$
- log4j.logger.kafka.controller
- log4j.logger.kafka.log.LogCleaner
- log4j.logger.state.change.logger
- log4j.logger.kafka.authorizer.logger
Kafka uses the Apache log4j logger implementation. Use the logging property to configure loggers and logger levels.
You can set the log levels by specifying the logger and level directly (inline) or by using a custom (external) ConfigMap. If a ConfigMap is used, you set the logging.valueFrom.configMapKeyRef.name property to the name of the ConfigMap containing the external logging configuration. Inside the ConfigMap, the logging configuration is described using log4j.properties. Both the logging.valueFrom.configMapKeyRef.name and logging.valueFrom.configMapKeyRef.key properties are mandatory.

A ConfigMap using the exact logging configuration specified is created with the custom resource when the Cluster Operator is running, then recreated after each reconciliation. If you do not specify a custom ConfigMap, default logging settings are used. If a specific logger value is not set, upper-level logger settings are inherited for that logger. For more information about log levels, see Apache logging services.
Here we see examples of inline and external logging. The inline logging specifies the root logger level. You can also set log levels for specific classes or loggers by adding them to the loggers property.
Inline logging
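The original example was not preserved here; a sketch of inline logging follows, setting the root logger level and one specific logger (the logger levels shown are illustrative):

```yaml
spec:
  kafka:
    # ...
    logging:
      type: inline
      loggers:
        kafka.root.logger.level: INFO
        log4j.logger.kafka.network.Processor: DEBUG
    # ...
```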
Setting a log level to DEBUG may result in a large amount of log output and may have performance implications.
External logging
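The original example was not preserved here; a sketch of external logging referencing a ConfigMap follows. The ConfigMap name and key are illustrative assumptions:

```yaml
spec:
  kafka:
    # ...
    logging:
      type: external
      valueFrom:
        configMapKeyRef:
          name: customConfigMap        # illustrative ConfigMap name
          key: kafka-log4j.properties  # illustrative key
    # ...
```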
Any available loggers that are not configured have their level set to OFF.
If Kafka was deployed using the Cluster Operator, changes to Kafka logging levels are applied dynamically.
If you use external logging, a rolling update is triggered when logging appenders are changed.
Garbage collector (GC)

Garbage collector logging can also be enabled (or disabled) using the jvmOptions property.
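For example, GC logging can be turned on with a sketch like the following, using the gcLoggingEnabled field of jvmOptions:

```yaml
spec:
  kafka:
    # ...
    jvmOptions:
      gcLoggingEnabled: true
    # ...
```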
5.3. KafkaClusterSpec schema properties
Property | Property type | Description |
---|---|---|
version | string | The Kafka broker version. Defaults to the latest version. Consult the user documentation to understand the process required to upgrade or downgrade the version. |
metadataVersion | string | Added in Streams for Apache Kafka 2.7. The KRaft metadata version used by the Kafka cluster. This property is ignored when running in ZooKeeper mode. If the property is not set, it defaults to the metadata version that corresponds to the version property. |
replicas | integer | The number of pods in the cluster. This property is required when node pools are not used. |
image | string | The container image used for Kafka pods. If the property is not set, the default Kafka image version is determined based on the version configuration. |
listeners | GenericKafkaListener array | Configures listeners to provide access to Kafka brokers. |
config | map | Kafka broker config properties with the following prefixes cannot be set: listeners, advertised., broker., listener., host.name, port, inter.broker.listener.name, sasl., ssl., security., password., log.dir, zookeeper.connect, zookeeper.set.acl, zookeeper.ssl, zookeeper.clientCnxnSocket, authorizer., super.user, cruise.control.metrics.topic, cruise.control.metrics.reporter.bootstrap.servers, node.id, process.roles, controller., metadata.log.dir, zookeeper.metadata.migration.enable, client.quota.callback.static.kafka.admin., client.quota.callback.static.produce, client.quota.callback.static.fetch, client.quota.callback.static.storage.per.volume.limit.min.available., client.quota.callback.static.excluded.principal.name.list (with the exception of: zookeeper.connection.timeout.ms, sasl.server.max.receive.size, ssl.cipher.suites, ssl.protocol, ssl.enabled.protocols, ssl.secure.random.implementation, cruise.control.metrics.topic.num.partitions, cruise.control.metrics.topic.replication.factor, cruise.control.metrics.topic.retention.ms, cruise.control.metrics.topic.auto.create.retries, cruise.control.metrics.topic.auto.create.timeout.ms, cruise.control.metrics.topic.min.insync.replicas, controller.quorum.election.backoff.max.ms, controller.quorum.election.timeout.ms, controller.quorum.fetch.timeout.ms). |
storage | EphemeralStorage, PersistentClaimStorage, JbodStorage | Storage configuration (disk). Cannot be updated. This property is required when node pools are not used. |
authorization | KafkaAuthorizationSimple, KafkaAuthorizationOpa, KafkaAuthorizationKeycloak, KafkaAuthorizationCustom | Authorization configuration for Kafka brokers. |
rack | Rack | Configuration of the broker.rack broker config. |
brokerRackInitImage | string | The image of the init container used for initializing the broker.rack. |
livenessProbe | Probe | Pod liveness checking. |
readinessProbe | Probe | Pod readiness checking. |
jvmOptions | JvmOptions | JVM Options for pods. |
jmxOptions | KafkaJmxOptions | JMX Options for Kafka brokers. |
resources | ResourceRequirements | CPU and memory resources to reserve. |
metricsConfig | JmxPrometheusExporterMetrics | Metrics configuration. |
logging | InlineLogging, ExternalLogging | Logging configuration for Kafka. |
template | KafkaClusterTemplate | Template for Kafka cluster resources. The template allows users to specify how the OpenShift resources are generated. |
tieredStorage | TieredStorageCustom | Configure the tiered storage feature for Kafka brokers. |
quotas | QuotasPluginKafka, QuotasPluginStrimzi | Quotas plugin configuration for Kafka brokers allows setting quotas for disk usage, produce/fetch rates, and more. Supported plugin types include kafka and strimzi. |