Chapter 81. KafkaConnectSpec schema reference
Used in: KafkaConnect
Full list of KafkaConnectSpec schema properties
Configures a Kafka Connect cluster.
The config properties are one part of the overall configuration for the resource. Use the config properties to configure Kafka Connect options as keys.
Example Kafka Connect configuration
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect
spec:
  # ...
  config:
    group.id: my-connect-cluster
    offset.storage.topic: my-connect-cluster-offsets
    config.storage.topic: my-connect-cluster-configs
    status.storage.topic: my-connect-cluster-status
    key.converter: org.apache.kafka.connect.json.JsonConverter
    value.converter: org.apache.kafka.connect.json.JsonConverter
    key.converter.schemas.enable: true
    value.converter.schemas.enable: true
    config.storage.replication.factor: 3
    offset.storage.replication.factor: 3
    status.storage.replication.factor: 3
  # ...
The values can be one of the following JSON types:
- String
- Number
- Boolean
Certain options have default values:
- group.id with default value connect-cluster
- offset.storage.topic with default value connect-cluster-offsets
- config.storage.topic with default value connect-cluster-configs
- status.storage.topic with default value connect-cluster-status
- key.converter with default value org.apache.kafka.connect.json.JsonConverter
- value.converter with default value org.apache.kafka.connect.json.JsonConverter
These options are automatically configured if they are not present in the KafkaConnect.spec.config properties.
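For example, if two Kafka Connect clusters share the same Kafka cluster, each needs its own group.id and storage topics, while any option that is left out falls back to the defaults above. The following is a minimal sketch; the resource and topic names are illustrative, not taken from this reference:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-second-connect   # illustrative name
spec:
  # ...
  config:
    # Only the values that must differ per Connect cluster are set;
    # everything else uses the defaults listed above.
    group.id: my-second-connect-cluster
    offset.storage.topic: my-second-connect-cluster-offsets
    config.storage.topic: my-second-connect-cluster-configs
    status.storage.topic: my-second-connect-cluster-status
  # ...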
Exceptions
You can specify and configure the options listed in the Apache Kafka documentation.
However, Streams for Apache Kafka takes care of configuring and managing options related to the following, which cannot be changed:
- Kafka cluster bootstrap address
- Security (encryption, authentication, and authorization)
- Listener and REST interface configuration
- Plugin path configuration
Properties with the following prefixes cannot be set:
- bootstrap.servers
- consumer.interceptor.classes
- listeners.
- plugin.path
- producer.interceptor.classes
- rest.
- sasl.
- security.
- ssl.
If the config property contains an option that cannot be changed, it is disregarded, and a warning message is logged to the Cluster Operator log file. All other supported options are forwarded to Kafka Connect, including the following exceptions to the options configured by Streams for Apache Kafka:
- Any ssl configuration for supported TLS versions and cipher suites (see the sketch that follows this list)
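For example, the permitted ssl.* exceptions could be used to pin the TLS versions and cipher suites that Kafka Connect uses. The following is a sketch only; the values shown are illustrative choices, not recommendations from this reference:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
spec:
  # ...
  config:
    # Permitted ssl.* exceptions (values are illustrative)
    ssl.enabled.protocols: TLSv1.3
    ssl.protocol: TLSv1.3
    ssl.cipher.suites: TLS_AES_256_GCM_SHA384
  # ...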
The Cluster Operator does not validate keys or values in the config object provided. If an invalid configuration is provided, the Kafka Connect cluster might not start or might become unstable. In this case, fix the configuration so that the Cluster Operator can roll out the new configuration to all Kafka Connect nodes.
81.1. Logging
Kafka Connect has its own configurable loggers:
- connect.root.logger.level
- log4j.logger.org.reflections
Further loggers are added depending on the Kafka Connect plugins running.
Use a curl request to get a complete list of Kafka Connect loggers running from any Kafka broker pod:
curl -s http://<connect-cluster-name>-connect-api:8083/admin/loggers/
Kafka Connect uses the Apache log4j logger implementation. Use the logging property to configure loggers and logger levels.
You can set the log levels by specifying the logger and level directly (inline) or by using a custom (external) ConfigMap. If a ConfigMap is used, you set the logging.valueFrom.configMapKeyRef.name property to the name of the ConfigMap containing the external logging configuration. Inside the ConfigMap, the logging configuration is described using log4j.properties. Both the logging.valueFrom.configMapKeyRef.name and logging.valueFrom.configMapKeyRef.key properties are mandatory. A ConfigMap using the exact logging configuration specified is created with the custom resource when the Cluster Operator is running, then recreated after each reconciliation. If you do not specify a custom ConfigMap, default logging settings are used. If a specific logger value is not set, upper-level logger settings are inherited for that logger. For more information about log levels, see Apache logging services.
Here we see examples of inline and external logging. The inline logging specifies the root logger level. You can also set log levels for specific classes or loggers by adding them to the loggers property.
Inline logging
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
spec:
  # ...
  logging:
    type: inline
    loggers:
      connect.root.logger.level: INFO
      log4j.logger.org.apache.kafka.connect.runtime.WorkerSourceTask: TRACE
      log4j.logger.org.apache.kafka.connect.runtime.WorkerSinkTask: DEBUG
  # ...
Setting a log level to DEBUG may result in a large amount of log output and may have performance implications.
External logging
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
spec:
  # ...
  logging:
    type: external
    valueFrom:
      configMapKeyRef:
        name: customConfigMap
        key: connect-logging.log4j
  # ...
Any available loggers that are not configured have their level set to OFF.
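The referenced ConfigMap might look like the following sketch. It reuses the customConfigMap name and connect-logging.log4j key from the example above; the log4j.properties content is illustrative, not a configuration defined in this reference:

apiVersion: v1
kind: ConfigMap
metadata:
  name: customConfigMap
data:
  connect-logging.log4j: |
    # Illustrative log4j.properties content
    log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
    log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
    log4j.appender.CONSOLE.layout.ConversionPattern=%d{ISO8601} %p %m (%c) [%t]%n
    connect.root.logger.level=INFO
    log4j.rootLogger=${connect.root.logger.level}, CONSOLE
    log4j.logger.org.reflections=ERROR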
If Kafka Connect was deployed using the Cluster Operator, changes to Kafka Connect logging levels are applied dynamically.
If you use external logging, a rolling update is triggered when logging appenders are changed.
Garbage collector (GC)
Garbage collector logging can also be enabled (or disabled) using the jvmOptions property.
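A minimal sketch, assuming the gcLoggingEnabled option of jvmOptions:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
spec:
  # ...
  jvmOptions:
    gcLoggingEnabled: true   # set to false to disable GC logging
  # ...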
81.2. KafkaConnectSpec schema properties
Property | Property type | Description |
---|---|---|
version | string | The Kafka Connect version. Defaults to the latest version. Consult the user documentation to understand the process required to upgrade or downgrade the version. |
replicas | integer | The number of pods in the Kafka Connect group. Defaults to 3. |
image | string | The container image used for Kafka Connect pods. If no image name is explicitly specified, it is determined based on the spec.version configuration. |
bootstrapServers | string | Bootstrap servers to connect to. This should be given as a comma-separated list of <hostname>:<port> pairs. |
tls | | TLS configuration. |
authentication | | Authentication configuration for Kafka Connect. |
config | map | The Kafka Connect configuration. Properties with the following prefixes cannot be set: ssl., sasl., security., listeners, plugin.path, rest., bootstrap.servers, consumer.interceptor.classes, producer.interceptor.classes (with the exception of: ssl.endpoint.identification.algorithm, ssl.cipher.suites, ssl.protocol, ssl.enabled.protocols). |
resources | | The maximum limits for CPU and memory resources and the requested initial resources. |
livenessProbe | | Pod liveness checking. |
readinessProbe | | Pod readiness checking. |
jvmOptions | | JVM Options for pods. |
jmxOptions | | JMX Options. |
logging | | Logging configuration for Kafka Connect. |
clientRackInitImage | string | The image of the init container used for initializing the client.rack. |
rack | | Configuration of the node label which will be used as the client.rack consumer configuration. |
metricsConfig | | Metrics configuration. |
tracing | | The configuration of tracing in Kafka Connect. |
template | | Template for Kafka Connect and Kafka MirrorMaker 2 resources. The template allows users to specify how the Pods, Service, and other services are generated. |
externalConfiguration | | Pass data from Secrets or ConfigMaps to the Kafka Connect pods and use them to configure connectors. |
build | | Configures how the Connect container image should be built. Optional. |
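To show how several of these properties fit together, here is a sketch of a KafkaConnect resource that combines replicas, bootstrapServers, tls, authentication, and resources. The cluster name, Secret names, and key names are assumptions for illustration, not values from this reference:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect
spec:
  replicas: 3
  bootstrapServers: my-cluster-kafka-bootstrap:9093   # assumed bootstrap service and TLS port
  tls:
    trustedCertificates:
      - secretName: my-cluster-cluster-ca-cert        # assumed Secret holding the cluster CA certificate
        certificate: ca.crt
  authentication:
    type: tls
    certificateAndKey:
      secretName: my-connect-user                     # assumed Secret created for a KafkaUser
      certificate: user.crt
      key: user.key
  resources:
    requests:
      cpu: "1"
      memory: 2Gi
    limits:
      cpu: "2"
      memory: 2Gi
  # ...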