Chapter 71. KafkaConnectSpec schema reference

Used in: KafkaConnect

Full list of KafkaConnectSpec schema properties

Configures a Kafka Connect cluster.

71.1. config

Use the config properties to configure Kafka Connect options as keys.

The values can be one of the following JSON types:

  • String
  • Number
  • Boolean

Certain options have default values:

  • group.id with default value connect-cluster
  • offset.storage.topic with default value connect-cluster-offsets
  • config.storage.topic with default value connect-cluster-configs
  • status.storage.topic with default value connect-cluster-status
  • key.converter with default value org.apache.kafka.connect.json.JsonConverter
  • value.converter with default value org.apache.kafka.connect.json.JsonConverter

These options are automatically configured if they are not present in the KafkaConnect.spec.config properties.

Exceptions

You can specify and configure the options listed in the Apache Kafka documentation.

However, Streams for Apache Kafka takes care of configuring and managing options related to the following, which cannot be changed:

  • Kafka cluster bootstrap address
  • Security (encryption, authentication, and authorization)
  • Listener and REST interface configuration
  • Plugin path configuration

Properties with the following prefixes cannot be set:

  • bootstrap.servers
  • consumer.interceptor.classes
  • listeners
  • plugin.path
  • producer.interceptor.classes
  • rest.
  • sasl.
  • security.
  • ssl.

If the config property contains an option that cannot be changed, it is disregarded, and a warning message is logged to the Cluster Operator log file. All other supported options are forwarded to Kafka Connect, including the following exceptions to the options configured by Streams for Apache Kafka:

  • ssl.endpoint.identification.algorithm
  • ssl.cipher.suites
  • ssl.protocol
  • ssl.enabled.protocols

Example Kafka Connect configuration

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect
spec:
  # ...
  config:
    group.id: my-connect-cluster
    offset.storage.topic: my-connect-cluster-offsets
    config.storage.topic: my-connect-cluster-configs
    status.storage.topic: my-connect-cluster-status
    key.converter: org.apache.kafka.connect.json.JsonConverter
    value.converter: org.apache.kafka.connect.json.JsonConverter
    key.converter.schemas.enable: true
    value.converter.schemas.enable: true
    config.storage.replication.factor: 3
    offset.storage.replication.factor: 3
    status.storage.replication.factor: 3
  # ...

Important

The Cluster Operator does not validate keys or values in the config object provided. If an invalid configuration is provided, the Kafka Connect cluster might not start or might become unstable. In this case, fix the configuration so that the Cluster Operator can roll out the new configuration to all Kafka Connect nodes.

71.2. logging

Kafka Connect has its own configurable loggers:

  • connect.root.logger.level
  • log4j.logger.org.reflections

Further loggers are added depending on the Kafka Connect plugins running.

Use a curl request from any pod that can reach the Kafka Connect REST API (for example, a Kafka broker pod) to get a complete list of running Kafka Connect loggers:

curl -s http://<connect-cluster-name>-connect-api:8083/admin/loggers/
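
The admin/loggers endpoint also accepts PUT requests to change a logger level at runtime. The following request is a sketch, assuming the same <connect-cluster-name>-connect-api service; the target logger is only an illustration. Levels set this way are not persisted and may be reverted when the Cluster Operator reconciles the logging configuration:

curl -s -X PUT -H "Content-Type: application/json" \
  -d '{"level": "TRACE"}' \
  http://<connect-cluster-name>-connect-api:8083/admin/loggers/org.apache.kafka.connect.runtime.WorkerSourceTask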

Kafka Connect uses the Apache log4j logger implementation.

Use the logging property to configure loggers and logger levels.

You can set the log levels by specifying the logger and level directly (inline) or by using a custom (external) ConfigMap. If a ConfigMap is used, you set the logging.valueFrom.configMapKeyRef.name property to the name of the ConfigMap containing the external logging configuration. Inside the ConfigMap, the logging configuration is described using log4j.properties. Both the logging.valueFrom.configMapKeyRef.name and logging.valueFrom.configMapKeyRef.key properties are mandatory.

A ConfigMap using the exact logging configuration specified is created with the custom resource when the Cluster Operator is running, then recreated after each reconciliation. If you do not specify a custom ConfigMap, default logging settings are used.

If a specific logger value is not set, upper-level logger settings are inherited for that logger. For more information about log levels, see Apache logging services.

The following examples show inline and external logging configurations. The inline logging specifies the root logger level. You can also set log levels for specific classes or loggers by adding them to the loggers property.

Inline logging

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
spec:
  # ...
  logging:
    type: inline
    loggers:
      connect.root.logger.level: INFO
      log4j.logger.org.apache.kafka.connect.runtime.WorkerSourceTask: TRACE
      log4j.logger.org.apache.kafka.connect.runtime.WorkerSinkTask: DEBUG
  # ...

Note

Setting a log level to DEBUG may result in a large amount of log output and may have performance implications.

External logging

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
spec:
  # ...
  logging:
    type: external
    valueFrom:
      configMapKeyRef:
        name: custom-config-map
        key: connect-logging.log4j
  # ...
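
For external logging to take effect, the referenced ConfigMap must already exist and contain the log4j configuration under the referenced key. The following is a minimal sketch of such a ConfigMap, assuming the custom-config-map name and connect-logging.log4j key from the example above; the appender and logger settings are illustrative:

Example logging ConfigMap

apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-config-map
data:
  connect-logging.log4j: |
    log4j.rootLogger=INFO, CONSOLE
    log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
    log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
    log4j.appender.CONSOLE.layout.ConversionPattern=%d{ISO8601} %p %m (%c) [%t]%n
    log4j.logger.org.apache.kafka.connect.runtime.WorkerSinkTask=DEBUG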

Any available loggers that are not configured have their level set to OFF.

If Kafka Connect was deployed using the Cluster Operator, changes to Kafka Connect logging levels are applied dynamically.

If you use external logging, a rolling update is triggered when logging appenders are changed.

Garbage collector (GC)

Garbage collector logging can also be enabled (or disabled) using the jvmOptions property.
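
As a minimal sketch, GC logging is switched on with the gcLoggingEnabled option of the JvmOptions schema:

Example GC logging configuration

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
spec:
  # ...
  jvmOptions:
    gcLoggingEnabled: true
  # ...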

71.3. KafkaConnectSpec schema properties

Property | Property type | Description

version

string

The Kafka Connect version. Defaults to the latest version. Consult the user documentation to understand the process required to upgrade or downgrade the version.

replicas

integer

The number of pods in the Kafka Connect group. Defaults to 3.

image

string

The container image used for Kafka Connect pods. If no image name is explicitly specified, it is determined based on the spec.version configuration. The image names are specifically mapped to corresponding versions in the Cluster Operator configuration.

bootstrapServers

string

Bootstrap servers to connect to. This should be given as a comma-separated list of <hostname>:<port> pairs.

tls

ClientTls

TLS configuration.

authentication

KafkaClientAuthenticationTls, KafkaClientAuthenticationScramSha256, KafkaClientAuthenticationScramSha512, KafkaClientAuthenticationPlain, KafkaClientAuthenticationOAuth

Authentication configuration for Kafka Connect.

config

map

The Kafka Connect configuration. Properties with the following prefixes cannot be set: ssl., sasl., security., listeners, plugin.path, rest., bootstrap.servers, consumer.interceptor.classes, producer.interceptor.classes (with the exception of: ssl.endpoint.identification.algorithm, ssl.cipher.suites, ssl.protocol, ssl.enabled.protocols).

resources

ResourceRequirements

The maximum limits for CPU and memory resources and the requested initial resources.

livenessProbe

Probe

Pod liveness checking.

readinessProbe

Probe

Pod readiness checking.

jvmOptions

JvmOptions

JVM Options for pods.

jmxOptions

KafkaJmxOptions

JMX Options.

logging

InlineLogging, ExternalLogging

Logging configuration for Kafka Connect.

clientRackInitImage

string

The image of the init container used for initializing the client.rack.

rack

Rack

Configuration of the node label which will be used as the client.rack consumer configuration.

tracing

JaegerTracing, OpenTelemetryTracing

The configuration of tracing in Kafka Connect.

template

KafkaConnectTemplate

Template for Kafka Connect and Kafka Mirror Maker 2 resources. The template allows users to specify how the Pods, Service, and other services are generated.

externalConfiguration

ExternalConfiguration

Pass data from Secrets or ConfigMaps to the Kafka Connect pods and use them to configure connectors.

build

Build

Configures how the Connect container image should be built. Optional.

metricsConfig

JmxPrometheusExporterMetrics

Metrics configuration.
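
To illustrate how several of these properties combine, the following sketch configures replicas, a TLS connection, and SCRAM-SHA-512 authentication. It assumes a Kafka cluster named my-cluster managed by Streams for Apache Kafka (with its cluster CA certificate in the my-cluster-cluster-ca-cert Secret) and an existing KafkaUser named my-connect-user; adjust the names for your environment:

Example combined configuration

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect
spec:
  replicas: 3
  bootstrapServers: my-cluster-kafka-bootstrap:9093
  tls:
    trustedCertificates:
      - secretName: my-cluster-cluster-ca-cert
        certificate: ca.crt
  authentication:
    type: scram-sha-512
    username: my-connect-user
    passwordSecret:
      secretName: my-connect-user
      password: password
  config:
    group.id: my-connect-cluster
  # ...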
