
Chapter 125. KafkaBridgeSpec schema reference


Used in: KafkaBridge

Full list of KafkaBridgeSpec schema properties

Configures a Kafka Bridge cluster.

Configuration options relate to:

  • Kafka cluster bootstrap address
  • Security (encryption, authentication, and authorization)
  • Consumer configuration
  • Producer configuration
  • HTTP configuration

125.1. logging

Kafka Bridge has its own configurable loggers:

  • rootLogger.level
  • logger.<operation-id>

You can replace <operation-id> in the logger.<operation-id> logger to set log levels for specific operations:

  • createConsumer
  • deleteConsumer
  • subscribe
  • unsubscribe
  • poll
  • assign
  • commit
  • send
  • sendToPartition
  • seekToBeginning
  • seekToEnd
  • seek
  • healthy
  • ready
  • openapi

Each operation is defined according to the OpenAPI specification and has a corresponding API endpoint through which the bridge receives requests from HTTP clients. You can change the log level on each endpoint to create fine-grained logging information about the incoming and outgoing HTTP requests.

Each logger has to be configured by assigning it a name in the format http.openapi.operation.<operation-id>. For example, configuring the logging level for the send operation logger means defining the following:

logger.send.name = http.openapi.operation.send
logger.send.level = DEBUG

Kafka Bridge uses the Apache log4j2 logger implementation. Loggers are defined in the log4j2.properties file, which has the following default configuration for healthy and ready endpoints:

logger.healthy.name = http.openapi.operation.healthy
logger.healthy.level = WARN
logger.ready.name = http.openapi.operation.ready
logger.ready.level = WARN

The log level of all other operations is set to INFO by default.

Use the logging property to configure loggers and logger levels.

You can set the log levels by specifying the logger and level directly (inline) or by using a custom (external) ConfigMap. If a ConfigMap is used, you set the logging.valueFrom.configMapKeyRef.name property to the name of the ConfigMap containing the external logging configuration. The logging.valueFrom.configMapKeyRef.name and logging.valueFrom.configMapKeyRef.key properties are mandatory. Default logging is used if the name or key is not set. Inside the ConfigMap, the logging configuration is described using log4j2.properties. For more information about log levels, see Apache logging services.

Here we see examples of inline and external logging.

Inline logging

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaBridge
spec:
  # ...
  logging:
    type: inline
    loggers:
      rootLogger.level: INFO
      # enabling DEBUG just for send operation
      logger.send.name: "http.openapi.operation.send"
      logger.send.level: DEBUG
  # ...

External logging

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaBridge
spec:
  # ...
  logging:
    type: external
    valueFrom:
      configMapKeyRef:
        name: custom-config-map
        key: bridge-log4j2.properties
  # ...
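
With external logging, the ConfigMap key is expected to contain a complete log4j2 configuration. The following is a minimal sketch of what such a ConfigMap might look like for the name and key referenced above; the appender settings and logger levels are illustrative assumptions, not values mandated by the schema:

apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-config-map
data:
  bridge-log4j2.properties: |
    # Illustrative console appender
    appender.console.type = Console
    appender.console.name = STDOUT
    appender.console.layout.type = PatternLayout
    appender.console.layout.pattern = [%d] %p %m (%c)%n
    # Root logger plus one operation-specific logger
    rootLogger.level = INFO
    rootLogger.appenderRef.console.ref = STDOUT
    logger.send.name = http.openapi.operation.send
    logger.send.level = DEBUG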

Any available loggers that are not configured have their level set to OFF.

If the Kafka Bridge was deployed using the Cluster Operator, changes to Kafka Bridge logging levels are applied dynamically.

If you use external logging, a rolling update is triggered when logging appenders are changed.

Garbage collector (GC)

Garbage collector logging can also be enabled (or disabled) using the jvmOptions property.
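
For example, GC logging could be switched on with a spec fragment like the following (a minimal sketch using the gcLoggingEnabled option of the JvmOptions schema):

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaBridge
spec:
  # ...
  jvmOptions:
    gcLoggingEnabled: true
  # ...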

125.2. KafkaBridgeSpec schema properties

Each entry lists the property name, its type, and a description.

  • replicas (integer): The number of pods in the Deployment. Defaults to 1.
  • image (string): The container image used for Kafka Bridge pods. If no image name is explicitly specified, the image name corresponds to the image specified in the Cluster Operator configuration. If an image name is not defined in the Cluster Operator configuration, a default value is used.
  • bootstrapServers (string): A list of host:port pairs for establishing the initial connection to the Kafka cluster.
  • tls (ClientTls): TLS configuration for connecting Kafka Bridge to the cluster.
  • authentication (KafkaClientAuthenticationTls, KafkaClientAuthenticationScramSha256, KafkaClientAuthenticationScramSha512, KafkaClientAuthenticationPlain, or KafkaClientAuthenticationOAuth): Authentication configuration for connecting to the cluster.
  • http (KafkaBridgeHttpConfig): The HTTP related configuration.
  • adminClient (KafkaBridgeAdminClientSpec): Kafka AdminClient related configuration.
  • consumer (KafkaBridgeConsumerSpec): Kafka consumer related configuration.
  • producer (KafkaBridgeProducerSpec): Kafka producer related configuration.
  • resources (ResourceRequirements): CPU and memory resources to reserve.
  • jvmOptions (JvmOptions): JVM Options for pods (currently not supported).
  • logging (InlineLogging or ExternalLogging): Logging configuration for Kafka Bridge.
  • clientRackInitImage (string): The image of the init container used for initializing the client.rack.
  • rack (Rack): Configuration of the node label which will be used as the client.rack consumer configuration.
  • enableMetrics (boolean): Enable the metrics for the Kafka Bridge. Default is false.
  • livenessProbe (Probe): Pod liveness checking.
  • readinessProbe (Probe): Pod readiness checking.
  • template (KafkaBridgeTemplate): Template for Kafka Bridge resources. The template allows users to specify how a Deployment and Pod is generated.
  • tracing (JaegerTracing or OpenTelemetryTracing): The configuration of tracing in Kafka Bridge.
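
To show how several of these properties fit together, here is a minimal sketch of a KafkaBridge resource; the resource name, bootstrap address, port, and consumer/producer settings are illustrative placeholders rather than defaults defined by the schema:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaBridge
metadata:
  name: my-bridge                                      # illustrative name
spec:
  replicas: 1
  bootstrapServers: my-cluster-kafka-bootstrap:9092    # placeholder bootstrap address
  http:
    port: 8080
  consumer:
    config:
      auto.offset.reset: earliest
  producer:
    config:
      acks: all
  enableMetrics: false
  logging:
    type: inline
    loggers:
      rootLogger.level: INFO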
