Chapter 12. Configuring logging for Kafka components


Configure the logging levels of Kafka components directly in the configuration properties. You can also change logging levels dynamically at runtime for Kafka brokers, Kafka Connect, and MirrorMaker 2.

Increasing the log level detail, such as from INFO to DEBUG, can aid in troubleshooting a Kafka cluster. However, more verbose logs may also negatively impact performance and make it more difficult to diagnose issues.

Warning

Strimzi operators and Kafka components use log4j2 for logging. However, Kafka 3.9 and earlier versions rely on log4j1. For log4j1-based configuration examples, refer to the Streams for Apache Kafka 2.9 documentation.

12.1. Configuring Kafka logging properties

Kafka components use log4j2 for error logging. By default, logging configuration is read from the classpath or config directory using YAML configuration files:

  • log4j2.yaml for Kafka
  • connect-log4j2.yaml for Kafka Connect and MirrorMaker 2

If a logger is not explicitly configured, it inherits the Root logger level defined in its respective file. You can modify logging levels directly in these files or dynamically adjust them at runtime.
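For illustration, a minimal log4j2.yaml sketch showing how an explicitly configured logger overrides the Root level (the appender name, pattern, and logger chosen here are assumptions, following the standard log4j2 YAML schema):

```yaml
Configuration:
  Appenders:
    Console:
      name: STDOUT
      PatternLayout:
        Pattern: "[%d] %p %m (%c)%n"
  Loggers:
    Root:
      level: INFO          # inherited by any logger not configured below
      AppenderRef:
        - ref: STDOUT
    Logger:
      - name: kafka.request.logger   # explicitly configured: overrides Root
        level: DEBUG
```

Any logger not listed under Logger falls back to the Root level, INFO in this sketch.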

The KAFKA_LOG4J_OPTS environment variable allows you to specify the name and location of a custom logging configuration file. This variable is used by the startup script for each Kafka component.

Kafka nodes

export KAFKA_LOG4J_OPTS="-Dlog4j2.configurationFile=/my/path/to/log4j2.yaml"
./bin/kafka-server-start.sh ./config/server.properties

Kafka Connect

export KAFKA_LOG4J_OPTS="-Dlog4j2.configurationFile=/my/path/to/connect-log4j2.yaml"
./bin/connect-distributed.sh ./config/connect-distributed.properties

MirrorMaker 2

export KAFKA_LOG4J_OPTS="-Dlog4j2.configurationFile=/my/path/to/connect-log4j2.yaml"
./bin/connect-mirror-maker.sh ./config/connect-mirror-maker.properties

12.2. Configuring logging for Kafka tools

The tools-log4j2.yaml configuration file is specifically defined for logging related to Kafka tools, such as kafka-topics.sh, kafka-configs.sh, and kafka-consumer-groups.sh.

The file allows you to control the verbosity of logs and set filters on the logs returned, helping diagnose issues when running Kafka tools.

To increase verbosity when troubleshooting, modify tools-log4j2.yaml and adjust the logging level. For example, change the Root logger from WARN to DEBUG:

Changing the logging level

Loggers:
  Root:
    level: DEBUG
# ...

After making this change, Kafka tools provide more detailed logs, which can help with troubleshooting.

To specify a custom logging configuration file, use the KAFKA_LOG4J_OPTS environment variable:

Custom log4j2 configuration for Kafka tools

export KAFKA_LOG4J_OPTS="-Dlog4j2.configurationFile=/my/path/to/tools-log4j2.yaml"
./bin/kafka-topics.sh --bootstrap-server localhost:9092 --list

12.3. Changing logging levels dynamically for Kafka broker loggers

Kafka logging is provided by loggers on Kafka nodes. You can dynamically change logging levels at runtime without restarting the node.

You can also reset broker loggers dynamically to their default logging levels.

Procedure

  1. List all loggers for a Kafka node using the kafka-configs.sh tool:

    ./bin/kafka-configs.sh --bootstrap-server localhost:9092 --describe --entity-type broker-loggers --entity-name 0

    Here, --entity-name 0 specifies Kafka node 0. The node ID corresponds to the broker or controller ID in the Kafka cluster.

    This returns the logging level for each logger: TRACE, DEBUG, INFO, WARN, ERROR, or FATAL.

    Example output:

    #...
    kafka.controller.ControllerChannelManager=INFO sensitive=false synonyms={}
    kafka.log.TimeIndex=INFO sensitive=false synonyms={}
  2. Change the logging level for one or more loggers. Use the --alter and --add-config options and specify each logger and its level as a comma-separated list in double quotes:

    ./bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter --add-config "kafka.log.LogCleaner=TRACE" --entity-type broker-loggers --entity-name 0

    Here, we update the logging level for kafka.log.LogCleaner to TRACE.

    If successful, the command returns:

    Completed updating config for broker: 0.
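Because --add-config takes a comma-separated list, multiple broker loggers can be updated in a single call; a sketch, assuming the cluster from the steps above is reachable at localhost:9092 and using logger names from the example output:

```shell
./bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter \
  --add-config "kafka.log.LogCleaner=TRACE,kafka.controller.ControllerChannelManager=DEBUG" \
  --entity-type broker-loggers --entity-name 0
```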

Resetting a broker logger

Reset one or more loggers using kafka-configs.sh with --delete-config:

./bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter --delete-config "kafka.server.KafkaServer,kafka.log.LogCleaner" --entity-type broker-loggers --entity-name 0

12.4. Changing logging levels dynamically for Kafka Connect and MirrorMaker 2

You can dynamically change logging levels for Kafka Connect workers or MirrorMaker 2 connectors at runtime without restarting.

Kafka Connect provides REST API endpoints (/admin/loggers) to view and modify log levels temporarily. These changes do not modify the static connect-log4j2.yaml configuration file. To make changes permanent, update connect-log4j2.yaml manually.

Note

MirrorMaker 2 supports runtime log level changes only in standalone or distributed mode. Dedicated MirrorMaker 2 clusters do not expose a Kafka Connect REST API, so their log levels cannot be changed dynamically.

Kafka Connect’s admin/loggers API defaults to port 8083. You can change this or enable TLS authentication with admin.listeners.

Example listener configuration for the admin endpoint

admin.listeners=https://localhost:8083
admin.listeners.https.ssl.truststore.location=/path/to/truststore.jks
admin.listeners.https.ssl.truststore.password=123456
admin.listeners.https.ssl.keystore.location=/path/to/keystore.jks
admin.listeners.https.ssl.keystore.password=123456

If you do not want the admin endpoint to be available, you can disable it in the configuration by specifying an empty string.

Example listener configuration to disable the admin endpoint

admin.listeners=

Procedure

  1. Check the current logging levels in the connect-log4j2.yaml file:

    cat ./config/connect-log4j2.yaml

    Example output:

    Root:
      level: INFO
    # ...

    Use a curl command to check the logging levels from the admin/loggers endpoint of the Kafka Connect API:

    curl -s http://localhost:8083/admin/loggers/ | jq

    Example response:

    {
      "Root": {
        "level": "INFO"
      }
    }

    jq prints the output in JSON format. The list shows the standard root level logger, plus any specific loggers with modified logging levels.

    If TLS is enabled, use https:// instead of http://, and specify the port configured in admin.listeners.

    You can also get the log level of a specific logger:

    curl -s http://localhost:8083/admin/loggers/org.apache.kafka.connect.runtime.Worker | jq

    Here, we retrieve the log level for org.apache.kafka.connect.runtime.Worker.

    Example response:

    {
      "level": "INFO"
    }
  2. Change a logger’s level dynamically using a PUT request:

    curl -X PUT -H 'Content-Type: application/json' -d '{"level": "TRACE"}' http://localhost:8083/admin/loggers/root

    Example response:

    {
      "Root": {
        "level": "TRACE"
      }
    }
    # ...

    Changing the Root logger affects all loggers that inherit from it.

    You can also adjust the logging level for a specific component:

    curl -X PUT -H 'Content-Type: application/json' -d '{"level": "DEBUG"}' http://localhost:8083/admin/loggers/org.apache.kafka.connect.runtime.Worker

    Example response:

    {
      "org.apache.kafka.connect.runtime.Worker": {
        "level": "DEBUG"
      }
    }
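If the admin endpoint is secured with TLS as in the admin.listeners example earlier, the same PUT request is sent over HTTPS; a sketch, where the CA certificate path is an assumption:

```shell
curl -s --cacert /path/to/ca.crt -X PUT -H 'Content-Type: application/json' \
  -d '{"level": "INFO"}' https://localhost:8083/admin/loggers/root
```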