Chapter 12. Configuring logging for Kafka components
Configure the logging levels of Kafka components directly in the configuration properties. You can also change logging levels dynamically at runtime for Kafka brokers, Kafka Connect, and MirrorMaker 2.
Increasing the log level detail, such as from INFO to DEBUG, can aid in troubleshooting a Kafka cluster. However, more verbose logs may also negatively impact performance and make it more difficult to diagnose issues.
Strimzi operators and Kafka components use log4j2 for logging. However, Kafka 3.9 and earlier versions rely on log4j1. For log4j1-based configuration examples, refer to the Streams for Apache Kafka 2.9 documentation.
12.1. Configuring Kafka logging properties
Kafka components use log4j2 for error logging. By default, logging configuration is read from the classpath or config directory using YAML configuration files:
- log4j2.yaml for Kafka
- connect-log4j2.yaml for Kafka Connect and MirrorMaker 2
If a logger is not explicitly configured, it inherits the Root logger level defined in its respective file. You can modify logging levels directly in these files or dynamically adjust them at runtime.
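As a sketch of how this inheritance works, a minimal log4j2.yaml might set the Root level and override it for one named logger. The structure below is illustrative, not a complete configuration: appender definitions are omitted, and the STDOUT reference assumes an appender of that name is defined elsewhere in the file (kafka.request.logger is a logger name used by Kafka brokers).

```
Configuration:
  Loggers:
    # Loggers without an explicit entry inherit INFO from Root
    Root:
      level: INFO
      AppenderRef:
        - ref: STDOUT
    Logger:
      # This logger is configured explicitly, so it does not inherit the Root level
      - name: kafka.request.logger
        level: WARN
```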
The KAFKA_LOG4J_OPTS environment variable allows you to specify the name and location of a custom logging configuration file. This variable is used by the startup script for each Kafka component.
Kafka nodes
export KAFKA_LOG4J_OPTS="-Dlog4j2.configurationFile=/my/path/to/log4j2.yaml"
./bin/kafka-server-start.sh ./config/server.properties
Kafka Connect
export KAFKA_LOG4J_OPTS="-Dlog4j2.configurationFile=/my/path/to/connect-log4j2.yaml"
./bin/connect-distributed.sh ./config/connect-distributed.properties
MirrorMaker 2
export KAFKA_LOG4J_OPTS="-Dlog4j2.configurationFile=/my/path/to/connect-log4j2.yaml"
./bin/connect-mirror-maker.sh ./config/connect-mirror-maker.properties
12.2. Configuring logging for Kafka tools
The tools-log4j2.yaml configuration file is specifically defined for logging related to Kafka tools, such as kafka-topics.sh, kafka-configs.sh, and kafka-consumer-groups.sh.
The file allows you to control the verbosity of logs and set filters on the logs returned, helping diagnose issues when running Kafka tools.
To increase verbosity when troubleshooting, modify tools-log4j2.yaml and adjust the logging level. For example, change the Root logger from WARN to DEBUG:
Changing the logging level
Loggers:
  Root:
    level: DEBUG
# ...
After making this change, Kafka tools provide more detailed logs, which can help with troubleshooting.
To specify a custom logging configuration file, use the KAFKA_LOG4J_OPTS environment variable:
Custom log4j2 configuration for Kafka tools
export KAFKA_LOG4J_OPTS="-Dlog4j2.configurationFile=/my/path/to/tools-log4j2.yaml"
./bin/kafka-topics.sh --bootstrap-server localhost:9092 --list
12.3. Dynamically change logging levels for Kafka nodes
Kafka logging is provided by loggers on Kafka nodes. You can dynamically change logging levels at runtime without restarting the node.
You can also reset broker loggers dynamically to their default logging levels.
Prerequisites
- Streams for Apache Kafka is installed on each host, and the configuration files are available.
- Kafka is running.
Procedure
List all loggers for a Kafka node using the kafka-configs.sh tool:

./bin/kafka-configs.sh --bootstrap-server localhost:9092 --describe --entity-type broker-loggers --entity-name 0

Here, --entity-name 0 specifies Kafka node 0. The node ID corresponds to the broker or controller ID in the Kafka cluster.

This returns the logging level for each logger: TRACE, DEBUG, INFO, WARN, ERROR, or FATAL.

Example output:

# ...
kafka.controller.ControllerChannelManager=INFO sensitive=false synonyms={}
kafka.log.TimeIndex=INFO sensitive=false synonyms={}

Change the logging level for one or more loggers. Use the --alter and --add-config options and specify each logger and its level as a comma-separated list in double quotes:

./bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter --add-config "kafka.log.LogCleaner=TRACE" --entity-type broker-loggers --entity-name 0

Here, we update the logging level for kafka.log.LogCleaner to TRACE.

If successful, the command returns:

Completed updating config for broker: 0.
Resetting a broker logger
Reset one or more loggers using kafka-configs.sh with --delete-config:
./bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter --delete-config "kafka.server.KafkaServer,kafka.log.LogCleaner" --entity-type broker-loggers --entity-name 0
12.4. Dynamically change logging levels for Kafka Connect and MirrorMaker 2
You can dynamically change logging levels for Kafka Connect workers or MirrorMaker 2 connectors at runtime without restarting.
Kafka Connect provides REST API endpoints (/admin/loggers) to view and modify log levels temporarily. These changes do not modify the static connect-log4j2.yaml configuration file. To make changes permanent, update connect-log4j2.yaml manually.
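As a sketch of such a permanent change, the corresponding entry in connect-log4j2.yaml might look like the fragment below. This is illustrative only: appender definitions are omitted, and the logger name is one example of a Kafka Connect runtime logger.

```
Configuration:
  Loggers:
    Root:
      level: INFO
    Logger:
      # Permanently raise verbosity for the Connect worker logger
      - name: org.apache.kafka.connect.runtime.Worker
        level: DEBUG
```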
MirrorMaker 2 supports runtime log level changes only in standalone or distributed mode. Dedicated MirrorMaker 2 clusters do not expose a Kafka Connect REST API, so their log levels cannot be changed dynamically.
Kafka Connect’s admin/loggers API defaults to port 8083. You can change this or enable TLS authentication with admin.listeners.
Example listener configuration for the admin endpoint
admin.listeners=https://localhost:8083
admin.listeners.https.ssl.truststore.location=/path/to/truststore.jks
admin.listeners.https.ssl.truststore.password=123456
admin.listeners.https.ssl.keystore.location=/path/to/keystore.jks
admin.listeners.https.ssl.keystore.password=123456
If you do not want the admin endpoint to be available, you can disable it in the configuration by specifying an empty string.
Example listener configuration to disable the admin endpoint
admin.listeners=
Prerequisites
- Streams for Apache Kafka is installed on each host, and the configuration files are available.
- Kafka is running.
- Kafka Connect or MirrorMaker 2 is running.
Procedure
Check the current logging levels in the connect-log4j2.yaml file:

cat ./config/connect-log4j2.yaml

Example output:

Root:
  level: INFO
# ...

Use a curl command to check the logging levels from the admin/loggers endpoint of the Kafka Connect API:

curl -s http://localhost:8083/admin/loggers/ | jq

Example response:

{
  "Root": {
    "level": "INFO"
  }
}

jq prints the output in JSON format. The list shows the standard Root logger, plus any specific loggers with modified logging levels.

If TLS is enabled, use https:// instead of http://, and specify the port configured in admin.listeners.

You can also get the log level of a specific logger:

curl -s http://localhost:8083/admin/loggers/org.apache.kafka.connect.runtime.Worker | jq

Here, we retrieve the log level for org.apache.kafka.connect.runtime.Worker.

Example response:

{
  "level": "INFO"
}

Change a logger's level dynamically using a PUT request:

curl -X PUT -H 'Content-Type: application/json' -d '{"level": "TRACE"}' http://localhost:8083/admin/loggers/root

Example response:

{
  "Root": {
    "level": "TRACE"
  }
}
# ...

Changing the Root logger affects all loggers that inherit from it.

You can also adjust the logging level for a specific component:

curl -X PUT -H 'Content-Type: application/json' -d '{"level": "DEBUG"}' http://localhost:8083/admin/loggers/org.apache.kafka.connect.runtime.Worker

Example response:

{
  "org.apache.kafka.connect.runtime.Worker": {
    "level": "DEBUG"
  }
}
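The same curl calls can be scripted. The sketch below, using only the Python standard library, builds and sends the PUT request shown above; the host, port, and logger name are assumptions for a default local setup with the admin endpoint on port 8083.

```python
# Minimal sketch for scripting the Kafka Connect admin/loggers endpoint.
# The endpoint path and payload shape match the curl examples above;
# host and port are assumptions for a default local worker.
import json
import urllib.request

ADMIN_BASE = "http://localhost:8083/admin/loggers"  # default admin listener

def build_set_level_request(logger: str, level: str) -> urllib.request.Request:
    """Build (but do not send) a PUT request that sets a logger's level."""
    body = json.dumps({"level": level}).encode("utf-8")
    return urllib.request.Request(
        url=f"{ADMIN_BASE}/{logger}",
        data=body,
        method="PUT",
        headers={"Content-Type": "application/json"},
    )

def set_level(logger: str, level: str) -> dict:
    """Send the PUT request and return the parsed JSON response."""
    with urllib.request.urlopen(build_set_level_request(logger, level)) as resp:
        return json.load(resp)

# Example (requires a running Kafka Connect worker with the admin endpoint enabled):
#   set_level("org.apache.kafka.connect.runtime.Worker", "DEBUG")
```

As with the curl examples, these changes last only until the worker restarts; use connect-log4j2.yaml for permanent changes.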