Chapter 12. Configuring logging for Kafka components
Configure the logging levels of Kafka components directly in the configuration properties. You can also change logging levels dynamically at runtime for Kafka brokers, Kafka Connect, and MirrorMaker 2.
Increasing the log level detail, such as from INFO to DEBUG, can aid in troubleshooting a Kafka cluster. However, more verbose logs may also negatively impact performance and make it more difficult to diagnose issues.
Strimzi operators and Kafka components use log4j2 for logging. However, Kafka 3.9 and earlier versions rely on log4j1. For log4j1-based configuration examples, refer to the Streams for Apache Kafka 2.9 documentation.
12.1. Configuring Kafka logging properties
Kafka components use log4j2 for error logging. By default, logging configuration is read from the classpath or config directory using YAML configuration files:
- log4j2.yaml for Kafka
- connect-log4j2.yaml for Kafka Connect and MirrorMaker 2
If a logger is not explicitly configured, it inherits the Root logger level defined in its respective file. You can modify logging levels directly in these files or dynamically adjust them at runtime.
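As a sketch, a minimal log4j2.yaml with this structure sets the Root level and overrides it for one named logger. The appender name, pattern, and the kafka.request.logger override are illustrative, not taken from the shipped configuration files:

```yaml
Configuration:
  Appenders:
    Console:
      name: STDOUT
      PatternLayout:
        Pattern: "[%d] %p %m (%c)%n"
  Loggers:
    Root:
      level: INFO
      AppenderRef:
        ref: STDOUT
    Logger:
      # Illustrative override: this logger logs at DEBUG,
      # while all other loggers inherit INFO from Root.
      - name: kafka.request.logger
        level: DEBUG
```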
The KAFKA_LOG4J_OPTS environment variable allows you to specify the name and location of a custom logging configuration file. This variable is used by the startup script for each Kafka component.
Kafka nodes
export KAFKA_LOG4J_OPTS="-Dlog4j2.configurationFile=/my/path/to/log4j2.yaml"
./bin/kafka-server-start.sh ./config/server.properties
Kafka Connect
export KAFKA_LOG4J_OPTS="-Dlog4j2.configurationFile=/my/path/to/connect-log4j2.yaml"
./bin/connect-distributed.sh ./config/connect-distributed.properties
MirrorMaker 2
export KAFKA_LOG4J_OPTS="-Dlog4j2.configurationFile=/my/path/to/connect-log4j2.yaml"
./bin/connect-mirror-maker.sh ./config/connect-mirror-maker.properties
12.2. Configuring logging for Kafka tools
The tools-log4j2.yaml configuration file is specifically defined for logging related to Kafka tools, such as kafka-topics.sh, kafka-configs.sh, and kafka-consumer-groups.sh.
The file allows you to control the verbosity of logs and set filters on the logs returned, helping diagnose issues when running Kafka tools.
To increase verbosity when troubleshooting, modify tools-log4j2.yaml and adjust the logging level. For example, change the Root logger from WARN to DEBUG:
Changing the logging level
Loggers:
  Root:
    level: DEBUG
    # ...
After making this change, Kafka tools provide more detailed logs, which can help with troubleshooting.
To specify a custom logging configuration file, use the KAFKA_LOG4J_OPTS environment variable:
Custom log4j2 configuration for Kafka tools
export KAFKA_LOG4J_OPTS="-Dlog4j2.configurationFile=/my/path/to/tools-log4j2.yaml"
./bin/kafka-topics.sh --bootstrap-server localhost:9092 --list
12.3. Dynamically change logging levels for Kafka nodes
Kafka logging is provided by loggers on Kafka nodes. You can dynamically change logging levels at runtime without restarting the node.
You can also reset broker loggers dynamically to their default logging levels.
Prerequisites
- Streams for Apache Kafka is installed on each host, and the configuration files are available.
- Kafka is running.
Procedure
List all loggers for a Kafka node using the kafka-configs.sh tool:

./bin/kafka-configs.sh --bootstrap-server localhost:9092 --describe --entity-type broker-loggers --entity-name 0

Here, --entity-name 0 specifies Kafka node 0. The node ID corresponds to the broker or controller ID in the Kafka cluster. The command returns the logging level for each logger: TRACE, DEBUG, INFO, WARN, ERROR, or FATAL.

Example output:

# ...
kafka.controller.ControllerChannelManager=INFO sensitive=false synonyms={}
kafka.log.TimeIndex=INFO sensitive=false synonyms={}

Change the logging level for one or more loggers. Use the --alter and --add-config options and specify each logger and its level as a comma-separated list in double quotes:

./bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter --add-config "kafka.log.LogCleaner=TRACE" --entity-type broker-loggers --entity-name 0

Here, we update the logging level for kafka.log.LogCleaner to TRACE. If successful, the command returns:

Completed updating config for broker: 0.
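Because --add-config accepts a comma-separated list, several loggers can be changed in one call. The following dry-run sketch composes such a value and prints the resulting command rather than executing it; kafka.request.logger is an illustrative second logger, and you would remove the echo to run the command against a live cluster:

```shell
# Compose a comma-separated --add-config value covering two loggers,
# then print the full command (dry run; drop 'echo' to execute).
LOGGER_LEVELS="kafka.log.LogCleaner=TRACE,kafka.request.logger=DEBUG"
echo ./bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter \
  --add-config "$LOGGER_LEVELS" \
  --entity-type broker-loggers --entity-name 0
```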
Resetting a broker logger
Reset one or more loggers using kafka-configs.sh with --delete-config:
./bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter --delete-config "kafka.server.KafkaServer,kafka.log.LogCleaner" --entity-type broker-loggers --entity-name 0
12.4. Dynamically change logging levels for Kafka Connect and MirrorMaker 2
You can dynamically change logging levels for Kafka Connect workers or MirrorMaker 2 connectors at runtime without restarting.
Kafka Connect provides REST API endpoints (/admin/loggers) to view and modify log levels temporarily. These changes do not modify the static connect-log4j2.yaml configuration file. To make changes permanent, update connect-log4j2.yaml manually.
MirrorMaker 2 supports runtime log level changes only in standalone or distributed mode. Dedicated MirrorMaker 2 clusters do not expose a Kafka Connect REST API, so their log levels cannot be changed dynamically.
Kafka Connect’s admin/loggers API defaults to port 8083. You can change this or enable TLS authentication with admin.listeners.
Example listener configuration for the admin endpoint
admin.listeners=https://localhost:8083
admin.listeners.https.ssl.truststore.location=/path/to/truststore.jks
admin.listeners.https.ssl.truststore.password=123456
admin.listeners.https.ssl.keystore.location=/path/to/keystore.jks
admin.listeners.https.ssl.keystore.password=123456
If you do not want the admin endpoint to be available, you can disable it in the configuration by specifying an empty string.
Example listener configuration to disable the admin endpoint
admin.listeners=
Prerequisites
- Streams for Apache Kafka is installed on each host, and the configuration files are available.
- Kafka is running.
- Kafka Connect or MirrorMaker 2 is running.
Procedure
Check the current logging levels in the connect-log4j2.yaml file:

cat ./config/connect-log4j2.yaml

Example output:

Root:
  level: INFO
# ...

Use a curl command to check the logging levels from the admin/loggers endpoint of the Kafka Connect API:

curl -s http://localhost:8083/admin/loggers/ | jq

Example response:

{
  "Root": {
    "level": "INFO"
  }
}

jq prints the output in JSON format. The list shows the standard root logger, plus any specific loggers with modified logging levels. If TLS is enabled, use https:// instead of http://, and specify the port configured in admin.listeners.

You can also get the log level of a specific logger:

curl -s http://localhost:8083/admin/loggers/org.apache.kafka.connect.runtime.Worker | jq

Here, we retrieve the log level for org.apache.kafka.connect.runtime.Worker.

Example response:

{
  "level": "INFO"
}

Change a logger's level dynamically using a PUT request:

curl -X PUT -H 'Content-Type: application/json' -d '{"level": "TRACE"}' http://localhost:8083/admin/loggers/root

Changing the root logger affects all loggers that inherit from it.

You can also adjust the logging level for a specific component:

curl -X PUT -H 'Content-Type: application/json' -d '{"level": "DEBUG"}' http://localhost:8083/admin/loggers/org.apache.kafka.connect.runtime.Worker

Example response:

{
  "org.apache.kafka.connect.runtime.Worker": {
    "level": "DEBUG"
  }
}
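Once several loggers have been modified, the full /admin/loggers response can be filtered with jq, the same tool used above to pretty-print responses. A sketch against a captured sample response, with illustrative logger names and levels:

```shell
# Sample response in the shape returned by /admin/loggers;
# list only the loggers currently set to DEBUG.
RESPONSE='{"Root":{"level":"INFO"},"org.apache.kafka.connect.runtime.Worker":{"level":"DEBUG"}}'
echo "$RESPONSE" | jq -r 'to_entries[] | select(.value.level == "DEBUG") | .key'
# prints: org.apache.kafka.connect.runtime.Worker
```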