Chapter 1. Initial Troubleshooting
This chapter includes information on:
- How to start troubleshooting Ceph errors (Section 1.1, “Identifying Problems”)
- Most common ceph health error messages (Section 1.2, “Understanding the Output of the ceph health Command”)
- Most common Ceph log error messages (Section 1.3, “Understanding Ceph Logs”)
1.1. Identifying Problems
Certain problems can arise when using unsupported configurations. Ensure that your configuration is supported. See the Red Hat Ceph Storage: Supported configurations article for details.
To determine possible causes of the error you encounter with Red Hat Ceph Storage, answer the following question:
Do you know what Ceph component causes the problem?
- No. Follow Section 1.1.1, “Diagnosing the Health of a Ceph Storage Cluster”.
- Monitors. See Chapter 4, Troubleshooting Monitors.
- OSDs. See Chapter 5, Troubleshooting OSDs.
- Placement groups. See Chapter 6, Troubleshooting Placement Groups.
1.1.1. Diagnosing the Health of a Ceph Storage Cluster
This procedure lists the basic steps to diagnose the health of a Ceph Storage Cluster.
- Check the overall status of the cluster:
# ceph health detail
If the command returns HEALTH_WARN or HEALTH_ERR, see Section 1.2, “Understanding the Output of the ceph health Command” for details.
- Check the Ceph logs for any error messages listed in Section 1.3, “Understanding Ceph Logs”. The logs are located by default in the /var/log/ceph/ directory.
- If the logs do not include a sufficient amount of information, increase the debugging level and try to reproduce the action that failed. See Chapter 2, Configuring Logging for details. A command-line sketch of these steps follows this list.
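A minimal command-line sketch of this procedure, assuming the default cluster name ceph, access to a Monitor host, and that the OSD subsystem is the one under suspicion; adjust the subsystem and debug level for your case:
# Step 1: check the overall cluster status and the individual health checks.
ceph health detail
# Step 2: scan the main cluster log (Monitor hosts only) for recent errors.
grep -iE 'err|fail' /var/log/ceph/ceph.log | tail -n 20
# Step 3: if the logs lack detail, raise the debugging level for the suspected
# subsystem and reproduce the failing action. The OSD subsystem and the 0/20
# level shown here are only an example; see Chapter 2, Configuring Logging.
ceph tell osd.* injectargs '--debug-osd 0/20'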
1.2. Understanding the Output of the ceph health Command
The ceph health command returns information about the status of the Ceph Storage Cluster:
- HEALTH_OK indicates that the cluster is healthy.
- HEALTH_WARN indicates a warning. In some cases, the Ceph status returns to HEALTH_OK automatically, for example when Ceph finishes the rebalancing process. However, consider further troubleshooting if a cluster stays in the HEALTH_WARN state for a longer time.
- HEALTH_ERR indicates a more serious problem that requires your immediate attention.
Use the ceph health detail and ceph -s commands to get more detailed output.
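As an illustration only, and not part of the original procedure, the plain-text status can also be checked from a script. Parsing the text output avoids depending on the JSON layout, which differs between Ceph releases:
#!/bin/sh
# Exit non-zero when the cluster is not HEALTH_OK, for example from cron or a
# monitoring hook. 'ceph health' prints HEALTH_OK, HEALTH_WARN, or HEALTH_ERR
# followed by a short summary of the triggering checks.
if ! ceph health | grep -q '^HEALTH_OK'; then
    echo "Cluster is not healthy; detailed checks follow:" >&2
    ceph health detail >&2
    exit 1
fi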
The following tables list the most common HEALTH_ERR and HEALTH_WARN error messages related to Monitors, OSDs, and placement groups. The tables provide links to the corresponding sections that explain the errors and point to specific procedures to fix the problems.
Table 1.1. Error messages related to Monitors
Table 1.2. Error messages related to OSDs
Table 1.3. Error messages related to placement groups
1.3. Understanding Ceph Logs
By default, Ceph stores its logs in the /var/log/ceph/ directory.
The <cluster-name>.log is the main cluster log file that includes global cluster events. By default, this log is named ceph.log. Only the Monitor hosts include the main cluster log.
Each OSD and Monitor has its own log file, named <cluster-name>-osd.<number>.log and <cluster-name>-mon.<hostname>.log.
When you increase the debugging level for Ceph subsystems, Ceph generates new log files for those subsystems as well. For details about logging, see Chapter 2, Configuring Logging.
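For example, assuming the default cluster name ceph, you can list the per-daemon log files on a node and follow the main cluster log on a Monitor host:
# List the per-daemon log files on this node; each OSD and Monitor writes to
# its own file, for example ceph-osd.3.log or ceph-mon.host1.log.
ls -lh /var/log/ceph/
# Follow the main cluster log (Monitor hosts only) and show only warning and
# error events, which are tagged [WRN] and [ERR] in the cluster log.
tail -f /var/log/ceph/ceph.log | grep -E 'WRN|ERR'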
The following tables list the most common Ceph log error messages related to Monitors and OSDs. The tables provide links to corresponding sections that explain the errors and point to specific procedures to fix them.
Table 1.4. Common error messages in Ceph logs related to Monitors
Table 1.5. Common error messages in Ceph logs related to OSDs