Chapter 3. Troubleshooting Networking Issues
This chapter lists basic troubleshooting procedures related to networking and the Network Time Protocol (NTP).
3.1. Basic Networking Troubleshooting
Red Hat Ceph Storage depends heavily on a reliable network connection. Red Hat Ceph Storage nodes use the network for communicating with each other. Networking issues can cause many problems with Ceph OSDs, such as OSDs flapping or being incorrectly reported as down. Networking issues can also cause clock skew errors on the Ceph Monitors. In addition, packet loss, high latency, or limited bandwidth can impact the cluster performance and stability.
Procedure: Basic Networking Troubleshooting
Installing the net-tools and telnet packages can help when troubleshooting network issues that can occur in a Ceph storage cluster:

Example

[root@mon ~]# yum install net-tools
[root@mon ~]# yum install telnet
Verify that the cluster_network and public_network parameters in the Ceph configuration file include the correct values:

Example

[root@mon ~]# cat /etc/ceph/ceph.conf | grep net
cluster_network = 192.168.1.0/24
public_network = 192.168.0.0/24
Verify that the network interfaces are up:

Example

[root@mon ~]# ip link list
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp22s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 40:f2:e9:b8:a0:48 brd ff:ff:ff:ff:ff:ff
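On nodes with many interfaces, the one-line-per-interface output of `ip -o link` can be filtered so that only interfaces that are not up are reported; a minimal sketch:

```shell
# Sketch: filter `ip -o link` (one line per interface) for interfaces
# that are not in state UP, skipping the loopback device.
down_interfaces() {
    awk -F': ' '$2 != "lo" && $0 !~ /state UP/ { print $2 }'
}
```

Usage: `ip -o link | down_interfaces` — any interface name printed is not up.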
Verify that the Ceph nodes are able to reach each other using their short host names. Verify this on each node in the storage cluster:

Syntax

ping SHORT_HOST_NAME

Example

[root@mon ~]# ping osd01
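The ping check can also be scripted across the whole cluster; a sketch, where the host names you pass in are your own (the ones shown in any usage are hypothetical):

```shell
# Sketch: ping each node once by its short host name and report failures.
check_reachable() {
    local failed=0
    for host in "$@"; do
        if ping -c 1 -W 2 "$host" > /dev/null 2>&1; then
            echo "$host: ok"
        else
            echo "$host: FAILED"
            failed=1
        fi
    done
    return "$failed"
}
```

Usage, with hypothetical node names: `check_reachable mon01 osd01 osd02`.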
If you use a firewall, ensure that Ceph nodes are able to reach each other on their appropriate ports. The firewall-cmd tool can validate the port status, and the telnet tool can validate whether the port is open:

Syntax

firewall-cmd --info-zone=ZONE
telnet IP_ADDRESS PORT
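If telnet is not installed, a port check can also be scripted with bash's built-in /dev/tcp device; a sketch, assuming the default Ceph ports (Monitors listen on 3300 and 6789, OSDs on the 6800-7300 range):

```shell
# Sketch: test whether a TCP port is reachable using bash's built-in
# /dev/tcp device, as a scriptable alternative to telnet.
port_open() {
    local host=$1 port=$2
    if timeout 2 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
        echo "$host:$port open"
    else
        echo "$host:$port closed"
    fi
}
```

Usage, with a hypothetical node name: `port_open osd01 6800`.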
Verify that there are no errors on the interface counters, that the network connectivity between nodes has the expected latency, and that there is no packet loss.

Using the ethtool command:

Syntax

ethtool -S INTERFACE
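Because `ethtool -S` prints every counter, a small filter can surface just the nonzero error counters; a sketch, assuming the counter names contain "err" or "drop", as they do for most NIC drivers:

```shell
# Sketch: print only the nonzero error/drop counters from `ethtool -S`
# output (assumption: counter names contain "err" or "drop").
check_nic_errors() {
    awk -F': ' '/err|drop/ { gsub(/^[ \t]+/, "", $1); if ($2 + 0 > 0) print $1 " = " $2 }'
}
```

Usage: `ethtool -S enp22s0f0 | check_nic_errors` — no output means no errors were counted.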
Using the ifconfig command to review the RX/TX error and drop counters.

Using the netstat command to review the network statistics.
For performance issues, in addition to the latency checks, use the iperf3 tool to verify the network bandwidth between all nodes of the storage cluster. The iperf3 tool does a simple point-to-point network bandwidth test between a server and a client.

Install the iperf3 package on the Red Hat Ceph Storage nodes you want to check the bandwidth of:

Example

[root@mon ~]# yum install iperf3

On a Red Hat Ceph Storage node, start the iperf3 server:

Example

[root@mon ~]# iperf3 -s
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------

Note: The default port is 5201, but it can be set using the -p command argument.
On a different Red Hat Ceph Storage node, start the iperf3 client. The client output reports the network bandwidth measured between the Red Hat Ceph Storage nodes, for example 1.1 Gbits/second, along with the number of retransmissions (Retr) that occurred during the test.

Red Hat recommends you validate the network bandwidth between all the nodes in the storage cluster.
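iperf3 can also emit JSON with the -J option, which makes the bandwidth check scriptable; a rough sketch that pulls the last "bits_per_second" value (the receiver-side summary) out of the output — this line-oriented parsing is approximate, not a full JSON parser:

```shell
# Sketch: extract the receiver-side bandwidth from `iperf3 -J` (JSON)
# output. The last "bits_per_second" value is the sum_received summary.
report_bandwidth() {
    awk -F'[:,]' '/"bits_per_second"/ { bps = $2 } END { printf "%.2f Gbit/s\n", bps / 1e9 }'
}
```

Usage, with a hypothetical node name: `iperf3 -c osd01 -J | report_bandwidth`.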
Ensure that all nodes have the same network interconnect speed. Slower attached nodes might slow down the faster connected ones. Also, ensure that the inter-switch links can handle the aggregated bandwidth of the attached nodes:

Syntax

ethtool INTERFACE
See Also
- The Networking Guide for Red Hat Enterprise Linux 7.
- See the Verifying and configuring the MTU value section in the Red Hat Ceph Storage Configuration Guide.
- Knowledgebase articles and solutions related to troubleshooting networking issues on the Customer Portal.
3.2. Basic NTP Troubleshooting
This section includes basic NTP troubleshooting steps.
Procedure: Basic NTP Troubleshooting
Verify that the ntpd daemon is running on the Monitor hosts:

# systemctl status ntpd

If ntpd is not running, enable and start it:

# systemctl enable ntpd
# systemctl start ntpd

Ensure that ntpd is synchronizing the clocks correctly:

$ ntpq -p

- See the How to troubleshoot NTP issues solution on the Red Hat Customer Portal for advanced NTP troubleshooting steps.
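The offset column of `ntpq -p` (reported in milliseconds) can be compared against Ceph's default 0.05 second clock-skew allowance; a sketch:

```shell
# Sketch: flag peers in `ntpq -p` output whose offset (column 9, in
# milliseconds) exceeds Ceph's default 0.05 s clock-skew allowance.
check_offsets() {
    awk 'NR > 2 { off = $9; if (off < 0) off = -off
                  if (off > 50) print $1 " offset " $9 " ms exceeds 50 ms" }'
}
```

Usage: `ntpq -p | check_offsets` — no output means all peer offsets are within the allowance.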