Chapter 3. Troubleshooting Networking Issues
This chapter lists basic troubleshooting procedures connected with networking and Network Time Protocol (NTP).
3.1. Basic Networking Troubleshooting
Red Hat Ceph Storage depends heavily on a reliable network connection. Red Hat Ceph Storage nodes use the network to communicate with each other. Networking issues can cause many problems with Ceph OSDs, such as flapping OSDs, or OSDs being incorrectly reported as down. Networking issues can also cause the Ceph Monitor's clock skew errors. In addition, packet loss, high latency, or limited bandwidth can impact cluster performance and stability.
Procedure: Basic Networking Troubleshooting
Install the net-tools and telnet packages. They can help when troubleshooting network issues that can occur in a Ceph storage cluster:

Example

[root@mon ~]# yum install net-tools
[root@mon ~]# yum install telnet
Verify that the cluster_network and public_network parameters in the Ceph configuration file include the correct values:

Example

[root@mon ~]# cat /etc/ceph/ceph.conf | grep net
cluster_network = 192.168.1.0/24
public_network = 192.168.0.0/24
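The same check can be scripted when it has to be repeated on many nodes. The following sketch is illustrative, not part of the standard tooling; the helper name is an assumption, and the configuration path defaults to the one shown above:

```shell
# Illustrative helper: extract the cluster_network and public_network
# settings from a Ceph configuration file so they can be compared
# across nodes. The function name is an assumption for this sketch.
show_ceph_networks() {
    local conf="${1:-/etc/ceph/ceph.conf}"
    # Split each line on "=" (with optional surrounding spaces) and
    # print only the two network parameters.
    awk -F' *= *' '
        $1 ~ /^(cluster_network|public_network)$/ { print $1 "=" $2 }
    ' "$conf"
}
```

Run it on every node and diff the output; any node that prints different subnets is misconfigured.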
Verify that the network interfaces are up:

Example

[root@mon ~]# ip link list
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp22s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 40:f2:e9:b8:a0:48 brd ff:ff:ff:ff:ff:ff
Verify that the Ceph nodes are able to reach each other using their short host names. Verify this on each node in the storage cluster:

Syntax

ping SHORT_HOST_NAME

Example

[root@mon ~]# ping osd01
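The ping check can be run against every node from one place. The following sketch assumes a hypothetical inventory file that lists one short host name per line; the file name and helper name are illustrative:

```shell
# Illustrative helper: ping every node listed in a file (one short
# host name per line) and report which nodes do not answer.
check_ping() {
    local hostfile="$1" failed=0
    while read -r host; do
        [ -z "$host" ] && continue
        # One ping with a 2-second timeout is enough for a reachability check.
        if ping -c 1 -W 2 "$host" > /dev/null 2>&1; then
            echo "OK   $host"
        else
            echo "FAIL $host"
            failed=1
        fi
    done < "$hostfile"
    return "$failed"
}

# Usage (ceph-nodes.txt is an assumed inventory file):
# check_ping ceph-nodes.txt
```

The non-zero return code makes the helper usable in cron jobs or larger scripts.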
If you use a firewall, ensure that the Ceph nodes are able to reach each other on the appropriate ports. The firewall-cmd and telnet tools can validate the port status and whether the port is open, respectively:

Syntax

firewall-cmd --info-zone=ZONE
telnet IP_ADDRESS PORT

Example

[root@mon ~]# firewall-cmd --info-zone=public
[root@mon ~]# telnet 192.168.0.22 6789
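If telnet is not installed, bash itself can test TCP reachability through its /dev/tcp pseudo-device. The following sketch is an illustrative alternative, shown with the default Ceph Monitor port 6789; the helper name is an assumption:

```shell
# Illustrative helper: test whether a TCP port accepts connections,
# using bash's /dev/tcp pseudo-device instead of telnet.
# Returns 0 if the connection succeeds within 3 seconds.
port_open() {
    local host="$1" port="$2"
    timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null
}

# Usage (6789 is the default Ceph Monitor port):
# port_open 192.168.0.22 6789 && echo open || echo closed
```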
Verify that there are no errors on the interface counters. Verify that the network connectivity between nodes has the expected latency, and that there is no packet loss.

Using the ethtool command:

Syntax

ethtool -S INTERFACE

Example

[root@mon ~]# ethtool -S enp22s0f0
Using the ifconfig command:

Example

[root@mon ~]# ifconfig
Using the netstat command:

Example

[root@mon ~]# netstat -i
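The error counters that ethtool, ifconfig, and netstat report are also exposed under /sys/class/net. The following sketch sums the per-interface error and drop counters; it is an illustrative helper, not part of any package:

```shell
# Illustrative helper: report the combined RX/TX error and drop
# counters for every network interface, read from /sys/class/net.
# A non-zero count points at a NIC, cable, or driver problem.
iface_errors() {
    local dev stat total
    for dev in /sys/class/net/*; do
        total=0
        for stat in rx_errors tx_errors rx_dropped tx_dropped; do
            [ -r "$dev/statistics/$stat" ] && \
                total=$(( total + $(cat "$dev/statistics/$stat") ))
        done
        echo "$(basename "$dev") $total"
    done
}
```

Sample the counters twice a few minutes apart; a count that grows between samples matters more than a historical non-zero value.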
For performance issues, in addition to the latency checks, use the iperf3 tool to verify the network bandwidth between all nodes of the storage cluster. The iperf3 tool does a simple point-to-point network bandwidth test between a server and a client.

Install the iperf3 package on the Red Hat Ceph Storage nodes whose bandwidth you want to check:

Example

[root@mon ~]# yum install iperf3
On a Red Hat Ceph Storage node, start the iperf3 server:

Example

[root@mon ~]# iperf3 -s
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------

Note: The default port is 5201, but it can be set using the -p command argument.

On a different Red Hat Ceph Storage node, start the iperf3 client:

Example

[root@osd ~]# iperf3 -c mon

The client output shows the network bandwidth between the Red Hat Ceph Storage nodes, 1.1 Gbits/second in this example, along with the number of retransmissions (Retr) that occurred during the test.

Red Hat recommends that you validate the network bandwidth between all the nodes in the storage cluster.
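When many node pairs are tested, the bandwidth figure can be extracted from saved iperf3 reports for easy comparison. The following sketch parses the sender summary line and assumes the standard iperf3 TCP text report, which includes a Retr column; the helper name is illustrative:

```shell
# Illustrative helper: extract the sender-side bandwidth from a saved
# iperf3 TCP text report. In the summary line
#   [  4]   0.00-10.00  sec  1.25 GBytes  1.08 Gbits/sec    0   sender
# the bandwidth value and its unit are the 4th and 3rd fields from
# the end, before the Retr count and the "sender" label.
iperf3_bandwidth() {
    awk '/sender/ { print $(NF-3), $(NF-2) }' "$1"
}

# Usage:
# iperf3 -c osd01 > osd01.txt && iperf3_bandwidth osd01.txt
```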
Ensure that all nodes have the same network interconnect speed. Slower attached nodes might slow down the faster connected ones. Also, ensure that the inter-switch links can handle the aggregated bandwidth of the attached nodes:
Syntax
ethtool INTERFACE
Example

[root@mon ~]# ethtool enp22s0f0
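The negotiated link speed is also exposed in /sys/class/net/INTERFACE/speed, in Mbit/s. The following sketch checks that a set of collected speed values, one per line, are all identical; the helper name is illustrative:

```shell
# Illustrative helper: read link speeds from standard input (one
# value per line, in Mbit/s as found in /sys/class/net/IFACE/speed)
# and succeed only if they are all identical.
speeds_match() {
    [ "$(sort -u | wc -l)" -le 1 ]
}

# Usage: compare the speed of every physical interface on this node.
# for dev in /sys/class/net/en*; do cat "$dev/speed"; done | speeds_match
```

Collect the values from all nodes (for example over ssh) and pipe the combined list through the helper to confirm the whole cluster runs at one speed.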
See Also
- The Networking Guide for Red Hat Enterprise Linux 7.
- See the Verifying and configuring the MTU value section in the Red Hat Ceph Storage Configuration Guide.
- Knowledgebase articles and solutions related to troubleshooting networking issues on the Customer Portal.
3.2. Basic NTP Troubleshooting
This section includes basic NTP troubleshooting steps.
Procedure: Basic NTP Troubleshooting
Verify that the ntpd daemon is running on the Monitor hosts:

# systemctl status ntpd
If ntpd is not running, enable and start it:

# systemctl enable ntpd
# systemctl start ntpd
Ensure that ntpd is synchronizing the clocks correctly:

$ ntpq -p

- See the How to troubleshoot NTP issues solution on the Red Hat Customer Portal for advanced NTP troubleshooting steps.
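The offset column in the ntpq -p output shows each peer's clock offset in milliseconds. The following sketch checks a saved ntpq -p report against a threshold; the helper name is illustrative, and the 50 ms default mirrors the Ceph mon_clock_drift_allowed default of 0.05 seconds:

```shell
# Illustrative helper: check the offset column (milliseconds) of a
# saved `ntpq -p` report against a threshold. Succeeds only if every
# peer's absolute offset is within the limit.
ntp_offset_ok() {
    local report="$1" limit_ms="${2:-50}"
    awk -v limit="$limit_ms" '
        NR > 2 {                        # skip the two header lines
            off = $9 < 0 ? -$9 : $9     # offset is the 9th column
            if (off > limit) bad = 1
        }
        END { exit bad }
    ' "$report"
}

# Usage:
# ntpq -p > ntpq.txt && ntp_offset_ok ntpq.txt || echo "clock skew risk"
```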