Chapter 3. Troubleshooting networking issues
This chapter lists basic troubleshooting procedures for networking and for the chrony Network Time Protocol (NTP) service.
Prerequisites
- A running Red Hat Ceph Storage cluster.
3.1. Basic networking troubleshooting
Red Hat Ceph Storage depends heavily on a reliable network connection. Red Hat Ceph Storage nodes use the network to communicate with each other. Networking issues can cause many problems with Ceph OSDs, such as OSDs flapping or being incorrectly reported as down. Networking issues can also cause clock skew errors on the Ceph Monitors. In addition, packet loss, high latency, or limited bandwidth can impact the cluster performance and stability.
Prerequisites
- Root-level access to the node.
Procedure
Installing the net-tools and telnet packages can help when troubleshooting network issues that can occur in a Ceph storage cluster:
Example
[root@host01 ~]# dnf install net-tools
[root@host01 ~]# dnf install telnet
Log into the cephadm shell and verify that the public_network parameters in the Ceph configuration file include the correct values:
Example
[root@host01 ~]# cephadm shell
[ceph: root@host01 /]# cat /etc/ceph/ceph.conf
Exit the shell and verify that the network interfaces are up:
Example
[root@host01 ~]# ip link list
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 00:1a:4a:00:06:72 brd ff:ff:ff:ff:ff:ff
Verify that the Ceph nodes are able to reach each other using their short host names. Verify this on each node in the storage cluster:
Syntax
ping SHORT_HOST_NAME
Example
[root@host01 ~]# ping host02
If you use a firewall, ensure that Ceph nodes are able to reach each other on their appropriate ports. The firewall-cmd and telnet tools can validate the port status and whether the port is open, respectively:
Syntax
firewall-cmd --info-zone=ZONE
telnet IP_ADDRESS PORT
Example
[root@host01 ~]# firewall-cmd --info-zone=public
[root@host01 ~]# telnet 192.168.0.22 3300
Verify that there are no errors on the interface counters. Verify that the network connectivity between nodes has the expected latency, and that there is no packet loss.
Using the ethtool command:
Syntax
ethtool -S INTERFACE
Example
[root@host01 ~]# ethtool -S ens3 | grep errors
Using the ifconfig command:
Example
[root@host01 ~]# ifconfig ens3
Using the netstat command:
Example
[root@host01 ~]# netstat -ai
Kernel Interface table
Iface   MTU     RX-OK      RX-ERR  RX-DRP  RX-OVR  TX-OK      TX-ERR  TX-DRP  TX-OVR  Flg
ens3    1500    311847720  0       364903  0       114341918  0       0       0       BMRU
lo      65536   19577001   0       0       0       19577001   0       0       0       LRU
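The same error and drop statistics that ethtool, ifconfig, and netstat report can also be read directly from sysfs on Linux. The following is a minimal sketch; the interface name is an assumption, so substitute your own NIC name:

```shell
# Read error and drop counters for an interface from sysfs (Linux).
# IFACE=lo is an assumed example; replace it with your NIC name, such as ens3.
IFACE=lo
for counter in rx_errors rx_dropped tx_errors tx_dropped; do
    value=$(cat "/sys/class/net/$IFACE/statistics/$counter")
    echo "$IFACE $counter: $value"
done
```

Non-zero and growing error or drop counters point at the interface, cable, or switch port as the problem, rather than at Ceph itself.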
For performance issues, in addition to the latency checks, use the iperf3 tool to verify the network bandwidth between all nodes of the storage cluster. The iperf3 tool does a simple point-to-point network bandwidth test between a server and a client.
Install the iperf3 package on the Red Hat Ceph Storage nodes whose bandwidth you want to check:
Example
[root@host01 ~]# dnf install iperf3
On a Red Hat Ceph Storage node, start the iperf3 server:
Example
[root@host01 ~]# iperf3 -s
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Note
The default port is 5201, but can be set using the -p command argument.
On a different Red Hat Ceph Storage node, start the iperf3 client:
Example
[root@host02 ~]# iperf3 -c host01
In this test, the output showed a network bandwidth of 1.1 Gbits/second between the Red Hat Ceph Storage nodes, along with no retransmissions (Retr) during the test.
Red Hat recommends you validate the network bandwidth between all the nodes in the storage cluster.
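Validating bandwidth between all nodes can be scripted. The following sketch loops over a set of peer nodes and runs a short iperf3 client test against each one; the host names are placeholders, and it assumes an iperf3 server is already listening on each peer. With DRY_RUN=1 (the default here) the commands are only printed:

```shell
# Run a short iperf3 bandwidth test against each peer node.
# NODES lists assumed host names; replace them with your cluster nodes.
# An iperf3 server (iperf3 -s) must already be running on each peer.
NODES="host02 host03"
DRY_RUN=${DRY_RUN:-1}
for node in $NODES; do
    cmd="iperf3 -c $node -t 5"
    if [ "$DRY_RUN" = "1" ]; then
        echo "would run: $cmd"
    else
        $cmd || echo "bandwidth test to $node failed"
    fi
done
```

Run it with DRY_RUN=0 on each node in turn so that every node pair is covered in both directions.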
Ensure that all nodes have the same network interconnect speed. Slower attached nodes might slow down the faster connected ones. Also, ensure that the inter-switch links can handle the aggregated bandwidth of the attached nodes:
Syntax
ethtool INTERFACE
Example
[root@host01 ~]# ethtool ens3
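To compare interconnect speeds across nodes, the Speed field can be extracted from the ethtool output and collected in one place. The sketch below parses a sample line that imitates ethtool's Speed field; the value is illustrative, and on a live node you would feed it `ethtool ens3 | grep Speed` instead:

```shell
# Extract the "Speed" field from ethtool-style output.
# The sample line stands in for: ethtool ens3 | grep Speed
sample="	Speed: 10000Mb/s"
speed=$(printf '%s\n' "$sample" | awk -F': ' '/Speed/ {print $2}')
echo "link speed: $speed"
```

Running this on every node and comparing the results quickly reveals a node that negotiated a lower link speed than the rest.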
3.2. Basic chrony NTP troubleshooting
This section includes basic chrony NTP troubleshooting steps.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Root-level access to the Ceph Monitor node.
Procedure
Verify that the chronyd daemon is running on the Ceph Monitor hosts:
Example
[root@mon ~]# systemctl status chronyd
If chronyd is not running, enable and start it:
Example
[root@mon ~]# systemctl enable chronyd
[root@mon ~]# systemctl start chronyd
Ensure that chronyd is synchronizing the clocks correctly:
Example
[root@mon ~]# chronyc sources
[root@mon ~]# chronyc sourcestats
[root@mon ~]# chronyc tracking
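In the chronyc tracking output, the "System time" field reports the current offset from NTP time. Ceph raises clock skew warnings when a Monitor's offset exceeds mon_clock_drift_allowed, which defaults to 0.05 seconds. The sketch below checks a sample line against that threshold; the offset value is illustrative, and on a live monitor you would feed it `chronyc tracking | grep 'System time'` instead:

```shell
# Compare the chronyd offset against Ceph's default mon_clock_drift_allowed (0.05 s).
# The sample line stands in for: chronyc tracking | grep 'System time'
line="System time     : 0.000020390 seconds fast of NTP time"
offset=$(printf '%s\n' "$line" | awk '{print $4}')
if awk -v o="$offset" 'BEGIN { exit !(o < 0.05) }'; then
    echo "clock offset ${offset}s is within the 0.05s threshold"
else
    echo "clock offset ${offset}s exceeds the 0.05s threshold"
fi
```

An offset near or above the threshold on any Monitor host indicates that chronyd is not keeping the clocks close enough for the quorum, and the configured NTP sources should be reviewed.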