Chapter 18. Monitoring Red Hat Gluster Storage Gluster Workload
Monitoring storage volumes is helpful for capacity planning and performance tuning of a Red Hat Gluster Storage volume. You can monitor Red Hat Gluster Storage volumes with different parameters and use the output to identify and troubleshoot issues.
You can use the volume top and volume profile commands to view vital performance information and identify bottlenecks on each brick of a volume.
You can also perform a statedump of the brick processes and NFS server process of a volume, and also view volume status and volume information.
Note
If you restart the server process, the existing profile and top information will be reset.
18.1. Profiling volumes

18.1.1. Server-side volume profiling using volume profile
The volume profile command provides an interface to get the per-brick or NFS server I/O information for each File Operation (FOP) of a volume. This information helps in identifying bottlenecks in the storage system.
This section describes how to use the volume profile command.
18.1.1.1. Start Profiling
To view the file operation information of each brick, start the profiling command:
# gluster volume profile VOLNAME start
For example, to start profiling on test-volume:
# gluster volume profile test-volume start
Profiling started on test-volume
Important
Running the profile command can affect system performance while profile information is being collected. Red Hat recommends that profiling be used only for debugging.
When profiling is started on the volume, the following additional options are displayed when using the volume info command:
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
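For scripting, you can confirm that profiling is active by checking for these options in the volume info output. The following sketch does this in Python; the helper names are hypothetical (not part of Gluster), and it assumes only the key: value option layout shown above.

```python
# Minimal sketch: detect whether profiling is enabled on a volume by
# scanning `gluster volume info` output for the diagnostics options
# shown above. Helper names are hypothetical, not part of Gluster.
import subprocess

PROFILE_OPTIONS = {
    "diagnostics.count-fop-hits",
    "diagnostics.latency-measurement",
}

def profiling_enabled(info_text: str) -> bool:
    """Return True if both diagnostics options appear set to 'on'."""
    enabled = set()
    for line in info_text.splitlines():
        key, _, value = line.strip().partition(":")
        if key.strip() in PROFILE_OPTIONS and value.strip() == "on":
            enabled.add(key.strip())
    return enabled == PROFILE_OPTIONS

def volume_profiling_enabled(volname: str) -> bool:
    """Query the gluster CLI (requires a running Gluster node)."""
    out = subprocess.run(
        ["gluster", "volume", "info", volname],
        capture_output=True, text=True, check=True,
    ).stdout
    return profiling_enabled(out)
```

The pure `profiling_enabled` helper is separated from the CLI call so it can be used on saved output as well as live systems.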
18.1.1.2. Displaying the I/O Information
To view the I/O information of the bricks on a volume, use the following command:
# gluster volume profile VOLNAME info
For example, to view the I/O information of test-volume:
# gluster volume profile test-volume info
To view the I/O information of the NFS server on a specified volume, use the following command:
# gluster volume profile VOLNAME info nfs
For example, to view the I/O information of the NFS server on test-volume:
# gluster volume profile test-volume info nfs
18.1.1.3. Stop Profiling
To stop profiling on a volume, use the following command:
# gluster volume profile VOLNAME stop
For example, to stop profiling on test-volume:
# gluster volume profile test-volume stop
Profiling stopped on test-volume
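The start, info, and stop steps above can be combined into a short profiling session from a script. The sketch below is a hypothetical wrapper (the function names are not part of Gluster); it ensures profiling is always stopped afterwards, since collection has a performance cost.

```python
# Sketch of a bounded profiling session: start profiling, collect one
# info snapshot after a delay, then stop. Wrapper names are hypothetical.
import subprocess
import time

def profile_cmd(volname: str, action: str) -> list:
    """Build the gluster CLI argv for a volume profile action."""
    assert action in ("start", "info", "stop")
    return ["gluster", "volume", "profile", volname, action]

def profile_session(volname: str, seconds: int = 60) -> str:
    """Profile volname for `seconds`, returning the info output."""
    subprocess.run(profile_cmd(volname, "start"), check=True)
    try:
        time.sleep(seconds)  # let the workload run while stats accumulate
        out = subprocess.run(profile_cmd(volname, "info"),
                             capture_output=True, text=True, check=True)
        return out.stdout
    finally:
        # Always stop profiling, even if info collection fails.
        subprocess.run(profile_cmd(volname, "stop"), check=True)
```

For example, `profile_session("test-volume", 120)` would capture a two-minute window of FOP statistics.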
18.1.2. Client-side volume profiling (FUSE only)
Red Hat Gluster Storage lets you profile how your mount point is being accessed, so that you can investigate latency issues even when you cannot instrument the application accessing your storage.
The io-stats translator records statistics of all file system activity on a Red Hat Gluster Storage volume that travels through a FUSE mount point. It collects information on files opened from the FUSE mount path, the read and write throughput for these files, the number of blocks read and written, and the latency observed for different file operations.
Run the following command to output all recorded statistics for the specified mount point to the specified output file.
# setfattr -n trusted.io-stats-dump -v output_file_id mount_point
This generates a number of files in the /var/run/gluster directory. The output_file_id is not the whole file name; it is used as part of the names of the generated files.
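The same dump can be triggered from a script with an extended-attribute call instead of the setfattr command. A minimal sketch, assuming a Gluster FUSE mount at a path you supply and sufficient privileges to set a trusted.* attribute; the function name is hypothetical.

```python
# Sketch: trigger an io-stats dump by setting the same virtual xattr
# that the setfattr command above sets. Requires a Gluster FUSE mount
# and typically root privileges; the function name is hypothetical.
import os

XATTR_NAME = "trusted.io-stats-dump"

def dump_io_stats(mount_point: str, output_file_id: str) -> None:
    """Ask the io-stats translator to dump recorded statistics.

    Files incorporating output_file_id in their names appear under
    /var/run/gluster on the client.
    """
    os.setxattr(mount_point, XATTR_NAME, output_file_id.encode("utf-8"))
```

Because the attribute is virtual, setting it acts as a command to the translator rather than storing data on the file system.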