21.2. Retrieving File Path from the Gluster Volume


The heal info command lists the GFIDs of files that need to be healed. To find the paths of the files associated with those GFIDs, use the getfattr utility. The getfattr utility enables you to locate a file residing on a gluster volume brick, and you can retrieve the path of a file even if its file name is unknown.
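For example, a heal info listing resembles the following (illustrative output for the testvol volume used in the examples below; brick names, entry counts, and GFIDs vary per deployment):
# gluster volume heal testvol info
Brick tuxpad:/rhgs/brick1
<gfid:80b0b164-2ea4-478b-a4cd-a9f76c1e6efd>
Status: Connected
Number of entries: 1

Brick tuxpad:/rhgs/brick2
Status: Connected
Number of entries: 0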

21.2.1. Retrieving Known File Name

To retrieve a file path when the file name is known, execute the following command from the FUSE mount:
# getfattr -n trusted.glusterfs.pathinfo -e text <path_to_fuse_mount/filename>
Where,
path_to_fuse_mount: The FUSE mount point where the gluster volume is mounted.
filename: The name of the file for which the path information is to be retrieved.
For example:
# getfattr -n trusted.glusterfs.pathinfo -e text /mnt/fuse_mnt/File1
getfattr: Removing leading '/' from absolute path names
# file: mnt/fuse_mnt/File1
trusted.glusterfs.pathinfo="(<DISTRIBUTE:testvol-dht> (<REPLICATE:testvol-replicate-0>
<POSIX(/rhgs/brick1):tuxpad:/rhgs/brick1/File1>
<POSIX(/rhgs/brick2):tuxpad:/rhgs/brick2/File1>))"
The command output displays the brick pathinfo under the <POSIX> tag. In this example output, two paths are displayed because the file has two replicas.

21.2.2. Retrieving Unknown File Name

You can retrieve the path of a file whose name is unknown by using its GFID string. The GFID string is the hyphenated version of the trusted.gfid extended attribute. For example, if the GFID is 80b0b1642ea4478ba4cda9f76c1e6efd, the GFID string is 80b0b164-2ea4-478b-a4cd-a9f76c1e6efd.

Note

To obtain the gfid of a file, run the following command:
# getfattr -d -m. -e hex /path/to/file/on/the/brick
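For example (illustrative output, assuming File1 resides on brick1; other trusted.* attributes are omitted):
# getfattr -d -m. -e hex /rhgs/brick1/File1
getfattr: Removing leading '/' from absolute path names
# file: rhgs/brick1/File1
trusted.gfid=0x80b0b1642ea4478ba4cda9f76c1e6efd
Strip the 0x prefix and hyphenate the value in the 8-4-4-4-12 pattern to obtain the GFID string, in this case 80b0b164-2ea4-478b-a4cd-a9f76c1e6efd.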

21.2.3. Retrieving File Path using gfid String

To retrieve the file path using the gfid string, follow these steps:
  1. FUSE mount the volume with the aux-gfid-mount option enabled.
    # mount -t glusterfs -o aux-gfid-mount hostname:volume-name <path_to_fuse_mount>
    Where,
    path_to_fuse_mount: The FUSE mount point where the gluster volume is mounted.
    For example:
    # mount -t glusterfs -o aux-gfid-mount 127.0.0.2:testvol /mnt/aux_mount
  2. After mounting the volume, execute the following command:
    # getfattr -n trusted.glusterfs.pathinfo -e text <path_to_fuse_mount>/.gfid/<GFID_string>
    Where,
    path_to_fuse_mount: The FUSE mount point where the gluster volume is mounted.
    GFID_string: The hyphenated GFID string of the file.
    For example:
    # getfattr -n trusted.glusterfs.pathinfo -e text /mnt/aux_mount/.gfid/80b0b164-2ea4-478b-a4cd-a9f76c1e6efd
    getfattr: Removing leading '/' from absolute path names
    # file: mnt/aux_mount/.gfid/80b0b164-2ea4-478b-a4cd-a9f76c1e6efd
    trusted.glusterfs.pathinfo="(<DISTRIBUTE:testvol-dht> (<REPLICATE:testvol-replicate-0>
    <POSIX(/rhgs/brick2):tuxpad:/rhgs/brick2/File1>
    <POSIX(/rhgs/brick1):tuxpad:/rhgs/brick1/File1>))"
    The command output displays the brick pathinfo under the <POSIX> tag. In this example output, two paths are displayed because the file has two replicas.

21.2.4. Controlling Self-heal for Dispersed Volumes

For dispersed volumes, when a node with multiple bricks goes offline and comes back online, the self-heal daemon starts healing the bricks. With large amounts of data, this self-heal can cause high CPU usage and interfere with ongoing I/O operations, reducing storage efficiency.
To control the CPU and memory usage of the self-heal daemon, follow these steps:
  1. Navigate to the scripts directory using the following command:
    # cd /usr/share/glusterfs/scripts
  2. Determine the PID of the self-heal daemon using the following command:
    # ps -aef | grep glustershd
    The output will be in the following format:
    root      1565     1  0 Feb05 ?        00:09:17 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/ed49b959a0dc9b2185913084e3b2b339.socket --xlator-option *replicate*.node-uuid=13dbfa1e-ebbf-4cee-a1ac-ca6763903c55
    root     16766 14420  0 19:00 pts/0    00:00:00 grep --color=auto glustershd
    In this output, 1565 is the PID of the self-heal daemon (glustershd).
  3. Execute the control-cpu-load.sh script using the following command:
    # sh control-cpu-load.sh
  4. When the system prompts for the following input, type the PID of the self-heal daemon acquired from the previous step and press Enter:
    [root@XX-XX scripts]# sh control-cpu-load.sh
    Enter gluster daemon pid for which you want to control CPU.
    1565
  5. When the system prompts for the following input, type y and press Enter:
    If you want to continue the script to attach 1565 with new cgroup_gluster_1565 cgroup Press (y/n)?
    In this example, 1565 is the PID of the self-heal daemon. The PID of the self-heal daemon varies from system to system.
  6. When the system prompts for the following input, enter the required quota value to be assigned to the self-heal daemon and press Enter:
    Creating child cgroup directory 'cgroup_gluster_1565 cgroup' for glustershd.service.
    Enter quota value in range [10,100]:
    25
    
    In this example, the quota value for the self-heal daemon is set to 25.

    Note

    The recommended quota value for a self-heal daemon is 25. However, you can adjust the quota value at run time.
    The system displays the following messages once the quota value is set successfully:
    Entered quota value is 25
    Setting 25000 to cpu.cfs_quota_us for gluster_cgroup.
    Tasks are attached successfully specific to 1565 to cgroup_gluster_1565.
    A quota value of 25 corresponds to 25% of one CPU: the script writes 25000 to cpu.cfs_quota_us, which is measured against the default CFS period of 100000 microseconds.
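    To verify the applied quota, you can read the cgroup control files directly. The following sketch assumes cgroup v1 with the cpu controller mounted at /sys/fs/cgroup/cpu,cpuacct and the child cgroup created under the glusterd service slice; the exact path may differ depending on your distribution and script version:
    # cat /sys/fs/cgroup/cpu,cpuacct/system.slice/glusterd.service/cgroup_gluster_1565/cpu.cfs_quota_us
    25000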
To check the CPU usage for the self-heal daemon, execute the top command.
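For example, to monitor only the self-heal daemon using the PID obtained earlier:
# top -p 1565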

Important

Repeat this procedure with the new daemon PID every time the daemon is restarted.