16.14. Troubleshooting


  • Situation

    Snapshot creation fails.

    Step 1

    Check if the bricks are thinly provisioned by following these steps:

    1. Execute the mount command and check the device name mounted on the brick path. For example:
      # mount
      /dev/mapper/snap_lvgrp-snap_lgvol on /brick/brick-dirs type xfs (rw)
      /dev/mapper/snap_lvgrp1-snap_lgvol1 on /brick/brick-dirs1 type xfs (rw)
    2. Run the following command to check if the device has an LV pool name:
      lvs device-name
      For example:
      # lvs -o pool_lv /dev/mapper/snap_lvgrp-snap_lgvol
         Pool
         snap_thnpool
      If the Pool field is empty, then the brick is not thinly provisioned.
    3. Ensure that the brick is thinly provisioned, as in the sketch after this list, and retry the snapshot create command.
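
    If the brick is not thinly provisioned, move it onto a thin pool before retrying. The following is a minimal sketch of one way to lay out a thinly provisioned brick, reusing the snap_lvgrp, snap_thnpool, and snap_lgvol names from the example output above; the physical volume /dev/sdb and the sizes are illustrative assumptions, and any existing brick data must be backed up and migrated first:
      # vgcreate snap_lvgrp /dev/sdb
      # lvcreate --size 10G --thinpool snap_thnpool snap_lvgrp
      # lvcreate --virtualsize 8G --thin --name snap_lgvol snap_lvgrp/snap_thnpool
      # mkfs.xfs -i size=512 /dev/snap_lvgrp/snap_lgvol
      # mount /dev/snap_lvgrp/snap_lgvol /brick/brick-dirs
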
    Step 2

    Check if the bricks are down by following these steps:

    1. Execute the following command to check the status of the volume:
      # gluster volume status VOLNAME
    2. If any bricks are down, then start the bricks by executing the following command:
      # gluster volume start VOLNAME force
    3. To verify that the bricks are up, execute the following command (a filtering sketch follows this list):
      # gluster volume status VOLNAME
    4. Retry the snapshot create command.
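
    As a convenience, any offline bricks can be filtered out of the status output. A minimal sketch, assuming the usual tabular layout of gluster volume status, where brick rows begin with "Brick" and the second-to-last column is the Online flag (Y or N); the exact column layout can vary between versions:
      # gluster volume status VOLNAME | awk '/^Brick/ && $(NF-1) == "N"'
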
    Step 3

    Check if any node is down by following these steps:

    1. Execute the following command to check the status of the nodes:
      # gluster volume status VOLNAME
    2. If a brick is not listed in the status, then execute the following command:
      # gluster pool list
    3. If the status of the node hosting the missing brick is Disconnected, then power up the node. A filtering sketch follows this list.
    4. Retry the snapshot create command.
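
    A node that is down is reported as Disconnected in the pool listing, so a quick filter shows only the problem nodes. A minimal sketch, assuming the State column of the gluster pool list output:
      # gluster pool list | grep -w Disconnected
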
    Step 4

    Check if rebalance is in progress by following these steps:

    1. Execute the following command to check the rebalance status:
      # gluster volume rebalance VOLNAME status
    2. If rebalance is in progress, wait for it to finish, or use the polling sketch after this list.
    3. Retry the snapshot create command.
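
    Rather than re-running the status command by hand, a small polling loop can wait for rebalance to finish. A minimal sketch, assuming the status output contains the word "completed" once rebalance is done; VOLNAME is a placeholder:
      #!/bin/sh
      # Poll the rebalance status every 30 seconds until it reports completion.
      until gluster volume rebalance VOLNAME status | grep -q completed; do
          sleep 30
      done
      echo "Rebalance complete; retry the snapshot create command."
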
  • Situation

    Snapshot delete fails.

    Step 1

    Check if the server quorum is met by following these steps:

    1. Execute the following command to check the peer status:
      # gluster pool list
    2. If nodes are down and the cluster is not in quorum, then power up the nodes.
    3. To verify that the cluster is in quorum, execute the following command (a counting sketch follows this list):
      # gluster pool list
    4. Retry the snapshot delete command.
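
    Server quorum requires more than half of the nodes in the trusted storage pool to be up. As a rough check, the Connected entries in the pool listing can be counted against the total; a minimal sketch, assuming the usual gluster pool list output with one header line:
      #!/bin/sh
      # Count connected peers versus all peers (header line excluded).
      total=$(gluster pool list | tail -n +2 | wc -l)
      up=$(gluster pool list | tail -n +2 | grep -cw Connected)
      echo "$up of $total nodes connected"
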
  • Situation

    Snapshot delete command fails on some nodes during the commit phase, leaving the system inconsistent.

    Solution

    1. Identify the node(s) where the delete command failed. This information is available in the delete command's error output. For example:
      # gluster snapshot delete snapshot1
      Deleting snap will erase all the information about the snap. Do you still want to continue? (y/n) y
      snapshot delete: failed: Commit failed on 10.00.00.02. Please check log file for details.
      Snapshot command failed
    2. On the node where the delete command failed, bring down glusterd using the following command:
      # service glusterd stop
    3. Delete that particular snap's repository in /var/lib/glusterd/snaps/ on that node. For example:
      # rm -rf /var/lib/glusterd/snaps/snapshot1
    4. Start glusterd on that node using the following command:
      # service glusterd start
    5. Repeat steps 2 through 4 on all nodes where the commit failed, as identified in step 1. A scripted sketch follows this list.
    6. Retry deleting the snapshot. For example:
      # gluster snapshot delete snapshot1
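
    If the commit failed on several nodes, the per-node cleanup in steps 2 through 4 can be scripted. A minimal sketch, assuming passwordless SSH as root to the affected nodes; the node list must come from the error output in step 1 (the address below is from the example above), and snapshot1 is the example snapshot name:
      #!/bin/sh
      # Remove the stale snapshot repository on each node where the commit failed.
      SNAP=snapshot1
      for node in 10.00.00.02; do
          ssh root@"$node" "service glusterd stop && \
              rm -rf /var/lib/glusterd/snaps/$SNAP && \
              service glusterd start"
      done
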
  • Situation

    Snapshot restore fails.

    Step 1

    Check if the server quorum is met by following these steps:

    1. Execute the following command to check the peer status:
      # gluster pool list
    2. If nodes are down and the cluster is not in quorum, then power up the nodes.
    3. To verify that the cluster is in quorum, execute the following command:
      # gluster pool list
    4. Retry the snapshot restore command.
    Step 2

    Check if the volume is in the Stopped state by following these steps:

    1. Execute the following command to check the volume info:
      # gluster volume info VOLNAME
    2. If the volume is in the Started state, then stop the volume using the following command:
      # gluster volume stop VOLNAME
    3. Retry the snapshot restore command. A combined sequence is sketched after this list.
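
    Putting the two checks together, a typical restore runs with the volume stopped and then starts it again. A minimal sketch, with VOLNAME and SNAPNAME as placeholders:
      # gluster volume stop VOLNAME
      # gluster snapshot restore SNAPNAME
      # gluster volume start VOLNAME
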
  • Situation

    The brick process is hung.

    Solution

    Check if the LVM data or metadata utilization has reached 100% by following these steps:

    1. Execute the mount command and check the device name mounted on the brick path. For example:
      # mount
      /dev/mapper/snap_lvgrp-snap_lgvol on /brick/brick-dirs type xfs (rw)
      /dev/mapper/snap_lvgrp1-snap_lgvol1 on /brick/brick-dirs1 type xfs (rw)
    2. Execute the following command to check if the data/metadata utilization has reached 100%:
      lvs -v device-name
      For example:
      # lvs -o data_percent,metadata_percent -v /dev/mapper/snap_lvgrp-snap_lgvol
           Using logical volume(s) on command line
         Data%  Meta%
           0.40

    Note

    Ensure that the data and metadata utilization does not reach the maximum limit. Using monitoring tools such as Nagios helps you avoid such situations; a lightweight threshold check is also sketched below. For more information about Nagios, see Chapter 17, Monitoring Red Hat Gluster Storage.
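
    Between full monitoring runs, a simple threshold check over the thin volume's utilization can give an early warning. A minimal sketch, assuming the device path from the mount output above and an 80% warning threshold, both of which are illustrative:
      #!/bin/sh
      # Warn if data or metadata utilization of the thin volume exceeds 80%.
      DEV=/dev/mapper/snap_lvgrp-snap_lgvol
      lvs --noheadings -o data_percent,metadata_percent "$DEV" |
      while read -r data meta; do
          for pct in $data $meta; do
              # Compare the integer part of the percentage against the threshold.
              if [ "${pct%.*}" -ge 80 ] 2>/dev/null; then
                  echo "WARNING: $DEV utilization at $pct%"
              fi
          done
      done
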
  • Situation

    Snapshot commands fail.

    Step 1

    Check if there is a mismatch in the operating versions by following these steps:

    1. Open the following file and check the operating version:
      /var/lib/glusterd/glusterd.info
      If the operating-version is less than 30000, then snapshot commands are not supported by the version the cluster is operating on. A quick check is sketched after this list.
    2. Upgrade all nodes in the cluster to Red Hat Gluster Storage 3.1.
    3. Retry the snapshot command.
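
    The operating version can also be read directly from the shell. A minimal sketch, assuming the usual key=value format of glusterd.info; the value shown is illustrative:
      # grep operating-version /var/lib/glusterd/glusterd.info
      operating-version=30000
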
  • Situation

    After a rolling upgrade, the snapshot feature does not work.

    Solution

    Make the following changes on the cluster to enable snapshots:

    1. Restart the volume using the following commands:
      # gluster volume stop VOLNAME
      # gluster volume start VOLNAME
    2. Restart the glusterd service on all nodes:
      # service glusterd restart