5.4.16.8.2. The warn RAID Fault Policy

In the following example, the raid_fault_policy field has been set to warn in the lvm.conf file. The RAID logical volume is laid out as follows.
# lvs -a -o name,copy_percent,devices my_vg
  LV               Copy%  Devices                                     
  my_lv            100.00 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0)
  [my_lv_rimage_0]        /dev/sdh1(1)                                
  [my_lv_rimage_1]        /dev/sdf1(1)                                
  [my_lv_rimage_2]        /dev/sdg1(1)                                
  [my_lv_rmeta_0]         /dev/sdh1(0)                                
  [my_lv_rmeta_1]         /dev/sdf1(0)                                
  [my_lv_rmeta_2]         /dev/sdg1(0)
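For reference, the warn policy is configured by means of the raid_fault_policy field in the activation section of the lvm.conf file. The following is a minimal sketch of that setting; the actual activation section of your lvm.conf file will contain additional fields.

activation {
    # "warn": log RAID device failures but do not replace failed images automatically
    raid_fault_policy = "warn"
}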
If the /dev/sdh device fails, the system log will display error messages. In this case, however, LVM will not automatically attempt to repair the RAID device by replacing one of the images. Instead, if the device has failed, you can replace it manually with the --repair argument of the lvconvert command, as shown below.
# lvconvert --repair my_vg/my_lv
  /dev/sdh1: read failed after 0 of 2048 at 250994294784: Input/output error
  /dev/sdh1: read failed after 0 of 2048 at 250994376704: Input/output error
  /dev/sdh1: read failed after 0 of 2048 at 0: Input/output error
  /dev/sdh1: read failed after 0 of 2048 at 4096: Input/output error
  Couldn't find device with uuid fbI0YO-GX7x-firU-Vy5o-vzwx-vAKZ-feRxfF.
Attempt to replace failed RAID images (requires full device resync)? [y/n]: y

# lvs -a -o name,copy_percent,devices my_vg
  Couldn't find device with uuid fbI0YO-GX7x-firU-Vy5o-vzwx-vAKZ-feRxfF.
  LV               Copy%  Devices                                     
  my_lv             64.00 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0)
  [my_lv_rimage_0]        /dev/sde1(1)                                
  [my_lv_rimage_1]        /dev/sdf1(1)                                
  [my_lv_rimage_2]        /dev/sdg1(1)                                
  [my_lv_rmeta_0]         /dev/sde1(0)                                
  [my_lv_rmeta_1]         /dev/sdf1(0)                                
  [my_lv_rmeta_2]         /dev/sdg1(0)
Note that even though the failed device has been replaced, the display still indicates that LVM could not find the failed device. This is because, although the failed device has been removed from the RAID logical volume, it has not yet been removed from the volume group. To remove the failed device from the volume group, execute vgreduce --removemissing VG.
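For example, using the volume group from this procedure, the following command removes the missing physical volume from the volume group.
# vgreduce --removemissing my_vg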
If the device failure is transient or you are able to repair the device that failed, as of Red Hat Enterprise Linux release 6.5 you can initiate recovery of the failed device with the --refresh option of the lvchange command. Previously it was necessary to deactivate and then activate the logical volume.
The following command refreshes a logical volume.
# lvchange --refresh my_vg/my_lv
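In releases prior to Red Hat Enterprise Linux 6.5, an equivalent effect could be achieved by deactivating and then reactivating the logical volume, for example with commands such as the following.
# lvchange -an my_vg/my_lv
# lvchange -ay my_vg/my_lv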