6.2. Formatting and Mounting Bricks
To create a Red Hat Storage volume, specify the bricks that comprise it. After the volume is created, it must be started before it can be mounted.
Important
- Red Hat supports formatting a Logical Volume using the XFS file system on the bricks.
Creating a Thinly Provisioned Logical Volume
To create a thinly provisioned logical volume, proceed with the following steps:
- Create a Physical Volume (PV) by using the pvcreate command. For example:
  pvcreate --dataalignment 1280K /dev/sdb
  Here, /dev/sdb is a storage device. Use the correct dataalignment option based on your device. For more information, see Section 9.2, “Brick Configuration”.
  Note: The device name and the alignment value will vary based on the device you are using.
- Create a Volume Group (VG) from the PV using the vgcreate command. For example:
  vgcreate --physicalextentsize 128K rhs_vg /dev/sdb
- Create a thin pool using the following commands:
  - Create an LV to serve as the metadata device using the following command:
    lvcreate -L metadev_sz --name metadata_device_name VOLGROUP
    For example:
    lvcreate -L 16776960K --name rhs_pool_meta rhs_vg
  - Create an LV to serve as the data device using the following command:
    lvcreate -L datadev_sz --name thin_pool VOLGROUP
    For example:
    lvcreate -L 536870400K --name rhs_pool rhs_vg
  - Create a thin pool from the data LV and the metadata LV using the following command:
    lvconvert --chunksize STRIPE_WIDTH --thinpool VOLGROUP/thin_pool --poolmetadata VOLGROUP/metadata_device_name
    For example:
    lvconvert --chunksize 1280K --thinpool rhs_vg/rhs_pool --poolmetadata rhs_vg/rhs_pool_meta
    Note: By default, newly provisioned chunks in a thin pool are zeroed to prevent data leaking between different block devices. Because Red Hat Storage accesses data through a file system, this option can be turned off for better performance:
    lvchange --zero n VOLGROUP/thin_pool
    For example:
    lvchange --zero n rhs_vg/rhs_pool
- Create a thinly provisioned volume from the previously created pool using the lvcreate command. For example:
  lvcreate -V 1G -T rhs_vg/rhs_pool -n rhs_lv
  It is recommended that only one LV be created in a thin pool.
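To verify the thin pool and the thinly provisioned volume before formatting, you can list the logical volumes in the volume group. This is an optional check, assuming the rhs_vg, rhs_pool, and rhs_lv names from the example above; substitute your own names:
  lvs rhs_vg
The output should list rhs_pool as a thin pool and rhs_lv with rhs_pool shown in its Pool column.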
Formatting and Mounting Bricks
Format bricks using the supported XFS configuration, mount the bricks, and verify the bricks are mounted correctly. To enhance the performance of Red Hat Storage, ensure you read Chapter 9, Configuring Red Hat Storage for Enhancing Performance before formatting the bricks.
- Run the following command to format the brick with the supported XFS file system:
  # mkfs.xfs -f -i size=512 -n size=8192 -d su=128K,sw=10 DEVICE
  Here, DEVICE is the thin LV created earlier. The inode size is set to 512 bytes to accommodate the extended attributes used by Red Hat Storage.
- Run # mkdir /mountpoint to create a directory to link the brick to.
- Add an entry in /etc/fstab:
  /dev/rhs_vg/rhs_lv /mountpoint xfs rw,inode64,noatime,nouuid 1 2
- Run # mount /mountpoint to mount the brick.
- Run the df -h command to verify the brick is mounted successfully:
  # df -h
  /dev/rhs_vg/rhs_lv 16G 1.2G 15G 7% /exp1
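If you want to confirm that the brick was formatted with the expected inode size, you can inspect the mounted file system. This is an optional check, assuming the /mountpoint path from the steps above:
  # xfs_info /mountpoint
The isize value in the meta-data line of the output should read 512.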
Using a Subdirectory as the Brick for a Volume
You can create an XFS file system, mount it, and use the mount point as a brick when creating a Red Hat Storage volume. However, if the mount point becomes unavailable, data is written directly to the root file system under the unmounted directory.
For example, the /exp directory is the mounted file system and is used as the brick for volume creation. If the mount point becomes unavailable for any reason, writes continue to land in the /exp directory, but the data is now stored on the root file system.
To avoid this issue, use the following approach.
During Red Hat Storage setup, create an XFS file system and mount it. After mounting, create a subdirectory and use this subdirectory as the brick for volume creation. Here, the XFS file system is mounted as /bricks. After the file system is available, create a directory called /bricks/bricksrv1 and use it for volume creation; a sketch of this setup follows the list below. Ensure that no more than one brick is created from a single mount. This approach has the following advantages:
- When the /bricks file system is unavailable, the /bricks/bricksrv1 directory is also unavailable, so writes cannot silently go to a different location and no data is lost.
- No additional file system is required for nesting.
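As a reference, the setup described above might look like the following. This is a minimal sketch that reuses the /dev/rhs_vg/rhs_lv logical volume and the mkfs.xfs options shown in the formatting steps earlier; the device name and the su/sw stripe values are illustrative, so use the values appropriate for your hardware:
  # mkfs.xfs -f -i size=512 -n size=8192 -d su=128K,sw=10 /dev/rhs_vg/rhs_lv
  # mkdir -p /bricks
  # mount /dev/rhs_vg/rhs_lv /bricks
  # mkdir /bricks/bricksrv1
You can also add the corresponding /etc/fstab entry, as shown earlier, so that /bricks is mounted at boot.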
Perform the following to use subdirectories as bricks for creating a volume:
- Create the bricksrv1 subdirectory in the mounted file system.
  # mkdir /bricks/bricksrv1
  Repeat the above steps on all nodes.
- Create the Red Hat Storage volume using the subdirectories as bricks.
  # gluster volume create distdata01 ad-rhs-srv1:/bricks/bricksrv1 ad-rhs-srv2:/bricks/bricksrv2
- Start the Red Hat Storage volume.
  # gluster volume start distdata01
- Verify the status of the volume.
  # gluster volume status distdata01
Reusing a Brick from a Deleted Volume
Bricks from deleted volumes can be reused; however, some steps are required to make a brick reusable.
- Brick with a File System Suitable for Reformatting (Optimal Method)
  Run # mkfs.xfs -f -i size=512 device to reformat the brick to supported requirements and make it available for immediate reuse in a new volume.
  Note: All data will be erased when the brick is reformatted.
- File System on a Parent of a Brick Directory
  If the file system cannot be reformatted, remove the whole brick directory and create it again; see the sketch below.
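A minimal sketch of removing and re-creating the brick directory, assuming the /bricks/bricksrv1 brick from the earlier example (substitute your own brick path):
  # rm -rf /bricks/bricksrv1
  # mkdir /bricks/bricksrv1
Because the parent file system is left in place, this clears the brick directory together with its extended attributes; all data in the brick is removed.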
Cleaning an Unusable Brick
If the file system associated with the brick cannot be reformatted, and the brick directory cannot be removed, perform the following steps:
- Delete all previously existing data in the brick, including the .glusterfs subdirectory.
- Run # setfattr -x trusted.glusterfs.volume-id brick and # setfattr -x trusted.gfid brick to remove the attributes from the root of the brick.
- Run # getfattr -d -m . brick to examine the attributes set on the volume. Take note of the attributes.
- Run # setfattr -x attribute brick to remove the attributes relating to the glusterFS file system. The trusted.glusterfs.dht attribute for a distributed volume is one example of an attribute that needs to be removed.
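The whole cleanup sequence might look like the following. This is a minimal sketch assuming the brick root is /bricks/bricksrv1 (an illustrative path) and that the remaining attribute reported by getfattr is trusted.glusterfs.dht, as in a distributed volume; substitute your own brick path and attribute names:
  # rm -rf /bricks/bricksrv1/.glusterfs
  # rm -rf /bricks/bricksrv1/*
  # setfattr -x trusted.glusterfs.volume-id /bricks/bricksrv1
  # setfattr -x trusted.gfid /bricks/bricksrv1
  # getfattr -d -m . /bricks/bricksrv1
  # setfattr -x trusted.glusterfs.dht /bricks/bricksrv1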