10.5. Migrating Volumes
Data can be redistributed across bricks while the trusted storage pool is online and available. Before replacing bricks on the new servers, ensure that the new servers are successfully added to the trusted storage pool.
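For example, you can confirm that a new server is part of the trusted storage pool before replacing bricks onto it. A minimal sketch; the host name server5 is illustrative and the status output is abbreviated:

  # gluster peer probe server5
  peer probe: success
  # gluster peer status
  Number of Peers: 4

  Hostname: server5
  State: Peer in Cluster (Connected)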
Note
Before performing a replace-brick operation, review the known issues related to the replace-brick operation in the Red Hat Gluster Storage 3.1 Release Notes.
10.5.1. Replacing a Subvolume on a Distribute or Distribute-replicate Volume
This procedure applies only when at least one brick from the subvolume to be replaced is online. On a Distribute volume, the brick to be replaced must be online. On a Distribute-replicate volume, at least one brick of the replica set to be replaced must be online.
To replace the entire subvolume with new bricks on a Distribute-replicate volume, follow these steps:
- Add the new bricks to the volume.
  # gluster volume add-brick VOLNAME [replica <COUNT>] NEW-BRICK

  Example 10.1. Adding a Brick to a Distribute Volume

  # gluster volume add-brick test-volume server5:/exp5
  Add Brick successful

- Verify the volume information using the command:

  # gluster volume info
  Volume Name: test-volume
  Type: Distribute
  Status: Started
  Number of Bricks: 5
  Bricks:
  Brick1: server1:/exp1
  Brick2: server2:/exp2
  Brick3: server3:/exp3
  Brick4: server4:/exp4
  Brick5: server5:/exp5

  Note
  In case of a Distribute-replicate volume, you must specify the replica count in the add-brick command and provide the same number of bricks as the replica count to the add-brick command (see the sketch after this procedure).

- Remove the bricks to be replaced from the subvolume.
  - Start the remove-brick operation using the command:

    # gluster volume remove-brick VOLNAME [replica <COUNT>] <BRICK> start

    Example 10.2. Start a remove-brick Operation on a Distribute Volume

    # gluster volume remove-brick test-volume server2:/exp2 start
    Remove Brick start successful

  - View the status of the remove-brick operation using the command:

    # gluster volume remove-brick VOLNAME [replica <COUNT>] <BRICK> status

    Example 10.3. View the Status of a remove-brick Operation

    # gluster volume remove-brick test-volume server2:/exp2 status
    Node       Rebalanced-files    size        scanned    failures    status
    ------------------------------------------------------------------------
    server2    16                  16777216    52         0           in progress

    Keep monitoring the remove-brick operation status by executing the above command. When the value of the status field is set to complete in the output of the remove-brick status command, proceed further.

  - Commit the remove-brick operation using the command:

    # gluster volume remove-brick VOLNAME [replica <COUNT>] <BRICK> commit

    Example 10.4. Commit the remove-brick Operation on a Distribute Volume

    # gluster volume remove-brick test-volume server2:/exp2 commit

- Verify the volume information using the command:
  # gluster volume info
  Volume Name: test-volume
  Type: Distribute
  Status: Started
  Number of Bricks: 4
  Bricks:
  Brick1: server1:/exp1
  Brick3: server3:/exp3
  Brick4: server4:/exp4
  Brick5: server5:/exp5

- Verify the content on the brick after committing the remove-brick operation on the volume. If any files are left over, copy them through a FUSE or NFS mount.

  - Verify whether there are any pending files on the bricks of the subvolume.

    Along with the files, all the application-specific extended attributes must be copied. glusterFS also uses extended attributes to store its internal data. The extended attributes used by glusterFS are of the form trusted.glusterfs.*, trusted.afr.*, and trusted.gfid. Any extended attributes other than the ones listed above must also be copied.

    To copy the application-specific extended attributes and to achieve an effect similar to the one described above, use the following shell script:

    Syntax:

    # copy.sh <glusterfs-mount-point> <brick>

    Example 10.5. Code Snippet Usage

    If the mount point is /mnt/glusterfs and the brick path is /export/brick1, then the script must be run as:

    # copy.sh /mnt/glusterfs /export/brick1

    #!/bin/bash
    # copy.sh: copy leftover files from a removed brick back into the
    # volume through its mount point, along with their application-specific
    # extended attributes.
    MOUNT=$1
    BRICK=$2

    for file in `find $BRICK ! -type d`; do
        # Strip the brick prefix to get the path relative to the volume root.
        rpath=`echo $file | sed -e "s#$BRICK\(.*\)#\1#g"`
        rdir=`dirname $rpath`

        # Copy the file itself through the mount point.
        cp -fv $file $MOUNT/$rdir

        # Re-apply every extended attribute except the glusterFS-internal ones.
        for xattr in `getfattr -e hex -m. -d $file 2>/dev/null | sed -e '/^#/d' | grep -v -E "trusted.glusterfs.*" | grep -v -E "trusted.afr.*" | grep -v "trusted.gfid"`; do
            key=`echo $xattr | cut -d"=" -f 1`
            value=`echo $xattr | cut -d"=" -f 2`
            setfattr $MOUNT/$rpath -n $key -v $value
        done
    done

  - To identify a list of files that are in a split-brain state, execute the command:
    # gluster volume heal test-volume info split-brain

  - If any files are listed in the output of the above command, compare them across the bricks in the replica set, delete the bad copies from the bricks, and retain the correct copy of each file. Choosing the correct copy of a file requires manual intervention by the system administrator.
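As noted in the procedure above, add-brick on a Distribute-replicate volume takes the replica count together with a matching number of bricks. A minimal sketch, assuming a replica 2 volume named test-volume, illustrative hosts server5 and server6, and that server1:/exp1 and server2:/exp2 form the replica pair being replaced; the old pair is then drained and removed with the same start/status/commit sequence shown above:

  # gluster volume add-brick test-volume replica 2 server5:/exp5 server6:/exp6
  # gluster volume remove-brick test-volume server1:/exp1 server2:/exp2 start
  # gluster volume remove-brick test-volume server1:/exp1 server2:/exp2 status
  # gluster volume remove-brick test-volume server1:/exp1 server2:/exp2 commit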
10.5.2. Replacing an Old Brick with a New Brick on a Replicate or Distribute-replicate Volume

A single brick can be replaced during a hardware failure, such as a disk failure or a server failure. The brick to be replaced can be either online or offline. This procedure applies only to volumes with replication. For Replicate and Distribute-replicate volume types, self-heal is triggered after the replacement to heal the data onto the new brick.
Procedure to replace an old brick with a new brick on a Replicate or Distribute-replicate volume:
- Ensure that the new brick (sys5:/home/gfs/r2_5) that replaces the old brick (sys0:/home/gfs/r2_0) is empty. Ensure that all the bricks are online; the brick that is being replaced may be in an offline state.
- Execute the replace-brick command with the force option:

  # gluster volume replace-brick r2 sys0:/home/gfs/r2_0 sys5:/home/gfs/r2_5 commit force
  volume replace-brick: success: replace-brick commit successful

- Check whether the new brick is online.
  # gluster volume status
  Status of volume: r2
  Gluster process                  Port     Online    Pid
  ---------------------------------------------------------
  Brick sys5:/home/gfs/r2_5        49156    Y         5731
  Brick sys1:/home/gfs/r2_1        49153    Y         5354
  Brick sys2:/home/gfs/r2_2        49154    Y         5365
  Brick sys3:/home/gfs/r2_3        49155    Y         5376

- Ensure that after the self-heal completes, the extended attributes are set to zero on the other bricks in the replica.

  # getfattr -d -m. -e hex /home/gfs/r2_1
  getfattr: Removing leading '/' from absolute path names
  # file: home/gfs/r2_1
  security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a66696c655f743a733000
  trusted.afr.r2-client-0=0x000000000000000000000000
  trusted.afr.r2-client-1=0x000000000000000000000000
  trusted.gfid=0x00000000000000000000000000000001
  trusted.glusterfs.dht=0x0000000100000000000000007ffffffe
  trusted.glusterfs.volume-id=0xde822e25ebd049ea83bfaa3c4be2b440

  Note that in this example, the extended attributes trusted.afr.r2-client-0 and trusted.afr.r2-client-1 are set to zero.
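While the heal is running, you can monitor its progress before checking the extended attributes. A minimal sketch, assuming the replicate volume r2 from the example above: the first command triggers an index self-heal, and the second lists the entries that still need healing (an empty list for every brick indicates that the heal has completed).

  # gluster volume heal r2
  # gluster volume heal r2 info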
10.5.3. Replacing an Old Brick with a New Brick on a Distribute Volume
Important
On a Distribute volume, replacing a brick using this procedure results in the loss of the data stored on the brick being replaced, because a Distribute volume keeps no replica from which the new brick can be healed. Back up the data through a volume mount before proceeding (see the sketch at the end of this procedure).
- Replace a brick with the commit force option:

  # gluster volume replace-brick VOLNAME <BRICK> <NEW-BRICK> commit force

  Example 10.6. Replace a Brick on a Distribute Volume

  # gluster volume replace-brick r2 sys0:/home/gfs/r2_0 sys5:/home/gfs/r2_5 commit force
  volume replace-brick: success: replace-brick commit successful

- Verify whether the new brick is online.

  # gluster volume status
  Status of volume: r2
  Gluster process                  Port     Online    Pid
  ---------------------------------------------------------
  Brick sys5:/home/gfs/r2_5        49156    Y         5731
  Brick sys1:/home/gfs/r2_1        49153    Y         5354
  Brick sys2:/home/gfs/r2_2        49154    Y         5365
  Brick sys3:/home/gfs/r2_3        49155    Y         5376
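Given the data-loss warning above, you may want to copy the affected data out through a client mount before replacing the brick. A minimal sketch, assuming the volume r2 is FUSE-mounted from server1 at the illustrative path /mnt/r2, and that /backup has sufficient space:

  # mount -t glusterfs server1:/r2 /mnt/r2
  # mkdir -p /backup/r2-before-replace
  # cp -a /mnt/r2/. /backup/r2-before-replace/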
Note
All the replace-brick command options except the commit force option are deprecated.