11.9. Migrating Volumes
Note
Before performing a replace-brick operation, review the known issues related to the replace-brick operation in the Red Hat Gluster Storage Release Notes.
11.9.1. Replacing a Subvolume on a Distribute or Distribute-replicate Volume
- Add the new bricks to the volume.
# gluster volume add-brick VOLNAME [replica <COUNT>] NEW-BRICK
Example 11.1. Adding a Brick to a Distribute Volume
# gluster volume add-brick test-volume server5:/rhgs/brick5
Add Brick successful
- Verify the volume information using the command:
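A minimal check, assuming the test-volume from Example 11.1:
# gluster volume info test-volume
Confirm that the new brick (server5:/rhgs/brick5) appears in the brick list and that the Number of Bricks value has increased.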
Note
In the case of a Distribute-replicate volume, you must specify the replica count in the add-brick command and provide the same number of bricks as the replica count to the add-brick command.
- Remove the bricks to be replaced from the subvolume.
- Start the remove-brick operation using the command:
# gluster volume remove-brick VOLNAME [replica <COUNT>] <BRICK> start
Example 11.2. Start a remove-brick Operation on a Distribute Volume
# gluster volume remove-brick test-volume server2:/rhgs/brick2 start
Remove Brick start successful
- View the status of the remove-brick operation using the command:
# gluster volume remove-brick VOLNAME [replica <COUNT>] BRICK status
Example 11.3. View the Status of a remove-brick Operation
Keep monitoring the remove-brick operation status by executing the above command. In the above example, the estimated time for the rebalance to complete is 10 minutes. When the value of the status field is set to complete in the output of the remove-brick status command, proceed further.
- Commit the remove-brick operation using the command:
# gluster volume remove-brick VOLNAME [replica <COUNT>] <BRICK> commit
Example 11.4. Commit the remove-brick Operation on a Distribute Volume
# gluster volume remove-brick test-volume server2:/rhgs/brick2 commit
- Verify the volume information using the command:
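A minimal check, assuming the test-volume used above:
# gluster volume info test-volume
Confirm that the removed brick (server2:/rhgs/brick2) is no longer listed and that the Number of Bricks value has decreased.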
- Verify the content on the brick after committing the remove-brick operation on the volume. If any files are left over, copy them through a FUSE or NFS mount.
- Verify if there are any pending files on the bricks of the subvolume.
Along with the files, all the application-specific extended attributes must be copied. glusterFS also uses extended attributes to store its internal data. The extended attributes used by glusterFS are of the form trusted.glusterfs.*, trusted.afr.*, and trusted.gfid. Any extended attributes other than the ones listed above must also be copied.
To copy the application-specific extended attributes and to achieve an effect similar to the one described above, use the following shell script:
Syntax:
# copy.sh <glusterfs-mount-point> <brick>
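A minimal sketch of what such a script could look like, assuming the GNU attr utilities (getfattr and setfattr) are installed, is shown below; it is illustrative rather than the exact script referenced above.
#!/bin/bash
# copy.sh -- illustrative sketch: copy application-specific extended
# attributes from files on a brick to the matching files on the mount.
MOUNT=$1    # glusterFS mount point, for example /mnt/glusterfs
BRICK=$2    # brick path, for example /rhgs/brick1

# Walk the brick, skipping glusterFS's internal .glusterfs directory.
find "$BRICK" -path "$BRICK/.glusterfs" -prune -o -type f -print |
while read -r file; do
    rel=${file#"$BRICK"/}
    # Dump the extended attributes in hex so values survive the round trip.
    getfattr --absolute-names -d -m - -e hex "$file" 2>/dev/null |
    while IFS='=' read -r name value; do
        case "$name" in
            # Skip glusterFS-internal attributes; copy everything else.
            trusted.glusterfs.*|trusted.afr.*|trusted.gfid) continue ;;
            \#*|'') continue ;;   # skip the "# file:" header and blank lines
        esac
        setfattr -n "$name" -v "$value" "$MOUNT/$rel"
    done
done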
Example 11.5. Code Snippet Usage
If the mount point is /mnt/glusterfs and the brick path is /rhgs/brick1, then the script must be run as:
# copy.sh /mnt/glusterfs /rhgs/brick1
- To identify a list of files that are in a split-brain state, execute the command:
# gluster volume heal test-volume info split-brain
- If any files are listed in the output of the above command, compare the files across the bricks in a replica set, delete the bad files from the brick, and retain the correct copy of the file. Manual intervention by the system administrator is required to choose the correct copy of the file.
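One hypothetical way to compare the copies, assuming a file named file1 on replica bricks server1:/rhgs/brick1 and server2:/rhgs/brick1, is to compare checksums and timestamps directly on the bricks:
# md5sum /rhgs/brick1/file1              # run on server1 (hypothetical paths)
# ssh server2 md5sum /rhgs/brick1/file1  # run against server2's copy
# stat /rhgs/brick1/file1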
11.9.2. Replacing an Old Brick with a New Brick on a Replicate or Distribute-replicate Volume
- Ensure that the new brick (server5:/rhgs/brick1) that replaces the old brick (server0:/rhgs/brick1) is empty. Ensure that all other bricks are online. The brick that is being replaced can be in an offline state.
- Execute the replace-brick command with the force option:
# gluster volume replace-brick test-volume server0:/rhgs/brick1 server5:/rhgs/brick1 commit force
volume replace-brick: success: replace-brick commit successful
- Check if the new brick is online.
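A minimal check, assuming the test-volume above:
# gluster volume status test-volume
The new brick (server5:/rhgs/brick1) should be listed with Online set to Y.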
- Data on the newly added brick is healed automatically. Healing might take time depending on the amount of data to be healed. It is recommended to check the heal information after replacing a brick to make sure that all the data has been healed before replacing or removing any other brick.
# gluster volume heal VOL_NAME info
For example:
# gluster volume heal test-volume info
The value of the Number of entries field is displayed as zero when the heal is complete.
11.9.3. Replacing an Old Brick with a New Brick on a Distribute Volume
- Before making any changes, check the contents of the brick that you want to remove from the volume.
# ls /mount/point/OLDBRICK
file1 file2 ... file5
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Add the new brick to the volume.
# gluster volume add-brick VOLNAME NEWSERVER:NEWBRICK
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Start removing the old brick.
# gluster volume remove-brick VOLNAME OLDSERVER:OLDBRICK start
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Wait until the remove-brick status command shows that the removal is complete.
- Finish removing the old brick.
# gluster volume remove-brick VOLNAME OLDSERVER:OLDBRICK commit
- Verify that all files that were on the removed brick are still present on the volume.
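For example, assuming the volume is mounted at the hypothetical path /mnt/glusterfs, list the mount and confirm that the files seen in the first step are still visible:
# ls /mnt/glusterfs
file1 file2 ... file5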
11.9.4. Replacing an Old Brick with a New Brick on a Dispersed or Distributed-dispersed Volume
- Ensure that the new brick that replaces the old brick is empty. The brick that must be replaced can be in an offline state, but all other bricks must be online.
- Execute the replace-brick command with the force option:
# gluster volume replace-brick VOL_NAME old_brick_path new_brick_path commit force
For example:
# gluster volume replace-brick test-volume server1:/rhgs/brick2 server1:/rhgs/brick2new commit force
volume replace-brick: success: replace-brick commit successful
The new brick that you are adding can be on the same server, or you can add a new server and then a new brick.
- Check if the new brick is online.
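A minimal check, assuming the example above:
# gluster volume status test-volume
The replacement brick (server1:/rhgs/brick2new) should be listed with Online set to Y.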
- Data on the newly added brick is healed automatically. Healing might take time depending on the amount of data to be healed. It is recommended to check the heal information after replacing a brick to make sure that all the data has been healed before replacing or removing any other brick.
# gluster volume heal VOL_NAME info
For example:
# gluster volume heal test-volume info
The value of the Number of entries field is displayed as zero when the heal is complete.
- Red Hat Gluster Storage 3.4 introduces the summary option of the heal info command. This command displays the statistics of entries pending heal in split-brain and the entries undergoing healing. It prints only the entry count, not the actual file names or GFIDs. To get the summary of a volume, run the following command:
# gluster volume heal VOLNAME info summary
For example:
# gluster volume heal test-volume info summary
Note
The summary option provides detailed information about the bricks, unlike the info command. The summary information is obtained in a similar way to the info command. The --xml parameter provides the output of the summary option in XML format.
11.9.5. Reconfiguring a Brick in a Volume
The reset-brick subcommand is useful when you want to reconfigure a brick rather than replace it. reset-brick lets you replace a brick with another brick of the same location and UUID. For example, if you initially configured bricks so that they were identified with a hostname, but you want to use that hostname somewhere else, you can use reset-brick to stop the brick, reconfigure it so that it is identified by an IP address instead of the hostname, and return the reconfigured brick to the cluster.
- Ensure that the quorum minimum will still be met when the brick that you want to reset is taken offline.
- If possible, Red Hat recommends stopping I/O and verifying that no heal operations are pending on the volume.
- Run the following command to kill the brick that you want to reset.
# gluster volume reset-brick VOLNAME HOSTNAME:BRICKPATH start
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Configure the offline brick according to your needs.
- Check that the volume's Volume ID displayed by gluster volume info matches the volume-id (if any) of the offline brick.
# gluster volume info VOLNAME
# cat /var/lib/glusterd/vols/VOLNAME/VOLNAME.HOSTNAME.BRICKPATH.vol | grep volume-id
For example, in the following dispersed volume, the Volume ID and the volume-id are both ab8a981a-a6d9-42f2-b8a5-0b28fe2c4548.
# cat /var/lib/glusterd/vols/vol/vol.myhost.brick-gluster-vol-1.vol | grep volume-id
option volume-id ab8a981a-a6d9-42f2-b8a5-0b28fe2c4548
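The matching Volume ID can be confirmed on the volume side; assuming the volume in this example is named vol, as the path under /var/lib/glusterd/vols suggests:
# gluster volume info vol | grep "Volume ID"
Volume ID: ab8a981a-a6d9-42f2-b8a5-0b28fe2c4548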
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Bring the reconfigured brick back online. There are two options for this:
- If your brick did not have a volume-id in the previous step, run:
# gluster volume reset-brick VOLNAME HOSTNAME:BRICKPATH HOSTNAME:BRICKPATH commit
- If your brick's volume-id matches your volume's identifier, Red Hat recommends adding the force keyword to ensure that the operation succeeds:
# gluster volume reset-brick VOLNAME HOSTNAME:BRICKPATH HOSTNAME:BRICKPATH commit force