8.4. Shrinking Volumes
You can shrink volumes while the trusted storage pool is online and available. For example, you may need to remove a brick that has become inaccessible in a distributed volume because of a hardware or network failure.
Note
When shrinking distributed replicated or distributed striped volumes, the number of bricks being removed must be a multiple of the replica or stripe count. For example, to shrink a distributed striped volume with a stripe count of 2, you need to remove bricks in multiples of 2 (such as 4, 6, 8, and so on). In addition, the bricks you are removing must be from the same sub-volume (the same replica or stripe set). In a non-replicated volume, all bricks must be available in order to migrate data and perform the remove-brick operation. In a replicated volume, at least one of the bricks in the replica set must be available.
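For example, to shrink a hypothetical distributed replicated volume test-volume with a replica count of 2 by one replica set, both bricks of the same replica set are removed in a single command (brick names are illustrative):
# gluster volume remove-brick test-volume server3:/exp3 server4:/exp4 start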
Shrinking a Volume
- Remove a brick using the following command:
# gluster volume remove-brick VOLNAME BRICK start
For example:
# gluster volume remove-brick test-volume server2:/exp2 start
Remove Brick start successful
Note
If the remove-brick command is run with force or without any option, the data on the brick that you are removing will no longer be accessible at the glusterFS mount point. When using the start option, the data is migrated to other bricks, and on a successful commit the removed brick's information is deleted from the volume configuration. Data can still be accessed directly on the brick.
- You can view the status of the remove-brick operation using the following command:
# gluster volume remove-brick VOLNAME BRICK status
For example:
# gluster volume remove-brick test-volume server2:/exp2 status
Node         Rebalanced-files  size      scanned  failures  status
---------    ----------------  --------  -------  --------  -----------
localhost    16                16777216  52       0         in progress
192.168.1.1  13                16723211  47       0         in progress
- When the data migration shown in the previous status command is complete, run the following command to commit the brick removal:
# gluster volume remove-brick VOLNAME BRICK commit
For example:
# gluster volume remove-brick test-volume server2:/exp2 commit
- After the brick removal, you can check the volume information using the following command:
# gluster volume info
The command displays information similar to the following:
# gluster volume info
Volume Name: test-volume
Type: Distribute
Status: Started
Number of Bricks: 3
Bricks:
Brick1: server1:/exp1
Brick3: server3:/exp3
Brick4: server4:/exp4
Shrinking a Geo-replicated Volume
- Remove a brick using the following command:
# gluster volume remove-brick VOLNAME BRICK start
For example:
# gluster volume remove-brick MASTER_VOL MASTER_HOST:/exp2 start
Remove Brick start successful
Note
If the remove-brick command is run with force or without any option, the data on the brick that you are removing will no longer be accessible at the glusterFS mount point. When using the start option, the data is migrated to other bricks, and on a successful commit the removed brick's information is deleted from the volume configuration. Data can still be accessed directly on the brick.
- Use geo-replication config checkpoint to ensure that all the data in that brick is synced to the slave.
- Set a checkpoint to help verify the status of the data synchronization.
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL config checkpoint now
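For example, with an illustrative master volume Volume1 and slave volume slave-vol hosted on storage.backup.com (names are hypothetical):
# gluster volume geo-replication Volume1 storage.backup.com::slave-vol config checkpoint now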
- Monitor the checkpoint output using the following command, until the status displays checkpoint as of <time of checkpoint creation> is completed at <time of completion>:
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL status
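For example, using the same illustrative session, poll the status until the checkpoint completion message appears (the output shown is indicative only):
# gluster volume geo-replication Volume1 storage.backup.com::slave-vol status
checkpoint as of 2016-03-01 12:00:00 is completed at 2016-03-01 12:10:00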
- You can view the status of the remove brick operation using the following command:
# gluster volume remove-brick VOLNAME BRICK status
For example:
# gluster volume remove-brick MASTER_VOL MASTER_HOST:/exp2 status
- Stop the geo-replication session between the master and the slave:
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL stop
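For example, for the illustrative session above:
# gluster volume geo-replication Volume1 storage.backup.com::slave-vol stop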
- When the data migration shown in the previous status command is complete, run the following command to commit the brick removal:
# gluster volume remove-brick VOLNAME BRICK commit
For example:
# gluster volume remove-brick MASTER_VOL MASTER_HOST:/exp2 commit
- After the brick removal, you can check the volume information using the following command:
# gluster volume info
- Start the geo-replication session between the hosts:
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL start
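For example, to resume the illustrative session:
# gluster volume geo-replication Volume1 storage.backup.com::slave-vol start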
8.4.1. Stopping a remove-brick Operation
Important
Stopping a remove-brick operation is a Technology Preview feature. Technology Preview features are not fully supported under Red Hat subscription level agreements (SLAs), may not be functionally complete, and are not intended for production use. However, these features provide early access to upcoming product innovations, enabling customers to test functionality and provide feedback during the development process. As Red Hat considers making future iterations of Technology Preview features generally available, we will provide commercially reasonable efforts to resolve any reported issues that customers experience when using these features.
A remove-brick operation that is in progress can be stopped using the stop command.
Note
Files that were already migrated during the remove-brick operation will not be migrated back to the same brick when the operation is stopped.
To stop a remove-brick operation, use the following command:
# gluster volume remove-brick VOLNAME BRICK stop
For example:
# gluster volume remove-brick di rhs1:/brick1/di21 rhs1:/brick1/di21 stop
Node       Rebalanced-files  size      scanned  failures  skipped  status       run-time in secs
----       ----------------  ----      -------  --------  -------  ------       ----------------
localhost  23                376Bytes  34       0         0        stopped      2.00
rhs1       0                 0Bytes    88       0         0        stopped      2.00
rhs2       0                 0Bytes    0        0         0        not started  0.00
'remove-brick' process may be in the middle of a file migration. The process will be fully stopped once the migration of the file is complete. Please check remove-brick process for completion before doing any further brick related tasks on the volume.