11.8. Shrinking Volumes
You can shrink volumes while the trusted storage pool is online and available. For example, you may need to remove a brick that has become inaccessible in a distributed volume because of a hardware or network failure.
When shrinking distributed replicated volumes, the number of bricks being removed must be a multiple of the replica count. For example, to shrink a distributed replicated volume with a replica count of 3, you need to remove bricks in multiples of 3 (such as 6, 9, 12, etc.). In addition, the bricks you are removing must be from the same sub-volume (the same replica set). In a non-replicated volume, all bricks must be available in order to migrate data and perform the remove brick operation. In a replicated or arbitrated volume, at least one of the data bricks in the replica set must be available.
The guidelines are identical when removing a distribution set from a distributed replicated volume with arbiter bricks. If you want to reduce the replica count of an arbitrated distributed replicated volume to replica 2, you must remove only the arbiter bricks. If you want to reduce a volume from arbitrated distributed replicated to distributed only, remove the arbiter brick and one replica brick from each replica subvolume.
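For example, to remove one entire replica subvolume from a hypothetical replica 3 distributed replicated volume (illustrative volume and brick names, not taken from this guide), list all three bricks of that subvolume in a single remove-brick operation:
# gluster volume remove-brick dist-rep-vol server1:/rhgs/brick4 server2:/rhgs/brick4 server3:/rhgs/brick4 start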
Shrinking a Volume
- Remove a brick using the following command:
# gluster volume remove-brick VOLNAME BRICK start
For example:
# gluster volume remove-brick test-volume server2:/rhgs/brick2 start
Remove Brick start successful
Note
If the remove-brick command is run with force or without any option, the data on the brick that you are removing will no longer be accessible at the glusterFS mount point. When using the start option, the data is migrated to other bricks, and on a successful commit the removed brick's information is deleted from the volume configuration. Data can still be accessed directly on the brick.
- You can view the status of the remove brick operation using the following command:
# gluster volume remove-brick VOLNAME BRICK status
For example:
# gluster volume remove-brick test-volume server2:/rhgs/brick2 status
        Node  Rebalanced-files    size  scanned  failures  skipped     status  run time in h:m:s
   ---------  ----------------  ------  -------  --------  -------  ---------  -----------------
   localhost              5032  43.4MB    27715         0     5604  completed            0:15:05
 10.70.43.41                 0  0Bytes        0         0        0  completed            0:08:18
volume rebalance: test-volume: success
- When the data migration shown in the previous status command is complete, run the following command to commit the brick removal:
# gluster volume remove-brick VOLNAME BRICK commit
For example:
# gluster volume remove-brick test-volume server2:/rhgs/brick2 commit
- After the brick removal, you can check the volume information using the following command:
# gluster volume info
The command displays information similar to the following:
# gluster volume info
Volume Name: test-volume
Type: Distribute
Status: Started
Number of Bricks: 3
Bricks:
Brick1: server1:/rhgs/brick1
Brick3: server3:/rhgs/brick3
Brick4: server4:/rhgs/brick4
11.8.1. Shrinking a Geo-replicated Volume
- Remove a brick using the following command:
# gluster volume remove-brick VOLNAME BRICK start
For example:
# gluster volume remove-brick MASTER_VOL MASTER_HOST:/rhgs/brick2 start
Remove Brick start successful
Note
If the remove-brick command is run with force or without any option, the data on the brick that you are removing will no longer be accessible at the glusterFS mount point. When using the start option, the data is migrated to other bricks, and on a successful commit the removed brick's information is deleted from the volume configuration. Data can still be accessed directly on the brick.
- Use geo-replication config checkpoint to ensure that all the data in that brick is synced to the slave.
- Set a checkpoint to help verify the status of the data synchronization.
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL config checkpoint now
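For example, for a hypothetical session between a master volume master-vol and a slave volume slave-vol hosted on slave1.example.com:
# gluster volume geo-replication master-vol slave1.example.com::slave-vol config checkpoint now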
- Verify the checkpoint completion for the geo-replication session using the following command:
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL status detail
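For example, using the same hypothetical session, run the command and confirm that the output reports the checkpoint as completed for every brick before proceeding:
# gluster volume geo-replication master-vol slave1.example.com::slave-vol status detail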
- You can view the status of the remove brick operation using the following command:
# gluster volume remove-brick VOLNAME BRICK status
For example:
# gluster volume remove-brick MASTER_VOL MASTER_HOST:/rhgs/brick2 status
- Stop the geo-replication session between the master and the slave:
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL stop
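For example, using the same hypothetical session names:
# gluster volume geo-replication master-vol slave1.example.com::slave-vol stop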
- When the data migration shown in the previous status command is complete, run the following command to commit the brick removal:
# gluster volume remove-brick VOLNAME BRICK commit
For example:
# gluster volume remove-brick MASTER_VOL MASTER_HOST:/rhgs/brick2 commit
- After the brick removal, you can check the volume information using the following command:
# gluster volume info
- Start the geo-replication session between the hosts:
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL start
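For example, using the same hypothetical session names:
# gluster volume geo-replication master-vol slave1.example.com::slave-vol start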
11.8.2. Shrinking a Tiered Volume
Warning
Tiering is considered deprecated as of Red Hat Gluster Storage 3.5. Red Hat no longer recommends its use, and does not support tiering in new deployments and existing deployments that upgrade to Red Hat Gluster Storage 3.5.3.
You can shrink a tiered volume while the trusted storage pool is online and available. For example, you may need to remove a brick that has become inaccessible because of a hardware or network failure.
11.8.2.1. Shrinking a Cold Tier Volume
- Detach the tier by performing the steps listed in Section 16.7, “Detaching a Tier from a Volume (Deprecated)”
- Remove a brick using the following command:
# gluster volume remove-brick VOLNAME BRICK start
For example:
# gluster volume remove-brick test-volume server2:/rhgs/brick2 start
Remove Brick start successful
Note
If the remove-brick command is run with force or without any option, the data on the brick that you are removing will no longer be accessible at the glusterFS mount point. When using the start option, the data is migrated to other bricks, and on a successful commit the removed brick's information is deleted from the volume configuration. Data can still be accessed directly on the brick.
- You can view the status of the remove brick operation using the following command:
# gluster volume remove-brick VOLNAME BRICK status
For example:
# gluster volume remove-brick test-volume server2:/rhgs/brick2 status
       Node  Rebalanced-files      size  scanned  failures       status
-----------  ----------------  --------  -------  --------  -----------
  localhost                16  16777216       52         0  in progress
192.168.1.1                13  16723211       47         0  in progress
- When the data migration shown in the previous status command is complete, run the following command to commit the brick removal:
# gluster volume remove-brick VOLNAME BRICK commit
For example:
# gluster volume remove-brick test-volume server2:/rhgs/brick2 commit
- Rerun the attach-tier command only with the required set of bricks:
# gluster volume tier VOLNAME attach [replica COUNT] BRICK...
For example:
# gluster volume tier test-volume attach replica 2 server1:/rhgs/tier1 server2:/rhgs/tier2 server1:/rhgs/tier3 server2:/rhgs/tier4
Important
When you attach a tier, an internal process called fix-layout commences to prepare the hot tier for use. This process takes time, and there will be a delay before the tiering activities start.
11.8.2.2. Shrinking a Hot Tier Volume
You must first decide which bricks should remain part of the hot tier and which bricks should be removed from it.
- Detach the tier by performing the steps listed in Section 16.7, “Detaching a Tier from a Volume (Deprecated)”
- Rerun the attach-tier command only with the required set of bricks:
# gluster volume tier VOLNAME attach [replica COUNT] BRICK...
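For example, to reattach only a hypothetical reduced set of hot tier bricks:
# gluster volume tier test-volume attach replica 2 server1:/rhgs/tier1 server2:/rhgs/tier2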
Important
When you reattach a tier, an internal process called fix-layout commences to prepare the hot tier for use. This process takes time, and there will be a delay before the tiering activities start.
11.8.3. Stopping a remove-brick Operation
A remove-brick operation that is in progress can be stopped by using the stop command.
Note
Files that were already migrated during the remove-brick operation will not be migrated back to the same brick when the operation is stopped.
To stop the remove-brick operation, use the following command:
# gluster volume remove-brick VOLNAME BRICK stop
For example:
# gluster volume remove-brick test-volume server1:/rhgs/brick1/ server2:/brick2/ stop
     Node  Rebalanced-files      size  scanned  failures  skipped       status  run-time in secs
     ----  ----------------  --------  -------  --------  -------  -----------  ----------------
localhost                23  376Bytes        34         0        0      stopped              2.00
     rhs1                 0    0Bytes        88         0        0      stopped              2.00
     rhs2                 0    0Bytes         0         0        0  not started              0.00
'remove-brick' process may be in the middle of a file migration. The process will be fully stopped once the migration of the file is complete. Please check remove-brick process for completion before doing any further brick related tasks on the volume.