10.4. Shrinking Volumes


You can shrink volumes while the trusted storage pool is online and available. For example, you may need to remove a brick that has become inaccessible in a distributed volume because of a hardware or network failure.

Note

When shrinking distributed replicated volumes, the number of bricks being removed must be a multiple of the replica count. For example, to shrink a distributed replicated volume with a replica count of 2, you need to remove bricks in multiples of 2 (such as 4, 6, or 8). In addition, the bricks you are removing must be from the same sub-volume (the same replica set). In a non-replicated volume, all bricks must be available in order to migrate data and perform the remove-brick operation. In a replicated volume, at least one of the bricks in the replica set must be available.

Shrinking a Volume

  1. Remove a brick using the following command:
    # gluster volume remove-brick VOLNAME BRICK start
    For example:
    # gluster volume remove-brick test-volume server2:/exp2 start
    Remove Brick start successful

    Note

    If the remove-brick command is run with the force option, or without any option, the data on the brick that you are removing will no longer be accessible at the glusterFS mount point. When the start option is used, the data is migrated to other bricks, and on a successful commit the removed brick's information is deleted from the volume configuration. Data can still be accessed directly on the brick.
  2. You can view the status of the remove brick operation using the following command:
    # gluster volume remove-brick VOLNAME BRICK status
    For example:
    # gluster volume remove-brick test-volume server2:/exp2 status
          Node    Rebalanced-files          size       scanned      failures         status
     ---------         -----------   -----------   -----------   -----------   ------------
     localhost                  16      16777216            52             0    in progress
    192.168.1.1                 13      16723211            47             0    in progress
  3. When the data migration shown in the previous status command is complete, run the following command to commit the brick removal:
    # gluster volume remove-brick VOLNAME BRICK commit
    For example:
    # gluster volume remove-brick test-volume server2:/exp2 commit
  4. After the brick removal, you can check the volume information using the following command:
    # gluster volume info 
    The command displays information similar to the following:
    # gluster volume info
    Volume Name: test-volume
    Type: Distribute
    Status: Started
    Number of Bricks: 3
    Bricks:
    Brick1: server1:/exp1
    Brick3: server3:/exp3
    Brick4: server4:/exp4

10.4.1. Shrinking a Geo-replicated Volume

  1. Remove a brick using the following command:
    # gluster volume remove-brick VOLNAME BRICK start
    For example:
    # gluster volume remove-brick MASTER_VOL MASTER_HOST:/exp2 start
    Remove Brick start successful

    Note

    If the remove-brick command is run with the force option, or without any option, the data on the brick that you are removing will no longer be accessible at the glusterFS mount point. When the start option is used, the data is migrated to other bricks, and on a successful commit the removed brick's information is deleted from the volume configuration. Data can still be accessed directly on the brick.
  2. Use a geo-replication configuration checkpoint to ensure that all the data on that brick is synced to the slave.
    1. Set a checkpoint to help verify the status of the data synchronization.
      # gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL config checkpoint now
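      For example, assuming a master volume named master-vol and a slave volume named slave-vol on the host slave.example.com:
      # gluster volume geo-replication master-vol slave.example.com::slave-vol config checkpoint now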
    2. Verify the checkpoint completion for the geo-replication session using the following command:
      # gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL status detail
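      For example, for the same assumed session:
      # gluster volume geo-replication master-vol slave.example.com::slave-vol status detail
      The checkpoint is complete when the checkpoint fields in the status output indicate completion.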
  3. You can view the status of the remove brick operation using the following command:
    # gluster volume remove-brick VOLNAME BRICK status
    For example:
    # gluster volume remove-brick MASTER_VOL MASTER_HOST:/exp2 status
  4. Stop the geo-replication session between the master and the slave:
    # gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL stop
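    For example, for the same assumed master and slave volumes:
    # gluster volume geo-replication master-vol slave.example.com::slave-vol stop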
  5. When the data migration shown in the previous status command is complete, run the following command to commit the brick removal:
    # gluster volume remove-brick VOLNAME BRICK commit
    For example:
    # gluster volume remove-brick MASTER_VOL MASTER_HOST:/exp2 commit
  6. After the brick removal, you can check the volume information using the following command:
    # gluster volume info 
  7. Start the geo-replication session between the hosts:
    # gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL start
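    For example, for the same assumed master and slave volumes:
    # gluster volume geo-replication master-vol slave.example.com::slave-vol start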

10.4.2. Shrinking a Tiered Volume

You can shrink a tiered volume while the trusted storage pool is online and available. For example, you may need to remove a brick that has become inaccessible because of a hardware or network failure.

10.4.2.1. Shrinking a Cold Tier Volume

  1. Detach the tier by performing the steps listed in Section 12.7, “Detaching a Tier from a Volume”.
  2. Remove a brick using the following command:
    # gluster volume remove-brick VOLNAME BRICK start
    For example:
    # gluster volume remove-brick test-volume server2:/exp2 start
    Remove Brick start successful

    Note

    If the remove-brick command is run with the force option, or without any option, the data on the brick that you are removing will no longer be accessible at the glusterFS mount point. When the start option is used, the data is migrated to other bricks, and on a successful commit the removed brick's information is deleted from the volume configuration. Data can still be accessed directly on the brick.
  3. You can view the status of the remove brick operation using the following command:
    # gluster volume remove-brick VOLNAME BRICK status
    For example:
    # gluster volume remove-brick test-volume server2:/exp2 status
          Node    Rebalanced-files          size       scanned      failures         status
     ---------         -----------   -----------   -----------   -----------   ------------
     localhost                  16      16777216            52             0    in progress
    192.168.1.1                 13      16723211            47             0    in progress
  4. When the data migration shown in the previous status command is complete, run the following command to commit the brick removal:
    # gluster volume remove-brick VOLNAME BRICK commit
    For example:
    # gluster volume remove-brick test-volume server2:/exp2 commit
  5. Rerun the attach-tier command only with the required set of bricks:
    # gluster volume tier VOLNAME attach [replica COUNT] BRICK...
    For example,
    # gluster volume tier test-volume attach replica 2 server1:/exp1/tier1 server1:/exp2/tier2 server2:/exp3/tier3 server2:/exp5/tier5

    Important

    When you attach a tier, an internal process called fix-layout starts in order to prepare the hot tier for use. This process takes time, and there will be a delay before tiering activities begin.
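    After the attach, you can check the state of the tiering process, for example with the tier status command on the example volume used above:
    # gluster volume tier test-volume status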

10.4.2.2. Shrinking a Hot Tier Volume

You must first decide which bricks should remain part of the hot tier and which bricks should be removed from it.
  1. Detach the tier by performing the steps listed in Section 12.7, “Detaching a Tier from a Volume”.
  2. Rerun the attach-tier command only with the required set of bricks:
    # gluster volume tier VOLNAME attach [replica COUNT] BRICK...
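    For example, to reattach the hot tier with an assumed reduced set of bricks:
    # gluster volume tier test-volume attach replica 2 server1:/exp1/tier1 server2:/exp3/tier3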

    Important

    When you reattach a tier, an internal process called fix-layout starts in order to prepare the hot tier for use. This process takes time, and there will be a delay before tiering activities begin.

10.4.3. Stopping a remove-brick Operation

Important

Stopping a remove-brick operation is a technology preview feature. Technology Preview features are not fully supported under Red Hat subscription level agreements (SLAs), may not be functionally complete, and are not intended for production use. However, these features provide early access to upcoming product innovations, enabling customers to test functionality and provide feedback during the development process. As Red Hat considers making future iterations of Technology Preview features generally available, we will provide commercially reasonable efforts to resolve any reported issues that customers experience when using these features.
A remove-brick operation that is in progress can be stopped by using the stop command.

Note

Files that were already migrated during the remove-brick operation will not be migrated back to the same brick when the operation is stopped.
To stop a remove-brick operation, use the following command:
# gluster volume remove-brick VOLNAME BRICK stop
For example:
# gluster volume remove-brick di rhs1:/brick1/di21 stop

Node        Rebalanced-files      size      scanned   failures   skipped   status        run-time in secs
----        ----------------      ----      -------   --------   -------   ------        ----------------
localhost                 23      376Bytes       34          0         0   stopped                   2.00
rhs1                       0      0Bytes         88          0         0   stopped                   2.00
rhs2                       0      0Bytes          0          0         0   not started               0.00
'remove-brick' process may be in the middle of a file migration.
The process will be fully stopped once the migration of the file is complete.
Please check remove-brick process for completion before doing any further brick related tasks on the volume.
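You can re-run the status command to verify that the operation has fully stopped before performing any further brick-related tasks, for example:
# gluster volume remove-brick di rhs1:/brick1/di21 status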