11.8. Shrinking Volumes

You can shrink volumes while the trusted storage pool is online and available. For example, you may need to remove a brick that has become inaccessible in a distributed volume because of a hardware or network failure.
When shrinking distributed replicated volumes, the number of bricks being removed must be a multiple of the replica count. For example, to shrink a distributed replicated volume with a replica count of 3, you need to remove bricks in multiples of 3 (such as 6, 9, 12, etc.). In addition, the bricks you are removing must be from the same sub-volume (the same replica set). In a non-replicated volume, all bricks must be available in order to migrate data and perform the remove brick operation. In a replicated or arbitrated volume, at least one of the data bricks in the replica set must be available.
The guidelines are identical when removing a distribution set from a distributed replicated volume with arbiter bricks. If you want to reduce the replica count of an arbitrated distributed replicated volume to replica 2, you must remove only the arbiter bricks. If you want to reduce a volume from arbitrated distributed replicated to distributed only, remove the arbiter brick and one replica brick from each replica subvolume.
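For instance, the following hedged example removes one complete replica set (all three bricks of the same sub-volume) from a hypothetical distributed replicated volume with a replica count of 3; the server and brick names are illustrative only:
# gluster volume remove-brick test-volume server1:/rhgs/brick2 server2:/rhgs/brick2 server3:/rhgs/brick2 start
Because all three bricks belong to the same replica set, the number of bricks removed is a multiple of the replica count, and the data they hold can be migrated to the remaining sub-volumes before the removal is committed.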

Shrinking a Volume

  1. Remove a brick using the following command:
    # gluster volume remove-brick VOLNAME BRICK start
    For example:
    # gluster volume remove-brick test-volume server2:/rhgs/brick2 start
    Remove Brick start successful

    Note

    If the remove-brick command is run with force or without any option, the data on the brick that you are removing will no longer be accessible at the glusterFS mount point. When using the start option, the data is migrated to other bricks, and on a successful commit the removed brick's information is deleted from the volume configuration. Data can still be accessed directly on the brick. An example of the force variant is shown after this procedure.
  2. You can view the status of the remove brick operation using the following command:
    # gluster volume remove-brick VOLNAME BRICK status
    For example:
    # gluster volume remove-brick test-volume server2:/rhgs/brick2 status
    Node         Rebalanced-files    size  scanned  failures  skipped     status  run time in h:m:s
    -----------  ----------------  ------  -------  --------  -------  ---------  -----------------
    localhost                5032  43.4MB    27715         0     5604  completed            0:15:05
    10.70.43.41                 0  0Bytes        0         0        0  completed            0:08:18

    volume rebalance: test-volume: success
  3. When the data migration shown in the previous status command is complete, run the following command to commit the brick removal:
    # gluster volume remove-brick VOLNAME BRICK commit
    For example,
    # gluster volume remove-brick test-volume server2:/rhgs/brick2 commit
  4. After the brick removal, you can check the volume information using the following command:
    # gluster volume info 
    The command displays information similar to the following:
    # gluster volume info
    Volume Name: test-volume
    Type: Distribute
    Status: Started
    Number of Bricks: 3
    Bricks:
    Brick1: server1:/rhgs/brick1
    Brick3: server3:/rhgs/brick3
    Brick4: server4:/rhgs/brick4
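
As noted in step 1, running remove-brick with the force option skips data migration entirely, so use it only when the data on the brick is no longer needed or is already available elsewhere. A minimal sketch of the force variant, reusing the brick from the example above, would be:
# gluster volume remove-brick test-volume server2:/rhgs/brick2 force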

11.8.1. Shrinking a Geo-replicated Volume

  1. Remove a brick using the following command:
    # gluster volume remove-brick VOLNAME BRICK start
    For example:
    # gluster volume remove-brick MASTER_VOL MASTER_HOST:/rhgs/brick2 start
    Remove Brick start successful

    Note

    If the remove-brick command is run with force or without any option, the data on the brick that you are removing will no longer be accessible at the glusterFS mount point. When using the start option, the data is migrated to other bricks, and on a successful commit the removed brick's information is deleted from the volume configuration. Data can still be accessed directly on the brick.
  2. Use a geo-replication config checkpoint to ensure that all the data on that brick is synced to the slave. An example of this checkpoint workflow is shown after this procedure.
    1. Set a checkpoint to help verify the status of the data synchronization.
      # gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL config checkpoint now
    2. Verify the checkpoint completion for the geo-replication session using the following command:
      # gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL status detail
  3. You can view the status of the remove brick operation using the following command:
    # gluster volume remove-brick VOLNAME BRICK status
    For example:
    # gluster volume remove-brick  MASTER_VOL MASTER_HOST:/rhgs/brick2 status
  4. Stop the geo-replication session between the master and the slave:
    # gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL stop
  5. When the data migration shown in the previous status command is complete, run the following command to commit the brick removal:
    # gluster volume remove-brick VOLNAME BRICK commit
    For example,
    # gluster volume remove-brick  MASTER_VOL MASTER_HOST:/rhgs/brick2 commit
  6. After the brick removal, you can check the volume information using the following command:
    # gluster volume info 
  7. Start the geo-replication session between the hosts:
    # gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL start
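
As an illustration of the checkpoint workflow in step 2, assume a hypothetical geo-replication session from a master volume named master-vol to a slave volume named slave-vol on the host slave.example.com; the commands would then look like this:
# gluster volume geo-replication master-vol slave.example.com::slave-vol config checkpoint now
# gluster volume geo-replication master-vol slave.example.com::slave-vol status detail
Wait until the status detail output reports the checkpoint as completed before stopping the session and committing the brick removal.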

11.8.2. Shrinking a Tiered Volume

You can shrink a tiered volume while the trusted storage pool is online and available. For example, you may need to remove a brick that has become inaccessible because of a hardware or network failure.

11.8.2.1. Shrinking a Cold Tier Volume

  1. Detach the tier by performing the steps listed in Section 16.7, “Detaching a Tier from a Volume”
  2. Remove a brick using the following command:
    # gluster volume remove-brick VOLNAME BRICK start
    For example:
    # gluster volume remove-brick test-volume server2:/rhgs/brick2 start
    Remove Brick start successful

    Note

    If the remove-brick command is run with force or without any option, the data on the brick that you are removing will no longer be accessible at the glusterFS mount point. When using the start option, the data is migrated to other bricks, and on a successful commit the removed brick's information is deleted from the volume configuration. Data can still be accessed directly on the brick.
  3. You can view the status of the remove brick operation using the following command:
    # gluster volume remove-brick VOLNAME BRICK status
    For example:
    # gluster volume remove-brick test-volume server2:/rhgs/brick2 status
          Node    Rebalanced-files          size       scanned      failures         status
     ---------         -----------   -----------   -----------   -----------   ------------
     localhost                  16      16777216            52             0    in progress
    192.168.1.1                 13      16723211            47             0    in progress
  4. When the data migration shown in the previous status command is complete, run the following command to commit the brick removal:
    # gluster volume remove-brick VOLNAME BRICK commit
    For example,
    # gluster volume remove-brick test-volume server2:/rhgs/brick2 commit
  5. Rerun the attach-tier command only with the required set of bricks:
    # gluster volume tier VOLNAME attach [replica COUNT] BRICK...
    For example,
    # gluster volume tier test-volume attach replica 2 server1:/rhgs/tier1 server2:/rhgs/tier2 server1:/rhgs/tier3 server2:/rhgs/tier4

    Important

    When you attach a tier, an internal process called fix-layout starts in order to prepare the hot tier for use. This process takes time, and there will be a delay before tiering activities start.
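
If you want to verify that this preparation has finished, one option is to query the tiering status. The following is a sketch that assumes your release provides the gluster volume tier VOLNAME status command:
# gluster volume tier test-volume status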

11.8.2.2. Shrinking a Hot Tier Volume

You must first decide which bricks should remain part of the hot tier and which bricks should be removed from it.
  1. Detach the tier by performing the steps listed in Section 16.7, “Detaching a Tier from a Volume”
  2. Rerun the attach-tier command only with the required set of bricks (see the example after this procedure):
    # gluster volume tier VOLNAME attach [replica COUNT] brick...

    Important

    When you reattach a tier, an internal process called fix-layout starts in order to prepare the hot tier for use. This process takes time, and there will be a delay before tiering activities start.
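
For instance, a hedged reattachment example with a reduced set of bricks (hypothetical server and brick names, mirroring the cold tier example above) might look like this:
# gluster volume tier test-volume attach replica 2 server1:/rhgs/tier1 server2:/rhgs/tier2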

11.8.3. Stopping a remove-brick Operation

A remove-brick operation that is in progress can be stopped by using the stop command.

Note

Files that were already migrated during the remove-brick operation are not migrated back to the same brick when the operation is stopped.
To stop a remove-brick operation, use the following command:
# gluster volume remove-brick VOLNAME BRICK stop
For example:
# gluster volume remove-brick test-volume server1:/rhgs/brick1/ server2:/brick2/ stop

Node       Rebalanced-files      size  scanned  failures  skipped       status  run-time in secs
---------  ----------------  --------  -------  --------  -------  -----------  ----------------
localhost                23  376Bytes       34         0        0      stopped              2.00
rhs1                      0    0Bytes       88         0        0      stopped              2.00
rhs2                      0    0Bytes        0         0        0  not started              0.00
'remove-brick' process may be in the middle of a file migration.
The process will be fully stopped once the migration of the file is complete.
Please check remove-brick process for completion before doing any further brick related tasks on the volume.
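
To confirm that the stopped operation has fully halted before performing any further brick-related tasks, you can rerun the status command, for example:
# gluster volume remove-brick test-volume server1:/rhgs/brick1/ server2:/brick2/ status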