8.5. Migrating Volumes


Data can be redistributed across bricks while the trusted storage pool is online and available. Before replacing bricks on the new servers, ensure that the new servers are successfully added to the trusted storage pool.

Note

Before performing a replace-brick operation, review the known issues related to replace-brick operation in the Red Hat Storage 3.0 Release Notes.

8.5.1. Replacing a Subvolume on a Distribute or Distribute-replicate Volume

This procedure applies only when at least one brick of the subvolume to be replaced is online. For a Distribute volume, the brick to be replaced must be online. For a Distribute-replicate volume, at least one brick of the replica set to be replaced must be online.
To replace the entire subvolume with new bricks on a Distribute-replicate volume, follow these steps:
  1. Add the new bricks to the volume.
    # gluster volume add-brick VOLNAME [<stripe|replica> <COUNT>] NEW-BRICK

    Example 8.1. Adding a Brick to a Distribute Volume

    # gluster volume add-brick test-volume server5:/exp5
    Add Brick successful
  2. Verify the volume information using the command:
    # gluster volume info
     Volume Name: test-volume
        Type: Distribute
        Status: Started
        Number of Bricks: 5
        Bricks:
        Brick1: server1:/exp1
        Brick2: server2:/exp2
        Brick3: server3:/exp3
        Brick4: server4:/exp4
        Brick5: server5:/exp5

    Note

    In the case of a Distribute-replicate or stripe volume, you must specify the replica or stripe count in the add-brick command and provide the same number of new bricks as the replica or stripe count.
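    For instance, on a hypothetical distribute-replicate volume named dr-volume with a replica count of 2, a pair of new bricks would be added together (the volume, server, and brick names here are only illustrative):
    # gluster volume add-brick dr-volume replica 2 server5:/exp5 server6:/exp6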
  3. Remove the bricks to be replaced from the subvolume.
    1. Start the remove-brick operation using the command:
      # gluster volume remove-brick VOLNAME [replica <COUNT>] <BRICK> start

      Example 8.2. Start a remove-brick operation on a distribute volume

      # gluster volume remove-brick test-volume server2:/exp2 start
      Remove Brick start successful
    2. View the status of the remove-brick operation using the command:
      # gluster volume remove-brick VOLNAME [replica <COUNT>] <BRICK> status

      Example 8.3. View the Status of remove-brick Operation

      # gluster volume remove-brick test-volume server2:/exp2 status
      Node     Rebalanced-files size        scanned failures status
      ------------------------------------------------------------------
      server2  16               16777216    52      0        in progress
      Keep monitoring the remove-brick operation by re-executing the above command. When the value of the status field is set to complete in the output of the remove-brick status command, proceed to the next step.
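      A minimal polling sketch, assuming the status column reads complete (or completed, depending on the release) once data migration finishes, as in the output shown above:
      while ! gluster volume remove-brick test-volume server2:/exp2 status | grep -qE 'complete'
      do
          sleep 10    # re-check every 10 seconds; a failed status would also need handling
      done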
    3. Commit the remove-brick operation using the command:
      # gluster volume remove-brick VOLNAME [replica <COUNT>] <BRICK> commit

      Example 8.4. Commit the remove-brick Operation on a Distribute Volume

      # gluster volume remove-brick test-volume server2:/exp2 commit
    4. Verify the volume information using the command:
      # gluster volume info
      Volume Name: test-volume
      Type: Distribute
      Status: Started
      Number of Bricks: 4
      Bricks:
      Brick1: server1:/exp1
      Brick3: server3:/exp3
      Brick4: server4:/exp4
      Brick5: server5:/exp5
    5. Verify the content on the brick after committing the remove-brick operation on the volume. If any files are left over, copy them through the FUSE or NFS mount.
      1. Verify if there are any pending files on the bricks of the subvolume.
        Along with the files, all application-specific extended attributes must be copied. glusterFS also uses extended attributes to store its internal data; these are of the form trusted.glusterfs.*, trusted.afr.*, and trusted.gfid. Any extended attributes other than the ones listed above must also be copied.
        To copy the application-specific extended attributes and achieve an effect similar to the one described above, use the following shell script:
        Syntax:
        # copy.sh <glusterfs-mount-point> <brick>

        Example 8.5. Code Snippet Usage

        If the mount point is /mnt/glusterfs and the brick path is /export/brick1, run the script as:
        # copy.sh /mnt/glusterfs /export/brick1
        #!/bin/bash
        # copy.sh - copy the remaining files from a brick to the volume through its
        # glusterFS mount point, preserving application-specific extended attributes.
        # glusterFS-internal attributes (trusted.glusterfs.*, trusted.afr.*,
        # trusted.gfid) are intentionally skipped.

        MOUNT=$1        # glusterFS mount point, for example /mnt/glusterfs
        BRICK=$2        # brick path, for example /export/brick1

        for file in `find $BRICK ! -type d`; do
            # path of the file relative to the brick root, and its parent directory
            rpath=`echo $file | sed -e "s#$BRICK\(.*\)#\1#g"`
            rdir=`dirname $rpath`

            # copy the file itself through the mount point
            cp -fv $file $MOUNT/$rdir;

            # copy every extended attribute except the glusterFS-internal ones
            for xattr in `getfattr -e hex -m. -d $file 2>/dev/null | sed -e '/^#/d' | grep -v -E "trusted.glusterfs.*" | grep -v -E "trusted.afr.*" | grep -v "trusted.gfid"`;
            do
                key=`echo $xattr | cut -d"=" -f 1`
                value=`echo $xattr | cut -d"=" -f 2`

                setfattr $MOUNT/$rpath -n $key -v $value
            done
        done
      2. To identify a list of files that are in a split-brain state, execute the command:
        # gluster volume heal test-volume info
      3. If any files are listed in the output of the above command, delete those files from the mount point, and manually retain the correct copy of each file after comparing the copies across the bricks in the replica set. Selecting the correct copy requires manual intervention by the system administrator.
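        A rough sketch of the comparison, using hypothetical brick and file paths: list the files in split-brain, then run the md5sum and stat commands on the server that hosts each brick of the replica set before discarding the bad copy.
        # gluster volume heal test-volume info split-brain
        # md5sum /exp3/path/to/file
        # stat /exp3/path/to/file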

8.5.2. Replacing an Old Brick with a New Brick on a Replicate or Distribute-replicate Volume

A single brick can be replaced during a hardware failure, such as a disk failure or a server failure. The brick to be replaced can be either online or offline. This procedure applies only to volumes with replication. For Replicate and Distribute-replicate volume types, self-heal is triggered after the brick is replaced to heal the data onto the new brick.
To replace an old brick with a new brick on a Replicate or Distribute-replicate volume, follow these steps:
  1. Ensure that the new brick (sys5:/home/gfs/r2_5) that replaces the old brick (sys0:/home/gfs/r2_0) is empty. All bricks other than the brick being replaced must be online; the brick to be replaced can be in an offline state.
  2. Bring the brick that must be replaced to an offline state, if it is not already offline.
    1. Identify the PID of the brick to be replaced, by executing the command:
      # gluster volume status
      Status of volume: r2
      Gluster process                  Port    Online    Pid
      -------------------------------------------------------
      Brick sys0:/home/gfs/r2_0         49152    Y    5342
      Brick sys1:/home/gfs/r2_1         49153    Y    5354
      Brick sys2:/home/gfs/r2_2         49154    Y    5365
      Brick sys3:/home/gfs/r2_3         49155    Y    5376
    2. Log in to the host on which the brick to be replaced has its process running and kill the brick.
      # kill -9 <PID>
    3. Ensure that the brick to be replaced is offline and the other bricks are online by executing the command:
      # gluster volume status
      Status of volume: r2
      Gluster process                  Port    Online  Pid
      ------------------------------------------------------
      Brick sys0:/home/gfs/r2_0         N/A      N    5342
      Brick sys1:/home/gfs/r2_1         49153    Y    5354
      Brick sys2:/home/gfs/r2_2         49154    Y    5365
      Brick sys3:/home/gfs/r2_3         49155    Y    5376
  3. Create a FUSE mount point from any server to edit the extended attributes. Extended attributes cannot be edited using NFS or CIFS mount points.
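    For example, assuming sys1 is a reachable server in the trusted storage pool and r2 is the volume used in this procedure, a FUSE mount matching the /mnt/r2 path used below can be created with:
    # mkdir -p /mnt/r2
    # mount -t glusterfs sys1:/r2 /mnt/r2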
  4. Perform the following operations to change the Automatic File Replication extended attributes so that the heal process happens from the other brick (sys1:/home/gfs/r2_1) in the replica pair to the new brick (sys5:/home/gfs/r2_5).
    Note that /mnt/r2 is the FUSE mount path.
    1. Create a new directory on the mount point and ensure that a directory with such a name is not already present.
      # mkdir /mnt/r2/<name-of-nonexistent-dir>
    2. Delete the directory and set the extended attributes.
      # rmdir /mnt/r2/<name-of-nonexistent-dir>
      # setfattr -n trusted.non-existent-key -v abc /mnt/r2
      # setfattr -x trusted.non-existent-key /mnt/r2
    3. Ensure that the extended attributes on the other bricks in the replica (in this example, trusted.afr.r2-client-0) are not set to zero.
      # getfattr -d -m. -e hex /home/gfs/r2_1
      # file: home/gfs/r2_1
      security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a66696c655f743a733000
      trusted.afr.r2-client-0=0x000000000000000300000002
      trusted.afr.r2-client-1=0x000000000000000000000000
      trusted.gfid=0x00000000000000000000000000000001
      trusted.glusterfs.dht=0x0000000100000000000000007ffffffe
      trusted.glusterfs.volume-id=0xde822e25ebd049ea83bfaa3c4be2b440
  5. Execute the replace-brick command with the force option:
    # gluster volume replace-brick r2 sys0:/home/gfs/r2_0 sys5:/home/gfs/r2_5 commit force 
    volume replace-brick: success: replace-brick commit successful
  6. Check if the new brick is online.
    # gluster volume status
    Status of volume: r2
    Gluster process                    Port    Online    Pid
    ---------------------------------------------------------
    Brick sys5:/home/gfs/r2_5            49156    Y    5731
    Brick sys1:/home/gfs/r2_1            49153    Y    5354
    Brick sys2:/home/gfs/r2_2            49154    Y    5365
    Brick sys3:/home/gfs/r2_3            49155    Y    5376
  7. Ensure that after the self-heal completes, the extended attributes are set to zero on the other bricks in the replica.
    # getfattr -d -m. -e hex /home/gfs/r2_1 
    getfattr: Removing leading '/' from absolute path names 
    # file: home/gfs/r2_1 
    security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a66696c655f743a733000 
    trusted.afr.r2-client-0=0x000000000000000000000000
    trusted.afr.r2-client-1=0x000000000000000000000000 
    trusted.gfid=0x00000000000000000000000000000001 
    trusted.glusterfs.dht=0x0000000100000000000000007ffffffe
    trusted.glusterfs.volume-id=0xde822e25ebd049ea83bfaa3c4be2b440
    Note that in this example, the extended attributes trusted.afr.r2-client-0 and trusted.afr.r2-client-1 are set to zero.
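    To follow the self-heal progress before re-checking the attributes, one option is to repeat the heal info command until it reports no pending entries for the volume (the exact output format can vary between releases):
    # gluster volume heal r2 info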

8.5.3. Replacing an Old Brick with a New Brick on a Distribute Volume

Important

In case of a Distribute volume type, replacing a brick using this procedure will result in data loss.
  1. Replace a brick with a commit force option:
    # gluster volume replace-brick VOLNAME <BRICK> <NEW-BRICK> commit force

    Example 8.6. Replace a brick on a Distribute Volume

    # gluster volume replace-brick r2 sys0:/home/gfs/r2_0 sys5:/home/gfs/r2_5 commit force 
    volume replace-brick: success: replace-brick commit successful
  2. Verify if the new brick is online.
    # gluster volume status
    Status of volume: r2
    Gluster process                    Port    Online    Pid
    ---------------------------------------------------------
    Brick sys5:/home/gfs/r2_5            49156    Y    5731
    Brick sys1:/home/gfs/r2_1            49153    Y    5354
    Brick sys2:/home/gfs/r2_2            49154    Y    5365
    Brick sys3:/home/gfs/r2_3            49155    Y    5376

Note

All the replace-brick command options except the commit force option are deprecated.