11.7. Expanding Volumes
Warning
If you follow this process with geo-replication configured, you run the risk of data loss when converting a volume. This race condition is tracked by Bug 1683893 and the workaround is available in the Red Hat Gluster Storage Release Notes.
Volumes can be expanded while the trusted storage pool is online and available. For example, you can add a brick to a distributed volume, which increases distribution and adds capacity to the Red Hat Gluster Storage volume. Similarly, you can add a group of bricks to a replicated or distributed replicated volume, which increases the capacity of the Red Hat Gluster Storage volume.
When expanding replicated or distributed replicated volumes, the number of bricks being added must be a multiple of the replica count. This also applies to arbitrated volumes. For example, to expand a distributed replicated volume with a replica count of 3, you need to add bricks in multiples of 3 (such as 6, 9, 12, etc.).
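As an illustrative sketch (the volume name and brick paths below are hypothetical), adding one complete replica set of three bricks to a replica 3 volume might look like this:
# gluster volume add-brick test-volume server7:/rhgs/brick7 server8:/rhgs/brick8 server9:/rhgs/brick9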
You can also convert a replica 2 volume into an arbitrated replica 3 volume by following the instructions in Section 5.8.5, “Converting to an arbitrated volume”.
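As a rough sketch of that conversion (the host name and brick path are hypothetical; see the referenced section for the authoritative procedure), an arbiter brick is added with a command of the following form:
# gluster volume add-brick VOLNAME replica 3 arbiter 1 server5:/rhgs/arbiter-brick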
Important
Converting an existing distribute volume to a replicate or distribute-replicate volume is not supported.
Expanding a Volume
- From any server in the trusted storage pool, use the following command to probe the server on which you want to add a new brick:
# gluster peer probe HOSTNAME
For example:
# gluster peer probe server5
Probe successful
# gluster peer probe server6
Probe successful
- Add the bricks using the following command:
# gluster volume add-brick VOLNAME NEW_BRICK
For example:
# gluster volume add-brick test-volume server5:/rhgs/brick5/ server6:/rhgs/brick6/
Add Brick successful
- Check the volume information using the following command:
# gluster volume info
The command output displays information similar to the following:
Volume Name: test-volume
Type: Distribute-Replicate
Status: Started
Number of Bricks: 6
Bricks:
Brick1: server1:/rhgs/brick1
Brick2: server2:/rhgs/brick2
Brick3: server3:/rhgs/brick3
Brick4: server4:/rhgs/brick4
Brick5: server5:/rhgs/brick5
Brick6: server6:/rhgs/brick6
- Rebalance the volume to ensure that files will be distributed to the new bricks. Use the rebalance command as described in Section 11.11, “Rebalancing Volumes”, and as shown in the example after this procedure. The add-brick command should be followed by a rebalance operation to ensure better utilization of the added bricks.
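For reference, a rebalance is typically started with the command below; the volume name is illustrative:
# gluster volume rebalance test-volume start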
11.7.1. Expanding a Tiered Volume
You can add a group of bricks to the cold tier and to the hot tier to increase the capacity of the Red Hat Gluster Storage volume.
11.7.1.1. Expanding a Cold Tier Volume
Expanding a cold tier volume is the same as expanding a non-tiered volume. If you are reusing a brick, ensure that you perform the steps listed in Section 5.4.3, “Reusing a Brick from a Deleted Volume”.
- Detach the tier by performing the steps listed in Section 16.7, “Detaching a Tier from a Volume”.
- From any server in the trusted storage pool, use the following command to probe the server on which you want to add a new brick:
# gluster peer probe HOSTNAME
For example:
# gluster peer probe server5
Probe successful
# gluster peer probe server6
Probe successful
- Add the bricks using the following command:
# gluster volume add-brick VOLNAME NEW_BRICK
For example:
# gluster volume add-brick test-volume server5:/rhgs/brick5/ server6:/rhgs/brick6/
- Rebalance the volume to ensure that files will be distributed to the new bricks. Use the rebalance command as described in Section 11.11, “Rebalancing Volumes”. The add-brick command should be followed by a rebalance operation to ensure better utilization of the added bricks.
- Reattach the tier to the volume with both old and new (expanded) bricks (an illustrative command follows the note below):
# gluster volume tier VOLNAME attach [replica COUNT] NEW-BRICK...
Important
When you reattach a tier, an internal process called fix-layout starts to prepare the hot tier for use. This process takes time, and there will be a delay before the tiering activities start. If you are reusing a brick, be sure to completely wipe the existing data before attaching it to the tiered volume.
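As an illustration (the volume name and brick paths are hypothetical), reattaching a replica 2 hot tier after expanding the cold tier might look like this:
# gluster volume tier test-volume attach replica 2 server1:/rhgs/tier1 server2:/rhgs/tier2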
11.7.1.2. Expanding a Hot Tier Volume
You can expand a hot tier volume by detaching the tier and then reattaching it with both the existing and the additional hot tier bricks.
- Detach the tier by performing the steps listed in Section 16.7, “Detaching a Tier from a Volume”.
- Reattach the tier to the volume with both old and new (expanded) bricks:
# gluster volume tier VOLNAME attach [replica COUNT] NEW-BRICK...
For example:
# gluster volume tier test-volume attach replica 2 server1:/rhgs/tier5 server2:/rhgs/tier6 server1:/rhgs/tier7 server2:/rhgs/tier8
Important
When you reattach a tier, an internal process called fix-layout starts to prepare the hot tier for use. This process takes time, and there will be a delay before the tiering activities start. If you are reusing a brick, be sure to completely wipe the existing data before attaching it to the tiered volume.
11.7.2. Expanding a Dispersed or Distributed-dispersed Volume
You can expand a dispersed or distributed-dispersed volume by adding new bricks. The number of additional bricks must be a multiple of the basic configuration of the volume. For example, if the volume has a (4 + 2) = 6 configuration, you must add bricks in multiples of 6 (such as 6, 12, 18, 24, and so on). An optional way to confirm the basic configuration is shown after the note below.
Note
If you add bricks to a Dispersed volume, it is converted to a Distributed-Dispersed volume, and the existing dispersed volume is treated as a dispersed subvolume.
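Before adding bricks, you can confirm the basic configuration from the Number of Bricks line of the volume information. This is an optional convenience check, and the volume name is illustrative:
# gluster volume info test-volume | grep 'Number of Bricks'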
- From any server in the trusted storage pool, use the following command to probe the server on which you want to add new bricks:
# gluster peer probe HOSTNAME
For example:
# gluster peer probe server4
Probe successful
# gluster peer probe server5
Probe successful
# gluster peer probe server6
Probe successful
- Add the bricks using the following command:
# gluster volume add-brick VOLNAME NEW_BRICK
For example:
# gluster volume add-brick test-volume server4:/rhgs/brick7 server4:/rhgs/brick8 server5:/rhgs/brick9 server5:/rhgs/brick10 server6:/rhgs/brick11 server6:/rhgs/brick12
- (Optional) View the volume information after adding the bricks:
# gluster volume info VOLNAME
For example:
# gluster volume info test-volume
Volume Name: test-volume
Type: Distributed-Disperse
Volume ID: 2be607f2-f961-4c4b-aa26-51dcb48b97df
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x (4 + 2) = 12
Transport-type: tcp
Bricks:
Brick1: server1:/rhgs/brick1
Brick2: server1:/rhgs/brick2
Brick3: server2:/rhgs/brick3
Brick4: server2:/rhgs/brick4
Brick5: server3:/rhgs/brick5
Brick6: server3:/rhgs/brick6
Brick7: server4:/rhgs/brick7
Brick8: server4:/rhgs/brick8
Brick9: server5:/rhgs/brick9
Brick10: server5:/rhgs/brick10
Brick11: server6:/rhgs/brick11
Brick12: server6:/rhgs/brick12
Options Reconfigured:
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on
- Rebalance the volume to ensure that the files will be distributed to the new bricks. Use the rebalance command as described in Section 11.11, “Rebalancing Volumes”. The add-brick command should be followed by a rebalance operation to ensure better utilization of the added bricks. Progress can be verified as shown below.
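Once a rebalance has been started, its progress can be verified with the status subcommand; the volume name is illustrative:
# gluster volume rebalance test-volume status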
11.7.3. Expanding Underlying Logical Volume
You can expand the size of a logical volume using the lvextend command.
Red Hat recommends following this process when you want to increase the storage capacity of replicated, arbitrated-replicated, or dispersed volumes, but not for expanding distributed-replicated, arbitrated-distributed-replicated, or distributed-dispersed volumes.
Warning
It is recommended to involve the Red Hat Support team while performing this operation.
When extending a logical volume online, ensure that the associated brick process is killed manually. Certain operations might be consuming data, or reading or writing a file on the associated brick, and proceeding with the extension before killing the brick process can adversely affect performance. Identify the brick process ID and kill it using the following commands. A condensed worked example of the whole procedure appears after the steps below.
# gluster volume status
# kill -9 brick-process-id
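For instance (the volume name and process ID below are hypothetical), the Pid column of the status output for the brick being extended identifies the process to kill:
# gluster volume status test-volume
# kill -9 5032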
- Stop all volumes using the brick with the following command:
# gluster volume stop VOLNAME
- Check if the new disk is visible using the lsblk command:
# lsblk
- Create a new physical volume using the following command:
# pvcreate /dev/PHYSICAL_VOLUME_NAME
- Use the following command to verify that the physical volume is created:
# pvs
- Extend the existing volume group:
# vgextend VOLUME_GROUP_NAME /dev/PHYSICAL_VOLUME_NAME
- Use the following command to check the size of the volume group, and verify that it reflects the new addition:
# vgscan
- Ensure the volume group has enough space to extend the logical volume:
# vgdisplay VOLUME_GROUP_NAME
Retrieve the file system name using the following command:
# df -h
- Extend the logical volume using the following command:
# lvextend -L+nG /dev/mapper/VOLUME_GROUP_NAME-LOGICAL_VOLUME_NAME
In the case of a thin pool, extend the pool using the following command:
# lvextend -L+nG VOLUME_GROUP_NAME/POOL_NAME
In the above commands, n is the additional size in GB to be extended. Execute the lvdisplay command to fetch the pool name.
Use the following command to check if the logical volume is extended:
# lvdisplay VOLUME_GROUP_NAME
- Execute the following command to expand the filesystem to accommodate the extended logical volume:
# xfs_growfs /dev/VOLUME_GROUP_NAME/LOGICAL_VOLUME_NAME
- Remount the file system using the following command:
# mount -o remount /dev/VOLUME_GROUP_NAME/LOGICAL_VOLUME_NAME /bricks/path_to_brick
- Start all the volumes with the force option:
# gluster volume start VOLNAME force
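The following is a condensed, hedged walk-through of the procedure above using hypothetical names (new disk /dev/sdc, volume group rhgs_vg, thin pool rhgs_pool, logical volume rhgs_lv, brick mounted at /rhgs/brick1, volume test-volume); adapt every name and size to your environment:
# gluster volume stop test-volume
# lsblk
# pvcreate /dev/sdc
# vgextend rhgs_vg /dev/sdc
# lvextend -L+100G rhgs_vg/rhgs_pool
# lvextend -L+100G /dev/mapper/rhgs_vg-rhgs_lv
# xfs_growfs /dev/rhgs_vg/rhgs_lv
# mount -o remount /dev/rhgs_vg/rhgs_lv /rhgs/brick1
# gluster volume start test-volume force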