11.7. Expanding Volumes
Warning
Do not perform this process if geo-replication is configured. A race condition, tracked by Bug 1683893, can cause data loss when a volume is converted while geo-replication is enabled.
Volumes can be expanded while the trusted storage pool is online and available. For example, you can add a brick to a distributed volume, which increases distribution and adds capacity to the Red Hat Gluster Storage volume. Similarly, you can add a group of bricks to a replicated or distributed replicated volume, which increases the capacity of the Red Hat Gluster Storage volume.
When expanding replicated or distributed replicated volumes, the number of bricks being added must be a multiple of the replica count. This also applies to arbitrated volumes. For example, to expand a distributed replicated volume with a replica count of 3, you need to add bricks in multiples of 3 (such as 6, 9, 12, etc.).
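The gluster CLI rejects an add-brick request whose brick count does not match this rule, but the check can also be scripted up front. The following is a minimal sketch (the helper name and brick names are illustrative, not part of the gluster CLI):

```shell
# check_brick_count REPLICA BRICK...
# Verifies that the number of bricks to be added is a multiple of the
# volume's replica count before running gluster volume add-brick.
check_brick_count() {
  local replica=$1
  shift
  local count=$#
  if [ $((count % replica)) -eq 0 ]; then
    echo "OK: $count brick(s), a multiple of replica count $replica"
  else
    echo "ERROR: $count brick(s) is not a multiple of replica count $replica" >&2
    return 1
  fi
}

# A replica 3 volume needs bricks in sets of 3:
check_brick_count 3 server5:/rhgs/brick5 server6:/rhgs/brick6 server7:/rhgs/brick7
```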
You can also convert a replica 2 volume into an arbitrated replica 3 volume by following the instructions in Section 5.7.5, “Converting to an arbitrated volume”.
Important
Converting an existing distribute volume to a replicate or distribute-replicate volume is not supported.
Expanding a Volume
- From any server in the trusted storage pool, use the following command to probe the server on which you want to add a new brick:
# gluster peer probe HOSTNAME
For example:
# gluster peer probe server5
Probe successful
# gluster peer probe server6
Probe successful
- Add the bricks using the following command:
# gluster volume add-brick VOLNAME NEW_BRICK
For example:
# gluster volume add-brick test-volume server5:/rhgs/brick5/ server6:/rhgs/brick6/
Add Brick successful
- Check the volume information using the following command:
# gluster volume info
The command output displays information about the volume, including the newly added bricks.
- Rebalance the volume to ensure that files are distributed to the new bricks. Use the rebalance command as described in Section 11.11, “Rebalancing Volumes”. The add-brick command should be followed by a rebalance operation to ensure better utilization of the added bricks.
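The procedure above can be sketched as a helper that prints the required commands in order. It does not execute anything, and the volume, server, and brick names are illustrative:

```shell
# expand_volume_plan VOLNAME BRICK...
# Prints, in order, the commands to expand a volume: probe each new host,
# add the bricks, then start a rebalance. Output only; nothing is run.
expand_volume_plan() {
  local volname=$1
  shift
  local hosts=""
  for brick in "$@"; do
    hosts="$hosts ${brick%%:*}"   # host part of HOST:/path
  done
  for h in $(printf '%s\n' $hosts | sort -u); do
    echo "gluster peer probe $h"
  done
  echo "gluster volume add-brick $volname $*"
  echo "gluster volume rebalance $volname start"
}

expand_volume_plan test-volume server5:/rhgs/brick5 server6:/rhgs/brick6
```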
11.7.1. Expanding a Tiered Volume
Warning
Tiering is considered deprecated as of Red Hat Gluster Storage 3.5. Red Hat no longer recommends its use, and does not support tiering in new deployments or in existing deployments that upgrade to Red Hat Gluster Storage 3.5.3.
You can add a group of bricks to the cold tier or to the hot tier of a tiered volume to increase the capacity of the Red Hat Gluster Storage volume.
11.7.1.1. Expanding a Cold Tier Volume
Expanding a cold tier volume is the same as expanding a non-tiered volume. If you are reusing a brick, ensure that you perform the steps listed in Section 5.3.3, “Reusing a Brick from a Deleted Volume”.
- Detach the tier by performing the steps listed in Section 16.7, “Detaching a Tier from a Volume (Deprecated)”
- From any server in the trusted storage pool, use the following command to probe the server on which you want to add a new brick:
# gluster peer probe HOSTNAME
For example:
# gluster peer probe server5
Probe successful
# gluster peer probe server6
Probe successful
- Add the bricks using the following command:
# gluster volume add-brick VOLNAME NEW_BRICK
For example:
# gluster volume add-brick test-volume server5:/rhgs/brick5/ server6:/rhgs/brick6/
- Rebalance the volume to ensure that files are distributed to the new bricks. Use the rebalance command as described in Section 11.11, “Rebalancing Volumes”. The add-brick command should be followed by a rebalance operation to ensure better utilization of the added bricks.
- Reattach the tier to the volume with both old and new (expanded) bricks:
# gluster volume tier VOLNAME attach [replica COUNT] NEW-BRICK...
Important
When you reattach a tier, an internal process called fix-layout starts to prepare the hot tier for use. This process takes time, and there will be a delay before tiering activities start. If you are reusing a brick, be sure to completely wipe the existing data before attaching it to the tiered volume.
11.7.1.2. Expanding a Hot Tier Volume
You can expand a hot tier volume by attaching and adding bricks for the hot tier.
- Detach the tier by performing the steps listed in Section 16.7, “Detaching a Tier from a Volume (Deprecated)”
- Reattach the tier to the volume with both old and new (expanded) bricks:
# gluster volume tier VOLNAME attach [replica COUNT] NEW-BRICK...
For example:
# gluster volume tier test-volume attach replica 2 server1:/rhgs/tier5 server2:/rhgs/tier6 server1:/rhgs/tier7 server2:/rhgs/tier8
Important
When you reattach a tier, an internal process called fix-layout starts to prepare the hot tier for use. This process takes time, and there will be a delay before tiering activities start. If you are reusing a brick, be sure to completely wipe the existing data before attaching it to the tiered volume.
11.7.2. Expanding a Dispersed or Distributed-dispersed Volume
You can expand a dispersed or distributed-dispersed volume by adding new bricks. The number of additional bricks must be a multiple of the basic configuration of the volume. For example, if the volume has a (4+2 = 6) configuration, you must add 6 bricks or a multiple of 6 bricks (such as 12, 18, 24, and so on).
Note
If you add bricks to a Dispersed volume, it is converted to a Distributed-Dispersed volume, and the existing dispersed volume is treated as a dispersed subvolume.
- From any server in the trusted storage pool, use the following command to probe the server on which you want to add new bricks:
# gluster peer probe HOSTNAME
- Add the bricks using the following command:
# gluster volume add-brick VOLNAME NEW_BRICK
For example:
# gluster volume add-brick test-volume server4:/rhgs/brick7 server4:/rhgs/brick8 server5:/rhgs/brick9 server5:/rhgs/brick10 server6:/rhgs/brick11 server6:/rhgs/brick12
- (Optional) View the volume information after adding the bricks:
# gluster volume info VOLNAME
- Rebalance the volume to ensure that files are distributed to the new bricks. Use the rebalance command as described in Section 11.11, “Rebalancing Volumes”. The add-brick command should be followed by a rebalance operation to ensure better utilization of the added bricks.
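The multiple-of-(data+redundancy) rule above can be sketched as a small shell check. The 4+2 configuration and brick count are illustrative; read the real values from `gluster volume info VOLNAME`:

```shell
# For a disperse volume with DATA data bricks and REDUNDANCY redundancy
# bricks per subvolume, new bricks must arrive in multiples of
# DATA + REDUNDANCY. Values below are illustrative.
data=4
redundancy=2
subvol_size=$((data + redundancy))
new_bricks=12

if [ $((new_bricks % subvol_size)) -eq 0 ]; then
  echo "OK: $new_bricks bricks add $((new_bricks / subvol_size)) dispersed subvolume(s)"
else
  echo "ERROR: brick count must be a multiple of $subvol_size" >&2
fi
```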
11.7.3. Expanding Underlying Logical Volume
You can expand the size of a logical volume using the lvextend command.
Red Hat recommends following this process to increase the storage capacity of replicated, arbitrated-replicated, or dispersed volumes, but not to expand distributed-replicated, arbitrated distributed-replicated, or distributed-dispersed volumes.
Warning
It is recommended to involve the Red Hat Support team while performing this operation.
In the case of an online logical volume extension, ensure that the associated brick process is killed manually, because operations might still be reading or writing files on the associated brick. Proceeding with the extension without killing the brick process can adversely affect performance. Identify the brick process ID and kill it using the following commands:
# gluster volume status
# kill -9 brick-process-id
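Extracting the brick PID from the status output can be sketched as follows. The status line below is a hypothetical sample of the real command's output format; verify the PID column in your own output before killing anything:

```shell
# Sketch: pull a brick's PID out of a `gluster volume status` line so the
# brick process can be killed before an online extension. Sample data only.
status_line="Brick server1:/rhgs/brick1        49152     0          Y       12345"
brick_pid=$(echo "$status_line" | awk '{print $NF}')   # PID is the last field
echo "$brick_pid"
# kill -9 "$brick_pid"   # run only after confirming this is the brick process
```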
- Stop all volumes using the brick with the following command:
# gluster volume stop VOLNAME
- Check if the new disk is visible using the lsblk command:
# lsblk
- Create a new physical volume using the following command:
# pvcreate /dev/PHYSICAL_VOLUME_NAME
- Verify that the physical volume is created:
# pvs
- Extend the existing volume group:
# vgextend VOLUME_GROUP_NAME /dev/PHYSICAL_VOLUME_NAME
- Scan the volume groups and verify that the new addition is reflected:
# vgscan
- Ensure that the volume group has enough space to extend the logical volume:
# vgdisplay VOLUME_GROUP_NAME
Retrieve the file system name using the following command:
# df -h
- Extend the logical volume using the following command:
# lvextend -L+nG /dev/mapper/VOLUME_GROUP_NAME-LOGICAL_VOLUME_NAME
In the case of a thin pool, extend the pool using the following command:
# lvextend -L+nG VOLUME_GROUP_NAME/POOL_NAME
In the above commands, n is the additional size in GB to be extended. Execute the lvdisplay command to fetch the pool name. Use the following command to check if the logical volume is extended:
# lvdisplay VOLUME_GROUP_NAME
- Execute the following command to expand the file system to accommodate the extended logical volume:
# xfs_growfs /dev/VOLUME_GROUP_NAME/LOGICAL_VOLUME_NAME
- Remount the file system using the following command:
# mount -o remount /dev/VOLUME_GROUP_NAME/LOGICAL_VOLUME_NAME /bricks/path_to_brick
- Start all the volumes with the force option:
# gluster volume start VOLNAME force
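The logical-volume expansion steps above can be summarized as a helper that prints the commands in the required order without running them. The new disk path and all names are placeholders; substitute your own values and verify each step before executing it:

```shell
# lv_expand_plan VG LV SIZE_GB BRICK_MOUNT
# Prints the LVM and file-system commands, in order, to grow an XFS-backed
# brick by SIZE_GB gigabytes. Output only; nothing is executed.
lv_expand_plan() {
  local vg=$1 lv=$2 size_gb=$3 brick=$4
  cat <<EOF
pvcreate /dev/NEW_DISK
vgextend $vg /dev/NEW_DISK
lvextend -L+${size_gb}G /dev/$vg/$lv
xfs_growfs /dev/$vg/$lv
mount -o remount /dev/$vg/$lv $brick
EOF
}

lv_expand_plan rhgs_vg rhgs_lv 10 /bricks/brick1
```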