Chapter 5. Configure performance improvements
Some deployments benefit from additional configuration to achieve optimal performance. This section covers recommended additional configuration for certain deployments.
5.1. Improving volume performance by changing shard size
The default value of the shard-block-size parameter changed from 4MB to 64MB between Red Hat Hyperconverged Infrastructure for Virtualization version 1.0 and 1.1. This means that all new volumes are created with a shard-block-size value of 64MB. However, existing volumes retain the original shard-block-size value of 4MB.
There is no safe way to modify the shard-block-size value on volumes that contain data. Because the shard block size applies only to writes that occur after the value is set, changing the value on a volume that already contains data produces a mix of shard block sizes, which degrades performance.
This section shows you how to safely modify the shard block size on an existing volume after upgrading to Red Hat Hyperconverged Infrastructure for Virtualization 1.1 or higher, in order to take advantage of the performance benefits of a larger shard size.
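To see why the larger shard size matters, compare the number of shard files gluster must create and track for a single disk image at each block size. The following arithmetic sketch (plain POSIX shell; the 100 GiB image size is only an illustrative figure) shows the difference:

```shell
# Shards required for a 100 GiB disk image at each shard block size.
# 100 GiB = 100 * 1024 MiB.
disk_gib=100
echo "4MB shard-block-size:  $(( disk_gib * 1024 / 4 )) shards"
echo "64MB shard-block-size: $(( disk_gib * 1024 / 64 )) shards"
```

At 64MB the same image is split into 1600 shards instead of 25600, so there are far fewer shard files for gluster to track and heal per image.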
5.1.1. Prerequisites
- A logical thin pool with sufficient free space to create additional logical volumes that are large enough to contain all existing virtual machines.
- A complete backup of your data. For details on how to achieve this, see Configuring backup and recovery options.
5.1.2. Safely changing the shard block size parameter value
A. Create a new storage domain
Create new thin provisioned logical volumes
For an arbitrated replicated volume:
Create an lv_create_arbitrated.conf file with the following contents:

[lv10:{<Gluster_Server_IP1>,<Gluster_Server_IP2>}]
action=create
lvname=<lv_name>
ignore_lv_errors=no
vgname=<volgroup_name>
mount=<brick_mountpoint>
lvtype=thinlv
poolname=<thinpool_name>
virtualsize=<size>

[lv11:<Gluster_Server_IP3>]
action=create
lvname=<lv_name>
ignore_lv_errors=no
vgname=<volgroup_name>
mount=<brick_mountpoint>
lvtype=thinlv
poolname=<thinpool_name>
virtualsize=<size>

Run the following command:

# gdeploy -c lv_create_arbitrated.conf
For a normal replicated volume:
Create an lv_create_replicated.conf file with the following contents:

[lv3]
action=create
lvname=<lv_name>
ignore_lv_errors=no
vgname=<volgroup_name>
mount=<brick_mountpoint>
lvtype=thinlv
poolname=<thinpool_name>
virtualsize=<size>

Run the following command:

# gdeploy -c lv_create_replicated.conf
Create new gluster volumes on the new logical volumes
For an arbitrated replicated volume
Create a gluster_arb_volume.conf file with the following contents:

[volume4]
action=create
volname=data_one
transport=tcp
replica=yes
replica_count=3
key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal,features.shard-block-size,server.ssl,client.ssl,auth.ssl-allow
value=virt,36,36,30,on,off,enable,64MB,on,on,"<Gluster_Server_IP1>;<Gluster_Server_IP2>;<Gluster_Server_IP3>"
brick_dirs=<Gluster_Server_IP1>:<brick1_mountpoint>,<Gluster_Server_IP2>:<brick2_mountpoint>,<Gluster_Server_IP3>:<brick3_mountpoint>
ignore_volume_errors=no
arbiter_count=1

Run the following command:

# gdeploy -c gluster_arb_volume.conf
For a normal replicated volume:
Create a gluster_rep_volume.conf file with the following contents:

[volume2]
action=create
volname=data
transport=tcp
replica=yes
replica_count=3
key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal,features.shard-block-size
value=virt,36,36,30,on,off,enable,64MB
brick_dirs=<Gluster_Server_IP1>:<brick1_mountpoint>,<Gluster_Server_IP2>:<brick2_mountpoint>,<Gluster_Server_IP3>:<brick3_mountpoint>
ignore_volume_errors=no

Run the following command:

# gdeploy -c gluster_rep_volume.conf
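The key and value lines in these gdeploy files are parallel comma-separated lists: the first key is set to the first value, the second key to the second value, and so on. If you want to double-check the pairing before running gdeploy, a small shell sketch like the following (using the [volume2] lists above; the temporary file paths are arbitrary) prints each volume option next to the value it will receive:

```shell
# Pair up gdeploy's parallel key/value lists to review each option.
keys="group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal,features.shard-block-size"
values="virt,36,36,30,on,off,enable,64MB"

printf '%s\n' "$keys"   | tr ',' '\n' > /tmp/gdeploy_keys
printf '%s\n' "$values" | tr ',' '\n' > /tmp/gdeploy_values

# Join line N of each file with '=' between them.
paste -d'=' /tmp/gdeploy_keys /tmp/gdeploy_values
```

Among other lines, this prints features.shard-block-size=64MB, confirming that the new volume will be created with the larger shard size.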
Create a new storage domain using the new gluster volumes
- Log in to the Administration Portal.
- Click Storage → Domains and then click New Domain.
- Set the Storage Type to GlusterFS and provide a Name for the domain.
- Check the Use managed gluster volume option and select the volume to use.
- Click OK to save.
B. Migrate any virtual machine templates
If your virtual machines are created from templates, copy each template to the new Storage Domain.
- In Red Hat Virtualization Manager, click Compute → Templates.
- Click the name of the template you want to migrate.
- Click the Disks subtab.
- Select the current disk and click Copy. The Copy Disk(s) dialog appears.
- Select the new storage domain as the target domain and click OK.
C. Migrate virtual machine disks to the new storage domain
For each virtual machine:
- Click Storage → Disks.
- Select the disk to move and click Move. The Move Disk(s) dialog opens.
- Select the new storage domain as the target domain and click OK.
You can monitor progress in the Tasks tab.
D. Verify that disk images migrated correctly
- Click Storage → Disks.

For each disk:
- Select the disk to check.
- Click the Storage subtab.
- Verify that the domain listed is the new storage domain.
Important: Do not skip this step. There is no way to retrieve a disk image after a domain is detached and removed, so be sure that all disk images have correctly migrated before you move on.
E. Remove and reclaim the old storage domain
- Move the old storage domain into maintenance mode.
- Detach the old storage domain from the data center.
- Remove the old storage domain from the data center.
5.2. Configuring a logical volume cache (lvmcache) for improved performance
If your main storage devices are not Solid State Disks (SSDs), Red Hat recommends configuring a logical volume cache (lvmcache) to achieve the required performance for Red Hat Hyperconverged Infrastructure for Virtualization deployments.
Create the gdeploy configuration file
Create a gdeploy configuration file named lvmcache.conf that contains at least the following information. Note that the ssd value should be the device name, not the device path (for example, use sdb, not /dev/sdb).

Example lvmcache.conf file

[hosts]
<Gluster_Network_NodeA>
<Gluster_Network_NodeB>
<Gluster_Network_NodeC>

[lv1]
action=setup-cache
ssd=sdb
vgname=gluster_vg_sdb
poolname=gluster_thinpool_sdb
cache_lv=lvcache
cache_lvsize=220GB
#cachemode=writethrough

Important: Ensure that disks specified as part of this deployment process do not have any partitions or labels.
Important: The default cache mode is writethrough, but writeback mode is also supported. To avoid the potential for data loss when implementing lvmcache in writeback mode, two separate SSD/NVMe devices are highly recommended. Configuring the two devices in a RAID-1 configuration (via software or hardware) significantly reduces the potential for data loss from lost writes.

Run gdeploy
Run the following command to apply the configuration specified in lvmcache.conf:

# gdeploy -c lvmcache.conf
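If you opt for writeback mode after mirroring two SSD/NVMe devices as described above, the [lv1] section differs from the example lvmcache.conf only in its final line. This is a sketch based on the example file above; adjust the device, volume group, and size values for your hardware:

```
[lv1]
action=setup-cache
ssd=sdb
vgname=gluster_vg_sdb
poolname=gluster_thinpool_sdb
cache_lv=lvcache
cache_lvsize=220GB
cachemode=writeback
```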
For further information about lvmcache configuration, see Red Hat Enterprise Linux 7 LVM Administration: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Logical_Volume_Manager_Administration/LV.html#lvm_cache_volume_creation.