
Chapter 5. Configure Performance Improvements


Some deployments benefit from additional configuration to achieve optimal performance. This section covers recommended additional configuration for certain deployments.

5.1. Improving volume performance by changing shard size

The default value of the shard-block-size parameter changed from 4MB to 64MB between Red Hat Hyperconverged Infrastructure version 1.0 and 1.1. This means that all new volumes are created with a shard-block-size value of 64MB. However, existing volumes retain the original shard-block-size value of 4MB.

There is no safe way to modify the shard-block-size value on volumes that contain data. Because the shard block size applies only to writes that occur after the value is set, attempting to change it on a volume that already contains data produces a mix of shard block sizes, which leads to poor performance.

This section shows you how to safely modify the shard block size on an existing volume after upgrading to Red Hat Hyperconverged Infrastructure 1.1, in order to take advantage of the performance benefits of a larger shard size.
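
To check the value currently in effect on an existing volume, you can query the volume options directly. This is a minimal check that assumes an existing volume named data; substitute your own volume name:

# gluster volume get data features.shard-block-size

A volume created before the upgrade typically reports 4MB here, while a volume created under version 1.1 reports 64MB.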

5.1.1. Prerequisites

This procedure assumes that you have already upgraded to Red Hat Hyperconverged Infrastructure 1.1.

5.1.2. Safely changing the shard block size parameter value

A. Create a new storage domain

  1. Create new thin provisioned logical volumes

    1. For an arbitrated replicated volume:

      1. Create an lv_create_arbitrated.conf file with the following contents:

        [lv10:{<Gluster_Server_IP1>,<Gluster_Server_IP2>}]
        action=create
        lvname=<lv_name>
        ignore_lv_errors=no
        vgname=<volgroup_name>
        mount=<brick_mountpoint>
        lvtype=thinlv
        poolname=<thinpool_name>
        virtualsize=<size>
        
        [lv11:<Gluster_Server_IP3>]
        action=create
        lvname=<lv_name>
        ignore_lv_errors=no
        vgname=<volgroup_name>
        mount=<brick_mountpoint>
        lvtype=thinlv
        poolname=<thinpool_name>
        virtualsize=<size>
      2. Run the following command:

        # gdeploy -c lv_create_arbitrated.conf
    2. For a normal replicated volume:

      1. Create an lv_create_replicated.conf file with the following contents:

        [lv3]
        action=create
        lvname=<lv_name>
        ignore_lv_errors=no
        vgname=<volgroup_name>
        mount=<brick_mountpoint>
        lvtype=thinlv
        poolname=<thinpool_name>
        virtualsize=<size>
      2. Run the following command:

        # gdeploy -c lv_create_replicated.conf
  2. Create new gluster volumes on the new logical volumes

    1. For an arbitrated replicated volume

      1. Create a gluster_arb_volume.conf file with the following contents:

        [volume4]
        action=create
        volname=data_one
        transport=tcp
        replica=yes
        replica_count=3
        key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal,features.shard-block-size,server.ssl,client.ssl,auth.ssl-allow
        value=virt,36,36,30,on,off,enable,64MB,on,on,"<Gluster_Server_IP1>;<Gluster_Server_IP2>;<Gluster_Server_IP3>"
        brick_dirs=<Gluster_Server_IP1>:<brick1_mountpoint>,<Gluster_Server_IP2>:<brick2_mountpoint>,<Gluster_Server_IP3>:<brick3_mountpoint>
        ignore_volume_errors=no
        arbiter_count=1
      2. Run the following command:

        # gdeploy -c gluster_arb_volume.conf
    2. For a normal replicated volume:

      1. Create a gluster_rep_volume.conf file with the following contents:

        [volume2]
        action=create
        volname=data
        transport=tcp
        replica=yes
        replica_count=3
        key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal,features.shard-block-size
        value=virt,36,36,30,on,off,enable,64MB
        brick_dirs=<Gluster_Server_IP1>:<brick1_mountpoint>,<Gluster_Server_IP2>:<brick2_mountpoint>,<Gluster_Server_IP3>:<brick3_mountpoint>
        ignore_volume_errors=no
      2. Run the following command:

        # gdeploy -c gluster_rep_volume.conf
  3. Create a new storage domain using the new gluster volumes

    Browse to the engine and follow the steps in Create the master storage domain to add a new storage domain consisting of the new gluster volume.
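
Before migrating any data, you can confirm that the new volume uses the larger shard size. This is a minimal check, assuming the new volume is named data_one as in the arbitrated example (use data for the replicated example):

# gluster volume get data_one features.shard-block-size

The command should report 64MB for a volume created with the configuration files above.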

B. Migrate any virtual machine templates

If your virtual machines are created from templates, copy each template to the new storage domain.

Click the Templates tab. For each template to migrate:

  1. Select the template to migrate.
  2. Click the Disks tab.
  3. Click Copy, and select the new storage domain as the target domain.

C. Migrate virtual machine disks to the new storage domain

For each virtual machine:

  1. Right-click the virtual machine, then click Data Disks → Move.
  2. Select the new storage domain as the target domain.

You can monitor progress in the Tasks tab.

D. Verify that disk images migrated correctly

Click the Disks tab. For each migrated disk:

  1. Select the disk to check.
  2. Click the Storage sub-tab.
  3. Verify that the domain listed is the new storage domain.
Important

Do not skip this step. There is no way to retrieve a disk image after a domain is detached and removed, so be sure that all disk images have correctly migrated before you move on.

E. Remove and reclaim the old storage domain

  1. Move the old storage domain into maintenance mode.
  2. Detach the old storage domain from the data center.
  3. Remove the old storage domain from the data center.
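
After the old storage domain has been removed, the gluster volume and logical volumes that backed it are no longer needed and their space can be reclaimed. The following is a sketch only; it assumes the old volume is named data_old and that each brick is backed by a logical volume <old_lv_name> in <volgroup_name> (substitute your actual names, and run the unmount and LVM steps on every host that holds a brick):

# gluster volume stop data_old
# gluster volume delete data_old
# umount <brick_mountpoint>
# lvremove <volgroup_name>/<old_lv_name>

If the old brick mount point has an entry in /etc/fstab, remove that entry as well.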

5.2. Configuring a logical volume cache (lvmcache) for improved performance

If your main storage devices are not Solid State Disks (SSDs), Red Hat recommends configuring a logical volume cache (lvmcache) to achieve the required performance for Red Hat Hyperconverged Infrastructure deployments.

  1. Create the gdeploy configuration file

    Create a gdeploy configuration file named lvmcache.conf that contains at least the following information. Note that the ssd value should be the device name, not the device path (for example, use sdb not /dev/sdb).

    Example lvmcache.conf file

    [hosts]
    <Gluster_Network_NodeA>
    <Gluster_Network_NodeB>
    <Gluster_Network_NodeC>
    
    [lv1]
    action=setup-cache
    ssd=sdb
    vgname=gluster_vg_sdb
    poolname=gluster_thinpool_sdb
    cache_lv=lvcache
    cache_lvsize=220GB
    #cachemode=writethrough

    Important

    Ensure that disks specified as part of this deployment process do not have any partitions or labels.

    Important

    The default cache mode is writethrough, but writeback mode is also supported. To avoid the potential for data loss when implementing lvmcache in writeback mode, two separate SSD/NVMe devices are highly recommended. Configuring the two devices in a RAID-1 array (via software or hardware) significantly reduces the potential for data loss from lost writes.
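
    If you choose writeback mode after taking these precautions, enable it in the [lv1] section of lvmcache.conf by replacing the commented cachemode line, for example:

    cachemode=writeback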

  2. Run gdeploy

    Run the following command to apply the configuration specified in lvmcache.conf.

    # gdeploy -c lvmcache.conf
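
To confirm that the cache was created and attached to the thin pool, list the logical volumes in the volume group. This is a minimal check that assumes the gluster_vg_sdb volume group name from the example configuration:

# lvs -a -o +devices gluster_vg_sdb

Cache-related volumes appear in the output alongside the thin pool and its devices.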

For further information about lvmcache configuration, see Red Hat Enterprise Linux 7 LVM Administration: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Logical_Volume_Manager_Administration/LV.html#lvm_cache_volume_creation.
