Chapter 5. Configure performance improvements

Some deployments benefit from additional configuration to achieve optimal performance. This section describes the recommended additional configuration for those deployments.

5.1. Improving volume performance by changing shard size

The default value of the shard-block-size parameter changed from 4MB to 64MB between Red Hat Hyperconverged Infrastructure for Virtualization versions 1.0 and 1.1. This means that all new volumes are created with a shard-block-size value of 64MB, while existing volumes retain the original shard-block-size value of 4MB.

There is no safe way to modify the shard-block-size value on volumes that contain data. Because the shard block size applies only to writes that occur after the value is set, changing it on a volume that already contains data produces a mix of shard sizes, which degrades performance.

This section shows you how to safely modify the shard block size on an existing volume after upgrading to Red Hat Hyperconverged Infrastructure for Virtualization 1.1 or higher, in order to take advantage of the performance benefits of a larger shard size.
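
Before you begin, you can check which of your existing volumes still use the original 4MB shard size by querying the option from any host in the cluster. This is a quick check, not part of the procedure itself; the volume name data below is only an example, so run it against each of your own volumes.

    # Show the shard block size currently in effect for the volume named 'data'
    gluster volume get data features.shard-block-size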

5.1.1. Changing shard size on replicated volumes

  1. Create an inventory file

    Create an inventory file called normal_replicated_inventory.yml based on the following example.

    Replace host1, host2, and host3 with the FQDNs of your hosts, and edit device details to match your environment.

    Example normal_replicated_inventory.yml inventory file

    hc_nodes:
      hosts:
        # Host1
        host1:
          # Dedupe & Compression config
          # If logicalsize >= 1000G then slabsize=32G else slabsize=2G
          #gluster_infra_vdo:
          #   - { name: 'vdo_sdb', device: '/dev/sdb', logicalsize: '3000G', emulate512: 'on', slabsize: '32G',
          #      blockmapcachesize:  '128M', readcachesize: '20M', readcache: 'enabled', writepolicy: 'auto' }
    
          # With Dedupe & Compression
          #gluster_infra_volume_groups:
          #  - vgname: <volgroup_name>
          #    pvname: /dev/mapper/vdo_sdb
    
          # Without Dedupe & Compression
          gluster_infra_volume_groups:
            - vgname: <volgroup_name>
              pvname: /dev/sdb
    
          gluster_infra_thinpools:
            - {vgname: '<volgroup_name>', thinpoolname: 'thinpool_<volgroup_name>', thinpoolsize: '500G', poolmetadatasize: '4G'}
    
          gluster_infra_lv_logicalvols:
            - vgname: <volgroup_name>
              thinpool: thinpool_<volgroup_name>
              lvname: <lv_name>
              lvsize: <size>G
    
          # Mount the devices
          gluster_infra_mount_devices:
             - { path: '<brick_mountpoint>', vgname: <volgroup_name>, lvname: <lv_name> }
    
        #Host2
        host2:
          # Dedupe & Compression config
          # If logicalsize >= 1000G then slabsize=32G else slabsize=2G
          #gluster_infra_vdo:
          #   - { name: 'vdo_sdb', device: '/dev/sdb', logicalsize: '3000G', emulate512: 'on', slabsize: '32G',
          #      blockmapcachesize:  '128M', readcachesize: '20M', readcache: 'enabled', writepolicy: 'auto' }
    
          # With Dedupe & Compression
          #gluster_infra_volume_groups:
          #  - vgname: <volgroup_name>
          #    pvname: /dev/mapper/vdo_sdb
    
          # Without Dedupe & Compression
          gluster_infra_volume_groups:
            - vgname: <volgroup_name>
              pvname: /dev/sdb
    
          gluster_infra_thinpools:
            - {vgname: '<volgroup_name>', thinpoolname: 'thinpool_<volgroup_name>', thinpoolsize: '500G', poolmetadatasize: '4G'}
    
          gluster_infra_lv_logicalvols:
            - vgname: <volgroup_name>
              thinpool: thinpool_<volgroup_name>
              lvname: <lv_name>
              lvsize: <size>G
    
          # Mount the devices
          gluster_infra_mount_devices:
             - { path: '<brick_mountpoint>', vgname: <volgroup_name>, lvname: <lv_name> }
    
        #Host3
        host3:
          # Dedupe & Compression config
          # If logicalsize >= 1000G then slabsize=32G else slabsize=2G
          #gluster_infra_vdo:
          #   - { name: 'vdo_sdb', device: '/dev/sdb', logicalsize: '3000G', emulate512: 'on', slabsize: '32G',
          #      blockmapcachesize:  '128M', readcachesize: '20M', readcache: 'enabled', writepolicy: 'auto' }
    
          # With Dedupe & Compression
          #gluster_infra_volume_groups:
          #  - vgname: <volgroup_name>
          #    pvname: /dev/mapper/vdo_sdb
    
          # Without Dedupe & Compression
          gluster_infra_volume_groups:
            - vgname: <volgroup_name>
              pvname: /dev/sdb
    
          gluster_infra_thinpools:
            - {vgname: '<volgroup_name>', thinpoolname: 'thinpool_<volgroup_name>', thinpoolsize: '500G', poolmetadatasize: '4G'}
    
          gluster_infra_lv_logicalvols:
            - vgname: <volgroup_name>
              thinpool: thinpool_<volgroup_name>
              lvname: <lv_name>
              lvsize: <size>G
    
          # Mount the devices
          gluster_infra_mount_devices:
             - { path: '<brick_mountpoint>', vgname: <volgroup_name>, lvname: <lv_name> }
    
      # Common configurations
      vars:
        cluster_nodes:
           - host1
           - host2
           - host3
        gluster_features_hci_cluster: "{{ cluster_nodes }}"
        gluster_features_hci_volumes:
           - { volname: 'data', brick: '<brick_mountpoint>' }
        gluster_features_hci_volume_options:
          {
            group: 'virt',
            storage.owner-uid: '36',
            storage.owner-gid: '36',
            network.ping-timeout: '30',
            performance.strict-o-direct: 'on',
            network.remote-dio: 'off',
            cluster.granular-entry-heal: 'enable',
            features.shard-block-size: '64MB'
          }

  2. Create the normal_replicated.yml playbook

    Create a normal_replicated.yml playbook file using the following example:

    Example normal_replicated.yml playbook

    ---
    
    # Safely changing the shard block size parameter value for normal replicated volume
    - name: Changing the shard block size
      hosts: hc_nodes
      remote_user: root
      gather_facts: no
      any_errors_fatal: true
    
      roles:
         - gluster.infra
         - gluster.features

  3. Run the playbook

    ansible-playbook -i normal_replicated_inventory.yml normal_replicated.yml
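
    After the playbook completes, you can optionally confirm that the newly created volume uses the larger shard size. This check assumes the volume name data from the example inventory above; substitute your own volume name if it differs.

    # Expected to report 64MB for the new volume
    gluster volume get data features.shard-block-size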

5.1.2. Changing shard size on arbitrated volumes

  1. Create an inventory file

    Create an inventory file called arbitrated_replicated_inventory.yml based on the following example.

    Replace host1, host2, and host3 with the FQDNs of your hosts, and edit device details to match your environment.

    Example arbitrated_replicated_inventory.yml inventory file

    hc_nodes:
      hosts:
        # Host1
        host1:
          # Dedupe & Compression config
          # If logicalsize >= 1000G then slabsize=32G else slabsize=2G
          #gluster_infra_vdo:
          #   - { name: 'vdo_sdb', device: '/dev/sdb', logicalsize: '3000G', emulate512: 'on', slabsize: '32G',
          #      blockmapcachesize:  '128M', readcachesize: '20M', readcache: 'enabled', writepolicy: 'auto' }
    
          # With Dedupe & Compression
          #gluster_infra_volume_groups:
          #  - vgname: <volgroup_name>
          #    pvname: /dev/mapper/vdo_sdb
    
          # Without Dedupe & Compression
          gluster_infra_volume_groups:
            - vgname: <volgroup_name>
              pvname: /dev/sdb
    
          gluster_infra_thinpools:
            - {vgname: '<volgroup_name>', thinpoolname: 'thinpool_<volgroup_name>', thinpoolsize: '500G', poolmetadatasize: '4G'}
    
          gluster_infra_lv_logicalvols:
            - vgname: <volgroup_name>
              thinpool: thinpool_<volgroup_name>
              lvname: <lv_name>
              lvsize: <size>G
            - vgname: <volgroup_name>
              thinpool: thinpool_<volgroup_name>
              lvname: <lv_name>
              lvsize: <size>G
    
          # Mount the devices
          gluster_infra_mount_devices:
             - { path: '<brick_mountpoint>', vgname: <volgroup_name>, lvname: <lv_name> }
             - { path: '<brick_mountpoint>', vgname: <volgroup_name>, lvname: <lv_name> }
    
        #Host2
        host2:
          # Dedupe & Compression config
          # If logicalsize >= 1000G then slabsize=32G else slabsize=2G
          #gluster_infra_vdo:
          #   - { name: 'vdo_sdb', device: '/dev/sdb', logicalsize: '3000G', emulate512: 'on', slabsize: '32G',
          #      blockmapcachesize:  '128M', readcachesize: '20M', readcache: 'enabled', writepolicy: 'auto' }
    
          # With Dedupe & Compression
          #gluster_infra_volume_groups:
          #  - vgname: <volgroup_name>
          #    pvname: /dev/mapper/vdo_sdb
    
          # Without Dedupe & Compression
          gluster_infra_volume_groups:
            - vgname: <volgroup_name>
              pvname: /dev/sdb
    
          gluster_infra_thinpools:
            - {vgname: '<volgroup_name>', thinpoolname: 'thinpool_<volgroup_name>', thinpoolsize: '500G', poolmetadatasize: '4G'}
    
          gluster_infra_lv_logicalvols:
            - vgname: <volgroup_name>
              thinpool: thinpool_<volgroup_name>
              lvname: <lv_name>
              lvsize: <size>G
            - vgname: <volgroup_name>
              thinpool: thinpool_<volgroup_name>
              lvname: <lv_name>
              lvsize: <size>G
    
          # Mount the devices
          gluster_infra_mount_devices:
             - { path: '<brick_mountpoint>', vgname: <volgroup_name>, lvname: <lv_name> }
             - { path: '<brick_mountpoint>', vgname: <volgroup_name>, lvname: <lv_name> }
    
        #Host3
        host3:
          # Dedupe & Compression config
          # If logicalsize >= 1000G then slabsize=32G else slabsize=2G
          #gluster_infra_vdo:
          #   - { name: 'vdo_sdb', device: '/dev/sdb', logicalsize: '3000G', emulate512: 'on', slabsize: '32G',
          #      blockmapcachesize:  '128M', readcachesize: '20M', readcache: 'enabled', writepolicy: 'auto' }
    
          # With Dedupe & Compression
          #gluster_infra_volume_groups:
          #  - vgname: <volgroup_name>
          #    pvname: /dev/mapper/vdo_sdb
    
          # Without Dedupe & Compression
          gluster_infra_volume_groups:
            - vgname: <volgroup_name>
              pvname: /dev/sdb
    
          gluster_infra_thinpools:
            - {vgname: '<volgroup_name>', thinpoolname: 'thinpool_<volgroup_name>', thinpoolsize: '500G', poolmetadatasize: '4G'}
    
          gluster_infra_lv_logicalvols:
            - vgname: <volgroup_name>
              thinpool: thinpool_<volgroup_name>
              lvname: <lv_name>
              lvsize: <size>G
    
          # Mount the devices
          gluster_infra_mount_devices:
             - { path: '<brick_mountpoint>', vgname: <volgroup_name>, lvname: <lv_name> }
    
      # Common configurations
      vars:
        cluster_nodes:
           - host1
           - host2
           - host3
        gluster_features_hci_cluster: "{{ cluster_nodes }}"
        gluster_features_hci_volumes:
           - { volname: 'data_one', brick: '<brick_mountpoint>', arbiter: 1 }
        gluster_features_hci_volume_options:
          {
            group: 'virt',
            storage.owner-uid: '36',
            storage.owner-gid: '36',
            network.ping-timeout: '30',
            performance.strict-o-direct: 'on',
            network.remote-dio: 'off',
            cluster.granular-entry-heal: 'enable',
            features.shard-block-size: '64MB',
            server.ssl: 'on',
            client.ssl: 'on',
            auth.ssl-allow: '<host1>;<host2>;<host3>'
          }

  2. Create the arbitrated_replicated.yml playbook

    Create an arbitrated_replicated.yml playbook file using the following example:

    Example arbitrated_replicated.yml playbook

    ---
    
    # Safely changing the shard block size parameter value for arbitrated replicated volume
    - name: Changing the shard block size
      hosts: hc_nodes
      remote_user: root
      gather_facts: no
      any_errors_fatal: true
    
      roles:
         - gluster.infra
         - gluster.features

  3. Run the playbook

    ansible-playbook -i arbitrated_replicated_inventory.yml arbitrated_replicated.yml
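
    After the playbook completes, you can optionally confirm the shard size and brick layout of the new arbitrated volume. This check assumes the volume name data_one from the example inventory above; substitute your own volume name if it differs.

    # Expected to report 64MB for the new volume
    gluster volume get data_one features.shard-block-size
    # Review the brick layout, including the arbiter brick
    gluster volume info data_one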

5.2. Configuring a logical volume cache (lvmcache) for an existing volume

If your main storage devices are not Solid State Disks (SSDs), Red Hat recommends configuring a logical volume cache (lvmcache) to achieve the required performance for Red Hat Hyperconverged Infrastructure for Virtualization deployments.
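
If you are not sure whether a device is rotational or solid-state, you can check before deciding where to place the cache. In the lsblk output, a ROTA value of 1 indicates a rotational (spinning) disk and 0 indicates an SSD.

    # List whole disks with their rotational flag, size, and model
    lsblk -d -o NAME,ROTA,SIZE,MODEL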

  1. Create an inventory file

    Create an inventory file called cache_inventory.yml based on the example below.

    Replace <host1>, <host2>, and <host3> with the FQDNs of the hosts on which to configure the cache.

    Replace the following values throughout the file.

    <slow_device>,<fast_device>
    Specify the device to which the cache should attach, followed by the cache device, as a comma-delimited list, for example, cachedisk: '/dev/sdb,/dev/sde'.
    <fast_device_name>
    Specify the name of the cache logical volume to create, for example, cachelv_thinpool_gluster_vg_sde.
    <fast_device_thinpool>
    Specify the name of the cache thin pool to create, for example, gluster_thinpool_gluster_vg_sde.

    Example cache_inventory.yml file

    hc_nodes:
      hosts:
        # Host1
        <host1>:
          gluster_infra_cache_vars:
            - vgname: gluster_vg_sdb
              cachedisk: '<slow_device>,<fast_device>'
              cachelvname: <fast_device_name>
              cachethinpoolname: <fast_device_thinpool>
              cachelvsize: '10G'
              cachemode: writethrough
    
        #Host2
        <host2>:
          gluster_infra_cache_vars:
            - vgname: gluster_vg_sdb
              cachedisk: '<slow_device>,<fast_device>'
              cachelvname: <fast_device_name>
              cachethinpoolname: <fast_device_thinpool>
              cachelvsize: '10G'
              cachemode: writethrough
    
        #Host3
        <host3>:
          gluster_infra_cache_vars:
            - vgname: gluster_vg_sdb
              cachedisk: '<slow_device>,<fast_device>'
              cachelvname: <fast_device_name>
              cachethinpoolname: <fast_device_thinpool>
              cachelvsize: '10G'
              cachemode: writethrough

  2. Create a playbook file

    Create an Ansible playbook file named lvm_cache.yml.

    Example lvm_cache.yml file

    ---
    # Create LVM Cache
    - name: Setup LVM Cache
      hosts: hc_nodes
      remote_user: root
      gather_facts: no
      any_errors_fatal: true
    
      roles:
         - gluster.infra

  3. Run the playbook with the cachesetup tag

    Run the following command to apply the configuration specified in lvm_cache.yml to the hosts and devices specified in cache_inventory.yml.

    ansible-playbook -i cache_inventory.yml lvm_cache.yml --tags cachesetup
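
    After the playbook completes, you can verify on each host that the cache was attached. This check assumes the volume group name gluster_vg_sdb from the example inventory; adjust it to match your environment. With the -a option, hidden cache volumes appear in square brackets in the output.

    # Show all logical volumes in the volume group, including hidden cache volumes
    lvs -a -o +devices gluster_vg_sdb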