Chapter 1. Add compute and storage resources


Red Hat Hyperconverged Infrastructure for Virtualization (RHHI for Virtualization) can be scaled to 6, 9, or 12 nodes.

You can add compute and storage resources in several ways:

  • Expand an existing volume across new bricks on new hyperconverged nodes, as described in Section 1.3, “Expanding volume from Red Hat Virtualization Manager”.
  • Add a new volume on new hyperconverged nodes, as described in Section 1.4, “Expanding the hyperconverged cluster by adding a new volume on new nodes using the Web Console”.

You can also increase the space available on your existing nodes to expand storage without expanding compute resources.

1.1. Creating new bricks using ansible

If you want to create bricks on many hosts at once, you can automate the process by creating an ansible playbook. Follow this process to create and run a playbook that creates, formats, and mounts bricks for use in a hyperconverged environment.

Prerequisites

  • Install the physical machines to host your new bricks.

    Follow the instructions in Install Physical Host Machines.

  • Configure key-based SSH authentication without a password between all nodes.

    Configure this from the node that is running the Web Console to all new nodes, and from the first new node to all other new nodes.

    Important

    RHHI for Virtualization expects key-based SSH authentication without a password between these nodes for both IP addresses and FQDNs. Ensure that you configure key-based SSH authentication between these machines for the IP address and FQDN of all storage and management network interfaces.

    Follow the instructions in Using key-based authentication to configure key-based SSH authentication without a password. A minimal command sketch of this configuration follows these prerequisites.

  • Verify that your hosts do not use a Virtual Disk Optimization (VDO) layer. If you have a VDO layer, use Section 1.2, “Creating new bricks above VDO layer using ansible” instead.
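
The following is a minimal sketch of the passwordless, key-based SSH configuration described in the prerequisites above. It assumes a root login and uses hypothetical host names and documentation IP addresses; substitute the FQDNs and IP addresses of your own storage and management interfaces.

    # Generate a key pair on the node running the Web Console, if one does not already exist.
    ssh-keygen -t rsa -b 4096 -N '' -f /root/.ssh/id_rsa

    # Copy the public key to every new node, once for its FQDN and once for each of its
    # storage and management IP addresses (placeholder values shown).
    for target in server4.example.com 192.0.2.4 server5.example.com 192.0.2.5 server6.example.com 192.0.2.6; do
        ssh-copy-id -i /root/.ssh/id_rsa.pub root@"${target}"
    done

    # Repeat the ssh-copy-id loop from the first new node to the other new nodes.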

Procedure

  1. Create an inventory file

    Create a new inventory file in the /etc/ansible/roles/gluster.infra/playbooks directory using the following example.

    This file lists the hosts on which to create new bricks.

    Example inventory file

    [hosts]
    server4.example.com
    server5.example.com
    server6.example.com

  2. Create a bricks.yml variables file

    Create a new bricks.yml file in the /etc/ansible/roles/gluster.infra/playbooks directory using the following example.

    This file defines the underlying storage infrastructure and settings to be created or used on each host.

    Example bricks.yml variable file

    # gluster_infra_disktype
    # Set a disk type. Options: JBOD, RAID6, RAID10 - Default: JBOD
    gluster_infra_disktype: RAID10
    
    # gluster_infra_dalign
    # Data alignment. For JBOD the default is 256K if not provided.
    # For RAID{6,10} the data alignment is computed by multiplying
    # gluster_infra_diskcount and gluster_infra_stripe_unit_size.
    gluster_infra_dalign: 256K
    
    # gluster_infra_diskcount
    # Required only for RAID6 and RAID10.
    gluster_infra_diskcount: 10
    
    # gluster_infra_stripe_unit_size
    # Required only for RAID6 and RAID10. Stripe unit size is always in KiB; do
    # not include the trailing `K' in the value.
    gluster_infra_stripe_unit_size: 128
    
    # gluster_infra_volume_groups
    # Variables for creating volume group
    gluster_infra_volume_groups:
       - { vgname: 'vg_vdb', pvname: '/dev/vdb' }
       - { vgname: 'vg_vdc', pvname: '/dev/vdc' }
    
    # gluster_infra_thick_lvs
    # Variable for thick lv creation
    gluster_infra_thick_lvs:
      - { vgname: 'vg_vdb', lvname: 'vg_vdb_thicklv1', size: '10G' }
    
    # gluster_infra_thinpools
    # thinpoolname is optional; if not provided, `vgname' followed by _thinpool is
    # used as the name. poolmetadatasize is optional; the default of 16G is used.
    gluster_infra_thinpools:
      - {vgname: 'vg_vdb', thinpoolname: 'foo_thinpool', thinpoolsize: '10G', poolmetadatasize: '1G' }
      - {vgname: 'vg_vdc', thinpoolname: 'bar_thinpool', thinpoolsize: '20G', poolmetadatasize: '1G' }
    
    # gluster_infra_lv_logicalvols
    # Thin volumes for the brick. `thinpool' is optional; if omitted, `vgname'
    # followed by _thinpool is used.
    gluster_infra_lv_logicalvols:
       - { vgname: 'vg_vdb', thinpool: 'foo_thinpool', lvname: 'vg_vdb_thinlv', lvsize: '500G' }
       - { vgname: 'vg_vdc', thinpool: 'bar_thinpool', lvname: 'vg_vdc_thinlv', lvsize: '500G' }
    
    # Setting up cache using SSD disks
    gluster_infra_cache_vars:
       - { vgname: 'vg_vdb', cachedisk: '/dev/vdd',
           cachethinpoolname: 'foo_thinpool', cachelvname: 'cachelv',
           cachelvsize: '20G', cachemetalvname: 'cachemeta',
           cachemetalvsize: '100M', cachemode: 'writethrough' }
    
    # gluster_infra_mount_devices
    gluster_infra_mount_devices:
       - { path: '/rhgs/thicklv', vgname: 'vg_vdb', lvname: 'vg_vdb_thicklv1' }
       - { path: '/rhgs/thinlv1', vgname: 'vg_vdb', lvname: 'vg_vdb_thinlv' }
       - { path: '/rhgs/thinlv2', vgname: 'vg_vdc', lvname: 'vg_vdc_thinlv' }

    Important

    If the defined path: does not begin with /rhgs, the bricks are not detected automatically by the Administration Portal. Synchronize the host storage after running the create_brick.yml playbook to add the new bricks to the Administration Portal.

  3. Create a create_brick.yml playbook file

    Create a new create_brick.yml file in the /etc/ansible/roles/gluster.infra/playbooks directory using the following example.

    This file defines the work involved in creating a brick using the gluster.infra role and the variable file you created above.

    Example create_brick.yml playbook file

    ---
    - name: Create a GlusterFS brick on the servers
      remote_user: root
      hosts: all
      gather_facts: false
      vars_files:
        - bricks.yml
    
      roles:
        - gluster.infra

  4. Execute the playbook

    From the /etc/ansible/roles/gluster.infra/playbooks directory, run the following command to execute the playbook using the inventory and variables files you defined above.

    # ansible-playbook -i inventory create_brick.yml
  5. Verify that your bricks are available

    1. Click Compute → Hosts and select the host.
    2. Click Storage Devices and check the list of storage devices for your new bricks.

      If you cannot see your new bricks, click Sync and wait for them to appear in the list of storage devices. You can also confirm the new bricks from the command line, as shown in the sketch after this procedure.
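
If you also want to confirm the new bricks from the command line, the following sketch works on any of the new hosts. It assumes the inventory file and the hypothetical host and device names used in the examples above; the same check applies after the VDO-based procedure in Section 1.2.

    # Optional: confirm that ansible can reach all hosts in the inventory.
    ansible -i inventory all -m ping

    # On one of the new hosts, check that the volume groups, thin pools, and logical
    # volumes defined in bricks.yml exist and that the bricks are mounted under /rhgs.
    ssh root@server4.example.com 'vgs; lvs; df -h | grep /rhgs'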

1.2. Creating new bricks above VDO layer using ansible

If you want to create bricks on many hosts at once, you can automate the process by creating an ansible playbook.

Prerequisites

  • Install the physical machines to host your new bricks.

    Follow the instructions in Install Physical Host Machines.

  • Configure key-based SSH authentication without a password between all nodes.

    Configure this from the node that is running the Web Console to all new nodes, and from the first new node to all other new nodes.

    Important

    RHHI for Virtualization expects key-based SSH authentication without a password between these nodes for both IP addresses and FQDNs. Ensure that you configure key-based SSH authentication between these machines for the IP address and FQDN of all storage and management network interfaces.

    Follow the instructions in Using key-based authentication to configure key-based SSH authentication without a password.

  • Verify that your hosts use a Virtual Disk Optimization (VDO) layer; a short command sketch for checking this follows these prerequisites. If you do not have a VDO layer, use Section 1.1, “Creating new bricks using ansible” instead.
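
To check whether a host already has a VDO layer, the commands below are a minimal sketch; they assume the vdo management utility is installed on the host.

    # List existing VDO volumes on the host; an empty list means no VDO layer is present.
    vdo list

    # Show configuration details for all VDO volumes on the host.
    vdo status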

Procedure

  1. Create an inventory file

    Create a new inventory file in the /etc/ansible/roles/gluster.infra/playbooks directory using the following example.

    This file lists the hosts on which to create new bricks.

    Example inventory file

    [hosts]
    server4.example.com
    server5.example.com
    server6.example.com

  2. Create a vdo_bricks.yml variables file

    Create a new vdo_bricks.yml file in the /etc/ansible/roles/gluster.infra/playbooks directory using the following example.

    This file defines the underlying storage infrastructure and settings to be created or used on each host.

    Example vdo_bricks.yml variable file

    # gluster_infra_disktype
    # Set a disk type. Options: JBOD, RAID6, RAID10 - Default: JBOD
    gluster_infra_disktype: RAID10
    
    # gluster_infra_dalign
    # Data alignment. For JBOD the default is 256K if not provided.
    # For RAID{6,10} the data alignment is computed by multiplying
    # gluster_infra_diskcount and gluster_infra_stripe_unit_size.
    gluster_infra_dalign: 256K
    
    # gluster_infra_diskcount
    # Required only for RAID6 and RAID10.
    gluster_infra_diskcount: 10
    
    # gluster_infra_stripe_unit_size
    # Required only for RAID6 and RAID10. Stripe unit size is always in KiB; do
    # not include the trailing `K' in the value.
    gluster_infra_stripe_unit_size: 128
    
    # VDO creation
    gluster_infra_vdo:
       - { name: 'hc_vdo_1', device: '/dev/vdb' }
       - { name: 'hc_vdo_2', device: '/dev/vdc' }
    
    # gluster_infra_volume_groups
    # Variables for creating volume group
    gluster_infra_volume_groups:
       - { vgname: 'vg_vdb', pvname: '/dev/mapper/hc_vdo_1' }
       - { vgname: 'vg_vdc', pvname: '/dev/mapper/hc_vdo_2' }
    
    # gluster_infra_thick_lvs
    # Variable for thick lv creation
    gluster_infra_thick_lvs:
      - { vgname: 'vg_vdb', lvname: 'vg_vdb_thicklv1', size: '10G' }
    
    # gluster_infra_thinpools
    # thinpoolname is optional; if not provided, `vgname' followed by _thinpool is
    # used as the name. poolmetadatasize is optional; the default of 16G is used.
    gluster_infra_thinpools:
      - {vgname: 'vg_vdb', thinpoolname: 'foo_thinpool', thinpoolsize: '10G', poolmetadatasize: '1G' }
      - {vgname: 'vg_vdc', thinpoolname: 'bar_thinpool', thinpoolsize: '20G', poolmetadatasize: '1G' }
    
    # gluster_infra_lv_logicalvols
    # Thin volumes for the brick. `thinpool' is optional; if omitted, `vgname'
    # followed by _thinpool is used.
    gluster_infra_lv_logicalvols:
       - { vgname: 'vg_vdb', thinpool: 'foo_thinpool', lvname: 'vg_vdb_thinlv', lvsize: '500G' }
       - { vgname: 'vg_vdc', thinpool: 'bar_thinpool', lvname: 'vg_vdc_thinlv', lvsize: '500G' }
    
    # gluster_infra_mount_devices
    gluster_infra_mount_devices:
       - { path: '/rhgs/thicklv', vgname: 'vg_vdb', lvname: 'vg_vdb_thicklv1' }
       - { path: '/rhgs/thinlv1', vgname: 'vg_vdb', lvname: 'vg_vdb_thinlv' }
       - { path: '/rhgs/thinlv2', vgname: 'vg_vdc', lvname: 'vg_vdc_thinlv' }

    Important

    If the defined path: does not begin with /rhgs, the bricks are not detected automatically by the Administration Portal. Synchronize the host storage after running the create_brick.yml playbook to add the new bricks to the Administration Portal.

  3. Create a create_brick.yml playbook file

    Create a new create_brick.yml file in the /etc/ansible/roles/gluster.infra/playbooks directory using the following example.

    This file defines the work involved in creating a brick using the gluster.infra role and the variable file you created above.

    Example create_brick.yml playbook file

    ---
    - name: Create a GlusterFS brick on the servers
      remote_user: root
      hosts: all
      gather_facts: false
      vars_files:
        - vdo_bricks.yml
    
      roles:
        - gluster.infra

  4. Execute the playbook

    From the /etc/ansible/roles/gluster.infra/playbooks directory, run the following command to execute the playbook using the inventory and variables files you defined above.

    # ansible-playbook -i inventory create_brick.yml
  5. Verify that your bricks are available

    1. Click Compute → Hosts and select the host.
    2. Click Storage Devices and check the list of storage devices for your new bricks.

      If you cannot see your new bricks, click Sync and wait for them to appear in the list of storage devices. A command-line check of the underlying VDO volumes is sketched after this procedure.
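
After the playbook has run, you can also confirm the VDO volumes backing the new bricks from the command line. This is a sketch only; it assumes the vdostats utility is available on the host.

    # Show capacity usage and space savings for all VDO volumes, including the new ones.
    vdostats --human-readable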

1.3. Expanding volume from Red Hat Virtualization Manager

Follow this section to expand an existing volume across new bricks on new hyperconverged nodes.

Prerequisites

  • Verify that your scaling plans are supported: Requirements for scaling.
  • If your existing deployment uses certificates signed by a Certificate Authority for encryption, prepare the certificates required for the new nodes.
  • Install three physical machines to serve as the new hyperconverged nodes.

    Follow the instructions in Install Physical Host Machines.

  • Configure key-based SSH authentication without a password.

    Configure this from the node that is running the Web Console to all new nodes, and from the first new node to all other new nodes.

    Important

    RHHI for Virtualization expects key-based SSH authentication without a password between these nodes for both IP addresses and FQDNs. Ensure that you configure key-based SSH authentication between these machines for the IP address and FQDN of all storage and management network interfaces.

    Follow the instructions in Using key-based authentication to configure key-based SSH authentication without a password.

Procedure

  1. Create new bricks

    Create the bricks on the servers across which you want to expand your volume by following the instructions in Section 1.1, “Creating new bricks using ansible” or Section 1.2, “Creating new bricks above VDO layer using ansible”, depending on your requirements.

    Important

    If the defined path: does not begin with /rhgs, the bricks are not detected automatically by the Administration Portal. Synchronize the host storage after running the create_brick.yml playbook to add the new bricks to the Administration Portal.

    1. Click Compute → Hosts and select the host.
    2. Click Storage Devices.
    3. Click Sync.

    Repeat for each host that has new bricks.

  2. Add new bricks to the volume

    1. Log in to the RHV Administration Portal.
    2. Click Storage → Volumes and select the volume to expand.
    3. Click the Bricks tab.
    4. Click Add. The Add Bricks window opens.
    5. Add new bricks.

      1. Select the brick host from the Host dropdown menu.
      2. Select the brick to add from the Brick Directory dropdown menu and click Add.
    6. When all bricks are listed, click OK to add bricks to the volume.

The volume automatically syncs the new bricks.
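
If you prefer to confirm the expansion from the gluster command line as well, the following sketch checks the brick list and heal status; VOLNAME is a placeholder for the name of the volume you expanded.

    # List the bricks that are now part of the expanded volume.
    gluster volume info VOLNAME

    # Confirm that any self-heal activity triggered by the expansion has completed.
    gluster volume heal VOLNAME info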

1.4. Expanding the hyperconverged cluster by adding a new volume on new nodes using the Web Console

Follow these instructions to use the Web Console to expand your hyperconverged cluster with a new volume on new nodes.

Prerequisites

  • Verify that your scaling plans are supported: Requirements for scaling.
  • If your existing deployment uses certificates signed by a Certificate Authority for encryption, prepare the certificates that will be required for the new nodes.
  • Install three physical machines to serve as the new hyperconverged nodes.

    Follow the instructions in Deploying Red Hat Hyperconverged Infrastructure for Virtualization.

  • Configure key-based SSH authentication without a password.

    Configure this from the node that is running the Web Console to all new nodes, and from the first new node to all other new nodes.

    Important

    RHHI for Virtualization expects key-based SSH authentication without a password between these nodes for both IP addresses and FQDNs. Ensure that you configure key-based SSH authentication between these machines for the IP address and FQDN of all storage and management network interfaces.

    Follow the instructions in Using key-based authentication to configure key-based SSH authentication without a password.

Procedure

  1. Log in to the Web Console.
  2. Click Virtualization → Hosted Engine and then click Manage Gluster.
  3. Click Expand Cluster. The Gluster Deployment window opens.

    1. On the Hosts tab, enter the FQDN or IP address of the new hyperconverged nodes and click Next.

      The Hosts tab of the Gluster Deployment window

    2. On the Volumes tab, specify the details of the volume you want to create.

      The Volumes tab of the Gluster Deployment window

    3. On the Bricks tab, specify the details of the disks to be used to create the Gluster volume.

      The Bricks tab of the Gluster Deployment window

    4. On the Review tab, check the generated file for any problems. When you are satisfied, click Deploy.

      The Review tab of the Gluster Deployment window

      Deployment takes some time to complete. The following screen appears when the cluster has been successfully expanded. You can also confirm the expansion from the command line, as shown in the sketch after this procedure.

      The success screen for expanding a cluster
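
The following is a short, optional command-line check you can run from any hyperconverged host after deployment completes; it assumes only the standard gluster CLI.

    # Confirm that the new nodes have joined the trusted storage pool.
    gluster peer status

    # Confirm that the new volume created during the expansion is listed and started.
    gluster volume list
    gluster volume status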
