Chapter 2. Managing local storage by using RHEL system roles


To manage LVM and local file systems (FS) by using Ansible, you can use the storage role, which is one of the RHEL system roles available in RHEL 8.

Using the storage role enables you to automate administration of file systems on disks and logical volumes on multiple machines and across all versions of RHEL starting with RHEL 7.7.

For more information about RHEL system roles and how to apply them, see Introduction to RHEL system roles.

2.1. Creating an XFS file system on a block device by using the storage RHEL system role

The example Ansible playbook applies the storage role to create an XFS file system on a block device using the default parameters.

Note

The storage role can create a file system only on an unpartitioned, whole disk or a logical volume (LV). It cannot create the file system on a partition.

Prerequisites

Procedure

  1. Create a playbook file, for example ~/playbook.yml, with the following content:

    ---
    - hosts: managed-node-01.example.com
      roles:
        - rhel-system-roles.storage
      vars:
        storage_volumes:
          - name: barefs
            type: disk
            disks:
              - sdb
            fs_type: xfs
    • The volume name (barefs in the example) is currently arbitrary. The storage role identifies the volume by the disk device listed under the disks: attribute.
    • You can omit the fs_type: xfs line because XFS is the default file system in RHEL 8.
    • To create the file system on an LV, provide the LVM setup under the disks: attribute, including the enclosing volume group. For details, see Creating or resizing a logical volume by using the storage RHEL system role; a minimal pool-based sketch also follows this list.

      Do not provide the path to the LV device.
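
      A minimal sketch of the pool-based variant, assuming a volume group named myvg and a logical volume named mylv (both names are illustrative):

        ---
        - hosts: managed-node-01.example.com
          roles:
            - rhel-system-roles.storage
          vars:
            storage_pools:
              - name: myvg
                disks:
                  - sdb
                volumes:
                  - name: mylv
                    size: 2G
                    fs_type: xfs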

  2. Validate the playbook syntax:

    $ ansible-playbook --syntax-check ~/playbook.yml

    Note that this command only validates the syntax and does not protect against a wrong but valid configuration.

  3. Run the playbook:

    $ ansible-playbook ~/playbook.yml

Additional resources

  • /usr/share/ansible/roles/rhel-system-roles.storage/README.md file
  • /usr/share/doc/rhel-system-roles/storage/ directory

2.2. Persistently mounting a file system by using the storage RHEL system role

The example Ansible playbook applies the storage role to immediately and persistently mount an XFS file system.

Prerequisites

Procedure

  1. Create a playbook file, for example ~/playbook.yml, with the following content:

    ---
    - hosts: managed-node-01.example.com
      roles:
        - rhel-system-roles.storage
      vars:
        storage_volumes:
          - name: barefs
            type: disk
            disks:
              - sdb
            fs_type: xfs
            mount_point: /mnt/data
            mount_user: somebody
            mount_group: somegroup
            mount_mode: 0755
    • This playbook adds the file system to the /etc/fstab file and mounts the file system immediately.
    • If the file system on the /dev/sdb device or the mount point directory does not exist, the playbook creates them. You can confirm both results with the ad-hoc checks sketched after this list.
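
    After the playbook run, you can issue ad-hoc checks from the control node, for example (a sketch; adjust the host, device, and mount point to your environment):

      # ansible managed-node-01.example.com -m command -a 'findmnt /mnt/data'
      # ansible managed-node-01.example.com -m command -a 'grep /mnt/data /etc/fstab'
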
  2. Validate the playbook syntax:

    $ ansible-playbook --syntax-check ~/playbook.yml

    Note that this command only validates the syntax and does not protect against a wrong but valid configuration.

  3. Run the playbook:

    $ ansible-playbook ~/playbook.yml

Additional resources

  • /usr/share/ansible/roles/rhel-system-roles.storage/README.md file
  • /usr/share/doc/rhel-system-roles/storage/ directory

2.3. Creating or resizing a logical volume by using the storage RHEL system role

Use the storage role to perform the following tasks:

  • Create an LVM logical volume in a volume group consisting of multiple disks
  • Resize an existing file system on LVM
  • Express an LVM volume size as a percentage of the pool’s total size

If the volume group does not exist, the role creates it. If a logical volume exists in the volume group, it is resized if the size does not match what is specified in the playbook.

If you are reducing a logical volume, to prevent data loss, ensure that the file system on that logical volume does not use the space that you are removing.

Prerequisites

Procedure

  1. Create a playbook file, for example ~/playbook.yml, with the following content:

    ---
    - name: Manage local storage
      hosts: managed-node-01.example.com
      tasks:
        - name: Create logical volume
          ansible.builtin.include_role:
            name: rhel-system-roles.storage
          vars:
            storage_pools:
              - name: myvg
                disks:
                  - sda
                  - sdb
                  - sdc
                volumes:
                  - name: mylv
                    size: 2G
                    fs_type: ext4
                    mount_point: /mnt/data

    The settings specified in the example playbook include the following:

    size: <size>
    You must specify the size by using units (for example, GiB) or as a percentage (for example, 60%). A percentage-based variant is sketched after this description.

    For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.storage/README.md file on the control node.
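
    For example, to size the volume as a share of the pool rather than as an absolute value, you could use a percentage (a sketch; the 60% figure is illustrative):

        volumes:
          - name: mylv
            size: 60%
            fs_type: ext4
            mount_point: /mnt/data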

  2. Validate the playbook syntax:

    $ ansible-playbook --syntax-check ~/playbook.yml

    Note that this command only validates the syntax and does not protect against a wrong but valid configuration.

  3. Run the playbook:

    $ ansible-playbook ~/playbook.yml

Verification

  • Verify that the specified volume has been created or resized to the requested size:

    # ansible managed-node-01.example.com -m command -a 'lvs myvg'

Additional resources

  • /usr/share/ansible/roles/rhel-system-roles.storage/README.md file
  • /usr/share/doc/rhel-system-roles/storage/ directory

2.4. Enabling online block discard by using the storage RHEL system role

You can mount an XFS file system with the online block discard option to automatically discard unused blocks.
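
Functionally, the persistent mount that the role configures corresponds to an /etc/fstab entry similar to the following (a sketch; the device and mount point match the example playbook in this procedure):

    /dev/sdb    /mnt/data    xfs    discard    0 0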

Prerequisites

Procedure

  1. Create a playbook file, for example ~/playbook.yml, with the following content:

    ---
    - name: Manage local storage
      hosts: managed-node-01.example.com
      tasks:
        - name: Enable online block discard
          ansible.builtin.include_role:
            name: rhel-system-roles.storage
          vars:
            storage_volumes:
              - name: barefs
                type: disk
                disks:
                  - sdb
                fs_type: xfs
                mount_point: /mnt/data
                mount_options: discard

    For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.storage/README.md file on the control node.

  2. Validate the playbook syntax:

    $ ansible-playbook --syntax-check ~/playbook.yml

    Note that this command only validates the syntax and does not protect against a wrong but valid configuration.

  3. Run the playbook:

    $ ansible-playbook ~/playbook.yml

Verification

  • Verify that the online block discard option is enabled:

    # ansible managed-node-01.example.com -m command -a 'findmnt /mnt/data'

Additional resources

  • /usr/share/ansible/roles/rhel-system-roles.storage/README.md file
  • /usr/share/doc/rhel-system-roles/storage/ directory

2.5. Creating and mounting an Ext4 file system by using the storage RHEL system role

The example Ansible playbook applies the storage role to create and mount an Ext4 file system.

Prerequisites

Procedure

  1. Create a playbook file, for example ~/playbook.yml, with the following content:

    ---
    - hosts: managed-node-01.example.com
      roles:
        - rhel-system-roles.storage
      vars:
        storage_volumes:
          - name: barefs
            type: disk
            disks:
              - sdb
            fs_type: ext4
            fs_label: label-name
            mount_point: /mnt/data
    • The playbook creates the file system on the /dev/sdb disk.
    • The playbook persistently mounts the file system at the /mnt/data directory.
    • The label of the file system is label-name.
  2. Validate the playbook syntax:

    $ ansible-playbook --syntax-check ~/playbook.yml

    Note that this command only validates the syntax and does not protect against a wrong but valid configuration.

  3. Run the playbook:

    $ ansible-playbook ~/playbook.yml
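
    After the run, you can confirm the file system type and label from the control node (a sketch; adjust the host and device names to your environment):

      # ansible managed-node-01.example.com -m command -a 'lsblk -f /dev/sdb'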

Additional resources

  • /usr/share/ansible/roles/rhel-system-roles.storage/README.md file
  • /usr/share/doc/rhel-system-roles/storage/ directory

2.6. Creating and mounting an Ext3 file system by using the storage RHEL system role

The example Ansible playbook applies the storage role to create and mount an Ext3 file system.

Prerequisites

Procedure

  1. Create a playbook file, for example ~/playbook.yml, with the following content:

    ---
    - hosts: all
      roles:
        - rhel-system-roles.storage
      vars:
        storage_volumes:
          - name: barefs
            type: disk
            disks:
              - sdb
            fs_type: ext3
            fs_label: label-name
            mount_point: /mnt/data
            mount_user: somebody
            mount_group: somegroup
            mount_mode: 0755
    • The playbook creates the file system on the /dev/sdb disk.
    • The playbook persistently mounts the file system at the /mnt/data directory.
    • The label of the file system is label-name.
  2. Validate the playbook syntax:

    $ ansible-playbook --syntax-check ~/playbook.yml

    Note that this command only validates the syntax and does not protect against a wrong but valid configuration.

  3. Run the playbook:

    $ ansible-playbook ~/playbook.yml
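
    After the run, you can confirm the ownership and permissions of the mount point (a sketch; note that this play targets hosts: all):

      # ansible all -m command -a 'ls -ld /mnt/data'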

Additional resources

  • /usr/share/ansible/roles/rhel-system-roles.storage/README.md file
  • /usr/share/doc/rhel-system-roles/storage/ directory

2.7. Creating a swap volume by using the storage RHEL system role

The example Ansible playbook applies the storage role to create a swap volume on a block device, if it does not exist, or to modify an existing swap volume, by using the default parameters.

Prerequisites

Procedure

  1. Create a playbook file, for example ~/playbook.yml, with the following content:

    ---
    - name: Create a disk device with swap
      hosts: managed-node-01.example.com
      roles:
        - rhel-system-roles.storage
      vars:
        storage_volumes:
          - name: swap_fs
            type: disk
            disks:
              - /dev/sdb
            size: 15 GiB
            fs_type: swap

    The volume name (swap_fs in the example) is currently arbitrary. The storage role identifies the volume by the disk device listed under the disks: attribute.

  2. Validate the playbook syntax:

    $ ansible-playbook --syntax-check ~/playbook.yml

    Note that this command only validates the syntax and does not protect against a wrong but valid configuration.

  3. Run the playbook:

    $ ansible-playbook ~/playbook.yml
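
    After the run, you can inspect the swap volume from the control node (a sketch; whether the swap space is already active depends on your configuration):

      # ansible managed-node-01.example.com -m command -a 'lsblk -f /dev/sdb'
      # ansible managed-node-01.example.com -m command -a 'swapon --show'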

Additional resources

  • /usr/share/ansible/roles/rhel-system-roles.storage/README.md file
  • /usr/share/doc/rhel-system-roles/storage/ directory

2.8. Configuring a RAID volume by using the storage RHEL system role

With the storage system role, you can configure a RAID volume on RHEL by using Red Hat Ansible Automation Platform and Ansible-Core. Create an Ansible playbook with the parameters to configure a RAID volume to suit your requirements.

Warning

Device names might change in certain circumstances, for example, when you add a new disk to a system. Therefore, to prevent data loss, use persistent naming attributes in the playbook. For more information about persistent naming attributes, see Overview of persistent naming attributes.
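
For example, instead of kernel names such as sdd, you could list stable links from the /dev/disk/by-id/ directory in the disks: attribute (a sketch; substitute the identifiers that exist on your system):

    disks:
      - /dev/disk/by-id/<disk-1-id>
      - /dev/disk/by-id/<disk-2-id>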

Prerequisites

Procedure

  1. Create a playbook file, for example ~/playbook.yml, with the following content:

    ---
    - name: Manage local storage
      hosts: managed-node-01.example.com
      tasks:
        - name: Create a RAID on sdd, sde, sdf, and sdg
          ansible.builtin.include_role:
            name: rhel-system-roles.storage
          vars:
            storage_safe_mode: false
            storage_volumes:
              - name: data
                type: raid
                disks: [sdd, sde, sdf, sdg]
                raid_level: raid0
                raid_chunk_size: 32 KiB
                mount_point: /mnt/data
                state: present

    For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.storage/README.md file on the control node.

  2. Validate the playbook syntax:

    $ ansible-playbook --syntax-check ~/playbook.yml

    Note that this command only validates the syntax and does not protect against a wrong but valid configuration.

  3. Run the playbook:

    $ ansible-playbook ~/playbook.yml

Verification

  • Verify that the array was correctly created:

    # ansible managed-node-01.example.com -m command -a 'mdadm --detail /dev/md/data'

Additional resources

  • /usr/share/ansible/roles/rhel-system-roles.storage/README.md file
  • /usr/share/doc/rhel-system-roles/storage/ directory
  • Managing RAID

2.9. Configuring an LVM pool with RAID by using the storage RHEL system role

With the storage system role, you can configure an LVM pool with RAID on RHEL by using Red Hat Ansible Automation Platform. You can set up an Ansible playbook with the available parameters to configure an LVM pool with RAID.

Prerequisites

Procedure

  1. Create a playbook file, for example ~/playbook.yml, with the following content:

    ---
    - name: Manage local storage
      hosts: managed-node-01.example.com
      tasks:
        - name: Configure LVM pool with RAID
          ansible.builtin.include_role:
            name: rhel-system-roles.storage
          vars:
            storage_safe_mode: false
            storage_pools:
              - name: my_pool
                type: lvm
                disks: [sdh, sdi]
                raid_level: raid1
                volumes:
                  - name: my_volume
                    size: "1 GiB"
                    mount_point: "/mnt/app/shared"
                    fs_type: xfs
                    state: present

    For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.storage/README.md file on the control node.

  2. Validate the playbook syntax:

    $ ansible-playbook --syntax-check ~/playbook.yml

    Note that this command only validates the syntax and does not protect against a wrong but valid configuration.

  3. Run the playbook:

    $ ansible-playbook ~/playbook.yml

Verification

  • Verify that your pool is on RAID:

    # ansible managed-node-01.example.com -m command -a 'lsblk'
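
    For a more targeted check, you can also display the segment type of the logical volume, which reports raid1 for a RAID-backed LV (a sketch):

      # ansible managed-node-01.example.com -m command -a 'lvs -a -o lv_name,segtype my_pool'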

Additional resources

  • /usr/share/ansible/roles/rhel-system-roles.storage/README.md file
  • /usr/share/doc/rhel-system-roles/storage/ directory
  • Managing RAID

2.10. Configuring a stripe size for RAID LVM volumes by using the storage RHEL system role

With the storage system role, you can configure a stripe size for RAID LVM volumes on RHEL by using Red Hat Ansible Automation Platform. You can set up an Ansible playbook with the available parameters to configure an LVM pool with RAID and set the stripe size of its volumes.

Prerequisites

Procedure

  1. Create a playbook file, for example ~/playbook.yml, with the following content:

    ---
    - name: Manage local storage
      hosts: managed-node-01.example.com
      tasks:
        - name: Configure stripe size for RAID LVM volumes
          ansible.builtin.include_role:
            name: rhel-system-roles.storage
          vars:
            storage_safe_mode: false
            storage_pools:
              - name: my_pool
                type: lvm
                disks: [sdh, sdi]
                volumes:
                  - name: my_volume
                    size: "1 GiB"
                    mount_point: "/mnt/app/shared"
                    fs_type: xfs
                    raid_level: raid0
                    raid_stripe_size: "256 KiB"
                    state: present

    For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.storage/README.md file on the control node.

  2. Validate the playbook syntax:

    $ ansible-playbook --syntax-check ~/playbook.yml

    Note that this command only validates the syntax and does not protect against a wrong but valid configuration.

  3. Run the playbook:

    $ ansible-playbook ~/playbook.yml

Verification

  • Verify that the stripe size is set to the required size:

    # ansible managed-node-01.example.com -m command -a 'lvs -o+stripesize /dev/my_pool/my_volume'

Additional resources

  • /usr/share/ansible/roles/rhel-system-roles.storage/README.md file
  • /usr/share/doc/rhel-system-roles/storage/ directory
  • Managing RAID

2.11. Configuring an LVM-VDO volume by using the storage RHEL system role

You can use the storage RHEL system role to create a VDO volume on LVM (LVM-VDO) with enabled compression and deduplication.

Note

Because the storage system role uses LVM-VDO, only one volume can be created per pool.

Prerequisites

Procedure

  1. Create a playbook file, for example ~/playbook.yml, with the following content:

    ---
    - name: Manage local storage
      hosts: managed-node-01.example.com
      tasks:
        - name: Create LVM-VDO volume under volume group 'myvg'
          ansible.builtin.include_role:
            name: rhel-system-roles.storage
          vars:
            storage_pools:
              - name: myvg
                disks:
                  - /dev/sdb
                volumes:
                  - name: mylv1
                    compression: true
                    deduplication: true
                    vdo_pool_size: 10 GiB
                    size: 30 GiB
                    mount_point: /mnt/app/shared

    The settings specified in the example playbook include the following:

    vdo_pool_size: <size>
    The actual size that the volume takes on the device. You can specify the size in human-readable format, such as 10 GiB. If you do not specify a unit, it defaults to bytes.
    size: <size>
    The virtual size of the VDO volume. Because VDO deduplicates and compresses data, the virtual size can be larger than the physical vdo_pool_size.

    For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.storage/README.md file on the control node.

  2. Validate the playbook syntax:

    $ ansible-playbook --syntax-check ~/playbook.yml

    Note that this command only validates the syntax and does not protect against a wrong but valid configuration.

  3. Run the playbook:

    $ ansible-playbook ~/playbook.yml

Verification

  • View the current status of compression and deduplication:

    $ ansible managed-node-01.example.com -m command -a 'lvs -o+vdo_compression,vdo_compression_state,vdo_deduplication,vdo_index_state'
      LV       VG      Attr       LSize   Pool   Origin Data%  Meta%  Move Log Cpy%Sync Convert VDOCompression VDOCompressionState VDODeduplication VDOIndexState
      mylv1   myvg   vwi-a-v---   3.00t vpool0                                                         enabled              online          enabled        online

Additional resources

  • /usr/share/ansible/roles/rhel-system-roles.storage/README.md file
  • /usr/share/doc/rhel-system-roles/storage/ directory

2.12. Creating a LUKS2 encrypted volume by using the storage RHEL system role

You can use the storage role to create and configure a volume encrypted with LUKS by running an Ansible playbook.

Prerequisites

Procedure

  1. Store your sensitive variables in an encrypted file:

    1. Create the vault:

      $ ansible-vault create vault.yml
      New Vault password: <vault_password>
      Confirm New Vault password: <vault_password>
    2. After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:

      luks_password: <password>
    3. Save the changes, and close the editor. Ansible encrypts the data in the vault.
  2. Create a playbook file, for example ~/playbook.yml, with the following content:

    ---
    - name: Manage local storage
      hosts: managed-node-01.example.com
      vars_files:
        - vault.yml
      tasks:
        - name: Create and configure a volume encrypted with LUKS
          ansible.builtin.include_role:
            name: rhel-system-roles.storage
          vars:
            storage_volumes:
              - name: barefs
                type: disk
                disks:
                  - sdb
                fs_type: xfs
                fs_label: <label>
                mount_point: /mnt/data
                encryption: true
                encryption_password: "{{ luks_password }}"

    For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.storage/README.md file on the control node.

  3. Validate the playbook syntax:

    $ ansible-playbook --ask-vault-pass --syntax-check ~/playbook.yml

    Note that this command only validates the syntax and does not protect against a wrong but valid configuration.

  4. Run the playbook:

    $ ansible-playbook --ask-vault-pass ~/playbook.yml

Verification

  1. Find the luksUUID value of the LUKS encrypted volume:

    # ansible managed-node-01.example.com -m command -a 'cryptsetup luksUUID /dev/sdb'
    
    4e4e7970-1822-470e-b55a-e91efe5d0f5c
  2. View the encryption status of the volume:

    # ansible managed-node-01.example.com -m command -a 'cryptsetup status luks-4e4e7970-1822-470e-b55a-e91efe5d0f5c'
    
    /dev/mapper/luks-4e4e7970-1822-470e-b55a-e91efe5d0f5c is active and is in use.
      type:    LUKS2
      cipher:  aes-xts-plain64
      keysize: 512 bits
      key location: keyring
      device:  /dev/sdb
    ...
  3. Verify the created LUKS encrypted volume:

    # ansible managed-node-01.example.com -m command -a 'cryptsetup luksDump /dev/sdb'
    
    LUKS header information
    Version:        2
    Epoch:          3
    Metadata area:  16384 [bytes]
    Keyslots area:  16744448 [bytes]
    UUID:           4e4e7970-1822-470e-b55a-e91efe5d0f5c
    Label:          (no label)
    Subsystem:      (no subsystem)
    Flags:          (no flags)
    
    Data segments:
      0: crypt
            offset: 16777216 [bytes]
            length: (whole device)
            cipher: aes-xts-plain64
            sector: 512 [bytes]
    ...

Additional resources

  • /usr/share/ansible/roles/rhel-system-roles.storage/README.md file
  • /usr/share/doc/rhel-system-roles/storage/ directory

2.13. Creating shared LVM devices using the storage RHEL system role

You can use the storage RHEL system role to create shared LVM devices if you want multiple systems to access the same storage at the same time.

This can bring the following notable benefits:

  • Resource sharing
  • Flexibility in managing storage resources
  • Simplification of storage management tasks

Prerequisites

Procedure

  1. Create a playbook file, for example ~/playbook.yml, with the following content:

    ---
    - name: Manage local storage
      hosts: managed-node-01.example.com
      become: true
      tasks:
        - name: Create shared LVM device
          ansible.builtin.include_role:
            name: rhel-system-roles.storage
          vars:
            storage_pools:
              - name: vg1
                disks: /dev/vdb
                type: lvm
                shared: true
                state: present
                volumes:
                  - name: lv1
                    size: 4g
                    mount_point: /opt/test1
            storage_safe_mode: false
            storage_use_partitions: true

    For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.storage/README.md file on the control node.

  2. Validate the playbook syntax:

    $ ansible-playbook --syntax-check ~/playbook.yml

    Note that this command only validates the syntax and does not protect against a wrong but valid configuration.

  3. Run the playbook:

    $ ansible-playbook ~/playbook.yml
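
    After the run, you can confirm that the volume group was created as shared (a sketch; a shared volume group is flagged in its VG attributes):

      # ansible managed-node-01.example.com -m command -a 'vgs vg1'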

Additional resources

  • /usr/share/ansible/roles/rhel-system-roles.storage/README.md file
  • /usr/share/doc/rhel-system-roles/storage/ directory