Chapter 3. Preparing Storage for Red Hat Virtualization
You need to prepare storage to be used for storage domains in the new environment. A Red Hat Virtualization environment must have at least one data storage domain, but adding more is recommended.
When installing or reinstalling the host’s operating system, Red Hat strongly recommends that you first detach any existing non-OS storage attached to the host, to avoid accidental initialization of these disks and potential data loss.
A data domain holds the virtual hard disks and OVF files of all the virtual machines and templates in a data center, and cannot be shared across data centers while active (but can be migrated between data centers). Data domains of multiple storage types can be added to the same data center, provided they are all shared, rather than local, domains.
You can use one of the following storage types:
- NFS
- iSCSI
- Fibre Channel (FCP)
- Red Hat Gluster Storage
Prerequisites
Self-hosted engines must have an additional data domain with at least 74 GiB dedicated to the Manager virtual machine. The self-hosted engine installer creates this domain. Prepare the storage for this domain before installation.
Warning: Extending or otherwise changing the self-hosted engine storage domain after deployment of the self-hosted engine is not supported. Any such change might prevent the self-hosted engine from booting.
- When using a block storage domain, either FCP or iSCSI, a single target LUN is the only supported setup for a self-hosted engine.
- If you use iSCSI storage, the self-hosted engine storage domain must use a dedicated iSCSI target. Any additional storage domains must use a different iSCSI target.
- It is strongly recommended to create additional data storage domains in the same data center as the self-hosted engine storage domain. If you deploy the self-hosted engine in a data center with only one active data storage domain, and that storage domain is corrupted, you cannot add new storage domains or remove the corrupted storage domain. You must redeploy the self-hosted engine.
3.1. Preparing NFS Storage
Set up NFS shares on your file storage or remote server to serve as storage domains on Red Hat Virtualization Host systems. After exporting the shares on the remote storage and configuring them in the Red Hat Virtualization Manager, the shares will be automatically imported on the Red Hat Virtualization hosts.
For information on setting up, configuring, mounting and exporting NFS, see Managing file systems for Red Hat Enterprise Linux 8.
Specific system user accounts and system user groups are required by Red Hat Virtualization so the Manager can store data in the storage domains represented by the exported directories. The following procedure sets the permissions for one directory. You must repeat the chown and chmod steps for all of the directories you intend to use as storage domains in Red Hat Virtualization.
Prerequisites
- Install the nfs-utils package:
  # dnf install nfs-utils -y
- To check the enabled NFS versions:
  # cat /proc/fs/nfsd/versions
- Enable the following services:
  # systemctl enable nfs-server
  # systemctl enable rpcbind
Procedure
- Create the group kvm:
  # groupadd kvm -g 36
- Create the user vdsm in the group kvm:
  # useradd vdsm -u 36 -g kvm
- Create the storage directory and modify the access rights:
  # mkdir /storage
  # chmod 0755 /storage
  # chown 36:36 /storage/
- Add the storage directory to /etc/exports with the relevant permissions:
  # vi /etc/exports
  # cat /etc/exports
  /storage *(rw)
- Restart the following services:
  # systemctl restart rpcbind
  # systemctl restart nfs-server
- To see which exports are available for a specific IP address:
  # exportfs
  /nfs_server/srv    10.46.11.3/24
  /nfs_server        <world>
If you change /etc/exports after starting the services, run the exportfs -ra command to reload the changes. After completing these steps, the exported directory is ready and can be tested from a different host to confirm that it is usable.
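For example, one way to test the export from a different host is to mount it and write a file as the vdsm user. This is only a sketch: the server name nfs-server.example.com and the mount point are placeholders, and it assumes the test host also has a vdsm user with UID 36 as created above.
  # mkdir /mnt/rhv-nfs-test
  # mount -t nfs nfs-server.example.com:/storage /mnt/rhv-nfs-test    # hypothetical server name
  # sudo -u vdsm touch /mnt/rhv-nfs-test/test-file
  # sudo -u vdsm rm /mnt/rhv-nfs-test/test-file
  # umount /mnt/rhv-nfs-test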
3.2. Preparing iSCSI Storage
Red Hat Virtualization supports iSCSI storage, which is a storage domain created from a volume group made up of LUNs. Volume groups and LUNs cannot be attached to more than one storage domain at a time.
For information on setting up and configuring iSCSI storage, see Configuring an iSCSI target in Managing storage devices for Red Hat Enterprise Linux 8.
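As a rough sketch of the host-side workflow, you can discover and log in to an iSCSI target with iscsiadm. The portal address and target IQN below are placeholders for your environment; in a typical deployment the Manager performs the discovery and login when you create the storage domain, so treat this as a connectivity check.
  # iscsiadm -m discovery -t sendtargets -p 192.0.2.10:3260                            # hypothetical portal
  # iscsiadm -m node -T iqn.2024-01.com.example:rhv-data -p 192.0.2.10:3260 --login    # hypothetical target IQN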
If you are using block storage and intend to deploy virtual machines on raw devices or direct LUNs and manage them with the Logical Volume Manager (LVM), you must create a filter to hide guest logical volumes. This prevents guest logical volumes from being activated when the host is booted, a situation that could lead to stale logical volumes and cause data corruption. Use the vdsm-tool config-lvm-filter command to create filters for the LVM. See Creating an LVM filter.
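For example, the tool is typically run on each host, where it analyzes the local configuration and proposes the filter to apply; review the suggested change before confirming it.
  # vdsm-tool config-lvm-filter    # interactive; review and confirm the suggested LVM filter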
Red Hat Virtualization currently does not support block storage with a block size of 4K. You must configure block storage in legacy (512b block) mode.
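One quick way to confirm that a LUN reports 512-byte logical blocks before using it is to query it with blockdev; /dev/sdb is a placeholder device name.
  # blockdev --getss /dev/sdb    # logical sector size; must report 512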
If your host is booting from SAN storage and loses connectivity to the storage, the storage file systems become read-only and remain in this state after connectivity is restored.
To prevent this situation, add a drop-in multipath configuration file for the boot LUN on the root file system of the host that boots from the SAN, so that I/O to the boot LUN is queued until a connection is restored:
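For example, a minimal drop-in file of this kind might look like the following sketch, where boot_LUN_wwid is a placeholder for the WWID of the boot LUN and host.conf is an arbitrary file name:
  # cat /etc/multipath/conf.d/host.conf
  multipaths {
      multipath {
          wwid boot_LUN_wwid
          no_path_retry queue
      }
  }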
3.3. Preparing FCP Storage
Red Hat Virtualization supports SAN storage by creating a storage domain from a volume group made of pre-existing LUNs. Neither volume groups nor LUNs can be attached to more than one storage domain at a time.
Red Hat Virtualization system administrators need a working knowledge of Storage Area Networks (SAN) concepts. SAN usually uses Fibre Channel Protocol (FCP) for traffic between hosts and shared external storage. For this reason, SAN may occasionally be referred to as FCP storage.
For information on setting up and configuring FCP or multipathing on Red Hat Enterprise Linux, see the Storage Administration Guide and DM Multipath Guide.
If you are using block storage and intend to deploy virtual machines on raw devices or direct LUNs and manage them with the Logical Volume Manager (LVM), you must create a filter to hide guest logical volumes. This prevents guest logical volumes from being activated when the host is booted, a situation that could lead to stale logical volumes and cause data corruption. Use the vdsm-tool config-lvm-filter command to create filters for the LVM. See Creating an LVM filter.
Red Hat Virtualization currently does not support block storage with a block size of 4K. You must configure block storage in legacy (512b block) mode.
If your host is booting from SAN storage and loses connectivity to the storage, the storage file systems become read-only and remain in this state after connectivity is restored.
To prevent this situation, add a drop-in multipath configuration file for the boot LUN on the root file system of the host that boots from the SAN, so that I/O to the boot LUN is queued until a connection is restored:
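As in the iSCSI case, a minimal drop-in file might look like the following sketch, with boot_LUN_wwid standing in for the WWID of the boot LUN and host.conf as an arbitrary file name:
  # cat /etc/multipath/conf.d/host.conf
  multipaths {
      multipath {
          wwid boot_LUN_wwid
          no_path_retry queue
      }
  }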
3.4. Preparing Red Hat Gluster Storage
For information on setting up and configuring Red Hat Gluster Storage, see the Red Hat Gluster Storage Installation Guide.
For the Red Hat Gluster Storage versions that are supported with Red Hat Virtualization, see Red Hat Gluster Storage Version Compatibility and Support.
3.5. Customizing Multipath Configurations for SAN Vendors
If your RHV environment is configured to use multipath connections with SANs, you can customize the multipath configuration settings to meet requirements specified by your storage vendor. These customizations can override both the default settings and settings that are specified in /etc/multipath.conf.
To override the multipath settings, do not customize /etc/multipath.conf. Because VDSM owns /etc/multipath.conf, installing or upgrading VDSM or Red Hat Virtualization can overwrite this file including any customizations it contains. This overwriting can cause severe storage failures.
Instead, you create a file in the /etc/multipath/conf.d directory that contains the settings you want to customize or override.
VDSM executes the files in /etc/multipath/conf.d in alphabetical order. So, to make sure your settings are applied last and override the earlier ones, begin the filename with a number that places it at the end of the order. For example, /etc/multipath/conf.d/90-myfile.conf.
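For example, a vendor-specific override might look like the following sketch; the vendor, product, and path_checker values are hypothetical and should be taken from your storage vendor's documentation.
  # cat /etc/multipath/conf.d/90-myvendor.conf
  devices {
      device {
          vendor       "MYVENDOR"
          product      "MYARRAY"
          path_checker tur
      }
  }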
To avoid causing severe storage failures, follow these guidelines:
- Do not modify /etc/multipath.conf. If the file contains user modifications, and the file is overwritten, it can cause unexpected storage problems.
- Do not override the user_friendly_names and find_multipaths settings. For details, see Recommended Settings for Multipath.conf.
- Avoid overriding the no_path_retry and polling_interval settings unless a storage vendor specifically requires you to do so. For details, see Recommended Settings for Multipath.conf.
Not following these guidelines can cause catastrophic storage errors.
Prerequisites
VDSM is configured to use the multipath module. To verify this, enter:
# vdsm-tool is-configured --module multipath
Procedure
- Create a new configuration file in the /etc/multipath/conf.d directory.
- Copy the individual setting you want to override from /etc/multipath.conf to the new configuration file in /etc/multipath/conf.d/<my_device>.conf. Remove any comment marks, edit the setting values, and save your changes.
- Apply the new configuration settings by entering:
  # systemctl reload multipathd
  Note: Do not restart the multipathd service. Doing so generates errors in the VDSM logs.
Verification steps
- Test that the new configuration performs as expected on a non-production cluster in a variety of failure scenarios. For example, disable all of the storage connections.
- Enable one connection at a time and verify that doing so makes the storage domain reachable.
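While running these tests, one simple way to observe the effect is to watch the path states that multipathd reports, for example:
  # multipath -ll    # each path should move between active and failed as you disable and re-enable connections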
3.6. Recommended Settings for Multipath.conf
Do not override the following settings:
- user_friendly_names no
  Device names must be consistent across all hypervisors. For example, /dev/mapper/{WWID}. The default value of this setting, no, prevents the assignment of arbitrary and inconsistent device names such as /dev/mapper/mpath{N} on various hypervisors, which can lead to unpredictable system behavior.
  Warning: Do not change this setting to user_friendly_names yes. User-friendly names are likely to cause unpredictable system behavior or failures, and are not supported.
- find_multipaths no
  This setting controls whether RHVH tries to access devices through multipath only if more than one path is available. The current value, no, allows RHV to access devices through multipath even if only one path is available.
  Warning: Do not override this setting.
Avoid overriding the following settings unless required by the storage system vendor:
- no_path_retry 4
  This setting controls the number of polling attempts to retry when no paths are available. Before RHV version 4.2, the value of no_path_retry was fail because QEMU had trouble with the I/O queuing when no paths were available. The fail value made it fail quickly and paused the virtual machine. RHV version 4.2 changed this value to 4, so when multipathd detects the last path has failed, it checks all of the paths four more times. Assuming the default 5-second polling interval, checking the paths takes 20 seconds. If no path is up, multipathd tells the kernel to stop queuing and fails all outstanding and future I/O until a path is restored. When a path is restored, the 20-second delay is reset for the next time all paths fail. For more details, see the commit that changed this setting.
- polling_interval 5
  This setting determines the number of seconds between polling attempts to detect whether a path is open or has failed. Unless the vendor provides a clear reason for increasing the value, keep the VDSM-generated default so the system responds to path failures sooner.
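For reference, these recommended values correspond to entries like the following in the defaults section of the VDSM-generated /etc/multipath.conf; this is only a sketch for comparison, not content to copy into an override file.
  defaults {
      user_friendly_names  no
      find_multipaths      no
      no_path_retry        4
      polling_interval     5
  }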