2.6. Storage
2.6.1. About Red Hat Virtualization storage
Red Hat Virtualization uses a centralized storage system for virtual disks, ISO files and snapshots. Storage networking can be implemented using:
- Network File System (NFS)
- Other POSIX compliant file systems
- Internet Small Computer System Interface (iSCSI)
- Local storage attached directly to the virtualization hosts
- Fibre Channel Protocol (FCP)
- Parallel NFS (pNFS)
Setting up storage is a prerequisite for a new data center because a data center cannot be initialized unless storage domains are attached and activated.
As a Red Hat Virtualization system administrator, you create, configure, attach and maintain storage for the virtualized enterprise. You must be familiar with the storage types and their use. Read your storage array vendor’s guides, and see Red Hat Enterprise Linux Managing storage devices for more information on the concepts, protocols, requirements, and general usage of storage.
To add storage domains you must be able to successfully access the Administration Portal, and there must be at least one host connected with a status of Up.
Red Hat Virtualization has three types of storage domains:
- Data Domain: A data domain holds the virtual hard disks and OVF files of all the virtual machines and templates in a data center. In addition, snapshots of the virtual machines are also stored in the data domain.
The data domain cannot be shared across data centers. Data domains of multiple types (iSCSI, NFS, FC, POSIX, and Gluster) can be added to the same data center, provided they are all shared, rather than local, domains.
You must attach a data domain to a data center before you can attach domains of other types to it.
- ISO Domain: ISO domains store ISO files (or logical CDs) used to install and boot operating systems and applications for the virtual machines. An ISO domain removes the data center’s need for physical media. An ISO domain can be shared across different data centers. ISO domains can only be NFS-based. Only one ISO domain can be added to a data center.
- Export Domain: Export domains are temporary storage repositories that are used to copy and move images between data centers and Red Hat Virtualization environments. Export domains can be used to back up virtual machines. An export domain can be moved between data centers; however, it can only be active in one data center at a time. Export domains can only be NFS-based. Only one export domain can be added to a data center.
Note: The export storage domain is deprecated. Storage data domains can be unattached from a data center and imported to another data center in the same environment, or in a different environment. Virtual machines, floating virtual disks, and templates can then be uploaded from the imported storage domain to the attached data center. See Importing Existing Storage Domains for information on importing storage domains.
Only commence configuring and attaching storage for your Red Hat Virtualization environment once you have determined the storage needs of your data center(s).
2.6.2. Understanding Storage Domains
A storage domain is a collection of images that have a common storage interface. A storage domain contains complete images of templates and virtual machines (including snapshots), or ISO files. A storage domain can be made of block devices (SAN - iSCSI or FCP) or a file system (NAS - NFS, GlusterFS, or other POSIX compliant file systems).
By default, GlusterFS domains and local storage domains support 4K block size. 4K block size can provide better performance, especially when using large files, and it is also necessary when you use tools that require 4K compatibility, such as VDO.
GlusterFS Storage is deprecated and will no longer be supported in future releases.
On NFS, all virtual disks, templates, and snapshots are files.
On SAN (iSCSI/FCP), each virtual disk, template or snapshot is a logical volume. Block devices are aggregated into a logical entity called a volume group, and then divided by LVM (Logical Volume Manager) into logical volumes for use as virtual hard disks. See Red Hat Enterprise Linux Configuring and managing logical volumes for more information on LVM.
Virtual disks can have one of two formats, either QCOW2 or raw. The allocation policy can be sparse or preallocated. Snapshots are always sparse but can be taken for disks of either format.
Virtual machines that share the same storage domain can be migrated between hosts that belong to the same cluster.
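The sparse versus preallocated distinction above can be illustrated with ordinary files; a minimal sketch (the /tmp paths are arbitrary examples, not RHV storage paths):

```shell
# Sparse vs. preallocated allocation, illustrated with plain files.
# truncate creates a sparse file: its apparent size is 100 MiB, but no
# blocks are allocated until data is written. dd writes real zeros, so
# every block is allocated up front.
truncate -s 100M /tmp/sparse.img
dd if=/dev/zero of=/tmp/prealloc.img bs=1M count=100 status=none
# du reports blocks actually allocated; ls -l would show the apparent size.
echo "sparse on-disk KiB:   $(du -k /tmp/sparse.img | cut -f1)"
echo "prealloc on-disk KiB: $(du -k /tmp/prealloc.img | cut -f1)"
```

The same trade-off applies to virtual disks: sparse images consume space only as the guest writes, while preallocated images reserve their full size immediately.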
2.6.3. Preparing and Adding NFS Storage
2.6.3.1. Preparing NFS Storage
Set up NFS shares on your file storage or remote server to serve as storage domains on Red Hat Virtualization Host systems. After exporting the shares on the remote storage and configuring them in the Red Hat Virtualization Manager, the shares will be automatically imported on the Red Hat Virtualization hosts.
For information on setting up, configuring, mounting and exporting NFS, see Managing file systems for Red Hat Enterprise Linux 8.
Specific system user accounts and system user groups are required by Red Hat Virtualization so that the Manager can store data in the storage domains represented by the exported directories. The following procedure sets the permissions for one directory. You must repeat the chown and chmod steps for all of the directories you intend to use as storage domains in Red Hat Virtualization.
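The repeated chown and chmod steps can be scripted across every intended storage directory; a minimal sketch (the directory list is a placeholder, substitute your own export paths):

```shell
# Apply vdsm (UID 36) / kvm (GID 36) ownership and 0755 mode to each
# directory that will back a storage domain. The paths are placeholders.
STORAGE_DIRS="/tmp/rhv_data1 /tmp/rhv_data2"
for d in $STORAGE_DIRS; do
    mkdir -p "$d"
    # chown requires root; the fallback message keeps the sketch runnable
    chown 36:36 "$d" 2>/dev/null || echo "note: chown requires root ($d)"
    chmod 0755 "$d"
done
# Show the resulting modes
stat -c '%a %n' $STORAGE_DIRS
```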
Prerequisites
Install the nfs-utils package:
# dnf install nfs-utils -y
To check the enabled versions:
# cat /proc/fs/nfsd/versions
Enable the following services:
# systemctl enable nfs-server
# systemctl enable rpcbind
Procedure
Create the group kvm:
# groupadd kvm -g 36
Create the user vdsm in the group kvm:
# useradd vdsm -u 36 -g kvm
Create the storage directory and modify the access rights:
# mkdir /storage
# chmod 0755 /storage
# chown 36:36 /storage/
Add the storage directory to /etc/exports with the relevant permissions:
# vi /etc/exports
# cat /etc/exports
/storage *(rw)
Restart the following services:
# systemctl restart rpcbind
# systemctl restart nfs-server
To see which exports are available for a specific IP address:
# exportfs
/nfs_server/srv    10.46.11.3/24
/nfs_server        <world>
If changes in /etc/exports have been made after starting the services, the exportfs -ra command can be used to reload the changes. After performing all the above stages, the exported directory should be ready and can be tested from a different host to check that it is usable.
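The /etc/exports edit above can be made idempotent so that repeated runs do not duplicate the entry; a sketch using a scratch copy of the file (the export line is the example from this section):

```shell
# Append the export entry only if it is not already present.
# A scratch file stands in for /etc/exports so the sketch is safe to run.
EXPORTS=/tmp/exports.sketch
: > "$EXPORTS"
ENTRY='/storage *(rw)'
add_entry() {
    # -x matches the whole line, -F treats the entry as a fixed string
    grep -qxF "$ENTRY" "$EXPORTS" || echo "$ENTRY" >> "$EXPORTS"
}
add_entry
add_entry   # second run is a no-op
cat "$EXPORTS"
```

After editing the real /etc/exports this way, run exportfs -ra to apply the change.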
2.6.3.2. Adding NFS Storage
This procedure shows you how to attach existing NFS storage to your Red Hat Virtualization environment as a data domain.
If you require an ISO or export domain, use this procedure, but select ISO or Export from the Domain Function list.
Procedure
- In the Administration Portal, click Storage → Domains.
- Click New Domain.
- Enter a Name for the storage domain.
- Accept the default values for the Data Center, Domain Function, Storage Type, Format, and Host lists.
- Enter the Export Path to be used for the storage domain. The export path should be in the format of 123.123.0.10:/data (for IPv4), [2001:0:0:0:0:0:0:5db1]:/data (for IPv6), or domain.example.com:/data.
Optionally, you can configure the advanced parameters:
- Click Advanced Parameters.
- Enter a percentage value into the Warning Low Space Indicator field. If the free space available on the storage domain is below this percentage, warning messages are displayed to the user and logged.
- Enter a GB value into the Critical Space Action Blocker field. If the free space available on the storage domain is below this value, error messages are displayed to the user and logged, and any new action that consumes space, even temporarily, will be blocked.
- Select the Wipe After Delete check box to enable the wipe after delete option. This option can be edited after the domain is created, but doing so will not change the wipe after delete property of disks that already exist.
- Click OK.
The new NFS data domain has a status of Locked until the disk is prepared. The data domain is then automatically attached to the data center.
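The export path formats listed in the procedure can be sanity-checked before entry; a rough illustrative helper, not a full validator:

```shell
# Rough shape check for NFS export paths: [IPv6]:/path or host:/path.
OUT=/tmp/export_path_check.out
: > "$OUT"
check_export_path() {
    case "$1" in
        \[*\]:/*) echo "ok (IPv6): $1" ;;
        *:/*)     echo "ok: $1" ;;
        *)        echo "invalid (expected host:/path): $1" ;;
    esac
}
for p in '123.123.0.10:/data' '[2001:0:0:0:0:0:0:5db1]:/data' \
         'domain.example.com:/data' 'missing-colon-path'; do
    check_export_path "$p" | tee -a "$OUT"
done
```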
2.6.3.3. Increasing NFS Storage
To increase the amount of NFS storage, you can either create a new storage domain and add it to an existing data center, or increase the available free space on the NFS server. For the former option, see Adding NFS Storage. The following procedure explains how to increase the available free space on the existing NFS server.
Procedure
- Click Storage → Domains.
- Click the NFS storage domain’s name. This opens the details view.
- Click the Data Center tab and click Maintenance to place the storage domain into maintenance mode. This unmounts the existing share and makes it possible to resize the storage domain.
- On the NFS server, resize the storage. For Red Hat Enterprise Linux 6 systems, see Red Hat Enterprise Linux 6 Storage Administration Guide. For Red Hat Enterprise Linux 7 systems, see Red Hat Enterprise Linux 7 Storage Administration Guide. For Red Hat Enterprise Linux 8 systems, see Resizing a partition.
- In the details view, click the Data Center tab and click Activate to mount the storage domain.
2.6.4. Preparing and adding local storage
A storage device that is physically installed on the virtual machine’s host is referred to as a local storage device.
A storage device must be part of a storage domain. The storage domain type for local storage is referred to as a local storage domain.
Configuring a host to use local storage automatically creates, and adds the host to, a new local storage domain, data center and cluster to which no other host can be added. Multiple-host clusters require that all hosts have access to all storage domains, which is not possible with local storage. Virtual machines created in a single-host cluster cannot be migrated, fenced, or scheduled.
2.6.4.1. Preparing local storage
On Red Hat Virtualization Host (RHVH), local storage should always be defined on a file system that is separate from / (root). Use a separate logical volume or disk to prevent possible loss of data during upgrades.
Procedure for Red Hat Enterprise Linux hosts
On the host, create the directory to be used for the local storage:
# mkdir -p /data/images
Ensure that the directory has permissions allowing read/write access to the vdsm user (UID 36) and kvm group (GID 36):
# chown 36:36 /data /data/images # chmod 0755 /data /data/images
Procedure for Red Hat Virtualization Hosts
Create the local storage on a logical volume:
Create a local storage directory:
# mkdir /data
# lvcreate -L $SIZE rhvh -n data
# mkfs.ext4 /dev/mapper/rhvh-data
# echo "/dev/mapper/rhvh-data /data ext4 defaults,discard 1 2" >> /etc/fstab
# mount /data
Mount the new local storage:
# mount -a
Ensure that the directory has permissions allowing read/write access to the vdsm user (UID 36) and kvm group (GID 36):
# chown 36:36 /data /rhvh-data
# chmod 0755 /data /rhvh-data
2.6.4.2. Adding a local storage domain
When adding a local storage domain to a host, setting the path to the local storage directory automatically creates and places the host in a local data center, local cluster, and local storage domain.
Procedure
- Click Compute → Hosts and select the host.
- Click Management → Maintenance and click OK. The host’s status changes to Maintenance.
- Click Management → Configure Local Storage.
- Click the Edit buttons next to the Data Center, Cluster, and Storage fields to configure and name the local storage domain.
- Set the path to your local storage in the text entry field.
- If applicable, click the Optimization tab to configure the memory optimization policy for the new local storage cluster.
- Click OK.
The Manager sets up the local data center with a local cluster and a local storage domain. It also changes the host’s status to Up.
Verification
- Click Storage → Domains.
- Locate the local storage domain you just added.
The domain’s status should be Active, and the value in the Storage Type column should be Local on Host.
You can now upload a disk image in the new local storage domain.
2.6.5. Preparing and Adding POSIX-compliant File System Storage
2.6.5.1. Preparing POSIX-compliant File System Storage
POSIX file system support allows you to mount file systems using the same mount options that you would normally use when mounting them manually from the command line. This functionality is intended to allow access to storage not exposed using NFS, iSCSI, or FCP.
Any POSIX-compliant file system used as a storage domain in Red Hat Virtualization must be a clustered file system, such as Global File System 2 (GFS2), and must support sparse files and direct I/O. The Common Internet File System (CIFS), for example, does not support direct I/O, making it incompatible with Red Hat Virtualization.
For information on setting up and configuring POSIX-compliant file system storage, see Red Hat Enterprise Linux Global File System 2.
Do not mount NFS storage by creating a POSIX-compliant file system storage domain. Always create an NFS storage domain instead.
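One of the requirements above, direct I/O support, can be probed from the command line; a sketch using /tmp as a stand-in for the candidate mount point (the printed result depends on the file system actually being tested):

```shell
# Probe O_DIRECT support by attempting a direct-I/O write. dd with
# oflag=direct fails on file systems that lack O_DIRECT (tmpfs, for
# example), which is exactly what makes such storage unusable in RHV.
probe=/tmp/dio_probe.$$
if dd if=/dev/zero of="$probe" bs=4096 count=1 oflag=direct status=none 2>/dev/null; then
    echo "direct I/O: supported" | tee /tmp/dio_result.out
else
    echo "direct I/O: not supported" | tee /tmp/dio_result.out
fi
rm -f "$probe"
```

Run the probe on the mounted candidate file system, not on /tmp, when qualifying real storage.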
2.6.5.2. Adding POSIX-compliant File System Storage
This procedure shows you how to attach existing POSIX-compliant file system storage to your Red Hat Virtualization environment as a data domain.
Procedure
- Click Storage → Domains.
- Click New Domain.
- Enter the Name for the storage domain.
- Select the Data Center to be associated with the storage domain. The selected data center must be of type POSIX (POSIX compliant FS). Alternatively, select (none).
- Select Data from the Domain Function drop-down list, and POSIX compliant FS from the Storage Type drop-down list. If applicable, select the Format from the drop-down menu.
- Select a host from the Host drop-down list.
- Enter the Path to the POSIX file system, as you would normally provide it to the mount command.
- Enter the VFS Type, as you would normally provide it to the mount command using the -t argument. See man mount for a list of valid VFS types.
- Enter additional Mount Options, as you would normally provide them to the mount command using the -o argument. The mount options should be provided in a comma-separated list. See man mount for a list of valid mount options.
Optionally, you can configure the advanced parameters:
- Click Advanced Parameters.
- Enter a percentage value in the Warning Low Space Indicator field. If the free space available on the storage domain is below this percentage, warning messages are displayed to the user and logged.
- Enter a GB value in the Critical Space Action Blocker field. If the free space available on the storage domain is below this value, error messages are displayed to the user and logged, and any new action that consumes space, even temporarily, will be blocked.
- Select the Wipe After Delete check box to enable the wipe after delete option. This option can be edited after the domain is created, but doing so will not change the wipe after delete property of disks that already exist.
- Click OK.
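The Path, VFS Type, and Mount Options fields map directly onto a manual mount invocation; a sketch that assembles the equivalent command (all values here are hypothetical examples, including the GFS2 device and mount point):

```shell
# How the three New Domain fields correspond to mount(8) arguments:
#   Path -> the source device, VFS Type -> -t, Mount Options -> -o
PATH_FIELD=/dev/mapper/cluster-gfs2   # hypothetical GFS2 device
VFS_TYPE=gfs2
MOUNT_OPTS=noatime,nodiratime
echo "mount -t $VFS_TYPE -o $MOUNT_OPTS $PATH_FIELD /mnt/posix_domain" \
    | tee /tmp/mount_cmd.out
```

If mounting the file system manually with that command fails, the same values will fail in the New Domain window.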
2.6.6. Preparing and Adding Block Storage
2.6.6.1. Preparing iSCSI Storage
Red Hat Virtualization supports iSCSI storage, which is a storage domain created from a volume group made up of LUNs. Volume groups and LUNs cannot be attached to more than one storage domain at a time.
For information on setting up and configuring iSCSI storage, see Configuring an iSCSI target in Managing storage devices for Red Hat Enterprise Linux 8.
If you are using block storage and intend to deploy virtual machines on raw devices or direct LUNs and manage them with the Logical Volume Manager (LVM), you must create a filter to hide guest logical volumes. This prevents guest logical volumes from being activated when the host is booted, a situation that could lead to stale logical volumes and cause data corruption. Use the vdsm-tool config-lvm-filter command to create filters for the LVM.
Red Hat Virtualization currently does not support block storage with a block size of 4K. You must configure block storage in legacy (512b block) mode.
If your host is booting from SAN storage and loses connectivity to the storage, the storage file systems become read-only and remain in this state after connectivity is restored.
To prevent this situation, add a drop-in multipath configuration file on the root file system of the SAN for the boot LUN to ensure that it is queued when there is a connection:
# cat /etc/multipath/conf.d/host.conf
multipaths {
    multipath {
        wwid boot_LUN_wwid
        no_path_retry queue
    }
}
2.6.6.2. Adding iSCSI Storage
This procedure shows you how to attach existing iSCSI storage to your Red Hat Virtualization environment as a data domain.
Procedure
- Click Storage → Domains.
- Click New Domain.
- Enter the Name of the new storage domain.
- Select a Data Center from the drop-down list.
- Select Data as the Domain Function and iSCSI as the Storage Type.
Select an active host as the Host.
Important: Communication to the storage domain is from the selected host and not directly from the Manager. Therefore, all hosts must have access to the storage device before the storage domain can be configured.
The Manager can map iSCSI targets to LUNs or LUNs to iSCSI targets. The New Domain window automatically displays known targets with unused LUNs when the iSCSI storage type is selected. If the target that you are using to add storage does not appear, you can use target discovery to find it; otherwise proceed to the next step.
Click Discover Targets to enable target discovery options. When targets have been discovered and logged in to, the New Domain window automatically displays targets with LUNs unused by the environment.
Note: LUNs used externally to the environment are also displayed.
You can use the Discover Targets options to add LUNs on many targets or multiple paths to the same LUNs.
Important: If you use the REST API method discoveriscsi to discover the iSCSI targets, you can use an FQDN or an IP address, but you must use the iSCSI details from the discovered targets results to log in using the REST API method iscsilogin. See discoveriscsi in the REST API Guide for more information.
- Enter the FQDN or IP address of the iSCSI host in the Address field.
- Enter the port with which to connect to the host when browsing for targets in the Port field. The default is 3260.
- If CHAP is used to secure the storage, select the User Authentication check box. Enter the CHAP user name and CHAP password.
Note: You can define credentials for an iSCSI target for a specific host with the REST API. See StorageServerConnectionExtensions: add in the REST API Guide for more information.
- Click Discover.
Select one or more targets from the discovery results and click Login for one target or Login All for multiple targets.
Important: If more than one path access is required, you must discover and log in to the target through all the required paths. Modifying a storage domain to add additional paths is currently not supported.
Important: When using the REST API iscsilogin method to log in, you must use the iSCSI details from the discovered targets results in the discoveriscsi method. See iscsilogin in the REST API Guide for more information.
- Click the + button next to the desired target. This expands the entry and displays all unused LUNs attached to the target.
- Select the check box for each LUN that you are using to create the storage domain.
Optionally, you can configure the advanced parameters:
- Click Advanced Parameters.
- Enter a percentage value into the Warning Low Space Indicator field. If the free space available on the storage domain is below this percentage, warning messages are displayed to the user and logged.
- Enter a GB value into the Critical Space Action Blocker field. If the free space available on the storage domain is below this value, error messages are displayed to the user and logged, and any new action that consumes space, even temporarily, will be blocked.
- Select the Wipe After Delete check box to enable the wipe after delete option. This option can be edited after the domain is created, but doing so will not change the wipe after delete property of disks that already exist.
- Select the Discard After Delete check box to enable the discard after delete option. This option can be edited after the domain is created. This option is only available to block storage domains.
- Click OK.
If you have configured multiple storage connection paths to the same target, follow the procedure in Configuring iSCSI Multipathing to complete iSCSI bonding.
If you want to migrate your current storage network to an iSCSI bond, see Migrating a Logical Network to an iSCSI Bond.
2.6.6.3. Configuring iSCSI Multipathing
iSCSI multipathing enables you to create and manage groups of logical networks and iSCSI storage connections. Multiple network paths between the hosts and iSCSI storage prevent host downtime caused by network path failure.
The Manager connects each host in the data center to each target, using the NICs or VLANs that are assigned to the logical networks in the iSCSI bond.
You can create an iSCSI bond with multiple targets and logical networks for redundancy.
Prerequisites
- One or more iSCSI targets
One or more logical networks that meet the following requirements:
- Not defined as Required or VM Network
- Assigned to a host interface
- Assigned a static IP address in the same VLAN and subnet as the other logical networks in the iSCSI bond
Multipath is not supported for Self-Hosted Engine deployments.
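The prerequisite that the logical networks in a bond share a subnet can be checked quickly; a simplistic sketch for /24 networks (the addresses are placeholders, and a real check would honor the actual prefix length):

```shell
# Compare the first three octets of two IPv4 addresses: a crude
# same-/24-subnet test, sufficient as a quick sanity check.
same_subnet24() {
    [ "${1%.*}" = "${2%.*}" ]
}
: > /tmp/subnet_check.out
for pair in "10.35.10.11 10.35.10.12" "10.35.10.11 10.35.11.12"; do
    set -- $pair
    if same_subnet24 "$1" "$2"; then
        echo "$1 and $2: same /24" | tee -a /tmp/subnet_check.out
    else
        echo "$1 and $2: different subnets" | tee -a /tmp/subnet_check.out
    fi
done
```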
Procedure
- Click Compute → Data Centers.
- Click the data center name. This opens the details view.
- In the iSCSI Multipathing tab, click Add.
- In the Add iSCSI Bond window, enter a Name and a Description.
- Select a logical network from Logical Networks and a storage domain from Storage Targets. You must select all the paths to the same target.
- Click OK.
The hosts in the data center are connected to the iSCSI targets through the logical networks in the iSCSI bond.
2.6.6.4. Migrating a Logical Network to an iSCSI Bond
If you have a logical network that you created for iSCSI traffic and configured on top of an existing network bond, you can migrate it to an iSCSI bond on the same subnet without disruption or downtime.
Procedure
Modify the current logical network so that it is not Required:
- Click Compute → Clusters.
- Click the cluster name. This opens the details view.
- In the Logical Networks tab, select the current logical network (net-1) and click Manage Networks.
- Clear the Require check box and click OK.
Create a new logical network that is not Required and not VM network:
- Click Network → Networks.
- Click Add Network. This opens the New Logical Network window.
- In the General tab, enter the Name (net-2) and clear the VM network check box.
- In the Cluster tab, clear the Require check box and click OK.
Remove the current network bond and reassign the logical networks:
- Click Compute → Hosts.
- Click the host name. This opens the details view.
- In the Network Interfaces tab, click Setup Host Networks.
- Drag net-1 to the right to unassign it.
- Drag the current bond to the right to remove it.
- Drag net-1 and net-2 to the left to assign them to physical interfaces.
- Click the pencil icon of net-2. This opens the Edit Network window.
- In the IPv4 tab, select Static.
- Enter the IP and Netmask/Routing Prefix of the subnet and click OK.
- Click OK.
Create the iSCSI bond:
- Click Compute → Data Centers.
- Click the data center name. This opens the details view.
- In the iSCSI Multipathing tab, click Add.
- In the Add iSCSI Bond window, enter a Name, select the networks net-1 and net-2, and click OK.
Your data center has an iSCSI bond containing the old and new logical networks.
2.6.6.5. Preparing FCP Storage
Red Hat Virtualization supports SAN storage by creating a storage domain from a volume group made of pre-existing LUNs. Neither volume groups nor LUNs can be attached to more than one storage domain at a time.
Red Hat Virtualization system administrators need a working knowledge of Storage Area Networks (SAN) concepts. SAN usually uses Fibre Channel Protocol (FCP) for traffic between hosts and shared external storage. For this reason, SAN may occasionally be referred to as FCP storage.
For information on setting up and configuring FCP or multipathing on Red Hat Enterprise Linux, see the Storage Administration Guide and DM Multipath Guide.
If you are using block storage and intend to deploy virtual machines on raw devices or direct LUNs and manage them with the Logical Volume Manager (LVM), you must create a filter to hide guest logical volumes. This prevents guest logical volumes from being activated when the host is booted, a situation that could lead to stale logical volumes and cause data corruption. Use the vdsm-tool config-lvm-filter command to create filters for the LVM.
Red Hat Virtualization currently does not support block storage with a block size of 4K. You must configure block storage in legacy (512b block) mode.
If your host is booting from SAN storage and loses connectivity to the storage, the storage file systems become read-only and remain in this state after connectivity is restored.
To prevent this situation, add a drop-in multipath configuration file on the root file system of the SAN for the boot LUN to ensure that it is queued when there is a connection:
# cat /etc/multipath/conf.d/host.conf
multipaths {
multipath {
wwid boot_LUN_wwid
no_path_retry queue
}
}
2.6.6.6. Adding FCP Storage
This procedure shows you how to attach existing FCP storage to your Red Hat Virtualization environment as a data domain.
Procedure
- Click Storage → Domains.
- Click New Domain.
- Enter the Name of the storage domain.
- Select an FCP Data Center from the drop-down list. If you do not yet have an appropriate FCP data center, select (none).
- Select the Domain Function and the Storage Type from the drop-down lists. Storage domain types that are not compatible with the chosen data center are not available.
Select an active host in the Host field. If this is not the first data domain in a data center, you must select the data center’s SPM host.
Important: All communication to the storage domain is through the selected host and not directly from the Red Hat Virtualization Manager. At least one active host must exist in the system and be attached to the chosen data center. All hosts must have access to the storage device before the storage domain can be configured.
- The New Domain window automatically displays known targets with unused LUNs when Fibre Channel is selected as the storage type. Select the LUN ID check box to select all of the available LUNs.
Optionally, you can configure the advanced parameters.
- Click Advanced Parameters.
- Enter a percentage value into the Warning Low Space Indicator field. If the free space available on the storage domain is below this percentage, warning messages are displayed to the user and logged.
- Enter a GB value into the Critical Space Action Blocker field. If the free space available on the storage domain is below this value, error messages are displayed to the user and logged, and any new action that consumes space, even temporarily, will be blocked.
- Select the Wipe After Delete check box to enable the wipe after delete option. This option can be edited after the domain is created, but doing so will not change the wipe after delete property of disks that already exist.
- Select the Discard After Delete check box to enable the discard after delete option. This option can be edited after the domain is created. This option is only available to block storage domains.
- Click OK.
The new FCP data domain remains in a Locked status while it is being prepared for use. When ready, it is automatically attached to the data center.
2.6.6.7. Increasing iSCSI or FCP Storage
There are several ways to increase iSCSI or FCP storage size:
- Add an existing LUN to the current storage domain.
- Create a new storage domain with new LUNs and add it to an existing data center. See Adding iSCSI Storage.
- Expand the storage domain by resizing the underlying LUNs.
For information about configuring or resizing FCP storage, see Using Fibre Channel Devices in Managing storage devices for Red Hat Enterprise Linux 8.
The following procedure explains how to expand storage area network (SAN) storage by adding a new LUN to an existing storage domain.
Prerequisites
- The storage domain’s status must be UP.
- The LUN must be accessible to all the hosts whose status is UP, or else the operation will fail and the LUN will not be added to the domain. The hosts themselves will not be affected. If a newly added host, or a host that is coming out of maintenance or a Non Operational state, cannot access the LUN, the host’s state will be Non Operational.
Increasing an Existing iSCSI or FCP Storage Domain
- Click Storage → Domains and select an iSCSI or FCP domain.
- Click Manage Domain.
- Click Targets → LUNs and click the Discover Targets expansion button.
- Enter the connection information for the storage server and click Discover to initiate the connection.
- Click LUNs → Targets and select the check box of the newly available LUN.
- Click OK to add the LUN to the selected storage domain.
This will increase the storage domain by the size of the added LUN.
When expanding the storage domain by resizing the underlying LUNs, the LUNs must also be refreshed in the Administration Portal.
Refreshing the LUN Size
- Click Storage → Domains and select an iSCSI or FCP domain.
- Click Manage Domain.
- Click LUNs → Targets.
- In the Additional Size column, click the Add Additional Size button of the LUN to refresh.
- Click OK to refresh the LUN to indicate the new storage size.
2.6.6.8. Reusing LUNs
LUNs cannot be reused, as is, to create a storage domain or virtual disk. If you try to reuse the LUNs, the Administration Portal displays the following error message:
Physical device initialization failed. Please check that the device is empty and accessible by the host.
A self-hosted engine shows the following error during installation:
[ ERROR ] Error creating Volume Group: Failed to initialize physical device: ("[u'/dev/mapper/000000000000000000000000000000000']",)
[ ERROR ] Failed to execute stage 'Misc configuration': Failed to initialize physical device: ("[u'/dev/mapper/000000000000000000000000000000000']",)
Before the LUN can be reused, the old partitioning table must be cleared.
You must run this procedure on the correct LUN so that you do not inadvertently destroy data.
Delete the partition mappings in <LUN_ID>:
# kpartx -dv /dev/mapper/<LUN_ID>
Erase the filesystem or raid signatures in <LUN_ID>:
# wipefs -a /dev/mapper/<LUN_ID>
Inform the operating system about the partition table changes on <LUN_ID>:
# partprobe
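Because running these commands against the wrong LUN destroys data, it can help to wrap them in a helper that prints what it would do before executing anything; a minimal sketch (the clean_lun name and the example LUN ID are invented for illustration):

```shell
# Print (dry run) or execute the three clean-up commands for a LUN.
clean_lun() {
    lun=$1; mode=$2
    for cmd in "kpartx -dv /dev/mapper/$lun" \
               "wipefs -a /dev/mapper/$lun" \
               "partprobe"; do
        if [ "$mode" = "--dry-run" ]; then
            echo "would run: $cmd" | tee -a /tmp/clean_lun.out
        else
            $cmd
        fi
    done
}
: > /tmp/clean_lun.out
# Review the dry-run output, confirm the LUN ID, then rerun without the flag.
clean_lun example_lun_id --dry-run
```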
2.6.6.9. Removing stale LUNs
When a storage domain is removed, stale LUN links can remain on the storage server. This can lead to slow multipath scans, cluttered log files, and LUN ID conflicts.
Red Hat Virtualization does not manage the iSCSI servers and, therefore, cannot automatically remove LUNs when a storage domain is removed. The administrator can manually remove stale LUN links with the remove_stale_lun.yml Ansible role. This role removes stale LUN links from all hosts that belong to the given data center. For more information about this role and its variables, see the Remove Stale LUN role in the oVirt Ansible collection.
It is assumed that you are running remove_stale_lun.yml from the engine machine, as the engine SSH key is already added on all the hosts. If the playbook is not running on the engine machine, a user’s SSH key must be added to all hosts that belong to the data center, or the user must provide an appropriate inventory file.
Procedure
- Click Storage → Domains.
- Click the storage domain’s name. This opens the details view.
- Click the Data Center tab.
- Click Maintenance, then click OK.
- Click Detach, then click OK.
- Click Remove.
- Click OK to remove the storage domain from the source environment.
- Remove the LUN from the storage server.
Remove the stale LUNs from the host using Ansible:
# ansible-playbook --extra-vars "lun=<LUN>" /usr/share/ansible/collections/ansible_collections/ovirt/ovirt/roles/remove_stale_lun/examples/remove_stale_lun.yml
where LUN is the LUN removed from the storage server in the steps above.
Note: If you remove the stale LUN from the host using Ansible without first removing the LUN from the storage server, the stale LUN will reappear on the host the next time VDSM performs an iSCSI rescan.
2.6.6.10. Creating an LVM filter
An LVM filter is a capability that can be set in /etc/lvm/lvm.conf to accept devices into, or reject devices from, the list of volumes based on a regex query. For example, to ignore /dev/cdrom you can use filter=["r|^/dev/cdrom$|"], or add the following parameter to the lvm command: lvs --config 'devices{filter=["r|cdrom|"]}'.
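A filter is evaluated as an ordered list: each entry is a|regex| (accept) or r|regex| (reject), the first entry whose regex matches the device path wins, and a device that matches no entry is accepted by default. The helper below is a rough sketch of that evaluation for illustration only; it is not part of LVM:

```shell
# Evaluate a device path against LVM-style filter entries ("a|..|" accept,
# "r|..|" reject; first match wins, default accept). Illustrative only.
lvm_filter_check() {
    dev="$1"; shift
    for entry in "$@"; do
        action="${entry%%|*}"                    # "a" or "r"
        regex="${entry#?|}"; regex="${regex%|}"  # text between the bars
        if printf '%s\n' "$dev" | grep -Eq "$regex"; then
            if [ "$action" = a ]; then echo accept; else echo reject; fi
            return
        fi
    done
    echo accept    # matched no entry: LVM's default is to accept
}

lvm_filter_check /dev/vda2  'a|^/dev/vda2$|' 'r|.*|'    # accept
lvm_filter_check /dev/sdb   'a|^/dev/vda2$|' 'r|.*|'    # reject
lvm_filter_check /dev/cdrom 'r|^/dev/cdrom$|'           # reject
```

Real lvm.conf filters are matched against every path alias LVM knows for a device; the sketch checks a single path for simplicity.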
This provides a simple way to prevent a host from scanning and activating logical volumes that are not required directly by the host. In particular, the solution addresses logical volumes on shared storage managed by RHV, and logical volumes created by a guest in RHV raw volumes. This solution is needed because scanning and activating other logical volumes may cause data corruption, slow boot, or other issues.
The solution is to configure an LVM filter on each host, which allows the LVM on a host to scan only the logical volumes that are required by the host.
You can use the command vdsm-tool config-lvm-filter
to analyze the current LVM configuration and decide if a filter needs to be configured.
If the LVM filter has not yet been configured, the command generates an LVM filter option for the host, and adds the option to the LVM configuration.
Scenario 1: An Unconfigured Host
On a host yet to be configured, the command automatically configures the LVM filter once the user confirms the operation:
# vdsm-tool config-lvm-filter
Analyzing host... Found these mounted logical volumes on this host:
logical volume: /dev/mapper/vg0-lv_home mountpoint: /home devices: /dev/vda2
logical volume: /dev/mapper/vg0-lv_root mountpoint: / devices: /dev/vda2
logical volume: /dev/mapper/vg0-lv_swap mountpoint: [SWAP] devices: /dev/vda2
This is the recommended LVM filter for this host:
filter = [ "a|^/dev/vda2$|", "r|.*|" ]
This filter will allow LVM to access the local devices used by the hypervisor, but not shared storage owned by VDSM. If you add a new device to the volume group, you will need to edit the filter manually.
Configure LVM filter? [yes,NO] yes
Configuration completed successfully!
Please reboot to verify the LVM configuration.
Scenario 2: A Configured Host
If the host is already configured, the command simply informs the user that the LVM filter is already configured:
# vdsm-tool config-lvm-filter
Analyzing host... LVM filter is already configured for Vdsm
Scenario 3: Manual Configuration Required
If the host configuration does not match the configuration required by VDSM, the LVM filter will need to be configured manually:
# vdsm-tool config-lvm-filter
Analyzing host... Found these mounted logical volumes on this host:
logical volume: /dev/mapper/vg0-lv_home mountpoint: /home devices: /dev/vda2
logical volume: /dev/mapper/vg0-lv_root mountpoint: / devices: /dev/vda2
logical volume: /dev/mapper/vg0-lv_swap mountpoint: [SWAP] devices: /dev/vda2
This is the recommended LVM filter for this host:
filter = [ "a|^/dev/vda2$|", "r|.*|" ]
This filter will allow LVM to access the local devices used by the hypervisor, but not shared storage owned by VDSM. If you add a new device to the volume group, you will need to edit the filter manually.
This is the current LVM filter:
filter = [ "a|^/dev/vda2$|", "a|^/dev/vdb1$|", "r|.*|" ]
WARNING: The current LVM filter does not match the recommended filter, Vdsm cannot configure the filter automatically.
Please edit /etc/lvm/lvm.conf and set the 'filter' option in the 'devices' section to the recommended value.
It is recommended to reboot after changing LVM filter.
2.6.7. Preparing and Adding Red Hat Gluster Storage
2.6.7.1. Preparing Red Hat Gluster Storage
For information on setting up and configuring Red Hat Gluster Storage, see the Red Hat Gluster Storage Installation Guide.
For the Red Hat Gluster Storage versions that are supported with Red Hat Virtualization, see Red Hat Gluster Storage Version Compatibility and Support.
2.6.7.2. Adding Red Hat Gluster Storage
To use Red Hat Gluster Storage with Red Hat Virtualization, see Configuring Red Hat Virtualization with Red Hat Gluster Storage.
For the Red Hat Gluster Storage versions that are supported with Red Hat Virtualization, see Red Hat Gluster Storage Version Compatibility and Support.
2.6.8. Importing Existing Storage Domains
2.6.8.1. Overview of Importing Existing Storage Domains
Aside from adding new storage domains, which contain no data, you can import existing storage domains and access the data they contain. By importing storage domains, you can recover data in the event of a failure in the Manager database, and migrate data from one data center or environment to another.
The following is an overview of importing each storage domain type:
- Data
Importing an existing data storage domain allows you to access all of the virtual machines and templates that the data storage domain contains. After you import the storage domain, you must manually import virtual machines, floating disk images, and templates into the destination data center. The process for importing the virtual machines and templates that a data storage domain contains is similar to that for an export storage domain. However, because data storage domains contain all the virtual machines and templates in a given data center, importing data storage domains is recommended for data recovery or large-scale migration of virtual machines between data centers or environments.
Important: You can import existing data storage domains that were attached to data centers with the correct supported compatibility level. See Supportability and constraints regarding importing Storage Domains and Virtual Machines from older RHV versions for more information.
- ISO
- Importing an existing ISO storage domain allows you to access all of the ISO files and virtual diskettes that the ISO storage domain contains. No additional action is required after importing the storage domain to access these resources; you can attach them to virtual machines as required.
- Export
Importing an existing export storage domain allows you to access all of the virtual machine images and templates that the export storage domain contains. Because export domains are designed for exporting and importing virtual machine images and templates, importing export storage domains is the recommended method of migrating small numbers of virtual machines and templates inside an environment or between environments. For information on exporting and importing virtual machines and templates to and from export storage domains, see Exporting and Importing Virtual Machines and Templates in the Virtual Machine Management Guide.
Note: The export storage domain is deprecated. Storage data domains can be unattached from a data center and imported to another data center in the same environment, or in a different environment. Virtual machines, floating virtual disks, and templates can then be uploaded from the imported storage domain to the attached data center.
Warning: When you attach a storage domain to the destination data center, it may be upgraded to a newer storage domain format and may not re-attach to the source data center. This breaks the use of a data domain as a replacement for export domains.
2.6.8.2. Importing storage domains
Import a storage domain that was previously attached to a data center in the same environment or in a different environment. This procedure assumes the storage domain is no longer attached to any data center in any environment, to avoid data corruption. To import and attach an existing data storage domain to a data center, the target data center must be initialized.
Procedure
- Click Storage → Domains.
- Click Import Domain.
- Select the Data Center you want to import the storage domain to.
- Enter a Name for the storage domain.
- Select the Domain Function and Storage Type from the drop-down lists.
Select a host from the Host drop-down list.
Important: All communication to the storage domain is through the selected host and not directly from the Red Hat Virtualization Manager. At least one active host must exist in the system and be attached to the chosen data center. All hosts must have access to the storage device before the storage domain can be configured.
Enter the details of the storage domain.
Note: The fields for specifying the details of the storage domain change depending on the values you select in the Domain Function and Storage Type lists. These fields are the same as those available for adding a new storage domain.
- Select the Activate Domain in Data Center check box to activate the storage domain after attaching it to the selected data center.
- Click OK.
You can now import virtual machines and templates from the storage domain to the data center.
When you attach a storage domain to the destination data center, it may be upgraded to a newer storage domain format and may not re-attach to the source data center. This breaks the use of a data domain as a replacement for export domains.
2.6.8.3. Migrating Storage Domains between Data Centers in the Same Environment
Migrate a storage domain from one data center to another in the same Red Hat Virtualization environment to allow the destination data center to access the data contained in the storage domain. This procedure involves detaching the storage domain from one data center, and attaching it to a different data center.
Migrating a data storage domain to a data center that has a higher compatibility level than the original data center upgrades the storage domain’s storage format version.
If you want to move the storage domain back to the original data center for any reason, such as to migrate virtual machines to the new data center, be aware that the higher version prevents reattaching the data storage domain to the original data center.
The Administration Portal prompts you to confirm that you want to update the storage domain format, for example, from V3 to V5. It also warns that you will not be able to attach it back to an older data center with a lower DC level.
To work around this issue, you can create a target data center that has the same compatibility version as the source data center. When you no longer need to maintain the lower compatibility version, you can increase the target data center’s compatibility version.
For details, see Supportability and constraints regarding importing Storage Domains and Virtual Machines from older RHV versions.
Procedure
- Shut down all virtual machines running on the required storage domain.
- Click Storage → Domains.
- Click the storage domain’s name. This opens the details view.
- Click the Data Center tab.
- Click Maintenance, then click OK.
- Click Detach, then click OK.
- Click Attach.
- Select the destination data center and click OK.
The storage domain is attached to the destination data center and is automatically activated. You can now import virtual machines and templates from the storage domain to the destination data center.
2.6.8.4. Migrating Storage Domains between Data Centers in Different Environments
Migrate a storage domain from one Red Hat Virtualization environment to another to allow the destination environment to access the data contained in the storage domain. This procedure involves removing the storage domain from one Red Hat Virtualization environment, and importing it into a different environment. To import and attach an existing data storage domain to a Red Hat Virtualization data center, the storage domain’s source data center must have the correct supported compatibility level.
Migrating a data storage domain to a data center that has a higher compatibility level than the original data center upgrades the storage domain’s storage format version.
If you want to move the storage domain back to the original data center for any reason, such as to migrate virtual machines to the new data center, be aware that the higher version prevents reattaching the data storage domain to the original data center.
The Administration Portal prompts you to confirm that you want to update the storage domain format, for example, from V3 to V5. It also warns that you will not be able to attach it back to an older data center with a lower DC level.
To work around this issue, you can create a target data center that has the same compatibility version as the source data center. When you no longer need to maintain the lower compatibility version, you can increase the target data center’s compatibility version.
For details, see Supportability and constraints regarding importing Storage Domains and Virtual Machines from older RHV versions.
Procedure
- Log in to the Administration Portal of the source environment.
- Shut down all virtual machines running on the required storage domain.
- Click Storage → Domains.
- Click the storage domain’s name. This opens the details view.
- Click the Data Center tab.
- Click Maintenance, then click OK.
- Click Detach, then click OK.
- Click Remove.
- In the Remove Storage(s) window, ensure the Format Domain, i.e. Storage Content will be lost! check box is not selected. This step preserves the data in the storage domain for later use.
- Click OK to remove the storage domain from the source environment.
- Log in to the Administration Portal of the destination environment.
- Click Storage → Domains.
- Click Import Domain.
- Select the destination data center from the Data Center drop-down list.
- Enter a name for the storage domain.
- Select the Domain Function and Storage Type from the appropriate drop-down lists.
- Select a host from the Host drop-down list.
Enter the details of the storage domain.
Note: The fields for specifying the details of the storage domain change depending on the value you select in the Storage Type drop-down list. These fields are the same as those available for adding a new storage domain.
- Select the Activate Domain in Data Center check box to automatically activate the storage domain when it is attached.
- Click .
The storage domain is attached to the destination data center in the new Red Hat Virtualization environment and is automatically activated. You can now import virtual machines and templates from the imported storage domain to the destination data center.
When you attach a storage domain to the destination data center, it may be upgraded to a newer storage domain format and may not re-attach to the source data center. This breaks the use of a data domain as a replacement for export domains.
2.6.8.5. Importing Templates from Imported Data Storage Domains
Import a template from a data storage domain you have imported into your Red Hat Virtualization environment. This procedure assumes that the imported data storage domain has been attached to a data center and has been activated.
Procedure
- Click Storage → Domains.
- Click the imported storage domain’s name. This opens the details view.
- Click the Template Import tab.
- Select one or more templates to import.
- Click Import.
- For each template in the Import Template(s) window, ensure the correct target cluster is selected in the Cluster list.
Map external virtual machine vNIC profiles to profiles that are present on the target cluster(s):
- Click vNic Profiles Mapping.
- Select the vNIC profile to use from the Target vNic Profile drop-down list.
- If multiple target clusters are selected in the Import Templates window, select each target cluster in the Target Cluster drop-down list and ensure the mappings are correct.
- Click OK.
- Click OK.
The imported templates no longer appear in the list under the Template Import tab.
2.6.9. Storage Tasks
2.6.9.1. Uploading Images to a Data Storage Domain
You can upload virtual disk images and ISO images to your data storage domain in the Administration Portal or with the REST API.
To upload images with the REST API, see IMAGETRANSFERS and IMAGETRANSFER in the REST API Guide.
QEMU-compatible virtual disks can be attached to virtual machines. Virtual disk types must be either QCOW2 or raw. Disks created from a QCOW2 virtual disk cannot be shareable, and the QCOW2 virtual disk file must not have a backing file.
ISO images can be attached to virtual machines as CDROMs or used to boot virtual machines.
Prerequisites
The upload function uses HTML 5 APIs, which require your environment to have the following:
- A certificate authority, imported into the web browser used to access the Administration Portal.
To import the certificate authority, browse to https://engine_address/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA and enable all the trust settings. Refer to the instructions to install the certificate authority in Firefox, Internet Explorer, or Google Chrome.
- A browser that supports HTML 5, such as Firefox 35, Internet Explorer 10, Chrome 13, or later.
Procedure
- Click Storage → Disks.
- Select Start from the Upload menu.
- Click Choose File and select the image to upload.
- Fill in the Disk Options fields. See Explanation of Settings in the New Virtual Disk Window for descriptions of the relevant fields.
Click OK.
A progress bar indicates the status of the upload. You can pause, cancel, or resume uploads from the Upload menu.
If the upload times out with the message Reason: timeout due to transfer inactivity, increase the timeout value and restart the ovirt-engine
service:
# engine-config -s TransferImageClientInactivityTimeoutInSeconds=6000
# systemctl restart ovirt-engine
2.6.9.2. Uploading the VirtIO image files to a storage domain
The virtio-win_version.iso
image contains the following for Windows virtual machines to improve performance and usability:
- VirtIO drivers
- an installer for the guest agents
- an installer for the drivers
To install and upload the most recent version of virtio-win_version.iso:
Install the image files on the Manager machine:
# dnf -y install virtio-win
After you install it on the Manager machine, the image file is /usr/share/virtio-win/virtio-win_version.iso.
- Upload the image file to a data storage domain that was not created locally during installation. For more information, see Uploading Images to a Data Storage Domain in the Administration Guide.
- Attach the image file to virtual machines.
The virtual machines can now use the virtio drivers and agents.
For information on attaching the image files to a virtual machine, see Installing the Guest Agents, Tools, and Drivers on Windows in the Virtual Machine Management Guide.
2.6.9.3. Uploading images to an ISO domain
The ISO domain is a deprecated storage domain type. The ISO Uploader tool, ovirt-iso-uploader
, is removed in Red Hat Virtualization 4.4. You should upload ISO images to the data domain with the Administration Portal or with the REST API. See Uploading Images to a Data Storage Domain for details.
Although the ISO domain is deprecated, this information is provided in case you must use an ISO domain.
To upload an ISO image to an ISO storage domain in order to make it available from within the Manager, follow these steps.
Procedure
- Log in as root to the host that belongs to the Data Center where your ISO storage domain resides.
Get a directory tree of /rhev/data-center:
# tree /rhev/data-center
.
|-- 80dfacc7-52dd-4d75-ab82-4f9b8423dc8b
|   |-- 76d1ecba-b61d-45a4-8eb5-89ab710a6275 -> /rhev/data-center/mnt/10.10.10.10:_rhevnfssd/76d1ecba-b61d-45a4-8eb5-89ab710a6275
|   |-- b835cd1c-111c-468d-ba70-fec5346af227 -> /rhev/data-center/mnt/10.10.10.10:_rhevisosd/b835cd1c-111c-468d-ba70-fec5346af227
|   |-- mastersd -> 76d1ecba-b61d-45a4-8eb5-89ab710a6275
|   |-- tasks -> mastersd/master/tasks
|   `-- vms -> mastersd/master/vms
|-- hsm-tasks
`-- mnt
    |-- 10.10.10.10:_rhevisosd
    |   |-- b835cd1c-111c-468d-ba70-fec5346af227
    |   |   |-- dom_md
    |   |   |   |-- ids
    |   |   |   |-- inbox
    |   |   |   |-- leases
    |   |   |   |-- metadata
    |   |   |   `-- outbox
    |   |   `-- images
    |   |       `-- 11111111-1111-1111-1111-111111111111
    |   `-- lost+found [error opening dir]
(output trimmed)
Securely copy the image from the source location into the full path of 11111111-1111-1111-1111-111111111111:
# scp root@isosource:/isos/example.iso /rhev/data-center/mnt/10.96.4.50:_rhevisosd/b835cd1c-111c-468d-ba70-fec5346af227/images/11111111-1111-1111-1111-111111111111
File permissions for the newly copied ISO image should be 36:36 (vdsm:kvm). If they are not, change user and group ownership of the ISO file to 36:36 (vdsm’s user and group):
# cd /rhev/data-center/mnt/10.96.4.50:_rhevisosd/b835cd1c-111c-468d-ba70-fec5346af227/images/11111111-1111-1111-1111-111111111111
# chown 36:36 example.iso
The ISO image should now be available in the ISO domain in the data center.
2.6.9.4. Moving Storage Domains to Maintenance Mode
A storage domain must be in maintenance mode before it can be detached and removed. This is required to redesignate another data domain as the master
data domain.
You cannot move a storage domain into maintenance mode if a virtual machine has a lease on the storage domain. The virtual machine needs to be shut down, or the lease needs to be removed or moved to a different storage domain first. See the Virtual Machine Management Guide for information about virtual machine leases.
Expanding iSCSI domains by adding more LUNs can only be done when the domain is active.
Procedure
- Shut down all the virtual machines running on the storage domain.
- Click Storage → Domains.
- Click the storage domain’s name. This opens the details view.
- Click the Data Center tab.
Click Maintenance.
Note: The Ignore OVF update failure check box allows the storage domain to go into maintenance mode even if the OVF update fails.
- Click OK.
The storage domain is deactivated and has an Inactive
status in the results list. You can now edit, detach, remove, or reactivate the inactive storage domains from the data center.
You can also activate, detach, and place domains into maintenance mode using the Storage tab in the details view of the data center it is associated with.
2.6.9.5. Editing Storage Domains
You can edit storage domain parameters through the Administration Portal. Depending on the state of the storage domain, either active or inactive, different fields are available for editing. Fields such as Data Center, Domain Function, Storage Type, and Format cannot be changed.
- Active: When the storage domain is in an active state, the Name, Description, Comment, Warning Low Space Indicator (%), Critical Space Action Blocker (GB), Wipe After Delete, and Discard After Delete fields can be edited. The Name field can only be edited while the storage domain is active. All other fields can also be edited while the storage domain is inactive.
- Inactive: When the storage domain is in maintenance mode or unattached, thus in an inactive state, you can edit all fields except Name, Data Center, Domain Function, Storage Type, and Format. The storage domain must be inactive to edit storage connections, mount options, and other advanced parameters. This is only supported for NFS, POSIX, and Local storage types.
iSCSI storage connections cannot be edited via the Administration Portal, but can be edited via the REST API. See Updating Storage Connections in the REST API Guide.
Editing an Active Storage Domain
- Click Storage → Domains and select a storage domain.
- Click Manage Domain.
- Edit the available fields as required.
- Click OK.
Editing an Inactive Storage Domain
- Click Storage → Domains.
If the storage domain is active, move it to maintenance mode:
- Click the storage domain’s name. This opens the details view.
- Click the Data Center tab.
- Click Maintenance.
- Click OK.
- Click Manage Domain.
- Edit the storage path and other details as required. The new connection details must be of the same storage type as the original connection.
- Click OK.
Activate the storage domain:
- Click the storage domain’s name. This opens the details view.
- Click the Data Center tab.
- Click Activate.
2.6.9.6. Updating OVFs
By default, OVFs are updated every 60 minutes. However, if you have imported an important virtual machine or made a critical update, you can update OVFs manually.
Procedure
- Click Storage → Domains.
- Select the storage domain and click More Actions, then click Update OVFs.
The OVFs are updated and a message appears in Events.
2.6.9.7. Activating Storage Domains from Maintenance Mode
If you have been making changes to a data center’s storage, you have to put storage domains into maintenance mode. Activate a storage domain to resume using it.
- Click Storage → Domains.
- Click an inactive storage domain’s name. This opens the details view.
- Click the Data Centers tab.
- Click Activate.
If you attempt to activate the ISO domain before activating the data domain, an error message displays and the domain is not activated.
2.6.9.8. Detaching a Storage Domain from a Data Center
Detach a storage domain from one data center to migrate it to another data center.
Procedure
- Click Storage → Domains.
- Click the storage domain’s name. This opens the details view.
- Click the Data Center tab.
- Click Maintenance.
- Click OK to initiate maintenance mode.
- Click Detach.
- Click OK to detach the storage domain.
The storage domain has been detached from the data center, ready to be attached to another data center.
2.6.9.9. Attaching a Storage Domain to a Data Center
Attach a storage domain to a data center.
Procedure
- Click Storage → Domains.
- Click the storage domain’s name. This opens the details view.
- Click the Data Center tab.
- Click Attach.
- Select the appropriate data center.
- Click OK.
The storage domain is attached to the data center and is automatically activated.
2.6.9.10. Removing a Storage Domain
You have a storage domain in your data center that you want to remove from the virtualized environment.
Procedure
- Click Storage → Domains.
Move the storage domain to maintenance mode and detach it:
- Click the storage domain’s name. This opens the details view.
- Click the Data Center tab.
- Click Maintenance, then click OK.
- Click Detach, then click OK.
- Click Remove.
- Optionally select the Format Domain, i.e. Storage Content will be lost! check box to erase the content of the domain.
- Click OK.
The storage domain is permanently removed from the environment.
2.6.9.11. Destroying a Storage Domain
A storage domain encountering errors may not be able to be removed through the normal procedure. Destroying a storage domain forcibly removes the storage domain from the virtualized environment.
Procedure
- Click Storage → Domains.
- Select the storage domain and click More Actions, then click Destroy.
- Select the Approve operation check box.
- Click OK.
2.6.9.12. Creating a Disk Profile
Disk profiles define the maximum level of throughput and the maximum level of input and output operations for a virtual disk in a storage domain. Disk profiles are created based on storage profiles defined under data centers, and must be manually assigned to individual virtual disks for the profile to take effect.
This procedure assumes you have already defined one or more storage quality of service entries under the data center to which the storage domain belongs.
Procedure
- Click Storage → Domains.
- Click the data storage domain’s name. This opens the details view.
- Click the Disk Profiles tab.
- Click New.
- Enter a Name and a Description for the disk profile.
- Select the quality of service to apply to the disk profile from the QoS list.
- Click OK.
2.6.9.13. Removing a Disk Profile
Remove an existing disk profile from your Red Hat Virtualization environment.
Procedure
- Click Storage → Domains.
- Click the data storage domain’s name. This opens the details view.
- Click the Disk Profiles tab.
- Select the disk profile to remove.
- Click Remove.
- Click OK.
If the disk profile was assigned to any virtual disks, the disk profile is removed from those virtual disks.
2.6.9.14. Viewing the Health Status of a Storage Domain
Storage domains have an external health status in addition to their regular Status. The external health status is reported by plug-ins or external systems, or set by an administrator, and appears to the left of the storage domain’s Name as one of the following icons:
- OK: No icon
- Info:
- Warning:
- Error:
- Failure:
To view further details about the storage domain’s health status, click the storage domain’s name to open the details view, and then click the Events tab.
The storage domain’s health status can also be viewed using the REST API. A GET
request on a storage domain will include the external_status
element, which contains the health status.
You can set a storage domain’s health status in the REST API via the events
collection. For more information, see Adding Events in the REST API Guide.
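As an illustration, the external_status value can be extracted from such a GET response with standard tools. The payload below is a hand-written sample shaped like the API’s storage_domain resource, not real Manager output:

```shell
# Extract external_status from a (sample) storage domain GET response.
response='<storage_domain id="123"><name>data1</name><external_status>ok</external_status></storage_domain>'
status=$(printf '%s' "$response" | sed -n 's/.*<external_status>\(.*\)<\/external_status>.*/\1/p')
echo "$status"    # ok
```

In practice you would fetch the response from the Manager’s /ovirt-engine/api/storagedomains/<id> endpoint with an authenticated client and parse it with a proper XML parser rather than sed.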
2.6.9.15. Setting Discard After Delete for a Storage Domain
When the Discard After Delete check box is selected, a blkdiscard
command is called on a logical volume when it is removed and the underlying storage is notified that the blocks are free. The storage array can use the freed space and allocate it when requested. Discard After Delete only works on block storage. The flag is not available on the Red Hat Virtualization Manager for file storage, for example NFS.
Restrictions:
- Discard After Delete is only available on block storage domains, such as iSCSI or Fibre Channel.
-
The underlying storage must support
Discard
.
Discard After Delete can be enabled either when creating a block storage domain or when editing one. See Preparing and Adding Block Storage and Editing Storage Domains.
2.6.9.16. Enabling 4K support on environments with more than 250 hosts
By default, GlusterFS domains and local storage domains support 4K block size on Red Hat Virtualization environments with up to 250 hosts. 4K block size can provide better performance, especially when using large files, and it is also necessary when you use tools that require 4K compatibility, such as VDO.
GlusterFS Storage is deprecated, and will no longer be supported in future releases.
The lockspace area that Sanlock allocates is 1 MB when the maximum number of hosts is the default 250. When you increase the maximum number of hosts when using 4K storage, the lockspace area is larger. For example, when using 2000 hosts, the lockspace area could be as large as 8 MB.
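Those figures are consistent with a simple model of one block-sized lease slot per supported host ID. The sketch below reproduces the arithmetic; the per-host-slot layout is an assumption for illustration, not a statement of sanlock’s exact on-disk format:

```shell
# Rough lockspace sizing: one block-sized slot per supported host ID
# (an assumed model that reproduces the 1 MB and 8 MB figures above).
lockspace_bytes() {
    hosts="$1"; block_size="$2"
    echo $(( hosts * block_size ))
}
lockspace_bytes 250 4096     # 1024000  (~1 MB)
lockspace_bytes 2000 4096    # 8192000  (~8 MB)
```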
You can enable 4K block support on environments with more than 250 hosts by setting the engine configuration parameter MaxNumberOfHostsInStoragePool
.
Procedure
On the Manager machine enable the required maximum number of hosts:
# engine-config -s MaxNumberOfHostsInStoragePool=NUMBER_OF_HOSTS
Restart the ovirt-engine service:
# systemctl restart ovirt-engine
For example, if you have a cluster with 300 hosts, enter:
# engine-config -s MaxNumberOfHostsInStoragePool=300
# systemctl restart ovirt-engine
Verification
View the value of the MaxNumberOfHostsInStoragePool
parameter on the Manager:
# engine-config --get=MaxNumberOfHostsInStoragePool
MaxNumberOfHostsInStoragePool: 250 version: general
2.6.9.17. Disabling 4K support
By default, GlusterFS domains and local storage domains support 4K block size. 4K block size can provide better performance, especially when using large files, and it is also necessary when you use tools that require 4K compatibility, such as VDO.
GlusterFS Storage is deprecated, and will no longer be supported in future releases.
You can disable 4K block support.
Procedure
Ensure that 4K block support is enabled.
$ vdsm-client Host getCapabilities
…
{
    "GLUSTERFS" : [
        0,
        512,
        4096,
    ]
…
Edit /etc/vdsm/vdsm.conf.d/gluster.conf and set enable_4k_storage to false. For example:
$ vi /etc/vdsm/vdsm.conf.d/gluster.conf
[gluster]
# Use to disable 4k support
# if needed.
enable_4k_storage = false
2.6.9.18. Monitoring available space in a storage domain
You can monitor available space in a storage domain and create an alert to warn you when a storage domain is nearing capacity. You can also define a critical threshold at which point the domain shuts down.
With Virtual Data Optimizer (VDO) and thin pool support, you might see more available space than is physically available. For VDO this behavior is expected, but the Manager cannot predict how much data you can actually write. The Warning Low Confirmed Space Indicator parameter notifies you when the domain is nearing physical space capacity and shows how much confirmed space remains. Confirmed space refers to the actual space available to write data.
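As a toy illustration of the gap between the two figures (all numbers assumed): a VDO-backed domain that optimistically assumes a 10:1 savings ratio advertises ten times more free space than is physically guaranteed, and only the physical figure is confirmed:

```shell
# Assumed figures for illustration: 1024 GiB physical free space and an
# optimistic 10:1 VDO savings ratio. Advertised (logical) free space is
# then 10x the confirmed (physically guaranteed) free space.
physical_free_gib=1024
savings_ratio=10
logical_free_gib=$(( physical_free_gib * savings_ratio ))
echo "advertised free: ${logical_free_gib} GiB"    # 10240 GiB
echo "confirmed free:  ${physical_free_gib} GiB"   # 1024 GiB
```

This is why the Warning Low Confirmed Space Indicator tracks the physical figure rather than the advertised one.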
Procedure
- In the Administration Portal, click Storage → Domains and click the name of a storage domain.
- Click Manage Domain. The Manage Domains dialog box opens.
- Expand Advanced Parameters.
- For Warning Low Space Indicator (%), enter a percentage value. When the available space in the storage domain reaches this value, the Manager alerts you that the domain is nearing capacity.
- For Critical Space Action Blocker (GB), enter a value in gigabytes. When the available space in the storage domain reaches this value, the domain shuts down.
- For Warning Low Confirmed Space Indicator (%), enter a percentage value. When the available space in the storage domain reaches this value, the Manager alerts you that the actual space available to write data is nearing capacity.