Administration Guide
Administration tasks in Red Hat Virtualization
Abstract
Chapter 1. Administering and Maintaining the Red Hat Virtualization Environment
The Red Hat Virtualization environment requires an administrator to keep it running. As an administrator, your tasks include:
- Managing physical and virtual resources such as hosts and virtual machines. This includes upgrading and adding hosts, importing domains, converting virtual machines created on foreign hypervisors, and managing virtual machine pools.
- Monitoring the overall system resources for potential problems such as extreme load on one of the hosts, insufficient memory or disk space, and taking any necessary actions (such as migrating virtual machines to other hosts to lessen the load or freeing resources by shutting down machines).
- Responding to the new requirements of virtual machines (for example, upgrading the operating system or allocating more memory).
- Managing customized object properties using tags.
- Managing searches saved as public bookmarks.
- Managing user setup and setting permission levels.
- Troubleshooting issues for specific users or virtual machines, or for overall system functionality.
- Generating general and specific reports.
1.1. Global Configuration
The Configure window, accessed by clicking Administration → Configure, allows you to configure a number of global resources for your Red Hat Virtualization environment, such as users, roles, system permissions, scheduling policies, instance types, and MAC address pools. This window allows you to customize the way in which users interact with resources in the environment, and provides a central location for configuring options that can be applied to multiple clusters.
1.1.1. Roles
Roles are predefined sets of privileges that can be configured from Red Hat Virtualization Manager. Roles provide access and management permissions to different levels of resources in the data center, and to specific physical and virtual resources.
With multilevel administration, any permissions which apply to a container object also apply to all individual objects within that container. For example, when a host administrator role is assigned to a user on a specific host, the user gains permissions to perform any of the available host operations, but only on the assigned host. However, if the host administrator role is assigned to a user on a data center, the user gains permissions to perform host operations on all hosts within the cluster of the data center.
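The inheritance rule described above can be sketched as a small model: a role assigned on a container object is effective on every object nested beneath it, while a role assigned directly on one object stays local. This is an illustrative sketch only; the class and names below are hypothetical, not Red Hat Virtualization API objects.

```python
# Hypothetical model of multilevel permission inheritance.
class RhvObject:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.role_assignments = {}  # user -> set of role names

    def assign(self, user, role):
        self.role_assignments.setdefault(user, set()).add(role)

    def effective_roles(self, user):
        """Roles granted here plus roles inherited from every ancestor."""
        roles = set(self.role_assignments.get(user, set()))
        if self.parent is not None:
            roles |= self.parent.effective_roles(user)
        return roles

dc = RhvObject("Default")                   # data center
cluster = RhvObject("Accounts", parent=dc)  # cluster in the data center
host1 = RhvObject("host1", parent=cluster)
host2 = RhvObject("host2", parent=cluster)

# HostAdmin assigned directly on one host applies to that host only.
host1.assign("sarah", "HostAdmin")
# HostAdmin assigned on the data center applies to every host beneath it.
dc.assign("admin", "HostAdmin")

assert host1.effective_roles("sarah") == {"HostAdmin"}
assert host2.effective_roles("sarah") == set()
assert host2.effective_roles("admin") == {"HostAdmin"}
```

The final assertion mirrors the example in the text: a role granted at the data center level reaches hosts the grantor never named explicitly.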
1.1.1.1. Creating a New Role
If the role you require is not on Red Hat Virtualization’s default list of roles, you can create a new role and customize it to suit your purposes.
Procedure
- Click Administration → Configure. This opens the Configure window. The Roles tab is selected by default, showing a list of default User and Administrator roles, and any custom roles.
- Click New.
- Enter the Name and Description of the new role.
- Select either Admin or User as the Account Type.
- Use the Expand All or Collapse All buttons to view more or fewer of the permissions for the listed objects in the Check Boxes to Allow Action list. You can also expand or collapse the options for each object.
- For each of the objects, select or clear the actions you want to permit or deny for the role you are setting up.
- Click OK to apply the changes. The new role displays on the list of roles.
1.1.1.2. Editing or Copying a Role
You can change the settings for roles you have created, but you cannot change default roles. To change default roles, clone and modify them to suit your requirements.
Procedure
- Click Administration → Configure. This opens the Configure window, which shows a list of default User and Administrator roles, as well as any custom roles.
- Select the role you wish to change.
- Click Edit or Copy. This opens the Edit Role or Copy Role window.
- If necessary, edit the Name and Description of the role.
- Use the Expand All or Collapse All buttons to view more or fewer of the permissions for the listed objects. You can also expand or collapse the options for each object.
- For each of the objects, select or clear the actions you wish to permit or deny for the role you are editing.
- Click OK to apply the changes you have made.
1.1.1.3. User Role and Authorization Examples
The following examples illustrate how to apply authorization controls for various scenarios, using the different features of the authorization system described in this chapter.
Example 1.1. Cluster Permissions
Sarah is the system administrator for the accounts department of a company. All the virtual resources for her department are organized under a Red Hat Virtualization cluster called Accounts. She is assigned the ClusterAdmin role on the Accounts cluster. This enables her to manage all virtual machines in the cluster, since the virtual machines are child objects of the cluster. Managing the virtual machines includes editing, adding, or removing virtual resources such as disks, and taking snapshots. It does not allow her to manage any resources outside this cluster. Because ClusterAdmin is an administrator role, it allows her to use the Administration Portal or the VM Portal to manage these resources.
Example 1.2. VM PowerUser Permissions
John is a software developer in the accounts department. He uses virtual machines to build and test his software. Sarah has created a virtual desktop called johndesktop for him. John is assigned the UserVmManager role on the johndesktop virtual machine. This allows him to access this single virtual machine using the VM Portal. Because he has UserVmManager permissions, he can modify the virtual machine. Because UserVmManager is a user role, it does not allow him to use the Administration Portal.
Example 1.3. Data Center Power User Role Permissions
Penelope is an office manager. In addition to her own responsibilities, she occasionally helps the HR manager with recruitment tasks, such as scheduling interviews and following up on reference checks. As per corporate policy, Penelope needs to use a particular application for recruitment tasks.
While Penelope has her own machine for office management tasks, she wants to create a separate virtual machine to run the recruitment application. She is assigned PowerUserRole permissions for the data center in which her new virtual machine will reside. This is because to create a new virtual machine, she needs to make changes to several components within the data center, including creating the virtual disk in the storage domain.
Note that this is not the same as assigning DataCenterAdmin privileges to Penelope. As a PowerUser for a data center, Penelope can log in to the VM Portal and perform virtual machine-specific actions on virtual machines within the data center. She cannot perform data center-level operations such as attaching hosts or storage to a data center.
Example 1.4. Network Administrator Permissions
Chris works as the network administrator in the IT department. Her day-to-day responsibilities include creating, manipulating, and removing networks in the department’s Red Hat Virtualization environment. For her role, she requires administrative privileges on the resources and on the networks of each resource. For example, if Chris has NetworkAdmin privileges on the IT department’s data center, she can add and remove networks in the data center, and attach and detach networks for all virtual machines belonging to the data center.
Example 1.5. Custom Role Permissions
Rachel works in the IT department, and is responsible for managing user accounts in Red Hat Virtualization. She needs permission to add user accounts and assign them the appropriate roles and permissions. She does not use any virtual machines herself, and should not have access to administration of hosts, virtual machines, clusters or data centers. There is no built-in role which provides her with this specific set of permissions. A custom role must be created to define the set of permissions appropriate to Rachel’s position.
Figure 1.1. UserManager Custom Role
The UserManager custom role shown above allows manipulation of users, permissions and roles. These actions are organized under System - the top level object of the hierarchy shown in Object Hierarchy. This means they apply to all other objects in the system. The role is set to have an Account Type of Admin. This means that when she is assigned this role, Rachel can use both the Administration Portal and the VM Portal.
1.1.2. System Permissions
Permissions enable users to perform actions on objects, where objects are either individual objects or container objects. Any permissions that apply to a container object also apply to all members of that container.
Figure 1.2. Permissions & Roles
Figure 1.3. Red Hat Virtualization Object Hierarchy
1.1.2.1. User Properties
Roles and permissions are the properties of the user. Roles are predefined sets of privileges that permit access to different levels of physical and virtual resources. Multilevel administration provides a finely grained hierarchy of permissions. For example, a data center administrator has permissions to manage all objects in the data center, while a host administrator has system administrator permissions to a single physical host. A user can have permissions to use a single virtual machine but not make any changes to the virtual machine configurations, while another user can be assigned system permissions to a virtual machine.
1.1.2.2. User and Administrator Roles
Red Hat Virtualization provides a range of pre-configured roles, from an administrator with system-wide permissions to an end user with access to a single virtual machine. While you cannot change or remove the default roles, you can clone and customize them, or create new roles according to your requirements. There are two types of roles:
- Administrator Role: Allows access to the Administration Portal for managing physical and virtual resources. An administrator role confers permissions for actions to be performed in the VM Portal; however, it has no bearing on what a user can see in the VM Portal.
- User Role: Allows access to the VM Portal for managing and accessing virtual machines and templates. A user role determines what a user can see in the VM Portal. Permissions granted to a user with an administrator role are reflected in the actions available to that user in the VM Portal.
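The distinction between the two role types can be reduced to which portals each Account Type may log in to. The mapping below is a minimal sketch of that rule as described above; the function and dictionary names are illustrative, not part of any RHV API.

```python
# Hypothetical mapping: Account Type -> portals the role holder can log in to.
PORTALS_BY_ACCOUNT_TYPE = {
    "Admin": {"Administration Portal", "VM Portal"},
    "User": {"VM Portal"},
}

def portals_for(account_type):
    """Return the set of portals available to a role of the given type."""
    return PORTALS_BY_ACCOUNT_TYPE[account_type]

# A user role such as UserVmManager grants VM Portal access only.
assert portals_for("User") == {"VM Portal"}
# An administrator role such as ClusterAdmin also grants the Administration Portal.
assert "Administration Portal" in portals_for("Admin")
```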
1.1.2.3. User Roles Explained
The table below describes basic user roles which confer permissions to access and configure virtual machines in the VM Portal.
Role | Privileges | Notes |
---|---|---|
UserRole | Can access and use virtual machines and pools. | Can log in to the VM Portal, use assigned virtual machines and pools, view virtual machine state and details. |
PowerUserRole | Can create and manage virtual machines and templates. | Apply this role to a user for the whole environment with the Configure window, or for specific data centers or clusters. For example, if a PowerUserRole is applied on a data center level, the PowerUser can create virtual machines and templates in the data center. |
UserVmManager | System administrator of a virtual machine. | Can manage virtual machines and create and use snapshots. A user who creates a virtual machine in the VM Portal is automatically assigned the UserVmManager role on the machine. |
The table below describes advanced user roles which allow you to do more fine tuning of permissions for resources in the VM Portal.
Role | Privileges | Notes |
---|---|---|
UserTemplateBasedVm | Limited privileges to only use Templates. | Can use templates to create virtual machines. |
DiskOperator | Virtual disk user. | Can use, view and edit virtual disks. Inherits permissions to use the virtual machine to which the virtual disk is attached. |
VmCreator | Can create virtual machines in the VM Portal. | This role is not applied to a specific virtual machine; apply this role to a user for the whole environment with the Configure window. Alternatively apply this role for specific data centers or clusters. When applying this role to a cluster, you must also apply the DiskCreator role on an entire data center, or on specific storage domains. |
TemplateCreator | Can create, edit, manage and remove virtual machine templates within assigned resources. | This role is not applied to a specific template; apply this role to a user for the whole environment with the Configure window. Alternatively apply this role for specific data centers, clusters, or storage domains. |
DiskCreator | Can create, edit, manage and remove virtual disks within assigned clusters or data centers. | This role is not applied to a specific virtual disk; apply this role to a user for the whole environment with the Configure window. Alternatively apply this role for specific data centers or storage domains. |
TemplateOwner | Can edit and delete the template, assign and manage user permissions for the template. | This role is automatically assigned to the user who creates a template. Other users who do not have TemplateOwner permissions on a template cannot view or use the template. |
VnicProfileUser | Logical network and network interface user for virtual machine and template. | Can attach or detach network interfaces from specific logical networks. |
1.1.2.4. Administrator Roles Explained
The table below describes basic administrator roles which confer permissions to access and configure resources in the Administration Portal.
Role | Privileges | Notes |
---|---|---|
SuperUser | System Administrator of the Red Hat Virtualization environment. | Has full permissions across all objects and levels, can manage all objects across all data centers. |
ClusterAdmin | Cluster Administrator. | Possesses administrative permissions for all objects underneath a specific cluster. |
DataCenterAdmin | Data Center Administrator. | Possesses administrative permissions for all objects underneath a specific data center except for storage. |
Do not use the administrative user for the directory server as the Red Hat Virtualization administrative user. Create a user in the directory server specifically for use as the Red Hat Virtualization administrative user.
The table below describes advanced administrator roles which allow you to do more fine tuning of permissions for resources in the Administration Portal.
Role | Privileges | Notes |
---|---|---|
TemplateAdmin | Administrator of a virtual machine template. | Can create, delete, and configure the storage domains and network details of templates, and move templates between domains. |
StorageAdmin | Storage Administrator. | Can create, delete, configure, and manage an assigned storage domain. |
HostAdmin | Host Administrator. | Can attach, remove, configure, and manage a specific host. |
NetworkAdmin | Network Administrator. | Can configure and manage the network of a particular data center or cluster. A network administrator of a data center or cluster inherits network permissions for virtual pools within the cluster. |
VmPoolAdmin | System Administrator of a virtual pool. | Can create, delete, and configure a virtual pool; assign and remove virtual pool users; and perform basic operations on a virtual machine in the pool. |
GlusterAdmin | Gluster Storage Administrator. | Can create, delete, configure, and manage Gluster storage volumes. |
VmImporterExporter | Import and export Administrator of a virtual machine. | Can import and export virtual machines. Able to view all virtual machines and templates exported by other users. |
1.1.2.5. Assigning an Administrator or User Role to a Resource
Assign administrator or user roles to resources to allow users to access or manage that resource.
Procedure
- Find and click the resource’s name. This opens the details view.
- Click the Permissions tab to list the assigned users, each user’s role, and the inherited permissions for the selected resource.
- Click Add.
- Enter the name or user name of an existing user into the Search text box and click Go. Select a user from the resulting list of possible matches.
- Select a role from the Role to Assign drop-down list.
- Click OK.
The user now has the inherited permissions of that role enabled for that resource.
Avoid assigning global permissions to regular users on resources such as clusters, because permissions are automatically inherited by resources lower in the system hierarchy. Set UserRole and all other user role permissions on specific resources such as virtual machines and virtual machine pools, especially the latter.
Assigning global permissions can cause two problems due to the inheritance of permissions:
- A regular user can automatically be granted permission to control virtual machine pools, even if the administrator assigning permissions did not intend for this to happen.
- The virtual machine portal might behave unexpectedly with pools.
Therefore, it is strongly recommended to set UserRole and all other user role permissions on specific resources only, especially virtual machine pool resources, and not on resources from which other resources inherit permissions.
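The leak described above follows directly from inheritance: a role assigned at or above an object in the hierarchy is effective on that object. A small sketch, using hypothetical path tuples rather than real RHV objects, makes the pitfall concrete.

```python
# Illustrative check: is a role assigned at assignment_path effective on
# target_path? It is whenever the assignment sits at or above the target.
def effective_on(target_path, assignment_path):
    return target_path[:len(assignment_path)] == assignment_path

cluster = ("dc1", "cluster1")
pool = ("dc1", "cluster1", "pool1")
vm = ("dc1", "cluster1", "vm42")

# UserRole assigned broadly on the cluster leaks onto the pool beneath it,
# whether or not the administrator intended that:
assert effective_on(pool, cluster)
# UserRole assigned narrowly on a single virtual machine does not:
assert not effective_on(pool, vm)
```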
1.1.2.6. Removing an Administrator or User Role from a Resource
Remove an administrator or user role from a resource; the user loses the inherited permissions associated with the role for that resource.
Procedure
- Find and click the resource’s name. This opens the details view.
- Click the Permissions tab to list the assigned users, the user’s role, and the inherited permissions for the selected resource.
- Select the user to remove from the resource.
- Click Remove.
- Click OK.
1.1.2.7. Managing System Permissions for a Data Center
As the SuperUser, the system administrator manages all aspects of the Administration Portal. More specific administrative roles can be assigned to other users. These restricted administrator roles are useful for granting a user administrative privileges that limit them to a specific resource. For example, a DataCenterAdmin role has administrator privileges only for the assigned data center with the exception of the storage for that data center, and a ClusterAdmin has administrator privileges only for the assigned cluster.
A data center administrator is a system administration role for a specific data center only. This is useful in virtualization environments with multiple data centers where each data center requires an administrator. The DataCenterAdmin role is a hierarchical model; a user assigned the data center administrator role for a data center can manage all objects in the data center with the exception of storage for that data center. Use the Configure button in the header bar to assign a data center administrator for all data centers in the environment.
The data center administrator role permits the following actions:
- Create and remove clusters associated with the data center.
- Add and remove hosts, virtual machines, and pools associated with the data center.
- Edit user permissions for virtual machines associated with the data center.
You can only assign roles and permissions to existing users.
You can change the system administrator of a data center by removing the existing system administrator and adding the new system administrator.
1.1.2.8. Data Center Administrator Roles Explained
Data Center Permission Roles
The table below describes the administrator roles and privileges applicable to data center administration.
Role | Privileges | Notes |
---|---|---|
DataCenterAdmin | Data Center Administrator | Can use, create, delete, manage all physical and virtual resources within a specific data center except for storage, including clusters, hosts, templates and virtual machines. |
NetworkAdmin | Network Administrator | Can configure and manage the network of a particular data center. A network administrator of a data center inherits network permissions for virtual machines within the data center as well. |
1.1.2.9. Managing System Permissions for a Cluster
As the SuperUser, the system administrator manages all aspects of the Administration Portal. More specific administrative roles can be assigned to other users. These restricted administrator roles are useful for granting a user administrative privileges that limit them to a specific resource. For example, a DataCenterAdmin role has administrator privileges only for the assigned data center with the exception of the storage for that data center, and a ClusterAdmin has administrator privileges only for the assigned cluster.
A cluster administrator is a system administration role for a specific cluster only. This is useful in data centers with multiple clusters, where each cluster requires a system administrator. The ClusterAdmin role is a hierarchical model: a user assigned the cluster administrator role for a cluster can manage all objects in the cluster. Use the Configure button in the header bar to assign a cluster administrator for all clusters in the environment.
The cluster administrator role permits the following actions:
- Create and remove associated clusters.
- Add and remove hosts, virtual machines, and pools associated with the cluster.
- Edit user permissions for virtual machines associated with the cluster.
You can only assign roles and permissions to existing users.
You can also change the system administrator of a cluster by removing the existing system administrator and adding the new system administrator.
1.1.2.10. Cluster Administrator Roles Explained
Cluster Permission Roles
The table below describes the administrator roles and privileges applicable to cluster administration.
Role | Privileges | Notes |
---|---|---|
ClusterAdmin | Cluster Administrator | Can use, create, delete, and manage all physical and virtual resources in a specific cluster, including hosts, templates, and virtual machines. Can configure network properties within the cluster, such as designating display networks, or marking a network as required or non-required. However, a ClusterAdmin does not have permissions to attach or detach networks from a cluster; to do so, NetworkAdmin permissions are required. |
NetworkAdmin | Network Administrator | Can configure and manage the network of a particular cluster. A network administrator of a cluster inherits network permissions for virtual machines within the cluster as well. |
1.1.2.11. Managing System Permissions for a Network
As the SuperUser, the system administrator manages all aspects of the Administration Portal. More specific administrative roles can be assigned to other users. These restricted administrator roles are useful for granting a user administrative privileges that limit them to a specific resource. For example, a DataCenterAdmin role has administrator privileges only for the assigned data center with the exception of the storage for that data center, and a ClusterAdmin has administrator privileges only for the assigned cluster.
A network administrator is a system administration role that can be applied for a specific network, or for all networks on a data center, cluster, host, virtual machine, or template. A network user can perform limited administration roles, such as viewing and attaching networks on a specific virtual machine or template. You can use the Configure button in the header bar to assign a network administrator for all networks in the environment.
The network administrator role permits the following actions:
- Create, edit and remove networks.
- Edit the configuration of the network, including configuring port mirroring.
- Attach and detach networks from resources including clusters and virtual machines.
The user who creates a network is automatically assigned NetworkAdmin permissions on the created network. You can also change the administrator of a network by removing the existing administrator and adding the new administrator.
1.1.2.12. Network Administrator and User Roles Explained
Network Permission Roles
The table below describes the administrator and user roles and privileges applicable to network administration.
Role | Privileges | Notes |
---|---|---|
NetworkAdmin | Network Administrator for data center, cluster, host, virtual machine, or template. The user who creates a network is automatically assigned NetworkAdmin permissions on the created network. | Can configure and manage the network of a particular data center, cluster, host, virtual machine, or template. A network administrator of a data center or cluster inherits network permissions for virtual pools within the cluster. To configure port mirroring on a virtual machine network, apply the NetworkAdmin role on the network and the UserVmManager role on the virtual machine. |
VnicProfileUser | Logical network and network interface user for virtual machine and template. | Can attach or detach network interfaces from specific logical networks. |
1.1.2.13. Managing System Permissions for a Host
As the SuperUser, the system administrator manages all aspects of the Administration Portal. More specific administrative roles can be assigned to other users. These restricted administrator roles are useful for granting a user administrative privileges that limit them to a specific resource. For example, a DataCenterAdmin role has administrator privileges only for the assigned data center with the exception of the storage for that data center, and a ClusterAdmin has administrator privileges only for the assigned cluster.
A host administrator is a system administration role for a specific host only. This is useful in clusters with multiple hosts, where each host requires a system administrator. You can use the Configure button in the header bar to assign a host administrator for all hosts in the environment.
The host administrator role permits the following actions:
- Edit the configuration of the host.
- Set up the logical networks.
- Remove the host.
You can also change the system administrator of a host by removing the existing system administrator and adding the new system administrator.
1.1.2.14. Host Administrator Roles Explained
Host Permission Roles
The table below describes the administrator roles and privileges applicable to host administration.
Role | Privileges | Notes |
---|---|---|
HostAdmin | Host Administrator | Can configure, manage, and remove a specific host. Can also perform network-related operations on a specific host. |
1.1.2.15. Managing System Permissions for a Storage Domain
As the SuperUser, the system administrator manages all aspects of the Administration Portal. More specific administrative roles can be assigned to other users. These restricted administrator roles are useful for granting a user administrative privileges that limit them to a specific resource. For example, a DataCenterAdmin role has administrator privileges only for the assigned data center with the exception of the storage for that data center, and a ClusterAdmin has administrator privileges only for the assigned cluster.
A storage administrator is a system administration role for a specific storage domain only. This is useful in data centers with multiple storage domains, where each storage domain requires a system administrator. Use the Configure button in the header bar to assign a storage administrator for all storage domains in the environment.
The storage domain administrator role permits the following actions:
- Edit the configuration of the storage domain.
- Move the storage domain into maintenance mode.
- Remove the storage domain.
You can only assign roles and permissions to existing users.
You can also change the system administrator of a storage domain by removing the existing system administrator and adding the new system administrator.
1.1.2.16. Storage Administrator Roles Explained
Storage Domain Permission Roles
The table below describes the administrator roles and privileges applicable to storage domain administration.
Role | Privileges | Notes |
---|---|---|
StorageAdmin | Storage Administrator | Can create, delete, configure and manage a specific storage domain. |
GlusterAdmin | Gluster Storage Administrator | Can create, delete, configure and manage Gluster storage volumes. |
1.1.2.17. Managing System Permissions for a Virtual Machine Pool
As the SuperUser, the system administrator manages all aspects of the Administration Portal. More specific administrative roles can be assigned to other users. These restricted administrator roles are useful for granting a user administrative privileges that limit them to a specific resource. For example, a DataCenterAdmin role has administrator privileges only for the assigned data center with the exception of the storage for that data center, and a ClusterAdmin has administrator privileges only for the assigned cluster.
A virtual machine pool administrator is a system administration role for virtual machine pools in a data center. This role can be applied to specific virtual machine pools, to a data center, or to the whole virtualized environment; this is useful to allow different users to manage certain virtual machine pool resources.
The virtual machine pool administrator role permits the following actions:
- Create, edit, and remove pools.
- Add virtual machines to the pool, and detach virtual machines from the pool.
You can only assign roles and permissions to existing users.
1.1.2.18. Virtual Machine Pool Administrator Roles Explained
Pool Permission Roles
The table below describes the administrator roles and privileges applicable to pool administration.
Role | Privileges | Notes |
---|---|---|
VmPoolAdmin | System Administrator role of a virtual pool. | Can create, delete, and configure a virtual pool, assign and remove virtual pool users, and perform basic operations on a virtual machine. |
ClusterAdmin | Cluster Administrator | Can use, create, delete, manage all virtual machine pools in a specific cluster. |
1.1.2.19. Managing System Permissions for a Virtual Disk
As the SuperUser, the system administrator manages all aspects of the Administration Portal. More specific administrative roles can be assigned to other users. These restricted administrator roles are useful for granting a user administrative privileges that limit them to a specific resource. For example, a DataCenterAdmin role has administrator privileges only for the assigned data center with the exception of the storage for that data center, and a ClusterAdmin has administrator privileges only for the assigned cluster.
Red Hat Virtualization Manager provides two default virtual disk user roles, but no default virtual disk administrator roles. One of these user roles, the DiskCreator role, enables the administration of virtual disks from the VM Portal. This role can be applied to specific virtual machines, to a data center, to a specific storage domain, or to the whole virtualized environment; this is useful to allow different users to manage different virtual resources.
The virtual disk creator role permits the following actions:
- Create, edit, and remove virtual disks associated with a virtual machine or other resources.
- Edit user permissions for virtual disks.
You can only assign roles and permissions to existing users.
1.1.2.20. Virtual Disk User Roles Explained
Virtual Disk User Permission Roles
The table below describes the user roles and privileges applicable to using and administrating virtual disks in the VM Portal.
Role | Privileges | Notes |
---|---|---|
DiskOperator | Virtual disk user. | Can use, view and edit virtual disks. Inherits permissions to use the virtual machine to which the virtual disk is attached. |
DiskCreator | Can create, edit, manage and remove virtual disks within assigned clusters or data centers. | This role is not applied to a specific virtual disk; apply this role to a user for the whole environment with the Configure window. Alternatively apply this role for specific data centers, clusters, or storage domains. |
1.1.2.20.1. Setting a Legacy SPICE Cipher
SPICE consoles use FIPS-compliant encryption by default, with a cipher string. The default SPICE cipher string is: kECDHE+FIPS:kDHE+FIPS:kRSA+FIPS:!eNULL:!aNULL
This string is generally sufficient. However, if you have a virtual machine with an older operating system or SPICE client, where either one or the other does not support FIPS-compliant encryption, you must use a weaker cipher string. Otherwise, a connection security error may occur if you install a new cluster or a new host in an existing cluster and try to connect to that virtual machine.
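Before deploying a weaker cipher string, it can be worth checking that the local OpenSSL build accepts it. This check is an assumption on my part, not part of the RHV tooling: Python's standard `ssl` module passes a cipher string through to OpenSSL, and `SSLContext.set_ciphers()` raises `ssl.SSLError` when the string selects no usable ciphers.

```python
import ssl

def cipher_string_is_valid(cipher_string):
    """Return True if the local OpenSSL build accepts this cipher string."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    try:
        ctx.set_ciphers(cipher_string)
        return True
    except ssl.SSLError:
        return False

# The weaker string suggested for older SPICE clients:
assert cipher_string_is_valid("DEFAULT:-RC4:-3DES:-DES")
# A string that matches no ciphers is rejected:
assert not cipher_string_is_valid("NO-SUCH-CIPHER")
```

Note that whether the FIPS-specific keywords in the default string are accepted depends on the OpenSSL build, so run the check on the host in question.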
You can change the cipher string by using an Ansible playbook.
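Before deploying a different cipher string, you can sanity-check that it is accepted by OpenSSL. This is a minimal Python check, not part of the RHV procedure; it only verifies that the string parses and that the subtracted cipher families are really excluded:

```python
import ssl

# The weaker cipher string from this procedure; substitute the string
# you actually intend to deploy.
cipher_string = "DEFAULT:-RC4:-3DES:-DES"

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.set_ciphers(cipher_string)  # raises ssl.SSLError if the string is invalid

enabled = {c["name"] for c in ctx.get_ciphers()}
print(f"{len(enabled)} cipher suites enabled")
# RC4 and 3DES suites should all have been subtracted
assert not any("RC4" in n or "3DES" in n for n in enabled)
```

The exact set of enabled suites depends on the local OpenSSL build, so only the exclusions, not the count, are meaningful here.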
Changing the cipher string
On the Manager machine, create a file in the directory /usr/share/ovirt-engine/playbooks. For example:
# vim /usr/share/ovirt-engine/playbooks/change-spice-cipher.yml
Enter the following in the file and save it:
- name: oVirt - setup weaker SPICE encryption for old clients
  hosts: hostname
  vars:
    host_deploy_spice_cipher_string: 'DEFAULT:-RC4:-3DES:-DES'
  roles:
    - ovirt-host-deploy-spice-encryption
Run the file you just created:
# ansible-playbook -l hostname /usr/share/ovirt-engine/playbooks/change-spice-cipher.yml
Alternatively, you can reconfigure the host with the Ansible playbook ovirt-host-deploy, using the --extra-vars option with the variable host_deploy_spice_cipher_string:
# ansible-playbook -l hostname \
--extra-vars host_deploy_spice_cipher_string="DEFAULT:-RC4:-3DES:-DES" \
/usr/share/ovirt-engine/playbooks/ovirt-host-deploy.yml
1.1.3. Scheduling Policies
A scheduling policy is a set of rules that defines the logic by which virtual machines are distributed among the hosts in the cluster to which that scheduling policy is applied. Scheduling policies determine this logic through a combination of filters, weightings, and a load balancing policy. The filter modules apply hard enforcement and filter out hosts that do not meet the conditions specified by that filter. The weights modules apply soft enforcement, and are used to control the relative priority of factors considered when determining the hosts in a cluster on which a virtual machine can run.
The Red Hat Virtualization Manager provides five default scheduling policies: Evenly_Distributed, Cluster_Maintenance, None, Power_Saving, and VM_Evenly_Distributed. You can also define new scheduling policies that provide fine-grained control over the distribution of virtual machines. Regardless of the scheduling policy, a virtual machine will not start on a host with an overloaded CPU. By default, a host’s CPU is considered overloaded if it has a load of more than 80% for 5 minutes, but these values can be changed using scheduling policies. See Scheduling Policies in the Administration Guide for more information about the properties of each scheduling policy.
For detailed information about how scheduling policies work, see How does cluster scheduling policy work?.
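The filter-then-weigh pipeline described above can be sketched in a few lines. This is an illustration of the concept only; the host fields, module names, and thresholds here are invented for the example and are not the Manager's internal data model:

```python
# Sketch of scheduling-policy logic: filter modules apply hard enforcement,
# weight modules apply soft enforcement (lower score = better host).

def filter_cpu_overload(host, vm):
    # Hard rule: skip hosts whose CPU is overloaded (default threshold: 80%).
    return host["cpu_load"] <= 80

def filter_memory(host, vm):
    # Hard rule: the host must have enough free memory for the VM.
    return host["free_mem_mb"] >= vm["mem_mb"]

def weight_even_distribution(host, vm):
    # Soft rule: prefer the least CPU-loaded host.
    return host["cpu_load"]

def schedule(vm, hosts, filters, weights):
    candidates = [h for h in hosts if all(f(h, vm) for f in filters)]
    if not candidates:
        return None  # the VM cannot start anywhere in the cluster
    # Sum the weight-module scores; the lowest total wins.
    return min(candidates, key=lambda h: sum(w(h, vm) for w in weights))

hosts = [
    {"name": "host1", "cpu_load": 85, "free_mem_mb": 32768},
    {"name": "host2", "cpu_load": 40, "free_mem_mb": 16384},
    {"name": "host3", "cpu_load": 20, "free_mem_mb": 2048},
]
vm = {"mem_mb": 4096}
best = schedule(vm, hosts, [filter_cpu_overload, filter_memory],
                [weight_even_distribution])
print(best["name"])  # → host2 (host1 is overloaded, host3 lacks memory)
```

Note how a filter removes a host outright, while a weight only lowers its priority; this mirrors the hard/soft enforcement distinction above.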
Figure 1.4. Evenly Distributed Scheduling Policy
The Evenly_Distributed scheduling policy distributes the memory and CPU processing load evenly across all hosts in the cluster. Additional virtual machines attached to a host will not start if that host has reached the defined CpuOverCommitDurationMinutes, HighUtilization, VCpuToPhysicalCpuRatio, or MaxFreeMemoryForOverUtilized.
The VM_Evenly_Distributed scheduling policy distributes virtual machines evenly between hosts based on a count of the virtual machines. The cluster is considered unbalanced if any host is running more virtual machines than the HighVmCount and there is at least one host with a virtual machine count that falls outside of the MigrationThreshold.
Figure 1.5. Power Saving Scheduling Policy
The Power_Saving scheduling policy distributes the memory and CPU processing load across a subset of available hosts to reduce power consumption on underutilized hosts. Hosts with a CPU load below the low utilization value for longer than the defined time interval migrate all their virtual machines to other hosts so that they can be powered down. Additional virtual machines attached to a host will not start if that host has reached the defined high utilization value.
Set the None policy to have no load or power sharing between hosts for running virtual machines. This is the default mode. When a virtual machine is started, the memory and CPU processing load is spread evenly across all hosts in the cluster. Additional virtual machines attached to a host will not start if that host has reached the defined CpuOverCommitDurationMinutes, HighUtilization, or MaxFreeMemoryForOverUtilized.
The Cluster_Maintenance scheduling policy limits activity in a cluster during maintenance tasks. When the Cluster_Maintenance policy is set, no new virtual machines may be started, except highly available virtual machines. If host failure occurs, highly available virtual machines will restart properly and any virtual machine can migrate.
1.1.3.1. Creating a Scheduling Policy
You can create new scheduling policies to control the logic by which virtual machines are distributed amongst a given cluster in your Red Hat Virtualization environment.
Procedure
- Click Administration → Configure.
- Click the Scheduling Policies tab.
- Click New.
- Enter a Name and Description for the scheduling policy.
Configure filter modules:
- In the Filter Modules section, drag and drop the preferred filter modules to apply to the scheduling policy from the Disabled Filters section into the Enabled Filters section.
- Specific filter modules can also be set as the First, to be given highest priority, or Last, to be given lowest priority, for basic optimization. To set the priority, right-click any filter module, hover the cursor over Position and select First or Last.
Configure weight modules:
- In the Weights Modules section, drag and drop the preferred weights modules to apply to the scheduling policy from the Disabled Weights section into the Enabled Weights & Factors section.
- Use the + and - buttons to the left of the enabled weight modules to increase or decrease the weight of those modules.
Specify a load balancing policy:
- From the drop-down menu in the Load Balancer section, select the load balancing policy to apply to the scheduling policy.
- From the drop-down menu in the Properties section, select a load balancing property to apply to the scheduling policy and use the text field to the right of that property to specify a value.
- Use the + and - buttons to add or remove additional properties.
- Click OK.
1.1.3.2. Explanation of Settings in the New Scheduling Policy and Edit Scheduling Policy Window
The following table details the options available in the New Scheduling Policy and Edit Scheduling Policy windows.
Field Name | Description |
---|---|
Name | The name of the scheduling policy. This is the name used to refer to the scheduling policy in the Red Hat Virtualization Manager. |
Description | A description of the scheduling policy. This field is recommended but not mandatory. |
Filter Modules | A set of filters for controlling the hosts on which a virtual machine in a cluster can run. Enabling a filter will filter out hosts that do not meet the conditions specified by that filter, as outlined below:
|
Weights Modules | A set of weightings for controlling the relative priority of factors considered when determining the hosts in a cluster on which a virtual machine can run.
|
Load Balancer | This drop-down menu allows you to select a load balancing module to apply. Load balancing modules determine the logic used to migrate virtual machines from hosts experiencing high usage to hosts experiencing lower usage. |
Properties | This drop-down menu allows you to add or remove properties for load balancing modules, and is only available when you have selected a load balancing module for the scheduling policy. No properties are defined by default, and the properties that are available are specific to the load balancing module that is selected. Use the + and - buttons to add or remove additional properties to or from the load balancing module. |
1.1.4. Instance Types
Instance types can be used to define the hardware configuration of a virtual machine. Selecting an instance type when creating or editing a virtual machine will automatically fill in the hardware configuration fields. This allows users to create multiple virtual machines with the same hardware configuration without having to manually fill in every field.
Support for instance types is now deprecated, and will be removed in a future release.
A set of predefined instance types are available by default, as outlined in the following table:
Name | Memory | vCPUs |
---|---|---|
Tiny | 512 MB | 1 |
Small | 2 GB | 1 |
Medium | 4 GB | 2 |
Large | 8 GB | 2 |
XLarge | 16 GB | 4 |
Administrators can also create, edit, and remove instance types from the Instance Types tab of the Configure window.
Fields in the New Virtual Machine and Edit Virtual Machine windows that are bound to an instance type have a chain link icon next to them. If the value of one of these fields is changed, the virtual machine is detached from the instance type, the Instance Type field changes to Custom, and the chain icon appears broken. However, if the value is changed back, the chain relinks and the instance type moves back to the selected one.
1.1.4.1. Creating Instance Types
Administrators can create new instance types, which can then be selected by users when creating or editing virtual machines.
Procedure
- Click Administration → Configure.
- Click the Instance Types tab.
- Click New.
- Enter a Name and Description for the instance type.
- Click Show Advanced Options and configure the instance type’s settings as required. The settings that appear in the New Instance Type window are identical to those in the New Virtual Machine window, but with the relevant fields only. See Explanation of Settings in the New Virtual Machine and Edit Virtual Machine Windows in the Virtual Machine Management Guide.
- Click OK.
The new instance type will appear in the Instance Types tab in the Configure window, and can be selected from the Instance Type drop-down list when creating or editing a virtual machine.
1.1.4.2. Editing Instance Types
Administrators can edit existing instance types from the Configure window.
Procedure
- Click Administration → Configure.
- Click the Instance Types tab.
- Select the instance type to be edited.
- Click Edit.
- Change the settings as required.
- Click OK.
The configuration of the instance type is updated. When a new virtual machine based on this instance type is created, or when an existing virtual machine based on this instance type is updated, the new configuration is applied.
Existing virtual machines based on this instance type will display fields, marked with a chain icon, that will be updated. If the existing virtual machines were running when the instance type was changed, the orange Pending Changes icon will appear beside them and the fields with the chain icon will be updated at the next restart.
1.1.4.3. Removing Instance Types
Procedure
- Click Administration → Configure.
- Click the Instance Types tab.
- Select the instance type to be removed.
- Click Remove.
- If any virtual machines are based on the instance type to be removed, a warning window listing the attached virtual machines will appear. To continue removing the instance type, select the Approve Operation check box. Otherwise click Cancel.
- Click OK.
The instance type is removed from the Instance Types list and can no longer be used when creating a new virtual machine. Any virtual machines that were attached to the removed instance type will now be attached to Custom (no instance type).
1.1.5. MAC Address Pools
MAC address pools define the range(s) of MAC addresses allocated for each cluster. A MAC address pool is specified for each cluster. By using MAC address pools, Red Hat Virtualization can automatically generate and assign MAC addresses to new virtual network devices, which helps to prevent MAC address duplication. MAC address pools are more memory efficient when all MAC addresses related to a cluster are within the range for the assigned MAC address pool.
The same MAC address pool can be shared by multiple clusters, but each cluster has a single MAC address pool assigned. A default MAC address pool is created by Red Hat Virtualization and is used if another MAC address pool is not assigned. For more information about assigning MAC address pools to clusters see Creating a New Cluster.
If more than one Red Hat Virtualization cluster shares a network, do not rely solely on the default MAC address pool because the virtual machines of each cluster will try to use the same range of MAC addresses, leading to conflicts. To avoid MAC address conflicts, check the MAC address pool ranges to ensure that each cluster is assigned a unique MAC address range.
The MAC address pool assigns the next available MAC address following the last address that was returned to the pool. If there are no further addresses left in the range, the search starts again from the beginning of the range. If there are multiple MAC address ranges with available MAC addresses defined in a single MAC address pool, the ranges take turns in serving incoming requests in the same way available MAC addresses are selected.
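The allocation behavior described in the preceding paragraph can be modeled in a short sketch. This is an illustration of the described behavior, not the Manager's implementation; the MAC ranges are invented for the example:

```python
# Sketch of MAC-pool allocation: addresses are handed out following the
# last one used, wrapping to the start of a range when it is exhausted,
# with multiple ranges taking turns serving requests.

def mac_to_int(mac):
    return int(mac.replace(":", ""), 16)

def int_to_mac(n):
    return ":".join(f"{(n >> s) & 0xFF:02x}" for s in range(40, -8, -8))

class MacPool:
    def __init__(self, ranges):
        # ranges: list of (from_mac, to_mac) tuples, as in MAC Address Ranges
        self.ranges = [(mac_to_int(a), mac_to_int(b)) for a, b in ranges]
        self.cursor = [lo for lo, _ in self.ranges]  # next candidate per range
        self.turn = 0  # the ranges take turns serving incoming requests
        self.in_use = set()

    def allocate(self):
        for _ in range(len(self.ranges)):
            idx = self.turn % len(self.ranges)
            self.turn += 1
            lo, hi = self.ranges[idx]
            span = hi - lo + 1
            start = self.cursor[idx]
            for off in range(span):
                cand = lo + (start - lo + off) % span  # wrap around the range
                if cand not in self.in_use:
                    self.in_use.add(cand)
                    self.cursor[idx] = lo + (cand - lo + 1) % span
                    return int_to_mac(cand)
        raise RuntimeError("MAC address pool exhausted")

pool = MacPool([("56:6f:00:00:00:00", "56:6f:00:00:00:01"),
                ("56:6f:00:00:01:00", "56:6f:00:00:01:01")])
print(pool.allocate())  # first address of the first range
print(pool.allocate())  # ranges take turns: first address of the second range
```

The round-robin over ranges and the wrap-around within a range correspond directly to the two rules stated above.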
1.1.5.1. Creating MAC Address Pools
You can create new MAC address pools.
Procedure
- Click Administration → Configure.
- Click the MAC Address Pools tab.
- Click Add.
- Enter the Name and Description of the new MAC address pool.
Select the Allow Duplicates check box to allow a MAC address to be used multiple times in a pool. The MAC address pool will not automatically use a duplicate MAC address, but enabling the duplicates option means a user can manually use a duplicate MAC address.
Note: If one MAC address pool has duplicates disabled, and another has duplicates enabled, each MAC address can be used once in the pool with duplicates disabled but can be used multiple times in the pool with duplicates enabled.
- Enter the required MAC Address Ranges. To enter multiple ranges click the plus button next to the From and To fields.
- Click OK.
1.1.5.2. Editing MAC Address Pools
You can edit MAC address pools to change the details, including the range of MAC addresses available in the pool and whether duplicates are allowed.
Procedure
- Click Administration → Configure.
- Click the MAC Address Pools tab.
- Select the MAC address pool to be edited.
- Click Edit.
Change the Name, Description, Allow Duplicates, and MAC Address Ranges fields as required.
Note: When a MAC address range is updated, the MAC addresses of existing NICs are not reassigned. MAC addresses that were already assigned, but are outside of the new MAC address range, are added as user-specified MAC addresses and are still tracked by that MAC address pool.
- Click OK.
1.1.5.3. Editing MAC Address Pool Permissions
After a MAC address pool has been created, you can edit its user permissions. The user permissions control which data centers can use the MAC address pool. See Roles for more information on adding new user permissions.
Procedure
- Click Administration → Configure.
- Click the MAC Address Pools tab.
- Select the required MAC address pool.
Edit the user permissions for the MAC address pool:
To add user permissions to a MAC address pool:
- Click Add in the user permissions pane at the bottom of the Configure window.
- Search for and select the required users.
- Select the required role from the Role to Assign drop-down list.
- Click OK to add the user permissions.
To remove user permissions from a MAC address pool:
- Select the user permission to be removed in the user permissions pane at the bottom of the Configure window.
- Click Remove to remove the user permissions.
1.1.5.4. Removing MAC Address Pools
You can remove a created MAC address pool if the pool is not associated with a cluster, but the default MAC address pool cannot be removed.
Procedure
- Click Administration → Configure.
- Click the MAC Address Pools tab.
- Select the MAC address pool to be removed.
- Click Remove.
- Click OK.
1.2. Dashboard
The Dashboard provides an overview of the Red Hat Virtualization system status by displaying a summary of Red Hat Virtualization’s resources and utilization. This summary can alert you to a problem and allows you to analyze the problem area.
The inventory card information is supplied by the Manager API and is updated every 15 seconds by default; the utilization information is supplied by Data Warehouse and is updated every 15 minutes by default. The Dashboard does not refresh automatically: it is refreshed when the user navigates back to it from another page or refreshes it manually. The Dashboard is implemented as a UI plugin component, which is automatically installed and upgraded alongside the Manager.
Figure 1.6. The Dashboard
1.2.1. Prerequisites
The Dashboard requires that Data Warehouse is installed and configured. See Installing and Configuring Data Warehouse in the Data Warehouse Guide.
1.2.2. Global Inventory
The top section of the Dashboard provides a global inventory of the Red Hat Virtualization resources and includes items for data centers, clusters, hosts, storage domains, virtual machines, and events. Icons show the status of each resource and numbers show the quantity of each resource with that status.
Figure 1.7. Global Inventory
The title shows the number of a type of resource and their status is displayed below the title. Clicking on the resource title navigates to the related page in the Red Hat Virtualization Manager. The status for Clusters is always displayed as N/A.
Icon | Status |
---|---|
| None of that resource added to Red Hat Virtualization. |
| Shows the number of a resource with a warning status. Clicking on the icon navigates to the appropriate page with the search limited to that resource with a warning status. The search is limited differently for each resource:
|
| Shows the number of a resource with an up status. Clicking on the icon navigates to the appropriate page with the search limited to resources that are up. |
| Shows the number of a resource with a down status. Clicking on the icon navigates to the appropriate page with the search limited to resources with a down status. The search is limited differently for each resource:
|
Alert icon | Shows the number of events with an alert status. Clicking on the icon navigates to Events with the search limited to events with the severity of alert. |
Error icon | Shows the number of events with an error status. Clicking on the icon navigates to Events with the search limited to events with the severity of error. |
1.2.3. Global Utilization
The Global Utilization section shows the system utilization of the CPU, Memory and Storage.
Figure 1.8. Global Utilization
- The top section shows the percentage of the available CPU, memory or storage and the overcommit ratio. For example, the overcommit ratio for the CPU is calculated by dividing the number of virtual cores by the number of physical cores that are available for the running virtual machines, based on the latest data in Data Warehouse.
- The donut displays the usage in percentage for the CPU, memory or storage and shows the average usage for all hosts based on the average usage in the last 5 minutes. Hovering over a section of the donut will display the value of the selected section.
- The line graph at the bottom displays the trend in the last 24 hours. Each data point shows the average usage for a specific hour. Hovering over a point on the graph displays the time and the percentage used for the CPU graph and the amount of usage for the memory and storage graphs.
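The overcommit ratio described above is a simple division. A worked example with invented figures:

```python
# CPU overcommit ratio: virtual cores allocated to running VMs divided by
# the physical cores available to run them. These numbers are illustrative.
virtual_cores = 48   # sum of vCPUs across running virtual machines
physical_cores = 16  # physical cores on the hosts running them

overcommit_ratio = virtual_cores / physical_cores
print(f"{overcommit_ratio:.1f}:1")  # → 3.0:1
```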
1.2.3.1. Top Utilized Resources
Figure 1.9. Top Utilized Resources (Memory)
Clicking the donut in the global utilization section of the Dashboard will display a list of the top utilized resources for the CPU, memory or storage. For CPU and memory the pop-up shows a list of the ten hosts and virtual machines with the highest usage. For storage the pop-up shows a list of the top ten utilized storage domains and virtual machines. The arrow to the right of the usage bar shows the trend of usage for that resource in the last minute.
1.2.4. Cluster Utilization
The Cluster Utilization section shows the cluster utilization for the CPU and memory in a heatmap.
Figure 1.10. Cluster Utilization
1.2.4.1. CPU
The heatmap of the CPU utilization for a specific cluster shows the average utilization of the CPU for the last 24 hours. Hovering over the heatmap displays the cluster name. Clicking on the heatmap navigates to Compute → Hosts and displays the results of a search on a specific cluster sorted by CPU utilization. The formula used to calculate the usage of the CPU by the cluster is the average host CPU utilization in the cluster. This is calculated by using the average host CPU utilization for each host over the last 24 hours to find the total average usage of the CPU by the cluster.
1.2.4.2. Memory
The heatmap of the memory utilization for a specific cluster shows the average utilization of the memory for the last 24 hours. Hovering over the heatmap displays the cluster name. Clicking on the heatmap navigates to Compute → Hosts and displays the results of a search on a specific cluster sorted by memory usage. The formula used to calculate the memory usage by the cluster is the total utilization of the memory in the cluster in GB. This is calculated by using the average host memory utilization for each host over the last 24 hours to find the total average usage of memory by the cluster.
1.2.5. Storage Utilization
The Storage Utilization section shows the storage utilization in a heatmap.
Figure 1.11. Storage Utilization
The heatmap shows the average utilization of the storage for the last 24 hours. The formula used to calculate the storage usage by the cluster is the total utilization of the storage in the cluster. This is calculated by using the average storage utilization for each host over the last 24 hours to find the total average usage of the storage by the cluster. Hovering over the heatmap displays the storage domain name. Clicking on the heatmap navigates to Storage → Domains with the storage domains sorted by utilization.
1.3. Searches
1.3.1. Performing Searches in Red Hat Virtualization
The Administration Portal allows you to manage thousands of resources, such as virtual machines, hosts, users, and more. To perform a search, enter the search query (free-text or syntax-based) into the search bar, available on the main page for each resource. Search queries can be saved as bookmarks for future reuse, so you do not have to reenter a search query each time the specific search results are required. Searches are not case sensitive.
1.3.2. Search Syntax and Examples
The syntax of the search queries for Red Hat Virtualization resources is as follows:
result type: {criteria} [sortby sort_spec]
Syntax Examples
The following examples describe how the search query is used and help you to understand how Red Hat Virtualization assists with building search queries.
Example | Result |
---|---|
Hosts: Vms.status = up page 2 | Displays page 2 of a list of all hosts running virtual machines that are up. |
Vms: domain = qa.company.com | Displays a list of all virtual machines running on the specified domain. |
Vms: users.name = Mary | Displays a list of all virtual machines belonging to users with the user name Mary. |
Events: severity > normal sortby time | Displays the list of all Events whose severity is higher than Normal, sorted by time. |
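The query form shown above (a result type, criteria, and an optional sortby clause) can be captured with a small parser. This is a sketch for illustration, not the Manager's implementation, and it does not validate properties against a real resource schema:

```python
import re

# Sketch parser for "result_type: {criteria} [sortby sort_spec [asc|desc]]".
# It only splits a query into its three parts.
QUERY_RE = re.compile(
    r"^(?P<type>\w+)\s*:\s*"
    r"(?P<criteria>.*?)"
    r"(?:\s+sortby\s+(?P<sort>\w+(?:\s+(?:asc|desc))?))?$",
    re.IGNORECASE,
)

def parse_query(query):
    m = QUERY_RE.match(query.strip())
    if not m:
        raise ValueError(f"not a valid search query: {query!r}")
    return m.group("type"), m.group("criteria").strip(), m.group("sort")

print(parse_query("Events: severity > normal sortby time"))
# → ('Events', 'severity > normal', 'time')
print(parse_query("Vms: domain = qa.company.com"))
# → ('Vms', 'domain = qa.company.com', None)
```

A real query engine would additionally check the result type against the list in "Search Result Type Options" and the property names against the per-resource tables later in this section.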
1.3.3. Search Auto-Completion
The Administration Portal provides auto-completion to help you create valid and powerful search queries. As you type each part of a search query, a drop-down list of choices for the next part of the search opens below the Search Bar. You can either select from the list and then continue typing/selecting the next part of the search, or ignore the options and continue entering your query manually.
The following table specifies by example how the Administration Portal auto-completion assists in constructing a query:
Example query: Hosts: Vms.status = down
Input | List Items Displayed | Action |
---|---|---|
h | Hosts (1 option only) | Select Hosts |
Hosts: | All host properties | Type v |
Hosts: v | Host properties starting with a v, and Vms | Select Vms |
Hosts: Vms | All virtual machine properties | Type s |
Hosts: Vms.s | All virtual machine properties beginning with s | Select status |
Hosts: Vms.status | = and != | Select or type = |
Hosts: Vms.status = | All status values | Select or type down |
1.3.4. Search Result Type Options
The result type allows you to search for resources of any of the following types:
- Vms for a list of virtual machines
- Host for a list of hosts
- Pools for a list of pools
- Template for a list of templates
- Events for a list of events
- Users for a list of users
- Cluster for a list of clusters
- DataCenter for a list of data centers
- Storage for a list of storage domains
As each type of resource has a unique set of properties and a set of other resource types that it is associated with, each search type has a set of valid syntax combinations. You can also use the auto-complete feature to create valid queries easily.
1.3.5. Search Criteria
You can specify the search criteria after the colon in the query. The syntax of {criteria} is as follows:
<prop><operator><value>
or
<obj-type><prop><operator><value>
The following table describes the parts of the syntax:
Part | Description | Values | Example | Note |
---|---|---|---|---|
prop | The property of the searched-for resource. Can also be the property of a resource type (see obj-type). | Limit your search to objects with a certain property. For example, search for objects with a status property. | Status | N/A |
obj-type | A resource type that can be associated with the searched-for resource. | These are system objects, like data centers and virtual machines. | Users | N/A |
operator | Comparison operators. | = != (not equal) > < >= <= | N/A | Value options depend on property. |
value | What the expression is being compared to. | String, Integer, Ranking, Date (formatted according to Regional Settings) | Jones, 256, normal | Wildcards can be used within strings. |
1.3.6. Search: Multiple Criteria and Wildcards
Wildcards can be used in the <value> part of the syntax for strings. For example, to find all users beginning with m, enter m*.
You can perform a search having two criteria by using the Boolean operators AND
and OR
. For example:
Vms: users.name = m* AND status = Up
This query returns all running virtual machines for users whose names begin with "m".
Vms: users.name = m* AND tag = "paris-loc"
This query returns all virtual machines tagged with "paris-loc" for users whose names begin with "m".
When two criteria are specified without AND or OR, AND is implied. AND precedes OR, and OR precedes implied AND.
1.3.7. Search: Determining Search Order
You can determine the sort order of the returned information by using sortby. Sort direction (asc for ascending, desc for descending) can be included.
For example:
events: severity > normal sortby time desc
This query returns all Events whose severity is higher than Normal, sorted by time (descending order).
1.3.8. Searching for Data Centers
The following table describes all search options for Data Centers.
Property (of resource or resource-type) | Type | Description (Reference) |
---|---|---|
Clusters.clusters-prop | Depends on property type | The property of the clusters associated with the data center. |
name | String | The name of the data center. |
description | String | A description of the data center. |
type | String | The type of data center. |
status | List | The availability of the data center. |
sortby | List | Sorts the returned results by one of the resource properties. |
page | Integer | The page number of results to display. |
Example
Datacenter: type = nfs and status != up
This example returns a list of data centers with a storage type of NFS and status other than up.
1.3.9. Searching for Clusters
The following table describes all search options for clusters.
Property (of resource or resource-type) | Type | Description (Reference) |
---|---|---|
Datacenter.datacenter-prop | Depends on property type | The property of the data center associated with the cluster. |
Datacenter | String | The data center to which the cluster belongs. |
name | String | The unique name that identifies the clusters on the network. |
description | String | The description of the cluster. |
initialized | String | True or False indicating the status of the cluster. |
sortby | List | Sorts the returned results by one of the resource properties. |
page | Integer | The page number of results to display. |
Example
Clusters: initialized = true or name = Default
This example returns a list of clusters which are initialized or named Default.
1.3.10. Searching for Hosts
The following table describes all search options for hosts.
Property (of resource or resource-type) | Type | Description (Reference) |
---|---|---|
Vms.Vms-prop | Depends on property type | The property of the virtual machines associated with the host. |
Templates.templates-prop | Depends on property type | The property of the templates associated with the host. |
Events.events-prop | Depends on property type | The property of the events associated with the host. |
Users.users-prop | Depends on property type | The property of the users associated with the host. |
name | String | The name of the host. |
status | List | The availability of the host. |
external_status | String | The health status of the host as reported by external systems and plug-ins. |
cluster | String | The cluster to which the host belongs. |
address | String | The unique name that identifies the host on the network. |
cpu_usage | Integer | The percent of processing power used. |
mem_usage | Integer | The percentage of memory used. |
network_usage | Integer | The percentage of network usage. |
load | Integer | Jobs waiting to be executed in the run-queue per processor, in a given time slice. |
version | Integer | The version number of the operating system. |
cpus | Integer | The number of CPUs on the host. |
memory | Integer | The amount of memory available. |
cpu_speed | Integer | The processing speed of the CPU. |
cpu_model | String | The type of CPU. |
active_vms | Integer | The number of virtual machines currently running. |
migrating_vms | Integer | The number of virtual machines currently being migrated. |
committed_mem | Integer | The percentage of committed memory. |
tag | String | The tag assigned to the host. |
type | String | The type of host. |
datacenter | String | The data center to which the host belongs. |
sortby | List | Sorts the returned results by one of the resource properties. |
page | Integer | The page number of results to display. |
Example
Hosts: cluster = Default and Vms.os = rhel6
This example returns a list of hosts which are part of the Default cluster and host virtual machines running the Red Hat Enterprise Linux 6 operating system.
1.3.11. Searching for Networks
The following table describes all search options for networks.
Property (of resource or resource-type) | Type | Description (Reference) |
---|---|---|
Cluster_network.clusternetwork-prop | Depends on property type | The property of the cluster associated with the network. |
Host_Network.hostnetwork-prop | Depends on property type | The property of the host associated with the network. |
name | String | The human readable name that identifies the network. |
description | String | Keywords or text describing the network, optionally used when creating the network. |
vlanid | Integer | The VLAN ID of the network. |
stp | String | Whether Spanning Tree Protocol (STP) is enabled or disabled for the network. |
mtu | Integer | The maximum transmission unit for the logical network. |
vmnetwork | String | Whether the network is only used for virtual machine traffic. |
datacenter | String | The data center to which the network is attached. |
sortby | List | Sorts the returned results by one of the resource properties. |
page | Integer | The page number of results to display. |
Example
Network: mtu > 1500 and vmnetwork = true
This example returns a list of networks with a maximum transmission unit greater than 1500 bytes, and which are set up for use by only virtual machines.
1.3.12. Searching for Storage
The following table describes all search options for storage.
Property (of resource or resource-type) | Type | Description (Reference) |
---|---|---|
Hosts.hosts-prop | Depends on property type | The property of the hosts associated with the storage. |
Clusters.clusters-prop | Depends on property type | The property of the clusters associated with the storage. |
name | String | The unique name that identifies the storage on the network. |
status | String | The status of the storage domain. |
external_status | String | The health status of the storage domain as reported by external systems and plug-ins. |
datacenter | String | The data center to which the storage belongs. |
type | String | The type of the storage. |
free_size | Integer | The size (GB) of the free storage. |
used_size | Integer | The amount (GB) of the storage that is used. |
total_size | Integer | The total amount (GB) of the storage that is available. |
committed | Integer | The amount (GB) of the storage that is committed. |
sortby | List | Sorts the returned results by one of the resource properties. |
page | Integer | The page number of results to display. |
Example
Storage: free_size > 6 GB and total_size < 20 GB
This example returns a list of storage with free storage space greater than 6 GB and total storage space less than 20 GB.
1.3.13. Searching for Disks
The following table describes all search options for disks.
You can use the Disk Type and Content Type filtering options to reduce the number of displayed virtual disks.
Property (of resource or resource-type) | Type | Description (Reference) |
---|---|---|
| Depends on property type | The property of the data centers associated with the disk. |
| Depends on property type | The property of the storage associated with the disk. |
| String | The human readable name that identifies the storage on the network. |
| String | Keywords or text describing the disk, optionally used when creating the disk. |
| Integer | The virtual size of the disk. |
| Integer | The size of the disk. |
| Integer | The actual size allocated to the disk. |
| Integer | The date the disk was created. |
| String | Whether the disk can or cannot be booted. Valid values are one of |
| String | Whether the disk can or cannot be attached to more than one virtual machine at a time. Valid values are one of |
| String | The format of the disk. Can be one of |
| String | The status of the disk. Can be one of |
| String | The type of the disk. Can be one of |
| Integer | The number of virtual machine(s) to which the disk is attached. |
| String | The name(s) of the virtual machine(s) to which the disk is attached. |
| String | The name of the quota enforced on the virtual disk. |
| List | Sorts the returned results by one of the resource properties. |
| Integer | The page number of results to display. |
Example
Disks: format = cow and provisioned_size > 8
This example returns a list of virtual disks with QCOW format and an allocated disk size greater than 8 GB.
1.3.14. Searching for Volumes
The following table describes all search options for volumes.
Property (of resource or resource-type) | Type | Description (Reference) |
---|---|---|
| String | The name of the cluster associated with the volume. |
| Depends on property type (examples: name, description, comment, architecture) | The property of the clusters associated with the volume. |
| String | The human readable name that identifies the volume. |
| String | Can be one of distribute, replicate, distributed_replicate, stripe, or distributed_stripe. |
| String | Can be one of TCP or RDMA. |
| Integer | Number of replicas. |
| Integer | Number of stripes. |
| String | The status of the volume. Can be one of Up or Down. |
| List | Sorts the returned results by one of the resource properties. |
| Integer | The page number of results to display. |
Example
Volume: transport_type = rdma and stripe_count >= 2
This example returns a list of volumes with transport type set to RDMA, and with 2 or more stripes.
1.3.15. Searching for Virtual Machines
The following table describes all search options for virtual machines.
Currently, the Network Label, Custom Emulated Machine, and Custom CPU Type properties are not supported search parameters.
Property (of resource or resource-type) | Type | Description (Reference) |
---|---|---|
| Depends on property type | The property of the hosts associated with the virtual machine. |
| Depends on property type | The property of the templates associated with the virtual machine. |
| Depends on property type | The property of the events associated with the virtual machine. |
| Depends on property type | The property of the users associated with the virtual machine. |
| Depends on the property type | The property of storage devices associated with the virtual machine. |
| Depends on the property type | The property of the vNIC associated with the virtual machine. |
| String | The name of the virtual machine. |
| List | The availability of the virtual machine. |
| String | The IP address of the virtual machine. |
| Integer | The number of minutes that the virtual machine has been running. |
| String | The domain (usually Active Directory domain) that groups these machines. |
| String | The operating system selected when the virtual machine was created. |
| Date | The date on which the virtual machine was created. |
| String | The unique name that identifies the virtual machine on the network. |
| Integer | The percent of processing power used. |
| Integer | The percentage of memory used. |
| Integer | The percentage of network used. |
| Integer | The maximum memory defined. |
| String | The applications currently installed on the virtual machine. |
| List | The cluster to which the virtual machine belongs. |
| List | The virtual machine pool to which the virtual machine belongs. |
| String | The name of the user currently logged in to the virtual machine. |
| List | The tags to which the virtual machine belongs. |
| String | The data center to which the virtual machine belongs. |
| List | The virtual machine type (server or desktop). |
| String | The name of the quota associated with the virtual machine. |
| String | Keywords or text describing the virtual machine, optionally used when creating the virtual machine. |
| List | Sorts the returned results by one of the resource properties. |
| Integer | The page number of results to display. |
| Boolean | The virtual machine has pending configuration changes. |
Example
Vms: template.name = Win* and user.name = ""
This example returns a list of virtual machines whose base template name begins with Win and that are not assigned to any user.
Example
Vms: cluster = Default and os = windows7
This example returns a list of virtual machines that belong to the Default cluster and are running Windows 7.
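The wildcard in the first example (template.name = Win*) behaves like shell-style globbing, which Python's fnmatch module can approximate. The virtual machine records below are illustrative placeholders, not the Manager's data model:

```python
from fnmatch import fnmatch

# Sketch: emulate "template.name = Win* and user.name = """ from the
# example above, i.e. Windows-templated VMs with no assigned user.
vms = [
    {"name": "vm1", "template": "Win2019-base", "user": ""},
    {"name": "vm2", "template": "RHEL8-base", "user": "admin"},
    {"name": "vm3", "template": "Win10-gold", "user": ""},
]

matches = [v["name"] for v in vms
           if fnmatch(v["template"], "Win*") and v["user"] == ""]
print(matches)  # ['vm1', 'vm3']
```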
1.3.16. Searching for Pools
The following table describes all search options for Pools.
Property (of resource or resource-type) | Type | Description (Reference) |
---|---|---|
| String | The name of the pool. |
| String | The description of the pool. |
| List | The type of pool. |
| List | Sorts the returned results by one of the resource properties. |
| Integer | The page number of results to display. |
Example
Pools: type = automatic
This example returns a list of pools with a type of automatic
.
1.3.17. Searching for Templates
The following table describes all search options for templates.
Property (of resource or resource-type) | Type | Description (Reference) |
---|---|---|
| String | The property of the virtual machines associated with the template. |
| String | The property of the hosts associated with the template. |
| String | The property of the events associated with the template. |
| String | The property of the users associated with the template. |
| String | The name of the template. |
| String | The domain of the template. |
| String | The type of operating system. |
| Integer | The date on which the template was created. Date format is mm/dd/yy. |
| Integer | The number of virtual machines created from the template. |
| Integer | Defined memory. |
| String | The description of the template. |
| String | The status of the template. |
| String | The cluster associated with the template. |
| String | The data center associated with the template. |
| String | The quota associated with the template. |
| List | Sorts the returned results by one of the resource properties. |
| Integer | The page number of results to display. |
Example
Template: Events.severity >= normal and Vms.uptime > 0
This example returns a list of templates where events of normal or greater severity have occurred on virtual machines derived from the template, and the virtual machines are still running.
1.3.18. Searching for Users
The following table describes all search options for users.
Property (of resource or resource-type) | Type | Description (Reference) |
---|---|---|
| Depends on property type | The property of the virtual machines associated with the user. |
| Depends on property type | The property of the hosts associated with the user. |
| Depends on property type | The property of the templates associated with the user. |
| Depends on property type | The property of the events associated with the user. |
| String | The name of the user. |
| String | The last name of the user. |
| String | The unique name of the user. |
| String | The department to which the user belongs. |
| String | The group to which the user belongs. |
| String | The title of the user. |
| String | The status of the user. |
| String | The role of the user. |
| String | The tag to which the user belongs. |
| String | The pool to which the user belongs. |
| List | Sorts the returned results by one of the resource properties. |
| Integer | The page number of results to display. |
Example
Users: Events.severity > normal and Vms.status = up or Vms.status = pause
This example returns a list of users where events of greater than normal severity have occurred on their virtual machines AND the virtual machines are still running; or the users' virtual machines are paused.
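As the description indicates, the query groups as (severity > normal AND status = up) OR status = pause. A small sketch of that grouping, with illustrative records and an assumed severity ordering:

```python
# Sketch of the boolean grouping in:
#   Events.severity > normal and Vms.status = up or Vms.status = pause
# The records and the numeric severity ranks are illustrative assumptions.
SEVERITY = {"normal": 0, "warning": 1, "error": 2}

users = [
    {"name": "alice", "event_severity": "error", "vm_status": "up"},
    {"name": "bob", "event_severity": "normal", "vm_status": "up"},
    {"name": "carol", "event_severity": "normal", "vm_status": "pause"},
]

matches = [u["name"] for u in users
           if (SEVERITY[u["event_severity"]] > SEVERITY["normal"]
               and u["vm_status"] == "up")
           or u["vm_status"] == "pause"]
print(matches)  # ['alice', 'carol']
```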
1.3.19. Searching for Events
The following table describes all search options you can use to search for events. Auto-completion is offered for many options as appropriate.
Property (of resource or resource-type) | Type | Description (Reference) |
---|---|---|
| Depends on property type | The property of the virtual machines associated with the event. |
| Depends on property type | The property of the hosts associated with the event. |
| Depends on property type | The property of the templates associated with the event. |
| Depends on property type | The property of the users associated with the event. |
| Depends on property type | The property of the clusters associated with the event. |
| Depends on property type | The property of the volumes associated with the event. |
| List | Type of the event. |
| List | The severity of the event: Warning/Error/Normal. |
| String | Description of the event type. |
| List | Day the event occurred. |
| String | The user name associated with the event. |
| String | The host associated with the event. |
| String | The virtual machine associated with the event. |
| String | The template associated with the event. |
| String | The storage associated with the event. |
| String | The data center associated with the event. |
| String | The volume associated with the event. |
| Integer | The identification number of the event. |
| List | Sorts the returned results by one of the resource properties. |
| Integer | The page number of results to display. |
Example
Events: Vms.name = testdesktop and Hosts.name = gonzo.example.com
This example returns a list of events, where the event occurred on the virtual machine named testdesktop while it was running on the host gonzo.example.com.
1.4. Bookmarks
1.4.1. Saving a Query String as a Bookmark
A bookmark can be used to remember a search query, and can be shared with other users.
Procedure
- Enter the desired search query in the search bar and perform the search.
- Click the star-shaped Bookmark button to the right of the search bar. This opens the New Bookmark window.
- Enter the Name of the bookmark.
- Edit the Search string field, if required.
- Click .
Click the Bookmarks icon ( ) in the header bar to find and select the bookmark.
1.4.2. Editing a Bookmark
You can modify the name and search string of a bookmark.
Procedure
- Click the Bookmarks icon ( ) in the header bar.
- Select a bookmark and click Edit.
- Change the Name and Search string fields as necessary.
- Click .
1.4.3. Deleting a Bookmark
When a bookmark is no longer needed, remove it.
Procedure
- Click the Bookmarks icon ( ) in the header bar.
- Select a bookmark and click Remove.
- Click .
1.5. Tags
1.5.1. Using Tags to Customize Interactions with Red Hat Virtualization
After your Red Hat Virtualization platform is set up and configured to your requirements, you can customize the way you work with it using tags. Tags allow system resources to be arranged into groups or categories. This is useful when many objects exist in the virtualization environment and the administrator wants to concentrate on a specific set of them.
This section describes how to create and edit tags, assign them to hosts or virtual machines, and search using tags as criteria. Tags can be arranged in a hierarchy that matches the structure of your enterprise.
To create, modify, and remove Administration Portal tags, click the Tags icon ( ) in the header bar.
1.5.2. Creating a Tag
Create tags so you can filter search results using tags.
Procedure
- Click the Tags icon ( ) in the header bar.
- Click Add to create a new tag, or select a tag and click New to create a descendant tag.
- Enter the Name and Description of the new tag.
- Click .
1.5.3. Modifying a Tag
You can edit the name and description of a tag.
Procedure
- Click the Tags icon ( ) in the header bar.
- Select the tag you want to modify and click Edit.
- Change the Name and Description fields as necessary.
- Click .
1.5.4. Deleting a Tag
When a tag is no longer needed, remove it.
Procedure
- Click the Tags icon ( ) in the header bar.
- Select the tag you want to delete and click Remove. A message warns you that removing the tag will also remove all descendants of the tag.
- Click .
You have removed the tag and all its descendants. The tag is also removed from all the objects that it was attached to.
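The cascading removal described above can be modeled with a minimal in-memory sketch. The tag names, the parent-pointer representation, and the assignment map are all illustrative assumptions:

```python
# Sketch: removing a tag also removes its descendants and detaches
# the removed tags from every object they were attached to.
tags = {"web": None, "web-frontend": "web", "web-backend": "web", "db": None}
assignments = {"host1": {"web-frontend"}, "host2": {"db", "web"}}

def remove_tag(name):
    # Collect the tag plus all of its descendants.
    doomed = {name}
    changed = True
    while changed:
        changed = False
        for tag, parent in tags.items():
            if parent in doomed and tag not in doomed:
                doomed.add(tag)
                changed = True
    for tag in doomed:
        del tags[tag]
    # Detach the removed tags from every object.
    for attached in assignments.values():
        attached -= doomed

remove_tag("web")
print(sorted(tags), assignments)  # ['db'] {'host1': set(), 'host2': {'db'}}
```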
1.5.5. Adding and Removing Tags to and from Objects
You can assign tags to and remove tags from hosts, virtual machines, and users.
Procedure
- Select the object(s) you want to tag or untag.
- Click More Actions ( ), then click Assign Tags.
- Select the check box to assign a tag to the object, or clear the check box to detach the tag from the object.
- Click .
The specified tag is now added or removed as a custom property of the selected object(s).
1.5.6. Searching for Objects Using Tags
Enter a search query using tag as the property and the desired value or set of values as criteria for the search. The objects tagged with the specified criteria are listed in the results list.
If you search for objects using tag as the property and the inequality operator (!=), for example, Host: Vms.tag!=server1, the results list does not include untagged objects.
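The caveat about untagged objects can be illustrated with a small sketch. The host records are illustrative placeholders: an object with no tags has nothing for the inequality to match against, so it never appears in the results:

```python
# Sketch: why "tag != server1" does not return untagged objects.
hosts = [
    {"name": "hostA", "tags": {"server1"}},
    {"name": "hostB", "tags": {"server2"}},
    {"name": "hostC", "tags": set()},  # untagged
]

# Only hosts that carry at least one tag other than server1 match;
# hostC is excluded even though it is not tagged server1.
matches = [h["name"] for h in hosts
           if h["tags"] and "server1" not in h["tags"]]
print(matches)  # ['hostB']
```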
1.5.7. Customizing Hosts with Tags
You can use tags to store information about your hosts. You can then search for hosts based on tags. For more information on searches, see Searches.
Procedure
- Click → and select a host.
- Click More Actions ( ), then click Assign Tags.
- Select the check boxes of applicable tags.
- Click .
You have added extra, searchable information about your host as tags.
Chapter 2. Administering the Resources
2.1. Quality of Service
Red Hat Virtualization allows you to define quality of service entries that provide fine-grained control over the level of input and output, processing, and networking capabilities that resources in your environment can access. Quality of service entries are defined at the data center level and are assigned to profiles created under clusters and storage domains. These profiles are then assigned to individual resources in the clusters and storage domains where the profiles were created.
2.1.1. Storage Quality of Service
Storage quality of service defines the maximum level of throughput and the maximum level of input and output operations for a virtual disk in a storage domain. Assigning storage quality of service to a virtual disk allows you to fine tune the performance of storage domains and prevent the storage operations associated with one virtual disk from affecting the storage capabilities available to other virtual disks hosted in the same storage domain.
2.1.1.1. Creating a Storage Quality of Service Entry
Procedure
- Click → .
- Click a data center’s name. This opens the details view.
- Click the QoS tab.
- Under Storage, click New.
- Enter a QoS Name and a Description for the quality of service entry.
Specify the Throughput quality of service by clicking one of the radio buttons:
- None
- Total - Enter the maximum permitted total throughput in the MB/s field.
- Read/Write - Enter the maximum permitted throughput for read operations in the left MB/s field, and the maximum permitted throughput for write operations in the right MB/s field.
Specify the input and output (IOps) quality of service by clicking one of the radio buttons:
- None
- Total - Enter the maximum permitted number of input and output operations per second in the IOps field.
- Read/Write - Enter the maximum permitted number of input operations per second in the left IOps field, and the maximum permitted number of output operations per second in the right IOps field.
- Click .
You have created a storage quality of service entry, and can create disk profiles based on that entry in data storage domains that belong to the data center.
2.1.1.2. Removing a Storage Quality of Service Entry
Remove an existing storage quality of service entry.
Procedure
- Click → .
- Click a data center’s name. This opens the details view.
- Click the QoS tab.
- Under Storage, select a storage quality of service entry and click Remove.
- Click .
If any disk profiles were based on that entry, the storage quality of service entry for those profiles is automatically set to [unlimited].
2.1.2. Virtual Machine Network Quality of Service
Virtual machine network quality of service is a feature that allows you to create profiles for limiting both the inbound and outbound traffic of individual virtual network interface controllers. With this feature, you can limit bandwidth in a number of layers, controlling the consumption of network resources.
2.1.2.1. Creating a Virtual Machine Network Quality of Service Entry
Create a virtual machine network quality of service entry to regulate network traffic when applied to a virtual network interface controller (vNIC) profile, also known as a virtual machine network interface profile.
Procedure
- Click → .
- Click a data center’s name. This opens the details view.
- Click the QoS tab.
- Under VM Network, click New.
- Enter a Name for the virtual machine network quality of service entry.
- Enter the limits for the Inbound and Outbound network traffic.
- Click .
You have created a virtual machine network quality of service entry that can be used in a virtual network interface controller.
2.1.2.2. Settings in the New Virtual Machine Network QoS and Edit Virtual Machine Network QoS Windows Explained
Virtual machine network quality of service settings allow you to configure bandwidth limits for both inbound and outbound traffic on three distinct levels.
Field Name | Description |
---|---|
Data Center | The data center to which the virtual machine network QoS policy is to be added. This field is configured automatically according to the selected data center. |
Name | A name to represent the virtual machine network QoS policy within the Manager. |
Inbound | The settings to be applied to inbound traffic. Select or clear the Inbound check box to enable or disable these settings.
|
Outbound | The settings to be applied to outbound traffic. Select or clear the Outbound check box to enable or disable these settings.
|
To change the maximum value allowed by the Average, Peak, or Burst fields, use the engine-config command to change the value of the MaxAverageNetworkQoSValue, MaxPeakNetworkQoSValue, or MaxBurstNetworkQoSValue configuration keys. You must restart the ovirt-engine service for any changes to take effect. For example:
# engine-config -s MaxAverageNetworkQoSValue=2048
# systemctl restart ovirt-engine
2.1.2.3. Removing a Virtual Machine Network Quality of Service Entry
Remove an existing virtual machine network quality of service entry.
Procedure
- Click → .
- Click a data center’s name. This opens the details view.
- Click the QoS tab.
- Under VM Network, select a virtual machine network quality of service entry and click Remove.
- Click .
2.1.3. Host Network Quality of Service
Host network quality of service configures the networks on a host to enable the control of network traffic through the physical interfaces. Host network quality of service allows for the fine tuning of network performance by controlling the consumption of network resources on the same physical network interface controller. This helps to prevent situations where one network causes other networks attached to the same physical network interface controller to no longer function due to heavy traffic. By configuring host network quality of service, these networks can now function on the same physical network interface controller without congestion issues.
2.1.3.1. Creating a Host Network Quality of Service Entry
Create a host network quality of service entry.
Procedure
- Click → .
- Click a data center’s name. This opens the details view.
- Click the QoS tab.
- Under Host Network, click New.
- Enter a QoS Name and a Description for the quality of service entry.
- Enter the desired values for Weighted Share, Rate Limit [Mbps], and Committed Rate [Mbps].
- Click .
2.1.3.2. Settings in the New Host Network Quality of Service and Edit Host Network Quality of Service Windows Explained
Host network quality of service settings allow you to configure bandwidth limits for outbound traffic.
Field Name | Description |
---|---|
Data Center | The data center to which the host network QoS policy is to be added. This field is configured automatically according to the selected data center. |
QoS Name | A name to represent the host network QoS policy within the Manager. |
Description | A description of the host network QoS policy. |
Outbound | The settings to be applied to outbound traffic.
|
To change the maximum value allowed by the Rate Limit [Mbps] or Committed Rate [Mbps] fields, use the engine-config command to change the value of the MaxAverageNetworkQoSValue configuration key. You must restart the ovirt-engine service for the change to take effect. For example:
# engine-config -s MaxAverageNetworkQoSValue=2048
# systemctl restart ovirt-engine
2.1.3.3. Removing a Host Network Quality of Service Entry
Remove an existing network quality of service entry.
Procedure
- Click → .
- Click a data center’s name. This opens the details view.
- Click the QoS tab.
- Under Host Network, select a host network quality of service entry and click Remove.
- Click when prompted.
2.1.4. CPU Quality of Service
CPU quality of service defines the maximum amount of processing capability a virtual machine can access on the host on which it runs, expressed as a percent of the total processing capability available to that host. Assigning CPU quality of service to a virtual machine allows you to prevent the workload on one virtual machine in a cluster from affecting the processing resources available to other virtual machines in that cluster.
2.1.4.1. Creating a CPU Quality of Service Entry
Create a CPU quality of service entry.
Procedure
- Click → .
- Click a data center’s name. This opens the details view.
- Click the QoS tab.
- Under CPU, click New.
- Enter a QoS Name and a Description for the quality of service entry.
- Enter the maximum processing capability the quality of service entry permits in the Limit (%) field. Do not include the % symbol.
- Click .
You have created a CPU quality of service entry, and can create CPU profiles based on that entry in clusters that belong to the data center.
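The Limit (%) value caps a virtual machine at a fraction of the host's total processing capability. As a rough illustration only — the formula below is an assumption for clarity, not the Manager's internal implementation:

```python
# Sketch: a CPU QoS limit expressed as a percentage of total host
# processing capability. Illustrative assumption, not the exact
# mechanism Red Hat Virtualization uses internally.
def effective_mhz(host_cores, core_mhz, limit_percent):
    """Cap a VM at limit_percent of the host's total processing power."""
    total = host_cores * core_mhz
    return total * limit_percent / 100

# A 16-core, 2400 MHz host with a 25% CPU QoS entry:
print(effective_mhz(16, 2400, 25))  # 9600.0
```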
2.1.4.2. Removing a CPU Quality of Service Entry
Remove an existing CPU quality of service entry.
Procedure
- Click → .
- Click a data center’s name. This opens the details view.
- Click the QoS tab.
- Under CPU, select a CPU quality of service entry and click Remove.
- Click .
If any CPU profiles were based on that entry, the CPU quality of service entry for those profiles is automatically set to [unlimited].
2.2. Data Centers
2.2.1. Introduction to Data Centers
A data center is a logical entity that defines the set of resources used in a specific environment. A data center is considered a container resource, in that it is comprised of logical resources, in the form of clusters and hosts; network resources, in the form of logical networks and physical NICs; and storage resources, in the form of storage domains.
A data center can contain multiple clusters, which can contain multiple hosts; it can have multiple storage domains associated to it; and it can support multiple virtual machines on each of its hosts. A Red Hat Virtualization environment can contain multiple data centers; the data center infrastructure allows you to keep these centers separate.
All data centers are managed from the single Administration Portal.
Figure 2.1. Data Centers
Red Hat Virtualization creates a default data center during installation. You can configure the default data center, or set up new appropriately named data centers.
2.2.2. The Storage Pool Manager
The Storage Pool Manager (SPM) is a role given to one of the hosts in the data center enabling it to manage the storage domains of the data center. The SPM entity can be run on any host in the data center; the Red Hat Virtualization Manager grants the role to one of the hosts. The SPM does not preclude the host from its standard operation; a host running as SPM can still host virtual resources.
The SPM entity controls access to storage by coordinating the metadata across the storage domains. This includes creating, deleting, and manipulating virtual disks (images), snapshots, and templates, and allocating storage for sparse block devices (on SAN). This is an exclusive responsibility: only one host can be the SPM in the data center at one time to ensure metadata integrity.
The Red Hat Virtualization Manager ensures that the SPM is always available. The Manager moves the SPM role to a different host if the SPM host encounters problems accessing the storage. When the SPM starts, it ensures that it is the only host granted the role; therefore it will acquire a storage-centric lease. This process can take some time.
2.2.3. SPM Priority
The SPM role uses some of a host’s available resources. The SPM priority setting of a host alters the likelihood of the host being assigned the SPM role: a host with high SPM priority will be assigned the SPM role before a host with low SPM priority. Critical virtual machines on hosts with low SPM priority will not have to contend with SPM operations for host resources.
You can change a host’s SPM priority in the SPM tab in the Edit Host window.
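The effect of the priority setting can be sketched as follows. The host records and the simple "highest priority among eligible hosts" selection are illustrative simplifications, not the Manager's actual election algorithm:

```python
# Sketch: SPM priority influences which host receives the SPM role.
# Hosts running critical VMs get a low priority so they are unlikely
# to be chosen; hosts in maintenance are not eligible at all.
hosts = [
    {"name": "critical-vms-host", "spm_priority": 2, "status": "up"},
    {"name": "utility-host", "spm_priority": 8, "status": "up"},
    {"name": "maintenance-host", "spm_priority": 10, "status": "maintenance"},
]

candidates = [h for h in hosts if h["status"] == "up"]
spm = max(candidates, key=lambda h: h["spm_priority"])
print(spm["name"])  # utility-host
```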
2.2.4. Data Center Tasks
2.2.4.1. Creating a New Data Center
This procedure creates a data center in your virtualization environment. The data center requires a functioning cluster, host, and storage domain to operate.
After you set the Compatibility Version, you cannot lower the version number. Version regression is not supported.
You can specify a MAC pool range for a cluster. Setting a MAC pool range for a data center is no longer supported.
Procedure
- Click → .
- Click New.
- Enter the Name and Description of the data center.
- Select the Storage Type, Compatibility Version, and Quota Mode of the data center from the drop-down menus.
- Click to create the data center and open the Data Center - Guide Me window.
- The Guide Me window lists the entities that need to be configured for the data center. Configure these entities or postpone configuration by clicking the Configure Later button. Configuration can be resumed by selecting the data center and clicking More Actions ( ), then clicking Guide Me.
The new data center will remain Uninitialized until a cluster, host, and storage domain are configured for it; use Guide Me to configure these entities.
2.2.4.2. Explanation of Settings in the New Data Center and Edit Data Center Windows
The table below describes the settings of a data center as displayed in the New Data Center and Edit Data Center windows. Invalid entries are outlined in orange when you click , preventing the changes from being accepted. In addition, field prompts indicate the expected values or range of values.
Field | Description/Action |
---|---|
Name | The name of the data center. This text field has a 40-character limit and must be a unique name with any combination of uppercase and lowercase letters, numbers, hyphens, and underscores. |
Description | The description of the data center. This field is recommended but not mandatory. |
Storage Type | Choose Shared or Local storage type. Different types of storage domains (iSCSI, NFS, FC, POSIX, and Gluster) can be added to the same data center. Local and shared domains, however, cannot be mixed. You can change the storage type after the data center is initialized. See Changing the Data Center Storage Type. |
Compatibility Version | The version of Red Hat Virtualization. After upgrading the Red Hat Virtualization Manager, the hosts, clusters and data centers may still be in the earlier version. Ensure that you have upgraded all the hosts, then the clusters, before you upgrade the Compatibility Level of the data center. |
Quota Mode | Quota is a resource limitation tool provided with Red Hat Virtualization. Choose one of:
|
Comment | Optionally add a plain text comment about the data center. |
2.2.4.3. Re-Initializing a Data Center: Recovery Procedure
This recovery procedure replaces the master data domain of your data center with a new master data domain. You must re-initialize your master data domain if its data is corrupted. Re-initializing a data center allows you to restore all other resources associated with the data center, including clusters, hosts, and non-problematic storage domains.
You can import any backup or exported virtual machines or templates into your new master data domain.
Procedure
- Click → and select the data center.
- Ensure that any storage domains attached to the data center are in maintenance mode.
- Click More Actions ( ), then click Re-Initialize Data Center.
- The Data Center Re-Initialize window lists all available (detached; in maintenance mode) storage domains. Click the radio button for the storage domain you are adding to the data center.
- Select the Approve operation check box.
- Click .
The storage domain is attached to the data center as the master data domain and activated. You can now import any backup or exported virtual machines or templates into your new master data domain.
2.2.4.4. Removing a Data Center
An active host is required to remove a data center. Removing a data center will not remove the associated resources.
Procedure
- Ensure the storage domains attached to the data center are in maintenance mode.
- Click → and select the data center to remove.
- Click Remove.
- Click .
2.2.4.5. Force Removing a Data Center
A data center becomes Non Responsive if the attached storage domain is corrupt or if the host becomes Non Responsive. You cannot Remove the data center under either circumstance.
Force Remove does not require an active host. It also permanently removes the attached storage domain.
It may be necessary to Destroy a corrupted storage domain before you can Force Remove the data center.
Procedure
- Click → and select the data center to remove.
- Click More Actions ( ), then click Force Remove.
- Select the Approve operation check box.
- Click .
The data center and attached storage domain are permanently removed from the Red Hat Virtualization environment.
2.2.4.6. Changing the Data Center Storage Type
You can change the storage type of the data center after it has been initialized. This is useful for data domains that are used to move virtual machines or templates around.
Limitations
- Shared to Local - Only for a data center that does not contain more than one host and more than one cluster, since a local data center does not support more than one host or cluster.
- Local to Shared - For a data center that does not contain a local storage domain.
Procedure
- Click → and select the data center to change.
- Click Edit.
- Change the Storage Type to the desired value.
- Click OK.
2.2.4.7. Changing the Data Center Compatibility Version
Red Hat Virtualization data centers have a compatibility version. The compatibility version indicates the version of Red Hat Virtualization with which the data center is intended to be compatible. All clusters in the data center must support the desired compatibility level.
Prerequisites
- To change the data center compatibility level, you must first update the compatibility version of all clusters and virtual machines in the data center.
Procedure
- In the Administration Portal, click Compute → Data Centers.
- Select the data center to change and click Edit.
- Change the Compatibility Version to the desired value.
- Click OK. The Change Data Center Compatibility Version confirmation dialog opens.
- Click OK to confirm.
2.2.5. Data Centers and Storage Domains
2.2.5.1. Attaching an Existing Data Domain to a Data Center
Data domains that are Unattached can be attached to a data center. Shared storage domains of multiple types (iSCSI, NFS, FC, POSIX, and Gluster) can be added to the same data center.
Procedure
- Click Compute → Data Centers.
- Click a data center’s name. This opens the details view.
- Click the Storage tab to list the storage domains already attached to the data center.
- Click Attach Data.
- Select the check box for the data domain to attach to the data center. You can select multiple check boxes to attach multiple data domains.
- Click OK.
The data domain is attached to the data center and is automatically activated.
2.2.5.2. Attaching an Existing ISO domain to a Data Center
An ISO domain that is Unattached can be attached to a data center. The ISO domain must be of the same Storage Type as the data center.
Only one ISO domain can be attached to a data center.
Procedure
- Click Compute → Data Centers.
- Click a data center’s name. This opens the details view.
- Click the Storage tab to list the storage domains already attached to the data center.
- Click Attach ISO.
- Click the radio button for the appropriate ISO domain.
- Click OK.
The ISO domain is attached to the data center and is automatically activated.
2.2.5.3. Attaching an Existing Export Domain to a Data Center
The export storage domain is deprecated. Storage data domains can be unattached from a data center and imported to another data center in the same environment, or in a different environment. Virtual machines, floating virtual disks, and templates can then be uploaded from the imported storage domain to the attached data center. See Importing Existing Storage Domains for information on importing storage domains.
An export domain that is Unattached can be attached to a data center. Only one export domain can be attached to a data center.
Procedure
- Click Compute → Data Centers.
- Click a data center’s name. This opens the details view.
- Click the Storage tab to list the storage domains already attached to the data center.
- Click Attach Export.
- Click the radio button for the appropriate export domain.
- Click OK.
The export domain is attached to the data center and is automatically activated.
2.2.5.4. Detaching a Storage Domain from a Data Center
Detaching a storage domain from a data center stops the data center from associating with that storage domain. The storage domain is not removed from the Red Hat Virtualization environment; it can be attached to another data center.
Data, such as virtual machines and templates, remains attached to the storage domain.
Although it is possible to detach the last master storage domain, this is not recommended.
If the master storage domain is detached, it must be reinitialized.
If the storage domain is reinitialized, all your data will be lost, and the storage domain might not find your disks again.
Procedure
- Click Compute → Data Centers.
- Click a data center’s name. This opens the details view.
- Click the Storage tab to list the storage domains attached to the data center.
- Select the storage domain to detach. If the storage domain is Active, click Maintenance and click OK to initiate maintenance mode.
- Click Detach.
- Click OK.
It can take up to several minutes for the storage domain to disappear from the details view.
2.3. Clusters
2.3.1. Introduction to Clusters
A cluster is a logical grouping of hosts that share the same storage domains and have the same type of CPU (either Intel or AMD). If the hosts have different generations of CPU models, they use only the features present in all models.
Each cluster in the system must belong to a data center, and each host in the system must belong to a cluster. Virtual machines are dynamically allocated to any host in a cluster and can be migrated between them, according to policies defined on the cluster and settings on the virtual machines. The cluster is the highest level at which power and load-sharing policies can be defined.
The number of hosts and number of virtual machines that belong to a cluster are displayed in the results list under Host Count and VM Count, respectively.
Clusters run virtual machines or Red Hat Gluster Storage Servers. These two purposes are mutually exclusive: A single cluster cannot support virtualization and storage hosts together.
Red Hat Virtualization creates a default cluster in the default data center during installation.
Figure 2.2. Cluster
2.3.2. Cluster Tasks
Some cluster options do not apply to Gluster clusters. For more information about using Red Hat Gluster Storage with Red Hat Virtualization, see Configuring Red Hat Virtualization with Red Hat Gluster Storage.
2.3.2.1. Creating a New Cluster
A data center can contain multiple clusters, and a cluster can contain multiple hosts. All hosts in a cluster must have the same CPU architecture. To optimize your CPU types, create your hosts before you create your cluster. After creating the cluster, you can configure the hosts using the Guide Me button.
Procedure
- Click Compute → Clusters.
- Click New.
- Select the Data Center the cluster will belong to from the drop-down list.
- Enter the Name and Description of the cluster.
- Select a network from the Management Network drop-down list to assign the management network role.
- Select the CPU Architecture.
- For CPU Type, select the oldest CPU processor family among the hosts that will be part of this cluster. The CPU types are listed in order from oldest to newest.
Important: A host whose CPU processor family is older than the one you specify with CPU Type cannot be part of this cluster. For details, see Which CPU family should a RHEV3 or RHV4 cluster be set to?.
- Select the FIPS Mode of the cluster from the drop-down list.
- Select the Compatibility Version of the cluster from the drop-down list.
- Select the Switch Type from the drop-down list.
- Select the Firewall Type for hosts in the cluster, either Firewalld (default) or iptables.
Note: iptables is only supported on Red Hat Enterprise Linux 7 hosts, in clusters with compatibility version 4.2 or 4.3. You can only add Red Hat Enterprise Linux 8 hosts to clusters with firewall type firewalld.
- Select either the Enable Virt Service or Enable Gluster Service check box to define whether the cluster will be populated with virtual machine hosts or with Gluster-enabled nodes.
- Optionally select the Enable to set VM maintenance reason check box to enable an optional reason field when a virtual machine is shut down from the Manager, allowing the administrator to provide an explanation for the maintenance.
- Optionally select the Enable to set Host maintenance reason check box to enable an optional reason field when a host is placed into maintenance mode from the Manager, allowing the administrator to provide an explanation for the maintenance.
- Optionally select the /dev/hwrng source (external hardware device) check box to specify the random number generator device that all hosts in the cluster will use. The /dev/urandom source (Linux-provided device) is enabled by default.
- Click the Optimization tab to select the memory page sharing threshold for the cluster, and optionally enable CPU thread handling and memory ballooning on the hosts in the cluster.
- Click the Migration Policy tab to define the virtual machine migration policy for the cluster.
- Click the Scheduling Policy tab to optionally configure a scheduling policy, configure scheduler optimization settings, enable trusted service for hosts in the cluster, enable HA Reservation, and select a serial number policy.
- Click the Console tab to optionally override the global SPICE proxy, if any, and specify the address of a SPICE proxy for hosts in the cluster.
- Click the Fencing policy tab to enable or disable fencing in the cluster, and select fencing options.
- Click the MAC Address Pool tab to specify a MAC address pool other than the default pool for the cluster. For more options on creating, editing, or removing MAC address pools, see MAC Address Pools.
- Click OK to create the cluster and open the Cluster - Guide Me window.
- The Guide Me window lists the entities that need to be configured for the cluster. Configure these entities or postpone configuration by clicking the Configure Later button. Configuration can be resumed by selecting the cluster and clicking More Actions, then clicking Guide Me.
2.3.2.2. General Cluster Settings Explained
The table below describes the settings for the General tab in the New Cluster and Edit Cluster windows. Invalid entries are outlined in orange when you click OK, preventing the changes from being accepted. In addition, field prompts indicate the expected values or range of values.
Field | Description/Action |
---|---|
Data Center | The data center that will contain the cluster. The data center must be created before adding a cluster. |
Name | The name of the cluster. This text field has a 40-character limit and must be a unique name with any combination of uppercase and lowercase letters, numbers, hyphens, and underscores. |
Description / Comment | The description of the cluster or additional notes. These fields are recommended but not mandatory. |
Management Network | The logical network that will be assigned the management network role. The default is ovirtmgmt. This network will also be used for migrating virtual machines if the migration network is not properly attached to the source or the destination hosts. On existing clusters, the management network can only be changed using the Manage Networks button in the Logical Networks tab in the details view. |
CPU Architecture | The CPU architecture of the cluster. All hosts in a cluster must run the architecture you specify. Different CPU types are available depending on which CPU architecture is selected.
|
CPU Type | The oldest CPU family in the cluster. For a list of CPU types, see CPU Requirements in the Planning and Prerequisites Guide. You cannot change this after creating the cluster without significant disruption. Set CPU type to the oldest CPU model in the cluster. Only features present in all models can be used. For both Intel and AMD CPU types, the listed CPU models are in logical order from the oldest to the newest. |
Chipset/Firmware Type | This setting is only available if the CPU Architecture of the cluster is set to x86_64. This setting specifies the chipset and firmware type. Options are:
For more information, see UEFI and the Q35 chipset in the Administration Guide. |
Change Existing VMs/Templates from I440FX to Q35 Chipset with Bios | Select this check box to change existing workloads when the cluster’s chipset changes from I440FX to Q35. |
FIPS Mode | The FIPS mode used by the cluster. All hosts in the cluster must run the FIPS mode you specify or they will become non-operational.
|
Compatibility Version | The version of Red Hat Virtualization. You will not be able to select a version earlier than the version specified for the data center. |
Switch Type | The type of switch used by the cluster. Linux Bridge is the standard Red Hat Virtualization switch. OVS provides support for Open vSwitch networking features. |
Firewall Type | Specifies the firewall type for hosts in the cluster, either firewalld (default) or iptables. iptables is only supported on Red Hat Enterprise Linux 7 hosts, in clusters with compatibility version 4.2 or 4.3. You can only add Red Hat Enterprise Linux 8 hosts to clusters with firewall type firewalld. If you change an existing cluster’s firewall type, you must reinstall all hosts in the cluster to apply the change. |
Default Network Provider | Specifies the default external network provider that the cluster will use. If you select Open Virtual Network (OVN), the hosts added to the cluster are automatically configured to communicate with the OVN provider. If you change the default network provider, you must reinstall all hosts in the cluster to apply the change. |
Maximum Log Memory Threshold |
Specifies the logging threshold for maximum memory consumption as a percentage or as an absolute value in MB. A message is logged if a host’s memory usage exceeds the percentage value or if a host’s available memory falls below the absolute value in MB. The default is |
Enable Virt Service | If this check box is selected, hosts in this cluster will be used to run virtual machines. |
Enable Gluster Service | If this check box is selected, hosts in this cluster will be used as Red Hat Gluster Storage Server nodes, and not for running virtual machines. |
Import existing gluster configuration | This check box is only available if the Enable Gluster Service radio button is selected. This option allows you to import an existing Gluster-enabled cluster and all its attached hosts to Red Hat Virtualization Manager. The following options are required for each host in the cluster that is being imported:
|
Additional Random Number Generator source | If the check box is selected, all hosts in the cluster have the additional random number generator device available. This enables passthrough of entropy from the random number generator device to virtual machines. |
Gluster Tuned Profile | This check box is only available if the Enable Gluster Service check box is selected. This option specifies the virtual-host tuning profile to enable more aggressive writeback of dirty memory pages, which benefits the host performance. |
2.3.2.3. Optimization Settings Explained
Memory Considerations
Memory page sharing allows virtual machines to use up to 200% of their allocated memory by utilizing unused memory in other virtual machines. This process is based on the assumption that the virtual machines in your Red Hat Virtualization environment will not all be running at full capacity at the same time, allowing unused memory to be temporarily allocated to a particular virtual machine.
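The 200% figure above can be expressed as a simple upper bound. This is an illustrative sketch, not a Manager API; the function name and parameters are hypothetical:

```python
# Hypothetical sketch: with memory page sharing, a VM can use up to
# 200% of its allocated memory by borrowing unused memory from other VMs.
def max_usable_memory_mb(allocated_mb: int, sharing_enabled: bool = True) -> int:
    """Upper bound on memory a single VM may consume."""
    return allocated_mb * 2 if sharing_enabled else allocated_mb

print(max_usable_memory_mb(4096))         # 8192
print(max_usable_memory_mb(4096, False))  # 4096
```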
CPU Considerations
For non-CPU-intensive workloads, you can run virtual machines with a total number of processor cores greater than the number of cores in the host (the number of processor cores for a single virtual machine must not exceed the number of cores in the host). The following benefits can be achieved:
- You can run a greater number of virtual machines, which reduces hardware requirements.
- You can configure virtual machines with CPU topologies that are otherwise not possible, such as when the number of virtual cores is between the number of host cores and the number of host threads.
- For best performance, and especially for CPU-intensive workloads, you should use the same topology in the virtual machine as in the host, so the host and the virtual machine expect the same cache usage. When the host has hyperthreading enabled, QEMU treats the host’s hyperthreads as cores, so the virtual machine is not aware that it is running on a single core with multiple threads. This behavior might impact the performance of a virtual machine, because a virtual core that actually corresponds to a hyperthread in the host core might share a single cache with another hyperthread in the same host core, while the virtual machine treats it as a separate core.
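The vCPU limit described above can be sketched as follows. This is a hedged illustration of the Count Threads As Cores rule, not product code; the function name and topology parameters are assumptions:

```python
# Sketch of the "Count Threads As Cores" rule: when enabled, each hardware
# thread is exposed as a schedulable core, raising the per-VM vCPU ceiling.
def max_vcpus_per_vm(host_cores: int, threads_per_core: int,
                     count_threads_as_cores: bool) -> int:
    """Largest vCPU count a single VM may be given on this host."""
    if count_threads_as_cores:
        return host_cores * threads_per_core
    return host_cores

# A 24-core host with 2 threads per core (48 threads total):
print(max_vcpus_per_vm(24, 2, True))   # 48
print(max_vcpus_per_vm(24, 2, False))  # 24
```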
The table below describes the settings for the Optimization tab in the New Cluster and Edit Cluster windows.
Field | Description/Action |
---|---|
Memory Optimization |
|
CPU Threads | Selecting the Count Threads As Cores check box enables hosts to run virtual machines with a total number of processor cores greater than the number of cores in the host (the number of processor cores for a single virtual machine must not exceed the number of cores in the host). When this check box is selected, the exposed host threads are treated as cores that virtual machines can use. For example, a 24-core system with 2 threads per core (48 threads total) can run virtual machines with up to 48 cores each, and the algorithms to calculate host CPU load would compare load against twice as many potential utilized cores. |
Memory Balloon | Selecting the Enable Memory Balloon Optimization check box enables memory overcommitment on virtual machines running on the hosts in this cluster. When this check box is selected, the Memory Overcommit Manager (MoM) starts ballooning where and when possible, with a limitation of the guaranteed memory size of every virtual machine.
To have a balloon running, the virtual machine needs to have a balloon device with relevant drivers. Each virtual machine includes a balloon device unless specifically removed. Each host in this cluster receives a balloon policy update when its status changes to Up. It is important to understand that in some scenarios ballooning may collide with KSM. In such cases MoM will try to adjust the balloon size to minimize collisions. Additionally, in some scenarios ballooning may cause sub-optimal performance for a virtual machine. Administrators are advised to use ballooning optimization with caution. |
KSM control | Selecting the Enable KSM check box enables MoM to run Kernel Same-page Merging (KSM) when necessary and when it can yield a memory saving benefit that outweighs its CPU cost. |
2.3.2.4. Migration Policy Settings Explained
A migration policy defines the conditions for live migrating virtual machines in the event of host failure. These conditions include the downtime of the virtual machine during migration, network bandwidth, and how the virtual machines are prioritized.
Policy | Description |
---|---|
Cluster default (Minimal downtime) |
Overrides in |
Minimal downtime | A policy that lets virtual machines migrate in typical situations. Virtual machines should not experience any significant downtime. The migration will be aborted if the virtual machine migration does not converge after a long time (dependent on QEMU iterations, with a maximum of 500 milliseconds). The guest agent hook mechanism is enabled. |
Post-copy migration | When used, post-copy migration pauses the migrating virtual machine vCPUs on the source host, transfers only a minimum of memory pages, activates the virtual machine vCPUs on the destination host, and transfers the remaining memory pages while the virtual machine is running on the destination. The post-copy policy first tries pre-copy to verify whether convergence can occur. The migration switches to post-copy if the virtual machine migration does not converge after a long time. This significantly reduces the downtime of the migrated virtual machine, and also guarantees that the migration finishes regardless of how rapidly the memory pages of the source virtual machine change. It is optimal for migrating virtual machines in heavy continuous use, which would not be possible to migrate with standard pre-copy migration. The disadvantage of this policy is that in the post-copy phase, the virtual machine may slow down significantly as the missing parts of memory are transferred between the hosts. Warning If the network connection breaks prior to the completion of the post-copy process, the Manager pauses and then kills the running virtual machine. Do not use post-copy migration if the virtual machine availability is critical or if the migration network is unstable. |
Suspend workload if needed | A policy that lets virtual machines migrate in most situations, including virtual machines running heavy workloads. Because of this, virtual machines may experience a more significant downtime than with some of the other settings. The migration may still be aborted for extreme workloads. The guest agent hook mechanism is enabled. |
The bandwidth settings define the maximum bandwidth of both outgoing and incoming migrations per host.
Policy | Description |
---|---|
Auto | Bandwidth is copied from the Rate Limit [Mbps] setting in the data center Host Network QoS. If the rate limit has not been defined, it is computed as the minimum of the link speeds of the sending and receiving network interfaces. If the rate limit has not been set and link speeds are not available, bandwidth is determined by the local VDSM setting on the sending host. |
Hypervisor default | Bandwidth is controlled by the local VDSM setting on the sending host. |
Custom | Defined by the user (in Mbps). This value is divided by the number of concurrent migrations (default is 2, to account for incoming and outgoing migrations). Therefore, the user-defined bandwidth must be large enough to accommodate all concurrent migrations.
For example, if the |
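The Custom bandwidth division can be sketched in a few lines. The numbers below are illustrative, not defaults from the product:

```python
# Sketch of the Custom migration bandwidth policy: the user-defined value
# is split evenly across concurrent migrations (default concurrency is 2,
# covering one incoming and one outgoing migration).
def per_migration_bandwidth_mbps(custom_mbps: int,
                                 concurrent_migrations: int = 2) -> float:
    """Bandwidth each migration connection receives."""
    return custom_mbps / concurrent_migrations

# A hypothetical 600 Mbps limit with the default concurrency:
print(per_migration_bandwidth_mbps(600))  # 300.0
```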
The resilience policy defines how the virtual machines are prioritized in the migration.
Field | Description/Action |
---|---|
Migrate Virtual Machines | Migrates all virtual machines in order of their defined priority. |
Migrate only Highly Available Virtual Machines | Migrates only highly available virtual machines to prevent overloading other hosts. |
Do Not Migrate Virtual Machines | Prevents virtual machines from being migrated. |
Field | Description/Action |
---|---|
Enable Migration Encryption | Allows the virtual machine to be encrypted during migration.
|
Parallel Migrations | Allows you to specify whether and how many parallel migration connections to use.
|
Number of VM Migration Connections | This setting is only available when Custom is selected. The preferred number of custom parallel migrations, between 2 and 255. |
2.3.2.5. Scheduling Policy Settings Explained
Scheduling policies allow you to specify the usage and distribution of virtual machines between available hosts. Define the scheduling policy to enable automatic load balancing across the hosts in a cluster. Regardless of the scheduling policy, a virtual machine will not start on a host with an overloaded CPU. By default, a host’s CPU is considered overloaded if it has a load of more than 80% for 5 minutes, but these values can be changed using scheduling policies. See Scheduling Policies in the Administration Guide for more information.
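The default overload rule (more than 80% load for 5 minutes) can be sketched as a check over recent load samples. This is a hedged illustration; the sampling interval and function shape are assumptions, not the scheduler's actual implementation:

```python
# Sketch: a host's CPU counts as overloaded if every sample in the last
# `window_min` minutes exceeds `threshold` percent (defaults mirror the
# documented 80% / 5 minute rule).
def cpu_overloaded(samples_pct, threshold=80.0, window_min=5, interval_min=1):
    """samples_pct: most recent CPU-load readings, one per interval_min minutes."""
    needed = window_min // interval_min
    recent = samples_pct[-needed:]
    return len(recent) >= needed and all(s > threshold for s in recent)

print(cpu_overloaded([85, 90, 88, 92, 95]))  # True: above 80% for 5 minutes
print(cpu_overloaded([85, 90, 70, 92, 95]))  # False: load dipped below 80%
```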
Field | Description/Action |
---|---|
Select Policy | Select a policy from the drop-down list.
|
Properties | The following properties appear depending on the selected policy. Edit them if necessary:
|
Scheduler Optimization | Optimize scheduling for host weighing/ordering.
|
Enable Trusted Service |
Enable integration with an OpenAttestation server. Before this can be enabled, use the |
Enable HA Reservation | Enable the Manager to monitor cluster capacity for highly available virtual machines. The Manager ensures that appropriate capacity exists within a cluster for virtual machines designated as highly available to migrate in the event that their existing host fails unexpectedly. |
Serial Number Policy | Configure the policy for assigning serial numbers to each new virtual machine in the cluster:
|
Custom Serial Number | Specify the custom serial number to apply to new virtual machines in the cluster. |
When a host’s free memory drops below 20%, ballooning commands like mom.Controllers.Balloon - INFO Ballooning guest:half1 from 1096400 to 1991580 are logged to /var/log/vdsm/mom.log, the Memory Overcommit Manager log file.
2.3.2.6. MaxFreeMemoryForOverUtilized and MinFreeMemoryForUnderUtilized cluster scheduling policy properties
The scheduler has a background process that migrates virtual machines according to the current cluster scheduling policy and its parameters. Based on the various criteria and their relative weights in a policy, the scheduler continuously categorizes hosts as source hosts or destination hosts and migrates individual virtual machines from the former to the latter.
The following description explains how the evenly_distributed and power_saving cluster scheduling policies interact with the MaxFreeMemoryForOverUtilized and MinFreeMemoryForUnderUtilized properties. Although both policies consider CPU and memory load, CPU load is not relevant for the MaxFreeMemoryForOverUtilized and MinFreeMemoryForUnderUtilized properties.
If you define the MaxFreeMemoryForOverUtilized and MinFreeMemoryForUnderUtilized properties as part of the evenly_distributed policy:
- Hosts that have less free memory than MaxFreeMemoryForOverUtilized are overutilized and become source hosts.
- Hosts that have more free memory than MinFreeMemoryForUnderUtilized are underutilized and become destination hosts.
- If MaxFreeMemoryForOverUtilized is not defined, the scheduler does not migrate virtual machines based on the memory load. (It continues migrating virtual machines based on the policy’s other criteria, such as CPU load.)
- If MinFreeMemoryForUnderUtilized is not defined, the scheduler considers all hosts eligible to become destination hosts.
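The evenly_distributed rules above can be condensed into a small categorization sketch. Function and category names are illustrative; `None` models an undefined property:

```python
# Sketch of evenly_distributed host categorization by free memory (MB).
# None means the corresponding scheduling-policy property is not defined.
def categorize_evenly_distributed(free_mb, max_free_over, min_free_under):
    if max_free_over is not None and free_mb < max_free_over:
        return "source"       # overutilized: less free memory than the limit
    if min_free_under is None or free_mb > min_free_under:
        return "destination"  # underutilized, or no lower bound defined
    return "neutral"          # in between: neither source nor destination

print(categorize_evenly_distributed(1000, 2000, 8000))  # source
print(categorize_evenly_distributed(9000, 2000, 8000))  # destination
```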
If you define the MaxFreeMemoryForOverUtilized and MinFreeMemoryForUnderUtilized properties as part of the power_saving policy:
- Hosts that have less free memory than MaxFreeMemoryForOverUtilized are overutilized and become source hosts.
- Hosts that have more free memory than MinFreeMemoryForUnderUtilized are underutilized and become source hosts.
- Hosts that have more free memory than MaxFreeMemoryForOverUtilized are not overutilized and become destination hosts.
- Hosts that have less free memory than MinFreeMemoryForUnderUtilized are not underutilized and become destination hosts.
- The scheduler prefers migrating virtual machines to hosts that are neither overutilized nor underutilized. If there are not enough of these hosts, the scheduler can migrate virtual machines to underutilized hosts. If the underutilized hosts are not needed for this purpose, the scheduler can power them down.
- If MaxFreeMemoryForOverUtilized is not defined, no hosts are overutilized. Therefore, only underutilized hosts are source hosts, and destination hosts include all hosts in the cluster.
- If MinFreeMemoryForUnderUtilized is not defined, only overutilized hosts are source hosts, and hosts that are not overutilized are destination hosts.
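For power_saving, the key difference is that both over- and underutilized hosts become source hosts. A hedged sketch of that categorization (names illustrative, and ignoring the scheduler's preference ordering among destinations):

```python
# Sketch of power_saving host categorization by free memory (MB): hosts with
# too little OR too much free memory are sources; hosts in between are the
# preferred destinations. None models an undefined property.
def categorize_power_saving(free_mb, max_free_over, min_free_under):
    overutilized = max_free_over is not None and free_mb < max_free_over
    underutilized = min_free_under is not None and free_mb > min_free_under
    if overutilized or underutilized:
        return "source"
    return "destination"

print(categorize_power_saving(1000, 2000, 8000))  # source (overutilized)
print(categorize_power_saving(9000, 2000, 8000))  # source (underutilized)
print(categorize_power_saving(5000, 2000, 8000))  # destination
```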
To prevent overutilization of all the physical CPUs on a host, define the virtual CPU to physical CPU ratio (VCpuToPhysicalCpuRatio) with a value between 0.1 and 2.9. When this parameter is set, hosts with a lower CPU utilization are preferred when scheduling a virtual machine.
If adding a virtual machine causes the ratio to exceed the limit, both the VCpuToPhysicalCpuRatio and the CPU utilization are considered.
In a running environment, if the host VCpuToPhysicalCpuRatio exceeds 2.5, some virtual machines might be load balanced and moved to hosts with a lower VCpuToPhysicalCpuRatio.
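The ratio check can be sketched as simple arithmetic. This is illustrative only; the function names are hypothetical and the 2.5 limit below mirrors the load-balancing example above:

```python
# Sketch of the VCpuToPhysicalCpuRatio check: compute the host's vCPU:pCPU
# ratio after placing a new VM, and compare it against the configured limit.
def ratio_after_adding(total_vcpus: int, physical_cpus: int,
                       new_vm_vcpus: int) -> float:
    return (total_vcpus + new_vm_vcpus) / physical_cpus

def exceeds_limit(total_vcpus, physical_cpus, new_vm_vcpus, limit=2.5):
    return ratio_after_adding(total_vcpus, physical_cpus, new_vm_vcpus) > limit

print(exceeds_limit(40, 16, 8))  # True: (40 + 8) / 16 = 3.0 > 2.5
print(exceeds_limit(20, 16, 8))  # False: (20 + 8) / 16 = 1.75
```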
Additional resources
2.3.2.7. Cluster Console Settings Explained
The table below describes the settings for the Console tab in the New Cluster and Edit Cluster windows.
Field | Description/Action |
---|---|
Define SPICE Proxy for Cluster | Select this check box to enable overriding the SPICE proxy defined in global configuration. This feature is useful in a case where the user (who is, for example, connecting via the VM Portal) is outside of the network where the hypervisors reside. |
Overridden SPICE proxy address | The proxy by which the SPICE client connects to virtual machines. The address must be in the following format: protocol://[host]:[port] |
2.3.2.8. Fencing Policy Settings Explained
The table below describes the settings for the Fencing Policy tab in the New Cluster and Edit Cluster windows.
Field | Description/Action |
---|---|
Enable fencing | Enables fencing on the cluster. Fencing is enabled by default, but can be disabled if required; for example, if temporary network issues are occurring or expected, administrators can disable fencing until diagnostics or maintenance activities are completed. Note that if fencing is disabled, highly available virtual machines running on non-responsive hosts will not be restarted elsewhere. |
Skip fencing if host has live lease on storage | If this check box is selected, any hosts in the cluster that are Non Responsive and still connected to storage will not be fenced. |
Skip fencing on cluster connectivity issues | If this check box is selected, fencing will be temporarily disabled if the percentage of hosts in the cluster that are experiencing connectivity issues is greater than or equal to the defined Threshold. The Threshold value is selected from the drop-down list; available values are 25, 50, 75, and 100. |
Skip fencing if gluster bricks are up | This option is only available when Red Hat Gluster Storage functionality is enabled. If this check box is selected, fencing is skipped if bricks are running and can be reached from other peers. See Chapter 2. Configure High Availability using Fencing Policies and Appendix A. Fencing Policies for Red Hat Gluster Storage in Maintaining Red Hat Hyperconverged Infrastructure for more information. |
Skip fencing if gluster quorum not met | This option is only available when Red Hat Gluster Storage functionality is enabled. If this check box is selected, fencing is skipped if bricks are running and shutting down the host will cause loss of quorum. See Chapter 2. Configure High Availability using Fencing Policies and Appendix A. Fencing Policies for Red Hat Gluster Storage in Maintaining Red Hat Hyperconverged Infrastructure for more information. |
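The connectivity-based skip rule in the table above amounts to a percentage comparison. A minimal sketch, with hypothetical function and parameter names:

```python
# Sketch of "Skip fencing on cluster connectivity issues": fencing is
# temporarily disabled when the share of hosts with connectivity problems
# reaches the configured threshold (25, 50, 75, or 100 percent).
def skip_fencing(hosts_with_issues: int, total_hosts: int,
                 threshold_pct: int) -> bool:
    return (hosts_with_issues / total_hosts) * 100 >= threshold_pct

print(skip_fencing(2, 4, 50))  # True: 50% of hosts affected meets the 50% threshold
print(skip_fencing(1, 4, 50))  # False: only 25% of hosts affected
```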
2.3.2.9. Setting Load and Power Management Policies for Hosts in a Cluster
The evenly_distributed and power_saving scheduling policies allow you to specify acceptable memory and CPU usage values, and the point at which virtual machines must be migrated to or from a host. The vm_evenly_distributed scheduling policy distributes virtual machines evenly between hosts based on a count of the virtual machines. Define the scheduling policy to enable automatic load balancing across the hosts in a cluster. For a detailed explanation of each scheduling policy, see Cluster Scheduling Policy Settings.
Procedure
- Click Compute → Clusters and select a cluster.
- Click Edit.
- Click the Scheduling Policy tab.
Select one of the following policies:
- none
- vm_evenly_distributed
- Set the minimum number of virtual machines that must be running on at least one host to enable load balancing in the HighVmCount field.
- Define the maximum acceptable difference between the number of virtual machines on the most highly-utilized host and the number of virtual machines on the least-utilized host in the MigrationThreshold field.
- Define the number of slots for virtual machines to be reserved on SPM hosts in the SpmVmGrace field.
- Optionally, in the HeSparesCount field, enter the number of additional self-hosted engine nodes on which to reserve enough free memory to start the Manager virtual machine if it migrates or shuts down. See Configuring Memory Slots Reserved for the self-hosted engine for more information.
- evenly_distributed
- Set the time (in minutes) that a host can run a CPU load outside of the defined utilization values before the scheduling policy takes action in the CpuOverCommitDurationMinutes field.
- Enter the CPU utilization percentage at which virtual machines start migrating to other hosts in the HighUtilization field.
- Optionally, in the HeSparesCount field, enter the number of additional self-hosted engine nodes on which to reserve enough free memory to start the Manager virtual machine if it migrates or shuts down. See Configuring Memory Slots Reserved for the self-hosted engine for more information.
Optionally, to prevent overutilization of all the physical CPUs on a host, define the virtual CPU to physical CPU ratio (VCpuToPhysicalCpuRatio) with a value between 0.1 and 2.9. When this parameter is set, hosts with a lower CPU utilization are preferred when scheduling a virtual machine.
If adding a virtual machine causes the ratio to exceed the limit, both the VCpuToPhysicalCpuRatio and the CPU utilization are considered.
In a running environment, if the host VCpuToPhysicalCpuRatio exceeds 2.5, some virtual machines might be load balanced and moved to hosts with a lower VCpuToPhysicalCpuRatio.
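As a rough illustration of the ratio limit described above (the function name and example numbers are ours, not the scheduler's internals):

```python
# Illustrative check of the VCpuToPhysicalCpuRatio limit: a host can accept
# a new virtual machine only if the resulting vCPU:pCPU ratio stays within
# the configured maximum.

def can_schedule(running_vcpus, new_vm_vcpus, physical_cpus, max_ratio):
    """Return True if adding the VM keeps the vCPU:pCPU ratio within limit."""
    ratio = (running_vcpus + new_vm_vcpus) / physical_cpus
    return ratio <= max_ratio

# Host with 16 physical CPUs and a ratio limit of 2.5:
print(can_schedule(32, 4, 16, 2.5))  # 36/16 = 2.25 -> True
print(can_schedule(40, 4, 16, 2.5))  # 44/16 = 2.75 -> False
```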
power_saving
- Set the time (in minutes) that a host can run a CPU load outside of the defined utilization values before the scheduling policy takes action in the CpuOverCommitDurationMinutes field.
- Enter the CPU utilization percentage below which the host will be considered under-utilized in the LowUtilization field.
- Enter the CPU utilization percentage at which virtual machines start migrating to other hosts in the HighUtilization field.
- Optionally, in the HeSparesCount field, enter the number of additional self-hosted engine nodes on which to reserve enough free memory to start the Manager virtual machine if it migrates or shuts down. See Configuring Memory Slots Reserved for the self-hosted engine for more information.
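The LowUtilization and HighUtilization thresholds of the power_saving policy can be pictured with a small sketch (illustrative only; the function name and labels are ours):

```python
# Illustrative classification of a host under the power_saving policy:
# below LowUtilization the host is a candidate for draining and power-down,
# above HighUtilization virtual machines start migrating away.

def classify_host(cpu_util, low, high):
    """Classify a host's CPU utilization against power_saving thresholds."""
    if cpu_util < low:
        return "under-utilized"  # candidate for evacuation/power saving
    if cpu_util > high:
        return "over-utilized"   # VMs start migrating to other hosts
    return "balanced"

print(classify_host(10, low=20, high=80))  # under-utilized
print(classify_host(50, low=20, high=80))  # balanced
print(classify_host(90, low=20, high=80))  # over-utilized
```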
Choose one of the following as the Scheduler Optimization for the cluster:
- Select Optimize for Utilization to include weight modules in scheduling to allow best selection.
- Select Optimize for Speed to skip host weighting in cases where there are more than ten pending requests.
- If you are using an OpenAttestation server to verify your hosts, and have set up the server’s details using the engine-config tool, select the Enable Trusted Service check box. Note that OpenAttestation and Intel Trusted Execution Technology (Intel TXT) are no longer available.
- Optionally select the Enable HA Reservation check box to enable the Manager to monitor cluster capacity for highly available virtual machines.
Optionally select a Serial Number Policy for the virtual machines in the cluster:
- System Default: Use the system-wide defaults, which are configured in the Manager database using the engine configuration tool and the DefaultSerialNumberPolicy and DefaultCustomSerialNumber key names. The default value for DefaultSerialNumberPolicy is to use the Host ID. See Scheduling Policies in the Administration Guide for more information.
- Host ID: Set each virtual machine’s serial number to the UUID of the host.
- Vm ID: Set each virtual machine’s serial number to the UUID of the virtual machine.
- Custom serial number: Set each virtual machine’s serial number to the value you specify in the following Custom Serial Number parameter.
- Click .
2.3.2.10. Updating the MoM Policy on Hosts in a Cluster
The Memory Overcommit Manager handles memory balloon and KSM functions on a host. Changes to these functions for a cluster pass to hosts the next time a host moves to a status of Up after being rebooted or in maintenance mode. However, if necessary you can apply important changes to a host immediately by synchronizing the MoM policy while the host is Up. The following procedure must be performed on each host individually.
Procedure
- Click → .
- Click the cluster’s name. This opens the details view.
- Click the Hosts tab and select the host that requires an updated MoM policy.
- Click Sync MoM Policy.
The MoM policy on the host is updated without having to move the host to maintenance mode and back Up.
2.3.2.11. Creating a CPU Profile
CPU profiles define the maximum amount of processing capability a virtual machine in a cluster can access on the host on which it runs, expressed as a percent of the total processing capability available to that host. CPU profiles are created based on CPU profiles defined under data centers, and are not automatically applied to all virtual machines in a cluster; they must be manually assigned to individual virtual machines for the profile to take effect.
This procedure assumes you have already defined one or more CPU quality of service entries under the data center to which the cluster belongs.
Procedure
- Click → .
- Click the cluster’s name. This opens the details view.
- Click the CPU Profiles tab.
- Click New.
- Enter a Name and a Description for the CPU profile.
- Select the quality of service to apply to the CPU profile from the QoS list.
- Click .
2.3.2.12. Removing a CPU Profile
Remove an existing CPU profile from your Red Hat Virtualization environment.
Procedure
- Click → .
- Click the cluster’s name. This opens the details view.
- Click the CPU Profiles tab and select the CPU profile to remove.
- Click Remove.
- Click .
If the CPU profile was assigned to any virtual machines, those virtual machines are automatically assigned the default
CPU profile.
2.3.2.13. Importing an Existing Red Hat Gluster Storage Cluster
You can import a Red Hat Gluster Storage cluster and all hosts belonging to the cluster into Red Hat Virtualization Manager.
When you provide details such as the IP address or host name and password of any host in the cluster, the gluster peer status
command is executed on that host through SSH, and a list of hosts that are part of the cluster is displayed. You must manually verify the fingerprint of each host and provide passwords for them. You cannot import the cluster if one of the hosts in the cluster is down or unreachable. Because the newly imported hosts do not have VDSM installed, the bootstrap script installs all the necessary VDSM packages on the hosts after they are imported, and reboots them.
Procedure
- Click → .
- Click New.
- Select the Data Center the cluster will belong to.
- Enter the Name and Description of the cluster.
Select the Enable Gluster Service check box and the Import existing gluster configuration check box.
The Import existing gluster configuration field is only displayed if the Enable Gluster Service is selected.
In the Hostname field, enter the host name or IP address of any server in the cluster.
The host SSH Fingerprint is displayed so you can verify that you are connecting with the correct host. If a host is unreachable or there is a network error, Error in fetching fingerprint is displayed in the Fingerprint field.
- Enter the Password for the server, and click .
- The Add Hosts window opens, and a list of hosts that are a part of the cluster displays.
- For each host, enter the Name and the Root Password.
If you wish to use the same password for all hosts, select the Use a Common Password check box to enter the password in the provided text field.
Click Apply to set the entered password for all hosts.
Verify that the fingerprints are valid and submit your changes by clicking OK.
The bootstrap script installs all the necessary VDSM packages on the hosts after they have been imported, and reboots them. You have now successfully imported an existing Red Hat Gluster Storage cluster into Red Hat Virtualization Manager.
2.3.2.14. Explanation of Settings in the Add Hosts Window
The Add Hosts window allows you to specify the details of the hosts imported as part of a Gluster-enabled cluster. This window appears after you have selected the Enable Gluster Service check box in the New Cluster window and provided the necessary host details.
Field | Description |
---|---|
Use a common password | Tick this check box to use the same password for all hosts belonging to the cluster. Enter the password in the Password field, then click the Apply button to set the password on all hosts. |
Name | Enter the name of the host. |
Hostname/IP | This field is automatically populated with the fully qualified domain name or IP of the host you provided in the New Cluster window. |
Root Password | Enter a password in this field to use a different root password for each host. This field overrides the common password provided for all hosts in the cluster. |
Fingerprint | The host fingerprint is displayed to ensure you are connecting with the correct host. This field is automatically populated with the fingerprint of the host you provided in the New Cluster window. |
2.3.2.15. Removing a Cluster
Move all hosts out of a cluster before removing it.
You cannot remove the Default cluster, as it holds the Blank template. You can, however, rename the Default cluster and add it to a new data center.
Procedure
- Click → and select a cluster.
- Ensure there are no hosts in the cluster.
- Click Remove.
- Click
2.3.2.16. Memory Optimization
To increase the number of virtual machines on a host, you can use memory overcommitment, in which the memory you assign to virtual machines exceeds the host’s physical RAM, with swap space making up the difference.
However, there are potential problems with memory overcommitment:
- Swapping performance - Swap space is slower and consumes more CPU resources than RAM, impacting virtual machine performance. Excessive swapping can lead to CPU thrashing.
- Out-of-memory (OOM) killer - If the host runs out of swap space, new processes cannot start, and the kernel’s OOM killer begins shutting down active processes such as virtual machine guests.
To help overcome these shortcomings, you can do the following:
- Limit memory overcommitment using the Memory Optimization setting and the Memory Overcommit Manager (MoM).
- Make the swap space large enough to accommodate the maximum potential demand for virtual memory and have a safety margin remaining.
- Reduce virtual memory size by enabling memory ballooning and Kernel Same-page Merging (KSM).
2.3.2.17. Memory Optimization and Memory Overcommitment
You can limit the amount of memory overcommitment by selecting one of the Memory Optimization settings: None (0%), 150%, or 200%.
Each setting represents a percentage of RAM. For example, with a host that has 64 GB RAM, selecting 150% means you can overcommit memory by an additional 32 GB, for a total of 96 GB in virtual memory. If the host uses 4 GB of that total, the remaining 92 GB are available. You can assign most of that to the virtual machines (Memory Size on the System tab), but consider leaving some of it unassigned as a safety margin.
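The arithmetic in the example above can be expressed as a small helper (illustrative only; the function name is ours):

```python
# Illustrative calculation of the virtual memory budget produced by a
# Memory Optimization setting, reproducing the 64 GB / 150% example above.

def virtual_memory_budget(ram_gb, optimization_pct, host_usage_gb):
    """Return (total virtual memory, memory available for VMs) in GB."""
    total = ram_gb * optimization_pct / 100
    return total, total - host_usage_gb

total, available = virtual_memory_budget(64, 150, 4)
print(total, available)  # 96.0 92.0
```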
Sudden spikes in demand for virtual memory can impact performance before the MoM, memory ballooning, and KSM have time to re-optimize virtual memory. To reduce that impact, select a limit that is appropriate for the kinds of applications and workloads you are running:
- For workloads that produce more incremental growth in demand for memory, select a higher percentage, such as 200% or 150%.
- For more critical applications or workloads that produce more sudden increases in demand for memory, select a lower percentage, such as 150% or None (0%). Selecting None helps prevent memory overcommitment but allows the MoM, memory balloon devices, and KSM to continue optimizing virtual memory.
Always test your Memory Optimization settings by stress testing under a wide range of conditions before deploying the configuration to production.
To configure the Memory Optimization setting, click the Optimization tab in the New Cluster or Edit Cluster windows. See Cluster Optimization Settings Explained.
Additional comments:
- The Host Statistics views display useful historical information for sizing the overcommitment ratio.
- The actual memory available cannot be determined in real time because the amount of memory optimization achieved by KSM and memory ballooning changes continuously.
- When virtual machines reach the virtual memory limit, new apps cannot start.
- When you plan the number of virtual machines to run on a host, use the maximum virtual memory (physical memory size and the Memory Optimization setting) as a starting point. Do not factor in the smaller virtual memory achieved by memory optimizations such as memory ballooning and KSM.
2.3.2.18. Swap Space and Memory Overcommitment
Red Hat provides these recommendations for configuring swap space.
When applying these recommendations, follow the guidance to size the swap space as "last effort memory" for a worst-case scenario. Use the physical memory size and Memory Optimization setting as a basis for estimating the total virtual memory size. Exclude any reduction of the virtual memory size from optimization by the MoM, memory ballooning, and KSM.
To help prevent an OOM condition, make the swap space large enough to handle a worst-case scenario and still have a safety margin available. Always stress-test your configuration under a wide range of conditions before deploying it to production.
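A worst-case swap estimate along these lines might look as follows (an illustrative sketch under the stated assumptions; the function name and margin value are ours, not a Red Hat formula):

```python
# Illustrative worst-case swap sizing: assume all overcommitted memory
# (virtual memory beyond physical RAM) could spill to swap, and add a
# safety margin. Deliberately ignores savings from MoM, ballooning, and KSM.

def worst_case_swap_gb(ram_gb, optimization_pct, safety_margin_gb):
    """Return a worst-case swap size in GB for the given overcommit setting."""
    virtual_total = ram_gb * optimization_pct / 100
    overcommitted = virtual_total - ram_gb
    return overcommitted + safety_margin_gb

# 64 GB host at 200% overcommit with an 8 GB margin:
print(worst_case_swap_gb(64, 200, 8))  # 72.0
```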
2.3.2.19. The Memory Overcommit Manager (MoM)
The Memory Overcommit Manager (MoM) does two things:
- It limits memory overcommitment by applying the Memory Optimization setting to the hosts in a cluster, as described in the preceding section.
- It optimizes memory by managing the memory ballooning and KSM, as described in the following sections.
You do not need to enable or disable MoM.
When a host’s free memory drops below 20%, ballooning commands such as mom.Controllers.Balloon - INFO Ballooning guest:half1 from 1096400 to 1991580 are logged to /var/log/vdsm/mom.log, the Memory Overcommit Manager log file.
2.3.2.20. Memory Ballooning
Virtual machines start with the full amount of virtual memory you have assigned to them. As virtual memory usage exceeds RAM, the host relies more on swap space. If enabled, memory ballooning lets virtual machines give up the unused portion of that memory. The freed memory can be reused by other processes and virtual machines on the host. The reduced memory footprint makes swapping less likely and improves performance.
The virtio-balloon package that provides the memory balloon device and drivers ships as a loadable kernel module (LKM). By default, it is configured to load automatically. Adding the module to the denylist or unloading it disables ballooning.
The memory balloon devices do not coordinate directly with each other; they rely on the host’s Memory Overcommit Manager (MoM) process to continuously monitor each virtual machine’s needs and instruct the balloon device to increase or decrease virtual memory.
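The balloon control loop can be sketched, greatly simplified, as follows (illustrative only; MoM's actual policy is more sophisticated, and these names are ours):

```python
# Greatly simplified sketch of a balloon target decision: under host memory
# pressure, shrink a guest toward what it actually uses, but never below its
# guaranteed memory size; without pressure, deflate back to the assigned size.

def balloon_target(guaranteed_mb, assigned_mb, guest_used_mb, host_pressure):
    """Return the memory target (MB) for a guest's balloon device."""
    if not host_pressure:
        return assigned_mb                    # deflate: give memory back
    return max(guaranteed_mb, guest_used_mb)  # inflate: reclaim unused pages

print(balloon_target(1024, 4096, 1500, host_pressure=True))   # 1500
print(balloon_target(1024, 4096, 1500, host_pressure=False))  # 4096
print(balloon_target(1024, 4096, 600, host_pressure=True))    # 1024 (floor)
```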
Performance considerations:
- Red Hat does not recommend memory ballooning and overcommitment for workloads that require continuous high-performance and low latency. See Configuring High-Performance Virtual Machines, Templates, and Pools.
- Use memory ballooning when increasing virtual machine density (economy) is more important than performance.
- Memory ballooning does not have a significant impact on CPU utilization. (KSM consumes some CPU resources, but consumption remains consistent under pressure.)
To enable memory ballooning, click the Optimization tab in the New Cluster or Edit Cluster windows. Then select the Enable Memory Balloon Optimization checkbox. This setting enables memory overcommitment on virtual machines running on the hosts in this cluster. When this check box is selected, the MoM starts ballooning where and when possible, with a limitation of the guaranteed memory size of every virtual machine. See Cluster Optimization Settings Explained.
Each host in this cluster receives a balloon policy update when its status changes to Up. If necessary, you can manually update the balloon policy on a host without having to change the status. See Updating the MoM Policy on Hosts in a Cluster.
2.3.2.21. Kernel Same-page Merging (KSM)
When a virtual machine runs, it often creates duplicate memory pages for items such as common libraries and high-use data. Furthermore, virtual machines that run similar guest operating systems and applications produce duplicate memory pages in virtual memory.
When enabled, Kernel Same-page Merging (KSM) examines the virtual memory on a host, eliminates duplicate memory pages, and shares the remaining memory pages across multiple applications and virtual machines. These shared memory pages are marked copy-on-write; if a virtual machine needs to write changes to the page, it makes a copy first before writing its modifications to that copy.
While KSM is enabled, the MoM manages KSM. You do not need to configure or control KSM manually.
KSM increases virtual memory performance in two ways. Because a shared memory page is used more frequently, the host is more likely to store it in cache or main memory, which improves memory access speed. Additionally, with memory overcommitment, KSM reduces the virtual memory footprint, reducing the likelihood of swapping and improving performance.
KSM consumes more CPU resources than memory ballooning. The amount of CPU KSM consumes remains consistent under pressure. Running identical virtual machines and applications on a host provides KSM with more opportunities to merge memory pages than running dissimilar ones. If you run mostly dissimilar virtual machines and applications, the CPU cost of using KSM may offset its benefits.
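The effect of page merging can be illustrated with a toy model (not the kernel's implementation; the function name is ours):

```python
# Toy model of KSM's benefit: given the contents of memory pages across
# guests, merging identical pages frees all but one copy of each.

from collections import Counter

def ksm_savings(pages):
    """Return how many pages merging identical ones would free."""
    counts = Counter(pages)
    return sum(n - 1 for n in counts.values())

# Two similar VMs sharing copies of a common library page "libA":
print(ksm_savings(["libA", "libA", "libA", "data1", "data2"]))  # 2
# Dissimilar workloads give KSM nothing to merge:
print(ksm_savings(["a", "b"]))  # 0
```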
Performance considerations:
- After the KSM daemon merges large amounts of memory, the kernel memory accounting statistics may eventually contradict each other. If your system has a large amount of free memory, you might improve performance by disabling KSM.
- Red Hat does not recommend KSM and overcommitment for workloads that require continuous high-performance and low latency. See Configuring High-Performance Virtual Machines, Templates, and Pools.
- Use KSM when increasing virtual machine density (economy) is more important than performance.
To enable KSM, click the Optimization tab in the New Cluster or Edit Cluster windows. Then select the Enable KSM checkbox. This setting enables MoM to run KSM when necessary and when it can yield a memory saving benefit that outweighs its CPU cost. See Cluster Optimization Settings Explained.
2.3.2.22. UEFI and the Q35 chipset
The Intel Q35 chipset, the default chipset for new virtual machines, includes support for the Unified Extensible Firmware Interface (UEFI), which replaces legacy BIOS.
Alternatively you can configure a virtual machine or cluster to use the legacy Intel i440fx chipset, which does not support UEFI.
UEFI provides several advantages over legacy BIOS, including the following:
- A modern boot loader
- SecureBoot, which authenticates the digital signatures of the boot loader
- GUID Partition Table (GPT), which enables disks larger than 2 TB
To use UEFI on a virtual machine, you must configure the virtual machine’s cluster for 4.4 compatibility or later. Then you can set UEFI for any existing virtual machine, or to be the default BIOS type for new virtual machines in the cluster. The following options are available:
BIOS Type | Description |
---|---|
Q35 Chipset with Legacy BIOS | Legacy BIOS without UEFI (Default for clusters with compatibility version 4.4) |
Q35 Chipset with UEFI BIOS | BIOS with UEFI |
Q35 Chipset with SecureBoot | UEFI with SecureBoot, which authenticates the digital signatures of the boot loader |
Legacy | i440fx chipset with legacy BIOS |
Setting the BIOS type before installing the operating system
You can configure a virtual machine to use the Q35 chipset and UEFI before installing an operating system. Converting a virtual machine from legacy BIOS to UEFI is not supported after installing an operating system.
2.3.2.23. Configuring a cluster to use the Q35 Chipset and UEFI
After upgrading a cluster to Red Hat Virtualization 4.4, all virtual machines in the cluster run the 4.4 version of VDSM. You can configure a cluster’s default BIOS type, which determines the default BIOS type of any new virtual machines you create in that cluster. If necessary, you can override the cluster’s default BIOS type by specifying a different BIOS type when you create a virtual machine.
Procedure
- In the VM Portal or the Administration Portal, click → .
- Select a cluster and click Edit.
- Click General.
Define the default BIOS type for new virtual machines in the cluster by clicking the BIOS Type dropdown menu, and selecting one of the following:
- Legacy
- Q35 Chipset with Legacy BIOS
- Q35 Chipset with UEFI BIOS
- Q35 Chipset with SecureBoot
- From the Compatibility Version dropdown menu select 4.4. The Manager checks that all running hosts are compatible with 4.4, and if they are, the Manager uses 4.4 features.
- If any existing virtual machines in the cluster should use the new BIOS type, configure them to do so. Any new virtual machines in the cluster that are configured to use the BIOS type Cluster default now use the BIOS type you selected. For more information, see Configuring a virtual machine to use the Q35 Chipset and UEFI.
Because you can change the BIOS type only before installing an operating system, for any existing virtual machines that are configured to use the BIOS type Cluster default, change the BIOS type to the previous default cluster BIOS type. Otherwise the virtual machine might not boot. Alternatively, you can reinstall the virtual machine’s operating system.
2.3.2.24. Configuring a virtual machine to use the Q35 Chipset and UEFI
You can configure a virtual machine to use the Q35 chipset and UEFI before installing an operating system. Converting a virtual machine from legacy BIOS to UEFI, or from UEFI to legacy BIOS, might prevent the virtual machine from booting. If you change the BIOS type of an existing virtual machine, reinstall the operating system.
If the virtual machine’s BIOS type is set to Cluster default, changing the BIOS type of the cluster changes the BIOS type of the virtual machine. If the virtual machine has an operating system installed, changing the cluster BIOS type can cause booting the virtual machine to fail.
Procedure
To configure a virtual machine to use the Q35 chipset and UEFI:
- In the VM Portal or the Administration Portal click → .
- Select a virtual machine and click Edit.
- On the General tab, click Show Advanced Options.
- Click → .
Select one of the following from the BIOS Type dropdown menu:
- Cluster default
- Q35 Chipset with Legacy BIOS
- Q35 Chipset with UEFI BIOS
- Q35 Chipset with SecureBoot
- Click .
- From the Virtual Machine portal or the Administration Portal, power off the virtual machine. The next time you start the virtual machine, it will run with the new BIOS type you selected.
2.3.2.25. Changing the Cluster Compatibility Version
Red Hat Virtualization clusters have a compatibility version. The cluster compatibility version indicates the features of Red Hat Virtualization supported by all of the hosts in the cluster. The cluster compatibility version is set according to the version of the least capable host operating system in the cluster.
Prerequisites
- To change the cluster compatibility level, you must first update all the hosts in your cluster to a level that supports your desired compatibility level. Check if there is an icon next to the host indicating an update is available.
Limitations
Virtio NICs are enumerated as a different device after upgrading the cluster compatibility level to 4.6. Therefore, the NICs might need to be reconfigured. Red Hat recommends that you test the virtual machines before you upgrade the cluster by setting the cluster compatibility level to 4.6 on the virtual machine and verifying the network connection.
If the network connection for the virtual machine fails, configure the virtual machine with a custom emulated machine that matches the current emulated machine, for example pc-q35-rhel8.3.0 for 4.5 compatibility version, before upgrading the cluster.
Procedure
- In the Administration Portal, click → .
- Select the cluster to change and click .
- On the General tab, change the Compatibility Version to the desired value.
- Click . The Change Cluster Compatibility Version confirmation dialog opens.
- Click to confirm.
An error message might warn that some virtual machines and templates are incorrectly configured. To fix this error, edit each virtual machine manually. The Edit Virtual Machine window provides additional validations and warnings that show what to correct. Sometimes the issue is automatically corrected and the virtual machine’s configuration just needs to be saved again. After editing each virtual machine, you will be able to change the cluster compatibility version.
After updating a cluster’s compatibility version, you must update the cluster compatibility version of all running or suspended virtual machines by rebooting them from the Administration Portal, or using the REST API, or from within the guest operating system. Virtual machines that require a reboot are marked with the pending changes icon ( ). You cannot change the cluster compatibility version of a virtual machine snapshot that is in preview. You must first commit or undo the preview.
In a self-hosted engine environment, the Manager virtual machine does not need to be restarted.
Although you can wait to reboot the virtual machines at a convenient time, rebooting immediately is highly recommended so that the virtual machines use the latest configuration. Virtual machines that have not been updated run with the old configuration, and the new configuration could be overwritten if other changes are made to the virtual machine before the reboot.
Once you have updated the compatibility version of all clusters and virtual machines in a data center, you can then change the compatibility version of the data center itself.
2.4. Logical Networks
2.4.1. Logical Network Tasks
2.4.1.1. Performing Networking Tasks
→ provides a central location for users to perform logical network-related operations and search for logical networks based on each network’s property or association with other resources. The New, Edit and Remove buttons allow you to create, change the properties of, and delete logical networks within data centers.
Click each network name and use the tabs in the details view to perform functions including:
- Attaching or detaching the networks to clusters and hosts
- Removing network interfaces from virtual machines and templates
- Adding and removing permissions for users to access and manage networks
These functions are also accessible through each individual resource.
Do not change networking in a data center or a cluster while any hosts are running, as this risks making the hosts unreachable.
If you plan to use Red Hat Virtualization nodes to provide any services, remember that the services will stop if the Red Hat Virtualization environment stops operating.
This applies to all services, but you should be especially aware of the hazards of running the following on Red Hat Virtualization:
- Directory Services
- DNS
- Storage
2.4.1.2. Creating a New Logical Network in a Data Center or Cluster
Create a logical network and define its use in a data center, or in clusters in a data center.
Procedure
- Click → or → .
- Click the data center or cluster name. The Details view opens.
- Click the Logical Networks tab.
Open the New Logical Network window:
- From a data center details view, click New.
- From a cluster details view, click Add Network.
- Enter a Name, Description, and Comment for the logical network.
- Optional: Select the Enable VLAN tagging check box.
- Optional: Disable VM Network.
Optional: Select the Create on external provider checkbox. This disables the network label and the VM network. See External Providers for details.
- Select the External Provider. The External Provider list does not include external providers that are in read-only mode.
- To create an internal, isolated network, select ovirt-provider-ovn on the External Provider list and leave Connect to physical network cleared.
- Enter a new label or select an existing label for the logical network in the Network Label text field.
For MTU, either select Default (1500) or select Custom and specify a custom value.
Important: After you create a network on an external provider, you cannot change the network’s MTU settings.
Important: If you change the network’s MTU settings, you must propagate this change to the running virtual machines on the network: hot unplug and replug every virtual machine’s vNIC that should apply the MTU setting, or restart the virtual machines. Otherwise, these interfaces fail when the virtual machine migrates to another host. For more information, see After network MTU change, some VMs and bridges have the old MTU and seeing packet drops and BZ#1766414.
- If you selected ovirt-provider-ovn from the External Provider drop-down list, define whether the network should implement Security Groups. See Logical Network General Settings Explained for details.
- From the Cluster tab, select the clusters to which the network will be assigned. You can also specify whether the logical network will be a required network.
- If the Create on external provider check box is selected, the Subnet tab is visible. From the Subnet tab, select the Create subnet check box, enter a Name, CIDR, and Gateway address, and select an IP Version for the subnet that the logical network will provide. You can also add DNS servers as required.
- From the vNIC Profiles tab, add vNIC profiles to the logical network as required.
- Click .
If you entered a label for the logical network, it is automatically added to all host network interfaces with that label.
When creating a new logical network or making changes to an existing logical network that is used as a display network, any running virtual machines that use that network must be rebooted before the network becomes available or the changes are applied.
2.4.1.3. Editing a Logical Network
A logical network cannot be edited or moved to another interface if it is not synchronized with the network configuration on the host. See Editing Host Network Interfaces and Assigning Logical Networks to Hosts on how to synchronize your networks.
When changing the VM Network property of an existing logical network used as a display network, no new virtual machines can be started on a host already running virtual machines. Only hosts that have no running virtual machines after the change of the VM Network property can start new virtual machines.
Procedure
- Click → .
- Click the data center’s name. This opens the details view.
- Click the Logical Networks tab and select a logical network.
- Click Edit.
Edit the necessary settings.
NoteYou can edit the name of a new or existing network, with the exception of the default network, without having to stop the virtual machines.
- Click .
Multi-host network configuration automatically applies updated network settings to all of the hosts within the data center to which the network is assigned. Changes can only be applied when virtual machines using the network are down. You cannot rename a logical network that is already configured on a host. You cannot disable the VM Network option while virtual machines or templates using that network are running.
2.4.1.4. Removing a Logical Network
You can remove a logical network from → or → . The following procedure shows you how to remove logical networks associated to a data center. For a working Red Hat Virtualization environment, you must have at least one logical network used as the ovirtmgmt management network.
Procedure
- Click → .
- Click a data center’s name. This opens the details view.
- Click the Logical Networks tab to list the logical networks in the data center.
- Select a logical network and click Remove.
- Optionally, select the Remove external network(s) from the provider(s) as well check box to remove the logical network both from the Manager and from the external provider if the network is provided by an external provider. The check box is grayed out if the external provider is in read-only mode.
- Click .
The logical network is removed from the Manager and is no longer available.
2.4.1.5. Configuring a Non-Management Logical Network as the Default Route
The default route used by hosts in a cluster is through the management network (ovirtmgmt). The following procedure provides instructions to configure a non-management logical network as the default route.
Prerequisite:
- If you are using the default_route custom property, you must clear the custom property from all attached hosts and then follow this procedure.
Configuring the Default Route Role
- Click → .
- Click the name of the non-management logical network to configure as the default route to access its details.
- Click the Clusters tab.
- Click Manage Network. This opens the Manage Network window.
- Select the Default Route checkbox for the appropriate cluster(s).
- Click OK.
When networks are attached to a host, the default route of the host will be set on the network of your choice. It is recommended to configure the default route role before any host is added to your cluster. If your cluster already contains hosts, they may become out-of-sync until you sync your change to them.
Important Limitations with IPv6
- For IPv6, Red Hat Virtualization supports only static addressing.
- If both networks share a single gateway (are on the same subnet), you can move the default route role from the management network (ovirtmgmt) to another logical network.
- If the host and Manager are not on the same subnet, the Manager loses connectivity with the host because the IPv6 gateway has been removed.
- Moving the default route role to a non-management network removes the IPv6 gateway from the network interface and generates an alert: "On cluster clustername the 'Default Route Role' network is no longer network ovirtmgmt. The IPv6 gateway is being removed from this network."
2.4.1.6. Adding a static route on a host
You can use nmstate to add static routes to hosts. This method requires you to configure the hosts directly, without using Red Hat Virtualization Manager.
Static-routes you add are preserved as long as the related routed bridge, interface, or bond exists and has an IP address. Otherwise, the system removes the static route.
Except for adding or removing a static route on a host, always use the RHV Manager to configure host network settings in your cluster. For details, see Network Manager Stateful Configuration (nmstate).
The custom static-route is preserved so long as its interface/bond exists and has an IP address. Otherwise, it will be removed.
As a result, VM networks behave differently from non-VM networks:
- VM networks are based on a bridge. Moving the network from one interface/bond to another does not affect the route on a VM network.
- Non-VM networks are based on an interface. Moving the network from one interface/bond to another deletes the route related to the non-VM network.
Prerequisites
This procedure requires nmstate, which is only available if your environment uses:
- Red Hat Virtualization Manager version 4.4
- Red Hat Enterprise Linux hosts and Red Hat Virtualization Hosts that are based on Red Hat Enterprise Linux 8
Procedure
- Connect to the host you want to configure.
- On the host, create a static_route.yml file with the following example content:

```yaml
routes:
  config:
  - destination: 192.168.123.0/24
    next-hop-address: 192.168.178.1
    next-hop-interface: eth1
```
- Replace the example values shown with real values for your network.
- To route your traffic to a secondary added network, use next-hop-interface to specify an interface or network name.
  - To use a non-virtual machine network, specify an interface such as eth1.
  - To use a virtual machine network, specify a network name that is also the bridge name, such as net1.
Run this command:
$ nmstatectl set static_route.yml
Verification steps
Run the ip route command with the destination parameter value you set in static_route.yml. This should show the desired route. For example:

$ ip route | grep 192.168.123.0
2.4.1.7. Removing a static route on a host
You can use nmstate to remove static routes from hosts. This method requires you to configure the hosts directly, without using Red Hat Virtualization Manager.
Except for adding or removing a static route on a host, always use the RHV Manager to configure host network settings in your cluster. For details, see Network Manager Stateful Configuration (nmstate).
The custom static-route is preserved so long as its interface/bond exists and has an IP address. Otherwise, it will be removed.
As a result, VM networks behave differently from non-VM networks:
- VM networks are based on a bridge. Moving the network from one interface/bond to another does not affect the route on a VM network.
- Non-VM networks are based on an interface. Moving the network from one interface/bond to another deletes the route related to the non-VM network.
Prerequisites
This procedure requires nmstate, which is only available if your environment uses:
- Red Hat Virtualization Manager version 4.4
- Red Hat Enterprise Linux hosts and Red Hat Virtualization Hosts that are based on Red Hat Enterprise Linux 8
Procedure
- Connect to the host you want to reconfigure.
- On the host, edit the static_route.yml file.
- Insert a line state: absent as shown in the following example.
- Add the value of next-hop-interface between the brackets of interfaces: []. The result should look similar to the example shown here:

```yaml
routes:
  config:
  - destination: 192.168.123.0/24
    next-hop-address: 192.168.178.1
    next-hop-interface: eth1
    state: absent
interfaces: [{"name": eth1}]
```
Run this command:
$ nmstatectl set static_route.yml
Verification steps
Run the ip route command with the destination parameter value you set in static_route.yml. The deleted route should no longer appear. For example:

$ ip route | grep 192.168.123.0
2.4.1.8. Viewing or Editing the Gateway for a Logical Network
Users can define the gateway, along with the IP address and subnet mask, for a logical network. This is necessary when multiple networks exist on a host and traffic should be routed through the specified network, rather than the default gateway.
If multiple networks exist on a host and the gateways are not defined, return traffic will be routed through the default gateway, which may not reach the intended destination. This would result in users being unable to ping the host.
Red Hat Virtualization handles multiple gateways automatically whenever an interface goes up or down.
Procedure
- Click Compute → Hosts.
- Click the host’s name. This opens the details view.
- Click the Network Interfaces tab to list the network interfaces attached to the host, and their configurations.
- Click Setup Host Networks.
- Hover your cursor over an assigned logical network and click the pencil icon. This opens the Edit Management Network window.
The Edit Management Network window displays the network name, the boot protocol, and the IP, subnet mask, and gateway addresses. The address information can be manually edited by selecting a Static boot protocol.
2.4.1.9. Logical Network General Settings Explained
The table below describes the settings for the General tab of the New Logical Network and Edit Logical Network window.
Field Name | Description |
---|---|
Name | The name of the logical network. This text field must be a unique name with any combination of uppercase and lowercase letters, numbers, hyphens, and underscores. Note that while the name of the logical network can be longer than 15 characters and can contain non-ASCII characters, the on-host identifier (vdsm_name) will differ from the name you defined. See Mapping VDSM Names to Logical Network Names for instructions on displaying a mapping of these names. |
Description | The description of the logical network. This text field has a 40-character limit. |
Comment | A field for adding plain text, human-readable comments regarding the logical network. |
Create on external provider | Allows you to create the logical network in an OpenStack Networking instance that has been added to the Manager as an external provider. External Provider - Allows you to select the external provider on which the logical network will be created. |
Enable VLAN tagging | VLAN tagging is a security feature that gives all network traffic carried on the logical network a special characteristic. VLAN-tagged traffic cannot be read by interfaces that do not also have that characteristic. Use of VLANs on logical networks also allows a single network interface to be associated with multiple, differently VLAN-tagged logical networks. Enter a numeric value in the text entry field if VLAN tagging is enabled. |
VM Network | Select this option if only virtual machines use this network. If the network is used for traffic that does not involve virtual machines, such as storage communications, do not select this check box. |
Port Isolation | If this is set, virtual machines on the same host are prevented from communicating and seeing each other on this logical network. For this option to work on different hypervisors, the switches need to be configured with PVLAN/Port Isolation on the respective port/VLAN connected to the hypervisors, and not reflect back the frames with any hairpin setting. |
MTU | Choose either Default, which sets the maximum transmission unit (MTU) to the value given in the parenthesis (), or Custom to set a custom MTU for the logical network. You can use this to match the MTU supported by your new logical network to the MTU supported by the hardware it interfaces with. Enter a numeric value in the text entry field if Custom is selected. IMPORTANT: If you change the network’s MTU settings, you must propagate this change to the running virtual machines on the network: Hot unplug and replug every virtual machine’s vNIC that should apply the MTU setting, or restart the virtual machines. Otherwise, these interfaces fail when the virtual machine migrates to another host. For more information, see After network MTU change, some VMs and bridges have the old MTU and seeing packet drops and BZ#1766414. |
Network Label | Allows you to specify a new label for the network or select from existing labels already attached to host network interfaces. If you select an existing label, the logical network will be automatically assigned to all host network interfaces with that label. |
Security Groups |
Allows you to assign security groups to the ports on this logical network. |
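The Enable VLAN tagging setting above applies a standard IEEE 802.1Q tag to all traffic on the logical network. As a sketch (not part of RHV; purely illustrative of the 802.1Q format), the following shows how the numeric VLAN ID you enter ends up encoded in the 12-bit VID field of a frame's tag, which is why interfaces on other VLANs cannot read the traffic:

```python
import struct

TPID = 0x8100  # 802.1Q Tag Protocol Identifier

def build_vlan_tag(vlan_id, priority=0):
    """Build the 4-byte 802.1Q tag: TPID followed by PCP/DEI/VID."""
    if not 0 <= vlan_id <= 4094:
        raise ValueError("VLAN ID must be 0-4094")
    tci = (priority << 13) | vlan_id  # Tag Control Information
    return struct.pack("!HH", TPID, tci)

def parse_vlan_id(tag):
    """Extract the VLAN ID from a 4-byte 802.1Q tag."""
    tpid, tci = struct.unpack("!HH", tag)
    if tpid != TPID:
        raise ValueError("not an 802.1Q tag")
    return tci & 0x0FFF  # the low 12 bits are the VID

tag = build_vlan_tag(100)
print(parse_vlan_id(tag))  # 100
```

Because the VID is carried in every frame, a single host interface can serve multiple logical networks, each with a different tag.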
2.4.1.10. Logical Network Cluster Settings Explained
The table below describes the settings for the Cluster tab of the New Logical Network window.
Field Name | Description |
---|---|
Attach/Detach Network to/from Cluster(s) | Allows you to attach or detach the logical network from clusters in the data center and specify whether the logical network will be a required network for individual clusters. Name - the name of the cluster to which the settings will apply. This value cannot be edited. Attach All - Allows you to attach or detach the logical network to or from all clusters in the data center. Alternatively, select or clear the Attach check box next to the name of each cluster to attach or detach the logical network to or from a given cluster. Required All - Allows you to specify whether the logical network is a required network on all clusters. Alternatively, select or clear the Required check box next to the name of each cluster to specify whether the logical network is a required network for a given cluster. |
2.4.1.11. Logical Network vNIC Profiles Settings Explained
The table below describes the settings for the vNIC Profiles tab of the New Logical Network window.
Field Name | Description |
---|---|
vNIC Profiles | Allows you to specify one or more vNIC profiles for the logical network. You can add or remove a vNIC profile to or from the logical network by clicking the plus or minus button next to the vNIC profile. The first field is for entering a name for the vNIC profile. Public - Allows you to specify whether the profile is available to all users. QoS - Allows you to specify a network quality of service (QoS) profile to the vNIC profile. |
2.4.1.12. Designate a Specific Traffic Type for a Logical Network with the Manage Networks Window
Specify the traffic type for the logical network to optimize the network traffic flow.
Procedure
- Click Compute → Clusters.
- Click the cluster’s name. This opens the details view.
- Click the Logical Networks tab.
- Click Manage Networks.
- Select the appropriate check boxes and radio buttons.
- Click OK.
Logical networks offered by external providers must be used as virtual machine networks; they cannot be assigned special cluster roles such as display or migration.
2.4.1.13. Explanation of Settings in the Manage Networks Window
The table below describes the settings for the Manage Networks window.
Field | Description/Action |
---|---|
Assign | Assigns the logical network to all hosts in the cluster. |
Required | A Network marked "required" must remain operational in order for the hosts associated with it to function properly. If a required network ceases to function, any hosts associated with it become non-operational. |
VM Network | A logical network marked "VM Network" carries network traffic relevant to the virtual machine network. |
Display Network | A logical network marked "Display Network" carries network traffic relevant to SPICE and to the virtual network controller. |
Migration Network | A logical network marked "Migration Network" carries virtual machine and storage migration traffic. If an outage occurs on this network, the management network (ovirtmgmt by default) will be used instead. |
2.4.1.14. Configuring virtual functions on a NIC
This is one in a series of topics that show how to set up and configure SR-IOV on Red Hat Virtualization. For more information, see Setting Up and Configuring SR-IOV.
Single Root I/O Virtualization (SR-IOV) enables you to use each PCIe endpoint as multiple separate devices by using physical functions (PFs) and virtual functions (VFs). A PCIe card can have between one and eight PFs. Each PF can have many VFs. The number of VFs it can have depends on the specific type of PCIe device.
To configure SR-IOV-capable Network Interface Controllers (NICs), you use the Red Hat Virtualization Manager. There, you can configure the number of VFs on each NIC.
You can configure a VF like you would configure a standalone NIC, including:
- Assigning one or more logical networks to the VF.
- Creating bonded interfaces with VFs.
- Assigning vNICs to VFs for direct device passthrough.
By default, all virtual networks have access to the virtual functions. You can disable this default and specify which networks have access to a virtual function.
Prerequisite
- For a vNIC to be attached to a VF, its passthrough property must be enabled. For details, see Enabling Passthrough on a vNIC Profile.
Procedure
- Click Compute → Hosts.
- Click the name of an SR-IOV-capable host. This opens the details view.
- Click the Network Interfaces tab.
- Click Setup Host Networks.
- Select an SR-IOV-capable NIC, marked with an SR-IOV icon, and click the pencil icon.
Optional: To change the number of virtual functions, click the Number of VFs setting drop-down button and edit the Number of VFs text field.
ImportantChanging the number of VFs deletes all previous VFs on the network interface before creating the new VFs. This includes any VFs that have virtual machines directly attached.
Optional: To limit which virtual networks have access to the virtual functions, select Specific networks.
- Select the networks that should have access to the VF, or use Labels to select networks based on their network labels.
- Click OK.
- In the Setup Host Networks window, click OK.
2.4.2. Virtual Network Interface Cards (vNICs)
2.4.2.1. vNIC Profile Overview
A Virtual Network Interface Card (vNIC) profile is a collection of settings that can be applied to individual virtual network interface cards in the Manager. A vNIC profile allows you to apply Network QoS profiles to a vNIC, enable or disable port mirroring, and add or remove custom properties. A vNIC profile also offers an added layer of administrative flexibility in that permission to use (consume) these profiles can be granted to specific users. In this way, you can control the quality of service that different users receive from a given network.
2.4.2.2. Creating or Editing a vNIC Profile
Create or edit a Virtual Network Interface Controller (vNIC) profile to regulate network bandwidth for users and groups.
If you are enabling or disabling port mirroring, all virtual machines using the associated profile must be in a down state before editing.
Procedure
- Click Network → Networks.
- Click the logical network’s name. This opens the details view.
- Click the vNIC Profiles tab.
- Click New or Edit.
- Enter the Name and Description of the profile.
- Select the relevant Quality of Service policy from the QoS list.
- Select a Network Filter from the drop-down list to manage the traffic of network packets to and from virtual machines. For more information on network filters, see Applying network filtering in the Red Hat Enterprise Linux Virtualization Deployment and Administration Guide.
- Select the Passthrough check box to enable passthrough of the vNIC and allow direct device assignment of a virtual function. Enabling the passthrough property will disable QoS, network filtering, and port mirroring as these are not compatible. For more information on passthrough, see Enabling Passthrough on a vNIC Profile.
- If Passthrough is selected, optionally deselect the Migratable check box to disable migration for vNICs using this profile. If you keep this check box selected, see Additional Prerequisites for Virtual Machines with SR-IOV-Enabled vNICs in the Virtual Machine Management Guide.
- Use the Port Mirroring and Allow all users to use this Profile check boxes to toggle these options.
- Select a custom property from the custom properties list, which displays Please select a key… by default. Use the + and - buttons to add or remove custom properties.
- Click OK.
Apply this profile to users and groups to regulate their network bandwidth. If you edited a vNIC profile, you must either restart the virtual machine, or hot unplug and then hot plug the vNIC if the guest operating system supports vNIC hot plug and hot unplug.
2.4.2.3. Explanation of Settings in the VM Interface Profile Window
Field Name | Description |
---|---|
Network | A drop-down list of the available networks to apply the vNIC profile to. |
Name | The name of the vNIC profile. This must be a unique name with any combination of uppercase and lowercase letters, numbers, hyphens, and underscores between 1 and 50 characters. |
Description | The description of the vNIC profile. This field is recommended but not mandatory. |
QoS | A drop-down list of the available Network Quality of Service policies to apply to the vNIC profile. QoS policies regulate inbound and outbound network traffic of the vNIC. |
Network Filter |
A drop-down list of the available network filters to apply to the vNIC profile. Network filters improve network security by filtering the type of packets that can be sent to and from virtual machines. The default filter is vdsm-no-mac-spoofing, which is a combination of no-mac-spoofing and no-arp-mac-spoofing. Note: Red Hat no longer supports disabling filters by setting the EnableMACAntiSpoofingFilterRules parameter to false using the engine-config tool. Use the <No Network Filter> option instead. |
Passthrough | A check box to toggle the passthrough property. Passthrough allows a vNIC to connect directly to a virtual function of a host NIC. The passthrough property cannot be edited if the vNIC profile is attached to a virtual machine. QoS, network filters, and port mirroring are disabled in the vNIC profile if passthrough is enabled. |
Migratable | A check box to toggle whether or not vNICs using this profile can be migrated. Migration is enabled by default on regular vNIC profiles; the check box is selected and cannot be changed. When the Passthrough check box is selected, Migratable becomes available and can be deselected, if required, to disable migration of passthrough vNICs. |
Failover | A drop-down menu to select available vNIC profiles that act as a failover device. Available only when the Passthrough and Migratable check boxes are checked. |
Port Mirroring | A check box to toggle port mirroring. Port mirroring copies layer 3 network traffic on the logical network to a virtual interface on a virtual machine. It is not selected by default. For further details, see Port Mirroring in the Technical Reference. |
Device Custom Properties | A drop-down menu to select available custom properties to apply to the vNIC profile. Use the + and - buttons to add and remove properties respectively. |
Allow all users to use this Profile | A check box to toggle the availability of the profile to all users in the environment. It is selected by default. |
2.4.2.4. Enabling Passthrough on a vNIC Profile
This is one in a series of topics that show how to set up and configure SR-IOV on Red Hat Virtualization. For more information, see Setting Up and Configuring SR-IOV.
The passthrough property of a vNIC profile enables a vNIC to be directly connected to a virtual function (VF) of an SR-IOV-enabled NIC. The vNIC will then bypass the software network virtualization and connect directly to the VF for direct device assignment.
The passthrough property cannot be enabled if the vNIC profile is already attached to a vNIC; this procedure creates a new profile to avoid this. If a vNIC profile has passthrough enabled, QoS, network filters, and port mirroring cannot be enabled on the same profile.
For more information on SR-IOV, direct device assignment, and the hardware considerations for implementing these in Red Hat Virtualization, see Hardware Considerations for Implementing SR-IOV.
Procedure
- Click Network → Networks.
- Click the logical network’s name. This opens the details view.
- Click the vNIC Profiles tab to list all vNIC profiles for that logical network.
- Click New.
- Enter the Name and Description of the profile.
- Select the Passthrough check box.
- Optionally deselect the Migratable check box to disable migration for vNICs using this profile. If you keep this check box selected, see Additional Prerequisites for Virtual Machines with SR-IOV-Enabled vNICs in the Virtual Machine Management Guide.
- If necessary, select a custom property from the custom properties list, which displays Please select a key… by default. Use the + and - buttons to add or remove custom properties.
- Click OK.
The vNIC profile is now passthrough-capable. To use this profile to directly attach a virtual machine to a NIC or PCI VF, attach the logical network to the NIC and create a new PCI Passthrough vNIC on the desired virtual machine that uses the passthrough vNIC profile. For more information on these procedures respectively, see Editing Host Network Interfaces and Assigning Logical Networks to Hosts, and Adding a New Network Interface in the Virtual Machine Management Guide.
2.4.2.5. Enabling a vNIC profile for SR-IOV migration with failover
Failover allows the selection of a profile that acts as a failover device during virtual machine migration when the VF needs to be detached, preserving virtual machine communication with minimal interruption.
Failover is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service-level agreements (SLAs) and might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information see Red Hat Technology Preview Features Support Scope.
Prerequisites
- The Passthrough and Migratable check boxes of the profile are selected.
- The failover network is attached to the host.
- To make a vNIC profile that acts as failover editable, you must first remove any failover references to it.
- vNIC profiles that can act as failover are profiles that are not selected as Passthrough or are not connected to an External Network.
Procedure
- In the Administration Portal, go to Network → vNIC profiles, select the vNIC profile, click Edit, and select a Failover vNIC profile from the drop-down list.
- Click OK to save the profile settings.
Attaching two vNIC profiles that reference the same failover vNIC profile to the same virtual machine will fail in libvirt.
2.4.2.6. Removing a vNIC Profile
Remove a vNIC profile to delete it from your virtualized environment.
Procedure
- Click Network → Networks.
- Click the logical network’s name. This opens the details view.
- Click the vNIC Profiles tab to display available vNIC profiles.
- Select one or more profiles and click Remove.
- Click OK.
2.4.2.7. Assigning Security Groups to vNIC Profiles
This feature is only available when ovirt-provider-ovn is added as an external network provider. Security groups cannot be created through the Red Hat Virtualization Manager. You must create security groups through OpenStack Networking on the ovirt-provider-ovn. For more information, see Project Security Management in the Red Hat OpenStack Platform Users and Identity Management Guide.
You can assign security groups to the vNIC profile of networks that have been imported from an OpenStack Networking instance and that use the Open vSwitch plug-in. A security group is a collection of strictly enforced rules that allow you to filter inbound and outbound traffic over a network interface. The following procedure outlines how to attach a security group to a vNIC profile.
A security group is identified using the ID of that security group as registered in the Open Virtual Network (OVN) External Network Provider. You can find the IDs of security groups for a given tenant using the OpenStack Networking API; see List Security Groups in the OpenStack API Reference.
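For example, a Networking API v2.0 listing request (GET /v2.0/security-groups) returns a JSON body from which you can collect the IDs to enter in the vNIC profile. The sketch below works on a minimal sample of that documented response shape; the IDs and group names shown are illustrative placeholders, not real values:

```python
import json

# Minimal sample of a GET /v2.0/security-groups response body.
# The IDs and names are placeholders for illustration only.
sample_response = """
{
  "security_groups": [
    {"id": "85cc3048-abc3-43cc-89b3-377341426ac5", "name": "default"},
    {"id": "1f3a2b4c-0000-4fde-9c1d-aaaaaaaaaaaa", "name": "web-servers"}
  ]
}
"""

def security_group_ids(body, name=None):
    """Return the IDs of security groups, optionally filtered by name."""
    groups = json.loads(body)["security_groups"]
    return [g["id"] for g in groups if name is None or g["name"] == name]

print(security_group_ids(sample_response, name="web-servers"))
```

The ID returned for the group you want is the value to paste into the SecurityGroups custom property text field in the procedure below.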
Procedure
- Click Network → Networks.
- Click the logical network’s name. This opens the details view.
- Click the vNIC Profiles tab.
- Click New, or select an existing vNIC profile and click Edit.
- From the custom properties drop-down list, select SecurityGroups. Leaving the custom property drop-down blank applies the default security settings, which permit all outbound traffic and intercommunication but deny all inbound traffic from outside of the default security group. Note that removing the SecurityGroups property later will not affect the applied security group.
- In the text field, enter the ID of the security group to attach to the vNIC profile.
- Click OK.
You have attached a security group to the vNIC profile. All traffic through the logical network to which that profile is attached will be filtered in accordance with the rules defined for that security group.
2.4.2.8. User Permissions for vNIC Profiles
Configure user permissions to assign users to certain vNIC profiles. Assign the VnicProfileUser role to a user to enable them to use the profile. Restrict users from certain profiles by removing their permission for that profile.
User Permissions for vNIC Profiles
- Click Network → vNIC profiles.
- Click the vNIC profile’s name. This opens the details view.
- Click the Permissions tab to show the current user permissions for the profile.
- Click Add or Remove to change user permissions for the vNIC profile.
- In the Add Permissions to User window, click My Groups to display your user groups. You can use this option to grant permissions to other users in your groups.
You have configured user permissions for a vNIC profile.
2.4.3. External Provider Networks
2.4.3.1. Importing Networks From External Providers
To use networks from an Open Virtual Network (OVN), register the provider with the Manager. See Adding an External Network Provider for more information. Then, use the following procedure to import the networks provided by that provider into the Manager so the networks can be used by virtual machines.
Procedure
- Click Network → Networks.
- Click Import.
- From the Network Provider drop-down list, select an external provider. The networks offered by that provider are automatically discovered and listed in the Provider Networks list.
- Using the check boxes, select the networks to import in the Provider Networks list and click the down arrow to move those networks into the Networks to Import list.
- You can customize the name of the network that you are importing. To customize the name, click the network’s name in the Name column, and change the text.
- From the Data Center drop-down list, select the data center into which the networks will be imported.
- Optional: Clear the Allow All check box to prevent that network from being available to all users.
- Click Import.
The selected networks are imported into the target data center and can be attached to virtual machines. See Adding a New Network Interface in the Virtual Machine Management Guide for more information.
2.4.3.2. Limitations to Using External Provider Networks
The following limitations apply to using logical networks imported from an external provider in a Red Hat Virtualization environment.
- Logical networks offered by external providers must be used as virtual machine networks, and cannot be used as display networks.
- The same logical network can be imported more than once, but only to different data centers.
- You cannot edit logical networks offered by external providers in the Manager. To edit the details of a logical network offered by an external provider, you must edit the logical network directly from the external provider that provides that logical network.
- Port mirroring is not available for virtual network interface cards connected to logical networks offered by external providers.
- If a virtual machine uses a logical network offered by an external provider, that provider cannot be deleted from the Manager while the logical network is still in use by the virtual machine.
- Networks offered by external providers are non-required. As such, scheduling for clusters in which such logical networks have been imported will not take those logical networks into account during host selection. Moreover, it is the responsibility of the user to ensure the availability of the logical network on hosts in clusters in which such logical networks have been imported.
2.4.3.3. Configuring Subnets on External Provider Logical Networks
A logical network provided by an external provider can only assign IP addresses to virtual machines if one or more subnets have been defined on that logical network. If no subnets are defined, virtual machines will not be assigned IP addresses. If there is one subnet, virtual machines will be assigned an IP address from that subnet, and if there are multiple subnets, virtual machines will be assigned an IP address from any of the available subnets. The DHCP service provided by the external network provider on which the logical network is hosted is responsible for assigning these IP addresses.
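The assignment behavior described above can be pictured with Python's ipaddress module. This is a simplified model only (the real addresses come from the external provider's DHCP service): no subnets means no address, one subnet means addresses come from it, and multiple subnets means any subnet with free addresses may serve the request:

```python
import ipaddress

def pick_address(subnets, used):
    """Return a free host address from any defined subnet, or None.

    subnets: list of CIDR strings defined on the logical network.
    used: set of address strings already handed out.
    """
    for cidr in subnets:
        net = ipaddress.ip_network(cidr)
        for host in net.hosts():
            if str(host) not in used:
                return str(host)
    return None  # no subnets defined (or all exhausted): no IP assigned

print(pick_address([], set()))                      # None: no subnet defined
print(pick_address(["10.0.0.0/29"], {"10.0.0.1"}))  # next free host address
```

With several subnets defined, the first with a free address wins in this sketch; the actual provider may choose differently among available subnets.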
While the Red Hat Virtualization Manager automatically discovers predefined subnets on imported logical networks, you can also add or remove subnets to or from logical networks from within the Manager.
If you add Open Virtual Network (OVN) (ovirt-provider-ovn) as an external network provider, multiple subnets can be connected to each other by routers. To manage these routers, you can use the OpenStack Networking API v2.0. Please note, however, that ovirt-provider-ovn has a limitation: Source NAT (enable_snat in the OpenStack API) is not implemented.
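As a sketch of the Networking API v2.0 router-create body used to connect such subnets (attaching each subnet is done with a separate add_router_interface call, not shown here; the network ID below is a placeholder), note that the payload omits enable_snat, since ovirt-provider-ovn does not implement source NAT:

```python
import json

# Illustrative POST /v2.0/routers request body; the network ID is a
# placeholder, not a real value.
payload = {
    "router": {
        "name": "subnet-interconnect",
        "external_gateway_info": {
            "network_id": "9d4c1a2b-placeholder-network-id"
            # "enable_snat" is deliberately omitted: ovirt-provider-ovn
            # does not implement source NAT.
        },
    }
}
print(json.dumps(payload, indent=2))
```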
2.4.3.4. Adding Subnets to External Provider Logical Networks
Create a subnet on a logical network provided by an external provider.
Procedure
- Click Network → Networks.
- Click the logical network’s name. This opens the details view.
- Click the Subnets tab.
- Click New.
- Enter a Name and CIDR for the new subnet.
- From the IP Version drop-down list, select either IPv4 or IPv6.
- Click OK.
For IPv6, Red Hat Virtualization supports only static addressing.
2.4.3.5. Removing Subnets from External Provider Logical Networks
Remove a subnet from a logical network provided by an external provider.
Procedure
- Click Network → Networks.
- Click the logical network’s name. This opens the details view.
- Click the Subnets tab.
- Select a subnet and click Remove.
- Click OK.
2.4.3.6. Assigning Security Groups to Logical Networks and Ports
This feature is only available when Open Virtual Network (OVN) is added as an external network provider (as ovirt-provider-ovn). Security groups cannot be created through the Red Hat Virtualization Manager. You must create security groups through OpenStack Networking API v2.0 or Ansible.
A security group is a collection of strictly enforced rules that allow you to filter inbound and outbound traffic over a network. You can also use security groups to filter traffic at the port level.
In Red Hat Virtualization 4.2.7, security groups are disabled by default.
Procedure
- Click Compute → Clusters.
- Click the cluster name. This opens the details view.
- Click the Logical Networks tab.
- Click Add Network and define the properties, ensuring that you select ovirt-provider-ovn from the External Providers drop-down list. For more information, see Creating a new logical network in a data center or cluster.
- Select Enabled from the Security Group drop-down list. For more details, see Logical Network General Settings Explained.
- Click OK.
- Create security groups using either OpenStack Networking API v2.0 or Ansible.
- Create security group rules using either OpenStack Networking API v2.0 or Ansible.
- Update the ports with the security groups that you defined, using either OpenStack Networking API v2.0 or Ansible.
- Optional: Define whether the security feature is enabled at the port level. Currently, this is only possible using the OpenStack Networking API. If the port_security_enabled attribute is not set, it defaults to the value specified in the network to which it belongs.
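The security-group steps above can be sketched as OpenStack Networking API v2.0 request bodies. This is a minimal illustration only: the group ID, port, and CIDR are assumptions, and an HTTP client or Ansible module would actually send the requests:

```python
def security_group_body(name, description=""):
    # Body for POST /v2.0/security-groups.
    return {"security_group": {"name": name, "description": description}}

def security_group_rule_body(group_id, direction="ingress", protocol="tcp",
                             port=22, remote_ip_prefix="0.0.0.0/0"):
    # Body for POST /v2.0/security-group-rules; this example would allow
    # inbound SSH from anywhere.
    return {"security_group_rule": {
        "security_group_id": group_id,
        "direction": direction,
        "protocol": protocol,
        "port_range_min": port,
        "port_range_max": port,
        "remote_ip_prefix": remote_ip_prefix,
    }}

def port_update_body(group_ids):
    # Body for PUT /v2.0/ports/{port_id} to attach the groups to a port.
    return {"port": {"security_groups": list(group_ids)}}

rule = security_group_rule_body("sg-1234")
print(rule["security_group_rule"]["direction"])
```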
2.4.4. Hosts and Networking
2.4.4.1. Network Manager Stateful Configuration (nmstate)
Version 4.4 of Red Hat Virtualization (RHV) uses Network Manager Stateful Configuration (nmstate) to configure networking for RHV hosts that are based on RHEL 8. RHV version 4.3 and earlier use interface configuration (ifcfg) network scripts to manage host networking.
To use nmstate, upgrade the Red Hat Virtualization Manager and hosts as described in the RHV Upgrade Guide.
As an administrator, you do not need to install or configure nmstate. It is enabled by default and runs in the background.
Always use RHV Manager to modify the network configuration of hosts in your clusters. Otherwise, you might create an unsupported configuration.
The change to nmstate is nearly transparent. It only changes how you configure host networking in the following ways:
- After you add a host to a cluster, always use the RHV Manager to modify the host network.
- Modifying the host network without using the Manager can create an unsupported configuration.
- To fix an unsupported configuration, you replace it with a supported one by using the Manager to synchronize the host network. For details, see Synchronizing Host Networks.
- The only situation where you modify host networks outside the Manager is to configure a static route on a host. For more details, see Adding a static route on a host.
The change to nmstate improves how RHV Manager applies configuration changes you make in Cockpit and Anaconda before adding the host to the Manager. This fixes some issues, such as BZ#1680970 Static IPv6 Address is lost on host deploy if NM manages the interface.
If you use dnf or yum to manually update the nmstate package, restart vdsmd and supervdsmd on the host. For example:
# dnf update nmstate
# systemctl restart vdsmd supervdsmd
If you use dnf or yum to manually update the NetworkManager package, restart NetworkManager on the host. For example:
# dnf update NetworkManager
# systemctl restart NetworkManager
2.4.4.2. Refreshing Host Capabilities
When a network interface card is added to a host, the capabilities of the host must be refreshed to display that network interface card in the Manager.
Procedure
- Click Compute → Hosts and select a host.
- Click Management → Refresh Capabilities.
The list of network interface cards in the Network Interfaces tab for the selected host is updated. Any new network interface cards can now be used in the Manager.
2.4.4.3. Editing Host Network Interfaces and Assigning Logical Networks to Hosts
You can change the settings of physical host network interfaces, move the management network from one physical host network interface to another, and assign logical networks to physical host network interfaces. Bridge and ethtool custom properties are also supported.
The only way to change the IP address of a host in Red Hat Virtualization is to remove the host and then to add it again.
To change the VLAN settings of a host, see Editing VLAN Settings.
You cannot assign logical networks offered by external providers to physical host network interfaces; such networks are dynamically assigned to hosts as they are required by virtual machines.
If the switch has been configured to provide Link Layer Discovery Protocol (LLDP) information, you can hover your cursor over a physical network interface to view the switch port’s current configuration. This can help to prevent incorrect configuration. Check the following information prior to assigning logical networks:
- Port Description (TLV type 4) and System Name (TLV type 5) help to detect to which ports and on which switch the host’s interfaces are patched.
- Port VLAN ID shows the native VLAN ID configured on the switch port for untagged ethernet frames. All VLANs configured on the switch port are shown as VLAN Name and VLAN ID combinations.
Procedure
- Click Compute → Hosts.
- Click the host’s name. This opens the details view.
- Click the Network Interfaces tab.
- Click Setup Host Networks.
- Optionally, hover your cursor over a host network interface to view configuration information provided by the switch.
Attach a logical network to a physical host network interface by selecting and dragging the logical network into the Assigned Logical Networks area next to the physical host network interface.
Note: If a NIC is connected to more than one logical network, only one of the networks can be non-VLAN. All the other logical networks must be unique VLANs.
Configure the logical network:
- Hover your cursor over an assigned logical network and click the pencil icon. This opens the Edit Management Network window.
From the IPv4 tab, select a Boot Protocol from None, DHCP, or Static. If you selected Static, enter the IP, Netmask / Routing Prefix, and the Gateway.
Note: For IPv6, only static addressing is supported. To configure the logical network, select the IPv6 tab and make the following entries:
- Set Boot Protocol to Static.
- For Routing Prefix, enter the length of the prefix using a forward slash and decimals. For example: /48
- IP: the complete IPv6 address of the host network interface. For example: 2001:db8::1:0:0:6
- Gateway: the source router’s IPv6 address. For example: 2001:db8::1:0:0:1
Note: If you change the host’s management network IP address, you must reinstall the host for the new IP address to be configured.
Each logical network can have a gateway defined separately from the management network gateway. This ensures that traffic arriving on the logical network is forwarded using the logical network’s gateway instead of the default gateway used by the management network.
Important: Set all hosts in a cluster to use the same IP stack for their management network: either IPv4 only or IPv6 only. Dual stack is not supported.
Use the QoS tab to override the default host network quality of service. Select Override QoS and enter the desired values in the following fields:
- Weighted Share: Signifies how much of the logical link’s capacity a specific network should be allocated, relative to the other networks attached to the same logical link. The exact share depends on the sum of shares of all networks on that link. By default this is a number in the range 1-100.
- Rate Limit [Mbps]: The maximum bandwidth to be used by a network.
- Committed Rate [Mbps]: The minimum bandwidth required by a network. The Committed Rate requested is not guaranteed and will vary depending on the network infrastructure and the Committed Rate requested by other networks on the same logical link.
To configure a network bridge, click the Custom Properties tab and select bridge_opts from the drop-down list. Enter a valid key and value with the following syntax: key=value. Separate multiple entries with a whitespace character. The following keys are valid, with the values provided as examples. For more information on these parameters, see Explanation of bridge_opts Parameters.
forward_delay=1500
group_addr=1:80:c2:0:0:0
group_fwd_mask=0x0
hash_max=512
hello_time=200
max_age=2000
multicast_last_member_count=2
multicast_last_member_interval=100
multicast_membership_interval=26000
multicast_querier=0
multicast_querier_interval=25500
multicast_query_interval=13000
multicast_query_response_interval=1000
multicast_query_use_ifaddr=0
multicast_router=1
multicast_snooping=1
multicast_startup_query_count=2
multicast_startup_query_interval=3125
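The bridge_opts value is a single whitespace-separated string of key=value pairs. As an illustrative sanity check (not part of the Manager), such a string can be parsed like this:

```python
def parse_bridge_opts(opts):
    """Split a whitespace-separated key=value string into a dict."""
    result = {}
    for entry in opts.split():
        key, sep, value = entry.partition("=")
        if not sep or not key:
            raise ValueError(f"invalid bridge_opts entry: {entry!r}")
        result[key] = value
    return result

opts = ("forward_delay=1500 group_addr=1:80:c2:0:0:0 hello_time=200 "
        "multicast_snooping=1")
parsed = parse_bridge_opts(opts)
print(parsed["hello_time"])  # 200
```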
To configure Ethernet properties, click the Custom Properties tab and select ethtool_opts from the drop-down list. Enter a valid value using the format of the command-line arguments of ethtool. For example:
--coalesce em1 rx-usecs 14 sample-interval 3 --offload em2 rx on lro on tso off --change em1 speed 1000 duplex half
This field can accept wild cards. For example, to apply the same option to all of this network’s interfaces, use:
--coalesce * rx-usecs 14 sample-interval 3
The ethtool_opts option is not available by default; you need to add it using the engine configuration tool. See How to Set Up Manager to Use Ethtool for more information. For more information on ethtool properties, see the manual page by typing man ethtool in the command line.
To configure Fibre Channel over Ethernet (FCoE), click the Custom Properties tab and select fcoe from the drop-down list. Enter a valid key and value with the following syntax: key=value. At least enable=yes is required. You can also add dcb=[yes|no] and auto_vlan=[yes|no]. Separate multiple entries with a whitespace character. The fcoe option is not available by default; you need to add it using the engine configuration tool. See How to Set Up Manager to Use FCoE for more information.
Note: A separate, dedicated logical network is recommended for use with FCoE.
- To change the default network used by the host from the management network (ovirtmgmt) to a non-management network, configure the non-management network’s default route. See Configuring a Default Route for more information.
- If your logical network definition is not synchronized with the network configuration on the host, select the Sync network check box. For more information about unsynchronized hosts and how to synchronize them, see Synchronizing host networks.
- Select the Verify connectivity between Host and Engine check box to check network connectivity. This action only works if the host is in maintenance mode.
- Click OK.
If not all network interface cards for the host are displayed, click Management → Refresh Capabilities to update the list of network interface cards available for that host.
Troubleshooting
In some cases, making multiple concurrent changes to a host network configuration using the Setup Host Networks window or the setupNetwork command fails with an error in the event log: Operation failed: [Cannot setup Networks]. Another Setup Networks or Host Refresh process in progress on the host. Please try later. This error indicates that some of the changes were not configured on the host. It happens because, to preserve the integrity of the configuration state, only a single setup network command can be processed at a time. Other concurrent configuration commands are queued for up to a default timeout of 20 seconds. To help prevent this failure, use the engine-config command to increase the SetupNetworksWaitTimeoutSeconds timeout beyond 20 seconds. For example:
# engine-config --set SetupNetworksWaitTimeoutSeconds=40
Additional resources
2.4.4.4. Synchronizing Host Networks
The Manager defines a network interface as out-of-sync when the definition of the interface on the host differs from the definitions stored by the Manager.
Out-of-sync networks appear with an Out-of-sync icon in the host’s Network Interfaces tab and in the Setup Host Networks window.
When a host’s network is out of sync, the only activities that you can perform on the unsynchronized network in the Setup Host Networks window are detaching the logical network from the network interface or synchronizing the network.
Understanding How a Host Becomes out-of-sync
A host will become out of sync if:
You make configuration changes on the host rather than using the Edit Logical Networks window, for example:
- Changing the VLAN identifier on the physical host.
- Changing the Custom MTU on the physical host.
- You move a host to a different data center that has a network with the same name but with different values or parameters.
- You change a network’s VM Network property by manually removing the bridge from the host.
If you change the network’s MTU settings, you must propagate this change to the running virtual machines on the network: Hot unplug and replug every virtual machine’s vNIC that should apply the MTU setting, or restart the virtual machines. Otherwise, these interfaces fail when the virtual machine migrates to another host. For more information, see After network MTU change, some VMs and bridges have the old MTU and seeing packet drops and BZ#1766414.
Preventing Hosts from Becoming Unsynchronized
Following these best practices will prevent your host from becoming unsynchronized:
- Use the Administration Portal to make changes rather than making changes locally on the host.
- Edit VLAN settings according to the instructions in Editing VLAN Settings.
Synchronizing Hosts
Synchronizing a host’s network interface definitions involves taking the definitions from the Manager and applying them to the host. If these are not the definitions that you require, synchronize your hosts and then update their definitions from the Administration Portal. You can synchronize a host’s networks on three levels:
- Per logical network
- Per host
- Per cluster
Synchronizing Host Networks on the Logical Network Level
- Click Compute → Hosts.
- Click the host’s name. This opens the details view.
- Click the Network Interfaces tab.
- Click Setup Host Networks.
- Hover your cursor over the unsynchronized network and click the pencil icon. This opens the Edit Network window.
- Select the Sync network check box.
- Click OK to save the network change.
- Click OK to close the Setup Host Networks window.
Synchronizing a Host’s Networks on the Host level
- Click the Sync All Networks button in the host’s Network Interfaces tab to synchronize all of the host’s unsynchronized network interfaces.
Synchronizing a Host’s Networks on the Cluster level
- Click the Sync All Networks button in the cluster’s Logical Networks tab to synchronize all unsynchronized logical network definitions for the entire cluster.
You can also synchronize a host’s networks via the REST API. See syncallnetworks in the REST API Guide.
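As an illustrative sketch of the REST API call (the Manager FQDN and host ID below are placeholders), the syncallnetworks action is a POST with an empty action body, which any HTTP client such as curl can send as an administrator:

```python
# Illustrative only: builds the request triple for the syncallnetworks
# action. The FQDN and host ID are placeholder assumptions; send the
# request with any HTTP client, authenticated against the Manager.
BASE = "https://manager.example.com/ovirt-engine/api"
HOST_ID = "b6d2e5c8-1111-2222-3333-444444444444"

def sync_all_networks_request(base, host_id):
    """Return (method, url, xml_body) for the syncallnetworks action."""
    return ("POST", f"{base}/hosts/{host_id}/syncallnetworks", "<action/>")

method, url, body = sync_all_networks_request(BASE, HOST_ID)
print(method, url)
```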
2.4.4.5. Editing a Host’s VLAN Settings
To change the VLAN settings of a host, the host must be removed from the Manager, reconfigured, and re-added to the Manager.
To keep networking synchronized, do the following:
- Put the host in maintenance mode.
- Manually remove the management network from the host. This will make the host reachable over the new VLAN.
- Add the host to the cluster. Virtual machines that are not connected directly to the management network can be migrated between hosts safely.
The following warning message appears when the VLAN ID of the management network is changed:
Changing certain properties (e.g. VLAN, MTU) of the management network could lead to loss of connectivity to hosts in the data center, if its underlying network infrastructure isn't configured to accommodate the changes. Are you sure you want to proceed?
Proceeding causes all of the hosts in the data center to lose connectivity to the Manager and causes the migration of hosts to the new management network to fail. The management network will be reported as "out-of-sync".
If you change the management network’s VLAN ID, you must reinstall the host to apply the new VLAN ID.
2.4.4.6. Adding Multiple VLANs to a Single Network Interface Using Logical Networks
Multiple VLANs can be added to a single network interface to separate traffic on the one host.
You must have created more than one logical network, all with the Enable VLAN tagging check box selected in the New Logical Network or Edit Logical Network windows.
Procedure
- Click Compute → Hosts.
- Click the host’s name. This opens the details view.
- Click the Network Interfaces tab.
- Click Setup Host Networks.
- Drag your VLAN-tagged logical networks into the Assigned Logical Networks area next to the physical network interface. The physical network interface can have multiple logical networks assigned due to the VLAN tagging.
Edit the logical networks:
- Hover your cursor over an assigned logical network and click the pencil icon.
- If your logical network definition is not synchronized with the network configuration on the host, select the Sync network check box.
Select a Boot Protocol:
- None
- DHCP
- Static
- Provide the IP and Subnet Mask.
- Click OK.
- Select the Verify connectivity between Host and Engine check box to run a network check; this will only work if the host is in maintenance mode.
- Click OK.
Add the logical network to each host in the cluster by editing a NIC on each host in the cluster. After this is done, the network will become operational.
This process can be repeated multiple times, selecting and editing the same network interface each time on each host to add logical networks with different VLAN tags to a single network interface.
2.4.4.6.1. Copying host networks
To save time, you can copy a source host’s network configuration to a target host in the same cluster.
Copying the network configuration includes:
- Logical networks attached to the host, except the ovirtmgmt management network
- Bonds attached to interfaces
Limitations
- Do not copy network configurations that contain static IP addresses. Doing this sets the boot protocol on the target host to none.
- Copying a configuration to a target host with the same interface names as the source host but different physical network connections produces a wrong configuration.
- The target host must have an equal or greater number of interfaces than the source host. Otherwise, the operation fails.
- Copying QoS, DNS, and custom_properties is not supported.
- Network interface labels are not copied.
Copying host networks replaces ALL network settings on the target host except its attachment to the ovirtmgmt management network.
Prerequisites
- The number of NICs on the target host must be equal or greater than those on the source host. Otherwise, the operation fails.
- The hosts must be in the same cluster.
Procedure
- In the Administration Portal, click Compute → Hosts.
- Select the source host whose configuration you want to copy.
- Click Copy Host Networks. This opens the Copy Host Networks window.
- Use Target Host to select the host that should receive the configuration. The list only shows hosts that are in the same cluster.
- Click OK.
- Verify the network settings of the target host.
Tips
- Selecting multiple hosts disables the Copy Host Networks button and context menu item.
- Instead of using the Copy Host Networks button, you can right-click a host and select Copy Host Networks from the context menu.
- The Copy Host Networks button is also available in any host’s details view.
2.4.4.7. Assigning Additional IPv4 Addresses to a Host Network
A host network, such as the ovirtmgmt management network, is created with only one IP address when initially set up. This means that if a NIC’s configuration file is configured with multiple IP addresses, only the first listed IP address will be assigned to the host network. Additional IP addresses may be required if connecting to storage, or to a server on a separate private subnet using the same NIC.
The vdsm-hook-extra-ipv4-addrs hook allows you to configure additional IPv4 addresses for host networks. For more information about hooks, see VDSM and Hooks.
In the following procedure, the host-specific tasks must be performed on each host for which you want to configure additional IP addresses.
Procedure
On the host that you want to configure additional IPv4 addresses for, install the VDSM hook package. The package needs to be installed manually on Red Hat Enterprise Linux hosts and Red Hat Virtualization Hosts.
# dnf install vdsm-hook-extra-ipv4-addrs
On the Manager, run the following command to add the key:
# engine-config -s 'UserDefinedNetworkCustomProperties=ipv4_addrs=.*'
Restart the ovirt-engine service:
# systemctl restart ovirt-engine.service
- In the Administration Portal, click Compute → Hosts.
- Click the host’s name. This opens the details view.
- Click the Network Interfaces tab and click Setup Host Networks.
- Edit the host network interface by hovering the cursor over the assigned logical network and clicking the pencil icon.
- Select ipv4_addr from the Custom Properties drop-down list and add the additional IP address and prefix (for example 5.5.5.5/24). Multiple IP addresses must be comma-separated.
- Click OK to close the Edit Network window.
- Click OK to close the Setup Host Networks window.
The additional IP addresses will not be displayed in the Manager, but you can run the command ip addr show on the host to confirm that they have been added.
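The check can also be scripted by parsing `ip addr show` output. The following is a minimal sketch; the interface name and addresses are illustrative:

```python
import re

# Sample `ip addr show` output from a host where 5.5.5.5/24 was added
# through the ipv4_addrs custom property (illustrative values).
sample = """\
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
    inet 10.0.0.10/24 brd 10.0.0.255 scope global ens3
    inet 5.5.5.5/24 scope global secondary ens3
"""

def ipv4_addresses(ip_addr_output):
    """Return all IPv4 address/prefix pairs found in `ip addr show` output."""
    return re.findall(r"inet (\d+\.\d+\.\d+\.\d+/\d+)", ip_addr_output)

print(ipv4_addresses(sample))  # ['10.0.0.10/24', '5.5.5.5/24']
```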
2.4.4.8. Adding Network Labels to Host Network Interfaces
Network labels greatly simplify the administrative workload of assigning logical networks to host network interfaces. Setting a label on a role network (for instance, a migration network or a display network) causes a mass deployment of that network on all hosts. Such mass additions of networks are achieved through DHCP; this method was chosen over static addressing because typing in many static IP addresses does not scale.
There are two methods of adding labels to a host network interface:
- Manually, in the Administration Portal
- Automatically, with the LLDP Labeler service
Procedure
- Click Compute → Hosts.
- Click the host’s name. This opens the details view.
- Click the Network Interfaces tab.
- Click Setup Host Networks.
- Click Labels and right-click [New Label]. Select a physical network interface to label.
- Enter a name for the network label in the Label text field.
- Click OK.
You can automate the process of assigning labels to host network interfaces in the configured list of clusters with the LLDP Labeler service.
2.4.4.8.1. Configuring the LLDP Labeler
By default, LLDP Labeler runs as an hourly service. This option is useful if you make hardware changes (for example, NICs, switches, or cables) or change switch configurations.
Prerequisites
- The interfaces must be connected to a Juniper switch.
-
The Juniper switch must be configured to provide the
Port VLAN
using LLDP.
Procedure
Configure the username and password in /etc/ovirt-lldp-labeler/conf.d/ovirt-lldp-credentials.conf:
- username - the username of the Manager administrator. The default is admin@internal.
- password - the password of the Manager administrator. The default is 123456.
Configure the LLDP Labeler service by updating the following values in /etc/ovirt-lldp-labeler/conf.d/ovirt-lldp-credentials.conf:
- clusters - a comma-separated list of clusters on which the service should run. Wildcards are supported. For example, Cluster* defines LLDP Labeler to run on all clusters starting with the word Cluster. To run the service on all clusters in the data center, type *. The default is Def*.
- api_url - the full URL of the Manager’s API. The default is https://Manager_FQDN/ovirt-engine/api.
- ca_file - the path to the custom CA certificate file. Leave this value empty if you do not use custom certificates. The default is empty.
- auto_bonding - enables LLDP Labeler’s bonding capabilities. The default is true.
- auto_labeling - enables LLDP Labeler’s labeling capabilities. The default is true.
- Optionally, you can configure the service to run at a different time interval by changing the value of OnUnitActiveSec in /etc/ovirt-lldp-labeler/conf.d/ovirt-lldp-labeler.timer. The default is 1h.
- Configure the service to start now and at boot by entering the following command:
# systemctl enable --now ovirt-lldp-labeler
To invoke the service manually, enter the following command:
# /usr/bin/python /usr/share/ovirt-lldp-labeler/ovirt_lldp_labeler_cli.py
You have added a network label to a host network interface. Newly created logical networks with the same label are automatically assigned to all host network interfaces with that label. Removing a label from a logical network automatically removes that logical network from all host network interfaces with that label.
2.4.4.9. Changing the FQDN of a Host
Use the following procedure to change the fully qualified domain name of hosts.
Procedure
- Place the host into maintenance mode so the virtual machines are live migrated to another host. See Moving a host to maintenance mode for more information. Alternatively, manually shut down or migrate all the virtual machines to another host. See Manually Migrating Virtual Machines in the Virtual Machine Management Guide for more information.
- Click Remove, and click OK to remove the host from the Administration Portal.
Use the hostnamectl tool to update the host name. For more options, see Configure Host Names in the Red Hat Enterprise Linux 7 Networking Guide.
# hostnamectl set-hostname NEW_FQDN
- Reboot the host.
- Re-register the host with the Manager. See Adding standard hosts to the Manager for more information.
2.4.4.9.1. IPv6 Networking Support
Red Hat Virtualization supports static IPv6 networking in most contexts.
Red Hat Virtualization requires IPv6 to remain enabled on the computer or virtual machine where you are running the Manager (also called "the Manager machine"). Do not disable IPv6 on the Manager machine, even if your systems do not use it.
Limitations for IPv6
- Only static IPv6 addressing is supported. Dynamic IPv6 addressing with DHCP or Stateless Address Autoconfiguration is not supported.
- Dual-stack addressing, IPv4 and IPv6, is not supported.
- OVN networking can be used with only IPv4 or IPv6.
- Switching clusters from IPv4 to IPv6 is not supported.
- Only a single gateway per host can be set for IPv6.
- If both networks share a single gateway (are on the same subnet), you can move the default route role from the management network (ovirtmgmt) to another logical network. The host and Manager should have the same IPv6 gateway. If the host and Manager are not on the same subnet, the Manager might lose connectivity with the host because the IPv6 gateway was removed.
- Using a glusterfs storage domain with an IPv6-addressed gluster server is not supported.
2.4.4.9.2. Setting Up and Configuring SR-IOV
This topic summarizes the steps for setting up and configuring SR-IOV, with links out to topics that cover each step in detail.
Prerequisites
Set up your hardware in accordance with the Hardware Considerations for Implementing SR-IOV.
Procedure
To set up and configure SR-IOV, complete the following tasks.
Notes
- The number of the 'passthrough' vNICs depends on the number of available virtual functions (VFs) on the host. For example, to run a virtual machine (VM) with three SR-IOV cards (vNICs), the host must have three or more VFs enabled.
- Hotplug and unplug are supported.
- Live migration is supported.
- To migrate a VM, the destination host must also have enough available VFs to receive the VM. During the migration, the VM releases a number of VFs on the source host and occupies the same number of VFs on the destination host.
- On the host, you will see a device, link, or iface like any other interface. That device disappears when it is attached to a VM and reappears when it is released.
- Avoid attaching a host device directly to a VM for the SR-IOV feature.
- To use a VF as a trunk port with several VLANs and configure the VLANs within the guest, see Cannot configure VLAN on SR-IOV VF interfaces inside the Virtual Machine.
Here is an example of what the libvirt XML for the interface would look like:
<interface type='hostdev'>
  <mac address='00:1a:yy:xx:vv:xx'/>
  <driver name='vfio'/>
  <source>
    <address type='pci' domain='0x0000' bus='0x05' slot='0x10' function='0x0'/>
  </source>
  <alias name='ua-18400536-5688-4477-8471-be720e9efc68'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
</interface>
Troubleshooting
The following example shows you how to get diagnostic information about the VFs attached to an interface.
# ip -s link show dev enp5s0f0
1: enp5s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP mode DEFAULT qlen 1000
    link/ether 86:e2:ba:c2:50:f0 brd ff:ff:ff:ff:ff:ff
    RX: bytes  packets  errors  dropped  overrun  mcast
    30931671   218401   0       0        0        19165434
    TX: bytes  packets  errors  dropped  carrier  collsns
    997136     13661    0       0        0        0
    vf 0 MAC 02:00:00:00:00:01, spoof checking on, link-state auto, trust off, query_rss off
    vf 1 MAC 00:1a:4b:16:01:5e, spoof checking on, link-state auto, trust off, query_rss off
    vf 2 MAC 02:00:00:00:00:01, spoof checking on, link-state auto, trust off, query_rss off
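The VF lines in this output can also be checked programmatically. The following sketch (not part of RHV) extracts the VF MAC addresses and flags duplicates, such as vf 0 and vf 2 in the example sharing 02:00:00:00:00:01:

```python
import re
from collections import Counter

# VF lines as printed by `ip -s link show` (copied from the example above).
sample = """\
vf 0 MAC 02:00:00:00:00:01, spoof checking on, link-state auto, trust off
vf 1 MAC 00:1a:4b:16:01:5e, spoof checking on, link-state auto, trust off
vf 2 MAC 02:00:00:00:00:01, spoof checking on, link-state auto, trust off
"""

def vf_macs(output):
    """Map VF index -> MAC address from `ip -s link show` output."""
    return dict(re.findall(r"vf (\d+) MAC ([0-9a-fA-F:]+),", output))

macs = vf_macs(sample)
duplicates = sorted(m for m, n in Counter(macs.values()).items() if n > 1)
print(duplicates)  # ['02:00:00:00:00:01']
```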
2.4.4.9.2.1. Additional Resources
2.4.5. Network Bonding
2.4.5.1. Bonding methods
Network bonding combines multiple NICs into a bond device, with the following advantages:
- The transmission speed of bonded NICs is greater than that of a single NIC.
- Network bonding provides fault tolerance, because the bond device will not fail unless all its NICs fail.
Using NICs of the same make and model ensures that they support the same bonding options and modes.
Red Hat Virtualization’s default bonding mode, (Mode 4) Dynamic Link Aggregation, requires a switch that supports 802.3ad.
The logical networks of a bond must be compatible. A bond can support only one non-VLAN logical network. The rest of the logical networks must have unique VLAN IDs.
Bonding must be enabled for the switch ports. Consult the manual provided by your switch vendor for specific instructions.
You can create a network bond device using one of the following methods:
- Manually, in the Administration Portal, for a specific host
- Automatically, using LLDP Labeler, for unbonded NICs of all hosts in a cluster or data center
If your environment uses iSCSI storage and you want to implement redundancy, follow the instructions for configuring iSCSI multipathing.
2.4.5.2. Creating a Bond Device in the Administration Portal
You can create a bond device on a specific host in the Administration Portal. The bond device can carry both VLAN-tagged and untagged traffic.
Procedure
- Click → .
- Click the host’s name. This opens the details view.
- Click the Network Interfaces tab to list the physical network interfaces attached to the host.
- Click Setup Host Networks.
- Check the switch configuration. If the switch has been configured to provide Link Layer Discovery Protocol (LLDP) information, hover your cursor over a physical NIC to view the switch port’s aggregation configuration.
Drag and drop a NIC onto another NIC or onto a bond.
Note: Dragging two NICs together forms a new bond. Dragging a NIC onto a bond adds the NIC to the existing bond.
If the logical networks are incompatible, the bonding operation is blocked.
Select the Bond Name and Bonding Mode from the drop-down menus. See Bonding Modes for details.
If you select the Custom bonding mode, you can enter bonding options in the text field, as in the following examples:
- If your environment does not report link states with ethtool, you can set ARP monitoring by entering mode=1 arp_interval=1 arp_ip_target=192.168.0.2.
- You can designate a NIC with higher throughput as the primary interface by entering mode=1 primary=eth0.
For a comprehensive list of bonding options and their descriptions, see the Linux Ethernet Bonding Driver HOWTO on Kernel.org.
- Click OK.
Attach a logical network to the new bond and configure it. See Editing Host Network Interfaces and Assigning Logical Networks to Hosts for instructions.
Note: You cannot attach a logical network directly to an individual NIC in the bond.
- Optionally, you can select Verify connectivity between Host and Engine if the host is in maintenance mode.
- Click OK.
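Custom bonding option strings such as those above are whitespace-separated key=value pairs. As a small, unofficial sanity check (not Manager code), such a string can be validated like this:

```python
def parse_bonding_opts(opts):
    """Split 'key=value key=value ...' into a dict, rejecting malformed entries."""
    result = {}
    for entry in opts.split():
        key, sep, value = entry.partition("=")
        if not sep or not key:
            raise ValueError(f"malformed bonding option: {entry!r}")
        result[key] = value
    return result

def arp_monitoring_consistent(opts):
    """ARP monitoring needs both arp_interval and arp_ip_target."""
    return ("arp_interval" in opts) == ("arp_ip_target" in opts)

opts = parse_bonding_opts("mode=1 arp_interval=1 arp_ip_target=192.168.0.2")
print(opts["mode"], arp_monitoring_consistent(opts))  # 1 True
```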
2.4.5.3. Creating a Bond Device with the LLDP Labeler Service
The LLDP Labeler service enables you to create a bond device automatically with all unbonded NICs, for all the hosts in one or more clusters or in the entire data center. The bonding mode is (Mode 4) Dynamic Link Aggregation (802.3ad).
NICs with incompatible logical networks cannot be bonded.
2.4.5.3.1. Configuring the LLDP Labeler
By default, LLDP Labeler runs as an hourly service. This option is useful if you make hardware changes (for example, NICs, switches, or cables) or change switch configurations.
Prerequisites
- The interfaces must be connected to a Juniper switch.
- The Juniper switch must be configured for Link Aggregation Control Protocol (LACP) using LLDP.
Procedure
- Configure the username and password in /etc/ovirt-lldp-labeler/conf.d/ovirt-lldp-credentials.conf:
  - username - the username of the Manager administrator. The default is admin@internal.
  - password - the password of the Manager administrator. The default is 123456.
- Configure the LLDP Labeler service by updating the following values in /etc/ovirt-lldp-labeler/conf.d/ovirt-lldp-credentials.conf:
  - clusters - a comma-separated list of clusters on which the service should run. Wildcards are supported. For example, Cluster* defines LLDP Labeler to run on all clusters whose names start with Cluster. To run the service on all clusters in the data center, type *. The default is Def*.
  - api_url - the full URL of the Manager’s API. The default is https://Manager_FQDN/ovirt-engine/api.
  - ca_file - the path to the custom CA certificate file. Leave this value empty if you do not use custom certificates. The default is empty.
  - auto_bonding - enables LLDP Labeler’s bonding capabilities. The default is true.
  - auto_labeling - enables LLDP Labeler’s labeling capabilities. The default is true.
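Putting the values above together, a completed configuration might look like the following sketch. The key=value layout and all values here are illustrative assumptions; check the files shipped in /etc/ovirt-lldp-labeler/conf.d/ for the exact format:

```shell
# Illustrative LLDP Labeler settings (all values are examples); written to a
# local file here, the real configuration lives under /etc/ovirt-lldp-labeler/conf.d/
cat > ovirt-lldp-labeler.conf.example <<'EOF'
username=admin@internal
password=123456
clusters=Production*,Test*
api_url=https://manager.example.com/ovirt-engine/api
ca_file=
auto_bonding=true
auto_labeling=true
EOF

# Quick sanity check of the cluster pattern
grep '^clusters=' ovirt-lldp-labeler.conf.example
```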
- Optionally, you can configure the service to run at a different time interval by changing the value of OnUnitActiveSec in /etc/ovirt-lldp-labeler/conf.d/ovirt-lldp-labeler.timer. The default is 1h.
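For example, to run the service every 30 minutes instead of hourly, the timer value could be overridden with a systemd drop-in (a sketch; the drop-in path and the 30min value are illustrative, and you can also edit the shipped timer file directly as described above):

```ini
# /etc/systemd/system/ovirt-lldp-labeler.timer.d/override.conf (illustrative path)
[Timer]
OnUnitActiveSec=30min
```

After editing, run systemctl daemon-reload so systemd picks up the change.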
. Configure the service to start now and at boot by entering the following command:
# systemctl enable --now ovirt-lldp-labeler
To invoke the service manually, enter the following command:
# /usr/bin/python /usr/share/ovirt-lldp-labeler/ovirt_lldp_labeler_cli.py
Attach a logical network to the new bond and configure it. See Editing Host Network Interfaces and Assigning Logical Networks to Hosts for instructions.
Note: You cannot attach a logical network directly to an individual NIC in the bond.
2.4.5.4. Bonding Modes
The packet dispersal algorithm is determined by the bonding mode. (See the Linux Ethernet Bonding Driver HOWTO for details.) Red Hat Virtualization’s default bonding mode is (Mode 4) Dynamic Link Aggregation (802.3ad).
Red Hat Virtualization supports the following bonding modes, because they can be used in virtual machine (bridged) networks:
(Mode 1) Active-Backup
- One NIC is active. If the active NIC fails, one of the backup NICs replaces it as the only active NIC in the bond. The MAC address of this bond is visible only on the network adapter port. This prevents MAC address confusion that might occur if the MAC address of the bond were to change, reflecting the MAC address of the new active NIC.
(Mode 2) Load Balance (balance-xor)
- The NIC that transmits packets is selected by performing an XOR operation on the source and destination MAC addresses, modulo the total number of NICs. This algorithm ensures that the same NIC is selected for each destination MAC address.
(Mode 3) Broadcast
- Packets are transmitted to all NICs.
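The balance-xor selection rule described above can be sketched in shell arithmetic. The MAC addresses are reduced to single illustrative bytes here; the real driver hashes the full addresses:

```shell
# Sketch of the balance-xor NIC selection rule: the transmitting NIC index
# is (source MAC XOR destination MAC) modulo the number of NICs in the bond.
src_mac=0x5e   # last byte of source MAC, e.g. 00:1a:4b:16:01:5e (illustrative)
dst_mac=0x2f   # last byte of destination MAC (illustrative)
nics=2         # NICs in the bond

nic_index=$(( (src_mac ^ dst_mac) % nics ))
echo "selected NIC index: $nic_index"
```

Because the hash depends only on the address pair, every frame for a given destination leaves through the same NIC.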
(Mode 4) Dynamic Link Aggregation (802.3ad) (Default)
- The NICs are aggregated into groups that share the same speed and duplex settings. All the NICs in the active aggregation group are used.
Note: (Mode 4) Dynamic Link Aggregation (802.3ad) requires a switch that supports 802.3ad.
The bonded NICs must have the same aggregator IDs. Otherwise, the Manager displays a warning exclamation mark icon on the bond in the Network Interfaces tab, and the ad_partner_mac value of the bond is reported as 00:00:00:00:00:00. You can check the aggregator IDs by entering the following command:
# cat /proc/net/bonding/bond0
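Each member NIC reports its own Aggregator ID in that output. The following sketch checks that all members share a single ID, using captured sample text (the interface names and IDs are illustrative):

```shell
# Sample member entries from /proc/net/bonding/bond0 (illustrative values)
sample='Slave Interface: eth0
Aggregator ID: 1
Slave Interface: eth1
Aggregator ID: 1'

# All members of a healthy 802.3ad bond share the same Aggregator ID
ids=$(printf '%s\n' "$sample" | awk -F': ' '/^Aggregator ID/ {print $2}' | sort -u)
if [ "$(printf '%s\n' "$ids" | wc -l)" -eq 1 ]; then
  echo "aggregator IDs match: $ids"
else
  echo "aggregator ID mismatch"
fi
```

A mismatch here corresponds to the warning icon and the 00:00:00:00:00:00 ad_partner_mac value described above.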
The following bonding modes are incompatible with virtual machine logical networks and therefore only non-VM logical networks can be attached to bonds using these modes:
(Mode 0) Round-Robin
- The NICs transmit packets in sequential order. Packets are transmitted in a loop that begins with the first available NIC in the bond and ends with the last available NIC in the bond. Subsequent loops start with the first available NIC.
(Mode 5) Balance-TLB, also called Transmit Load-Balance
- Outgoing traffic is distributed, based on the load, over all the NICs in the bond. Incoming traffic is received by the active NIC. If the NIC receiving incoming traffic fails, another NIC is assigned.
(Mode 6) Balance-ALB, also called Adaptive Load-Balance
- (Mode 5) Balance-TLB is combined with receive load-balancing for IPv4 traffic. ARP negotiation is used for balancing the receive load.
2.5. Hosts
2.5.1. Introduction to Hosts
Hosts, also known as hypervisors, are the physical servers on which virtual machines run. Full virtualization is provided by using a loadable Linux kernel module called Kernel-based Virtual Machine (KVM).
KVM can concurrently host multiple virtual machines running either Windows or Linux operating systems. Virtual machines run as individual Linux processes and threads on the host machine and are managed remotely by the Red Hat Virtualization Manager. A Red Hat Virtualization environment has one or more hosts attached to it.
Red Hat Virtualization supports two methods of installing hosts. You can use the Red Hat Virtualization Host (RHVH) installation media, or install hypervisor packages on a standard Red Hat Enterprise Linux installation.
You can identify the host type of an individual host in the Red Hat Virtualization Manager by selecting the host’s name. This opens the details view. Then look at the OS Description under Software.
Hosts use tuned profiles, which provide virtualization optimizations. For more information on tuned, see TuneD Profiles in the Red Hat Enterprise Linux document Monitoring and managing system status and performance.
The Red Hat Virtualization Host has security features enabled. Security Enhanced Linux (SELinux) and the firewall are fully configured and on by default. The status of SELinux on a selected host is reported under SELinux mode in the General tab of the details view. The Manager can open required ports on Red Hat Enterprise Linux hosts when it adds them to the environment.
A host is a physical 64-bit server with the Intel VT or AMD-V extensions running Red Hat Enterprise Linux 7 AMD64/Intel 64 version.
A physical host on the Red Hat Virtualization platform:
- Must belong to only one cluster in the system.
- Must have CPUs that support the AMD-V or Intel VT hardware virtualization extensions.
- Must have CPUs that support all functionality exposed by the virtual CPU type selected upon cluster creation.
- Has a minimum of 2 GB RAM.
- Can have an assigned system administrator with system permissions.
Administrators can receive the latest security advisories from the Red Hat Virtualization watch list. Subscribe to the Red Hat Virtualization watch list to receive new security advisories for Red Hat Virtualization products by email.
2.5.2. Red Hat Virtualization Host
Red Hat Virtualization Host (RHVH) is installed using a special build of Red Hat Enterprise Linux with only the packages required to host virtual machines. It uses an Anaconda installation interface based on the one used by Red Hat Enterprise Linux hosts, and can be updated through the Red Hat Virtualization Manager or via yum. Using the yum command is the only way to install additional packages and have them persist after an upgrade.
RHVH features a Cockpit web interface for monitoring the host’s resources and performing administrative tasks. Direct access to RHVH via SSH or console is not supported, so the Cockpit web interface provides a graphical user interface for tasks that are performed before the host is added to the Red Hat Virtualization Manager, such as configuring networking or running terminal commands via the Terminal sub-tab.
Access the Cockpit web interface at https://HostFQDNorIP:9090 in your web browser. Cockpit for RHVH includes a custom Virtualization dashboard that displays the host’s health status, SSH Host Key, self-hosted engine status, virtual machines, and virtual machine statistics.
Starting in Red Hat Virtualization version 4.4 SP1, RHVH uses systemd-coredump to gather, save, and process core dumps. For more information, see the documentation for core dump storage configuration files and the systemd-coredump service.
In Red Hat Virtualization 4.4 and earlier, RHVH uses the Automatic Bug Reporting Tool (ABRT) to collect meaningful debug information about application crashes. For more information, see the Red Hat Enterprise Linux System Administrator’s Guide.
Custom boot kernel arguments can be added to Red Hat Virtualization Host using the grubby tool. The grubby tool makes persistent changes to the grub.cfg file. Navigate to the Terminal sub-tab in the host’s Cockpit web interface to use grubby commands. See the Red Hat Enterprise Linux System Administrator’s Guide for more information.
Do not create untrusted users on RHVH, as this can lead to exploitation of local security vulnerabilities.
2.5.3. Red Hat Enterprise Linux hosts
You can use a Red Hat Enterprise Linux 7 installation on capable hardware as a host. Red Hat Virtualization supports hosts running Red Hat Enterprise Linux 7 Server AMD64/Intel 64 version with Intel VT or AMD-V extensions. To use your Red Hat Enterprise Linux machine as a host, you must also attach the Red Hat Enterprise Linux Server and Red Hat Virtualization subscriptions.
Adding a host can take some time, as the following steps are completed by the platform: virtualization checks, installation of packages, and the creation of a bridge. Use the details view to monitor the process as the host and management system establish a connection.
Optionally, you can install a Cockpit web interface for monitoring the host’s resources and performing administrative tasks. The Cockpit web interface provides a graphical user interface for tasks that are performed before the host is added to the Red Hat Virtualization Manager, such as configuring networking or running terminal commands via the Terminal sub-tab.
Third-party watchdogs should not be installed on Red Hat Enterprise Linux hosts, as they can interfere with the watchdog daemon provided by VDSM.
2.5.4. Satellite Host Provider Hosts
Hosts provided by a Satellite host provider can also be used as virtualization hosts by the Red Hat Virtualization Manager. After a Satellite host provider has been added to the Manager as an external provider, any hosts that it provides can be added to and used in Red Hat Virtualization in the same way as Red Hat Virtualization Hosts (RHVH) and Red Hat Enterprise Linux hosts.
2.5.5. Host Tasks
2.5.5.1. Adding Standard Hosts to the Red Hat Virtualization Manager
Always use the RHV Manager to modify the network configuration of hosts in your clusters. Otherwise, you might create an unsupported configuration. For details, see Network Manager Stateful Configuration (nmstate).
Adding a host to your Red Hat Virtualization environment can take some time, as the following steps are completed by the platform: virtualization checks, installation of packages, and creation of a bridge.
Procedure
- From the Administration Portal, click → .
- Click .
- Use the drop-down list to select the Data Center and Host Cluster for the new host.
- Enter the Name and the Address of the new host. The standard SSH port, port 22, is auto-filled in the SSH Port field.
Select an authentication method to use for the Manager to access the host.
- Enter the root user’s password to use password authentication.
- Alternatively, copy the key displayed in the SSH PublicKey field to /root/.ssh/authorized_keys on the host to use public key authentication.
Optionally, click the Advanced Parameters button to change the following advanced host settings:
- Disable automatic firewall configuration.
- Add a host SSH fingerprint to increase security. You can add it manually, or fetch it automatically.
- Optionally configure power management, where the host has a supported power management card. For information on power management configuration, see Host Power Management Settings Explained in the Administration Guide.
- Click .
The new host displays in the list of hosts with a status of Installing, and you can view the progress of the installation in the Events section of the Notification Drawer. After a brief delay the host status changes to Up.
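When you use public key authentication, appending the key from the SSH PublicKey field amounts to the following shell steps. This sketch works in a scratch directory with a made-up key; on the host the target file is /root/.ssh/authorized_keys:

```shell
# Append the Manager's public key to authorized_keys (sketch in a scratch
# directory; on the real host the target is /root/.ssh/authorized_keys)
mkdir -p scratch/.ssh && chmod 700 scratch/.ssh
manager_key='ssh-rsa AAAAB3Nza...example engine@manager.example.com'  # illustrative key
printf '%s\n' "$manager_key" >> scratch/.ssh/authorized_keys
chmod 600 scratch/.ssh/authorized_keys

# Confirm the key landed in the file
grep -c 'engine@manager' scratch/.ssh/authorized_keys
```

The 700/600 permissions matter: sshd refuses keys in files that are writable by other users.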
2.5.5.2. Adding a Satellite Host Provider Host
The process for adding a Satellite host provider host is almost identical to that of adding a Red Hat Enterprise Linux host except for the method by which the host is identified in the Manager. The following procedure outlines how to add a host provided by a Satellite host provider.
Procedure
- Click → .
- Click New.
- Use the drop-down menu to select the Host Cluster for the new host.
- Select the Foreman/Satellite check box to display the options for adding a Satellite host provider host and select the provider from which the host is to be added.
Select either Discovered Hosts or Provisioned Hosts.
- Discovered Hosts (default option): Select the host, host group, and compute resources from the drop-down lists.
Provisioned Hosts: Select a host from the Providers Hosts drop-down list.
Any details regarding the host that can be retrieved from the external provider are automatically set, and can be edited as desired.
- Enter the Name and SSH Port (Provisioned Hosts only) of the new host.
Select an authentication method to use with the host.
- Enter the root user’s password to use password authentication.
- Copy the key displayed in the SSH PublicKey field to /root/.ssh/authorized_keys on the host to use public key authentication (Provisioned Hosts only).
You have now completed the mandatory steps to add a Red Hat Enterprise Linux host. Click the Advanced Parameters drop-down button to show the advanced host settings.
- Optionally disable automatic firewall configuration.
- Optionally add a host SSH fingerprint to increase security. You can add it manually, or fetch it automatically.
- You can configure the Power Management, SPM, Console, and Network Provider using the applicable tabs now; however, as these are not fundamental to adding a Red Hat Enterprise Linux host, they are not covered in this procedure.
- Click to add the host and close the window.
The new host displays in the list of hosts with a status of Installing, and you can view the progress of the installation in the details view. After installation is complete, the status changes to Reboot. The host must be activated for the status to change to Up.
2.5.5.3. Setting up Satellite errata viewing for a host
In the Administration Portal, you can configure a host to view errata from Red Hat Satellite. After you associate a host with a Red Hat Satellite provider, you can receive updates in the host configuration dashboard about available errata and their importance, and decide when it is practical to apply the updates.
Red Hat Virtualization 4.4 supports viewing errata with Red Hat Satellite 6.6.
Prerequisites
- The Satellite server must be added as an external provider.
The Manager and any hosts on which you want to view errata must be registered in the Satellite server by their respective FQDNs. This ensures that external content host IDs do not need to be maintained in Red Hat Virtualization.
Important: Hosts added using an IP address cannot report errata.
- The Satellite account that manages the host must have Administrator permissions and a default organization set.
- The host must be registered to the Satellite server.
- Use Red Hat Satellite remote execution to manage packages on hosts.
The Katello agent is deprecated and will be removed in a future Satellite version. Migrate your processes to use the remote execution feature to update clients remotely.
Procedure
- Click → and select the host.
- Click Edit.
- Select the Use Foreman/Satellite check box.
- Select the required Satellite server from the drop-down list.
- Click .
The host is now configured to show the available errata, and their importance, in the same dashboard used to manage the host’s configuration.
Additional resources
- Adding a Red Hat Satellite Instance for Host Provisioning
- Host Management Without Goferd and Katello Agent in the Red Hat Satellite document Managing Hosts
2.5.5.3.1. Configuring a Host for PCI Passthrough
This is one in a series of topics that show how to set up and configure SR-IOV on Red Hat Virtualization. For more information, see Setting Up and Configuring SR-IOV.
Enabling PCI passthrough allows a virtual machine to use a host device as if the device were directly attached to the virtual machine. To enable the PCI passthrough function, you must enable virtualization extensions and the IOMMU function. The following procedure requires you to reboot the host. If the host is attached to the Manager already, ensure you place the host into maintenance mode first.
Prerequisites
- Ensure that the host hardware meets the requirements for PCI device passthrough and assignment. See PCI Device Requirements for more information.
Configuring a Host for PCI Passthrough
- Enable the virtualization extension and IOMMU extension in the BIOS. See Enabling Intel VT-x and AMD-V virtualization hardware extensions in BIOS in the Red Hat Enterprise Linux Virtualization Deployment and Administration Guide for more information.
Enable the IOMMU flag in the kernel by selecting the Hostdev Passthrough & SR-IOV check box when adding the host to the Manager or by editing the grub configuration file manually.
- To enable the IOMMU flag from the Administration Portal, see Adding Standard Hosts to the Red Hat Virtualization Manager and Kernel Settings Explained.
- To edit the grub configuration file manually, see Enabling IOMMU Manually.
- For GPU passthrough, you need to run additional configuration steps on both the host and the guest system. See GPU device passthrough: Assigning a host GPU to a single virtual machine in Setting up an NVIDIA GPU for a virtual machine in Red Hat Virtualization for more information.
Enabling IOMMU Manually
Enable IOMMU by editing the grub configuration file.
Note: If you are using IBM POWER8 hardware, skip this step, as IOMMU is enabled by default.
For Intel, boot the machine, and append intel_iommu=on to the end of the GRUB_CMDLINE_LINUX line in the grub configuration file.
# vi /etc/default/grub
...
GRUB_CMDLINE_LINUX="nofb splash=quiet console=tty0 ... intel_iommu=on ..."
For AMD, boot the machine, and append amd_iommu=on to the end of the GRUB_CMDLINE_LINUX line in the grub configuration file.
# vi /etc/default/grub
...
GRUB_CMDLINE_LINUX="nofb splash=quiet console=tty0 ... amd_iommu=on ..."
Note: If intel_iommu=on is set or an AMD IOMMU is detected, you can try adding iommu=pt. The pt option only enables IOMMU for devices used in passthrough and provides better host performance. However, the option might not be supported on all hardware. Revert to the previous option if the pt option does not work for your host.
If the passthrough fails because the hardware does not support interrupt remapping, you can consider enabling the allow_unsafe_interrupts option if the virtual machines are trusted. The allow_unsafe_interrupts option is not enabled by default because enabling it potentially exposes the host to MSI attacks from virtual machines. To enable the option:
# vi /etc/modprobe.d
options vfio_iommu_type1 allow_unsafe_interrupts=1
Refresh the grub.cfg file and reboot the host for these changes to take effect:
# grub2-mkconfig -o /boot/grub2/grub.cfg
# reboot
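After the reboot, you can confirm that the kernel picked up the flag. This sketch tests a captured command-line string (the sample is illustrative); on the host, check the contents of /proc/cmdline itself:

```shell
# Sample kernel command line (illustrative); on the host: cat /proc/cmdline
cmdline='BOOT_IMAGE=/vmlinuz-4.18.0 root=/dev/mapper/rhvh-root ro intel_iommu=on'

# Look for either the Intel or the AMD IOMMU flag
case "$cmdline" in
  *intel_iommu=on*|*amd_iommu=on*) echo "IOMMU flag present" ;;
  *) echo "IOMMU flag missing" ;;
esac
```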
2.5.5.3.2. Enabling nested virtualization for all virtual machines
Using hooks to enable nested virtualization is a Technology Preview feature. Technology Preview features are not supported with Red Hat production service-level agreements (SLAs) and might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information, see Red Hat Technology Preview Features Support Scope.
Nested virtualization enables virtual machines to host other virtual machines. For clarity, we will call these the parent virtual machines and child virtual machines.
Child virtual machines are only visible to and managed by users who have access to the parent virtual machine. They are not visible to Red Hat Virtualization (RHV) administrators.
By default, nested virtualization is not enabled in RHV. To enable nested virtualization, you install a VDSM hook, vdsm-hook-nestedvt
, on all of the hosts in the cluster. Then, all of the virtual machines that run on these hosts can function as parent virtual machines.
You should only run parent virtual machines on hosts that support nested virtualization. If a parent virtual machine migrates to a host that does not support nested virtualization, its child virtual machines fail. To prevent this from happening, configure all of the hosts in the cluster to support nested virtualization. Otherwise, restrict parent virtual machines from migrating to hosts that do not support nested virtualization.
Take precautions to prevent parent virtual machines from migrating to hosts that do not support nested virtualization.
Procedure
- In the Administration Portal, click → .
- Select a host in the cluster where you want to enable nested virtualization and click → and .
- Select the host again, click , and log into the host console.
Install the VDSM hook:
# dnf install vdsm-hook-nestedvt
- Reboot the host.
Log into the host console again and verify that nested virtualization is enabled:
$ cat /sys/module/kvm*/parameters/nested
If this command returns Y or 1, the feature is enabled.
- Repeat this procedure for all of the hosts in the cluster.
Additional resources
2.5.5.3.3. Enabling nested virtualization for individual virtual machines
Nested virtualization is a Technology Preview feature. Technology Preview features are not supported with Red Hat production service-level agreements (SLAs) and might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information, see Red Hat Technology Preview Features Support Scope.
Nested virtualization enables virtual machines to host other virtual machines. For clarity, we will call these the parent virtual machines and child virtual machines.
Child virtual machines are only visible to and managed by users who have access to the parent virtual machine. They are not visible to Red Hat Virtualization (RHV) administrators.
To enable nested virtualization on specific virtual machines, not all virtual machines, you configure a host or hosts to support nested virtualization. Then you configure the virtual machine or virtual machines to run on those specific hosts and enable Pass-Through Host CPU. This option lets the virtual machines use the nested virtualization settings you just configured on the host. This option also restricts which hosts the virtual machines can run on and requires manual migration.
Otherwise, to enable nested virtualization for all of the virtual machines in a cluster, see Enabling nested virtualization for all virtual machines.
Only run parent virtual machines on hosts that support nested virtualization. If you migrate a parent virtual machine to a host that does not support nested virtualization, its child virtual machines will fail.
Do not migrate parent virtual machines to hosts that do not support nested virtualization.
Avoid live migration of parent virtual machines that are running child virtual machines. Even if the source and destination hosts are identical and support nested virtualization, the live migration can cause the child virtual machines to fail. Instead, shut down virtual machines before migration.
Procedure
Configure the hosts to support nested virtualization:
- In the Administration Portal, click → .
- Select a host in the cluster where you want to enable nested virtualization and click → and .
- Select the host again, click , and log into the host console.
- In the Edit Host window, select the Kernel tab.
- Under Kernel boot parameters, if the checkboxes are greyed-out, click .
Select Nested Virtualization and click .
This action displays a kvm-<architecture>.nested=1 parameter in Kernel command line. The following steps add this parameter to the Current kernel CMD line.
- Click → .
- When the host status returns to Up, click → under Power Management or SSH Management.
Verify that nested virtualization is enabled. Log into the host console and enter:
$ cat /sys/module/kvm*/parameters/nested
If this command returns Y or 1, the feature is enabled.
- Repeat this procedure for all of the hosts you need to run parent virtual machines.
Enable nested virtualization in specific virtual machines:
- In the Administration Portal, click → .
- Select a virtual machine and click
- In the Edit Virtual Machine window, click and select the Host tab.
- Under Start Running On, click Specific Host and select the host or hosts you configured to support nested virtualization.
Under CPU Options, select Pass-Through Host CPU. This action automatically sets the Migration mode to Allow manual migration only.
Note: In RHV version 4.2, you can only enable Pass-Through Host CPU when Do not allow migration is selected.
Additional resources
- VDSM hooks
- Creating nested virtual machines in the RHEL documentation.
2.5.5.4. Moving a Host to Maintenance Mode
Many common maintenance tasks, including network configuration and deployment of software updates, require that hosts be placed into maintenance mode. Hosts should be placed into maintenance mode before any event that might cause VDSM to stop working properly, such as a reboot, or issues with networking or storage.
When a host is placed into maintenance mode the Red Hat Virtualization Manager attempts to migrate all running virtual machines to alternative hosts. The standard prerequisites for live migration apply, in particular there must be at least one active host in the cluster with capacity to run the migrated virtual machines.
Virtual machines that are pinned to the host and cannot be migrated are shut down. You can check which virtual machines are pinned to the host by clicking the Virtual Machines tab of the host’s details view.
Placing a Host into Maintenance Mode
- Click → and select the desired host.
- Click → . This opens the Maintenance Host(s) confirmation window.
Optionally, enter a Reason for moving the host into maintenance mode, which will appear in the logs and when the host is activated again. Then, click
Note: The host maintenance Reason field only appears if it has been enabled in the cluster settings. See Cluster General Settings Explained for more information.
Optionally, select the required options for hosts that support Gluster.
Select the Ignore Gluster Quorum and Self-Heal Validations option to avoid the default checks. By default, the Manager checks that the Gluster quorum is not lost when the host is moved to maintenance mode. The Manager also checks that there is no self-heal activity that will be affected by moving the host to maintenance mode. If the Gluster quorum will be lost or if there is self-heal activity that will be affected, the Manager prevents the host from being placed into maintenance mode. Only use this option if there is no other way to place the host in maintenance mode.
Select the Stop Gluster Service option to stop all Gluster services while moving the host to maintenance mode.
Note: These fields only appear in the host maintenance window when the selected host supports Gluster. See Replacing the Primary Gluster Storage Node in Maintaining Red Hat Hyperconverged Infrastructure for more information.
- Click to initiate maintenance mode.
All running virtual machines are migrated to alternative hosts. If the host is the Storage Pool Manager (SPM), the SPM role is migrated to ano