Administration Guide

Red Hat Enterprise Virtualization 3.6

Administration Tasks in Red Hat Enterprise Virtualization

Red Hat Enterprise Virtualization Documentation Team

Red Hat Customer Content Services

Abstract

This book contains information and procedures relevant to Red Hat Enterprise Virtualization administrators.

Chapter 1. Administering and Maintaining the Red Hat Enterprise Virtualization Environment

The Red Hat Enterprise Virtualization environment requires an administrator to keep it running. As an administrator, your tasks include:
  • Managing physical and virtual resources such as hosts and virtual machines. This includes upgrading and adding hosts, importing domains, converting virtual machines created on foreign hypervisors, and managing virtual machine pools.
  • Monitoring the overall system resources for potential problems such as extreme load on one of the hosts, insufficient memory or disk space, and taking any necessary actions (such as migrating virtual machines to other hosts to lessen the load or freeing resources by shutting down machines).
  • Responding to the new requirements of virtual machines (for example, upgrading the operating system or allocating more memory).
  • Managing customized object properties using tags.
  • Managing searches saved as public bookmarks.
  • Managing user setup and setting permission levels.
  • Troubleshooting for specific users or virtual machines, or for overall system functionality.
  • Generating general and specific reports.

1.1. Global Configuration

Accessed from the header bar in the Administration Portal, the Configure window allows you to configure a number of global resources for your Red Hat Enterprise Virtualization environment, such as users, roles, system permissions, scheduling policies, instance types, and MAC address pools. This window allows you to customize the way in which users interact with resources in the environment, and provides a central location for configuring options that can be applied to multiple clusters.

Figure 1.1. Accessing the Configure window

1.1.1. Roles

Roles are predefined sets of privileges that can be configured from Red Hat Enterprise Virtualization Manager. Roles provide access and management permissions to different levels of resources in the data center, and to specific physical and virtual resources.
With multilevel administration, any permissions which apply to a container object also apply to all individual objects within that container. For example, when a host administrator role is assigned to a user on a specific host, the user gains permissions to perform any of the available host operations, but only on the assigned host. However, if the host administrator role is assigned to a user on a data center, the user gains permissions to perform host operations on all hosts within the data center.
1.1.1.1. Creating a New Role
If the role you require is not on Red Hat Enterprise Virtualization's default list of roles, you can create a new role and customize it to suit your purposes.

Procedure 1.1. Creating a New Role

  1. On the header bar, click the Configure button to open the Configure window. The window shows a list of default User and Administrator roles, and any custom roles.
  2. Click New. The New Role dialog box displays.

    Figure 1.2. The New Role Dialog

  3. Enter the Name and Description of the new role.
  4. Select either Admin or User as the Account Type.
  5. Use the Expand All or Collapse All buttons to view more or fewer of the permissions for the listed objects in the Check Boxes to Allow Action list. You can also expand or collapse the options for each object.
  6. For each of the objects, select or clear the actions you wish to permit or deny for the role you are setting up.
  7. Click OK to apply the changes you have made. The new role displays on the list of roles.
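
The role created in the procedure above can also be created programmatically through the Manager's REST API. The following Python sketch uses the requests library and is illustrative only: it assumes the API is exposed at /api, that roles are created by POSTing XML to /api/roles, and that the credentials and permit IDs shown are placeholders. Verify the resource paths and elements against the REST API Guide for your version.

  # Hedged sketch: create a custom role through the Manager's REST API.
  # Assumptions (verify against the REST API Guide for your version):
  #   - the API base path is https://manager.example.com/api
  #   - roles are created by POSTing XML to /api/roles
  #   - the permit IDs below are placeholders, not real IDs
  import requests

  API = "https://manager.example.com/api"        # hypothetical Manager address
  AUTH = ("admin@internal", "password")          # hypothetical credentials

  role_xml = """
  <role>
    <name>UserManager</name>
    <description>Manages users, roles, and permissions</description>
    <administrative>true</administrative>
    <permits>
      <permit id="PERMIT_ID_1"/>
      <permit id="PERMIT_ID_2"/>
    </permits>
  </role>
  """

  response = requests.post(
      API + "/roles",
      data=role_xml,
      headers={"Content-Type": "application/xml"},
      auth=AUTH,
      verify="/etc/pki/ovirt-engine/ca.pem",     # Manager CA certificate; path may differ
  )
  response.raise_for_status()
  print(response.status_code)
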
1.1.1.2. Editing or Copying a Role
You can change the settings for roles you have created, but you cannot change default roles. To change default roles, clone and modify them to suit your requirements.

Procedure 1.2. Editing or Copying a Role

  1. On the header bar, click the Configure button to open the Configure window. The window shows a list of default User and Administrator roles, and any custom roles.
  2. Select the role you wish to change. Click Edit to open the Edit Role window, or click Copy to open the Copy Role window.
  3. If necessary, edit the Name and Description of the role.
  4. Use the Expand All or Collapse All buttons to view more or fewer of the permissions for the listed objects. You can also expand or collapse the options for each object.
  5. For each of the objects, select or clear the actions you wish to permit or deny for the role you are editing.
  6. Click OK to apply the changes you have made.
1.1.1.3. User Role and Authorization Examples
The following examples illustrate how to apply authorization controls for various scenarios, using the different features of the authorization system described in this chapter.

Example 1.1. Cluster Permissions

Sarah is the system administrator for the accounts department of a company. All the virtual resources for her department are organized under a Red Hat Enterprise Virtualization cluster called Accounts. She is assigned the ClusterAdmin role on the accounts cluster. This enables her to manage all virtual machines in the cluster, since the virtual machines are child objects of the cluster. Managing the virtual machines includes editing, adding, or removing virtual resources such as disks, and taking snapshots. It does not allow her to manage any resources outside this cluster. Because ClusterAdmin is an administrator role, it allows her to use the Administration Portal to manage these resources, but does not give her any access via the User Portal.

Example 1.2. VM PowerUser Permissions

John is a software developer in the accounts department. He uses virtual machines to build and test his software. Sarah has created a virtual desktop called johndesktop for him. John is assigned the UserVmManager role on the johndesktop virtual machine. This allows him to access this single virtual machine using the User Portal. Because he has UserVmManager permissions, he can modify the virtual machine and add resources to it, such as new virtual disks. Because UserVmManager is a user role, it does not allow him to use the Administration Portal.

Example 1.3. Data Center Power User Role Permissions

Penelope is an office manager. In addition to her own responsibilities, she occasionally helps the HR manager with recruitment tasks, such as scheduling interviews and following up on reference checks. As per corporate policy, Penelope needs to use a particular application for recruitment tasks.
While Penelope has her own machine for office management tasks, she wants to create a separate virtual machine to run the recruitment application. She is assigned PowerUserRole permissions for the data center in which her new virtual machine will reside. This is because to create a new virtual machine, she needs to make changes to several components within the data center, including creating the virtual machine disk image in the storage domain.
Note that this is not the same as assigning DataCenterAdmin privileges to Penelope. As a PowerUser for a data center, Penelope can log in to the User Portal and perform virtual machine-specific actions on virtual machines within the data center. She cannot perform data center-level operations such as attaching hosts or storage to a data center.

Example 1.4. Network Administrator Permissions

Chris works as the network administrator in the IT department. Her day-to-day responsibilities include creating, manipulating, and removing networks in the department's Red Hat Enterprise Virtualization environment. For her role, she requires administrative privileges on the resources and on the networks of each resource. For example, if Chris has NetworkAdmin privileges on the IT department's data center, she can add and remove networks in the data center, and attach and detach networks for all virtual machines belonging to the data center.
In addition to managing the networks of the company's virtualized infrastructure, Chris also has a junior network administrator reporting to her. The junior network administrator, Pat, is managing a smaller virtualized environment for the company's internal training department. Chris has assigned Pat VnicProfileUser permissions and UserVmManager permissions for the virtual machines used by the internal training department. With these permissions, Pat can perform simple administrative tasks such as adding network interfaces onto virtual machines in the Extended tab of the User Portal. However, he does not have permissions to alter the networks for the hosts on which the virtual machines run, or the networks on the data center to which the virtual machines belong.

Example 1.5. Custom Role Permissions

Rachel works in the IT department, and is responsible for managing user accounts in Red Hat Enterprise Virtualization. She needs permission to add user accounts and assign them the appropriate roles and permissions. She does not use any virtual machines herself, and should not have access to administration of hosts, virtual machines, clusters or data centers. There is no built-in role which provides her with this specific set of permissions. A custom role must be created to define the set of permissions appropriate to Rachel's position.

Figure 1.3. UserManager Custom Role

The UserManager custom role shown above allows manipulation of users, permissions and roles. These actions are organized under System - the top level object of the hierarchy shown in Figure 1.3, “UserManager Custom Role”. This means they apply to all other objects in the system. The role is set to have an Account Type of Admin. This means that when she is assigned this role, Rachel can only use the Administration Portal, not the User Portal.

1.1.2. System Permissions

Permissions enable users to perform actions on objects, where objects are either individual objects or container objects.

Figure 1.4. Permissions & Roles

Any permissions that apply to a container object also apply to all members of that container. The following diagram depicts the hierarchy of objects in the system.

Figure 1.5. Red Hat Enterprise Virtualization Object Hierarchy

1.1.2.1. User Properties
Roles and permissions are the properties of the user. Roles are predefined sets of privileges that permit access to different levels of physical and virtual resources. Multilevel administration provides a finely grained hierarchy of permissions. For example, a data center administrator has permissions to manage all objects in the data center, while a host administrator has system administrator permissions to a single physical host. A user can have permissions to use a single virtual machine but not make any changes to the virtual machine configurations, while another user can be assigned system permissions to a virtual machine.
1.1.2.2. User and Administrator Roles
Red Hat Enterprise Virtualization provides a range of pre-configured roles, from an administrator with system-wide permissions to an end user with access to a single virtual machine. While you cannot change or remove the default roles, you can clone and customize them, or create new roles according to your requirements. There are two types of roles:
  • Administrator Role: Allows access to the Administration Portal for managing physical and virtual resources. An administrator role confers permissions for actions to be performed in the User Portal; however, it has no bearing on what a user can see in the User Portal.
  • User Role: Allows access to the User Portal for managing and accessing virtual machines and templates. A user role determines what a user can see in the User Portal. Permissions granted to a user with an administrator role are reflected in the actions available to that user in the User Portal.
For example, if you have an administrator role on a cluster, you can manage all virtual machines in the cluster using the Administration Portal. However, you cannot access any of these virtual machines in the User Portal; this requires a user role.
1.1.2.3. User Roles Explained
The table below describes basic user roles which confer permissions to access and configure virtual machines in the User Portal.
Table 1.1. Red Hat Enterprise Virtualization User Roles - Basic
Role | Privileges | Notes
UserRole | Can access and use virtual machines and pools. | Can log in to the User Portal, use assigned virtual machines and pools, view virtual machine state and details.
PowerUserRole | Can create and manage virtual machines and templates. | Apply this role to a user for the whole environment with the Configure window, or for specific data centers or clusters. For example, if a PowerUserRole is applied on a data center level, the PowerUser can create virtual machines and templates in the data center.
UserVmManager | System administrator of a virtual machine. | Can manage virtual machines and create and use snapshots. A user who creates a virtual machine in the User Portal is automatically assigned the UserVmManager role on the machine.
The table below describes advanced user roles which allow you to do more fine tuning of permissions for resources in the User Portal.
Table 1.2. Red Hat Enterprise Virtualization User Roles - Advanced
Role | Privileges | Notes
UserTemplateBasedVm | Limited privileges to only use Templates. | Can use templates to create virtual machines.
DiskOperator | Virtual disk user. | Can use, view and edit virtual disks. Inherits permissions to use the virtual machine to which the virtual disk is attached.
VmCreator | Can create virtual machines in the User Portal. | This role is not applied to a specific virtual machine; apply this role to a user for the whole environment with the Configure window. Alternatively apply this role for specific data centers or clusters. When applying this role to a cluster, you must also apply the DiskCreator role on an entire data center, or on specific storage domains.
TemplateCreator | Can create, edit, manage and remove virtual machine templates within assigned resources. | This role is not applied to a specific template; apply this role to a user for the whole environment with the Configure window. Alternatively apply this role for specific data centers, clusters, or storage domains.
DiskCreator | Can create, edit, manage and remove virtual machine disks within assigned clusters or data centers. | This role is not applied to a specific virtual disk; apply this role to a user for the whole environment with the Configure window. Alternatively apply this role for specific data centers or storage domains.
TemplateOwner | Can edit and delete the template, assign and manage user permissions for the template. | This role is automatically assigned to the user who creates a template. Other users who do not have TemplateOwner permissions on a template cannot view or use the template.
VnicProfileUser | Logical network and network interface user for virtual machine and template. | Can attach or detach network interfaces from specific logical networks.
1.1.2.4. Administrator Roles Explained
The table below describes basic administrator roles which confer permissions to access and configure resources in the Administration Portal.
Table 1.3. Red Hat Enterprise Virtualization System Administrator Roles - Basic
Role | Privileges | Notes
SuperUser | System Administrator of the Red Hat Enterprise Virtualization environment. | Has full permissions across all objects and levels, can manage all objects across all data centers.
ClusterAdmin | Cluster Administrator. | Possesses administrative permissions for all objects underneath a specific cluster.
DataCenterAdmin | Data Center Administrator. | Possesses administrative permissions for all objects underneath a specific data center except for storage.

Important

Do not use the administrative user for the directory server as the Red Hat Enterprise Virtualization administrative user. Create a user in the directory server specifically for use as the Red Hat Enterprise Virtualization administrative user.
The table below describes advanced administrator roles which allow you to do more fine tuning of permissions for resources in the Administration Portal.
Table 1.4. Red Hat Enterprise Virtualization System Administrator Roles - Advanced
Role | Privileges | Notes
TemplateAdmin | Administrator of a virtual machine template. | Can create, delete, and configure the storage domains and network details of templates, and move templates between domains.
StorageAdmin | Storage Administrator. | Can create, delete, configure, and manage an assigned storage domain.
HostAdmin | Host Administrator. | Can attach, remove, configure, and manage a specific host.
NetworkAdmin | Network Administrator. | Can configure and manage the network of a particular data center or cluster. A network administrator of a data center or cluster inherits network permissions for virtual pools within the cluster.
VmPoolAdmin | System Administrator of a virtual pool. | Can create, delete, and configure a virtual pool; assign and remove virtual pool users; and perform basic operations on a virtual machine in the pool.
GlusterAdmin | Gluster Storage Administrator. | Can create, delete, configure, and manage Gluster storage volumes.
VmImporterExporter | Import and export Administrator of a virtual machine. | Can import and export virtual machines. Able to view all virtual machines and templates exported by other users.

1.1.3. Scheduling Policies

A scheduling policy is a set of rules that defines the logic by which virtual machines are distributed amongst the hosts in the cluster to which that scheduling policy is applied. Scheduling policies determine this logic via a combination of filters, weightings, and a load balancing policy. The Red Hat Enterprise Virtualization Manager provides five default scheduling policies: Evenly_Distributed, InClusterUpgrade, None, Power_Saving, and VM_Evenly_Distributed. You can also define new scheduling policies that provide fine-grained control over the distribution of virtual machines.
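
The individual filter, weight, and load balancing modules are described in Section 1.1.3.2. The sketch below is illustrative Python only, not the Manager's scheduler code: it shows the general shape of the logic a scheduling policy describes, in which filters remove ineligible hosts and weight modules then rank the remaining candidates. The module names and host attributes used here are stand-ins chosen for the example.

  # Illustrative sketch only: filters drop hosts that fail hard constraints,
  # weight modules score the remainder (lower is better), and the
  # best-scoring host is selected.
  def schedule(vm, hosts, filters, weight_modules):
      """Pick a host for vm using filter functions and weighted score functions."""
      # 1. Filtering: keep only hosts that satisfy every enabled filter.
      candidates = [h for h in hosts if all(f(vm, h) for f in filters)]
      if not candidates:
          return None  # no host satisfies the policy's filters

      # 2. Weighting: sum weighted scores; the lowest total wins.
      def total_score(host):
          return sum(factor * module(vm, host) for module, factor in weight_modules)

      return min(candidates, key=total_score)

  # Stand-ins for a Memory filter, a CPU filter, and an even-distribution weight.
  memory_filter = lambda vm, host: host["free_mem"] >= vm["mem"]
  cpu_filter = lambda vm, host: host["cpus"] >= vm["vcpus"]
  even_cpu_weight = lambda vm, host: host["cpu_usage"]

  hosts = [
      {"name": "host1", "free_mem": 8, "cpus": 16, "cpu_usage": 70},
      {"name": "host2", "free_mem": 16, "cpus": 16, "cpu_usage": 30},
  ]
  vm = {"mem": 4, "vcpus": 2}
  print(schedule(vm, hosts, [memory_filter, cpu_filter], [(even_cpu_weight, 1)]))
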
1.1.3.1. Creating a Scheduling Policy
You can create new scheduling policies to control the logic by which virtual machines are distributed amongst the hosts in a given cluster in your Red Hat Enterprise Virtualization environment.

Procedure 1.3. Creating a Scheduling Policy

  1. Click the Configure button in the header bar of the Administration Portal to open the Configure window.
  2. Click Scheduling Policies to view the scheduling policies tab.
  3. Click New to open the New Scheduling Policy window.

    Figure 1.6. The New Scheduling Policy Window

  4. Enter a Name and Description for the scheduling policy.
  5. Configure filter modules:
    1. In the Filter Modules section, drag and drop the preferred filter modules to apply to the scheduling policy from the Disabled Filters section into the Enabled Filters section.
    2. Specific filter modules can also be set as First, to be given the highest priority, or Last, to be given the lowest priority, for basic optimization.
      To set the priority, right-click any filter module, hover the cursor over Position and select First or Last.
  6. Configure weight modules:
    1. In the Weights Modules section, drag and drop the preferred weights modules to apply to the scheduling policy from the Disabled Weights section into the Enabled Weights & Factors section.
    2. Use the + and - buttons to the left of the enabled weight modules to increase or decrease the weight of those modules.
  7. Specify a load balancing policy:
    1. From the drop-down menu in the Load Balancer section, select the load balancing policy to apply to the scheduling policy.
    2. From the drop-down menu in the Properties section, select a load balancing property to apply to the scheduling policy and use the text field to the right of that property to specify a value.
    3. Use the + and - buttons to add or remove additional properties.
  8. Click OK.
1.1.3.2. Explanation of Settings in the New Scheduling Policy and Edit Scheduling Policy Window
The following table details the options available in the New Scheduling Policy and Edit Scheduling Policy windows.
Table 1.5. New Scheduling Policy and Edit Scheduling Policy Settings
Field Name
Description
Name
The name of the scheduling policy. This is the name used to refer to the scheduling policy in the Red Hat Enterprise Virtualization Manager.
Description
A description of the scheduling policy. This field is recommended but not mandatory.
Filter Modules
A set of filters for controlling the hosts on which a virtual machine in a cluster can run. Enabling a filter will filter out hosts that do not meet the conditions specified by that filter, as outlined below:
  • CpuPinning: Hosts which do not satisfy the CPU pinning definition.
  • Migration: Prevent migration to the same host.
  • PinToHost: Hosts other than the host to which the virtual machine is pinned.
  • CPU-Level: Hosts that do not meet the CPU topology of the virtual machine.
  • CPU: Hosts with fewer CPUs than the number assigned to the virtual machine.
  • Memory: Hosts that do not have sufficient memory to run the virtual machine.
  • VmAffinityGroups: Hosts that do not meet the conditions specified for a virtual machine that is a member of an affinity group. For example, that virtual machines in an affinity group must run on the same host or on separate hosts.
  • InClusterUpgrade: Hosts which run an older operating system than the host that the virtual machine currently runs on.
  • HostDevice: Hosts that do not support host devices required by the virtual machine.
  • HA: Forces the hosted engine virtual machine to only run on hosts with a positive high availability score.
  • Emulated-Machine: Hosts which do not have proper emulated machine support.
  • Network: Hosts on which networks required by the network interface controller of a virtual machine are not installed, or on which the cluster's display network is not installed.
Weights Modules
A set of weightings for controlling the relative priority of factors considered when determining the hosts in a cluster on which a virtual machine can run.
  • InClusterUpgrade: Weights hosts in accordance with their operating system version. This weight penalizes hosts with an older operating system than the host the virtual machine currently runs on more heavily than hosts with the same operating system, giving priority to hosts with newer operating systems.
  • OptimalForHaReservation: Weights hosts in accordance with their high availability score.
  • None: Weights hosts in accordance with the even distribution module.
  • OptimalForEvenGuestDistribution: Weights hosts in accordance with the number of virtual machines running on those hosts.
  • VmAffinityGroups: Weights hosts in accordance with the affinity groups defined for virtual machines. This weight module determines how likely virtual machines in an affinity group are to run on the same host or on separate hosts in accordance with the parameters of that affinity group.
  • OptimalForPowerSaving: Weights hosts in accordance with their CPU usage, giving priority to hosts with higher CPU usage.
  • OptimalForEvenDistribution: Weights hosts in accordance with their CPU usage, giving priority to hosts with lower CPU usage.
  • HA: Weights hosts in accordance with their high availability score.
Load Balancer
This drop-down menu allows you to select a load balancing module to apply. Load balancing modules determine the logic used to migrate virtual machines from hosts experiencing high usage to hosts experiencing lower usage.
Properties
This drop-down menu allows you to add or remove properties for load balancing modules, and is only available when you have selected a load balancing module for the scheduling policy. No properties are defined by default, and the properties that are available are specific to the load balancing module that is selected. Use the + and - buttons to add or remove additional properties to or from the load balancing module.

1.1.4. Instance Types

Instance types can be used to define the hardware configuration of a virtual machine. Selecting an instance type when creating or editing a virtual machine will automatically fill in the hardware configuration fields. This allows users to create multiple virtual machines with the same hardware configuration without having to manually fill in every field.
A set of predefined instance types are available by default, as outlined in the following table:
Table 1.6. Predefined Instance Types
Name | Memory | vCPUs
Tiny | 512 MB | 1
Small | 2 GB | 1
Medium | 4 GB | 2
Large | 8 GB | 2
XLarge | 16 GB | 4
Administrators can also create, edit, and remove instance types from the Instance Types tab of the Configure window.

Figure 1.7. The Instance Types Tab

Fields in the New Virtual Machine and Edit Virtual Machine windows that are bound to an instance type have a chain link icon next to them. If the value of one of these fields is changed, the virtual machine is detached from the instance type, changing to Custom, and the chain icon appears broken. However, if the value is changed back, the chain relinks and the instance type reverts to the selected one.
1.1.4.1. Creating Instance Types
Administrators can create new instance types, which can then be selected by users when creating or editing virtual machines.

Procedure 1.4. Creating an Instance Type

  1. On the header bar, click Configure.
  2. Click the Instance Types tab.
  3. Click New to open the New Instance Type window.

    Figure 1.8. The New Instance Type Window

  4. Enter a Name and Description for the instance type.
  5. Click Show Advanced Options and configure the instance type's settings as required. The settings that appear in the New Instance Type window are identical to those in the New Virtual Machine window, but with the relevant fields only. See Explanation of Settings in the New Virtual Machine and Edit Virtual Machine Windows in the Virtual Machine Management Guide.
  6. Click OK.
The new instance type will appear in the Instance Types tab in the Configure window, and can be selected from the Instance Type drop-down list when creating or editing a virtual machine.
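
If your release also exposes instance types through the REST API, they can be created outside the Configure window as well. The following Python sketch is built entirely on assumptions rather than a confirmed interface: it presumes an /api/instancetypes collection, memory expressed in bytes, and a <topology> element for the CPU layout. Check the REST API Guide for your version before relying on any of these details; the address and credentials are placeholders.

  # Hedged sketch: create an instance type via the Manager's REST API.
  # Endpoint and element names are assumptions; verify against your REST API Guide.
  import requests

  API = "https://manager.example.com/api"      # hypothetical Manager address
  AUTH = ("admin@internal", "password")        # hypothetical credentials

  instance_type_xml = """
  <instance_type>
    <name>Build-Server</name>
    <description>4 GB / 2 vCPU build machines</description>
    <memory>4294967296</memory>
    <cpu>
      <topology sockets="2" cores="1" threads="1"/>
    </cpu>
  </instance_type>
  """

  r = requests.post(
      API + "/instancetypes",
      data=instance_type_xml,
      headers={"Content-Type": "application/xml"},
      auth=AUTH,
      verify="/etc/pki/ovirt-engine/ca.pem",   # Manager CA certificate; path may differ
  )
  r.raise_for_status()
  print(r.status_code)
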
1.1.4.2. Editing Instance Types
Administrators can edit existing instance types from the Configure window.

Procedure 1.5. Editing Instance Type Properties

  1. On the header bar, click Configure.
  2. Click the Instance Types tab.
  3. Select the instance type to be edited.
  4. Click Edit to open the Edit Instance Type window.
  5. Change the settings as required.
  6. Click OK.
The configuration of the instance type is updated. New virtual machines and restarted existing virtual machines based on the instance type will use the new configuration.
1.1.4.3. Removing Instance Types

Procedure 1.6. Removing an Instance Type

  1. On the header bar, click Configure.
  2. Click the Instance Types tab.
  3. Select the instance type to be removed.
  4. Click Remove to open the Remove Instance Type window.
  5. If any virtual machines are based on the instance type to be removed, a warning window listing the attached virtual machines will appear. To continue removing the instance type, select the Approve Operation checkbox. Otherwise click Cancel.
  6. Click OK.
The instance type is removed from the Instance Types list and can no longer be used when creating a new virtual machine. Any virtual machines that were attached to the removed instance type will now be attached to Custom (no instance type).

1.1.5. MAC Address Pools

MAC address pools define the range of MAC addresses from which MAC addresses are allocated for each data center. A MAC address pool is specified for each data center. By using MAC address pools Red Hat Enterprise Virtualization can automatically generate and assign MAC addresses to new virtual network devices, which helps to prevent MAC address duplication. MAC address pools are more memory efficient when all MAC addresses related to a data center are within the range for the assigned MAC address pool.
The same MAC address pool can be shared by multiple data centers, but each data center has a single MAC address pool assigned. A default MAC address pool is created by Red Hat Enterprise Virtualization and is used if another MAC address pool is not assigned. For more information about assigning MAC address pools to data centers see Section 3.5.1, “Creating a New Data Center”.
The MAC address pool assigns the next available MAC address following the last address that was returned to the pool. If there are no further addresses left in the range, the search starts again from the beginning of the range. If there are multiple MAC address ranges with available MAC addresses defined in a single MAC address pool, the ranges take turns in serving incoming requests in the same way available MAC addresses are selected.
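
The following Python sketch is illustrative only and is not the Manager's implementation; it mimics the allocation behavior described above for a hypothetical pool with two ranges, where each range hands out the next free address after the last one it returned, wraps to the start of the range when exhausted, and the ranges take turns serving requests.

  # Illustrative sketch only: MAC allocation behavior of a pool with several ranges.
  from itertools import cycle

  class MacRange:
      def __init__(self, start, end):
          self.start, self.end = start, end
          self.next_offset = 0          # where the search resumes within the range
          self.in_use = set()

      def size(self):
          return self.end - self.start + 1

      def allocate(self):
          # Search forward from the position after the last returned address,
          # wrapping around to the beginning of the range if necessary.
          for step in range(self.size()):
              candidate = self.start + (self.next_offset + step) % self.size()
              if candidate not in self.in_use:
                  self.in_use.add(candidate)
                  self.next_offset = (candidate - self.start + 1) % self.size()
                  return candidate
          return None                   # range exhausted

  class MacPool:
      def __init__(self, ranges):
          self.ranges = ranges
          self._turns = cycle(ranges)   # ranges take turns serving requests

      def allocate(self):
          for _ in range(len(self.ranges)):
              mac = next(self._turns).allocate()
              if mac is not None:
                  return ":".join(f"{(mac >> s) & 0xFF:02x}" for s in range(40, -1, -8))
          raise RuntimeError("MAC address pool exhausted")

  pool = MacPool([MacRange(0x001A4A160000, 0x001A4A1600FF),
                  MacRange(0x001A4A170000, 0x001A4A1700FF)])
  print(pool.allocate(), pool.allocate(), pool.allocate())
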
1.1.5.1. Creating MAC Address Pools
You can create new MAC address pools.

Procedure 1.7. Creating a MAC Address Pool

  1. On the header bar, click the Configure button to open the Configure window.
  2. Click the MAC Address Pools tab.
  3. Click the Add button to open the New MAC Address Pool window.

    Figure 1.9. The New MAC Address Pool Window

  4. Enter the Name and Description of the new MAC address pool.
  5. Select the Allow Duplicates check box to allow a MAC address to be used multiple times in a pool. The MAC address pool will not automatically use a duplicate MAC address, but enabling the duplicates option means a user can manually use a duplicate MAC address.

    Note

    If one MAC address pool has duplicates disabled, and another has duplicates enabled, each MAC address can be used once in the pool with duplicates disabled but can be used multiple times in the pool with duplicates enabled.
  6. Enter the required MAC Address Ranges. To enter multiple ranges click the plus button next to the From and To fields.
  7. Click OK.
1.1.5.2. Editing MAC Address Pools
You can edit MAC address pools to change the details, including the range of MAC addresses available in the pool and whether duplicates are allowed.

Procedure 1.8. Editing MAC Address Pool Properties

  1. On the header bar, click the Configure button to open the Configure window.
  2. Click the MAC Address Pools tab.
  3. Select the MAC address pool to be edited.
  4. Click the Edit button to open the Edit MAC Address Pool window.
  5. Change the Name, Description, Allow Duplicates, and MAC Address Ranges fields as required.

    Note

    When a MAC address range is updated, the MAC addresses of existing NICs are not reassigned. MAC addresses that were already assigned, but are outside of the new MAC address range, are added as user-specified MAC addresses and are still tracked by that MAC address pool.
  6. Click OK.
1.1.5.3. Editing MAC Address Pool Permissions
After a MAC address pool has been created, you can edit its user permissions. The user permissions control which data centers can use the MAC address pool. See Section 1.1.1, “Roles” for more information on adding new user permissions.

Procedure 1.9. Editing MAC Address Pool Permissions

  1. On the header bar, click the Configure button to open the Configure window.
  2. Click the MAC Address Pools tab.
  3. Select the required MAC address pool.
  4. Edit the user permissions for the MAC address pool:
    • To add user permissions to a MAC address pool:
      1. Click Add in the user permissions pane at the bottom of the Configure window.
      2. Search for and select the required users.
      3. Select the required role from the Role to Assign drop-down list.
      4. Click OK to add the user permissions.
    • To remove user permissions from a MAC address pool:
      1. Select the user permission to be removed in the user permissions pane at the bottom of the Configure window.
      2. Click Remove to remove the user permissions.
1.1.5.4. Removing MAC Address Pools
You can remove created MAC address pools, but the default MAC address pool cannot be removed.

Procedure 1.10. Removing a MAC Address Pool

  1. On the header bar, click the Configure button to open the Configure window.
  2. Click the MAC Address Pools tab.
  3. Select the MAC address pool to be removed.
  4. Click the Remove button to open the Remove MAC Address Pool window.
  5. Click OK.

Part I. Administering the Resources

Chapter 2. Quality of Service

Red Hat Enterprise Virtualization allows you to define quality of service entries that provide fine-grained control over the level of input and output, processing, and networking capabilities that resources in your environment can access. Quality of service entries are defined at the data center level and are assigned to profiles created under clusters and storage domains. These profiles are then assigned to individual resources in the clusters and storage domains where the profiles were created.

2.1. Storage Quality of Service

Storage quality of service defines the maximum level of throughput and the maximum level of input and output operations for a virtual disk in a storage domain. Assigning storage quality of service to a virtual disk allows you to fine tune the performance of storage domains and prevent the storage operations associated with one virtual disk from affecting the storage capabilities available to other virtual disks hosted in the same storage domain.

2.1.1. Creating a Storage Quality of Service Entry

Create a storage quality of service entry.

Procedure 2.1. Creating a Storage Quality of Service Entry

  1. Click the Data Centers resource tab and select a data center.
  2. Click QoS in the details pane.
  3. Click Storage.
  4. Click New.
  5. Enter a name for the quality of service entry in the QoS Name field.
  6. Enter a description for the quality of service entry in the Description field.
  7. Specify the throughput quality of service:
    1. Select the Throughput check box.
    2. Enter the maximum permitted total throughput in the Total field.
    3. Enter the maximum permitted throughput for read operations in the Read field.
    4. Enter the maximum permitted throughput for write operations in the Write field.
  8. Specify the input and output quality of service:
    1. Select the IOps check box.
    2. Enter the maximum permitted number of input and output operations per second in the Total field.
    3. Enter the maximum permitted number of input operations per second in the Read field.
    4. Enter the maximum permitted number of output operations per second in the Write field.
  9. Click OK.
You have created a storage quality of service entry, and can create disk profiles based on that entry in data storage domains that belong to the data center.
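Storage quality of service entries can also be managed through the REST API. The Python sketch below is illustrative and rests on assumptions to verify against the REST API Guide for your version: that QoS entries live under /api/datacenters/{id}/qoss, that a storage entry is marked with type="storage", and that max_iops and max_throughput carry the limits entered in the procedure above. The data center ID, address, and credentials are placeholders.

  # Hedged sketch: create a storage QoS entry under a data center via REST.
  # Endpoint and element names are assumptions; verify against your REST API Guide.
  import requests

  API = "https://manager.example.com/api"      # hypothetical Manager address
  AUTH = ("admin@internal", "password")        # hypothetical credentials
  DC_ID = "DATA_CENTER_ID"                     # placeholder data center ID

  qos_xml = """
  <qos type="storage">
    <name>gold-storage</name>
    <description>Throughput and IOPS cap for production disks</description>
    <max_iops>1000</max_iops>
    <max_throughput>200</max_throughput>
  </qos>
  """

  r = requests.post(
      API + "/datacenters/" + DC_ID + "/qoss",
      data=qos_xml,
      headers={"Content-Type": "application/xml"},
      auth=AUTH,
      verify="/etc/pki/ovirt-engine/ca.pem",   # Manager CA certificate; path may differ
  )
  r.raise_for_status()
  print(r.status_code)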

2.1.2. Removing a Storage Quality of Service Entry

Remove an existing storage quality of service entry.

Procedure 2.2. Removing a Storage Quality of Service Entry

  1. Click the Data Centers resource tab and select a data center.
  2. Click QoS in the details pane.
  3. Click Storage.
  4. Select the storage quality of service entry to remove.
  5. Click Remove.
  6. Click OK when prompted.
You have removed the storage quality of service entry, and that entry is no longer available. If any disk profiles were based on that entry, the storage quality of service entry for those profiles is automatically set to [unlimited].

2.2. Virtual Machine Network Quality of Service

Virtual machine network quality of service is a feature that allows you to create profiles for limiting both the inbound and outbound traffic of individual virtual network interface controllers. With this feature, you can limit bandwidth in a number of layers, controlling the consumption of network resources.

Important

Virtual machine network quality of service is only supported on cluster compatibility version 3.3 and higher.

2.2.1. Creating a Virtual Machine Network Quality of Service Entry

Create a virtual machine network quality of service entry to regulate network traffic when applied to a virtual network interface controller (vNIC) profile, also known as a virtual machine network interface profile.

Procedure 2.3. Creating a Virtual Machine Network Quality of Service Entry

  1. Click the Data Centers resource tab and select a data center.
  2. Click the QoS tab in the details pane.
  3. Click VM Network.
  4. Click New.
  5. Enter a name for the virtual machine network quality of service entry in the Name field.
  6. Enter the limits for the Inbound and Outbound network traffic.
  7. Click OK.
You have created a virtual machine network quality of service entry that can be used in a virtual network interface controller.

2.2.2. Settings in the New Virtual Machine Network QoS and Edit Virtual Machine Network QoS Windows Explained

Virtual machine network quality of service settings allow you to configure bandwidth limits for both inbound and outbound traffic on three distinct levels.
Table 2.1. Virtual Machine Network QoS Settings
Field Name
Description
Data Center
The data center to which the virtual machine network QoS policy is to be added. This field is configured automatically according to the selected data center.
Name
A name to represent the virtual machine network QoS policy within the Manager.
Inbound
The settings to be applied to inbound traffic. Select or clear the Inbound check box to enable or disable these settings.
  • Average: The average speed of inbound traffic.
  • Peak: The speed of inbound traffic during peak times.
  • Burst: The speed of inbound traffic during bursts.
Outbound
The settings to be applied to outbound traffic. Select or clear the Outbound check box to enable or disable these settings.
  • Average: The average speed of outbound traffic.
  • Peak: The speed of outbound traffic during peak times.
  • Burst: The speed of outbound traffic during bursts.

2.2.3. Removing a Virtual Machine Network Quality of Service Entry

Remove an existing virtual machine network quality of service entry.

Procedure 2.4. Removing a Virtual Machine Network Quality of Service Entry

  1. Click the Data Centers resource tab and select a data center.
  2. Click the QoS tab in the details pane.
  3. Click VM Network.
  4. Select the virtual machine network quality of service entry to remove.
  5. Click Remove.
  6. Click OK when prompted.
You have removed the virtual machine network quality of service entry, and that entry is no longer available.

2.3. Host Network Quality of Service

Host network quality of service configures the networks on a host to enable the control of network traffic through the physical interfaces. Host network quality of service allows for the fine tuning of network performance by controlling the consumption of network resources on the same physical network interface controller. This helps to prevent situations where one network causes other networks attached to the same physical network interface controller to no longer function due to heavy traffic. By configuring host network quality of service, these networks can now function on the same physical network interface controller without congestion issues.

2.3.1. Creating a Host Network Quality of Service Entry

Create a host network quality of service entry.

Procedure 2.5. Creating a Host Network Quality of Service Entry

  1. Click the Data Centers resource tab and select a data center.
  2. Click QoS in the details pane.
  3. Click Host Network.
  4. Click New.
  5. Enter a name for the quality of service entry in the QoS Name field.
  6. Enter a description for the quality of service entry in the Description field.
  7. Enter the desired values for Weighted Share, Rate Limit [Mbps], and Committed Rate [Mbps].
  8. Click OK.

2.3.2. Settings in the New Host Network Quality of Service and Edit Host Network Quality of Service Windows Explained

Host network quality of service settings allow you to configure bandwidth limits for outbound traffic.
Table 2.2. Host Network QoS Settings
Field Name
Description
Data Center
The data center to which the host network QoS policy is to be added. This field is configured automatically according to the selected data center.
QoS Name
A name to represent the host network QoS policy within the Manager.
Description
A description of the host network QoS policy.
Outbound
The settings to be applied to outbound traffic.
  • Weighted Share: Signifies how much of the logical link's capacity a specific network should be allocated, relative to the other networks attached to the same logical link. The exact share depends on the sum of shares of all networks on that link. By default this is a number in the range 1-100.
  • Rate Limit [Mbps]: The maximum bandwidth to be used by a network.
  • Committed Rate [Mbps]: The minimum bandwidth required by a network. The Committed Rate requested is not guaranteed and will vary depending on the network infrastructure and the Committed Rate requested by other networks on the same logical link.
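
As a worked illustration of the Weighted Share setting described above, the following Python sketch (not how the host actually computes it) divides a saturated link's capacity among the networks on the same logical link in proportion to their shares. The network names and link speed are examples only.

  # Illustrative sketch only: proportional split of a link's capacity by Weighted Share.
  def divide_capacity(link_mbps, shares):
      """Split link_mbps across networks in proportion to their Weighted Share."""
      total = sum(shares.values())
      return {net: round(link_mbps * share / total) for net, share in shares.items()}

  # Two networks on a 10000 Mbps logical link: with shares of 25 and 75, the
  # second network is entitled to three times the bandwidth of the first
  # when the link is saturated.
  print(divide_capacity(10000, {"ovirtmgmt": 25, "vm_traffic": 75}))
  # {'ovirtmgmt': 2500, 'vm_traffic': 7500}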

2.3.3. Removing a Host Network Quality of Service Entry

Remove an existing network quality of service entry.

Procedure 2.6. Removing a Host Network Quality of Service Entry

  1. Click the Data Centers resource tab and select a data center.
  2. Click the QoS tab in the details pane.
  3. Click Host Network.
  4. Select the network quality of service entry to remove.
  5. Click Remove.
  6. Click OK when prompted.

2.4. CPU Quality of Service

CPU quality of service defines the maximum amount of processing capability a virtual machine can access on the host on which it runs, expressed as a percent of the total processing capability available to that host. Assigning CPU quality of service to a virtual machine allows you to prevent the workload on one virtual machine in a cluster from affecting the processing resources available to other virtual machines in that cluster.

2.4.1. Creating a CPU Quality of Service Entry

Create a CPU quality of service entry.

Procedure 2.7. Creating a CPU Quality of Service Entry

  1. Click the Data Centers resource tab and select a data center.
  2. Click QoS in the details pane.
  3. Click CPU.
  4. Click New.
  5. Enter a name for the quality of service entry in the QoS Name field.
  6. Enter a description for the quality of service entry in the Description field.
  7. Enter the maximum processing capability the quality of service entry permits in the Limit field, as a percentage. Do not include the % symbol.
  8. Click OK.
You have created a CPU quality of service entry, and can create CPU profiles based on that entry in clusters that belong to the data center.
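As an illustration of what a percentage cap means in practice, the sketch below shows one plausible way a CPU limit can be expressed as a per-vCPU quota within a fixed scheduling period. This mapping is an assumption made for illustration only, not a description of how the Manager or the host is actually configured.

  # Illustrative sketch only: express a percentage CPU cap as a quota/period pair.
  def cpu_quota_us(limit_percent, period_us=100000):
      """Runtime allowed per vCPU (microseconds) within each scheduling period."""
      return int(period_us * limit_percent / 100)

  # A 25% CPU QoS limit corresponds to 25000 us of runtime per 100000 us period per vCPU.
  print(cpu_quota_us(25))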

2.4.2. Removing a CPU Quality of Service Entry

Remove an existing CPU quality of service entry.

Procedure 2.8. Removing a CPU Quality of Service Entry

  1. Click the Data Centers resource tab and select a data center.
  2. Click QoS in the details pane.
  3. Click CPU.
  4. Select the CPU quality of service entry to remove.
  5. Click Remove.
  6. Click OK when prompted.
You have removed the CPU quality of service entry, and that entry is no longer available. If any CPU profiles were based on that entry, the CPU quality of service entry for those profiles is automatically set to [unlimited].

Chapter 3. Data Centers

3.1. Introduction to Data Centers

A data center is a logical entity that defines the set of resources used in a specific environment. A data center is considered a container resource, in that it comprises logical resources, in the form of clusters and hosts; network resources, in the form of logical networks and physical NICs; and storage resources, in the form of storage domains.
A data center can contain multiple clusters, which can contain multiple hosts; it can have multiple storage domains associated to it; and it can support multiple virtual machines on each of its hosts. A Red Hat Enterprise Virtualization environment can contain multiple data centers; the data center infrastructure allows you to keep these centers separate.
All data centers are managed from the single Administration Portal.

Figure 3.1. Data Centers

Red Hat Enterprise Virtualization creates a default data center during installation. You can configure the default data center, or set up new appropriately named data centers.

Figure 3.2. Data Center Objects

3.2. The Storage Pool Manager

The Storage Pool Manager (SPM) is a role given to one of the hosts in the data center enabling it to manage the storage domains of the data center. The SPM entity can be run on any host in the data center; the Red Hat Enterprise Virtualization Manager grants the role to one of the hosts. The SPM does not preclude the host from its standard operation; a host running as SPM can still host virtual resources.
The SPM entity controls access to storage by coordinating the metadata across the storage domains. This includes creating, deleting, and manipulating virtual disks (images), snapshots, and templates, and allocating storage for sparse block devices (on SAN). This is an exclusive responsibility: only one host can be the SPM in the data center at one time to ensure metadata integrity.
The Red Hat Enterprise Virtualization Manager ensures that the SPM is always available. The Manager moves the SPM role to a different host if the SPM host encounters problems accessing the storage. When the SPM starts, it ensures that it is the only host granted the role; therefore it will acquire a storage-centric lease. This process can take some time.

3.3. SPM Priority

The SPM role uses some of a host's available resources. The SPM priority setting of a host alters the likelihood of the host being assigned the SPM role: a host with high SPM priority will be assigned the SPM role before a host with low SPM priority. Critical virtual machines on hosts with low SPM priority will not have to contend with SPM operations for host resources.
You can change a host's SPM priority by editing the host.

3.4. Using the Events Tab to Identify Problem Objects in Data Centers

The Events tab for a data center displays all events associated with that data center; events include audits, warnings, and errors. The information displayed in the results list will enable you to identify problem objects in your Red Hat Enterprise Virtualization environment.
The Events results list has two views: Basic and Advanced. Basic view displays the event icon, the time of the event, and the description of the event. Advanced view displays these and also includes, where applicable, the event ID; the associated user, host, virtual machine, template, data center, storage, and cluster; the Gluster volume; and the correlation ID.

3.5. Data Center Tasks

3.5.1. Creating a New Data Center

This procedure creates a data center in your virtualization environment. The data center requires a functioning cluster, host, and storage domain to operate.

Note

The storage Type can be edited until the first storage domain is added to the data center. Once a storage domain has been added, the storage Type cannot be changed.
If you set the Compatibility Version as 3.6, it cannot be changed to 3.5 at a later time; version regression is not allowed.

Procedure 3.1. Creating a New Data Center

  1. Select the Data Centers resource tab to list all data centers in the results list.
  2. Click New to open the New Data Center window.
  3. Enter the Name and Description of the data center.
  4. Select the storage Type, Compatibility Version, and Quota Mode of the data center from the drop-down menus.
  5. Optionally, change the MAC address pool for the data center. The default MAC address pool is preselected by default. For more information on creating MAC address pools see Section 1.1.5, “MAC Address Pools”.
    1. Click the MAC Address Pool tab.
    2. Select the required MAC address pool from the MAC Address Pool drop-down list.
  6. Click OK to create the data center and open the New Data Center - Guide Me window.
  7. The Guide Me window lists the entities that need to be configured for the data center. Configure these entities or postpone configuration by clicking the Configure Later button; configuration can be resumed by selecting the data center and clicking the Guide Me button.
The new data center is added to the virtualization environment. It will remain Uninitialized until a cluster, host, and storage domain are configured for it; use Guide Me to configure these entities.
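
A data center can also be created through the REST API rather than the New Data Center window. The Python sketch below is illustrative: it assumes data centers are created by POSTing XML to /api/datacenters and that the <local> and <version> elements select the storage type and compatibility version. Confirm the exact element names against the REST API Guide for your version; the address, credentials, and data center name are placeholders.

  # Hedged sketch: create a data center via the Manager's REST API.
  # Element names and paths are assumptions; verify against your REST API Guide.
  import requests

  API = "https://manager.example.com/api"      # hypothetical Manager address
  AUTH = ("admin@internal", "password")        # hypothetical credentials

  dc_xml = """
  <data_center>
    <name>Accounts_DC</name>
    <description>Data center for the accounts department</description>
    <local>false</local>
    <version major="3" minor="6"/>
  </data_center>
  """

  r = requests.post(
      API + "/datacenters",
      data=dc_xml,
      headers={"Content-Type": "application/xml"},
      auth=AUTH,
      verify="/etc/pki/ovirt-engine/ca.pem",   # Manager CA certificate; path may differ
  )
  r.raise_for_status()
  print(r.status_code)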

3.5.2. Explanation of Settings in the New Data Center and Edit Data Center Windows

The table below describes the settings of a data center as displayed in the New Data Center and Edit Data Center windows. Invalid entries are outlined in orange when you click OK, preventing the changes from being accepted. In addition, field prompts indicate the expected values or range of values.
Table 3.1. Data Center Properties
Field
Description/Action
Name
The name of the data center. This text field has a 40-character limit and must be a unique name with any combination of uppercase and lowercase letters, numbers, hyphens, and underscores.
Description
The description of the data center. This field is recommended but not mandatory.
Type
The storage type. Choose one of the following:
  • Shared
  • Local
The type of data domain dictates the type of the data center and cannot be changed after creation without significant disruption. Multiple types of storage domains (iSCSI, NFS, FC, POSIX, and Gluster) can be added to the same data center, though local and shared domains cannot be mixed.
Compatibility Version
The version of Red Hat Enterprise Virtualization. Choose one of the following:
  • 3.0
  • 3.1
  • 3.2
  • 3.3
  • 3.4
  • 3.5
  • 3.6
After upgrading the Red Hat Enterprise Virtualization Manager, the hosts, clusters and data centers may still be in the earlier version. Ensure that you have upgraded all the hosts, then the clusters, before you upgrade the Compatibility Level of the data center.
Quota Mode
Quota is a resource limitation tool provided with Red Hat Enterprise Virtualization. Choose one of:
  • Disabled: Select if you do not want to implement Quota
  • Audit: Select if you want to edit the Quota settings
  • Enforced: Select to implement Quota
MAC Address Pool
The MAC address pool of the data center. If no other MAC address pool is assigned the default MAC address pool is used. For more information on MAC address pools see Section 1.1.5, “MAC Address Pools”

3.5.3. Re-Initializing a Data Center: Recovery Procedure

This recovery procedure replaces the master data domain of your data center with a new master data domain; this is necessary if your master data domain becomes corrupted. Re-initializing a data center allows you to restore all other resources associated with the data center, including clusters, hosts, and non-problematic storage domains.
You can import any backed-up or exported virtual machines or templates into your new master data domain.

Procedure 3.2. Re-Initializing a Data Center

  1. Click the Data Centers resource tab and select the data center to re-initialize.
  2. Ensure that any storage domains attached to the data center are in maintenance mode.
  3. Right-click the data center and select Re-Initialize Data Center from the drop-down menu to open the Data Center Re-Initialize window.
  4. The Data Center Re-Initialize window lists all available (detached; in maintenance mode) storage domains. Click the radio button for the storage domain you are adding to the data center.
  5. Select the Approve operation check box.
  6. Click OK to close the window and re-initialize the data center.
The storage domain is attached to the data center as the master data domain and activated. You can now import any backed-up or exported virtual machines or templates into your new master data domain.

3.5.4. Removing a Data Center

An active host is required to remove a data center. Removing a data center will not remove the associated resources.

Procedure 3.3. Removing a Data Center

  1. Ensure that the storage domains attached to the data center are in maintenance mode.
  2. Click the Data Centers resource tab and select the data center to remove.
  3. Click Remove to open the Remove Data Center(s) confirmation window.
  4. Click OK.

3.5.5. Force Removing a Data Center

A data center becomes Non Responsive if the attached storage domain is corrupt or if the host becomes Non Responsive. You cannot Remove the data center under either circumstance.
Force Remove does not require an active host. It also permanently removes the attached storage domain.
It may be necessary to Destroy a corrupted storage domain before you can Force Remove the data center.

Procedure 3.4. Force Removing a Data Center

  1. Click the Data Centers resource tab and select the data center to remove.
  2. Click Force Remove to open the Force Remove Data Center confirmation window.
  3. Select the Approve operation check box.
  4. Click OK.
The data center and attached storage domain are permanently removed from the Red Hat Enterprise Virtualization environment.

3.5.6. Changing the Data Center Compatibility Version

Red Hat Enterprise Virtualization data centers have a compatibility version. The compatibility version indicates the version of Red Hat Enterprise Virtualization that the data center is intended to be compatible with. All clusters in the data center must support the desired compatibility level.

Note

To change the data center compatibility version, you must have first updated all the clusters in your data center to a level that supports your desired compatibility level.

Procedure 3.5. Changing the Data Center Compatibility Version

  1. From the Administration Portal, click the Data Centers tab.
  2. Select the data center to change from the list displayed.
  3. Click Edit.
  4. Change the Compatibility Version to the desired value.
  5. Click OK to open the Change Data Center Compatibility Version confirmation window.
  6. Click OK to confirm.
You have updated the compatibility version of the data center.

Warning

Upgrading the compatibility version will also upgrade all of the storage domains belonging to the data center. If you are upgrading the compatibility version from below 3.1 to a higher version, these storage domains will become unusable with versions older than 3.1.

3.6. Data Centers and Storage Domains

3.6.1. Attaching an Existing Data Domain to a Data Center

Data domains that are Unattached can be attached to a data center. Shared storage domains of multiple types (iSCSI, NFS, FC, POSIX, and Gluster) can be added to the same data center.

Procedure 3.6. Attaching an Existing Data Domain to a Data Center

  1. Click the Data Centers resource tab and select the appropriate data center.
  2. Select the Storage tab in the details pane to list the storage domains already attached to the data center.
  3. Click Attach Data to open the Attach Storage window.
  4. Select the check box for the data domain to attach to the data center. You can select multiple check boxes to attach multiple data domains.
  5. Click OK.
The data domain is attached to the data center and is automatically activated.

3.6.2. Attaching an Existing ISO domain to a Data Center

An ISO domain that is Unattached can be attached to a data center. The ISO domain must be of the same Storage Type as the data center.
Only one ISO domain can be attached to a data center.

Procedure 3.7. Attaching an Existing ISO Domain to a Data Center

  1. Click the Data Centers resource tab and select the appropriate data center.
  2. Select the Storage tab in the details pane to list the storage domains already attached to the data center.
  3. Click Attach ISO to open the Attach ISO Library window.
  4. Click the radio button for the appropriate ISO domain.
  5. Click OK.
The ISO domain is attached to the data center and is automatically activated.

3.6.3. Attaching an Existing Export Domain to a Data Center

An export domain that is Unattached can be attached to a data center. Only one export domain can be attached to a data center.

Procedure 3.8. Attaching an Existing Export Domain to a Data Center

  1. Click the Data Centers resource tab and select the appropriate data center.
  2. Select the Storage tab in the details pane to list the storage domains already attached to the data center.
  3. Click Attach Export to open the Attach Export Domain window.
  4. Click the radio button for the appropriate Export domain.
  5. Click OK.
The export domain is attached to the data center and is automatically activated.

3.6.4. Detaching a Storage Domain from a Data Center

Detaching a storage domain from a data center will stop the data center from associating with that storage domain. The storage domain is not removed from the Red Hat Enterprise Virtualization environment; it can be attached to another data center.
Data, such as virtual machines and templates, remains attached to the storage domain.

Note

The master storage domain cannot be detached if it is the last available storage domain in the data center.

Procedure 3.9. Detaching a Storage Domain from a Data Center

  1. Click the Data Centers resource tab and select the appropriate data center.
  2. Select the Storage tab in the details pane to list the storage domains attached to the data center.
  3. Select the storage domain to detach. If the storage domain is Active, click Maintenance to open the Maintenance Storage Domain(s) confirmation window.
  4. Click OK to initiate maintenance mode.
  5. Click Detach to open the Detach Storage confirmation window.
  6. Click OK.
You have detached the storage domain from the data center. It can take up to several minutes for the storage domain to disappear from the details pane.

3.7. Data Centers and Permissions

3.7.1. Managing System Permissions for a Data Center

As the SuperUser, the system administrator manages all aspects of the Administration Portal. More specific administrative roles can be assigned to other users. These restricted administrator roles are useful for granting a user administrative privileges that limit them to a specific resource. For example, a DataCenterAdmin role has administrator privileges only for the assigned data center with the exception of the storage for that data center, and a ClusterAdmin has administrator privileges only for the assigned cluster.
A data center administrator is a system administration role for a specific data center only. This is useful in virtualization environments with multiple data centers where each data center requires an administrator. The DataCenterAdmin role is a hierarchical model; a user assigned the data center administrator role for a data center can manage all objects in the data center with the exception of storage for that data center. Use the Configure button in the header bar to assign a data center administrator for all data centers in the environment.
The data center administrator role permits the following actions:
  • Create and remove clusters associated with the data center.
  • Add and remove hosts, virtual machines, and pools associated with the data center.
  • Edit user permissions for virtual machines associated with the data center.

Note

You can only assign roles and permissions to existing users.
You can change the system administrator of a data center by removing the existing system administrator and adding the new system administrator.

3.7.2. Data Center Administrator Roles Explained

Data Center Permission Roles

The table below describes the administrator roles and privileges applicable to data center administration.

Table 3.2. Red Hat Enterprise Virtualization System Administrator Roles
Role Privileges Notes
DataCenterAdmin Data Center Administrator
Can use, create, delete, and manage all physical and virtual resources within a specific data center except for storage, including clusters, hosts, templates, and virtual machines.
NetworkAdmin Network Administrator
Can configure and manage the network of a particular data center. A network administrator of a data center also inherits network permissions for virtual machines within the data center.

3.7.3. Assigning an Administrator or User Role to a Resource

Assign administrator or user roles to resources to allow users to access or manage that resource.

Procedure 3.10. Assigning a Role to a Resource

  1. Use the resource tabs, tree mode, or the search function to find and select the resource in the results list.
  2. Click the Permissions tab in the details pane to list the assigned users, the user's role, and the inherited permissions for the selected resource.
  3. Click Add.
  4. Enter the name or user name of an existing user into the Search text box and click Go. Select a user from the resulting list of possible matches.
  5. Select a role from the Role to Assign: drop-down list.
  6. Click OK.
You have assigned a role to a user; the user now has the inherited permissions of that role enabled for that resource.

3.7.4. Removing an Administrator or User Role from a Resource

Remove an administrator or user role from a resource; the user loses the inherited permissions associated with the role for that resource.

Procedure 3.11. Removing a Role from a Resource

  1. Use the resource tabs, tree mode, or the search function to find and select the resource in the results list.
  2. Click the Permissions tab in the details pane to list the assigned users, the user's role, and the inherited permissions for the selected resource.
  3. Select the user to remove from the resource.
  4. Click Remove. The Remove Permission window opens to confirm permissions removal.
  5. Click OK.
You have removed the user's role, and the associated permissions, from the resource.

Chapter 4. Clusters

4.1. Introduction to Clusters

A cluster is a logical grouping of hosts that share the same storage domains and have the same type of CPU (either Intel or AMD). If the hosts have different generations of CPU models, they use only the features present in all models.
Each cluster in the system must belong to a data center, and each host in the system must belong to a cluster. Virtual machines are dynamically allocated to any host in a cluster and can be migrated between them, according to policies defined on the Clusters tab and in the Configuration tool during runtime. The cluster is the highest level at which power and load-sharing policies can be defined.
The number of hosts and number of virtual machines that belong to a cluster are displayed in the results list under Host Count and VM Count, respectively.
Clusters run virtual machines or Red Hat Gluster Storage Servers. These two purposes are mutually exclusive: A single cluster cannot support virtualization and storage hosts together.
Red Hat Enterprise Virtualization creates a default cluster in the default data center during installation.
Cluster

Figure 4.1. Cluster

4.2. Cluster Tasks

4.2.1. Creating a New Cluster

A data center can contain multiple clusters, and a cluster can contain multiple hosts. All hosts in a cluster must be of the same CPU type (Intel or AMD). It is recommended that you create your hosts before you create your cluster to ensure CPU type optimization. However, you can configure the hosts at a later time using the Guide Me button.

Procedure 4.1. Creating a New Cluster

  1. Select the Clusters resource tab.
  2. Click New.
  3. Select the Data Center the cluster will belong to from the drop-down list.
  4. Enter the Name and Description of the cluster.
  5. Select a network from the Management Network drop-down list to assign the management network role.
  6. Select the CPU Architecture and CPU Type from the drop-down lists. It is important to match the CPU processor family with the minimum CPU processor type of the hosts you intend to attach to the cluster; otherwise, the hosts will be non-operational.

    Note

    For both Intel and AMD CPU types, the listed CPU models are in logical order from the oldest to the newest. If your cluster includes hosts with different CPU models, select the oldest CPU model. For more information on each CPU model, see https://access.redhat.com/solutions/634853.
  7. Select the Compatibility Version of the cluster from the drop-down list.
  8. Select either the Enable Virt Service or Enable Gluster Service radio button to define whether the cluster will be populated with virtual machine hosts or with Gluster-enabled nodes. Note that you cannot add Red Hat Enterprise Virtualization Hypervisor hosts to a Gluster-enabled cluster.
  9. Optionally select the Enable to set VM maintenance reason check box to enable an optional reason field when a virtual machine is shut down from the Manager, allowing the administrator to provide an explanation for the maintenance.
  10. Optionally select the Enable to set Host maintenance reason check box to enable an optional reason field when a host is placed into maintenance mode from the Manager, allowing the administrator to provide an explanation for the maintenance.
  11. Select either the /dev/random source (Linux-provided device) or /dev/hwrng source (external hardware device) check box to specify the random number generator device that all hosts in the cluster will use.
  12. Click the Optimization tab to select the memory page sharing threshold for the cluster, and optionally enable CPU thread handling and memory ballooning on the hosts in the cluster.
  13. Click the Resilience Policy tab to select the virtual machine migration policy.
  14. Click the Scheduling Policy tab to optionally configure a scheduling policy, configure scheduler optimization settings, enable trusted service for hosts in the cluster, enable HA Reservation, and add a custom serial number policy.
  15. Click the Console tab to optionally override the global SPICE proxy, if any, and specify the address of a SPICE proxy for hosts in the cluster.
  16. Click the Fencing policy tab to enable or disable fencing in the cluster, and select fencing options.
  17. Click OK to create the cluster and open the New Cluster - Guide Me window.
  18. The Guide Me window lists the entities that need to be configured for the cluster. Configure these entities or postpone configuration by clicking the Configure Later button; configuration can be resumed by selecting the cluster and clicking the Guide Me button.
The new cluster is added to the virtualization environment.

4.2.2. Explanation of Settings and Controls in the New Cluster and Edit Cluster Windows

4.2.2.1. General Cluster Settings Explained
New Cluster window

Figure 4.2. New Cluster window

The table below describes the settings for the General tab in the New Cluster and Edit Cluster windows. Invalid entries are outlined in orange when you click OK, preventing the changes from being accepted. In addition, field prompts indicate the expected values or range of values.
Table 4.1. General Cluster Settings
Field
Description/Action
Data Center
The data center that will contain the cluster. The data center must be created before adding a cluster.
Name
The name of the cluster. This text field has a 40-character limit and must be a unique name with any combination of uppercase and lowercase letters, numbers, hyphens, and underscores.
Description / Comment
The description of the cluster or additional notes. These fields are recommended but not mandatory.
Management Network
The logical network which will be assigned the management network role. The default is ovirtmgmt. On existing clusters, the management network can only be changed via the Manage Networks button in the Logical Networks tab in the details pane.
CPU Architecture
The CPU architecture of the cluster. Different CPU types are available depending on which CPU architecture is selected.
  • undefined: All CPU types are available.
  • x86_64: All Intel and AMD CPU types are available.
  • ppc64: Only IBM POWER 8 is available.
CPU Type
The CPU type of the cluster. Choose one of:
  • Intel Conroe Family
  • Intel Penryn Family
  • Intel Nehalem Family
  • Intel Westmere Family
  • Intel SandyBridge Family
  • Intel Haswell
  • AMD Opteron G1
  • AMD Opteron G2
  • AMD Opteron G3
  • AMD Opteron G4
  • AMD Opteron G5
  • IBM POWER 8
All hosts in a cluster must run either Intel, AMD, or IBM POWER 8 CPU type; this cannot be changed after creation without significant disruption. The CPU type should be set to the oldest CPU model in the cluster. Only features present in all models can be used. For both Intel and AMD CPU types, the listed CPU models are in logical order from the oldest to the newest.
Compatibility Version
The version of Red Hat Enterprise Virtualization. Choose one of:
  • 3.0
  • 3.1
  • 3.2
  • 3.3
  • 3.4
  • 3.5
  • 3.6
You will not be able to select a version older than the version specified for the data center.
Enable Virt Service
If this radio button is selected, hosts in this cluster will be used to run virtual machines.
Enable Gluster Service
If this radio button is selected, hosts in this cluster will be used as Red Hat Gluster Storage Server nodes, and not for running virtual machines. You cannot add a Red Hat Enterprise Virtualization Hypervisor host to a cluster with this option enabled.
Import existing gluster configuration
This check box is only available if the Enable Gluster Service radio button is selected. This option allows you to import an existing Gluster-enabled cluster and all its attached hosts to Red Hat Enterprise Virtualization Manager.
The following options are required for each host in the cluster that is being imported:
  • Address: Enter the IP or fully qualified domain name of the Gluster host server.
  • Fingerprint: Red Hat Enterprise Virtualization Manager fetches the host's fingerprint, to ensure you are connecting with the correct host.
  • Root Password: Enter the root password required for communicating with the host.
Enable to set VM maintenance reason
If this check box is selected, an optional reason field will appear when a virtual machine in the cluster is shut down from the Manager. This allows you to provide an explanation for the maintenance, which will appear in the logs and when the virtual machine is powered on again.
Enable to set Host maintenance reason
If this check box is selected, an optional reason field will appear when a host in the cluster is moved into maintenance mode from the Manager. This allows you to provide an explanation for the maintenance, which will appear in the logs and when the host is activated again.
Required Random Number Generator sources:
If one of the following check boxes is selected, all hosts in the cluster must have that device available. This enables passthrough of entropy from the random number generator device to virtual machines.
  • /dev/random source - The Linux-provided random number generator.
  • /dev/hwrng source - An external hardware generator.
Note that this feature is only supported on hosts running Red Hat Enterprise Linux 6.6 and later or Red Hat Enterprise Linux 7.0 and later.
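If you plan to select the /dev/hwrng source, it is worth confirming beforehand that each host actually exposes a hardware random number generator. The commands below are a minimal check run directly on a Red Hat Enterprise Linux host; they are illustrative and not part of the Manager itself.

    # The Linux-provided generator should always be present.
    ls -l /dev/random

    # A hardware random number generator is only present on supported hardware.
    ls -l /dev/hwrng

    # List the hardware RNG back ends the kernel has detected and the one in use.
    cat /sys/class/misc/hw_random/rng_available
    cat /sys/class/misc/hw_random/rng_current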
4.2.2.2. Optimization Settings Explained
Memory page sharing allows virtual machines to use up to 200% of their allocated memory by utilizing unused memory in other virtual machines. This process is based on the assumption that the virtual machines in your Red Hat Enterprise Virtualization environment will not all be running at full capacity at the same time, allowing unused memory to be temporarily allocated to a particular virtual machine.
CPU Thread Handling allows hosts to run virtual machines with a total number of processor cores greater than the number of cores in the host. This is useful for non-CPU-intensive workloads, where allowing a greater number of virtual machines to run can reduce hardware requirements. It also allows virtual machines to run with CPU topologies that would otherwise not be possible, specifically if the number of guest cores is between the number of host cores and the number of host threads.
The table below describes the settings for the Optimization tab in the New Cluster and Edit Cluster windows.
Table 4.2. Optimization Settings
Field
Description/Action
Memory Optimization
  • None - Disable memory overcommit: Disables memory page sharing.
  • For Server Load - Allow scheduling of 150% of physical memory: Sets the memory page sharing threshold to 150% of the system memory on each host.
  • For Desktop Load - Allow scheduling of 200% of physical memory: Sets the memory page sharing threshold to 200% of the system memory on each host.
CPU Threads
Selecting the Count Threads As Cores check box allows hosts to run virtual machines with a total number of processor cores greater than the number of cores in the host.
The exposed host threads would be treated as cores which can be utilized by virtual machines. For example, a 24-core system with 2 threads per core (48 threads total) can run virtual machines with up to 48 cores each, and the algorithms to calculate host CPU load would compare load against twice as many potential utilized cores.
Memory Balloon
Selecting the Enable Memory Balloon Optimization check box enables memory overcommitment on virtual machines running on the hosts in this cluster. When this option is set, the Memory Overcommit Manager (MoM) will start ballooning where and when possible, with a limitation of the guaranteed memory size of every virtual machine.
To have a balloon running, the virtual machine needs to have a balloon device with relevant drivers. Each virtual machine in cluster level 3.2 and higher includes a balloon device, unless specifically removed. Each host in this cluster receives a balloon policy update when its status changes to Up. If necessary, you can manually update the balloon policy on a host without having to change the status. See Section 4.2.5, “Updating the MoM Policy on Hosts in a Cluster”.
It is important to understand that in some scenarios ballooning may collide with KSM. In such cases MoM will try to adjust the balloon size to minimize collisions. Additionally, in some scenarios ballooning may cause sub-optimal performance for a virtual machine. Administrators are advised to use ballooning optimization with caution.
KSM control
Selecting the Enable KSM check box enables MoM to run Kernel Same-page Merging (KSM) when necessary and when it can yield a memory saving benefit that outweighs its CPU cost.
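KSM activity is not exposed directly in the Administration Portal, but you can verify on an individual host whether Kernel Same-page Merging is running and whether it is saving memory. This is a generic check of the kernel's KSM counters on a Red Hat Enterprise Linux host, shown here for illustration only.

    # 1 means KSM is currently running on this host, 0 means it is stopped.
    cat /sys/kernel/mm/ksm/run

    # The number of shared pages currently in use; a value greater than zero
    # indicates that KSM is producing a memory saving.
    cat /sys/kernel/mm/ksm/pages_sharing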
4.2.2.3. Resilience Policy Settings Explained
The resilience policy sets the virtual machine migration policy in the event of a host becoming non-operational. Virtual machines running on a host that becomes non-operational are live migrated to other hosts in the cluster; this migration is dependent upon your cluster resilience policy. If a host is non-responsive and gets rebooted, virtual machines with high availability are restarted on another host in the cluster. The resilience policy only applies to hosts in a non-operational state.
Table 4.3. Host Failure State Explained
State
Description
Non Operational
Non-operational hosts can be communicated with by the Manager, but have an incorrect configuration, for example a missing logical network. If a host becomes non-operational, the migration of virtual machines depends on the cluster resilience policy.
Non Responsive
Non-responsive hosts cannot be communicated with by the Manager. If a host becomes non-responsive, all virtual machines with high availability are restarted on a different host in the cluster.
Virtual machine migration is a network-intensive operation. For instance, on a setup where a host is running ten or more virtual machines, migrating and restarting all of them can be a long and resource-consuming process. Therefore, select the policy action to best suit your setup. If you prefer a conservative approach, disable all migration of virtual machines. Alternatively, if you have many virtual machines, but only several which are running critical workloads, select the option to migrate only highly available virtual machines.
The table below describes the settings for the Resilience Policy tab in the New Cluster and Edit Cluster windows. See Section 4.2.1, “Creating a New Cluster” for more information on how to set the resilience policy when creating a new cluster.
Table 4.4. Resilience Policy Settings
Field
Description/Action
Migrate Virtual Machines
Migrates all virtual machines in order of their defined priority.
Migrate only Highly Available Virtual Machines
Migrates only highly available virtual machines to prevent overloading other hosts.
Do Not Migrate Virtual Machines
Prevents virtual machines from being migrated.
4.2.2.4. Scheduling Policy Settings Explained
Scheduling policies allow you to specify the usage and distribution of virtual machines between available hosts. Define the scheduling policy to enable automatic load balancing across the hosts in a cluster.
To add a scheduling policy to an existing cluster, click the Clusters tab and click the Edit button, then click the Scheduling Policy tab.
Scheduling Policy Settings: vm_evenly_distributed

Figure 4.3. Scheduling Policy Settings: vm_evenly_distributed

The table below describes the settings for the Scheduling Policy tab.
Table 4.5. Scheduling Policy Tab Properties
Field
Description/Action
Select Policy
Select a policy from the drop-down list.
  • none: Set the policy value to none to have no load or power sharing between hosts. This is the default mode.
  • evenly_distributed: Distributes the memory and CPU processing load evenly across all hosts in the cluster. Additional virtual machines attached to a host will not start if that host has reached the defined Maximum Service Level.
  • InClusterUpgrade: Distributes virtual machines based on host operating system version. Hosts with a newer operating system than the virtual machine currently runs on are given priority over hosts with the same operating system. Virtual machines that migrate to a host with a newer operating system will not migrate back to an older operating system. A virtual machine can restart on any host in the cluster. The policy allows hosts in the cluster to be upgraded by allowing the cluster to have mixed operating system versions. Preconditions must be met before the policy can be enabled. See the Upgrade Guide for more information.
  • power_saving: Distributes the memory and CPU processing load across a subset of available hosts to reduce power consumption on underutilized hosts. A host with a CPU load below the low utilization value for longer than the defined time interval migrates all of its virtual machines to other hosts so that it can be powered down. Additional virtual machines attached to a host will not start if that host has reached the defined high utilization value.
  • vm_evenly_distributed: Distributes virtual machines evenly between hosts based on a count of the virtual machines. The cluster is considered unbalanced if any host is running more virtual machines than the HighVmCount and there is at least one host with a virtual machine count that falls outside of the MigrationThreshold.
Properties
The following properties appear depending on the selected policy, and can be edited if necessary:
  • HighVmCount: Sets the maximum number of virtual machines that can run on each host. Exceeding this limit qualifies the host as overloaded. The default value is 10.
  • MigrationThreshold: Defines a buffer before virtual machines are migrated from the host. It is the maximum inclusive difference in virtual machine count between the most highly-utilized host and the least-utilized host. The cluster is balanced when every host in the cluster has a virtual machine count that falls inside the migration threshold. The default value is 5.
  • SpmVmGrace: Defines the number of slots for virtual machines to be reserved on SPM hosts. The SPM host will have a lower load than other hosts, so this variable defines how many fewer virtual machines than other hosts it can run. The default value is 5.
  • CpuOverCommitDurationMinutes: Sets the time (in minutes) that a host can run a CPU load outside of the defined utilization values before the scheduling policy takes action. The defined time interval protects against temporary spikes in CPU load activating scheduling policies and instigating unnecessary virtual machine migration. The value can be a maximum of two characters. The default value is 2.
  • HighUtilization: Expressed as a percentage. If the host runs with CPU usage at or above the high utilization value for the defined time interval, the Red Hat Enterprise Virtualization Manager migrates virtual machines to other hosts in the cluster until the host's CPU load is below the maximum service threshold. The default value is 80.
  • LowUtilization: Expressed as a percentage. If the host runs with CPU usage below the low utilization value for the defined time interval, the Red Hat Enterprise Virtualization Manager will migrate virtual machines to other hosts in the cluster. The Manager will power down the original host machine, and restart it again when load balancing requires or there are not enough free hosts in the cluster. The default value is 20.
  • ScaleDown: Reduces the impact of the HA Reservation weight function, by dividing a host's score by the specified amount. This is an optional property that can be added to any policy, including none.
  • HostsInReserve: Specifies a number of hosts to keep running even though there are no running virtual machines on them. This is an optional property that can be added to the power_saving policy.
  • EnableAutomaticHostPowerManagement: Enables automatic power management for all hosts in the cluster. This is an optional property that can be added to the power_saving policy. The default value is true.
  • MaxFreeMemoryForOverUtilized: Sets the maximum free memory required in MB for the minimum service level. If the host's memory usage runs at, or above this value, the Red Hat Enterprise Virtualization Manager migrates virtual machines to other hosts in the cluster until the host's available memory is below the minimum service threshold. Setting both MaxFreeMemoryForOverUtilized and MinFreeMemoryForUnderUtilized to 0MB disables memory based balancing. This is an optional property that can be added to the power_saving and evenly_distributed policies.
  • MinFreeMemoryForUnderUtilized: Sets the minimum free memory required in MB before the host is considered underutilized. If the host's memory usage runs below this value, the Red Hat Enterprise Virtualization Manager migrates virtual machines to other hosts in the cluster and will automatically power down the host machine, and restart it again when load balancing requires or there are not enough free hosts in the cluster. Setting both MaxFreeMemoryForOverUtilized and MinFreeMemoryForUnderUtilized to 0MB disables memory based balancing. This is an optional property that can be added to the power_saving and evenly_distributed policies.
Scheduler Optimization
Optimize scheduling for host weighing/ordering.
  • Optimize for Utilization: Includes weight modules in scheduling to allow best selection.
  • Optimize for Speed: Skips host weighting in cases where there are more than ten pending requests.
Enable Trusted Service
Enable integration with an OpenAttestation server. Before this can be enabled, use the engine-config tool to enter the OpenAttestation server's details. For more information, see Section 9.4, “Trusted Compute Pools”.
Enable HA Reservation
Enable the Manager to monitor cluster capacity for highly available virtual machines. The Manager ensures that appropriate capacity exists within a cluster for virtual machines designated as highly available to migrate in the event that their existing host fails unexpectedly.
Provide custom serial number policy
This check box allows you to specify a serial number policy for the virtual machines in the cluster. Select one of the following options:
  • Host ID: Sets the host's UUID as the virtual machine's serial number.
  • Vm ID: Sets the virtual machine's UUID as its serial number.
  • Custom serial number: Allows you to specify a custom serial number.
Auto Converge migrations
This option allows you to set whether auto-convergence is used during live migration of virtual machines in the cluster. Large virtual machines with high workloads can dirty memory more quickly than the transfer rate achieved during live migration, and prevent the migration from converging. Auto-convergence capabilities in QEMU allow you to force convergence of virtual machine migrations. QEMU automatically detects a lack of convergence and triggers a throttle-down of the vCPUs on the virtual machines. Auto-convergence is disabled globally by default.
  • Select Inherit from global setting to use the auto-convergence setting that is set at the global level with engine-config. This option is selected by default.
  • Select Auto Converge to override the global setting and allow auto-convergence for virtual machines in the cluster.
  • Select Don't Auto Converge to override the global setting and prevent auto-convergence for virtual machines in the cluster.
Enable migration compression
This option allows you to set whether migration compression is used during live migration of virtual machines in the cluster. This feature uses Xor Binary Zero Run-Length-Encoding to reduce virtual machine downtime and total live migration time for virtual machines running memory write-intensive workloads or for any application with a sparse memory update pattern. Migration compression is disabled globally by default.
  • Select Inherit from global setting to use the compression setting that is set at the global level with engine-config. This option is selected by default.
  • Select Compress to override the global setting and allow compression for virtual machines in the cluster.
  • Select Don't compress to override the global setting and prevent compression for virtual machines in the cluster.
When a host's free memory drops below 20%, ballooning commands such as mom.Controllers.Balloon - INFO Ballooning guest:half1 from 1096400 to 1991580 are logged to /var/log/vdsm/mom.log, the Memory Overcommit Manager log file.
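To watch ballooning activity as it happens, you can follow the MoM log on the host. The global defaults that the Auto Converge migrations and Enable migration compression settings inherit from are managed on the Manager with the engine-config tool; the exact option names vary by version, so list them rather than guessing. The commands below are an illustrative sketch.

    # On a host: follow the Memory Overcommit Manager log and show only
    # balloon-related messages.
    tail -f /var/log/vdsm/mom.log | grep -i balloon

    # On the Manager: list all available configuration options, including the
    # global auto-convergence and migration compression defaults.
    engine-config -l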
4.2.2.5. Cluster Console Settings Explained
The table below describes the settings for the Console tab in the New Cluster and Edit Cluster windows.
Table 4.6. Console Settings
Field
Description/Action
Define SPICE Proxy for Cluster
Select this check box to enable overriding the SPICE proxy defined in global configuration. This feature is useful when a user, for example one connecting through the User Portal, is outside of the network where the hypervisors reside.
Overridden SPICE proxy address
The proxy by which the SPICE client will connect to virtual machines. The address must be in the following format:
protocol://[host]:[port]
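For example, a proxy listening on port 3128 would be entered as http://proxy.example.com:3128 (the host name and port are placeholders). The global SPICE proxy that this setting overrides is configured on the Manager with the engine-config tool, as sketched below with an example value.

    # Set the global SPICE proxy on the Manager; the URL is an example value.
    engine-config -s SpiceProxyDefault="http://proxy.example.com:3128"

    # Restart the ovirt-engine service afterwards for the change to take effect.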
4.2.2.6. Fencing Policy Settings Explained
The table below describes the settings for the Fencing Policy tab in the New Cluster and Edit Cluster windows.
Table 4.7. Fencing Policy Settings
Field Description/Action
Enable fencing Enables fencing on the cluster. Fencing is enabled by default, but can be disabled if required; for example, if temporary network issues are occurring or expected, administrators can disable fencing until diagnostics or maintenance activities are completed. Note that if fencing is disabled, highly available virtual machines running on non-responsive hosts will not be restarted elsewhere.
Skip fencing if host has live lease on storage If this check box is selected, any hosts in the cluster that are Non Responsive and still connected to storage will not be fenced.
Skip fencing on cluster connectivity issues If this check box is selected, fencing will be temporarily disabled if the percentage of hosts in the cluster that are experiencing connectivity issues is greater than or equal to the defined Threshold. The Threshold value is selected from the drop-down list; available values are 25, 50, 75, and 100.

4.2.3. Editing a Resource

Summary

Edit the properties of a resource.

Procedure 4.2. Editing a Resource

  1. Use the resource tabs, tree mode, or the search function to find and select the resource in the results list.
  2. Click Edit to open the Edit window.
  3. Change the necessary properties and click OK.
Result

The new properties are saved to the resource. The Edit window will not close if a property field is invalid.

4.2.4. Setting Load and Power Management Policies for Hosts in a Cluster

The evenly_distributed and power_saving scheduling policies allow you to specify acceptable memory and CPU usage values, and the point at which virtual machines must be migrated to or from a host. The vm_evenly_distributed scheduling policy distributes virtual machines evenly between hosts based on a count of the virtual machines. Define the scheduling policy to enable automatic load balancing across the hosts in a cluster. For a detailed explanation of each scheduling policy, see Section 4.2.2.4, “Scheduling Policy Settings Explained”.

Procedure 4.3. Setting Load and Power Management Policies for Hosts

  1. Use the resource tabs, tree mode, or the search function to find and select the cluster in the results list.
  2. Click Edit to open the Edit Cluster window.
    Edit Scheduling Policy

    Figure 4.4. Edit Scheduling Policy

  3. Select one of the following policies:
    • none
    • vm_evenly_distributed
      1. Set the maximum number of virtual machines that can run on each host in the HighVmCount field.
      2. Define the maximum acceptable difference between the number of virtual machines on the most highly-utilized host and the number of virtual machines on the least-utilized host in the MigrationThreshold field.
      3. Define the number of slots for virtual machines to be reserved on SPM hosts in the SpmVmGrace field.
    • evenly_distributed
      1. Set the time (in minutes) that a host can run a CPU load outside of the defined utilization values before the scheduling policy takes action in the CpuOverCommitDurationMinutes field.
      2. Enter the CPU utilization percentage at which virtual machines start migrating to other hosts in the HighUtilization field.
      3. Enter the minimum required free memory in MB at which virtual machines start migrating to other hosts in the MinFreeMemoryForUnderUtilized field.
      4. Enter the maximum required free memory in MB at which virtual machines start migrating to other hosts in the MaxFreeMemoryForOverUtilized field.
    • power_saving
      1. Set the time (in minutes) that a host can run a CPU load outside of the defined utilization values before the scheduling policy takes action in the CpuOverCommitDurationMinutes field.
      2. Enter the CPU utilization percentage below which the host will be considered under-utilized in the LowUtilization field.
      3. Enter the CPU utilization percentage at which virtual machines start migrating to other hosts in the HighUtilization field.
      4. Enter the minimum required free memory in MB at which virtual machines start migrating to other hosts in the MinFreeMemoryForUnderUtilized field.
      5. Enter the maximum required free memory in MB at which virtual machines start migrating to other hosts in the MaxFreeMemoryForOverUtilized field.
  4. Choose one of the following as the Scheduler Optimization for the cluster:
    • Select Optimize for Utilization to include weight modules in scheduling to allow best selection.
    • Select Optimize for Speed to skip host weighting in cases where there are more than ten pending requests.
  5. If you are using an OpenAttestation server to verify your hosts, and have set up the server's details using the engine-config tool (see the example after this procedure), select the Enable Trusted Service check box.
  6. Optionally select the Enable HA Reservation check box to enable the Manager to monitor cluster capacity for highly available virtual machines.
  7. Optionally select the Provide custom serial number policy check box to specify a serial number policy for the virtual machines in the cluster, and then select one of the following options:
    • Select Host ID to set the host's UUID as the virtual machine's serial number.
    • Select Vm ID to set the virtual machine's UUID as its serial number.
    • Select Custom serial number, and then specify a custom serial number in the text field.
  8. Click OK.
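The OpenAttestation server details referred to in step 5 are entered on the Manager with the engine-config tool before the Enable Trusted Service check box can be selected. The snippet below is a hedged sketch: the option name AttestationServer and the host name are assumptions, so confirm the exact keys for your version in Section 9.4, “Trusted Compute Pools” or with engine-config -l.

    # Register the OpenAttestation server with the Manager (option name and
    # host name are illustrative; verify them with 'engine-config -l').
    engine-config -s AttestationServer=attestationserver.example.com

    # Restart the ovirt-engine service afterwards for the change to take effect.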

4.2.5. Updating the MoM Policy on Hosts in a Cluster

The Memory Overcommit Manager handles memory balloon and KSM functions on a host. Changes to these functions at the cluster level are only passed to hosts the next time a host moves to a status of Up after being rebooted or in maintenance mode. However, if necessary you can apply important changes to a host immediately by synchronizing the MoM policy while the host is Up. The following procedure must be performed on each host individually.

Procedure 4.4. Synchronizing MoM Policy on a Host

  1. Click the Clusters tab and select the cluster to which the host belongs.
  2. Click the Hosts tab in the details pane and select the host that requires an updated MoM policy.
  3. Click Sync MoM Policy.
The MoM policy on the host is updated without having to move the host to maintenance mode and back to Up.

4.2.6. CPU Profiles

CPU profiles define the maximum amount of processing capability a virtual machine in a cluster can access on the host on which it runs, expressed as a percent of the total processing capability available to that host. CPU profiles are based on CPU quality of service (QoS) entries defined under data centers, and are not automatically applied to all virtual machines in a cluster; they must be manually assigned to individual virtual machines for the profile to take effect.
4.2.6.1. Creating a CPU Profile
Create a CPU profile. This procedure assumes you have already defined one or more CPU quality of service entries under the data center to which the cluster belongs.

Procedure 4.5. Creating a CPU Profile

  1. Click the Clusters resource tab and select a cluster.
  2. Click the CPU Profiles sub tab in the details pane.
  3. Click New.
  4. Enter a name for the CPU profile in the Name field.
  5. Enter a description for the CPU profile in the Description field.
  6. Select the quality of service to apply to the CPU profile from the QoS list.
  7. Click OK.
You have created a CPU profile, and that CPU profile can be applied to virtual machines in the cluster.
4.2.6.2. Removing a CPU Profile
Remove an existing CPU profile from your Red Hat Enterprise Virtualization environment.

Procedure 4.6. Removing a CPU Profile

  1. Click the Clusters resource tab and select a cluster.
  2. Click the CPU Profiles sub tab in the details pane.
  3. Select the CPU profile to remove.
  4. Click Remove.
  5. Click OK.
You have removed a CPU profile, and that CPU profile is no longer available. If the CPU profile was assigned to any virtual machines, those virtual machines are automatically assigned the default CPU profile.

4.2.7. Importing an Existing Red Hat Gluster Storage Cluster

You can import a Red Hat Gluster Storage cluster and all hosts belonging to the cluster into Red Hat Enterprise Virtualization Manager.
When you provide details such as the IP address or host name and password of any host in the cluster, the gluster peer status command is executed on that host through SSH, and a list of the hosts that are part of the cluster is displayed. You must manually verify the fingerprint of each host and provide passwords for them. You cannot import the cluster if one of the hosts in the cluster is down or unreachable. Because the newly imported hosts do not have VDSM installed, the bootstrap script installs all the necessary VDSM packages on the hosts after they have been imported, and reboots them.
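The output below is an illustrative example of what gluster peer status returns on a healthy cluster (host names and UUIDs are placeholders). Running the command manually on any server in the cluster is a quick way to confirm that every peer is connected before you begin the import.

    gluster peer status

    Number of Peers: 2

    Hostname: gluster2.example.com
    Uuid: 5e987bda-16dd-43c2-835b-08b7d55e94e5
    State: Peer in Cluster (Connected)

    Hostname: gluster3.example.com
    Uuid: 1e0ca3aa-9ef7-4f66-8f15-cbc348f29ff7
    State: Peer in Cluster (Connected)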

Important

Currently, a Red Hat Gluster Storage node can only be added to a cluster which has its compatibility level set to 3.1, 3.2, or 3.3.

Procedure 4.7. Importing an Existing Red Hat Gluster Storage Cluster to Red Hat Enterprise Virtualization Manager

  1. Select the Clusters resource tab to list all clusters in the results list.
  2. Click New to open the New Cluster window.
  3. Select the Data Center the cluster will belong to from the drop-down menu.
  4. Enter the Name and Description of the cluster.
  5. Select the Enable Gluster Service radio button and the Import existing gluster configuration check box.
    The Import existing gluster configuration field is displayed only if you select the Enable Gluster Service radio button.
  6. In the Address field, enter the hostname or IP address of any server in the cluster.
    The host Fingerprint is displayed to ensure you are connecting with the correct host. If a host is unreachable or if there is a network error, the message Error in fetching fingerprint is displayed in the Fingerprint field.
  7. Enter the Root Password for the server, and click OK.
  8. The Add Hosts window opens, and a list of hosts that are a part of the cluster displays.
  9. For each host, enter the Name and the Root Password.
  10. If you wish to use the same password for all hosts, select the Use a Common Password check box to enter the password in the provided text field.
    Click Apply to set the entered password on all hosts.
    Make sure the fingerprints are valid and submit your changes by clicking OK.
The bootstrap script installs all the necessary VDSM packages on the hosts after they have been imported, and reboots them. You have now successfully imported an existing Red Hat Gluster Storage cluster into Red Hat Enterprise Virtualization Manager.

4.2.8. Explanation of Settings in the Add Hosts Window

The Add Hosts window allows you to specify the details of the hosts imported as part of a Gluster-enabled cluster. This window appears after you have selected the Enable Gluster Service radio button and the Import existing gluster configuration check box in the New Cluster window and provided the necessary host details.
Table 4.8. Add Gluster Hosts Settings
Field Description
Use a common password Select this check box to use the same password for all hosts belonging to the cluster. Enter the password in the Password field, then click the Apply button to set the password on all hosts.
Name Enter the name of the host.
Hostname/IP This field is automatically populated with the fully qualified domain name or IP of the host you provided in the New Cluster window.
Root Password Enter a password in this field to use a different root password for each host. This field overrides the common password provided for all hosts in the cluster.
Fingerprint The host fingerprint is displayed to ensure you are connecting with the correct host. This field is automatically populated with the fingerprint of the host you provided in the New Cluster window.

4.2.9. Removing a Cluster

Summary

Move all hosts out of a cluster before removing it.

Note

You cannot remove the Default cluster, as it holds the Blank template. You can, however, rename the Default cluster and add it to a new data center.

Procedure 4.8. Removing a Cluster

  1. Use the resource tabs, tree mode, or the search function to find and select the cluster in the results list.
  2. Ensure there are no hosts in the cluster.
  3. Click Remove to open the Remove Cluster(s) confirmation window.
  4. Click OK.
Result

The cluster is removed.

4.2.10. Changing the Cluster Compatibility Version

Red Hat Enterprise Virtualization clusters have a compatibility version. The cluster compatibility version indicates the features of Red Hat Enterprise Virtualization supported by all of the hosts in the cluster. The cluster compatibility is set according to the version of the least capable host operating system in the cluster.

Note

To change the cluster compatibility version, you must have first updated all the hosts in your cluster to a level that supports your desired compatibility level.

Procedure 4.9. Changing the Cluster Compatibility Version

  1. From the Administration Portal, click the Clusters tab.
  2. Select the cluster to change from the list displayed.
  3. Click Edit.
  4. Change the Compatibility Version to the desired value.
  5. Click OK to open the Change Cluster Compatibility Version confirmation window.
  6. Click OK to confirm.
You have updated the compatibility version of the cluster. Once you have updated the compatibility version of all clusters in a data center, you can then change the compatibility version of the data center itself.

Warning

Upgrading the compatibility version also upgrades all of the storage domains belonging to the data center. If you are upgrading the compatibility version from below 3.1 to a higher version, these storage domains will become unusable with versions older than 3.1.

4.3. Clusters and Permissions

4.3.1. Managing System Permissions for a Cluster

As the SuperUser, the system administrator manages all aspects of the Administration Portal. More specific administrative roles can be assigned to other users. These restricted administrator roles are useful for granting a user administrative privileges that limit them to a specific resource. For example, a DataCenterAdmin role has administrator privileges only for the assigned data center with the exception of the storage for that data center, and a ClusterAdmin has administrator privileges only for the assigned cluster.
A cluster administrator is a system administration role for a specific cluster only. This is useful in data centers with multiple clusters, where each cluster requires a system administrator. The ClusterAdmin role is a hierarchical model: a user assigned the cluster administrator role for a cluster can manage all objects in the cluster. Use the Configure button in the header bar to assign a cluster administrator for all clusters in the environment.
The cluster administrator role permits the following actions:
  • Create and remove associated clusters.
  • Add and remove hosts, virtual machines, and pools associated with the cluster.
  • Edit user permissions for virtual machines associated with the cluster.

Note

You can only assign roles and permissions to existing users.
You can also change the system administrator of a cluster by removing the existing system administrator and adding the new system administrator.

4.3.2. Cluster Administrator Roles Explained

Cluster Permission Roles

The table below describes the administrator roles and privileges applicable to cluster administration.

Table 4.9. Red Hat Enterprise Virtualization System Administrator Roles
Role Privileges Notes
ClusterAdmin Cluster Administrator
Can use, create, delete, and manage all physical and virtual resources in a specific cluster, including hosts, templates, and virtual machines. Can configure network properties within the cluster, such as designating display networks or marking a network as required or non-required.
However, a ClusterAdmin does not have permissions to attach or detach networks from a cluster; NetworkAdmin permissions are required to do so.
NetworkAdmin Network Administrator
Can configure and manage the network of a particular cluster. A network administrator of a cluster also inherits network permissions for virtual machines within the cluster.

4.3.3. Assigning an Administrator or User Role to a Resource

Assign administrator or user roles to resources to allow users to access or manage that resource.

Procedure 4.10. Assigning a Role to a Resource

  1. Use the resource tabs, tree mode, or the search function to find and select the resource in the results list.
  2. Click the Permissions tab in the details pane to list the assigned users, the user's role, and the inherited permissions for the selected resource.
  3. Click Add.
  4. Enter the name or user name of an existing user into the Search text box and click Go. Select a user from the resulting list of possible matches.
  5. Select a role from the Role to Assign: drop-down list.
  6. Click OK.
You have assigned a role to a user; the user now has the inherited permissions of that role enabled for that resource.

4.3.4. Removing an Administrator or User Role from a Resource

Remove an administrator or user role from a resource; the user loses the inherited permissions associated with the role for that resource.

Procedure 4.11. Removing a Role from a Resource

  1. Use the resource tabs, tree mode, or the search function to find and select the resource in the results list.
  2. Click the Permissions tab in the details pane to list the assigned users, the user's role, and the inherited permissions for the selected resource.
  3. Select the user to remove from the resource.
  4. Click Remove. The Remove Permission window opens to confirm permissions removal.
  5. Click OK.
You have removed the user's role, and the associated permissions, from the resource.

Chapter 5. Logical Networks

5.1. Logical Network Tasks

5.1.1. Using the Networks Tab

The Networks resource tab provides a central location for users to perform logical network-related operations and search for logical networks based on each network's property or association with other resources.
All logical networks in the Red Hat Enterprise Virtualization environment display in the results list of the Networks tab. The New, Edit, and Remove buttons allow you to create, change the properties of, and delete logical networks within data centers.
Click on each network name and use the Clusters, Hosts, Virtual Machines, Templates, and Permissions tabs in the details pane to perform functions including:
  • Attaching or detaching the networks to clusters and hosts
  • Removing network interfaces from virtual machines and templates
  • Adding and removing permissions for users to access and manage networks
These functions are also accessible through each individual resource tab.

Warning

Do not change networking in a data center or a cluster if any hosts are running, as this risks making the hosts unreachable.

Important

If you plan to use Red Hat Enterprise Virtualization nodes to provide any services, remember that the services will stop if the Red Hat Enterprise Virtualization environment stops operating.
This applies to all services, but you should be especially aware of the hazards of running the following on Red Hat Enterprise Virtualization:
  • Directory Services
  • DNS
  • Storage

5.1.2. Creating a New Logical Network in a Data Center or Cluster

Create a logical network and define its use in a data center, or in clusters in a data center.

Procedure 5.1. Creating a New Logical Network in a Data Center or Cluster

  1. Click the Data Centers or Clusters resource tabs, and select a data center or cluster in the results list.
  2. Click the Logical Networks tab of the details pane to list the existing logical networks.
    • From the Data Centers details pane, click New to open the New Logical Network window.
    • From the Clusters details pane, click Add Network to open the New Logical Network window.
  3. Enter a Name, Description, and Comment for the logical network.
  4. Optionally select the Create on external provider check box. Select the External Provider from the drop-down list and provide the IP address of the Physical Network.
    If Create on external provider is selected, the Network Label, VM Network, and MTU options are disabled.
  5. Enter a new label or select an existing label for the logical network in the Network Label text field.
  6. Optionally enable Enable VLAN tagging.
  7. Optionally disable VM Network.
  8. Set the MTU value to Default (1500) or Custom.
  9. From the Cluster tab, select the clusters to which the network will be assigned. You can also specify whether the logical network will be a required network.
  10. If Create on external provider is selected, the Subnet tab will be visible. From the Subnet tab, select Create subnet, then enter a Name, CIDR, and Gateway address, and select an IP Version for the subnet that the logical network will provide. You can also add DNS servers as required.
  11. From the vNIC Profiles tab, add vNIC profiles to the logical network as required.
  12. Click OK.
You have defined a logical network as a resource required by a cluster or clusters in the data center. If you entered a label for the logical network, it will be automatically added to all host network interfaces with that label.

Note

When creating a new logical network or making changes to an existing logical network that is used as a display network, any running virtual machines that use that network must be rebooted before the network becomes available or the changes are applied.

5.1.3. Editing a Logical Network

Edit the settings of a logical network.

Procedure 5.2. Editing a Logical Network

Important

A logical network cannot be edited or moved to another interface if it is not synchronized with the network configuration on the host. See Section 5.5.2, “Editing Host Network Interfaces and Assigning Logical Networks to Hosts” on how to synchronize your networks.
  1. Click the Data Centers resource tab, and select the data center of the logical network in the results list.
  2. Click the Logical Networks tab in the details pane to list the logical networks in the data center.
  3. Select a logical network and click Edit to open the Edit Logical Network window.
  4. Edit the necessary settings.
  5. Click OK to save the changes.

Note

Multi-host network configuration is available on data centers with a compatibility version of 3.1 or higher, and automatically applies updated network settings to all of the hosts within the data center to which the network is assigned. Changes can only be applied when virtual machines using the network are down. You cannot rename a logical network that is already configured on a host. You cannot disable the VM Network option while virtual machines or templates using that network are running.

5.1.4. Removing a Logical Network

You can remove a logical network from the Networks resource tab or the Data Centers resource tab. The following procedure shows you how to remove logical networks associated to a data center. For a working Red Hat Enterprise Virtualization environment, you must have at least one logical network used as the ovirtmgmt management network.

Procedure 5.3. Removing Logical Networks

  1. Click the Data Centers resource tab, and select the data center of the logical network in the results list.
  2. Click the Logical Networks tab in the details pane to list the logical networks in the data center.
  3. Select a logical network and click Remove to open the Remove Logical Network(s) window.
  4. Optionally, select the Remove external network(s) from the provider(s) as well check box to remove the logical network both from the Manager and from the external provider if the network is provided by an external provider.
  5. Click OK.
The logical network is removed from the Manager and is no longer available.

5.1.5. Viewing or Editing the Gateway for a Logical Network

Users can define the gateway, along with the IP address and subnet mask, for a logical network. This is necessary when multiple networks exist on a host and traffic should be routed through the specified network, rather than the default gateway.
If multiple networks exist on a host and the gateways are not defined, return traffic will be routed through the default gateway, which may not reach the intended destination. This would result in users being unable to ping the host.
Red Hat Enterprise Virtualization handles multiple gateways automatically whenever an interface goes up or down.
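To see which gateway a host will actually use for return traffic, you can inspect its routing table directly. This is a generic Linux check with example addresses and device names; the per-network gateways defined in the following procedure appear here once applied.

    # Show the host routing table; the 'default via' entry is the default
    # gateway used when no per-network gateway has been defined.
    ip route show

    # Example output (addresses and device names are illustrative):
    #   default via 192.168.0.1 dev ovirtmgmt
    #   10.10.10.0/24 dev storage proto kernel scope link src 10.10.10.15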

Procedure 5.4. Viewing or Editing the Gateway for a Logical Network

  1. Click the Hosts resource tab, and select the desired host.
  2. Click the Network Interfaces tab in the details pane to list the network interfaces attached to the host and their configurations.
  3. Click the Setup Host Networks button to open the Setup Host Networks window.
  4. Hover your cursor over an assigned logical network and click the pencil icon to open the Edit Management Network window.
The Edit Management Network window displays the network name, the boot protocol, and the IP, subnet mask, and gateway addresses. The address information can be manually edited by selecting a Static boot protocol.

5.1.6. Explanation of Settings and Controls in the New Logical Network and Edit Logical Network Windows

5.1.6.1. Logical Network General Settings Explained
The table below describes the settings for the General tab of the New Logical Network and Edit Logical Network window.
Table 5.1. New Logical Network and Edit Logical Network Settings
Field Name
Description
Name
The name of the logical network. This text field has a 15-character limit and must be a unique name with any combination of uppercase and lowercase letters, numbers, hyphens, and underscores.
Description
The description of the logical network. This text field has a 40-character limit.
Comment
A field for adding plain text, human-readable comments regarding the logical network.
Create on external provider
Allows you to create the logical network on an OpenStack Networking instance that has been added to the Manager as an external provider.
External Provider - Allows you to select the external provider on which the logical network will be created.
Enable VLAN tagging
VLAN tagging is a security feature that gives all network traffic carried on the logical network a special characteristic. VLAN-tagged traffic cannot be read by interfaces that do not also have that characteristic. Use of VLANs on logical networks also allows a single network interface to be associated with multiple, differently VLAN-tagged logical networks. Enter a numeric value in the text entry field if VLAN tagging is enabled.
VM Network
Select this option if only virtual machines use this network. If the network is used for traffic that does not involve virtual machines, such as storage communications, do not select this check box.
MTU
Choose either Default, which sets the maximum transmission unit (MTU) to the value shown in parentheses, or Custom to set a custom MTU for the logical network. You can use this to match the MTU supported by your new logical network to the MTU supported by the hardware it interfaces with. Enter a numeric value in the text entry field if Custom is selected.
Network Label
Allows you to specify a new label for the network or select from existing labels already attached to host network interfaces. If you select an existing label, the logical network will be automatically assigned to all host network interfaces with that label.
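Once a logical network configured in this window is attached to a host network interface, the VLAN tag and MTU can be confirmed from a shell on the host; eth0.100 is an example VLAN device name and depends on your host configuration.
# ip -d link show eth0.100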
5.1.6.2. Logical Network Cluster Settings Explained
The table below describes the settings for the Cluster tab of the New Logical Network window.
Table 5.2. New Logical Network Settings
Field Name
Description
Attach/Detach Network to/from Cluster(s)
Allows you to attach or detach the logical network from clusters in the data center and specify whether the logical network will be a required network for individual clusters.
Name - the name of the cluster to which the settings will apply. This value cannot be edited.
Attach All - Allows you to attach or detach the logical network to or from all clusters in the data center. Alternatively, select or clear the Attach check box next to the name of each cluster to attach or detach the logical network to or from a given cluster.
Required All - Allows you to specify whether the logical network is a required network on all clusters. Alternatively, select or clear the Required check box next to the name of each cluster to specify whether the logical network is a required network for a given cluster.
5.1.6.3. Logical Network vNIC Profiles Settings Explained
The table below describes the settings for the vNIC Profiles tab of the New Logical Network window.
Table 5.3. New Logical Network Settings
Field Name
Description
vNIC Profiles
Allows you to specify one or more vNIC profiles for the logical network. You can add or remove a vNIC profile to or from the logical network by clicking the plus or minus button next to the vNIC profile. The first field is for entering a name for the vNIC profile.
Public - Allows you to specify whether the profile is available to all users.
QoS - Allows you to apply a network quality of service (QoS) profile to the vNIC profile.

5.1.7. Designate a Specific Traffic Type for a Logical Network with the Manage Networks Window

Specify the traffic type for the logical network to optimize the network traffic flow.

Procedure 5.5. Specifying Traffic Types for Logical Networks

  1. Click the Clusters resource tab, and select a cluster from the results list.
  2. Select the Logical Networks tab in the details pane to list the logical networks assigned to the cluster.
  3. Click Manage Networks to open the Manage Networks window.
    The Manage Networks window

    Figure 5.1. Manage Networks

  4. Select appropriate check boxes.
  5. Click OK to save the changes and close the window.
You have optimized the network traffic flow by assigning a specific type of traffic to be carried on a specific logical network.

Note

Logical networks offered by external providers must be used as virtual machine networks; they cannot be assigned special cluster roles such as display or migration.

5.1.8. Explanation of Settings in the Manage Networks Window

The table below describes the settings for the Manage Networks window.
Table 5.4. Manage Networks Settings
Field
Description/Action
Assign
Assigns the logical network to all hosts in the cluster.
Required
A network marked "required" must remain operational in order for the hosts associated with it to function properly. If a required network ceases to function, any hosts associated with it become non-operational.
VM Network
A logical network marked "VM Network" carries network traffic relevant to the virtual machine network.
Display Network
A logical network marked "Display Network" carries network traffic relevant to SPICE and to the virtual network controller.
Migration Network
A logical network marked "Migration Network" carries virtual machine and storage migration traffic.

5.1.9. Editing the Virtual Function Configuration on a NIC

Single Root I/O Virtualization (SR-IOV) enables a single PCIe endpoint to be used as multiple separate devices. This is achieved through the introduction of two PCIe functions: physical functions (PFs) and virtual functions (VFs). A PCIe card can have between one and eight PFs, but each PF can support many more VFs (dependent on the device).
You can edit the configuration of SR-IOV-capable Network Interface Controllers (NICs) through the Red Hat Enterprise Virtualization Manager, including the number of VFs on each NIC and the virtual networks that are allowed to access the VFs.
Once VFs have been created, each can be treated as a standalone NIC. This includes assigning one or more logical networks to them, creating bonded interfaces with them, and directly assigning vNICs to them for direct device passthrough.
A vNIC must have the passthrough property enabled in order to be directly attached to a VF. See Section 5.2.4, “Enabling Passthrough on a vNIC Profile”.
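Before editing the virtual function configuration, you can optionally confirm from a shell on the host that a NIC supports SR-IOV and how many VFs it can expose. The device name enp5s0f0 is an example, and the sriov_totalvfs attribute is only present for SR-IOV-capable devices on sufficiently recent kernels.
# lspci | grep -i ethernet
# cat /sys/class/net/enp5s0f0/device/sriov_totalvfs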

Procedure 5.6. Editing the Virtual Function Configuration on a NIC

  1. Select an SR-IOV-capable host and click the Network Interfaces tab in the details pane.
  2. Click Setup Host Networks to open the Setup Host Networks window.
  3. Select an SR-IOV-capable NIC, marked with the SR-IOV icon, and click the pencil icon to open the Edit Virtual Functions (SR-IOV) configuration of NIC window.
  4. To edit the number of virtual functions, click the Number of VFs setting drop-down button and edit the Number of VFs text field.

    Important

    Changing the number of VFs will delete all previous VFs on the network interface before creating new VFs. This includes any VFs that have virtual machines directly attached.
  5. The All Networks check box is selected by default, allowing all networks to access the virtual functions. To specify the virtual networks allowed to access the virtual functions, select the Specific networks radio button to list all networks. You can then either select the check box for desired networks, or you can use the Labels text field to automatically select networks based on one or more network labels.
  6. Click OK to close the window. Note that the configuration changes will not take effect until you click the OK button in the Setup Host Networks window.
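After the changes are applied from the Setup Host Networks window, you can optionally confirm from a shell on the host that the requested number of VFs exists; enp5s0f0 is an example physical function device name, and VF entries typically appear both in the lspci output and as vf lines in the ip link output.
# lspci | grep -i "virtual function"
# ip link show enp5s0f0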

5.2. Virtual Network Interface Cards

5.2.1. vNIC Profile Overview

A Virtual Network Interface Card (vNIC) profile is a collection of settings that can be applied to individual virtual network interface cards in the Manager. A vNIC profile allows you to apply Network QoS profiles to a vNIC, enable or disable port mirroring, and add or remove custom properties. A vNIC profile also offers an added layer of administrative flexibility in that permission to use (consume) these profiles can be granted to specific users. In this way, you can control the quality of service that different users receive from a given network.

5.2.2. Creating or Editing a vNIC Profile

Create or edit a Virtual Network Interface Controller (vNIC) profile to regulate network bandwidth for users and groups.

Note

If you are enabling or disabling port mirroring, all virtual machines using the associated profile must be in a down state before editing.

Procedure 5.7. Creating or editing a vNIC Profile

  1. Click the Networks resource tab, and select a logical network in the results list.
  2. Select the vNIC Profiles tab in the details pane. If you selected the logical network in tree mode, you can select the vNIC Profiles tab in the results list.
  3. Click New or Edit to open the VM Interface Profile window.
    The VM Interface Profile window

    Figure 5.2. The VM Interface Profile window

  4. Enter the Name and Description of the profile.
  5. Select the relevant Quality of Service policy from the QoS list.
  6. Select the Passthrough check box to enable passthrough of the vNIC and allow direct device assignment of a virtual function. Enabling the passthrough property will disable QoS and port mirroring as these are not compatible. For more information on passthrough, see Section 5.2.4, “Enabling Passthrough on a vNIC Profile”.
  7. Use the Port Mirroring and Allow all users to use this Profile check boxes to toggle these options.
  8. Select a custom property from the custom properties list, which displays Please select a key... by default. Use the + and - buttons to add or remove custom properties.
  9. Click OK.
You have created a vNIC profile. Apply this profile to users and groups to regulate their network bandwidth. Note that if you edited a vNIC profile, you must either restart the virtual machine or hot unplug and then hot plug the vNIC.

Note

The guest operating system must support vNIC hot plug and hot unplug.

5.2.3. Explanation of Settings in the VM Interface Profile Window

Table 5.5. VM Interface Profile Window
Field Name
Description
Network
A drop-down menu of the networks to which the vNIC profile can be applied.
Name
The name of the vNIC profile. This must be a unique name with any combination of uppercase and lowercase letters, numbers, hyphens, and underscores between 1 and 50 characters.
Description
The description of the vNIC profile. This field is recommended but not mandatory.
QoS
A drop-down menu of the available Network Quality of Service policies to apply to the vNIC profile. QoS policies regulate inbound and outbound network traffic of the vNIC.
Passthrough
A check box to toggle the passthrough property. Passthrough allows a vNIC to connect directly to a virtual function of a host NIC. The passthrough property cannot be edited if the vNIC profile is attached to a virtual machine.
Both QoS and port mirroring are disabled in the vNIC profile if passthrough is enabled.
Port Mirroring
A check box to toggle port mirroring. Port mirroring copies layer 3 network traffic on the logical network to a virtual interface on a virtual machine. It is not selected by default. For further details, see Port Mirroring in the Technical Reference.
Device Custom Properties
A drop-down menu to select available custom properties to apply to the vNIC profile. Use the + and - buttons to add and remove properties respectively.
Allow all users to use this Profile
A check box to toggle the availability of the profile to all users in the environment. It is selected by default.

5.2.4. Enabling Passthrough on a vNIC Profile

The passthrough property of a vNIC profile enables a vNIC to be directly connected to a virtual function (VF) of an SR-IOV-enabled NIC. The vNIC will then bypass the software network virtualization and connect directly to the VF for direct device assignment.
The passthrough property cannot be enabled if the vNIC profile is already attached to a vNIC; this procedure creates a new profile to avoid this. If a vNIC profile has passthrough enabled, QoS and port mirroring are disabled for the profile.
For more information on SR-IOV, direct device assignment, and the hardware considerations for implementing these in Red Hat Enterprise Virtualization, see Hardware Considerations for Implementing SR-IOV.

Procedure 5.8. Enabling Passthrough

  1. Select a logical network from the Networks results list and click the vNIC Profiles tab in the details pane to list all vNIC profiles for that logical network.
  2. Click New to open the VM Interface Profile window.
  3. Enter the Name and Description of the profile.
  4. Select the Passthrough check box. This will disable QoS and Port Mirroring.
  5. If necessary, select a custom property from the custom properties list, which displays Please select a key... by default. Use the + and - buttons to add or remove custom properties.
  6. Click OK to save the profile and close the window.
The vNIC profile is now passthrough-capable. To use this profile to directly attach a virtual machine to a NIC or PCI VF, attach the logical network to the NIC and create a new vNIC on the desired virtual machine that uses the passthrough vNIC profile. For more information on these procedures respectively, see Section 5.5.2, “Editing Host Network Interfaces and Assigning Logical Networks to Hosts”, and Adding a New Network Interface in the Virtual Machine Management Guide.

5.2.5. Removing a vNIC Profile

Remove a vNIC profile to delete it from your virtualized environment.

Procedure 5.9. Removing a vNIC Profile

  1. Click the Networks resource tab, and select a logical network in the results list.
  2. Select the vNIC Profiles tab in the details pane to display the available vNIC profiles. If you selected the logical network in tree mode, you can select the vNIC Profiles tab in the results list.
  3. Select one or more profiles and click Remove to open the Remove VM Interface Profile(s) window.
  4. Click OK to remove the profile and close the window.

5.2.6. Assigning Security Groups to vNIC Profiles

Note

This feature is only available for users who are integrating with OpenStack Neutron. Security groups cannot be created with Red Hat Enterprise Virtualization Manager. You must create security groups within OpenStack. For more information, see the Red Hat Enterprise Linux OpenStack Platform Administration Guide, available at https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/.
You can assign security groups to the vNIC profile of networks that have been imported from an OpenStack Networking instance and that use the Open vSwitch plug-in. A security group is a collection of strictly enforced rules that allow you to filter inbound and outbound traffic over a network interface. The following procedure outlines how to attach a security group to a vNIC profile.

Note

A security group is identified using the ID of that security group as registered in the OpenStack Networking instance. You can find the IDs of security groups for a given tenant by running the following command on the system on which OpenStack Networking is installed:
# neutron security-group-list

Procedure 5.10. Assigning Security Groups to vNIC Profiles

  1. Click the Networks tab and select a logical network from the results list.
  2. Click the vNIC Profiles tab in the details pane.
  3. Click New, or select an existing vNIC profile and click Edit, to open the VM Interface Profile window.
  4. From the custom properties drop-down list, select SecurityGroups. Leaving the custom property drop-down blank applies the default security settings, which permit all outbound traffic and intercommunication but deny all inbound traffic from outside of the default security group. Note that removing the SecurityGroups property later will not affect the applied security group.
  5. In the text field, enter the ID of the security group to attach to the vNIC profile.
  6. Click OK.
You have attached a security group to the vNIC profile. All traffic through the logical network to which that profile is attached will be filtered in accordance with the rules defined for that security group.
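To review the rules that will be enforced, you can inspect the security group on the system on which OpenStack Networking is installed; replace SECURITY_GROUP_ID with the ID entered in the vNIC profile.
# neutron security-group-show SECURITY_GROUP_ID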

5.2.7. User Permissions for vNIC Profiles

Configure user permissions to assign users to certain vNIC profiles. Assign the VnicProfileUser role to a user to enable them to use the profile. Restrict users from certain profiles by removing their permission for that profile.

Procedure 5.11. User Permissions for vNIC Profiles

  1. Click the Networks tab and select a logical network from the results list.
  2. Select the vNIC Profiles resource tab to display the vNIC profiles.
  3. Select the Permissions tab in the details pane to show the current user permissions for the profile.
  4. Use the Add button to open the Add Permission to User window, and the Remove button to open the Remove Permission window, to affect user permissions for the vNIC profile.
You have configured user permissions for a vNIC profile.

5.2.8. Configuring vNIC Profiles for UCS Integration

Cisco's Unified Computing System (UCS) is used to manage data center resources such as computing, networking, and storage.
The vdsm-hook-vmfex-dev hook allows virtual machines to connect to Cisco's UCS-defined port profiles by configuring the vNIC profile. The UCS-defined port profiles contain the properties and settings used to configure virtual interfaces in UCS. The vdsm-hook-vmfex-dev hook is installed by default with VDSM. See Appendix A, VDSM and Hooks for more information.
When a virtual machine that uses the vNIC profile is created, it will use the Cisco vNIC.
The procedure to configure the vNIC profile for UCS integration involves first configuring a custom device property. When you configure the custom device property, any existing value it contained is overwritten. Therefore, when combining new and existing custom properties, include all of the custom properties in the command used to set the key's value, separating multiple custom properties with a semicolon, as in the example below.
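For example, assuming a hypothetical existing custom property named example_prop, a combined command that preserves it alongside vmfex might look like the following; the example_prop key and its regular expression are placeholders only and are not part of the UCS configuration.
# engine-config -s CustomDeviceProperties='{type=interface;prop={vmfex=^[a-zA-Z0-9_.-]{2,32}$;example_prop=^.*$}}' --cver=3.6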

Note

A UCS port profile must be configured in Cisco UCS before configuring the vNIC profile.

Procedure 5.12. Configuring the Custom Device Property

  1. On the Red Hat Enterprise Virtualization Manager, configure the vmfex custom property and set the cluster compatibility level using --cver.
    # engine-config -s CustomDeviceProperties='{type=interface;prop={vmfex=^[a-zA-Z0-9_.-]{2,32}$}}' --cver=3.6
    
  2. Verify that the vmfex custom device property was added.
    # engine-config -g CustomDeviceProperties
    
  3. Restart the engine.
    # service ovirt-engine restart
    
The vNIC profile to configure can belong to a new or existing logical network. See Section 5.1.2, “Creating a New Logical Network in a Data Center or Cluster” for instructions to configure a new logical network.

Procedure 5.13. Configuring a vNIC Profile for UCS Integration

  1. Click the Networks resource tab, and select a logical network in the results list.
  2. Select the vNIC Profiles tab in the details pane. If you selected the logical network in tree mode, you can select the vNIC Profiles tab in the results list.
  3. Click New or Edit to open the VM Interface Profile window.
  4. Enter the Name and Description of the profile.
  5. Select the vmfex custom property from the custom properties list and enter the UCS port profile name.
  6. Click OK.

5.3. External Provider Networks

5.3.1. Importing Networks From External Providers

If an external provider offering networking services has been registered in the Manager, the networks provided by that provider can be imported into the Manager and used by virtual machines.

Procedure 5.14. Importing a Network From an External Provider

  1. Click the Networks tab.
  2. Click the Import button to open the Import Networks window.
    The Import Networks Window

    Figure 5.3. The Import Networks Window

  3. From the Network Provider drop-down list, select an external provider. The networks offered by that provider are automatically discovered and listed in the Provider Networks list.
  4. Using the check boxes, select the networks to import in the Provider Networks list and click the down arrow to move those networks into the Networks to Import list.
  5. It is possible to customize the name of the network that you are importing. To customize the name, click on the network's name in the Name column, and change the text.
  6. From the Data Center drop-down list, select the data center into which the networks will be imported.
  7. Optionally, clear the Allow All check box for a network in the Networks to Import list to prevent that network from being available to all users.
  8. Click the Import button.
The selected networks are imported into the target data center and can now be used in the Manager.
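As an optional cross-check, you can list the networks that the provider offers directly on the system on which OpenStack Networking is installed and compare the result with the networks discovered by the Manager:
# neutron net-list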

Important

External provider discovery and importing are Technology Preview features. Technology Preview features are not fully supported under Red Hat Subscription Service Level Agreements (SLAs), may not be functionally complete, and are not intended for production use. However, these features provide early access to upcoming product innovations, enabling customers to test functionality and provide feedback during the development process.

5.3.2. Limitations to Using External Provider Networks

The following limitations apply to using logical networks imported from an external provider in a Red Hat Enterprise Virtualization environment.
  • Logical networks offered by external providers must be used as virtual machine networks, and cannot be used as display networks.
  • The same logical network can be imported more than once, but only to different data centers.
  • You cannot edit logical networks offered by external providers in the Manager. To edit the details of a logical network offered by an external provider, you must edit the logical network directly from the OpenStack Networking instance that provides that logical network.
  • Port mirroring is not available for virtual network interface cards connected to logical networks offered by external providers.
  • If a virtual machine uses a logical network offered by an external provider, that provider cannot be deleted from the Manager while the logical network is still in use by the virtual machine.
  • Networks offered by external providers are non-required. As such, scheduling for clusters in which such logical networks have been imported will not take those logical networks into account during host selection. Moreover, it is the responsibility of the user to ensure the availability of the logical network on hosts in clusters in which such logical networks have been imported.

Important

Logical networks imported from external providers are only compatible with Red Hat Enterprise Linux hosts and cannot be assigned to virtual machines running on Red Hat Enterprise Virtualization Hypervisor hosts.

Important

External provider discovery and importing are Technology Preview features. Technology Preview features are not fully supported under Red Hat Subscription Service Level Agreements (SLAs), may not be functionally complete, and are not intended for production use. However, these features provide early access to upcoming product innovations, enabling customers to test functionality and provide feedback during the development process.

5.3.3. Configuring Subnets on External Provider Logical Networks

5.3.3.1. Configuring Subnets on External Provider Logical Networks
A logical network provided by an external provider can only assign IP addresses to virtual machines if one or more subnets have been defined on that logical network. If no subnets are defined, virtual machines will not be assigned IP addresses. If there is one subnet, virtual machines will be assigned an IP address from that subnet, and if there are multiple subnets, virtual machines will be assigned an IP address from any of the available subnets. The DHCP service provided by the Neutron instance on which the logical network is hosted is responsible for assigning these IP addresses.
While the Red Hat Enterprise Virtualization Manager automatically discovers predefined subnets on imported logical networks, you can also add or remove subnets to or from logical networks from within the Manager.
5.3.3.2. Adding Subnets to External Provider Logical Networks
Create a subnet on a logical network provided by an external provider.

Procedure 5.15. Adding Subnets to External Provider Logical Networks

  1. Click the Networks tab.
  2. Click the logical network provided by an external provider to which the subnet will be added.
  3. Click the Subnets tab in the details pane.
  4. Click the New button to open the New External Subnet window.
    The New External Subnet Window

    Figure 5.4. The New External Subnet Window

  5. Enter a Name and CIDR for the new subnet.
  6. From the IP Version drop-down menu, select either IPv4 or IPv6.
  7. Click OK.
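The subnet created through the Manager corresponds to a subnet defined on the OpenStack Networking instance. As an optional alternative, an equivalent subnet can be created directly with the neutron client on the system on which OpenStack Networking is installed; the network name, subnet name, and CIDR below are examples only.
# neutron subnet-create --name example_subnet --ip-version 4 example_network 10.10.10.0/24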
5.3.3.3. Removing Subnets from External Provider Logical Networks
Remove a subnet from a logical network provided by an external provider.

Procedure 5.16. Removing Subnets from External Provider Logical Networks

  1. Click the Networks tab.
  2. Click the logical network provided by an external provider from which the subnet will be removed.
  3. Click the Subnets tab in the details pane.
  4. Click the subnet to remove.
  5. Click the Remove button and click OK when prompted.

5.4. Logical Networks and Permissions

5.4.1. Managing System Permissions for a Network

As the SuperUser, the system administrator manages all aspects of the Administration Portal. More specific administrative roles can be assigned to other users. These restricted administrator roles are useful for granting a user administrative privileges that limit them to a specific resource. For example, a DataCenterAdmin role has administrator privileges only for the assigned data center with the exception of the storage for that data center, and a ClusterAdmin has administrator privileges only for the assigned cluster.
A network administrator is a system administration role that can be applied for a specific network, or for all networks on a data center, cluster, host, virtual machine, or template. A network user can perform limited administration tasks, such as viewing and attaching networks on a specific virtual machine or template. You can use the Configure button in the header bar to assign a network administrator for all networks in the environment.
The network administrator role permits the following actions:
  • Create, edit and remove networks.
  • Edit the configuration of the network, including configuring port mirroring.
  • Attach and detach networks from resources including clusters and virtual machines.
The user who creates a network is automatically assigned NetworkAdmin permissions on the created network. You can also change the administrator of a network by removing the existing administrator and adding the new administrator.

5.4.2. Network Administrator and User Roles Explained

Network Permission Roles

The table below describes the administrator and user roles and privileges applicable to network administration.

Table 5.6. Red Hat Enterprise Virtualization Network Administrator and User Roles
Role Privileges Notes
NetworkAdmin Network Administrator for data center, cluster, host, virtual machine, or template. The user who creates a network is automatically assigned NetworkAdmin permissions on the created network. Can configure and manage the network of a particular data center, cluster, host, virtual machine, or template. A network administrator of a data center or cluster inherits network permissions for virtual pools within the cluster. To configure port mirroring on a virtual machine network, apply the NetworkAdmin role on the network and the UserVmManager role on the virtual machine.
VnicProfileUser Logical network and network interface user for virtual machine and template. Can attach or detach network interfaces from specific logical networks.

5.4.3. Assigning an Administrator or User Role to a Resource

Assign administrator or user roles to resources to allow users to access or manage that resource.

Procedure 5.17. Assigning a Role to a Resource

  1. Use the resource tabs, tree mode, or the search function to find and select the resource in the results list.
  2. Click the Permissions tab in the details pane to list the assigned users, the user's role, and the inherited permissions for the selected resource.
  3. Click Add.
  4. Enter the name or user name of an existing user into the Search text box and click Go. Select a user from the resulting list of possible matches.
  5. Select a role from the Role to Assign: drop-down list.
  6. Click OK.
You have assigned a role to a user; the user now has the inherited permissions of that role enabled for that resource.

5.4.4. Removing an Administrator or User Role from a Resource

Remove an administrator or user role from a resource; the user loses the inherited permissions associated with the role for that resource.

Procedure 5.18. Removing a Role from a Resource

  1. Use the resource tabs, tree mode, or the search function to find and select the resource in the results list.
  2. Click the Permissions tab in the details pane to list the assigned users, the user's role, and the inherited permissions for the selected resource.
  3. Select the user to remove from the resource.
  4. Click Remove. The Remove Permission window opens to confirm permissions removal.
  5. Click OK.
You have removed the user's role, and the associated permissions, from the resource.

5.5. Hosts and Networking

5.5.1. Refreshing Host Capabilities

When a network interface card is added to a host, the capabilities of the host must be refreshed to display that network interface card in the Manager.

Procedure 5.19. To Refresh Host Capabilities

  1. Use the resource tabs, tree mode, or the search function to find and select a host in the results list.
  2. Click the Refresh Capabilities button.
The list of network interface cards in the Network Interfaces tab of the details pane for the selected host is updated. Any new network interface cards can now be used in the Manager.

5.5.2. Editing Host Network Interfaces and Assigning Logical Networks to Hosts

You can change the settings of physical host network interfaces, move the management network from one physical host network interface to another, and assign logical networks to physical host network interfaces. Bridge and ethtool custom properties are also supported.

Important

You cannot assign logical networks offered by external providers to physical host network interfaces; such networks are dynamically assigned to hosts as they are required by virtual machines.

Procedure 5.20. Editing Host Network Interfaces and Assigning Logical Networks to Hosts

  1. Click the Hosts resource tab, and select the desired host.
  2. Click the Network Interfaces tab in the details pane.
  3. Click the Setup Host Networks button to open the Setup Host Networks window.
  4. Attach a logical network to a physical host network interface by selecting and dragging the logical network into the Assigned Logical Networks area next to the physical host network interface.
    Alternatively, right-click the logical network and select a network interface from the drop-down menu.
  5. Configure the logical network:
    1. Hover your cursor over an assigned logical network and click the pencil icon to open the Edit Management Network window.
    2. Select a Boot Protocol from None, DHCP, or Static. If you selected Static, enter the IP, Netmask / Routing Prefix, and the Gateway.

      Note

      Each logical network can have a separate gateway defined from the management network gateway. This ensures traffic that arrives on the logical network will be forwarded using the logical network's gateway instead of the default gateway used by the management network.
    3. To override the default host network quality of service, select Override QoS and enter the desired values in the following fields:
      • Weighted Share: Signifies how much of the logical link's capacity a specific network should be allocated, relative to the other networks attached to the same logical link. The exact share depends on the sum of shares of all networks on that link. By default this is a number in the range 1-100.
      • Rate Limit [Mbps]: The maximum bandwidth to be used by a network.
      • Committed Rate [Mbps]: The minimum bandwidth required by a network. The Committed Rate requested is not guaranteed and will vary depending on the network infrastructure and the Committed Rate requested by other networks on the same logical link.
      For more information on configuring host network quality of service, see Section 2.3, “Host Network Quality of Service”.
    4. To configure a network bridge, click the Custom Properties drop-down menu and select bridge_opts. Enter a valid key and value with the following syntax: [key]=[value]. Separate multiple entries with a whitespace character. The following keys are valid, with the values provided as examples. For more information on these parameters, see Section B.1, “Explanation of bridge_opts Parameters”.
      forward_delay=1500 
      gc_timer=3765 
      group_addr=1:80:c2:0:0:0 
      group_fwd_mask=0x0 
      hash_elasticity=4 
      hash_max=512
      hello_time=200 
      hello_timer=70 
      max_age=2000 
      multicast_last_member_count=2 
      multicast_last_member_interval=100 
      multicast_membership_interval=26000 
      multicast_querier=0 
      multicast_querier_interval=25500 
      multicast_query_interval=13000 
      multicast_query_response_interval=1000 
      multicast_query_use_ifaddr=0 
      multicast_router=1 
      multicast_snooping=1 
      multicast_startup_query_count=2 
      multicast_startup_query_interval=3125
    5. To configure ethtool properties, click the Custom Properties drop-down menu and select ethtool_opts. Enter a valid key and value with the following syntax: [key]=[value]. Separate multiple entries with a whitespace character. The ethtool_opts option is not available by default, and you need to add it using the engine configuration tool. See Section B.2, “How to Set Up Red Hat Enterprise Virtualization Manager to Use Ethtool” for more information. See Red Hat Enterprise Linux 6 Deployment Guide or the manual page for more information on ethtool properties.
    6. To configure Fibre Channel over Ethernet (FCoE), click the Custom Properties drop-down menu and select fcoe. Enter a valid key and value with the following syntax: [key]=[value]. At least enable=yes is required. You can also add dcb=[yes|no] and auto_vlan=[yes|no]. Separate multiple entries with a whitespace character. The fcoe option is not available by default; you need to add it using the engine configuration tool. See Section B.3, “How to Set Up Red Hat Enterprise Virtualization Manager to Use FCoE” for more information.

      Note

      A separate, dedicated logical network is recommended for use with FCoE.
    7. If your logical network definition is not synchronized with the network configuration on the host, select the Sync network check box. A logical network cannot be edited or moved to another interface until it is synchronized.

      Note

      Networks are not considered synchronized if they have one of the following conditions:
      • The VM Network is different from the physical host network.
      • The VLAN identifier is different from the physical host network.
      • A Custom MTU is set on the logical network, and is different from the physical host network.
  6. Select the Verify connectivity between Host and Engine check box to check network connectivity; this action will only work if the host is in maintenance mode.
  7. Select the Save network configuration check box to make the changes persistent when the environment is rebooted.
  8. Click OK.

Note

If not all network interface cards for the host are displayed, click the Refresh Capabilities button to update the list of network interface cards available for that host.

5.5.3. Adding Multiple VLANs to a Single Network Interface Using Logical Networks

Multiple VLANs can be added to a single network interface to separate traffic on a single host.

Important

You must have created more than one logical network, all with the Enable VLAN tagging check box selected in the New Logical Network or Edit Logical Network windows.

Procedure 5.21. Adding Multiple VLANs to a Network Interface using Logical Networks

  1. Click the Hosts resource tab, and select in the results list a host associated with the cluster to which your VLAN-tagged logical networks are assigned.
  2. Click the Network Interfaces tab in the details pane to list the physical network interfaces attached to the host.
  3. Click Setup Host Networks to open the Setup Host Networks window.
  4. Drag your VLAN-tagged logical networks into the Assigned Logical Networks area next to the physical network interface. The physical network interface can have multiple logical networks assigned due to the VLAN tagging.
  5. Edit the logical networks by hovering your cursor over an assigned logical network and clicking the pencil icon to open the Edit Network window.
    If your logical network definition is not synchronized with the network configuration on the host, select the Sync network check box.
    Select a Boot Protocol from:
    • None,
    • DHCP, or
    • Static. If you selected Static, provide the IP and Subnet Mask.
    Click OK.
  6. Select the Verify connectivity between Host and Engine check box to run a network check; this will only work if the host is in maintenance mode.
  7. Select the Save network configuration check box.
  8. Click OK.
Add the logical network to each host in the cluster by editing a NIC on each host. After this is done, the network becomes operational.
You have added multiple VLAN-tagged logical networks to a single interface. This process can be repeated multiple times, selecting and editing the same network interface each time on each host to add logical networks with different VLAN tags to a single network interface.
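After the VLAN-tagged networks become operational on a host, you can optionally confirm that the corresponding VLAN devices and identifiers exist by checking the host, assuming the standard 8021q VLAN implementation is in use.
# cat /proc/net/vlan/config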

5.5.4. Adding Network Labels to Host Network Interfaces

Using network labels allows you to greatly simplify the administrative workload associated with assigning logical networks to host network interfaces.

Procedure 5.22. Adding Network Labels to Host Network Interfaces

  1. Click the Hosts resource tab, and select in the results list a host associated with the cluster to which your VLAN-tagged logical networks are assigned.
  2. Click the Network Interfaces tab in the details pane to list the physical network interfaces attached to the host.
  3. Click Setup Host Networks to open the Setup Host Networks window.
  4. Click Labels, and right-click [New Label]. Select a physical network interface to label.
  5. Enter a name for the network label in the Label text field.
  6. Click OK.
You have added a network label to a host network interface. Any newly created logical networks with the same label will be automatically assigned to all host network interfaces with that label. Also, removing a label from a logical network will automatically remove that logical network from all host network interfaces with that label.

5.5.5. Bonds

5.5.5.1. Bonding Logic in Red Hat Enterprise Virtualization
The Red Hat Enterprise Virtualization Manager Administration Portal allows you to create bond devices using a graphical interface. There are several distinct bond creation scenarios, each with its own logic.
Two factors that affect bonding logic are:
  • Are either of the devices already carrying logical networks?
  • Are the devices carrying compatible logical networks?
Table 5.7. Bonding Scenarios and Their Results
Bonding Scenario Result
NIC + NIC
The Create New Bond window is displayed, and you can configure a new bond device.
If the network interfaces carry incompatible logical networks, the bonding operation fails until you detach incompatible logical networks from the devices forming your new bond.
NIC + Bond
The NIC is added to the bond device. Logical networks carried by the NIC and the bond are all added to the resultant bond device if they are compatible.
If the NIC and the bond device carry incompatible logical networks, the bonding operation fails until you detach the incompatible logical networks from the devices forming your new bond.
Bond + Bond
If the bond devices are not attached to logical networks, or are attached to compatible logical networks, a new bond device is created. It contains all of the network interfaces, and carries all logical networks, of the component bond devices. The Create New Bond window is displayed, allowing you to configure your new bond.
If the bond devices carry incompatible logical networks, the bonding operation fails until you detach incompatible logical networks from the devices forming your new bond.
5.5.5.2. Bonds
A bond is an aggregation of multiple network interface cards into a single software-defined device. Because bonded network interfaces combine the transmission capability of the network interface cards included in the bond to act as a single network interface, they can provide greater transmission speed than that of a single network interface card. Also, because all network interface cards in the bond must fail for the bond itself to fail, bonding provides increased fault tolerance. However, one limitation is that the network interface cards that form a bonded network interface must be of the same make and model to ensure that all network interface cards in the bond support the same options and modes.
The packet dispersal algorithm for a bond is determined by the bonding mode used.

Important

Modes 1, 2, 3 and 4 support both virtual machine (bridged) and non-virtual machine (bridgeless) network types. Modes 0, 5 and 6 support non-virtual machine (bridgeless) networks only.
Bonding Modes
Red Hat Enterprise Virtualization uses Mode 4 by default, but supports the following common bonding modes:
Mode 0 (round-robin policy)
Transmits packets through network interface cards in sequential order. Packets are transmitted in a loop that begins with the first available network interface card in the bond and ends with the last available network interface card in the bond. All subsequent loops then start with the first available network interface card. Mode 0 offers fault tolerance and balances the load across all network interface cards in the bond. However, Mode 0 cannot be used in conjunction with bridges, and is therefore not compatible with virtual machine logical networks.
Mode 1 (active-backup policy)
Sets all network interface cards to a backup state while one network interface card remains active. In the event of failure in the active network interface card, one of the backup network interface cards replaces that network interface card as the only active network interface card in the bond. The MAC address of the bond in Mode 1 is visible on only one port to prevent any confusion that might otherwise be caused if the MAC address of the bond changed to reflect that of the active network interface card. Mode 1 provides fault tolerance and is supported in Red Hat Enterprise Virtualization.
Mode 2 (XOR policy)
Selects the network interface card through which to transmit packets based on the result of an XOR operation on the source and destination MAC addresses modulo network interface card slave count. This calculation ensures that the same network interface card is selected for each destination MAC address used. Mode 2 provides fault tolerance and load balancing and is supported in Red Hat Enterprise Virtualization.
Mode 3 (broadcast policy)
Transmits all packets to all network interface cards. Mode 3 provides fault tolerance and is supported in Red Hat Enterprise Virtualization.
Mode 4 (IEEE 802.3ad policy)
Creates aggregation groups in which the interfaces share the same speed and duplex settings. Mode 4 uses all network interface cards in the active aggregation group in accordance with the IEEE 802.3ad specification and is supported in Red Hat Enterprise Virtualization.
Mode 5 (adaptive transmit load balancing policy)
Ensures the distribution of outgoing traffic accounts for the load on each network interface card in the bond and that the current network interface card receives all incoming traffic. If the network interface card assigned to receive traffic fails, another network interface card is assigned to the role of receiving incoming traffic. Mode 5 cannot be used in conjunction with bridges, therefore it is not compatible with virtual machine logical networks.
Mode 6 (adaptive load balancing policy)
Combines Mode 5 (adaptive transmit load balancing policy) with receive load balancing for IPv4 traffic without any special switch requirements. ARP negotiation is used for balancing the receive load. Mode 6 cannot be used in conjunction with bridges, therefore it is not compatible with virtual machine logical networks.
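Once a bond exists on a host, the active bonding mode and the state of each slave can be confirmed from a shell on the host; bond0 is an example bond device name.
# cat /proc/net/bonding/bond0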
5.5.5.3. Creating a Bond Device Using the Administration Portal
You can bond compatible network devices together. This type of configuration can increase available bandwidth and reliability. You can bond multiple network interfaces, pre-existing bond devices, and combinations of the two. A bond can also carry both VLAN tagged and non-VLAN traffic.

Procedure 5.23. Creating a Bond Device using the Administration Portal

  1. Click the Hosts resource tab, and select the host in the results list.
  2. Click the Network Interfaces tab in the details pane to list the physical network interfaces attached to the host.
  3. Click Setup Host Networks to open the Setup Host Networks window.
  4. Select and drag one of the devices over the top of another device and drop it to open the Create New Bond window. Alternatively, right-click the device and select another device from the drop-down menu.
    If the devices are incompatible, the bond operation fails and suggests how to correct the compatibility issue.
  5. Select the Bond Name and Bonding Mode from the drop-down menus.
    Bonding modes 1, 2, 4, and 5 can be selected. Any other mode can be configured using the Custom option.
  6. Click OK to create the bond and close the Create New Bond window.
  7. Assign a logical network to the newly created bond device.
  8. Optionally choose to Verify connectivity between Host and Engine and Save network configuration.
  9. Click OK to accept the changes and close the Setup Host Networks window.
Your network devices are linked into a bond device and can be edited as a single interface. The bond device is listed in the Network Interfaces tab of the details pane for the selected host.
Bonding must be enabled for the ports of the switch used by the host. The process by which bonding is enabled is slightly different for each switch; consult the manual provided by your switch vendor for detailed information on how to enable bonding.
5.5.5.4. Example Uses of Custom Bonding Options with Host Interfaces
You can create customized bond devices by selecting Custom from the Bonding Mode of the Create New Bond window. The following examples should be adapted for your needs. For a comprehensive list of bonding options and their descriptions, see the Linux Ethernet Bonding Driver HOWTO on Kernel.org.

Example 5.1. xmit_hash_policy

This option defines the transmit load balancing policy for bonding modes 2 and 4. For example, if the majority of your traffic is between many different IP addresses, you may want to set a policy to balance by IP address. You can set this load-balancing policy by selecting a Custom bonding mode, and entering the following into the text field:
mode=4 xmit_hash_policy=layer2+3

Example 5.2. ARP Monitoring

The ARP monitor is useful for systems that cannot or do not report link state properly via ethtool. Set an arp_interval on the bond device of the host by selecting a Custom bonding mode, and entering the following into the text field:
mode=1 arp_interval=1 arp_ip_target=192.168.0.2

Example 5.3. Primary

You may want to designate a NIC with higher throughput as the primary interface in a bond device. Designate which NIC is primary by selecting a Custom bonding mode, and entering the following into the text field:
mode=1 primary=eth0

5.5.6. Changing the FQDN of a Host

Use the following procedure to change the fully qualified domain name of hypervisor hosts.

Procedure 5.24. Updating the FQDN of a Hypervisor Host

  1. Place the hypervisor into maintenance mode so the virtual machines are live migrated to another hypervisor. See Section 6.5.7, “Moving a Host to Maintenance Mode” for more information. Alternatively, manually shut down or migrate all the virtual machines to another hypervisor. See Manually Migrating Virtual Machines in the Virtual Machine Management Guide for more information.
  2. Click Remove, and click OK to remove the host from the Administration Portal.
    • For RHEL-based hosts:
      • For Red Hat Enterprise Linux 6:
        Edit the /etc/sysconfig/network file, update the host name, and save.
        # vi /etc/sysconfig/network
        HOSTNAME=NEW_FQDN
      • For Red Hat Enterprise Linux 7:
        Use the hostnamectl tool to update the host name. For more options, see Configure Host Names in the Red Hat Enterprise Linux 7 Networking Guide.
        # hostnamectl set-hostname NEW_FQDN
    • For Red Hat Enterprise Virtualization Hypervisors (RHEV-H):
      In the text user interface, select the Network screen, press the right arrow key and enter a new host name in the Hostname field. Select <Save> and press Enter.
  3. Reboot the host.
  4. Re-register the host with the Manager. See Manually Adding a Hypervisor from the Administration Portal in the Installation Guide for more information.
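Before re-registering the host, you can optionally confirm that the new FQDN took effect after the reboot; hostnamectl is only available on hosts based on Red Hat Enterprise Linux 7.
# hostname -f
# hostnamectl status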

5.5.7. Changing the IP Address of a Red Hat Enterprise Virtualization Hypervisor (RHEV-H)

Procedure 5.25. Changing the IP Address of a Red Hat Enterprise Virtualization Hypervisor (RHEV-H)

  1. Place the Hypervisor into maintenance mode so the virtual machines are live migrated to another hypervisor. See Section 6.5.7, “Moving a Host to Maintenance Mode” for more information. Alternatively, manually shut down or migrate all the virtual machines to another hypervisor. See Manually Migrating Virtual Machines in the Virtual Machine Management Guide for more information.
  2. Click Remove, and click OK to remove the host from the Administration Portal.
  3. Log in to your Hypervisor as the admin user.
  4. Press F2, select OK, and press Enter to enter the rescue shell.
  5. Modify the IP address by editing the /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt file. For example:
    # vi /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt
    ...
    BOOTPROTO=none	
    IPADDR=10.x.x.x
    PREFIX=24
    ...
  6. Restart the network service and verify that the IP address has been updated.
    • For Red Hat Enterprise Linux 6:
      # service network restart
      # ifconfig ovirtmgmt
    • For Red Hat Enterprise Linux 7:
      # systemctl restart network.service
      # ip addr show ovirtmgmt
  7. Type exit to exit the rescue shell and return to the text user interface.
  8. Re-register the host with the Manager. See Manually Adding a Hypervisor from the Administration Portal in the Installation Guide for more information.

Chapter 6. Hosts

6.1. Introduction to Red Hat Enterprise Virtualization Hosts

Red Hat Enterprise Virtualization Hosts, also known as RHEL-based hypervisors, are the physical servers on which virtual machines run. Full virtualization is provided by using a loadable Linux kernel module called Kernel-based Virtual Machine (KVM).
KVM can concurrently host multiple virtual machines running either Windows or Linux operating systems. Virtual machines run as individual Linux processes and threads on the host machine and are managed remotely by the Red Hat Enterprise Virtualization Manager. A Red Hat Enterprise Virtualization environment has one or more hosts attached to it.
Red Hat Enterprise Virtualization supports two methods of installing hosts. You can use the Red Hat Enterprise Virtualization Hypervisor (RHEV-H) installation media, or install hypervisor packages on a standard Red Hat Enterprise Linux installation.
Red Hat Enterprise Virtualization hosts take advantage of tuned profiles, which provide virtualization optimizations. For more information on tuned, see the Red Hat Enterprise Linux 6 Performance Tuning Guide.
The Red Hat Enterprise Virtualization Hypervisor has security features enabled. Security Enhanced Linux (SELinux) and the iptables firewall are fully configured and on by default. The status of SELinux on a selected host is reported under SELinux mode in the General tab of the details pane. The Manager can open required ports on Red Hat Enterprise Linux hosts when it adds them to the environment.
A host is a physical 64-bit server with the Intel VT or AMD-V extensions, running the AMD64/Intel 64 version of Red Hat Enterprise Linux 6.5 or later.
A physical host on the Red Hat Enterprise Virtualization platform:
  • Must belong to only one cluster in the system.
  • Must have CPUs that support the AMD-V or Intel VT hardware virtualization extensions.
  • Must have CPUs that support all functionality exposed by the virtual CPU type selected upon cluster creation.
  • Has a minimum of 2 GB RAM.
  • Can have an assigned system administrator with system permissions.
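You can verify the hardware virtualization requirement listed above from a shell on a prospective host; a non-empty result shows that the CPU exposes the Intel VT (vmx) or AMD-V (svm) flags. Note that the extensions may also need to be enabled in the system firmware.
# grep -E 'svm|vmx' /proc/cpuinfo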
Administrators can receive the latest security advisories from the Red Hat Enterprise Virtualization watch list. Subscribe to the Red Hat Enterprise Virtualization watch list to receive new security advisories for Red Hat Enterprise Virtualization products by email. Subscribe by completing this form:

6.2. Red Hat Enterprise Virtualization Hypervisor Hosts

Red Hat Enterprise Virtualization Hypervisor hosts are installed using a special build of Red Hat Enterprise Linux with only the packages required to host virtual machines. They are stateless, and do not write any changes to disk unless explicitly required to do so.
Red Hat Enterprise Virtualization Hypervisor hosts can be added directly to, and configured by, the Red Hat Enterprise Virtualization Manager. Alternatively a host can be configured locally to connect to the Manager; the Manager then is only used to approve the host to be used in the environment.
Unlike Red Hat Enterprise Linux hosts, Red Hat Enterprise Virtualization Hypervisor hosts cannot be added to clusters that have been enabled for Gluster service for use as Red Hat Gluster Storage nodes.

Important

The Red Hat Enterprise Virtualization Hypervisor is a closed system. Use a Red Hat Enterprise Linux host if additional rpm packages are required for your environment.

Warning

Red Hat strongly recommends not creating untrusted users on Red Hat Enterprise Virtualization Hypervisor hosts, as this can lead to exploitation of local security vulnerabilities.

6.3. Satellite Host Provider Hosts

Hosts provided by a Satellite host provider can also be used as virtualization hosts by the Red Hat Enterprise Virtualization Manager. After a Satellite host provider has been added to the Manager as an external provider, any hosts that it provides can be added to and used in Red Hat Enterprise Virtualization in the same way as Red Hat Enterprise Virtualization Hypervisor hosts and Red Hat Enterprise Linux hosts.

6.4. Red Hat Enterprise Linux Hosts

Red Hat Enterprise Virtualization supports hosts running Red Hat Enterprise Linux Server AMD64/Intel 64 version with Intel VT or AMD-V extensions. For the supported Red Hat Enterprise Linux versions, see Host Compatibility Matrix in the Installation Guide. To use your Red Hat Enterprise Linux machine as a host, you must also attach the Red Hat Enterprise Linux Server entitlement and the Red Hat Enterprise Virtualization entitlement.
Adding a host can take some time, as the following steps are completed by the platform: virtualization checks, installation of packages, creation of a bridge, and a reboot of the host. Use the details pane to monitor the process as the host and management system establish a connection.

Important

Third-party watchdogs should not be installed on Red Hat Enterprise Linux hosts, as they can interfere with the watchdog daemon provided by VDSM.

6.5. Host Tasks

6.5.1. Adding a Satellite Host Provider Host

The process for adding a Satellite host provider host is almost identical to that of adding a Red Hat Enterprise Linux host except for the method by which the host is identified in the Manager. The following procedure outlines how to add a host provided by a Satellite host provider.

Procedure 6.1. Adding a Satellite Host Provider Host

  1. Click the Hosts resource tab to list the hosts in the results list.
  2. Click New to open the New Host window.
  3. Use the drop-down menu to select the Host Cluster for the new host.
  4. Select the Foreman/Satellite check box to display the options for adding a Satellite host provider host and select the provider from which the host is to be added.
  5. Select either Discovered Hosts or Provisioned Hosts.
    • Discovered Hosts (default option): Select the host, host group, and compute resources from the drop-down lists.
    • Provisioned Hosts: Select a host from the Providers Hosts drop-down list.
    Any details regarding the host that can be retrieved from the external provider are automatically set, and can be edited as desired.
  6. Enter the Name, Address, and SSH Port (Provisioned Hosts only) of the new host.
  7. Select an authentication method to use with the host.
    • Enter the root user's password to use password authentication.
    • Copy the key displayed in the SSH PublicKey field to /root/.ssh/authorized_keys on the host to use public key authentication (Provisioned Hosts only). A minimal example of installing the key is shown after this procedure.
  8. You have now completed the mandatory steps to add a Satellite host provider host. Click the Advanced Parameters drop-down button to show the advanced host settings.
    1. Optionally disable automatic firewall configuration.
    2. Optionally add a host SSH fingerprint to increase security. You can add it manually, or fetch it automatically.
  9. You can configure the Power Management, SPM, Console, and Network Provider using the applicable tabs now; however, as these are not fundamental to adding a Satellite host provider host, they are not covered in this procedure.
  10. Click OK to add the host and close the window.
The new host displays in the list of hosts with a status of Installing, and you can view the progress of the installation in the details pane. After installation is complete, the status will update to Reboot. The host must be activated for the status to change to Up.
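If you chose public key authentication in step 7, the key from the SSH PublicKey field must be placed in the root user's authorized_keys file on the host. The following is a minimal sketch of doing this; the key contents shown here are a placeholder that you paste from the field:
    # mkdir -p /root/.ssh
    # chmod 700 /root/.ssh
    # echo '<contents of the SSH PublicKey field>' >> /root/.ssh/authorized_keys
    # chmod 600 /root/.ssh/authorized_keys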

6.5.2. Configuring Satellite Errata Management for a Host

Red Hat Enterprise Virtualization can be configured to view errata from Red Hat Satellite. This enables the host administrator to receive updates about available errata, and their importance, in the same dashboard used to manage host configuration. For more information about Red Hat Satellite see the Red Hat Satellite User Guide.
Red Hat Enterprise Virtualization 3.6 and onwards supports errata management with Red Hat Satellite 6.1.

Important

Hosts are identified in the Satellite server by their FQDN. Hosts added using an IP address will not be able to report errata. This ensures that an external content host ID does not need to be maintained in Red Hat Enterprise Virtualization.
The Satellite account used to manage the host must have Administrator permissions and a default organization set.

Procedure 6.2. Configuring Satellite Errata Management for a Host

  1. Add the Satellite server as an external provider. See Section 11.2.2, “Adding a Red Hat Satellite Instance for Host Provisioning” for more information.
  2. Associate the required host with the Satellite server.

    Note

    The host must be registered to the Satellite server and have the katello-agent package installed.
    For more information on how to configure host registration, see Configuring a Host for Registration in the Red Hat Satellite User Guide. For more information on how to register a host and install the katello-agent package, see Registration in the Red Hat Satellite User Guide. A minimal command-line sketch of this registration is shown after this procedure.
    1. In the Hosts tab, select the host in the results list.
    2. Click Edit to open the Edit Host window.
    3. Check the Use Foreman/Satellite checkbox.
    4. Select the required Satellite server from the drop-down list.
    5. Click OK.
The host is now configured to show the available errata, and their importance, in the same dashboard used to manage host configuration.
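For reference, the following is a minimal sketch of registering a host to a Satellite 6 server and installing the katello-agent package from the command line; satellite.example.com, Example_Org, and rhev-hosts are placeholders for your Satellite server's FQDN, organization label, and activation key:
    # rpm -Uvh http://satellite.example.com/pub/katello-ca-consumer-latest.noarch.rpm
    # subscription-manager register --org="Example_Org" --activationkey="rhev-hosts"
    # yum install katello-agent
See the Red Hat Satellite User Guide for the full registration procedure.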

6.5.3. Explanation of Settings and Controls in the New Host and Edit Host Windows

6.5.3.1. Host General Settings Explained
These settings apply when editing the details of a host or adding new Red Hat Enterprise Linux hosts and Satellite host provider hosts.
The General settings table contains the information required on the General tab of the New Host or Edit Host window.
Table 6.1. General settings
Field Name
Description
Data Center
The data center to which the host belongs. Red Hat Enterprise Virtualization Hypervisor hosts cannot be added to Gluster-enabled clusters.
Host Cluster
The cluster to which the host belongs.
Use Foreman/Satellite
Select or clear this check box to view or hide options for adding hosts provided by Satellite host providers. The following options are also available:
Discovered Hosts

  • Discovered Hosts - A drop-down list that is populated with the names of Satellite hosts discovered by the engine.
  • Host Groups - A drop-down list of available host groups.
  • Compute Resources - A drop-down list of hypervisors to provide compute resources.

Provisioned Hosts

  • Providers Hosts - A drop-down list that is populated with the names of hosts provided by the selected external provider. The entries in this list are filtered in accordance with any search queries that have been input in the Provider search filter.
  • Provider search filter - A text field that allows you to search for hosts provided by the selected external provider. This option is provider-specific; see provider documentation for details on forming search queries for specific providers. Leave this field blank to view all available hosts.

Name
The name of the host. This text field has a 40-character limit and must be a unique name with any combination of uppercase and lowercase letters, numbers, hyphens, and underscores.
Comment
A field for adding plain text, human-readable comments regarding the host.
Address
The IP address or resolvable hostname of the host.
Password
The password of the host's root user. This can only be given when you add the host; it cannot be edited afterwards.
SSH PublicKey
Copy the contents of the text box to the /root/.ssh/authorized_keys file on the host to use the Manager's SSH key instead of a password to authenticate with the host.
Automatically configure host firewall
When adding a new host, the Manager can open the required ports on the host's firewall. This is enabled by default. This is an Advanced Parameter.
Use JSON protocol
Uses the JSON-RPC protocol, rather than XML-RPC, for communication between the Manager and the host. This is enabled by default. This is an Advanced Parameter.
SSH Fingerprint
You can fetch the host's SSH fingerprint, and compare it with the fingerprint you expect the host to return, ensuring that they match. This is an Advanced Parameter.
6.5.3.2. Host Power Management Settings Explained
The Power Management settings table contains the information required on the Power Management tab of the New Host or Edit Host windows.
Table 6.2. Power Management Settings
Field Name
Description
Enable Power Management
Enables power management on the host. Select this check box to enable the rest of the fields in the Power Management tab.
Kdump integration
Prevents the host from fencing while performing a kernel crash dump, so that the crash dump is not interrupted. From Red Hat Enterprise Linux 6.6 and 7.1 onwards, Kdump is available by default. If kdump is available on the host, but its configuration is not valid (the kdump service cannot be started), enabling Kdump integration will cause the host (re)installation to fail. If this is the case, see Section 6.6.4, “fence_kdump Advanced Configuration”.
Disable policy control of power management
Power management is controlled by the Scheduling Policy of the host's cluster. If power management is enabled and the defined low utilization value is reached, the Manager will power down the host machine, and restart it again when required by load balancing or when there are not enough free hosts in the cluster. Select this check box to disable policy control.
Agents by Sequential Order
Lists the host's fence agents. Fence agents can be sequential, concurrent, or a mix of both.
  • If fence agents are used sequentially, the primary agent is used first to stop or start a host, and if it fails, the secondary agent is used.
  • If fence agents are used concurrently, both fence agents have to respond to the Stop command for the host to be stopped; if one agent responds to the Start command, the host will go up.
Fence agents are sequential by default. Use the up and down buttons to change the sequence in which the fence agents are used.
To make two fence agents concurrent, select one fence agent from the Concurrent with drop-down list next to the other fence agent. Additional fence agents can be added to the group of concurrent fence agents by selecting the group from the Concurrent with drop-down list next to the additional fence agent.
Add Fence Agent
Click the plus (+) button to add a new fence agent. The Edit fence agent window opens. See the table below for more information on the fields in this window.
Power Management Proxy Preference
By default, specifies that the Manager will search for a fencing proxy within the same cluster as the host, and if no fencing proxy is found, the Manager will search in the same dc (data center). Use the up and down buttons to change the sequence in which these resources are used. This field is available under Advanced Parameters.
The following table contains the information required in the Edit fence agent window.
Table 6.3. Edit fence agent Settings
Field Name
Description
Address
The address to access your host's power management device. Either a resolvable hostname or an IP address.
User Name
User account with which to access the power management device. You can set up a user on the device, or use the default user.
Password
Password for the user accessing the power management device.
Type
The type of power management device in your host.
Choose one of the following:
  • apc - APC MasterSwitch network power switch. Not for use with APC 5.x power switch devices.
  • apc_snmp - Use with APC 5.x power switch devices.
  • bladecenter - IBM Bladecenter Remote Supervisor Adapter.
  • cisco_ucs - Cisco Unified Computing System.
  • drac5 - Dell Remote Access Controller for Dell computers.
  • drac7 - Dell Remote Access Controller for Dell computers.
  • eps - ePowerSwitch 8M+ network power switch.
  • hpblade - HP BladeSystem.
  • ilo, ilo2, ilo3, ilo4 - HP Integrated Lights-Out.
  • ipmilan - Intelligent Platform Management Interface and Sun Integrated Lights Out Management devices.
  • rsa - IBM Remote Supervisor Adapter.
  • rsb - Fujitsu-Siemens RSB management interface.
  • wti - WTI Network Power Switch.
SSH Port
The port number used by the power management device to communicate with the host.
Slot
The number used to identify the blade of the power management device.
Service Profile
The service profile name used to identify the blade of the power management device. This field appears instead of Slot when the device type is cisco_ucs.
Options
Power management device specific options. Enter these as 'key=value'. See the documentation of your host's power management device for the options available.
For Red Hat Enterprise Linux 7 hosts, if you are using cisco_ucs as the power management device, you also need to append ssl_insecure=1 to the Options field.
Secure
Select this check box to allow the power management device to connect securely to the host. This can be done via ssh, ssl, or other authentication protocols depending on the power management agent.
6.5.3.3. SPM Priority Settings Explained
The SPM settings table details the information required on the SPM tab of the New Host or Edit Host window.
Table 6.4. SPM settings
Field Name
Description
SPM Priority
Defines the likelihood that the host will be given the role of Storage Pool Manager (SPM). The options are Low, Normal, and High priority. Low priority means that there is a reduced likelihood of the host being assigned the role of SPM, and High priority means there is an increased likelihood. The default setting is Normal.
6.5.3.4. Host Console Settings Explained
The Console settings table details the information required on the Console tab of the New Host or Edit Host window.
Table 6.5. Console settings
Field Name
Description
Override display address
Select this check box to override the display addresses of the host. This feature is useful in a case where the hosts are defined by internal IP and are behind a NAT firewall. When a user connects to a virtual machine from outside of the internal network, instead of returning the private address of the host on which the virtual machine is running, the machine returns a public IP or FQDN (which is resolved in the external network to the public IP).
Display address
The display address specified here will be used for all virtual machines running on this host. The address must be in the format of a fully qualified domain name or IP.

6.5.4. Configuring Host Power Management Settings

Configure your host power management device settings to perform host life-cycle operations (stop, start, restart) from the Administration Portal.
It is necessary to configure host power management in order to utilize host high availability and virtual machine high availability.

Important

Ensure that your host is in maintenance mode before configuring power management settings. Otherwise, all running virtual machines on that host will be stopped ungracefully upon restarting the host, which can cause disruptions in production environments. A warning dialog will appear if you have not correctly set your host to maintenance mode.

Procedure 6.3. Configuring Power Management Settings

  1. In the Hosts tab, select the host in the results list.
  2. Click Edit to open the Edit Host window.
  3. Click the Power Management tab to display the Power Management settings.
  4. Select the Enable Power Management check box to enable the fields.
  5. Select the Kdump integration check box to prevent the host from fencing while performing a kernel crash dump.

    Important

    When you enable Kdump integration on an existing host, the host must be reinstalled for kdump to be configured. See Section 6.5.10, “Reinstalling Virtualization Hosts”.
  6. Optionally, select the Disable policy control of power management check box if you do not want your host's power management to be controlled by the Scheduling Policy of the host's cluster.
  7. Click the plus (+) button to add a new power management device. The Edit fence agent window opens.
  8. Enter the Address, User Name, and Password of the power management device into the appropriate fields.
  9. Select the power management device Type from the drop-down list.
  10. Enter the SSH Port number used by the power management device to communicate with the host.
  11. Enter the Slot number used to identify the blade of the power management device.
  12. Enter the Options for the power management device. Use a comma-separated list of 'key=value' entries.
  13. Select the Secure check box to enable the power management device to connect securely to the host.
  14. Click Test to ensure the settings are correct. Test Succeeded, Host Status is: on will display upon successful verification.
  15. Click OK to close the Edit fence agent window.
  16. In the Power Management tab, optionally expand the Advanced Parameters and use the up and down buttons to specify the order in which the Manager will search the host's cluster and dc (datacenter) for a fencing proxy.
  17. Click OK.
The Power Management drop-down menu is now enabled in the Administration Portal.

6.5.5. Configuring Host Storage Pool Manager Settings

The Storage Pool Manager (SPM) is a management role given to one of the hosts in a data center to maintain access control over the storage domains. The SPM must always be available, and the SPM role will be assigned to another host if the SPM host becomes unavailable. As the SPM role uses some of the host's available resources, it is important to prioritize hosts that can afford the resources.
The Storage Pool Manager (SPM) priority setting of a host alters the likelihood of the host being assigned the SPM role: a host with high SPM priority will be assigned the SPM role before a host with low SPM priority.

Procedure 6.4. Configuring SPM settings

  1. Click the Hosts resource tab, and select a host from the results list.
  2. Click Edit to open the Edit Host window.
  3. Click the SPM tab to display the SPM Priority settings.
  4. Use the radio buttons to select the appropriate SPM priority for the host.
  5. Click OK to save the settings and close the window.
You have configured the SPM priority of the host.

6.5.6. Editing a Resource

Edit the properties of a resource.

Procedure 6.5. Editing a Resource

  1. Use the resource tabs, tree mode, or the search function to find and select the resource in the results list.
  2. Click Edit to open the Edit window.
  3. Change the necessary properties and click OK.
The new properties are saved to the resource. The Edit window will not close if a property field is invalid.

6.5.7. Moving a Host to Maintenance Mode

Many common maintenance tasks, including network configuration and deployment of software updates, require that hosts be placed into maintenance mode. Hosts should be placed into maintenance mode before any event that might cause VDSM to stop working properly, such as a reboot, or issues with networking or storage.
When a host is placed into maintenance mode the Red Hat Enterprise Virtualization Manager attempts to migrate all running virtual machines to alternative hosts. The standard prerequisites for live migration apply, in particular there must be at least one active host in the cluster with capacity to run the migrated virtual machines.

Procedure 6.6. Placing a Host into Maintenance Mode

  1. Click the Hosts resource tab, and select the desired host.
  2. Click Maintenance to open the Maintenance Host(s) confirmation window.
  3. Optionally, enter a Reason for moving the host into maintenance mode in the Maintenance Host(s) confirmation window. This allows you to provide an explanation for the maintenance, which will appear in the logs and when the host is activated again.

    Note

    The host maintenance Reason field will only appear if it has been enabled in the cluster settings. See Section 4.2.2.1, “General Cluster Settings Explained” for more information.
  4. Click OK to initiate maintenance mode.
All running virtual machines are migrated to alternative hosts. If the host is the Storage Pool Manager (SPM), the SPM role is migrated to another host. The Status field of the host changes to Preparing for Maintenance, and finally Maintenance when the operation completes successfully. VDSM does not stop while the host is in maintenance mode.

Note

If migration fails on any virtual machine, click Activate on the host to stop the operation placing it into maintenance mode, then click Cancel Migration on the virtual machine to stop the migration.

6.5.8. Activating a Host from Maintenance Mode

A host that has been placed into maintenance mode, or recently added to the environment, must be activated before it can be used. Activation may fail if the host is not ready; ensure that all tasks are complete before attempting to activate the host.

Procedure 6.7. Activating a Host from Maintenance Mode

  1. Click the Hosts resource tab and select the host.
  2. Click Activate.
The host status changes to Unassigned, and finally Up when the operation is complete. Virtual machines can now run on the host. Virtual machines that were migrated off the host when it was placed into maintenance mode are not automatically migrated back to the host when it is activated, but can be migrated manually. If the host was the Storage Pool Manager (SPM) before being placed into maintenance mode, the SPM role does not return automatically when the host is activated.

6.5.9. Removing a Host

Remove a host from your virtualized environment.

Procedure 6.8. Removing a host

  1. In the Administration Portal, click the Hosts resource tab and select the host in the results list.
  2. Place the host into maintenance mode.
  3. Click Remove to open the Remove Host(s) confirmation window.
  4. Select the Force Remove check box if the host is part of a Red Hat Gluster Storage cluster and has volume bricks on it, or if the host is non-responsive.
  5. Click OK.
Your host has been removed from the environment and is no longer visible in the Hosts tab.

6.5.10. Reinstalling Virtualization Hosts

Reinstall Red Hat Enterprise Virtualization Hypervisors and Red Hat Enterprise Linux hosts from the Administration Portal. Use this procedure to reinstall a hypervisor from the same version of the hypervisor ISO image from which it is currently installed; for Red Hat Enterprise Linux hosts, the procedure reinstalls VDSM. The procedure includes stopping and restarting the host. If migration is enabled at the cluster level, virtual machines are automatically migrated to another host in the cluster; as a result, it is recommended that host reinstalls are performed at a time when the host's usage is relatively low.
The cluster to which the hypervisor belongs must have sufficient memory reserve in order for its hosts to perform maintenance. Moving a host with live virtual machines to maintenance in a cluster that lacks sufficient memory causes the virtual machine migration operation to hang and then fail. You can reduce the memory usage of this operation by shutting down some or all virtual machines before moving the host to maintenance.

Important

Ensure that the cluster contains more than one host before performing a reinstall. Do not attempt to reinstall all the hosts at the same time, as one host must remain available to perform Storage Pool Manager (SPM) tasks.

Procedure 6.9. Reinstalling Red Hat Enterprise Virtualization Hypervisors and Red Hat Enterprise Linux Hosts

  1. Use the Hosts resource tab, tree mode, or the search function to find and select the host in the results list.
  2. Click Maintenance. If migration is enabled at cluster level, any virtual machines running on the host are migrated to other hosts. If the host is the SPM, this function is moved to another host. The status of the host changes as it enters maintenance mode.
  3. Click Reinstall to open the Install Host window.
  4. Click OK to reinstall the host.
Once successfully reinstalled, the host displays a status of Up. Any virtual machines that were migrated off the host can now be migrated back to it.

Important

After a Red Hat Enterprise Virtualization Hypervisor is successfully registered to the Red Hat Enterprise Virtualization Manager and then reinstalled, it may erroneously appear in the Administration Portal with the status of Install Failed. Click Activate, and the Hypervisor will change to an Up status and be ready for use.

6.5.11. Customizing Hosts with Tags

You can use tags to store information about your hosts. You can then search for hosts based on tags.

Procedure 6.10. Customizing hosts with tags

  1. Use the Hosts resource tab, tree mode, or the search function to find and select the host in the results list.
  2. Click Assign Tags to open the Assign Tags window.
    Assign Tags Window

    Figure 6.1. Assign Tags Window

  3. The Assign Tags window lists all available tags. Select the check boxes of applicable tags.
  4. Click OK to assign the tags and close the window.
You have added extra, searchable information about your host as tags.

6.5.12. Viewing Host Errata

Errata for each host can be viewed after the Red Hat Enterprise Virtualization host has been configured to receive errata information from the Red Hat Satellite server. For more information on configuring a host to receive errata information, see Section 6.5.2, “Configuring Satellite Errata Management for a Host”.

Procedure 6.11. Viewing Host Errata

  1. Click the Hosts resource tab, and select a host from the results list.
  2. Click the General tab in the details pane.
  3. Click the Errata sub-tab in the General tab.

6.5.13. Viewing the Health Status of a Host

Hosts have an external health status in addition to their regular Status. The external health status is reported by plug-ins or external systems, or set by an administrator, and appears to the left of the host's Name as one of the following icons:
  • OK: No icon
  • Info
  • Warning
  • Error
  • Failure
To view further details about the host's health status, select the host and click the Events sub-tab.
The host's health status can also be viewed using the REST API. A GET request on a host will include the external_status element, which contains the health status.
You can set a host's health status in the REST API via the events collection. For more information, see Adding Events in the REST API Guide.
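For example, the health status could be retrieved with a request such as the following; the Manager FQDN, credentials, and host ID are placeholders, and --insecure is used only because this sketch does not configure the Manager's CA certificate:
    # curl --insecure --user admin@internal:password \
        https://manager.example.com/ovirt-engine/api/hosts/<host_id>
The returned host representation includes the external_status element (for example, a state of ok); the exact XML layout may vary between API versions.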

6.5.14. Viewing Host Devices

You can view the host devices for each host in the details pane. If the host has been configured for direct device assignment, these devices can be directly attached to virtual machines for improved performance.
For more information on the hardware requirements for direct device assignment, see Additional Hardware Considerations for Using Device Assignment in Red Hat Enterprise Virtualization Hardware Considerations for Implementing SR-IOV.
For more information on configuring the host for direct device assignment, see Configuring a Hypervisor Host for PCI Passthrough in the Installation Guide.
For more information on attaching host devices to virtual machines, see Host Devices in the Virtual Machine Management Guide.

Procedure 6.12. Viewing Host Devices

  1. Use the Hosts resource tab, tree mode, or the search function to find and select a host from the results list.
  2. Click the Host Devices tab in the details pane.
The details pane lists the details of the host devices, including whether the device is attached to a virtual machine and whether it is currently in use by that virtual machine.

6.5.15. Preparing Host and Guest Systems for GPU Passthrough

The Graphics Processing Unit (GPU) device from a host can be directly assigned to a virtual machine. Before this can be achieved, both the host and the virtual machine require amendments to their grub configuration files. Both machines must then be rebooted for the changes to take effect.
This procedure is relevant for hosts with either x86_64 or ppc64le architecture.
For more information on the hardware requirements for direct device assignment, see PCI Device Requirements in the Installation Guide.
This procedure assumes that the server has been configured properly to allow for device assignment. See Configuring a Hypervisor Host for PCI Passthrough in the Installation Guide.

Procedure 6.13. Preparing a Host for GPU Passthrough

  1. Log in to the host server and find the device vendor ID:product ID. In this example, the IDs are 10de:13ba and 10de:0fbc.
    # lspci -nn
    ...
    01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GM107GL [Quadro K2200] [10de:13ba] (rev a2)
    01:00.1 Audio device [0403]: NVIDIA Corporation Device [10de:0fbc] (rev a1)
    ...
  2. Edit the grub configuration file and append pci-stub.ids=xxxx:xxxx to the end of the GRUB_CMDLINE_LINUX line.
    # vi /etc/default/grub
    ...
    GRUB_CMDLINE_LINUX="nofb splash=quiet console=tty0 ... pci-stub.ids=10de:13ba,10de:0fbc"
    ...
    Blacklist the corresponding drivers on the host. In this example, NVIDIA's nouveau driver is blacklisted by an additional amendment to the GRUB_CMDLINE_LINUX line.
    # vi /etc/default/grub
    ...
    GRUB_CMDLINE_LINUX="nofb splash=quiet console=tty0 ... pci-stub.ids=10de:13ba,10de:0fbc rdblacklist=nouveau"
    ...
    Save the grub configuration file.
  3. Refresh the grub.cfg file and reboot the server for these changes to take effect:
    # grub2-mkconfig -o /boot/grub2/grub.cfg
    # reboot
  4. Confirm the device is bound to the pci-stub driver with the lspci command:
    # lspci -nnk
    ...
    01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GM107GL [Quadro K2200] [10de:13ba] (rev a2)
            Subsystem: NVIDIA Corporation Device [10de:1097]
            Kernel driver in use: pci-stub
    01:00.1 Audio device [0403]: NVIDIA Corporation Device [10de:0fbc] (rev a1)
            Subsystem: NVIDIA Corporation Device [10de:1097]
            Kernel driver in use: pci-stub
    ...
Proceed to the next procedure to configure GPU passthrough on the guest system side.

Procedure 6.14. Preparing a Guest Virtual Machine for GPU Passthrough

    • For Linux
      1. Only proprietary GPU drivers are supported. Blacklist the corresponding open-source driver in the grub configuration file. For example:
        $ vi /etc/default/grub
        ...
        GRUB_CMDLINE_LINUX="nofb splash=quiet console=tty0 ... rdblacklist=nouveau"
        ...
      2. Locate the GPU BusID. In this example, the BusID is 00:09.0.
        # lspci | grep VGA
        00:09.0 VGA compatible controller: NVIDIA Corporation GK106GL [Quadro K4000] (rev a1)
      3. Edit the /etc/X11/xorg.conf file and append the following content:
        Section "Device"
            Identifier "Device0"
            Driver "nvidia"
            VendorName "NVIDIA Corporation"
            BusID "PCI:0:9:0"
        EndSection
      4. Restart the virtual machine.
    • For Windows
      1. Download and install the corresponding drivers for the device. For example, for NVIDIA drivers, go to NVIDIA Driver Downloads.
      2. Restart the virtual machine.
The host GPU can now be directly assigned to the prepared virtual machine. For more information on assigning host devices to virtual machines, see Host Devices in the Virtual Machine Management Guide.

6.6. Host Resilience

6.6.1. Host High Availability

The Red Hat Enterprise Virtualization Manager uses fencing to keep the hosts in a cluster responsive. A Non Responsive host is different from a Non Operational host. Non Operational hosts can be communicated with by the Manager, but have an incorrect configuration, for example a missing logical network. Non Responsive hosts cannot be communicated with by the Manager.
If a host with a power management device loses communication with the Manager, it can be fenced (rebooted) from the Administration Portal. All the virtual machines running on that host are stopped, and highly available virtual machines are started on a different host.
All power management operations are done using a proxy host, as opposed to directly by the Red Hat Enterprise Virtualization Manager. At least two hosts are required for power management operations.
Fencing allows a cluster to react to unexpected host failures as well as enforce power saving, load balancing, and virtual machine availability policies. You should configure the fencing parameters for your host's power management device and test their correctness from time to time.
Hosts can be fenced automatically using the power management parameters, or manually by right-clicking on a host and using the options on the menu. In a fencing operation, an unresponsive host is rebooted, and if the host does not return to an active status within a prescribed time, it remains unresponsive pending manual intervention and troubleshooting.
If the host is required to run virtual machines that are highly available, power management must be enabled and configured.

6.6.2. Power Management by Proxy in Red Hat Enterprise Virtualization

The Red Hat Enterprise Virtualization Manager does not communicate directly with fence agents. Instead, the Manager uses a proxy to send power management commands to a host power management device. The Manager uses VDSM to execute power management device actions, so another host in the environment is used as a fencing proxy.
You can select between:
  • Any host in the same cluster as the host requiring fencing.
  • Any host in the same data center as the host requiring fencing.
A viable fencing proxy host has a status of either Up or Maintenance.

6.6.3. Setting Fencing Parameters on a Host

The parameters for host fencing are set using the Power Management fields on the New Host or Edit Host windows. Power management enables the system to fence a troublesome host using an additional interface such as a Remote Access Card (RAC).
All power management operations are done using a proxy host, as opposed to directly by the Red Hat Enterprise Virtualization Manager. At least two hosts are required for power management operations.

Procedure 6.15. Setting fencing parameters on a host

  1. Use the Hosts resource tab, tree mode, or the search function to find and select the host in the results list.
  2. Click Edit to open the Edit Host window.
  3. Click the Power Management tab.
    Power Management Settings

    Figure 6.2. Power Management Settings

  4. Select the Enable Power Management check box to enable the fields.
  5. Select the Kdump integration check box to prevent the host from fencing while performing a kernel crash dump.

    Important

    When you enable Kdump integration on an existing host, the host must be reinstalled for kdump to be configured. See Section 6.5.10, “Reinstalling Virtualization Hosts”.
  6. Optionally, select the Disable policy control of power management check box if you do not want your host's power management to be controlled by the Scheduling Policy of the host's cluster.
  7. Click the plus (+) button to add a new power management device. The Edit fence agent window opens.
    Edit fence agent

    Figure 6.3. Edit fence agent

  8. Enter the Address, User Name, and Password of the power management device.
  9. Select the power management device Type from the drop-down list.

    Note

    In Red Hat Enterprise Virtualization 3.5 and above, you can use a custom power management device. For more information on how to set up a custom power management device, see https://access.redhat.com/articles/1238743.
  10. Enter the SSH Port number used by the power management device to communicate with the host.
  11. Enter the Slot number used to identify the blade of the power management device.
  12. Enter the Options for the power management device. Use a comma-separated list of 'key=value' entries.
  13. Select the Secure check box to enable the power management device to connect securely to the host.
  14. Click the Test button to ensure the settings are correct. Test Succeeded, Host Status is: on will display upon successful verification.

    Warning

    Power management parameters (user ID, password, options, and so on) are tested by Red Hat Enterprise Virtualization Manager only during setup and manually after that. If you choose to ignore alerts about incorrect parameters, or if the parameters are changed on the power management hardware without the corresponding change in Red Hat Enterprise Virtualization Manager, fencing is likely to fail when most needed.
  15. Click OK to close the Edit fence agent window.
  16. In the Power Management tab, optionally expand the Advanced Parameters and use the up and down buttons to specify the order in which the Manager will search the host's cluster and dc (datacenter) for a fencing proxy.
  17. Click OK.
You are returned to the list of hosts. Note that the exclamation mark next to the host's name has now disappeared, signifying that power management has been successfully configured.

6.6.4. fence_kdump Advanced Configuration

kdump

The kdump service is available by default on new hosts and Hypervisors running Red Hat Enterprise Linux 6.6 or later, or 7.1 or later. On older hosts, Kdump integration cannot be enabled; these hosts must be upgraded in order to use this feature.

Select a host to view the status of the kdump service in the General tab of the details pane:
  • Enabled: kdump is configured properly and the kdump service is running.
  • Disabled: the kdump service is not running (in this case kdump integration will not work properly).
  • Unknown: displayed only for hosts with an older VDSM version that does not report kdump status.
For more information on installing and using kdump, see the Kernel Crash Dump Guide for Red Hat Enterprise Linux 7, or the kdump Crash Recovery Service section of the Deployment Guide for Red Hat Enterprise Linux 6.
fence_kdump

Enabling Kdump integration in the Power Management tab of the New Host or Edit Host window configures a standard fence_kdump setup. If the environment's network configuration is simple and the Manager's FQDN is resolvable on all hosts, the default fence_kdump settings are sufficient for use.

However, there are some cases where advanced configuration of fence_kdump is necessary. Environments with more complex networking may require manual changes to the configuration of the Manager, fence_kdump listener, or both. For example, if the Manager's FQDN is not resolvable on all hosts with Kdump integration enabled, you can set a proper host name or IP address using engine-config:
engine-config -s FenceKdumpDestinationAddress=A.B.C.D
The following example cases may also require configuration changes:
  • The Manager has two NICs, where one of these is public-facing, and the second is the preferred destination for fence_kdump messages.
  • You need to execute the fence_kdump listener on a different IP or port.
  • You need to set a custom interval for fence_kdump notification messages, to prevent possible packet loss.
Customized fence_kdump detection settings are recommended for advanced users only, as changes to the default configuration are only necessary in more complex networking setups. For configuration options for the fence_kdump listener see Section 6.6.4.1, “fence_kdump listener Configuration”. For configuration of kdump on the Manager see Section 6.6.4.2, “Configuring fence_kdump on the Manager”.
6.6.4.1. fence_kdump listener Configuration
Edit the configuration of the fence_kdump listener. This is only necessary in cases where the default configuration is not sufficient.

Procedure 6.16. Manually Configuring the fence_kdump Listener

  1. Create a new file (for example, my-fence-kdump.conf) in /etc/ovirt-engine/ovirt-fence-kdump-listener.conf.d/
  2. Enter your customization with the syntax OPTION=value and save the file. An example file is shown after this procedure.

    Important

    The edited values must also be changed in engine-config as outlined in the fence_kdump Listener Configuration Options table in Section 6.6.4.2, “Configuring fence_kdump on the Manager”.
  3. Restart the fence_kdump listener:
    # service ovirt-fence-kdump-listener restart
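For example, to move the listener to a different port (7411 here is an arbitrary value), /etc/ovirt-engine/ovirt-fence-kdump-listener.conf.d/my-fence-kdump.conf might contain:
    LISTENER_PORT=7411
As noted in the Important box above, the matching value must also be set for FenceKdumpDestinationPort with engine-config; a corresponding example is shown in Section 6.6.4.2, “Configuring fence_kdump on the Manager”.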
The following options can be customized if required:
Table 6.6. fence_kdump Listener Configuration Options
Variable Description Default Note
LISTENER_ADDRESS Defines the IP address to receive fence_kdump messages on. 0.0.0.0 If the value of this parameter is changed, it must match the value of FenceKdumpDestinationAddress in engine-config.
LISTENER_PORT Defines the port to receive fence_kdump messages on. 7410 If the value of this parameter is changed, it must match the value of FenceKdumpDestinationPort in engine-config.
HEARTBEAT_INTERVAL Defines the interval in seconds of the listener's heartbeat updates. 30 If the value of this parameter is changed, it must be half the size or smaller than the value of FenceKdumpListenerTimeout in engine-config.
SESSION_SYNC_INTERVAL Defines the interval in seconds to synchronize the listener's host kdumping sessions in memory to the database. 5 If the value of this parameter is changed, it must be half the size or smaller than the value of KdumpStartedTimeout in engine-config.
REOPEN_DB_CONNECTION_INTERVAL Defines the interval in seconds to reopen the database connection which was previously unavailable. 30 -
KDUMP_FINISHED_TIMEOUT Defines the maximum timeout in seconds after the last received message from kdumping hosts after which the host kdump flow is marked as FINISHED. 60 If the value of this parameter is changed, it must be double the size or higher than the value of FenceKdumpMessageInterval in engine-config.
6.6.4.2. Configuring fence_kdump on the Manager
Edit the Manager's kdump configuration. This is only necessary in cases where the default configuration is not sufficient. The current configuration values can be found using:
# engine-config -g OPTION

Procedure 6.17. Manually Configuring Kdump with engine-config

  1. Edit kdump's configuration using the engine-config command (a concrete example is shown after this procedure):
    # engine-config -s OPTION=value

    Important

    The edited values must also be changed in the fence_kdump listener configuration file as outlined in the Kdump Configuration Options table. See Section 6.6.4.1, “fence_kdump listener Configuration”.
  2. Restart the ovirt-engine service:
    # service ovirt-engine restart
  3. Reinstall all hosts with Kdump integration enabled, if required (see the table below).
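Continuing the listener example from Section 6.6.4.1, “fence_kdump listener Configuration”, the matching Manager-side change for the arbitrary port 7411 might look like this:
    # engine-config -s FenceKdumpDestinationPort=7411
    # service ovirt-engine restart
Because FenceKdumpDestinationPort is listed in the table below as requiring it, all hosts with Kdump integration enabled must then be reinstalled.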
The following options can be configured using engine-config:
Table 6.7. Kdump Configuration Options
Variable Description Default Note
FenceKdumpDestinationAddress Defines the hostname(s) or IP address(es) to send fence_kdump messages to. If empty, the Manager's FQDN is used. Empty string (Manager FQDN is used) If the value of this parameter is changed, it must match the value of LISTENER_ADDRESS in the fence_kdump listener configuration file, and all hosts with Kdump integration enabled must be reinstalled.
FenceKdumpDestinationPort Defines the port to send fence_kdump messages to. 7410 If the value of this parameter is changed, it must match the value of LISTENER_PORT in the fence_kdump listener configuration file, and all hosts with Kdump integration enabled must be reinstalled.
FenceKdumpMessageInterval Defines the interval in seconds between messages sent by fence_kdump. 5 If the value of this parameter is changed, it must be half the size or smaller than the value of KDUMP_FINISHED_TIMEOUT in the fence_kdump listener configuration file, and all hosts with Kdump integration enabled must be reinstalled.
FenceKdumpListenerTimeout Defines the maximum timeout in seconds since the last heartbeat to consider the fence_kdump listener alive. 90 If the value of this parameter is changed, it must be double the size or higher than the value of HEARTBEAT_INTERVAL in the fence_kdump listener configuration file.
KdumpStartedTimeout Defines the maximum timeout in seconds to wait until the first message from the kdumping host is received (to detect that host kdump flow has started). 30 If the value of this parameter is changed, it must be double the size or higher than the value of SESSION_SYNC_INTERVAL in the fence_kdump listener configuration file, and FenceKdumpMessageInterval.

6.6.5. Soft-Fencing Hosts

Hosts can sometimes become non-responsive due to an unexpected problem, and though VDSM is unable to respond to requests, the virtual machines that depend upon VDSM remain alive and accessible. In these situations, restarting VDSM returns VDSM to a responsive state and resolves this issue.
Red Hat Enterprise Virtualization 3.3 introduced "soft-fencing over SSH". Prior to Red Hat Enterprise Virtualization 3.3, non-responsive hosts were fenced only by external fencing devices. In Red Hat Enterprise Virtualization 3.3 and above, the fencing process has been expanded to include "SSH Soft Fencing", a process where the Manager attempts to restart VDSM via SSH on non-responsive hosts. If the Manager fails to restart VDSM via SSH, the responsibility for fencing falls to the external fencing agent if an external fencing agent has been configured.
Soft-fencing over SSH works as follows. Fencing must be configured and enabled on the host, and a valid proxy host (a second host, in an UP state, in the data center) must exist. When the connection between the Manager and the host times out, the following happens:
  1. On the first network failure, the status of the host changes to "connecting".
  2. The Manager then makes three attempts to ask VDSM for its status, or it waits for an interval determined by the load on the host. The length of the interval is calculated as: TimeoutToResetVdsInSeconds (default: 60 seconds) + DelayResetPerVmInSeconds (default: 0.5 seconds) × (the number of virtual machines running on the host) + DelayResetForSpmInSeconds (default: 20 seconds) × (1 if the host is the SPM, otherwise 0). To give VDSM the maximum amount of time to respond, the Manager chooses the longer of these two options (three attempts to retrieve the status of VDSM, or the interval determined by the formula). A worked example follows this list.
  3. If the host does not respond when that interval has elapsed, vdsm restart is executed via SSH.
  4. If vdsm restart does not succeed in re-establishing the connection between the host and the Manager, the status of the host changes to Non Responsive and, if power management is configured, fencing is handed off to the external fencing agent.
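For example, assuming the default configuration values, the interval for a host that is running ten virtual machines and is not the SPM is 60 + (0.5 × 10) + (20 × 0) = 65 seconds; if that same host is also the SPM, the interval is 60 + 5 + 20 = 85 seconds.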

Note

Soft-fencing over SSH can be executed on hosts that have no power management configured. This is distinct from "fencing": fencing can be executed only on hosts that have power management configured.

6.6.6. Using Host Power Management Functions

Summary

When power management has been configured for a host, you can access a number of options from the Administration Portal interface. While each power management device has its own customizable options, they all support the basic options to start, stop, and restart a host.

Procedure 6.18. Using Host Power Management Functions

  1. Use the Hosts resource tab, tree mode, or the search function to find and select the host in the results list.
  2. Click the Power Management drop-down menu.
  3. Select one of the following options:
    • Restart: This option stops the host and waits until the host's status changes to Down. When the agent has verified that the host is down, the highly available virtual machines are restarted on another host in the cluster. The agent then restarts this host. When the host is ready for use its status displays as Up.
    • Start: This option starts the host and lets it join a cluster. When it is ready for use its status displays as Up.
    • Stop: This option powers off the host. Before using this option, ensure that the virtual machines running on the host have been migrated to other hosts in the cluster. Otherwise the virtual machines will crash and only the highly available virtual machines will be restarted on another host. When the host has been stopped its status displays as Non-Operational.

    Important

    When two fencing agents are defined on a host, they can be used concurrently or sequentially. For concurrent agents, both agents have to respond to the Stop command for the host to be stopped; and when one agent responds to the Start command, the host will go up. For sequential agents, to start or stop a host, the primary agent is used first; if it fails, the secondary agent is used.
  4. Selecting one of the above options opens a confirmation window. Click OK to confirm and proceed.
Result

The selected action is performed.

6.6.7. Manually Fencing or Isolating a Non Responsive Host

Summary

If a host unpredictably goes into a non-responsive state, for example, due to a hardware failure, it can significantly affect the performance of the environment. If you do not have a power management device, or it is incorrectly configured, you can reboot the host manually.

Warning

Do not use the Confirm host has been rebooted option unless you have manually rebooted the host. Using this option while the host is still running can lead to virtual machine image corruption.

Procedure 6.19. Manually fencing or isolating a non-responsive host

  1. On the Hosts tab, select the host. The status must display as non-responsive.
  2. Manually reboot the host. This could mean physically entering the lab and rebooting the host.
  3. On the Administration Portal, right-click the host entry and select the Confirm Host has been rebooted button.
  4. A message displays prompting you to ensure that the host has been shut down or rebooted. Select the Approve Operation check box and click OK.
Result

You have manually rebooted your host, allowing highly available virtual machines to be started on active hosts. You confirmed your manual fencing action in the Administration Portal, and the host is back online.

6.7. Hosts and Permissions

6.7.1. Managing System Permissions for a Host

As the SuperUser, the system administrator manages all aspects of the Administration Portal. More specific administrative roles can be assigned to other users. These restricted administrator roles are useful for granting a user administrative privileges that limit them to a specific resource. For example, a DataCenterAdmin role has administrator privileges only for the assigned data center with the exception of the storage for that data center, and a ClusterAdmin has administrator privileges only for the assigned cluster.
A host administrator is a system administration role for a specific host only. This is useful in clusters with multiple hosts, where each host requires a system administrator. You can use the Configure button in the header bar to assign a host administrator for all hosts in the environment.
The host administrator role permits the following actions:
  • Edit the configuration of the host.
  • Set up the logical networks.
  • Remove the host.
You can also change the system administrator of a host by removing the existing system administrator and adding the new system administrator.

6.7.2. Host Administrator Roles Explained

Host Permission Roles

The table below describes the administrator roles and privileges applicable to host administration.

Table 6.8. Red Hat Enterprise Virtualization System Administrator Roles
Role Privileges Notes
HostAdmin Host Administrator Can configure, manage, and remove a specific host. Can also perform network-related operations on a specific host.

6.7.3. Assigning an Administrator or User Role to a Resource

Assign administrator or user roles to resources to allow users to access or manage that resource.

Procedure 6.20. Assigning a Role to a Resource

  1. Use the resource tabs, tree mode, or the search function to find and select the resource in the results list.
  2. Click the Permissions tab in the details pane to list the assigned users, the user's role, and the inherited permissions for the selected resource.
  3. Click Add.
  4. Enter the name or user name of an existing user into the Search text box and click Go. Select a user from the resulting list of possible matches.
  5. Select a role from the Role to Assign: drop-down list.
  6. Click OK.
You have assigned a role to a user; the user now has the inherited permissions of that role enabled for that resource.

6.7.4. Removing an Administrator or User Role from a Resource

Remove an administrator or user role from a resource; the user loses the inherited permissions associated with the role for that resource.

Procedure 6.21. Removing a Role from a Resource

  1. Use the resource tabs, tree mode, or the search function to find and select the resource in the results list.
  2. Click the Permissions tab in the details pane to list the assigned users, the user's role, and the inherited permissions for the selected resource.
  3. Select the user to remove from the resource.
  4. Click Remove. The Remove Permission window opens to confirm permissions removal.
  5. Click OK.
You have removed the user's role, and the associated permissions, from the resource.

Chapter 7. Storage

Red Hat Enterprise Virtualization uses a centralized storage system for virtual machine disk images, ISO files and snapshots. Storage networking can be implemented using:
  • Network File System (NFS)
  • GlusterFS exports
  • Other POSIX compliant file systems
  • Internet Small Computer System Interface (iSCSI)
  • Local storage attached directly to the virtualization hosts
  • Fibre Channel Protocol (FCP)
  • Parallel NFS (pNFS)
Setting up storage is a prerequisite for a new data center because a data center cannot be initialized unless storage domains are attached and activated.
As a Red Hat Enterprise Virtualization system administrator, you need to create, configure, attach and maintain storage for the virtualized enterprise. You should be familiar with the storage types and their use. Read your storage array vendor's guides, and see the Red Hat Enterprise Linux Storage Administration Guide for more information on the concepts, protocols, requirements or general usage of storage.
Red Hat Enterprise Virtualization enables you to assign and manage storage using the Administration Portal's Storage tab. The Storage results list displays all the storage domains, and the details pane shows general information about the domain.
To add storage domains you must be able to successfully access the Administration Portal, and there must be at least one host connected with a status of Up.
Red Hat Enterprise Virtualization has three types of storage domains:
  • Data Domain: A data domain holds the virtual hard disks and OVF files of all the virtual machines and templates in a data center. In addition, snapshots of the virtual machines are also stored in the data domain.
    The data domain cannot be shared across data centers. Data domains of multiple types (iSCSI, NFS, FC, POSIX, and Gluster) can be added to the same data center, provided they are all shared, rather than local, domains.
    You must attach a data domain to a data center before you can attach domains of other types to it.
  • ISO Domain: ISO domains store ISO files (or logical CDs) used to install and boot operating systems and applications for the virtual machines. An ISO domain removes the data center's need for physical media. An ISO domain can be shared across different data centers. ISO domains can only be NFS-based. Only one ISO domain can be added to a data center.
  • Export Domain: Export domains are temporary storage repositories that are used to copy and move images between data centers and Red Hat Enterprise Virtualization environments. Export domains can be used to backup virtual machines. An export domain can be moved between data centers, however, it can only be active in one data center at a time. Export domains can only be NFS-based. Only one export domain can be added to a data center.

Important

Only commence configuring and attaching storage for your Red Hat Enterprise Virtualization environment once you have determined the storage needs of your data center(s).

7.1. Understanding Storage Domains

A storage domain is a collection of images that have a common storage interface. A storage domain contains complete images of templates and virtual machines (including snapshots), or ISO files. A storage domain can be made of either block devices (SAN - iSCSI or FCP) or a file system (NAS - NFS, GlusterFS, or other POSIX compliant file systems).
On NFS, all virtual disks, templates, and snapshots are files.
On SAN (iSCSI/FCP), each virtual disk, template or snapshot is a logical volume. Block devices are aggregated into a logical entity called a volume group, and then divided by LVM (Logical Volume Manager) into logical volumes for use as virtual hard disks. See Red Hat Enterprise Linux Logical Volume Manager Administration Guide for more information on LVM.
Virtual disks can have one of two formats, either QCOW2 or RAW. The type of storage can be either Sparse or Preallocated. Snapshots are always sparse but can be taken for disks created either as RAW or sparse.
Virtual machines that share the same storage domain can be migrated between hosts that belong to the same cluster.

7.2. Preparing and Adding NFS Storage

7.2.1. Preparing NFS Storage

Set up NFS shares that will serve as a data domain on a Red Hat Enterprise Linux server. It is not necessary to create an ISO domain if one was created during the Red Hat Enterprise Virtualization Manager installation procedure.
For information on the setup and configuration of NFS on Red Hat Enterprise Linux, see Network File System (NFS) in the Red Hat Enterprise Linux 6 Storage Administration Guide or Network File System (NFS) in the Red Hat Enterprise Linux 7 Storage Administration Guide.
Specific system user accounts and system user groups are required by Red Hat Enterprise Virtualization so the Manager can store data in the storage domains represented by the exported directories.

Procedure 7.1. Configuring the Required System User Accounts and System User Groups

  1. Create the group kvm:
    # groupadd kvm -g 36
  2. Create the user vdsm in the group kvm:
    # useradd vdsm -u 36 -g 36
  3. Set the ownership of your exported directories to 36:36, which gives vdsm:kvm ownership:
    # chown -R 36:36 /exports/data
    # chown -R 36:36 /exports/export
  4. Change the mode of the directories so that read and write access is granted to the owner, and so that read and execute access is granted to the group and other users:
    # chmod 0755 /exports/data
    # chmod 0755 /exports/export
For more information on the required system users and groups see Appendix G, System Accounts.
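The exported directories must also be defined in /etc/exports and the NFS server service must be running. A minimal sketch for the example directories follows; the wildcard client specification is a permissive placeholder and should be restricted to the hosts in your environment:
    # vi /etc/exports
    /exports/data      *(rw)
    /exports/export    *(rw)
    # exportfs -r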

7.2.2. Attaching NFS Storage

Attach an NFS storage domain to the data center in your Red Hat Enterprise Virtualization environment. This storage domain provides storage for virtualized guest images and ISO boot media. This procedure assumes that you have already exported shares. You must create the data domain before creating the export domain. Use the same procedure to create the export domain, selecting Export / NFS in the Domain Function / Storage Type list.
  1. In the Red Hat Enterprise Virtualization Manager Administration Portal, click the Storage resource tab.
  2. Click New Domain.
    The New Domain Window

    Figure 7.1. The New Domain Window

  3. Enter a Name for the storage domain.
  4. Accept the default values for the Data Center, Domain Function, Storage Type, Format, and Use Host lists.
  5. Enter the Export Path to be used for the storage domain.
    The export path should be in the format of 192.168.0.10:/data or domain.example.com:/data.
  6. Optionally, you can configure the advanced parameters.
    1. Click Advanced Parameters.
    2. Enter a percentage value into the Warning Low Space Indicator field. If the free space available on the storage domain is below this percentage, warning messages are displayed to the user and logged.
    3. Enter a GB value into the Critical Space Action Blocker field. If the free space available on the storage domain is below this value, error messages are displayed to the user and logged, and any new action that consumes space, even temporarily, will be blocked.
    4. Select the Wipe After Delete check box to enable the wipe after delete option. This option can be edited after the domain is created, but doing so will not change the wipe after delete property of disks that already exist.
  7. Click OK.
    The new NFS data domain is displayed in the Storage tab with a status of Locked while the disk is being prepared. The data domain is then automatically attached to the data center.
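You can confirm from any Red Hat Enterprise Linux machine that the export path used in step 5 is being exported by the server. This is an informal check rather than part of the procedure above, and uses the example server address 192.168.0.10:
    # showmount -e 192.168.0.10
The exported directories listed in the output should include the path you intend to use for the storage domain.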

7.2.3. Increasing NFS Storage

To increase the amount of NFS storage, you can either create a new storage domain and add it to an existing data center, or increase the available free space on the NFS server. For the former option, see Section 7.2.2, “Attaching NFS Storage”. The following procedure explains how to increase the available free space on the existing NFS server.

Procedure 7.2. Increasing an Existing NFS Storage Domain

  1. Click the Storage resource tab and select an NFS storage domain.
  2. In the details pane, click the Data Center tab and click the Maintenance button to place the storage domain into maintenance mode. This unmounts the existing share and makes it possible to resize the storage domain.
  3. On the NFS server, resize the storage. For Red Hat Enterprise Linux 6 systems, see Red Hat Enterprise Linux 6 Storage Administration Guide. For Red Hat Enterprise Linux 7 systems, see Red Hat Enterprise Linux 7 Storage Administration Guide. An illustrative resize sketch follows this procedure.
  4. In the details pane, click the Data Center tab and click the Activate button to mount the storage domain.
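How the storage is resized on the NFS server (step 3) depends entirely on how that storage is backed; the Storage Administration Guides remain the authority. As a hedged sketch only, if the exported directory happened to reside on an XFS file system on an LVM logical volume (the volume group and logical volume names below are hypothetical), the resize might look like this:
    # lvextend -L +100G /dev/vg_storage/lv_exports
    # xfs_growfs /exports/data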

7.3. Preparing and Adding Local Storage

7.3.1. Preparing Local Storage

A local storage domain can be set up on a host. When you set up a host to use local storage, the host is automatically added to a new data center and cluster that no other hosts can be added to. Multiple host clusters require that all hosts have access to all storage domains, which is not possible with local storage. Virtual machines created in a single host cluster cannot be migrated, fenced, or scheduled. For more information on the required system users and groups see Appendix G, System Accounts.

Important

On Red Hat Enterprise Virtualization Hypervisors the only path permitted for use as local storage is /data/images. This directory already exists with the correct permissions on Hypervisor installations. The steps in this procedure are only required when preparing local storage on Red Hat Enterprise Linux virtualization hosts.

Procedure 7.3. Preparing Local Storage

  1. On the virtualization host, create the directory to be used for the local storage.
    # mkdir -p /data/images
  2. Ensure that the directory has permissions allowing read/write access to the vdsm user (UID 36) and kvm group (GID 36).
    # chown 36:36 /data /data/images
    # chmod 0755 /data /data/images
Your local storage is ready to be added to the Red Hat Enterprise Virtualization environment.

7.3.2. Adding Local Storage

After the storage local to your host has been prepared, use the Manager to add it to the host.
Adding local storage to a host in this manner causes the host to be put in a new data center and cluster. The local storage configuration window combines the creation of a data center, a cluster, and storage into a single process.

Procedure 7.4. Adding Local Storage

  1. Click the Hosts resource tab, and select a host in the results list.
  2. Click Maintenance to open the Maintenance Host(s) confirmation window.
  3. Click OK to initiate maintenance mode.
  4. Click Configure Local Storage to open the Configure Local Storage window.
    Configure Local Storage Window

    Figure 7.2. Configure Local Storage Window

  5. Click the Edit buttons next to the Data Center, Cluster, and Storage fields to configure and name the local storage domain.
  6. Set the path to your local storage in the text entry field.
  7. If applicable, select the Optimization tab to configure the memory optimization policy for the new local storage cluster.
  8. Click OK to save the settings and close the window.
Your host comes online in a data center of its own.

7.4. Adding POSIX Compliant File System Storage

Red Hat Enterprise Virtualization 3.1 and higher supports the use of POSIX (native) file systems for storage. POSIX file system support allows you to mount file systems using the same mount options that you would normally use when mounting them manually from the command line. This functionality is intended to allow access to storage not exposed using NFS, iSCSI, or FCP.
Any POSIX compliant file system used as a storage domain in Red Hat Enterprise Virtualization must support sparse files and direct I/O. The Common Internet File System (CIFS), for example, does not support direct I/O, making it incompatible with Red Hat Enterprise Virtualization.

Important

Do not mount NFS storage by creating a POSIX compliant file system Storage Domain. Always create an NFS Storage Domain instead.

7.4.1. Attaching POSIX Compliant File System Storage

You want to use a POSIX compliant file system that is not exposed using NFS, iSCSI, or FCP as a storage domain.

Procedure 7.5. Attaching POSIX Compliant File System Storage

  1. Click the Storage resource tab to list the existing storage domains in the results list.
  2. Click New Domain to open the New Domain window.
    POSIX Storage

    Figure 7.3. POSIX Storage

  3. Enter the Name for the storage domain.
  4. Select the Data Center to be associated with the storage domain. The Data Center selected must be of type POSIX (POSIX compliant FS). Alternatively, select (none).
  5. Select Data / POSIX compliant FS from the Domain Function / Storage Type drop-down menu.
    If applicable, select the Format from the drop-down menu.
  6. Select a host from the Use Host drop-down menu. Only hosts within the selected data center will be listed. The host that you select will be used to connect the storage domain.
  7. Enter the Path to the POSIX file system, as you would normally provide it to the mount command.
  8. Enter the VFS Type, as you would normally provide it to the mount command using the -t argument. See man mount for a list of valid VFS types.
  9. Enter additional Mount Options, as you would normally provide them to the mount command using the -o argument. The mount options should be provided in a comma-separated list. See man mount for a list of valid mount options.
  10. Optionally, you can configure the advanced parameters.
    1. Click Advanced Parameters.
    2. Enter a percentage value into the Warning Low Space Indicator field. If the free space available on the storage domain is below this percentage, warning messages are displayed to the user and logged.
    3. Enter a GB value into the Critical Space Action Blocker field. If the free space available on the storage domain is below this value, error messages are displayed to the user and logged, and any new action that consumes space, even temporarily, will be blocked.
    4. Select the Wipe After Delete check box to enable the wipe after delete option. This option can be edited after the domain is created, but doing so will not change the wipe after delete property of disks that already exist.
  11. Click OK to attach the new Storage Domain and close the window.
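The Path, VFS Type, and Mount Options fields in steps 7 to 9 map directly onto the arguments of an ordinary mount command. As a hedged sketch with a hypothetical device and mount point, you can verify the values manually on the selected host before entering them in the window:
    # mkdir /mnt/posix-test
    # mount -t xfs -o noatime /dev/mapper/example_device /mnt/posix-test
    # umount /mnt/posix-test
Here /dev/mapper/example_device corresponds to the Path field, xfs to the VFS Type field, and noatime to the Mount Options field.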

7.5. Adding Block Storage

7.5.1. Adding iSCSI Storage

Red Hat Enterprise Virtualization platform supports iSCSI storage by creating a storage domain from a volume group made of pre-existing LUNs. Neither volume groups nor LUNs can be attached to more than one storage domain at a time.
For information on the setup and configuration of iSCSI on Red Hat Enterprise Linux, see iSCSI Target Creation in the Red Hat Enterprise Linux 6 Storage Administration Guide or Online Storage Management in the Red Hat Enterprise Linux 7 Storage Administration Guide.

Procedure 7.6. Adding iSCSI Storage

  1. Click the Storage resource tab to list the existing storage domains in the results list.
  2. Click the New Domain button to open the New Domain window.
  3. Enter the Name of the new storage domain.
    New iSCSI Domain

    Figure 7.4. New iSCSI Domain

  4. Use the Data Center drop-down menu to select a data center.
  5. Use the drop-down menus to select the Domain Function and the Storage Type. The storage domain types that are not compatible with the chosen domain function are not available.
  6. Select an active host in the Use Host field. If this is not the first data domain in a data center, you must select the data center's SPM host.

    Important

    All communication to the storage domain is through the selected host and not directly from the Red Hat Enterprise Virtualization Manager. At least one active host must exist in the system and be attached to the chosen data center. All hosts must have access to the storage device before the storage domain can be configured.
  7. The Red Hat Enterprise Virtualization Manager is able to map either iSCSI targets to LUNs, or LUNs to iSCSI targets. The New Domain window automatically displays known targets with unused LUNs when iSCSI is selected as the storage type. If the target that you are adding storage from is not listed, you can use target discovery to find it; otherwise, proceed to the next step. A manual command-line equivalent of target discovery is sketched after this procedure, for troubleshooting.

    iSCSI Target Discovery

    1. Click Discover Targets to enable target discovery options. When targets have been discovered and logged in to, the New Domain window automatically displays targets with LUNs unused by the environment.

      Note

      LUNs used externally to the environment are also displayed.
      You can use the Discover Targets options to add LUNs on many targets, or multiple paths to the same LUNs.
    2. Enter the fully qualified domain name or IP address of the iSCSI host in the Address field.
    3. Enter the port to connect to the host on when browsing for targets in the Port field. The default is 3260.
    4. If the Challenge Handshake Authentication Protocol (CHAP) is being used to secure the storage, select the User Authentication check box. Enter the CHAP user name and CHAP password.

      Note

      It is now possible to use the REST API to define specific credentials to each iSCSI target per host. See Defining Credentials to an iSCSI Target in the REST API Guide for more information.
    5. Click the Discover button.
    6. Select the target to use from the discovery results and click the Login button.
      Alternatively, click Login All to log in to all of the discovered targets.

      Important

      If more than one path access is required, ensure that you discover and log in to the target through all the required paths. Modifying a storage domain to add additional paths is currently not supported.
  8. Click the + button next to the desired target. This will expand the entry and display all unused LUNs attached to the target.
  9. Select the check box for each LUN that you are using to create the storage domain.
  10. Optionally, you can configure the advanced parameters.
    1. Click Advanced Parameters.
    2. Enter a percentage value into the Warning Low Space Indicator field. If the free space available on the storage domain is below this percentage, warning messages are displayed to the user and logged.
    3. Enter a GB value into the Critical Space Action Blocker field. If the free space available on the storage domain is below this value, error messages are displayed to the user and logged, and any new action that consumes space, even temporarily, will be blocked.
    4. Select the Wipe After Delete check box to enable the wipe after delete option. This option can be edited after the domain is created, but doing so will not change the wipe after delete property of disks that already exist.
  11. Click OK to create the storage domain and close the window.
If you have configured multiple storage connection paths to the same target, follow the procedure in Section 7.5.2, “Configuring iSCSI Multipathing” to complete iSCSI bonding.
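Target discovery and login are normally performed by the Manager through the selected host, as described in step 7. For troubleshooting only, the same operations can be reproduced manually on that host with the standard iscsiadm utility; the portal address and target name below are hypothetical placeholders:
    # iscsiadm -m discovery -t sendtargets -p 192.168.0.20:3260
    # iscsiadm -m node -T iqn.2015-06.com.example:target1 -p 192.168.0.20:3260 --login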

7.5.2. Configuring iSCSI Multipathing

iSCSI multipathing enables you to create and manage groups of logical networks and iSCSI storage connections. To prevent host downtime due to network path failure, configure multiple network paths between hosts and iSCSI storage. Once configured, the Manager connects each host in the data center to each bonded target through the NICs or VLANs related to the logical networks of the same iSCSI Bond. You can also specify which networks to use for storage traffic, instead of allowing hosts to route traffic through a default network. This option is only available in the Administration Portal after at least one iSCSI storage domain has been attached to a data center.

Prerequisites

  • Ensure you have created an iSCSI storage domain and discovered and logged into all the paths to the iSCSI target(s).
  • Ensure you have created Non-Required logical networks to bond with the iSCSI storage connections. You can configure multiple logical networks or bond networks to allow network failover.

Procedure 7.7. Configuring iSCSI Multipathing

  1. Click the Data Centers tab and select a data center from the results list.
  2. In the details pane, click the iSCSI Multipathing tab.
  3. Click Add.
  4. In the Add iSCSI Bond window, enter a Name and a Description for the bond.
  5. Select the networks to be used for the bond from the Logical Networks list. The networks must be Non-Required networks.

    Note

    To change a network's Required designation, from the Administration Portal, select a network, click the Cluster tab, and click the Manage Networks button.
  6. Select the storage domain to be accessed via the chosen networks from the Storage Targets list. Ensure that you select all paths to the same target.
  7. Click OK.
All hosts in the data center are connected to the selected iSCSI target through the selected logical networks.

7.5.3. Adding FCP Storage

Red Hat Enterprise Virtualization platform supports SAN storage by creating a storage domain from a volume group made of pre-existing LUNs. Neither volume groups nor LUNs can be attached to more than one storage domain at a time.
Red Hat Enterprise Virtualization system administrators need a working knowledge of Storage Area Networks (SAN) concepts. SAN usually uses Fibre Channel Protocol (FCP) for traffic between hosts and shared external storage. For this reason, SAN may occasionally be referred to as FCP storage.
For information regarding the setup and configuration of FCP or multipathing on Red Hat Enterprise Linux, see the Storage Administration Guide and DM Multipath Guide.
The following procedure shows you how to attach existing FCP storage to your Red Hat Enterprise Virtualization environment as a data domain. For more information on other supported storage types, see Chapter 7, Storage.

Procedure 7.8. Adding FCP Storage

  1. Click the Storage resource tab to list all storage domains.
  2. Click New Domain to open the New Domain window.
  3. Enter the Name of the storage domain.
    Adding FCP Storage

    Figure 7.5. Adding FCP Storage

  4. Use the Data Center drop-down menu to select an FCP data center.
    If you do not yet have an appropriate FCP data center, select (none).
  5. Use the drop-down menus to select the Domain Function and the Storage Type. The storage domain types that are not compatible with the chosen data center are not available.
  6. Select an active host in the Use Host field. If this is not the first data domain in a data center, you must select the data center's SPM host.

    Important

    All communication to the storage domain is through the selected host and not directly from the Red Hat Enterprise Virtualization Manager. At least one active host must exist in the system and be attached to the chosen data center. All hosts must have access to the storage device before the storage domain can be configured.
  7. The New Domain window automatically displays known targets with unused LUNs when Data / Fibre Channel is selected as the storage type. Select the LUN ID check box to select all of the available LUNs.
  8. Optionally, you can configure the advanced parameters.
    1. Click Advanced Parameters.
    2. Enter a percentage value into the Warning Low Space Indicator field. If the free space available on the storage domain is below this percentage, warning messages are displayed to the user and logged.
    3. Enter a GB value into the Critical Space Action Blocker field. If the free space available on the storage domain is below this value, error messages are displayed to the user and logged, and any new action that consumes space, even temporarily, will be blocked.
    4. Select the Wipe After Delete check box to enable the wipe after delete option. This option can be edited after the domain is created, but doing so will not change the wipe after delete property of disks that already exist.
  9. Click OK to create the storage domain and close the window.
The new FCP data domain displays on the Storage tab. It will remain with a Locked status while it is being prepared for use. When ready, it is automatically attached to the data center.
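If the expected LUNs do not appear in step 7, it can help to confirm on the selected host that the fabric has presented them. A hedged check using the standard multipath tooling (the output layout depends on your configuration):
    # multipath -ll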

7.5.4. Increasing iSCSI or FCP Storage

There are multiple ways to increase iSCSI or FCP storage size:
  • Create a new storage domain with new LUNs and add it to an existing data center. See Section 7.5.1, “Adding iSCSI Storage”.
  • Create new LUNs and add them to an existing storage domain.
  • Expand the storage domain by resizing the underlying LUNs.
For information about creating, configuring, or resizing iSCSI storage on Red Hat Enterprise Linux 6 systems, see Red Hat Enterprise Linux 6 Storage Administration Guide. For Red Hat Enterprise Linux 7 systems, see Red Hat Enterprise Linux 7 Storage Administration Guide.
The following procedure explains how to expand storage area network (SAN) storage by adding a new LUN to an existing storage domain.

Procedure 7.9. Increasing an Existing iSCSI or FCP Storage Domain

  1. Create a new LUN on the SAN.
  2. Click the Storage resource tab and select an iSCSI or FCP domain.
  3. Click the Edit button.
  4. Click on Targets > LUNs, and click the Discover Targets expansion button.
  5. Enter the connection information for the storage server and click the Discover button to initiate the connection.
  6. Click on LUNs > Targets and select the check box of the newly available LUN.
  7. Click OK to add the LUN to the selected storage domain.
This will increase the storage domain by the size of the added LUN.
When expanding the storage domain by resizing the underlying LUNs, the LUNs must also be refreshed in the Red Hat Enterprise Virtualization Administration Portal.

Procedure 7.10. Refreshing the LUN Size

  1. Click the Storage resource tab and select an iSCSI or FCP domain.
  2. Click the Edit button.
  3. Click on LUNs > Targets.
  4. In the Additional Size column, click the Add Additional_Storage_Size button of the LUN to refresh.
  5. Click OK to refresh the LUN to indicate the new storage size.
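Before refreshing the LUN size in the Administration Portal, the hosts must be able to see the new size of the resized LUN. As a hedged sketch for iSCSI-backed domains, a rescan of all sessions can be triggered on a host with the standard iscsiadm utility; for FCP and multipath specifics, follow the Storage Administration Guide and DM Multipath Guide instead.
    # iscsiadm -m session --rescan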

7.5.5. Unusable LUNs in Red Hat Enterprise Virtualization

In certain circumstances, the Red Hat Enterprise Virtualization Manager will not allow you to use a LUN to create a storage domain or virtual machine hard disk.
  • LUNs that are already part of the current Red Hat Enterprise Virtualization environment are automatically prevented from being used.
    Unusable LUNs in the Red Hat Enterprise Virtualization Administration Portal

    Figure 7.6. Unusable LUNs in the Red Hat Enterprise Virtualization Administration Portal

  • LUNs that are already being used by the SPM host will also display as in use. You can choose to forcefully override the contents of these LUNs, but the operation is not guaranteed to succeed.

7.6. Importing Existing Storage Domains

7.6.1. Overview of Importing Existing Storage Domains

In addition to adding new storage domains that contain no data, you can also import existing storage domains and access the data they contain. The ability to import storage domains allows you to recover data in the event of a failure in the Manager database, and to migrate data from one data center or environment to another.
The following is an overview of importing each storage domain type:
Data
Importing an existing data storage domain allows you to access all of the virtual machines and templates that the data storage domain contains. After you import the storage domain, you must manually import each virtual machine and template into the destination data center. The process for importing the virtual machines and templates that a data storage domain contains is similar to that for an export storage domain. However, because data storage domains contain all the virtual machines and templates in a given data center, importing data storage domains is recommended for data recovery or large-scale migration of virtual machines between data centers or environments.

Important

You can only import existing data storage domains that were attached to data centers with a compatibility level of 3.5 or higher.
ISO
Importing an existing ISO storage domain allows you to access all of the ISO files and virtual diskettes that the ISO storage domain contains. No additional action is required after importing the storage domain to access these resources; you can attach them to virtual machines as required.
Export
Importing an existing export storage domain allows you to access all of the virtual machine images and templates that the export storage domain contains. Because export domains are designed for exporting and importing virtual machine images and templates, importing export storage domains is the recommended method of migrating small numbers of virtual machines and templates inside an environment or between environments. For information on exporting and importing virtual machines and templates to and from export storage domains, see Exporting and Importing Virtual Machines and Templates in the Virtual Machine Management Guide.

7.6.2. Importing Storage Domains

Import a storage domain that was previously attached to a data center in the same environment or in a different environment. This procedure assumes the storage domain is no longer attached to any data center in any environment, to avoid data corruption. To import and attach an existing data storage domain to a data center, the target data center must be initialized, and must have a compatibility level of 3.5 or higher.

Procedure 7.11. Importing a Storage Domain

  1. Click the Storage resource tab.
  2. Click Import Domain.
    The Import Pre-Configured Domain window

    Figure 7.7. The Import Pre-Configured Domain window

  3. Select the data center to which to attach the storage domain from the Data Center drop-down list.
  4. Enter a name for the storage domain.
  5. Select the Domain Function and Storage Type from the appropriate drop-down lists.
  6. Select a host from the Use host drop-down list.

    Important

    All communication to the storage domain is through the selected host and not directly from the Red Hat Enterprise Virtualization Manager. At least one active host must exist in the system and be attached to the chosen data center. All hosts must have access to the storage device before the storage domain can be configured.
  7. Enter the details of the storage domain.

    Note

    The fields for specifying the details of the storage domain change in accordance with the value you select in the Domain Function / Storage Type list. These options are the same as those available for adding a new storage domain. For more information on these options, see Section 7.1, “Understanding Storage Domains”.
  8. Select the Activate Domain in Data Center check box to activate the storage domain after attaching it to the selected data center.
  9. Click OK.
The storage domain is imported, and is displayed in the Storage tab. You can now import virtual machines and templates from the storage domain to the data center.

7.6.3. Migrating Storage Domains between Data Centers in the Same Environment

Migrate a storage domain from one data center to another in the same Red Hat Enterprise Virtualization environment to allow the destination data center to access the data contained in the storage domain. This procedure involves detaching the storage domain from one data center, and attaching it to a different data center.

Procedure 7.12. Migrating a Storage Domain between Data Centers in the Same Environment

  1. Shut down all virtual machines running on the required storage domain.
  2. Click the Storage resource tab and select the storage domain from the results list.
  3. Click the Data Center tab in the details pane.
  4. Click Maintenance, then click OK to move the storage domain to maintenance mode.
  5. Click Detach, then click OK to detach the storage domain from the source data center.
  6. Click Attach.
  7. Select the destination data center and click OK.
The storage domain is attached to the destination data center and is automatically activated. You can now import virtual machines and templates from the storage domain to the destination data center.

7.6.4. Migrating Storage Domains between Data Centers in Different Environments

Migrate a storage domain from one Red Hat Enterprise Virtualization environment to another to allow the destination environment to access the data contained in the storage domain. This procedure involves removing the storage domain from one Red Hat Enterprise Virtualization environment, and importing it into a different environment. To import and attach an existing data storage domain to a data center, the target data center must have a compatibility level of 3.5 or higher.

Procedure 7.13. Migrating a Storage Domain between Data Centers in Different Environments

  1. Log in to the Administration Portal of the source environment.
  2. Shut down all virtual machines running on the required storage domain.
  3. Click the Storage resource tab and select the storage domain from the results list.
  4. Click the Data Center tab in the details pane.
  5. Click Maintenance, then click OK to move the storage domain to maintenance mode.
  6. Click Detach, then click OK to detach the storage domain from the source data center.
  7. Click Remove.
  8. In the Remove Storage(s) window, ensure the Format Domain, i.e. Storage Content will be lost! check box is not selected. This step preserves the data in the storage domain for later use.
  9. Click OK to remove the storage domain from the source environment.
  10. Log in to the Administration Portal of the destination environment.
  11. Click the Storage resource tab.
  12. Click Import Domain.
    The Import Pre-Configured Domain window

    Figure 7.8. The Import Pre-Configured Domain window

  13. Select the destination data center from the Data Center drop-down list.
  14. Enter a name for the storage domain.
  15. Select the Domain Function and Storage Type from the appropriate drop-down lists.
  16. Select a host from the Use Host drop-down list.
  17. Enter the details of the storage domain.

    Note

    The fields for specifying the details of the storage domain change in accordance with the value you select in the Storage Type drop-down list. These options are the same as those available for adding a new storage domain. For more information on these options, see Section 7.1, “Understanding Storage Domains”.
  18. Select the Activate Domain in Data Center check box to automatically activate the storage domain when it is attached.
  19. Click OK.
The storage domain is attached to the destination data center in the new Red Hat Enterprise Virtualization environment and is automatically activated. You can now import virtual machines and templates from the imported storage domain to the destination data center.

7.6.5. Importing Virtual Machines from Imported Data Storage Domains

Import a virtual machine from a data storage domain you have imported into your Red Hat Enterprise Virtualization environment. This procedure assumes that the imported data storage domain has been attached to a data center and has been activated.

Procedure 7.14. Importing Virtual Machines from an Imported Data Storage Domain

  1. Click the Storage resource tab.
  2. Click the imported data storage domain.
  3. Click the VM Import tab in the details pane.
  4. Select one or more virtual machines to import.
  5. Click Import.
  6. Select the cluster into which the virtual machines are imported from the Cluster list.
  7. Click OK.
You have imported one or more virtual machines into your environment. The imported virtual machines no longer appear in the list under the VM Import tab.

7.6.6. Importing Templates from Imported Data Storage Domains

Import a template from a data storage domain you have imported into your Red Hat Enterprise Virtualization environment. This procedure assumes that the imported data storage domain has been attached to a data center and has been activated.

Procedure 7.15. Importing Templates from an Imported Data Storage Domain

  1. Click the Storage resource tab.
  2. Click the imported data storage domain.
  3. Click the Template Import tab in the details pane.
  4. Select one or more templates to import.
  5. Click Import.
  6. Select the cluster into which the templates are imported from the Cluster list.
  7. Click OK.
You have imported one or more templates into your environment. The imported templates no longer appear in the list under the Template Import tab.

7.7. Storage Tasks

7.7.1. Populating the ISO Storage Domain

After an ISO storage domain is attached to a data center, ISO images must be uploaded to it. Red Hat Enterprise Virtualization provides an ISO uploader tool that ensures that the images are uploaded into the correct directory path, with the correct user permissions.
The creation of ISO images from physical media is not described in this document. It is assumed that you have access to the images required for your environment.

Procedure 7.16. Populating the ISO Storage Domain

  1. Copy the required ISO image to a temporary directory on the system running Red Hat Enterprise Virtualization Manager.
  2. Log in to the system running Red Hat Enterprise Virtualization Manager as the root user.
  3. Use the engine-iso-uploader command to upload the ISO image. This action will take some time. The amount of time varies depending on the size of the image being uploaded and available network bandwidth.

    Example 7.1. ISO Uploader Usage

    In this example the ISO image RHEL6.iso is uploaded to the ISO domain called ISODomain using NFS. The command will prompt for an administrative user name and password. The user name must be provided in the form user name@domain.
    # engine-iso-uploader --iso-domain=ISODomain upload RHEL6.iso
The ISO image is uploaded and appears in the ISO storage domain specified. It is also available in the list of available boot media when creating virtual machines in the data center to which the storage domain is attached.
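If you are unsure of the ISO domain name to pass with the --iso-domain option, the same tool can list the ISO storage domains known to the Manager; it prompts for the administrative password in the same way:
    # engine-iso-uploader list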

7.7.2. Moving Storage Domains to Maintenance Mode

Detaching and removing storage domains requires that they be in maintenance mode. Maintenance mode is also required in order to redesignate another data domain as the master data domain.
Expanding iSCSI domains by adding more LUNs can only be done when the domain is active.

Procedure 7.17. Moving storage domains to maintenance mode

  1. Shut down all the virtual machines running on the storage domain.
  2. Click the Storage resource tab and select a storage domain.
  3. Click the Data Centers tab in the details pane.
  4. Click Maintenance to open the Storage Domain maintenance confirmation window.
  5. Click OK to initiate maintenance mode. The storage domain is deactivated and has an Inactive status in the results list.
You can now edit, detach, remove, or reactivate the inactive storage domains from the data center.

Note

You can also activate, detach, and place domains into maintenance mode using the Storage tab in the details pane of the data center with which they are associated.

7.7.3. Editing Storage Domains

You can edit storage domain parameters through the Administration Portal. Depending on the state of the storage domain, either active or inactive, different fields are available for editing. Fields such as Data Center, Domain Function, Storage Type, and Format cannot be changed.
  • Active: When the storage domain is in an active state, the Name, Description, Comment, Warning Low Space Indicator (%), Critical Space Action Blocker (GB), and Wipe After Delete fields can be edited. The Name field can only be edited while the storage domain is active. All other fields can also be edited while the storage domain is inactive.
  • Inactive: When the storage domain is in maintenance mode or unattached, thus in an inactive state, you can edit all fields except Name, Data Center, Domain Function, Storage Type, and Format. The storage domain must be inactive to edit storage connections, mount options, and other advanced parameters. This is only supported for NFS, POSIX, and Local storage types.

    Note

    iSCSI storage connections cannot be edited via the Administration Portal, but can be edited via the REST API. See Updating an iSCSI Storage Connection.

Procedure 7.18. Editing an Active Storage Domain

  1. Click the Storage tab and select a storage domain.
  2. Click Edit.
  3. Edit the available fields as required.
  4. Click OK.

Procedure 7.19. Editing an Inactive Storage Domain

  1. Click the Storage tab and select a storage domain.
  2. If the storage domain is active, click the Data Center tab in the details pane and click Maintenance.
  3. Click Edit.
  4. Edit the storage path and other details as required. The new connection details must be of the same storage type as the original connection.
  5. Click OK.
  6. Click the Data Center tab in the details pane and click Activate.

7.7.4. Activating Storage Domains from Maintenance Mode

If you have been making changes to a data center's storage, you have to put storage domains into maintenance mode. Activate a storage domain to resume using it.
  1. Click the Storage resource tab and select an inactive storage domain in the results list.
  2. Click the Data Centers tab in the details pane.
  3. Select the appropriate storage domain and click Activate.

    Important

    If you attempt to activate the ISO domain before activating the data domain, an error message displays and the domain is not activated.

7.7.5. Removing a Storage Domain

You have a storage domain in your data center that you want to remove from the virtualized environment.

Procedure 7.20. Removing a Storage Domain

  1. Click the Storage resource tab and select the appropriate storage domain in the results list.
  2. Move the domain into maintenance mode to deactivate it.
  3. Detach the domain from the data center.
  4. Click Remove to open the Remove Storage confirmation window.
  5. Select a host from the list.
  6. Click OK to remove the storage domain and close the window.
The storage domain is permanently removed from the environment.

7.7.6. Destroying a Storage Domain

A storage domain encountering errors may not be able to be removed through the normal procedure. Destroying a storage domain will forcibly remove the storage domain from the virtualized environment without reference to the export directory.
When the storage domain is destroyed, you are required to manually fix the export directory of the storage domain before it can be used again.

Procedure 7.21. Destroying a Storage Domain

  1. Use the Storage resource tab, tree mode, or the search function to find and select the appropriate storage domain in the results list.
  2. Right-click the storage domain and select Destroy to open the Destroy Storage Domain confirmation window.
  3. Select the Approve operation check box and click OK to destroy the storage domain and close the window.
The storage domain has been destroyed. Manually clean the export directory for the storage domain to recycle it.

7.7.7. Detaching a Storage Domain from a Data Center

Detach a storage domain from the data center to migrate virtual machines and templates to another data center.

Procedure 7.22. Detaching a Storage Domain from the Data Center

  1. Click the Storage resource tab, and select a storage domain from the results list.
  2. Click the Data Centers tab in the details pane and select the storage domain.
  3. Click Maintenance to open the Maintenance Storage Domain(s) confirmation window.
  4. Click OK to initiate maintenance mode.
  5. Click Detach to open the Detach Storage confirmation window.
  6. Click OK to detach the storage domain.
The storage domain has been detached from the data center, ready to be attached to another data center.

7.7.8. Attaching a Storage Domain to a Data Center

Attach a storage domain to a data center.

Procedure 7.23. Attaching a Storage Domain to a Data Center

  1. Click the Storage resource tab, and select a storage domain from the results list.
  2. Click the Data Centers tab in the details pane.
  3. Click Attach to open the Attach to Data Center window.
  4. Select the radio button of the appropriate data center.
  5. Click OK to attach the storage domain.
The storage domain is attached to the data center and is automatically activated.

7.7.9. Disk Profiles

Disk profiles define the maximum level of throughput and the maximum level of input and output operations for a virtual disk in a storage domain. Disk profiles are created based on storage profiles defined under data centers, and must be manually assigned to individual virtual disks for the profile to take effect.
7.7.9.1. Creating a Disk Profile
Create a disk profile. This procedure assumes you have already defined one or more storage quality of service entries under the data center to which the storage domain belongs.

Procedure 7.24. Creating a Disk Profile

  1. Click the Storage resource tab and select a data storage domain.
  2. Click the Disk Profiles sub tab in the details pane.
  3. Click New.
  4. Enter a name for the disk profile in the Name field.
  5. Enter a description for the disk profile in the Description field.
  6. Select the quality of service to apply to the disk profile from the QoS list.
  7. Click OK.
You have created a disk profile, and that disk profile can be applied to new virtual disks hosted in the data storage domain.
7.7.9.2. Removing a Disk Profile
Remove an existing disk profile from your Red Hat Enterprise Virtualization environment.

Procedure 7.25. Removing a Disk Profile

  1. Click the Storage resource tab and select a data storage domain.
  2. Click the Disk Profiles sub tab in the details pane.
  3. Select the disk profile to remove.
  4. Click Remove.
  5. Click OK.
You have removed a disk profile, and that disk profile is no longer available. If the disk profile was assigned to any virtual disks, the disk profile is removed from those virtual disks.

7.7.10. Viewing the Health Status of a Storage Domain

Storage domains have an external health status in addition to their regular Status. The external health status is reported by plug-ins or external systems, or set by an administrator, and appears to the left of the storage domain's Name as one of the following icons:
  • OK: No icon
  • Info
  • Warning
  • Error
  • Failure
To view further details about the storage domain's health status, select the storage domain and click the Events sub-tab.
The storage domain's health status can also be viewed using the REST API. A GET request on a storage domain will include the external_status element, which contains the health status.
You can set a storage domain's health status in the REST API via the events collection. For more information, see Adding Events in the REST API Guide.
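As a hedged example of the GET request described above, the external_status element can be retrieved with curl. The Manager FQDN and the storage domain ID below are hypothetical placeholders, and the credentials assume the internal admin user:
    # curl -k -u admin@internal:password -H "Accept: application/xml" \
      https://manager.example.com/ovirt-engine/api/storagedomains/Storage_Domain_ID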

7.8. Storage and Permissions

7.8.1. Managing System Permissions for a Storage Domain

As the SuperUser, the system administrator manages all aspects of the Administration Portal. More specific administrative roles can be assigned to other users. These restricted administrator roles are useful for granting a user administrative privileges that limit them to a specific resource. For example, a DataCenterAdmin role has administrator privileges only for the assigned data center with the exception of the storage for that data center, and a ClusterAdmin has administrator privileges only for the assigned cluster.
A storage administrator is a system administration role for a specific storage domain only. This is useful in data centers with multiple storage domains, where each storage domain requires a system administrator. Use the Configure button in the header bar to assign a storage administrator for all storage domains in the environment.
The storage domain administrator role permits the following actions:
  • Edit the configuration of the storage domain.
  • Move the storage domain into maintenance mode.
  • Remove the storage domain.

Note

You can only assign roles and permissions to existing users.
You can also change the system administrator of a storage domain by removing the existing system administrator and adding the new system administrator.

7.8.2. Storage Administrator Roles Explained

Storage Domain Permission Roles

The table below describes the administrator roles and privileges applicable to storage domain administration.

Table 7.1. Red Hat Enterprise Virtualization System Administrator Roles
Role Privileges Notes
StorageAdmin Storage Administrator Can create, delete, configure and manage a specific storage domain.
GlusterAdmin Gluster Storage Administrator Can create, delete, configure and manage Gluster storage volumes.

7.8.3. Assigning an Administrator or User Role to a Resource

Assign administrator or user roles to resources to allow users to access or manage that resource.

Procedure 7.26. Assigning a Role to a Resource

  1. Use the resource tabs, tree mode, or the search function to find and select the resource in the results list.
  2. Click the Permissions tab in the details pane to list the assigned users, the user's role, and the inherited permissions for the selected resource.
  3. Click Add.
  4. Enter the name or user name of an existing user into the Search text box and click Go. Select a user from the resulting list of possible matches.
  5. Select a role from the Role to Assign: drop-down list.
  6. Click OK.
You have assigned a role to a user; the user now has the inherited permissions of that role enabled for that resource.

7.8.4. Removing an Administrator or User Role from a Resource

Remove an administrator or user role from a resource; the user loses the inherited permissions associated with the role for that resource.

Procedure 7.27. Removing a Role from a Resource

  1. Use the resource tabs, tree mode, or the search function to find and select the resource in the results list.
  2. Click the Permissions tab in the details pane to list the assigned users, the user's role, and the inherited permissions for the selected resource.
  3. Select the user to remove from the resource.
  4. Click Remove. The Remove Permission window opens to confirm permissions removal.
  5. Click OK.
You have removed the user's role, and the associated permissions, from the resource.

Chapter 8. Working with Red Hat Gluster Storage

8.1. Red Hat Gluster Storage Nodes

8.1.1. Adding Red Hat Gluster Storage Nodes

Add Red Hat Gluster Storage nodes to Gluster-enabled clusters and incorporate GlusterFS volumes and bricks into your Red Hat Enterprise Virtualization environment.
This procedure presumes that you have a Gluster-enabled cluster of the appropriate Compatibility Version and a Red Hat Gluster Storage node already set up. For information on setting up a Red Hat Gluster Storage node, see the Red Hat Gluster Storage Installation Guide. For more information on the compatibility matrix, see the Configuring Red Hat Enterprise Virtualization with Red Hat Storage Guide.

Procedure 8.1. Adding a Red Hat Gluster Storage Node

  1. Click the Hosts resource tab to list the hosts in the results list.
  2. Click New to open the New Host window.
  3. Use the drop-down menus to select the Data Center and Host Cluster for the Red Hat Gluster Storage node.
  4. Enter the Name, Address, and SSH Port of the Red Hat Gluster Storage node.
  5. Select an authentication method to use with the Red Hat Gluster Storage node.
    • Enter the root user's password to use password authentication.
    • Copy the key displayed in the SSH PublicKey field to /root/.ssh/authorized_keys on the Red Hat Gluster Storage node to use public key authentication.
  6. Click OK to add the node and close the window.
You have added a Red Hat Gluster Storage node to your Red Hat Enterprise Virtualization environment. You can now use the volume and brick resources of the node in your environment.

8.1.2. Removing a Red Hat Gluster Storage Node

Remove a Red Hat Gluster Storage node from your Red Hat Enterprise Virtualization environment.

Procedure 8.2. Removing a Red Hat Gluster Storage Node

  1. Use the Hosts resource tab, tree mode, or the search function to find and select the Red Hat Gluster Storage node in the results list.
  2. Click Maintenance to open the Maintenance Host(s) confirmation window.
  3. Click OK to move the host to maintenance mode.
  4. Click Remove to open the Remove Host(s) confirmation window.
  5. Select the Force Remove check box if the node has volume bricks on it, or if the node is non-responsive.
  6. Click OK to remove the node and close the window.
Your Red Hat Gluster Storage node has been removed from the environment and is no longer visible in the Hosts tab.

8.2. Using Red Hat Gluster Storage as a Storage Domain

8.2.1. Introduction to Red Hat Gluster Storage (GlusterFS) Volumes

Red Hat Gluster Storage volumes combine storage from more than one Red Hat Gluster Storage server into a single global namespace. A volume is a collection of bricks, where each brick is a mountpoint or directory on a Red Hat Gluster Storage Server in the trusted storage pool.
Most of the management operations of Red Hat Gluster Storage happen on the volume.
You can use the Administration Portal to create and start new volumes. You can monitor volumes in your Red Hat Gluster Storage cluster from the Volumes tab.
While volumes can be created and managed from the Administration Portal, bricks must be created on the individual Red Hat Gluster Storage nodes before they can be added to volumes using the Administration Portal.

8.2.2. Gluster Storage Terminology

Table 8.1. Gluster Storage Terminology
Term
Definition
Brick
A brick is the GlusterFS basic unit of storage, represented by an export directory on a server in the trusted storage pool. A Brick is expressed by combining a server with an export directory in the following format:
SERVER:EXPORT
For example:
myhostname:/exports/myexportdir/
Block Storage
Block special files or block devices correspond to devices through which the system moves data in the form of blocks. These device nodes often represent addressable devices such as hard disks, CD-ROM drives, or memory regions. Red Hat Gluster Storage supports the XFS file system with extended attributes.
Cluster
A trusted pool of linked computers working together closely, thus in many respects forming a single computer. In Red Hat Gluster Storage terminology, a cluster is called a trusted storage pool.
Client
The machine that mounts the volume (this may also be a server).
Distributed File System
A file system that allows multiple clients to concurrently access data spread across multiple servers/bricks in a trusted storage pool. Data sharing among multiple locations is fundamental to all distributed file systems.
Geo-Replication
Geo-replication provides a continuous, asynchronous, and incremental replication service from one site to another over Local Area Networks (LAN), Wide Area Networks (WAN), and across the Internet.
glusterd
The Gluster management daemon that needs to run on all servers in the trusted storage pool.
Metadata
Metadata is data providing information about one or more other pieces of data.
N-way Replication
Local synchronous data replication typically deployed across campus or Amazon Web Services Availability Zones.
Namespace
Namespace is an abstract container or environment created to hold a logical grouping of unique identifiers or symbols. Each Red Hat Gluster Storage trusted storage pool exposes a single namespace as a POSIX mount point that contains every file in the trusted storage pool.
POSIX
Portable Operating System Interface (for Unix) is the name of a family of related standards specified by the IEEE to define the application programming interface (API), along with shell and utilities interfaces for software compatible with variants of the UNIX operating system. Red Hat Gluster Storage exports a fully POSIX compatible file system.
RAID
Redundant Array of Inexpensive Disks (RAID) is a technology that provides increased storage reliability through redundancy, combining multiple low-cost, less reliable disk drive components into a logical unit where all drives in the array are interdependent.
RRDNS
Round Robin Domain Name Service (RRDNS) is a method to distribute load across application servers. RRDNS is implemented by creating multiple A records with the same name and different IP addresses in the zone file of a DNS server.
Server
The machine (virtual or bare-metal) which hosts the actual file system in which data will be stored.
Scale-Up Storage
Increases the capacity of the storage device, but only in a single dimension. An example might be adding additional disk capacity to a single computer in a trusted storage pool.
Scale-Out Storage
Increases the capability of a storage device in multiple dimensions. For example adding a server to a trusted storage pool increases CPU, disk capacity, and throughput for the trusted storage pool.
Subvolume
A subvolume is a brick after being processed by at least one translator.
Translator
A translator connects to one or more subvolumes, does something with them, and offers a subvolume connection.
Trusted Storage Pool
A storage pool is a trusted network of storage servers. When you start the first server, the storage pool consists of that server alone.
User Space
Applications running in user space do not directly interact with hardware, instead using the kernel to moderate access. User Space applications are generally more portable than applications in kernel space. Gluster is a user space application.
Virtual File System (VFS)
VFS is a kernel software layer that handles all system calls related to the standard Linux file system. It provides a common interface to several kinds of file systems.
Volume File
The volume file is a configuration file used by the GlusterFS process. The volume file is usually located at /var/lib/glusterd/vols/VOLNAME.
Volume
A volume is a logical collection of bricks. Most of the Gluster management operations happen on the volume.

8.2.3. Attaching a Red Hat Gluster Storage Volume as a Storage Domain

Add a Red Hat Gluster Storage volume to the Red Hat Enterprise Virtualization Manager to be used directly as a storage domain. This differs from adding a Red Hat Gluster Storage node, which enables control over the volumes and bricks of the node from within the Red Hat Enterprise Virtualization Manager, and does not require a Gluster-enabled cluster.
The host requires the glusterfs, glusterfs-fuse, and glusterfs-cli packages to be installed in order to mount the volume. The glusterfs-cli package is available from the rh-common-rpms channel on the Customer Portal.
For information on setting up a Red Hat Gluster Storage node, see the Red Hat Gluster Storage Installation Guide. For more information on preparing a host to be used with Red Hat Gluster Storage volumes, see the Configuring Red Hat Enterprise Virtualization with Red Hat Gluster Storage Guide. For more information on the compatibility matrix, see the Configuring Red Hat Enterprise Virtualization with Red Hat Storage Guide.
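On a Red Hat Enterprise Linux host, the required packages can be installed with yum once the channel noted above is enabled; a brief example:
    # yum install glusterfs glusterfs-fuse glusterfs-cli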

Procedure 8.3. Adding a Red Hat Gluster Storage Volume as a Storage Domain

  1. Click the Storage resource tab to list the existing storage domains in the results list.
  2. Click New Domain to open the New Domain window.
    Red Hat Gluster Storage

    Figure 8.1. Red Hat Gluster Storage

  3. Enter the Name for the storage domain.
  4. Select the Data Center to be associated with the storage domain.
  5. Select Data from the Domain Function drop-down list.
  6. Select GlusterFS from the Storage Type drop-down list.
  7. Select a host from the Use Host drop-down list. Only hosts within the selected data center will be listed. To mount the volume, the host that you select must have the glusterfs and glusterfs-fuse packages installed.
  8. In the Path field, enter the IP address or FQDN of the Red Hat Gluster Storage server and the volume name separated by a colon.
  9. Enter additional Mount Options, as you would normally provide them to the mount command using the -o argument. The mount options should be provided in a comma-separated list. See man mount for a list of valid mount options.
  10. Optionally, you can configure the advanced parameters.
    1. Click Advanced Parameters.
    2. Enter a percentage value into the Warning Low Space Indicator field. If the free space available on the storage domain is below this percentage, warning messages are displayed to the user and logged.
    3. Enter a GB value into the Critical Space Action Blocker field. If the free space available on the storage domain is below this value, error messages are displayed to the user and logged, and any new action that consumes space, even temporarily, will be blocked.
    4. Select the Wipe After Delete check box to enable the wipe after delete option. This option can be edited after the domain is created, but doing so will not change the wipe after delete property of disks that already exist.
  11. Click OK to mount the volume as a storage domain and close the window.

8.2.4. Creating a Storage Volume

You can create new volumes using the Administration Portal. When creating a new volume, you must specify the bricks that comprise the volume and specify whether the volume is to be distributed, replicated, or striped.
You must create brick directories or mountpoints before you can add them to volumes.
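As a hedged sketch of brick preparation on a Red Hat Gluster Storage node (the device and directory names below are hypothetical; see the Red Hat Gluster Storage documentation for the authoritative brick layout and file system options):
    # mkfs.xfs -i size=512 /dev/vdb1
    # mkdir -p /rhgs/brick1
    # mount /dev/vdb1 /rhgs/brick1
    # mkdir /rhgs/brick1/brick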

Important

It is recommended that you use replicated volumes, where bricks exported from different hosts are combined into a volume. Replicated volumes create copies of files across multiple bricks in the volume, preventing data loss when a host is fenced.

Procedure 8.4. Creating A Storage Volume

  1. Click the Volumes resource tab to list existing volumes in the results list.
  2. Click New to open the New Volume window.
  3. Use the drop-down menus to select the Data Center and Volume Cluster.
  4. Enter the Name of the volume.
  5. Use the drop-down menu to select the Type of the volume.
  6. If active, select the appropriate Transport Type check box.
  7. Click the Add Bricks button to select bricks to add to the volume. Bricks must be created externally on the Red Hat Gluster Storage nodes.
  8. If active, use the Gluster, NFS, and CIFS check boxes to select the appropriate access protocols used for the volume.
  9. Enter the volume access control as a comma-separated list of IP addresses or hostnames in the Allow Access From field.
    You can use the * wildcard to specify ranges of IP addresses or hostnames.
  10. Select the Optimize for Virt Store option to set the parameters to optimize your volume for virtual machine storage. Select this if you intend to use this volume as a storage domain.
  11. Click OK to create the volume. The new volume is added and displays on the Volume tab.
You have added a Red Hat Gluster Storage volume. You can now use it for storage.
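For reference, the equivalent operation on a Red Hat Gluster Storage node uses the gluster command line. This is a hedged sketch with hypothetical server and brick paths, shown only to illustrate what the Administration Portal does on your behalf:
    # gluster volume create VOLUME_NAME replica 2 server1:/rhgs/brick1/brick server2:/rhgs/brick1/brick
    # gluster volume info VOLUME_NAME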

8.2.5. Adding Bricks to a Volume

Summary

You can expand your volumes by adding new bricks. You need to add at least one brick to a distributed volume, multiples of two bricks to replicated volumes, and multiples of four bricks to striped volumes when expanding your storage space.

Procedure 8.5. Adding Bricks to a Volume

  1. On the Volumes tab on the navigation pane, select the volume to which you want to add bricks.
  2. Click the Bricks tab from the Details pane.
  3. Click Add Bricks to open the Add Bricks window.
  4. Use the Server drop-down menu to select the server on which the brick resides.
  5. Enter the path of the Brick Directory. The directory must already exist.
  6. Click Add. The brick appears in the list of bricks in the volume, with server addresses and brick directory names.
  7. Click OK.
Result

The new bricks are added to the volume and the bricks display in the volume's Bricks tab.
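For reference only, the same expansion can be performed with the native add-brick command on a Red Hat Gluster Storage node; the volume, host, and brick names below are hypothetical, and bricks should be added in multiples that match the volume type, as noted above.
# gluster volume add-brick data-vol rhgs3.example.com:/rhgs/brick1/data-vol rhgs4.example.com:/rhgs/brick1/data-vol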

8.2.6. Explanation of Settings in the Add Bricks Window

Table 8.2. Add Bricks Tab Properties
Field Name
Description
Volume Type
Displays the type of volume. This field cannot be changed; it was set when you created the volume.
Server
The server where the bricks are hosted.
Brick Directory
The brick directory or mountpoint.

8.2.7. Optimizing Red Hat Gluster Storage Volumes to Store Virtual Machine Images

Optimize a Red Hat Gluster Storage volume to store virtual machine images using the Administration Portal.
To optimize a volume for storing virtual machines, the Manager sets a number of virtualization-specific parameters for the volume.

Important

Red Hat Gluster Storage currently supports Red Hat Enterprise Virtualization 3.3 and above. All Gluster clusters and hosts must be attached to data centers with a compatibility version of 3.3 or higher.
Volumes can be optimized to store virtual machines during creation by selecting the Optimize for Virt Store check box, or after creation using the Optimize for Virt Store button from the Volumes resource tab.

Important

If a volume is replicated across three or more nodes, ensure the volume is optimized for virtual storage to avoid data inconsistencies across the nodes.
An alternate method is to access one of the Red Hat Gluster Storage nodes and apply the virt option group to the volume. This sets the cluster.quorum-type parameter to auto, and the cluster.server-quorum-type parameter to server.
# gluster volume set VOLUME_NAME group virt
Verify the status of the volume by listing the volume information:
# gluster volume info VOLUME_NAME
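Abridged, illustrative output for a volume with the virt group applied might resemble the following; the exact fields vary by Red Hat Gluster Storage version.
Volume Name: data-vol
Type: Replicate
Status: Started
Options Reconfigured:
cluster.quorum-type: auto
cluster.server-quorum-type: server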

8.2.8. Starting Volumes

Summary

After a volume has been created or an existing volume has been stopped, it needs to be started before it can be used.

Procedure 8.6. Starting Volumes

  1. In the Volumes tab, select the volume to be started.
    You can select multiple volumes to start by using the Shift or Ctrl key.
  2. Click the Start button.
The volume status changes to Up.
Result

You can now use your volume for virtual machine storage.

8.2.9. Tuning Volumes

Summary

Tuning volumes allows you to adjust their performance. You tune a volume by adding options to it.

Procedure 8.7. Tuning Volumes

  1. Click the Volumes tab.
    A list of volumes displays.
  2. Select the volume that you want to tune, and click the Volume Options tab from the Details pane.
    The Volume Options tab displays a list of options set for the volume.
  3. Click Add to set an option. The Add Option dialog box displays. Select the Option Key from the drop-down list and enter the option value.
  4. Click OK.
    The option is set and displays in the Volume Options tab.
Result

You have tuned the options for your storage volume.
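Setting an option in the Add Option dialog box has the same effect as the native volume set command. As an illustration only, the volume name, option, and value below are hypothetical choices rather than recommendations:
# gluster volume set data-vol performance.cache-size 256MB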

8.2.10. Editing Volume Options

Summary

After you have tuned your volume by adding options, you can change those options as needed.

Procedure 8.8. Editing Volume Options

  1. Click the Volumes tab.
    A list of volumes displays.
  2. Select the volume that you want to edit, and click the Volume Options tab from the Details pane.
    The Volume Options tab displays a list of options set for the volume.
  3. Select the option you want to edit. Click Edit. The Edit Option dialog box displays. Enter a new value for the option.
  4. Click OK.
    The edited option displays in the Volume Options tab.
Result

You have changed the options on your volume.

8.2.11. Resetting Volume Options

Summary

You can reset options to revert them to their default values.

  1. Click the Volumes tab.
    A list of volumes displays.
  2. Select the volume and click the Volume Options tab from the Details pane.
    The Volume Options tab displays a list of options set for the volume.
  3. Select the option you want to reset. Click Reset. A dialog box displays, prompting you to confirm the reset.
  4. Click OK.
    The selected option is reset.

Note

You can reset all volume options by clicking the Reset All button. A dialog box displays, prompting you to confirm the action. Click OK. All volume options are reset for the selected volume.
Result

You have reset volume options to default.
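For reference only, the equivalent native commands on a Red Hat Gluster Storage node reset a single option or all options; the volume and option names below are hypothetical examples.
# gluster volume reset data-vol performance.cache-size
# gluster volume reset data-vol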

8.2.12. Removing Bricks from a Volume

Summary

You can shrink volumes, as needed, while the cluster is online and available. For example, you might need to remove a brick that has become inaccessible in a distributed volume due to hardware or network failure.

Procedure 8.9. Removing Bricks from a Volume

  1. On the Volumes tab on the navigation pane, select the volume from which you wish to remove bricks.
  2. Click the Bricks tab from the Details pane.
  3. Select the bricks you wish to remove. Click Remove Bricks.
  4. A window opens, prompting you to confirm the deletion. Click OK to confirm.
Result

The bricks are removed from the volume.
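For reference only, shrinking a distributed volume with the native client follows a start, status, and commit sequence so that data can be migrated off the brick before it is removed; the volume, host, and brick names below are hypothetical examples.
# gluster volume remove-brick data-vol rhgs3.example.com:/rhgs/brick1/data-vol start
# gluster volume remove-brick data-vol rhgs3.example.com:/rhgs/brick1/data-vol status
# gluster volume remove-brick data-vol rhgs3.example.com:/rhgs/brick1/data-vol commit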

8.2.13. Stopping Red Hat Gluster Storage Volumes

After a volume has been started, it can be stopped.

Procedure 8.10. Stopping Volumes

  1. In the Volumes tab, select the volume to be stopped.
    You can select multiple volumes to stop by using the Shift or Ctrl key.
  2. Click Stop.

8.2.14. Deleting Red Hat Gluster Storage Volumes

You can delete a volume or multiple volumes from your cluster.
  1. In the Volumes tab, select the volume to be deleted.
  2. Click Remove. A dialog box displays, prompting you to confirm the deletion. Click OK.

8.2.15. Rebalancing Volumes

Summary

If a volume has been expanded or shrunk by adding or removing bricks to or from that volume, the data on the volume must be rebalanced amongst the servers.

Procedure 8.11. Rebalancing a Volume

  1. Click the Volumes tab.
    A list of volumes displays.
  2. Select the volume to rebalance.
  3. Click Rebalance.
Result

The selected volume is rebalanced.
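Rebalancing can also be started and monitored with the native client on a Red Hat Gluster Storage node; the volume name below is a hypothetical example.
# gluster volume rebalance data-vol start
# gluster volume rebalance data-vol status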

8.3. Clusters and Gluster Hooks

8.3.1. Managing Gluster Hooks

Gluster hooks are volume life cycle extensions. You can manage Gluster hooks from the Manager. The content of the hook can be viewed if the hook content type is Text.
Through the Manager, you can perform the following:
  • View a list of hooks available in the hosts.
  • View the content and status of hooks.
  • Enable or disable hooks.
  • Resolve hook conflicts.

8.3.2. Listing Hooks

Summary

List the Gluster hooks in your environment.

Procedure 8.12. Listing a Hook

  1. Use the Cluster resource tab, tree mode, or the search function to find and select a cluster in the results list.
  2. Select the Gluster Hooks sub-tab to list the hooks in the details pane.
Result

You have listed the Gluster hooks in your environment.
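Hooks can also be inspected directly on a Red Hat Gluster Storage node, where they are stored as scripts under the glusterd hooks directory; the event directory below is one example of the layout.
# ls /var/lib/glusterd/hooks/1/start/post/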

8.3.3. Viewing the Content of Hooks

Summary

View the content of a Gluster hook in your environment.

Procedure 8.13. Viewing the Content of a Hook

  1. Use the Cluster resource tab, tree mode, or the search function to find and select a cluster in the results list.
  2. Select the Gluster Hooks sub-tab to list the hooks in the details pane.
  3. Select a hook with content type Text and click the View Content button to open the Hook Content window.
Result

You have viewed the content of a hook in your environment.

8.3.4. Enabling or Disabling Hooks

Summary

Toggle the activity of a Gluster hook by enabling or disabling it.

Procedure 8.14. Enabling or Disabling a Hook

  1. Use the Cluster resource tab, tree mode, or the search function to find and select a cluster in the results list.
  2. Select the Gluster Hooks sub-tab to list the hooks in the details pane.
  3. Select a hook and click one of the Enable or Disable buttons. The hook is enabled or disabled on all nodes of the cluster.
Result

You have toggled the activity of a Gluster hook in your environment.

8.3.5. Refreshing Hooks

Summary

By default, the Manager checks the status of installed hooks on the engine and on all servers in the cluster and detects new hooks by running a periodic job every hour. You can refresh hooks manually by clicking the Sync button.

Procedure 8.15. Refreshing a Hook

  1. Use the Cluster resource tab, tree mode, or the search function to find and select a cluster in the results list.
  2. Select the Gluster Hooks sub-tab to list the hooks in the details pane.
  3. Click the Sync button.
Result

The hooks are synchronized and updated in the details pane.

8.3.6. Resolving Conflicts

The hooks are displayed in the Gluster Hooks sub-tab of the Cluster tab. Hooks causing a conflict are displayed with an exclamation mark. This denotes either that there is a conflict in the content or the status of the hook across the servers in the cluster, or that the hook script is missing on one or more servers. These conflicts can be resolved via the Manager. The hooks in the servers are periodically synchronized with the engine database, and the following conflicts can occur for the hooks:
  • Content Conflict - the content of the hook is different across servers.
  • Missing Conflict - one or more servers of the cluster do not have the hook.
  • Status Conflict - the status of the hook is different across servers.
  • Multiple Conflicts - a hook has a combination of two or more of the aforementioned conflicts.

8.3.7. Resolving Content Conflicts

Summary

A hook that is not consistent across the servers and engine will be flagged as having a conflict. To resolve the conflict, you must select a version of the hook to be copied across all servers and the engine.

Procedure 8.16. Resolving a Content Conflict

  1. Use the Cluster resource tab, tree mode, or the search function to find and select a cluster in the results list.
  2. Select the Gluster Hooks sub-tab to list the hooks in the details pane.
  3. Select the conflicting hook and click the Resolve Conflicts button to open the Resolve Conflicts window.
  4. Select the engine or a server from the list of sources to view the content of that hook and establish which version of the hook to copy.

    Note

    The content of the hook will be overwritten in all servers and in the engine.
  5. Use the Use content from drop-down menu to select the preferred server or the engine.
  6. Click OK to resolve the conflict and close the window.
Result

The hook from the selected server is copied across all servers and the engine to be consistent across the environment.

8.3.8. Resolving Missing Hook Conflicts

Summary

A hook that is not present on all the servers and the engine will be flagged as having a conflict. To resolve the conflict, either select a version of the hook to be copied across all servers and the engine, or remove the missing hook entirely.

Procedure 8.17. Resolving a Missing Hook Conflict

  1. Use the Cluster resource tab, tree mode, or the search function to find and select a cluster in the results list.
  2. Select the Gluster Hooks sub-tab to list the hooks in the details pane.
  3. Select the conflicting hook and click the Resolve Conflicts button to open the Resolve Conflicts window.
  4. Select any source with a status of Enabled to view the content of the hook.
  5. Select the appropriate radio button, either Copy the hook to all the servers or Remove the missing hook. The latter will remove the hook from the engine and all servers.
  6. Click OK to resolve the conflict and close the window.
Result

Depending on your chosen resolution, the hook has either been removed from the environment entirely, or has been copied across all servers and the engine to be consistent across the environment.

8.3.9. Resolving Status Conflicts

Summary

A hook that does not have a consistent status across the servers and engine will be flagged as having a conflict. To resolve the conflict, select a status to be enforced across all servers in the environment.

Procedure 8.18. Resolving a Status Conflict

  1. Use the Cluster resource tab, tree mode, or the search function to find and select a cluster in the results list.
  2. Select the Gluster Hooks sub-tab to list the hooks in the details pane.
  3. Select the conflicting hook and click the Resolve Conflicts button to open the Resolve Conflicts window.
  4. Set Hook Status to Enable or Disable.
  5. Click OK to resolve the conflict and close the window.
Result

The selected status for the hook is enforced across the engine and the servers to be consistent across the environment.

8.3.10. Resolving Multiple Conflicts

Summary

A hook may have a combination of two or more conflicts. These can all be resolved concurrently or independently through the Resolve Conflicts window. This procedure will resolve all conflicts for the hook so that it is consistent across the engine and all servers in the environment.

Procedure 8.19. Resolving Multiple Conflicts

  1. Use the Cluster resource tab, tree mode, or the search function to find and select a cluster in the results list.
  2. Select the Gluster Hooks sub-tab to list the hooks in the details pane.
  3. Select the conflicting hook and click the Resolve Conflicts button to open the Resolve Conflicts window.
  4. Choose a resolution for each of the conflicts, as per the appropriate procedure.
  5. Click OK to resolve the conflicts and close the window.
Result

You have resolved all of the conflicts so that the hook is consistent across the engine and all servers.

8.3.11. Managing Gluster Sync

The Gluster Sync feature periodically fetches the latest cluster configuration from GlusterFS and synchronizes it with the engine database. This process can be performed through the Manager. When a cluster is selected, the user is provided with the option to import hosts or detach existing hosts from the selected cluster. You can perform Gluster Sync if there is a host in the cluster.

Note

The Manager continuously monitors whether hosts are added to or removed from the storage cluster. If the addition or removal of a host is detected, an action item is shown in the General tab for the cluster, where you can choose either to Import the host into the cluster or to Detach the host from the cluster.

Chapter 9. Pools

9.1. Introduction to Virtual Machine Pools

A virtual machine pool is a group of virtual machines that are all clones of the same template and that can be used on demand by any user in a given group. Virtual machine pools allow administrators to rapidly configure a set of generalized virtual machines for users.
Users access a virtual machine pool by taking a virtual machine from the pool. When a user takes a virtual machine from a pool, they are provided with any one of the virtual machines in the pool if any are available. That virtual machine will have the same operating system and configuration as that of the template on which the pool was based, but users may not receive the same member of the pool each time they take a virtual machine. Users can also take multiple virtual machines from the same virtual machine pool depending on the configuration of that pool.
Virtual machines in a virtual machine pool are stateless, meaning that data is not persistent across reboots. However, if a user configures console options for a virtual machine taken from a virtual machine pool, those options will be set as the default for that user for that virtual machine pool.
In principle, virtual machines in a pool are started when taken by a user, and shut down when the user is finished. However, virtual machine pools can also contain pre-started virtual machines. Pre-started virtual machines are kept in an up state, and remain idle until they are taken by a user. This allows users to start using such virtual machines immediately, but these virtual machines will consume system resources even while not in use due to being idle.

Note

Virtual machines taken from a pool are not stateless when accessed from the Administration Portal. This is because administrators need to be able to write changes to the disk if necessary.

9.2. Virtual Machine Pool Tasks

9.2.1. Creating a Virtual Machine Pool

You can create a virtual machine pool that contains multiple virtual machines that have been created based on a common template.

Procedure 9.1. Creating a Virtual Machine Pool

  1. Click the Pools tab.
  2. Click the New button to open the New Pool window.
  3. Use the drop-down list to select the Cluster or use the selected default.
  4. Use the Template drop-down menu to select the required template and version or use the selected default. A template provides standard settings for all the virtual machines in the pool.
  5. Use the Operating System drop-down list to select an Operating System or use the default provided by the template.
  6. Use the Optimized for drop-down list to optimize virtual machines for either Desktop use or Server use.
  7. Enter a Name and Description, any Comments, and the Number of VMs for the pool.
  8. Enter the number of virtual machines to be prestarted in the Prestarted VMs field.
  9. Select the Maximum number of VMs per user that a single user is allowed to run in a session. The minimum is one.
  10. Select the Delete Protection check box to enable delete protection.
  11. Optionally, click the Show Advanced Options button and perform the following steps:
    1. Click the Type tab and select a Pool Type:
      • Manual - The administrator is responsible for explicitly returning the virtual machine to the pool. The virtual machine reverts to the original base image after the administrator returns it to the pool.
      • Automatic - When the virtual machine is shut down, it automatically reverts to its base image and is returned to the virtual machine pool.
    2. Select the Console tab. At the bottom of the tab window, select the Override SPICE Proxy check box to enable the Overridden SPICE proxy address text field. Specify the address of a SPICE proxy to override the global SPICE proxy.
  12. Click OK.
You have created and configured a virtual machine pool with the specified number of identical virtual machines. You can view these virtual machines in the Virtual Machines resource tab, or in the details pane of the Pools resource tab; a virtual machine in a pool is distinguished from independent virtual machines by its icon.

9.2.2. Explanation of Settings and Controls in the New Pool and Edit Pool Windows

9.2.2.1. New Pool and Edit Pool General Settings Explained
The following table details the information required on the General tab of the New Pool and Edit Pool windows that are specific to virtual machine pools. All other settings are identical to those in the New Virtual Machine window.
Table 9.1. General settings
Field Name
Description
Template
The template on which the virtual machine pool is based.
Description
A meaningful description of the virtual machine pool.
Comment
A field for adding plain text human-readable comments regarding the virtual machine pool.
Prestarted VMs
Allows you to specify the number of virtual machines in the virtual machine pool that will be started before they are taken and kept in that state to be taken by users. The value of this field must be between 0 and the total number of virtual machines in the virtual machine pool.
Number of VMs/Increase number of VMs in pool by
Allows you to specify the number of virtual machines to be created and made available in the virtual machine pool. In the edit window it allows you to increase the number of virtual machines in the virtual machine pool by the specified number. By default, the maximum number of virtual machines you can create in a pool is 1000. This value can be configured using the MaxVmsInPool key of the engine-config command, as shown in the example after this table.
Maximum number of VMs per user
Allows you to specify the maximum number of virtual machines a single user can take from the virtual machine pool at any one time. The value of this field must be between 1 and 32,767.
Delete Protection
Allows you to prevent the virtual machines in the pool from being deleted.
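As a minimal sketch of the MaxVmsInPool change mentioned in the table above, the limit can be raised with the engine configuration tool on the Manager; the value 1500 is an arbitrary example, and the engine must be restarted for the change to take effect.
# engine-config --set MaxVmsInPool=1500
# service ovirt-engine restart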
9.2.2.2. New and Edit Pool Type Settings Explained
The following table details the information required on the Type tab of the New Pool and Edit Pool windows.
Table 9.2. Type settings
Field Name
Description
Pool Type
This drop-down menu allows you to specify the type of the virtual machine pool. The following options are available:
  • Automatic: After a user finishes using a virtual machine taken from a virtual machine pool, that virtual machine is automatically returned to the virtual machine pool.
  • Manual: After a user finishes using a virtual machine taken from a virtual machine pool, that virtual machine is only returned to the virtual machine pool when an administrator manually returns the virtual machine.
9.2.2.3. New Pool and Edit Pool Console Settings Explained
The following table details the information required on the Console tab of the New Pool or Edit Pool window that is specific to virtual machine pools. All other settings are identical to those in the New Virtual Machine and Edit Virtual Machine windows.
Table 9.3. Console settings
Field Name
Description
Override SPICE proxy
Select this check box to enable overriding the SPICE proxy defined in global configuration. This feature is useful in a case where the user (who is, for example, connecting via the User Portal) is outside of the network where the hypervisors reside.
Overridden SPICE proxy address
The proxy by which the SPICE client will connect to virtual machines. This proxy overrides both the global SPICE proxy defined for the Red Hat Enterprise Virtualization environment and the SPICE proxy defined for the cluster to which the virtual machine pool belongs, if any. The address must be in the following format:
protocol://[host]:[port]
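For example, assuming a hypothetical proxy host, an HTTP proxy listening on port 3128 would be entered as:
http://proxy.example.com:3128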

9.2.3. Editing a Virtual Machine Pool

9.2.3.1. Editing a Virtual Machine Pool
After a virtual machine pool has been created, its properties can be edited. The properties available when editing a virtual machine pool are identical to those available when creating a new virtual machine pool except that the Number of VMs property is replaced by Increase number of VMs in pool by.

Procedure 9.2. Editing a Virtual Machine Pool

  1. Click the Pools resource tab, and select a virtual machine pool from the results list.
  2. Click Edit to open the Edit Pool window.
  3. Edit the properties of the virtual machine pool.
  4. Click OK.
9.2.3.2. Prestarting Virtual Machines in a Pool
The virtual machines in a virtual machine pool are powered down by default. When a user requests a virtual machine from a pool, a machine is powered up and assigned to the user. In contrast, a prestarted virtual machine is already running and waiting to be assigned to a user, decreasing the amount of time a user has to wait before being able to access a machine. When a prestarted virtual machine is shut down it is returned to the pool and restored to its original state. The maximum number of prestarted virtual machines is the number of virtual machines in the pool.
Prestarted virtual machines are suitable for environments in which users require immediate access to virtual machines which are not specifically assigned to them. Only automatic pools can have prestarted virtual machines.

Procedure 9.3. Prestarting Virtual Machines in a Pool

  1. Use the Pools resource tab, tree mode, or the search function to find and select the virtual machine pool in the results list.
  2. Click Edit to open the Edit Pool window.
  3. Enter the number of virtual machines to be prestarted in the Prestarted VMs field.
  4. Select the Type tab. Ensure Pool Type is set to Automatic.
  5. Click OK.
You have set a number of prestarted virtual machines in a pool. The prestarted machines are running and available for use.
9.2.3.3. Adding Virtual Machines to a Virtual Machine Pool
If you require more virtual machines than originally provisioned in a virtual machine pool, add more machines to the pool.

Procedure 9.4. Adding Virtual Machines to a Virtual Machine Pool

  1. Use the Pools resource tab, tree mode, or the search function to find and select the virtual machine pool in the results list.
  2. Click Edit to open the Edit Pool window.
  3. Enter the number of additional virtual machines to add in the Increase number of VMs in pool by field.
  4. Click OK.
You have added more virtual machines to the virtual machine pool.
9.2.3.4. Detaching Virtual Machines from a Virtual Machine Pool
You can detach virtual machines from a virtual machine pool. Detaching a virtual machine removes it from the pool to become an independent virtual machine.

Procedure 9.5. Detaching Virtual Machines from a Virtual Machine Pool

  1. Use the Pools resource tab, tree mode, or the search function to find and select the virtual machine pool in the results list.
  2. Ensure the virtual machine has a status of Down because you cannot detach a running virtual machine.
    Click the Virtual Machines tab in the details pane to list the virtual machines in the pool.
  3. Select one or more virtual machines and click Detach to open the Detach Virtual Machine(s) confirmation window.
  4. Click OK to detach the virtual machine from the pool.

Note

The virtual machine still exists in the environment and can be viewed and accessed from the Virtual Machines resource tab. Note that the icon changes to denote that the detached virtual machine is an independent virtual machine.
You have detached a virtual machine from the virtual machine pool.

9.2.4. Removing a Virtual Machine Pool

You can remove a virtual machine pool from a data center. You must first either delete or detach all of the virtual machines in the pool. Detaching virtual machines from the pool will preserve them as independent virtual machines.

Procedure 9.6. Removing a Virtual Machine Pool

  1. Use the Pools resource tab, tree mode, or the search function to find and select the virtual machine pool in the results list.
  2. Click Remove to open the Remove Pool(s) confirmation window.
  3. Click OK to remove the pool.
You have removed the pool from the data center.

9.3. Pools and Permissions

9.3.1. Managing System Permissions for a Virtual Machine Pool

As the SuperUser, the system administrator manages all aspects of the Administration Portal. More specific administrative roles can be assigned to other users. These restricted administrator roles are useful for granting a user administrative privileges that limit them to a specific resource. For example, a DataCenterAdmin role has administrator privileges only for the assigned data center with the exception of the storage for that data center, and a ClusterAdmin has administrator privileges only for the assigned cluster.
A virtual machine pool administrator is a system administration role for virtual machine pools in a data center. This role can be applied to specific virtual machine pools, to a data center, or to the whole virtualized environment; this is useful to allow different users to manage certain virtual machine pool resources.
The virtual machine pool administrator role permits the following actions:
  • Create, edit, and remove pools.
  • Add and detach virtual machines from the pool.

Note

You can only assign roles and permissions to existing users.

9.3.2. Virtual Machine Pool Administrator Roles Explained

Pool Permission Roles

The table below describes the administrator roles and privileges applicable to pool administration.

Table 9.4. Red Hat Enterprise Virtualization System Administrator Roles
Role Privileges Notes
VmPoolAdmin System Administrator role of a virtual pool. Can create, delete, and configure a virtual pool, assign and remove virtual pool users, and perform basic operations on a virtual machine.
ClusterAdmin Cluster Administrator Can use, create, delete, and manage all virtual machine pools in a specific cluster.

9.3.3. Assigning an Administrator or User Role to a Resource

Assign administrator or user roles to resources to allow users to access or manage that resource.

Procedure 9.7. Assigning a Role to a Resource

  1. Use the resource tabs, tree mode, or the search function to find and select the resource in the results list.
  2. Click the Permissions tab in the details pane to list the assigned users, the user's role, and the inherited permissions for the selected resource.
  3. Click Add.
  4. Enter the name or user name of an existing user into the Search text box and click Go. Select a user from the resulting list of possible matches.
  5. Select a role from the Role to Assign: drop-down list.
  6. Click OK.
You have assigned a role to a user; the user now has the inherited permissions of that role enabled for that resource.

9.3.4. Removing an Administrator or User Role from a Resource

Remove an administrator or user role from a resource; the user loses the inherited permissions associated with the role for that resource.

Procedure 9.8. Removing a Role from a Resource

  1. Use the resource tabs, tree mode, or the search function to find and select the resource in the results list.
  2. Click the Permissions tab in the details pane to list the assigned users, the user's role, and the inherited permissions for the selected resource.
  3. Select the user to remove from the resource.
  4. Click Remove. The Remove Permission window opens to confirm permissions removal.
  5. Click OK.
You have removed the user's role, and the associated permissions, from the resource.

9.4. Trusted Compute Pools

Trusted compute pools are secure clusters based on Intel Trusted Execution Technology (Intel TXT). Trusted clusters only allow hosts that are verified by Intel's OpenAttestation, which measures the integrity of the host's hardware and software against a White List database. Trusted hosts and the virtual machines running on them can be assigned tasks that require higher security. For more information on Intel TXT, trusted systems, and attestation, see https://software.intel.com/en-us/articles/intel-trusted-execution-technology-intel-txt-enabling-guide.
Creating a trusted compute pool involves the following steps:
  • Configuring the Manager to communicate with an OpenAttestation server.
  • Creating a trusted cluster that can only run trusted hosts.
  • Adding trusted hosts to the trusted cluster. Hosts must be running the OpenAttestation agent to be verified as trusted by the OpenAttestation server.
For information on installing an OpenAttestation server, installing the OpenAttestation agent on hosts, and creating a White List database, see https://github.com/OpenAttestation/OpenAttestation/wiki.

9.4.1. Connecting an OpenAttestation Server to the Manager

Before you can create a trusted cluster, the Red Hat Enterprise Virtualization Manager must be configured to recognize the OpenAttestation server. Use engine-config to add the OpenAttestation server's FQDN or IP address:
# engine-config -s AttestationServer=attestationserver.example.com
The following settings can also be changed if required:
Table 9.5. OpenAttestation Settings for engine-config
Option
Default Value
Description
AttestationServer
oat-server
The FQDN or IP address of the OpenAttestation server. This must be set for the Manager to communicate with the OpenAttestation server.
AttestationPort
8443
The port used by the OpenAttestation server to communicate with the Manager.
AttestationTruststore
TrustStore.jks
The trust store used for securing communication with the OpenAttestation server.
AttestationTruststorePass
password
The password used to access the trust store.
AttestationFirstStageSize
10
Used for quick initialization. Changing this value without good reason is not recommended.
SecureConnectionWithOATServers
true
Enables or disables secure communication with OpenAttestation servers.
PollUri
AttestationService/resources/PollHosts
The URI used for accessing the OpenAttestation service.
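As an illustration only, any of the settings in the table above can be changed with further engine-config calls; the port value below is an arbitrary example, and the engine must be restarted for changes to take effect.
# engine-config -s AttestationPort=8444
# service ovirt-engine restart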

9.4.2. Creating a Trusted Cluster

Trusted clusters communicate with an OpenAttestation server to assess the security of hosts. When a host is added to a trusted cluster, the OpenAttestation server measures the host's hardware and software against a White List database. Virtual machines can be migrated between trusted hosts in the trusted cluster, allowing for high availability in a secure environment.

Procedure 9.9. Creating a Trusted Cluster

  1. Select the Clusters tab.
  2. Click New.
  3. Enter a Name for the cluster.
  4. Select the Enable Virt Service radio button.
  5. In the Scheduling Policy tab, select the Enable Trusted Service check box.
  6. Click OK.

9.4.3. Adding a Trusted Host

Red Hat Enterprise Linux hosts can be added to trusted clusters and measured against a White List database by the OpenAttestation server. Hosts must meet the following requirements to be trusted by the OpenAttestation server:
  • Intel TXT is enabled in the BIOS.
  • The OpenAttestation agent is installed and running.
  • Software running on the host matches the OpenAttestation server's White List database.

Procedure 9.10. Adding a Trusted Host

  1. Select the Hosts tab.
  2. Click New.
  3. Select a trusted cluster from the Host Cluster drop-down list.
  4. Enter a Name for the host.
  5. Enter the Address of the host.
  6. Enter the host's root Password.
  7. Click OK.
After the host is added to the trusted cluster, it is assessed by the OpenAttestation server. If a host is not trusted by the OpenAttestation server, it will move to a Non Operational state and should be removed from the trusted cluster.

Chapter 10. Virtual Machine Disks

10.1. Understanding Virtual Machine Storage

Red Hat Enterprise Virtualization supports three storage types: NFS, iSCSI, and FCP.
In each type, a host known as the Storage Pool Manager (SPM) manages access between hosts and storage. The SPM host is the only node that has full access within the storage pool; the SPM can modify the storage domain metadata, and the pool's metadata. All other hosts can only access virtual machine hard disk image data.
By default in an NFS, local, or POSIX compliant data center, the SPM creates the virtual disk using a thin provisioned format as a file in a file system.
In iSCSI and other block-based data centers, the SPM creates a volume group on top of the Logical Unit Numbers (LUNs) provided, and makes logical volumes to use as virtual machine disks. Virtual machine disks on block-based storage are preallocated by default.
If the virtual disk is preallocated, a logical volume of the specified size in GB is created. The virtual disk can be mounted on a Red Hat Enterprise Linux server using kpartx, vgscan, vgchange, or mount to investigate the virtual machine's files or problems.
If the virtual disk is thinly provisioned, a 1 GB logical volume is created. The logical volume is continuously monitored by the host on which the virtual machine is running. As soon as the usage nears a threshold the host notifies the SPM, and the SPM extends the logical volume by 1 GB. The host is responsible for resuming the virtual machine after the logical volume has been extended. If the virtual machine goes into a paused state it means that the SPM could not extend the disk in time. This occurs if the SPM is too busy or if there is not enough storage space.
A virtual disk with a preallocated (RAW) format has significantly faster write speeds than a virtual disk with a thin provisioning (QCOW2) format. Thin provisioning takes significantly less time to create a virtual disk. The thin provision format is suitable for non-I/O intensive virtual machines. The preallocated format is recommended for virtual machines with high I/O writes. If a virtual machine is able to write more than 1 GB every four seconds, use preallocated disks where possible.
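As a rough sketch only, investigating a preallocated disk from a Red Hat Enterprise Linux server typically involves activating the storage domain's volume group and mapping the image's partitions; the volume group, logical volume, and mount point names below are hypothetical placeholders, and the exact partition mapping name created by kpartx depends on the device name.
# vgscan
# vgchange -ay storage_domain_vg
# kpartx -av /dev/storage_domain_vg/disk_image_lv
# mount -o ro /dev/mapper/disk_image_lv1 /mnt/inspect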

10.2. Understanding Virtual Disks

Red Hat Enterprise Virtualization features Preallocated (thick provisioned) and Sparse (thin provisioned) storage options.
  • Preallocated
    A preallocated virtual disk allocates all the storage required for a virtual machine up front. For example, a 20 GB preallocated logical volume created for the data partition of a virtual machine will take up 20 GB of storage space immediately upon creation.
  • Sparse
    A sparse allocation allows an administrator to define the total storage to be assigned to the virtual machine, but the storage is only allocated when required.
    For example, a 20 GB thin provisioned logical volume would take up 0 GB of storage space when first created. When the operating system is installed it may take up the size of the installed file, and would continue to grow as data is added up to a maximum of 20 GB size.
The size of a disk is listed in the Disks sub-tab for each virtual machine and template. The Virtual Size of a disk is the total amount of disk space that the virtual machine can use; it is the number that you enter in the Size(GB) field when a disk is created or edited. The Actual Size of a disk is the amount of disk space that has been allocated to the virtual machine so far. Preallocated disks show the same value for both fields. Sparse disks may show a different value in the Actual Size field from the value in the Virtual Size field, depending on how much of the disk space has been allocated.

Note

When creating a Cinder virtual disk, the format and type of the disk are handled internally by Cinder and are not managed by Red Hat Enterprise Virtualization.
The possible combinations of storage types and formats are described in the following table.
Table 10.1. Permitted Storage Combinations
Storage Format Type Note
NFS or iSCSI/FCP RAW or QCOW2 Sparse or Preallocated  
NFS RAW Preallocated A file with an initial size which equals the amount of storage defined for the virtual disk, and has no formatting.
NFS RAW Sparse A file with an initial size which is close to zero, and has no formatting.
NFS QCOW2 Sparse A file with an initial size which is close to zero, and has QCOW2 formatting. Subsequent layers will be QCOW2 formatted.
SAN RAW Preallocated A block device with an initial size which equals the amount of storage defined for the virtual disk, and has no formatting.
SAN QCOW2 Sparse A block device with an initial size which is much smaller than the size defined for the virtual disk (currently 1 GB), and has QCOW2 formatting for which space is allocated as needed (currently in 1 GB increments).
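One way to check the format and allocation of an individual image on file-based storage is to run qemu-img info on the host against the image file; the path below is a hypothetical placeholder. The reported file format (raw or qcow2) and the virtual and disk sizes correspond to the combinations described in the table above.
# qemu-img info /rhev/data-center/mnt/server:_export_data/STORAGE_DOMAIN_ID/images/IMAGE_ID/VOLUME_ID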

10.3. Settings to Wipe Virtual Disks After Deletion

The wipe_after_delete flag, viewed in the Administration Portal as the Wipe After Delete check box, replaces used data with zeros when a virtual disk is deleted. If it is set to false, which is the default, deleting the disk will open up those blocks for re-use but will not wipe the data. It is, therefore, possible for this data to be recovered because the blocks have not been returned to zero.
The wipe_after_delete flag only works on block storage. On file storage, for example NFS, the option does nothing because the file system will ensure that no data exists.
Enabling wipe_after_delete for virtual disks is more secure, and is recommended if the virtual disk has contained any sensitive data. This is a more intensive operation and users may experience degradation in performance and prolonged delete times.

Note

The wipe after delete functionality is not the same as secure delete, and cannot guarantee that the data is removed from the storage, just that new disks created on the same storage will not expose data from old disks.
The wipe_after_delete flag default can be changed to true during the setup process (see Configuring the Red Hat Enterprise Virtualization Manager in the Installation Guide), or by using the engine configuration tool on the Red Hat Enterprise Virtualization Manager. Restart the engine for the setting change to take effect.

Procedure 10.1. Setting SANWipeAfterDelete to Default to True Using the Engine Configuration Tool

  1. Run the engine configuration tool with the --set action:
    # engine-config --set SANWipeAfterDelete=true
    
  2. Restart the engine for the change to take effect:
    # service ovirt-engine restart
    
The /var/log/vdsm/vdsm.log file located on the Red Hat Enterprise Virtualization host can be checked to confirm that a virtual disk was successfully wiped and deleted.
For a successful wipe, the log file will contain the entry, storage_domain_id/volume_id was zeroed and will be deleted. For example:
a9cb0625-d5dc-49ab-8ad1-72722e82b0bf/a49351a7-15d8-4932-8d67-512a369f9d61 was zeroed and will be deleted
For a successful deletion, the log file will contain the entry, finished with VG:storage_domain_id LVs: list_of_volume_ids, img: image_id. For example:
finished with VG:a9cb0625-d5dc-49ab-8ad1-72722e82b0bf LVs: {'a49351a7-15d8-4932-8d67-512a369f9d61': ImgsPar(imgs=['11f8b3be-fa96-4f6a-bb83-14c9b12b6e0d'], parent='00000000-0000-0000-0000-000000000000')}, img: 11f8b3be-fa96-4f6a-bb83-14c9b12b6e0d
An unsuccessful wipe will display a log message zeroing storage_domain_id/volume_id failed. Zero and remove this volume manually, and an unsuccessful delete will display Remove failed for some of VG: storage_domain_id zeroed volumes: list_of_volume_ids.
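To scan for these messages directly on the host, search the log file for the strings described above; for example:
# grep "was zeroed and will be deleted" /var/log/vdsm/vdsm.log
# grep "Remove failed for some of VG" /var/log/vdsm/vdsm.log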

10.4. Shareable Disks in Red Hat Enterprise Virtualization

Some applications require storage to be shared between servers. Red Hat Enterprise Virtualization allows you to mark virtual machine hard disks as Shareable and attach those disks to virtual machines. That way a single virtual disk can be used by multiple cluster-aware guests.
Shared disks are not appropriate for every situation. They suit applications such as clustered database servers and other highly available services. Attaching a shared disk to multiple guests that are not cluster-aware is likely to cause data corruption because their reads and writes to the disk are not coordinated.
You cannot take a snapshot of a shared disk. Virtual disks that have snapshots taken of them cannot later be marked shareable.
You can mark a disk shareable either when you create it, or by editing the disk later.

10.5. Read Only Disks in Red Hat Enterprise Virtualization

Some applications require administrators to share data with read-only rights. You can do this when creating or editing a disk attached to a virtual machine: in the Disks tab in the details pane of the virtual machine, select the Read Only check box. That way, a single disk can be read by multiple cluster-aware guests, while an administrator maintains writing privileges.
You cannot change the read-only status of a disk while the virtual machine is running.

Important

Mounting a journaled file system requires read-write access. Using the Read Only option is not appropriate for virtual machine disks that contain such file systems (e.g. EXT3, EXT4, or XFS).

10.6. Virtual Disk Tasks

10.6.1. Creating Floating Virtual Disks

You can create a virtual disk that does not belong to any virtual machines. You can then attach this disk to a single virtual machine, or to multiple virtual machines if the disk is shareable.
Image disk creation is managed entirely by the Manager. Direct LUN disks require externally prepared targets that already exist. Cinder disks require access to an instance of OpenStack Volume that has been added to the Red Hat Enterprise Virtualization environment using the External Providers window; see Section 11.2.5, “Adding an OpenStack Volume (Cinder) Instance for Storage Management” for more information.

Procedure 10.2. Creating Floating Virtual Disks

  1. Select the Disks resource tab.
  2. Click New.
    Add Virtual Disk Window

    Figure 10.1. Add Virtual Disk Window

  3. Use the radio buttons to specify whether the virtual disk will be an Image, Direct LUN, or Cinder disk.
  4. Select the options required for your virtual disk. The options change based on the disk type selected. See Section 10.6.2, “Explanation of Settings in the New Virtual Disk Window” for more details on each option for each disk type.
  5. Click OK.

10.6.2. Explanation of Settings in the New Virtual Disk Window

Table 10.2. New Virtual Disk Settings: Image
Field Name
Description
Size(GB)
The size of the new virtual disk in GB.
Alias
The name of the virtual disk, limited to 40 characters.
Description
A description of the virtual disk. This field is recommended but not mandatory.
Interface
The virtual interface the disk presents to virtual machines. VirtIO is faster, but requires drivers. Red Hat Enterprise Linux 5 and higher include these drivers. Windows does not include these drivers, but they can be installed from the guest tools ISO or virtual floppy disk. IDE devices do not require special drivers.