Technical Reference

Red Hat Virtualization 4.3

The technical architecture of Red Hat Virtualization environments

Red Hat Virtualization Documentation Team

Red Hat Customer Content Services

Abstract

This document describes the concepts, components, and technologies used in a Red Hat Virtualization environment.

Chapter 1. Introduction

1.1. Red Hat Virtualization Manager

The Red Hat Virtualization Manager provides centralized management for a virtualized environment. A number of different interfaces can be used to access the Red Hat Virtualization Manager. Each interface facilitates access to the virtualized environment in a different manner.

Figure 1.1. Red Hat Virtualization Manager Architecture


The Red Hat Virtualization Manager provides graphical interfaces and an Application Programming Interface (API). Each interface connects to the Manager, an application delivered by an embedded instance of the Red Hat JBoss Enterprise Application Platform. There are a number of other components which support the Red Hat Virtualization Manager in addition to Red Hat JBoss Enterprise Application Platform.
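
For example, the REST API can be queried with any standard HTTP client. The following is a minimal sketch rather than a definitive procedure: the Manager FQDN (manager.example.com), the password, and the location of the downloaded CA certificate (ca.pem) are illustrative values to be replaced for your own environment.

# curl --cacert ca.pem \
      --user admin@internal:password \
      --header "Accept: application/xml" \
      https://manager.example.com/ovirt-engine/api

The response describes the API entry point and lists the collections that can be queried and manipulated. JSON output can be requested instead by sending an Accept: application/json header.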

1.2. Red Hat Virtualization Host

A Red Hat Virtualization environment has one or more hosts attached to it. A host is a server that provides the physical hardware that virtual machines make use of.

Red Hat Virtualization Host (RHVH) runs an optimized operating system installed using special, customized installation media designed specifically for creating virtualization hosts.

Red Hat Enterprise Linux hosts are servers running a standard Red Hat Enterprise Linux operating system that has been configured after installation to permit use as a host.

Both methods of host installation result in hosts that interact with the rest of the virtualized environment in the same way, and so both are referred to as hosts.

Figure 1.2. Host Architecture

Kernel-based Virtual Machine (KVM)
The Kernel-based Virtual Machine (KVM) is a loadable kernel module that provides full virtualization through the use of the Intel VT or AMD-V hardware extensions. Though KVM itself runs in kernel space, the guests running upon it run as individual QEMU processes in user space. KVM allows a host to make its physical hardware available to virtual machines.
QEMU
QEMU is a multi-platform emulator used to provide full system emulation. QEMU emulates a full system, for example a PC, including one or more processors, and peripherals. QEMU can be used to launch different operating systems or to debug system code. QEMU, working in conjunction with KVM and a processor with appropriate virtualization extensions, provides full hardware assisted virtualization.
Red Hat Virtualization Manager Host Agent, VDSM
In Red Hat Virtualization, VDSM initiates actions on virtual machines and storage. It also facilitates inter-host communication. VDSM monitors host resources such as memory, storage, and networking. Additionally, VDSM manages tasks such as virtual machine creation, statistics accumulation, and log collection. A VDSM instance runs on each host and receives management commands from the Red Hat Virtualization Manager using the re-configurable port 54321.
VDSM-REG
VDSM uses VDSM-REG to register each host with the Red Hat Virtualization Manager. VDSM-REG supplies information about itself and its host using port 80 or port 443.
libvirt
Libvirt facilitates the management of virtual machines and their associated virtual devices. When Red Hat Virtualization Manager initiates virtual machine life-cycle commands (start, stop, reboot), VDSM invokes libvirt on the relevant host machines to execute them.
Storage Pool Manager, SPM

The Storage Pool Manager (SPM) is a role assigned to one host in a data center. The SPM host has sole authority to make all storage domain structure metadata changes for the data center. This includes creation, deletion, and manipulation of virtual disks, snapshots, and templates. It also includes allocation of storage for sparse block devices on a Storage Area Network (SAN). The role of SPM can be migrated to any host in a data center. As a result, all hosts in a data center must have access to all the storage domains defined in the data center.

Red Hat Virtualization Manager ensures that the SPM is always available. In case of storage connectivity errors, the Manager re-assigns the SPM role to another host.

Guest Operating System

Guest operating systems do not need to be modified to be installed on virtual machines in a Red Hat Virtualization environment. The guest operating system, and any applications on the guest, are unaware of the virtualized environment and run normally.

Red Hat provides enhanced device drivers that allow faster and more efficient access to virtualized devices. You can also install the Red Hat Virtualization Guest Agent on guests, which provides enhanced guest information to the management console.
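
On a host, you can verify that the VDSM and libvirt components described in this section are running and that VDSM is listening on its management port (54321 by default). This is a minimal sketch that assumes a standard host installation:

# systemctl status vdsmd libvirtd
# ss -tln | grep 54321

The first command reports the state of the VDSM and libvirt services, and the second confirms that VDSM is listening on its management port.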

1.3. Components that Support the Manager

Red Hat JBoss Enterprise Application Platform
Red Hat JBoss Enterprise Application Platform is a Java application server. It provides a framework to support efficient development and delivery of cross-platform Java applications. The Red Hat Virtualization Manager is delivered using Red Hat JBoss Enterprise Application Platform.
Important

The version of the Red Hat JBoss Enterprise Application Platform bundled with Red Hat Virtualization Manager is not to be used to serve other applications. It has been customized for the specific purpose of serving the Red Hat Virtualization Manager. Using the Red Hat JBoss Enterprise Application Platform that is included with the Manager for additional purposes adversely affects its ability to service the Red Hat Virtualization environment.

Gathering Reports and Historical Data

The Red Hat Virtualization Manager includes a data warehouse that collects monitoring data about hosts, virtual machines, and storage. A number of pre-defined reports are available. Customers can analyze their environments and create reports using any query tools that support SQL.

The Red Hat Virtualization Manager installation process creates two databases. These databases are created on a PostgreSQL instance that is selected during installation.

  • The engine database is the primary data store used by the Red Hat Virtualization Manager. Information about the virtualization environment, such as its state, configuration, and performance, is stored in this database.
  • The ovirt_engine_history database contains configuration information and statistical metrics which are collated over time from the engine operational database. The configuration data in the engine database is examined every minute, and changes are replicated to the ovirt_engine_history database. Tracking the changes to the database provides information on the objects in the database. This enables you to analyze and enhance the performance of your Red Hat Virtualization environment and resolve difficulties.

    For more information on generating reports based on the ovirt_engine_history database see the History Database in the Red Hat Virtualization Data Warehouse Guide.

Important

The replication of data to the ovirt_engine_history database is performed by the Red Hat Virtualization Manager History Service, ovirt-engine-dwhd.
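
On the Manager machine, you can confirm that the history service is running and that both databases exist. This is a minimal sketch; it assumes a standard Manager installation and that the psql client is available to the postgres user.

# systemctl status ovirt-engine-dwhd
# su - postgres -c "psql -l"

The second command lists the databases on the PostgreSQL instance, which should include both engine and ovirt_engine_history.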

Directory services
Directory services provide centralized network-based storage of user and organizational information. Types of information stored include application settings, user profiles, group data, policies, and access control. The Red Hat Virtualization Manager supports Active Directory, Identity Management (IdM), OpenLDAP, and Red Hat Directory Server 9. There is also a local, internal domain for administration purposes only. This internal domain has only one user: the admin user.

1.4. Storage

Red Hat Virtualization uses a centralized storage system for virtual disks, templates, snapshots, and ISO files. Storage is logically grouped into storage pools, which are made up of storage domains. A storage domain is a combination of storage capacity and metadata that describes the internal structure of the storage. There are three types of storage domain: data, export, and ISO.

The data storage domain is the only one required by each data center. A data storage domain is exclusive to a single data center. Export and ISO domains are optional. Storage domains are shared resources, and must be accessible to all hosts in a data center.

Storage networking can be implemented using Network File System (NFS), Internet Small Computer System Interface (iSCSI), GlusterFS, Fibre Channel Protocol (FCP), or any POSIX compliant networked filesystem.

On NFS (and other POSIX-compliant filesystems) domains, all virtual disks, templates, and snapshots are simple files.
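
For example, on a host with an NFS data domain attached, the domain appears as a mounted directory containing one subdirectory per disk image. The server name, export path, and UUID placeholders below are purely illustrative; the exact mount point layout depends on your environment.

# ls /rhev/data-center/mnt/nfs.example.com:_exports_data/
# ls /rhev/data-center/mnt/nfs.example.com:_exports_data/<storage-domain-uuid>/images/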

On SAN (iSCSI/FCP) domains, block devices are aggregated by Logical Volume Manager (LVM) into a Volume Group (VG). Each virtual disk, template, and snapshot is a Logical Volume (LV) on the VG. See the Red Hat Enterprise Linux Logical Volume Manager Administration Guide for more information on LVM.

Data storage domain
Data domains hold the virtual hard disk images of all the virtual machines running in the environment. Templates and snapshots of the virtual machines are also stored in the data domain. A data domain cannot be shared across data centers.
Export storage domain
An export domain is a temporary storage repository that is used to copy and move images between data centers and Red Hat Virtualization environments. The export domain can be used to back up virtual machines and templates. An export domain can be moved between data centers, but can only be active in one data center at a time.
ISO storage domain
ISO domains store ISO files, which are logical CD-ROMs used to install operating systems and applications for the virtual machines. As a logical entity that replaces a library of physical CD-ROMs or DVDs, an ISO domain removes the data center’s need for physical media. An ISO domain can be shared across different data centers.

1.5. Network

The Red Hat Virtualization network architecture facilitates connectivity between the different elements of the Red Hat Virtualization environment. The network architecture not only supports network connectivity, it also allows for network segregation.

Figure 1.3. Network Architecture


Networking is defined in Red Hat Virtualization in several layers. The underlying physical networking infrastructure must be in place and configured to allow connectivity between the hardware and the logical components of the Red Hat Virtualization environment.

Networking Infrastructure Layer

The Red Hat Virtualization network architecture relies on some common hardware and software devices:

  • Network Interface Controllers (NICs) are physical network interface devices that connect a host to the network.
  • Virtual NICs (VNICs) are logical NICs that operate using the host’s physical NICs. They provide network connectivity to virtual machines.
  • Bonds bind multiple NICs into a single interface.
  • Bridges are a packet-forwarding technique for packet-switching networks. They form the basis of virtual machine logical networks.
Logical Networks

Logical networks allow segregation of network traffic based on environment requirements. The types of logical network are:

  • logical networks that carry virtual machine network traffic
  • logical networks that do not carry virtual machine network traffic
  • optional logical networks
  • required logical networks

Each logical network can be designated as either required or optional.

A logical network that carries virtual machine network traffic is implemented at the host level as a software bridge device. By default, one logical network is defined during the installation of the Red Hat Virtualization Manager: the ovirtmgmt management network.

Other logical networks that can be added by an administrator are: a dedicated storage logical network, and a dedicated display logical network. Logical networks that do not carry virtual machine traffic do not have an associated bridge device on hosts. They are associated with host network interfaces directly.

Red Hat Virtualization segregates management-related network traffic from migration-related network traffic. This makes it possible to use a dedicated network (without routing) for live migration, and ensures that the management network (ovirtmgmt) does not lose its connection to hypervisors during migrations.

Explanation of logical networks on different layers
Logical networks have different implications for each layer of the virtualization environment.

Data Center Layer

Logical networks are defined at the data center level. Each data center has the ovirtmgmt management network by default. Further logical networks are optional but recommended. Designation as a VM Network and a custom MTU can be set at the data center level. A logical network that is defined for a data center must also be added to the clusters that use the logical network.

Cluster Layer

Logical networks are made available from a data center, and must be added to the clusters that will use them. Each cluster is connected to the management network by default. You can optionally add logical networks that have been defined for the cluster’s parent data center to a cluster. When a required logical network has been added to a cluster, it must be implemented for each host in the cluster. Optional logical networks can be added to hosts as needed.

Host Layer

Virtual machine logical networks are implemented for each host in a cluster as a software bridge device associated with a given network interface. Non-virtual machine logical networks do not have associated bridges, and are associated with host network interfaces directly. Each host has the management network implemented as a bridge using one of its network devices as a result of being included in a Red Hat Virtualization environment. Further required logical networks that have been added to a cluster must be associated with network interfaces on each host to become operational for the cluster.

Virtual Machine Layer

Logical networks can be made available to virtual machines in the same way that a network can be made available to a physical machine. A virtual machine can have its virtual NIC connected to any virtual machine logical network that has been implemented on the host that runs it. The virtual machine then gains connectivity to any other devices or destinations that are available on the logical network it is connected to.

Example 1.1. Management Network

The management logical network, named ovirtmgmt, is created automatically when the Red Hat Virtualization Manager is installed. The ovirtmgmt network is dedicated to management traffic between the Red Hat Virtualization Manager and hosts. If no other specifically purposed bridges are set up, ovirtmgmt is the default bridge for all traffic.

1.6. Data Centers

A data center is the highest level of abstraction in Red Hat Virtualization. A data center contains three types of information:

Storage
This includes storage types, storage domains, and connectivity information for storage domains. Storage is defined for a data center, and available to all clusters in the data center. All host clusters within a data center have access to the same storage domains.
Logical networks
This includes details such as network addresses, VLAN tags and STP support. Logical networks are defined for a data center, and are optionally implemented at the cluster level.
Clusters
Clusters are groups of hosts with compatible processors, either AMD or Intel. Clusters are migration domains; virtual machines can be live-migrated to any host within a cluster, but not to other clusters. One data center can hold multiple clusters, and each cluster can contain multiple hosts.

Chapter 2. Storage

2.1. Storage Domains Overview

A storage domain is a collection of images that have a common storage interface. A storage domain contains complete images of templates and virtual machines (including snapshots), ISO files, and metadata about the domain itself. A storage domain can be made of either block devices (SAN - iSCSI or FCP) or a file system (NAS - NFS, GlusterFS, or other POSIX compliant file systems).

On NAS, all virtual disks, templates, and snapshots are files.

On SAN (iSCSI/FCP), each virtual disk, template or snapshot is a logical volume. Block devices are aggregated into a logical entity called a volume group, and then divided by LVM (Logical Volume Manager) into logical volumes for use as virtual hard disks. See the Red Hat Enterprise Linux Logical Volume Manager Administration Guide for more information on LVM.

Virtual disks can have one of two formats, either QCOW2 or raw. The allocation policy can be either sparse or preallocated. Snapshots are always sparse, but can be taken for disks of either format.

Virtual machines that share the same storage domain can be migrated between hosts that belong to the same cluster.

2.2. Types of Storage Backing Storage Domains

Storage domains can be implemented using block based and file based storage.

File Based Storage

The file based storage types supported by Red Hat Virtualization are NFS, GlusterFS, other POSIX compliant file systems, and storage local to hosts.

File based storage is managed externally to the Red Hat Virtualization environment.

NFS storage is managed by a Red Hat Enterprise Linux NFS server, or other third party network attached storage server.

Hosts can manage their own local storage file systems.

Block Based Storage

Block storage uses unformatted block devices. Block devices are aggregated into volume groups by the Logical Volume Manager (LVM). An instance of LVM runs on all hosts, unaware of the instances running on other hosts. VDSM adds clustering logic on top of LVM by scanning volume groups for changes. When changes are detected, VDSM updates individual hosts by telling them to refresh their volume group information. The hosts divide the volume group into logical volumes, writing logical volume metadata to disk. If more storage capacity is added to an existing storage domain, the Red Hat Virtualization Manager causes VDSM on each host to refresh volume group information.

A Logical Unit Number (LUN) is an individual block device. One of the supported block storage protocols, iSCSI or Fibre Channel, is used to connect to a LUN. The Red Hat Virtualization Manager manages software iSCSI connections to the LUNs. All other block storage connections are managed externally to the Red Hat Virtualization environment. Any changes in a block based storage environment, such as the creation of logical volumes, the extension or deletion of logical volumes, and the addition of a new LUN, are handled by LVM on a specially selected host called the Storage Pool Manager. Changes are then synced by VDSM, which refreshes storage metadata across all hosts in the cluster.
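
On a host attached to a block storage domain, the underlying LVM structure can be inspected directly. The volume group is named after the storage domain UUID; the UUID below is illustrative.

# vgs
# lvs 64f87b0f-88d6-49e9-b797-60d36c9df497

The lvs output lists one logical volume per virtual disk, template, or snapshot, in addition to the special metadata volumes described later in this chapter.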

2.3. Storage Domain Types

Red Hat Virtualization supports three types of storage domains, and each type supports particular underlying storage types.

  • The Data Storage Domain stores the hard disk images of all virtual machines in the Red Hat Virtualization environment. Disk images may contain an installed operating system or data stored or generated by a virtual machine. Data storage domains support NFS, iSCSI, FCP, GlusterFS and POSIX compliant storage. A data domain cannot be shared between multiple data centers.
  • The Export Storage Domain provides transitory storage for hard disk images and virtual machine templates being transferred between data centers. Additionally, export storage domains store backed up copies of virtual machines. Export storage domains support NFS storage. Multiple data centers can access a single export storage domain but only one data center can use it at a time.
  • The ISO Storage Domain stores ISO files, also called images. ISO files are representations of physical CDs or DVDs. In the Red Hat Virtualization environment the common types of ISO files are operating system installation disks, application installation disks, and guest agent installation disks. These images can be attached to virtual machines and booted in the same way that physical disks are inserted into a disk drive and booted. ISO storage domains allow all hosts within the data center to share ISOs, eliminating the need for physical optical media.

2.4. Storage Formats for Virtual Disks

QCOW2 Formatted Virtual Machine Storage

QCOW2 is a storage format for virtual disks. QCOW stands for QEMU copy-on-write. The QCOW2 format decouples the physical storage layer from the virtual layer by adding a mapping between logical and physical blocks. Each logical block is mapped to its physical offset, which enables storage over-commitment and virtual machine snapshots, where each QCOW volume only represents changes made to an underlying virtual disk.

The initial mapping points all logical blocks to the offsets in the backing file or volume. When a virtual machine writes data to a QCOW2 volume after a snapshot, the relevant block is read from the backing volume, modified with the new information, and written into a new snapshot QCOW2 volume. The mapping is then updated to point to the new location.

Raw

The raw storage format has a performance advantage over QCOW2 in that no formatting is applied to virtual disks stored in the raw format. Virtual machine data operations on virtual disks stored in raw format require no additional work from hosts. When a virtual machine writes data to a given offset in its virtual disk, the I/O is written to the same offset on the backing file or logical volume.

Raw format requires that the entire space of the defined image be preallocated unless using externally managed thin provisioned LUNs from a storage array.
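
The difference between the two formats can be illustrated with the qemu-img tool, which QEMU uses to create and inspect disk images. This is a standalone sketch with hypothetical file names; in a Red Hat Virtualization environment these volumes are created and chained by the Manager and VDSM, not manually.

# qemu-img create -f raw -o preallocation=full base.img 10G
# qemu-img create -f qcow2 -o backing_file=base.img,backing_fmt=raw overlay.qcow2
# qemu-img info overlay.qcow2

The raw image occupies its full 10 GB immediately, while the QCOW2 overlay starts out almost empty and records only the blocks that differ from its backing file.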

2.5. Virtual Disk Storage Allocation Policies

Preallocated Storage
All of the storage required for a virtual disk is allocated prior to virtual machine creation. If a 20 GB disk image is created for a virtual machine, the disk image uses 20 GB of storage domain capacity. Preallocated disk images cannot be enlarged. Preallocating storage can mean faster write times because no storage allocation takes place during runtime, at the cost of flexibility. Allocating storage this way reduces the capacity of the Red Hat Virtualization Manager to overcommit storage. Preallocated storage is recommended for virtual machines used for high intensity I/O tasks with less tolerance for latency in storage. Generally, server virtual machines fit this description.
Note

If thin provisioning functionality provided by your storage back-end is being used, preallocated storage should still be selected from the Administration Portal when provisioning storage for virtual machines.

Sparsely Allocated Storage
The upper size limit for a virtual disk is set at virtual machine creation time. Initially, the disk image does not use any storage domain capacity. Usage grows as the virtual machine writes data to disk, until the upper limit is reached. Capacity is not returned to the storage domain when data in the disk image is removed. Sparsely allocated storage is appropriate for virtual machines with low or medium intensity I/O tasks with some tolerance for latency in storage. Generally, desktop virtual machines fit this description.
Note

If thin provisioning functionality is provided by your storage back-end, it should be used as the preferred implementation of thin provisioning. Storage should be provisioned from the graphical user interface as preallocated, leaving thin provisioning to the back-end solution.

2.6. Storage Metadata Versions in Red Hat Virtualization

Red Hat Virtualization stores information about storage domains as metadata on the storage domains themselves. Each major release of Red Hat Virtualization has seen improved implementations of storage metadata.

V1 metadata (Red Hat Virtualization 2.x series)

  • Each storage domain contains metadata describing its own structure, and all of the names of physical volumes that are used to back virtual disks.
  • Master domains additionally contain metadata for all the domains and physical volume names in the storage pool. The total size of this metadata is limited to 2 KB, limiting the number of storage domains that can be in a pool.
  • Template and virtual machine base images are read only.
  • V1 metadata is applicable to NFS, iSCSI, and FC storage domains.

V2 metadata (Red Hat Enterprise Virtualization 3.0)

  • All storage domain and pool metadata is stored as logical volume tags rather than written to a logical volume. Metadata about virtual disk volumes is still stored in a logical volume on the domains.
  • Physical volume names are no longer included in the metadata.
  • Template and virtual machine base images are read only.
  • V2 metadata is applicable to iSCSI and FC storage domains.

V3 metadata (Red Hat Enterprise Virtualization 3.1 and later)

  • All storage domain and pool metadata is stored as logical volume tags rather than written to a logical volume. Metadata about virtual disk volumes is still stored in a logical volume on the domains.
  • Virtual machine and template base images are no longer read only. This change enables live snapshots, live storage migration, and clone from snapshot.
  • Support for unicode metadata is added, for non-English volume names.
  • V3 metadata is applicable to NFS, GlusterFS, POSIX, iSCSI, and FC storage domains.

V4 metadata (Red Hat Virtualization 4.1 and later)

  • Support for QCOW2 compat levels - the QCOW image format includes a version number to allow introducing new features that change the image format so that it is incompatible with earlier versions. Newer QEMU versions (1.7 and above) support QCOW2 version 3, which is not backwards compatible, but introduces improvements such as zero clusters and improved performance.
  • A new xleases volume to support VM leases - this feature adds the ability to acquire a lease per virtual machine on shared storage without attaching the lease to a virtual machine disk.

    A VM lease offers two important capabilities:

    • Avoiding split-brain.
    • Starting a VM on another host if the original host becomes non-responsive, which improves the availability of HA VMs.

V5 metadata (Red Hat Virtualization 4.3 and later)

  • Support for 4K (4096 byte) block storage.
  • Support for variable SANLOCK alignments.
  • Support for new properties:

    • BLOCK_SIZE - stores the block size of the storage domain in bytes.
    • ALIGNMENT - determines the formatting and size of the xlease volume (1 MB to 8 MB). The value is determined by the maximum number of hosts to be supported (a value provided by the user) and the disk block size.

      For example, a 512-byte block size and support for 2000 hosts results in a 1 MB xlease volume.

      A 4K block size with 2000 hosts results in an 8 MB xlease volume.

      The default value for maximum hosts is 250, resulting in a 1 MB xlease volume for 4K disks.

  • Deprecated properties:

    • The LOGBLKSIZE, PHYBLKSIZE, MTIME, and POOL_UUID fields were removed from the storage domain metadata.
    • The SIZE (size in blocks) field was replaced by CAP (size in bytes).
Note
  • You cannot boot from a 4K format disk, because the boot disk always uses 512-byte emulation.
  • NFS storage always uses a 512-byte block size.

2.7. Storage Domain Autorecovery in Red Hat Virtualization

Hosts in a Red Hat Virtualization environment monitor storage domains in their data centers by reading metadata from each domain. A storage domain becomes inactive when all hosts in a data center report that they cannot access the storage domain.

Rather than disconnecting an inactive storage domain, the Manager assumes that the storage domain has become inactive temporarily, because of a temporary network outage for example. Once every 5 minutes, the Manager attempts to re-activate any inactive storage domains.

Administrator intervention may be required to remedy the cause of the storage connectivity interruption, but the Manager handles re-activating storage domains as connectivity is restored.

2.8. The Storage Pool Manager

Red Hat Virtualization uses metadata to describe the internal structure of storage domains. Structural metadata is written to a segment of each storage domain. Hosts work with the storage domain metadata based on a single-writer, multiple-reader configuration. Storage domain structural metadata tracks image and snapshot creation and deletion, and volume and domain extension.

The host that can make changes to the structure of the data domain is known as the Storage Pool Manager (SPM). The SPM coordinates all metadata changes in the data center, such as creating and deleting disk images, creating and merging snapshots, copying images between storage domains, creating templates and storage allocation for block devices. There is one SPM for every data center. All other hosts can only read storage domain structural metadata.

A host can be manually selected as the SPM, or it can be assigned by the Red Hat Virtualization Manager. The Manager assigns the SPM role by causing a potential SPM host to attempt to assume a storage-centric lease. The lease allows the SPM host to write storage metadata. It is storage-centric because it is written to the storage domain rather than being tracked by the Manager or hosts. Storage-centric leases are written to a special logical volume in the master storage domain called leases. Metadata about the structure of the data domain is written to a special logical volume called metadata. The leases logical volume protects the metadata logical volume from changes.

The Manager uses VDSM to issue the spmStart command to a host, causing VDSM on that host to attempt to assume the storage-centric lease. If the host is successful it becomes the SPM and retains the storage-centric lease until the Red Hat Virtualization Manager requests that a new host assume the role of SPM.

The Manager moves the SPM role to another host if:

  • The SPM host cannot access all storage domains, but can access the master storage domain
  • The SPM host is unable to renew the lease because of a loss of storage connectivity, or because the lease volume is full and no write operation can be performed
  • The SPM host crashes

Figure 2.1. The Storage Pool Manager Exclusively Writes Structural Metadata.


2.9. Storage Pool Manager Selection Process

If a host has not been manually assigned the Storage Pool Manager (SPM) role, the SPM selection process is initiated and managed by the Red Hat Virtualization Manager.

First, the Red Hat Virtualization Manager requests that VDSM confirm which host has the storage-centric lease.

The Red Hat Virtualization Manager tracks the history of SPM assignment from the initial creation of a storage domain onward. The availability of the SPM role is confirmed in three ways:

  • The "getSPMstatus" command: the Manager uses VDSM to check with the host that had SPM status last and receives one of "SPM", "Contending", or "Free".
  • The metadata volume for a storage domain contains the last host with SPM status.
  • The metadata volume for a storage domain contains the version of the last host with SPM status.

If an operational, responsive host retains the storage-centric lease, the Red Hat Virtualization Manager marks that host SPM in the administrator portal. No further action is taken.

If the SPM host does not respond, it is considered unreachable. If power management has been configured for the host, it is automatically fenced. If not, it requires manual fencing. The Storage Pool Manager role cannot be assigned to a new host until the previous Storage Pool Manager is fenced.

When the SPM role and storage-centric lease are free, the Red Hat Virtualization Manager assigns them to a randomly selected operational host in the data center.

If the SPM role assignment fails on a new host, the Red Hat Virtualization Manager adds the host to a list containing hosts the operation has failed on, marking these hosts as ineligible for the SPM role. This list is cleared at the beginning of the next SPM selection process so that all hosts are again eligible.

The Red Hat Virtualization Manager continues to request that the Storage Pool Manager role and storage-centric lease be assumed by a randomly selected host that is not on the list of failed hosts until the SPM selection succeeds.

Each time the current SPM is unresponsive or unable to fulfill its responsibilities, the Red Hat Virtualization Manager initiates the Storage Pool Manager selection process.

2.10. Exclusive Resources and Sanlock in Red Hat Virtualization

Certain resources in the Red Hat Virtualization environment must be accessed exclusively.

The SPM role is one such resource. If more than one host were to become the SPM, there would be a risk of data corruption as the same data could be changed from two places at once.

Prior to Red Hat Enterprise Virtualization 3.1, SPM exclusivity was maintained and tracked using a VDSM feature called safelease. The lease was written to a special area on all of the storage domains in a data center. All of the hosts in an environment could track SPM status in a network-independent way. VDSM’s safelease maintained exclusivity of only one resource: the SPM role.

Sanlock provides the same functionality, but treats the SPM role as one of the resources that can be locked. Sanlock is more flexible because it allows additional resources to be locked.

Applications that require resource locking can register with Sanlock. Registered applications can then request that Sanlock lock a resource on their behalf, so that no other application can access it. For example, instead of VDSM locking the SPM status, VDSM now requests that Sanlock do so.

Locks are tracked on disk in a lockspace. There is one lockspace for every storage domain. In the case of the lock on the SPM resource, each host’s liveness is tracked in the lockspace by the host’s ability to renew the hostid it received from the Manager when it connected to storage, and to write a timestamp to the lockspace at a regular interval. The ids logical volume tracks the unique identifiers of each host, and is updated every time a host renews its hostid. The SPM resource can only be held by a live host.

Resources are tracked on disk in the leases logical volume. A resource is said to be taken when its representation on disk has been updated with the unique identifier of the process that has taken it. In the case of the SPM role, the SPM resource is updated with the hostid that has taken it.

The Sanlock process on each host only needs to check the resources once to see that they are taken. After an initial check, Sanlock can monitor the lockspaces until the timestamp of the host with a locked resource becomes stale.

Sanlock monitors the applications that use resources. For example, VDSM is monitored for SPM status and hostid. If the host is unable to renew its hostid from the Manager, it loses exclusivity on all resources in the lockspace. Sanlock updates the resource to show that it is no longer taken.

If the SPM host is unable to write a timestamp to the lockspace on the storage domain for a given amount of time, the host’s instance of Sanlock requests that the VDSM process release its resources. If the VDSM process responds, its resources are released, and the SPM resource in the lockspace can be taken by another host.

If VDSM on the SPM host does not respond to requests to release resources, Sanlock on the host kills the VDSM process. If the kill command is unsuccessful, Sanlock escalates by attempting to kill VDSM using sigkill. If the sigkill is unsuccessful, Sanlock depends on the watchdog daemon to reboot the host.

Every time VDSM on the host renews its hostid and writes a timestamp to the lockspace, the watchdog daemon receives a pet. When VDSM is unable to do so, the watchdog daemon is no longer being petted. After the watchdog daemon has not received a pet for a given amount of time, it reboots the host. This final level of escalation, if reached, guarantees that the SPM resource is released, and can be taken by another host.
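
On a host, the lockspaces and resources that sanlock currently holds can be inspected with the sanlock client. This is a read-only check and a minimal sketch; the exact output depends on the storage domains attached to the host.

# sanlock client status

The output lists the lockspaces the host has joined (one per storage domain) and any resources, such as the SPM resource, that processes on the host currently hold.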

2.11. Thin Provisioning and Storage Over-Commitment

The Red Hat Virtualization Manager provides provisioning policies to optimize storage usage within the virtualization environment. A thin provisioning policy allows you to over-commit storage resources, provisioning storage based on the actual storage usage of your virtualization environment.

Storage over-commitment is the allocation of more storage to virtual machines than is physically available in the storage pool. Generally, virtual machines use less storage than what has been allocated to them. Thin provisioning allows a virtual machine to operate as if the storage defined for it has been completely allocated, when in fact only a fraction of the storage has been allocated.

Note

While the Red Hat Virtualization Manager provides its own thin provisioning function, you should use the thin provisioning functionality of your storage back-end if it provides one.

To support storage over-commitment, VDSM defines a threshold which compares logical storage allocation with actual storage usage. This threshold is used to make sure that the data written to a disk image is smaller than the logical volume that backs the disk image. QEMU identifies the highest offset written to in a logical volume, which indicates the point of greatest storage use. VDSM monitors the highest offset marked by QEMU to ensure that the usage does not cross the defined threshold. So long as VDSM continues to indicate that the highest offset remains below the threshold, the Red Hat Virtualization Manager knows that the logical volume in question has sufficient storage to continue operations.

When QEMU indicates that usage has risen to exceed the threshold limit, VDSM communicates to the Manager that the disk image will soon reach the size of its logical volume. The Red Hat Virtualization Manager requests that the SPM host extend the logical volume. This process can be repeated as long as the data storage domain for the data center has available space. When the data storage domain runs out of available free space, you must manually add storage capacity to expand it.

2.12. Logical Volume Extension

The Red Hat Virtualization Manager uses thin provisioning to overcommit the storage available in a storage pool, and allocates more storage than is physically available. Virtual machines write data as they operate. A virtual machine with a thinly-provisioned disk image will eventually write more data than the logical volume backing its disk image can hold. When this happens, logical volume extension is used to provide additional storage and facilitate the continued operations for the virtual machine.

Red Hat Virtualization provides a thin provisioning mechanism over LVM. When using QCOW2 formatted storage, Red Hat Virtualization relies on the host system process qemu-kvm to map storage blocks on disk to logical blocks in a sequential manner. This allows, for example, the definition of a logical 100 GB disk backed by a 1 GB logical volume. When qemu-kvm crosses a usage threshold set by VDSM, the local VDSM instance makes a request to the SPM for the logical volume to be extended by another one gigabyte. VDSM on the host running a virtual machine in need of volume extension notifies the SPM VDSM that more space is required. The SPM extends the logical volume and the SPM VDSM instance causes the host VDSM to refresh volume group information and recognize that the extend operation is complete. The host can continue operations.
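
You can observe this thin provisioning on a host by comparing a disk’s virtual size with the size of the logical volume that currently backs it. This is a minimal, read-only sketch; the volume group and image UUIDs below are illustrative placeholders.

# lvs --units g <storage-domain-uuid>
# qemu-img info /dev/<storage-domain-uuid>/<image-uuid>

For a thinly provisioned QCOW2 disk, qemu-img info reports a virtual size (for example, 100 GB) that is larger than the logical volume size reported by lvs, which grows in increments as the virtual machine writes data.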

Logical Volume extension does not require that a host know which other host is the SPM; it could even be the SPM itself. The storage extension communication is done via a storage mailbox. The storage mailbox is a dedicated logical volume on the data storage domain. A host that needs the SPM to extend a logical volume writes a message in an area designated to that particular host in the storage mailbox. The SPM periodically reads the incoming mail, performs requested logical volume extensions, and writes a reply in the outgoing mail. After sending the request, a host monitors its incoming mail for responses every two seconds. When the host receives a successful reply to its logical volume extension request, it refreshes the logical volume map in device mapper to recognize the newly allocated storage.

When the physical storage available to a storage pool is nearly exhausted, multiple images can run out of usable storage with no means to replenish their resources. A storage pool that exhausts its storage causes QEMU to return an enospc error, which indicates that the device no longer has any storage available. At this point, running virtual machines are automatically paused and manual intervention is required to add a new LUN to the volume group.

When a new LUN is added to the volume group, the Storage Pool Manager automatically distributes the additional storage to logical volumes that need it. The automatic allocation of additional resources allows the relevant virtual machines to automatically continue operations uninterrupted or resume operations if stopped.

2.13. The Effect of Storage Domain Actions on Storage Capacity

Power on, power off, and reboot a stateless virtual machine
These three processes affect the copy-on-write (COW) layer in a stateless virtual machine. For more information, see the Stateless row of the Virtual Machine General Settings table in the Virtual Machine Management Guide.
Create a storage domain

Creating a block storage domain creates the seven logical volumes shown below; creating a file storage domain results in files with the same names. In both cases, the new domain initially consumes relatively little capacity.

ids              64f87b0f-88d6-49e9-b797-60d36c9df497 -wi-ao---- 128.00m
inbox            64f87b0f-88d6-49e9-b797-60d36c9df497 -wi-a----- 128.00m
leases           64f87b0f-88d6-49e9-b797-60d36c9df497 -wi-a-----   2.00g
master           64f87b0f-88d6-49e9-b797-60d36c9df497 -wi-ao----   1.00g
metadata         64f87b0f-88d6-49e9-b797-60d36c9df497 -wi-a----- 512.00m
outbox           64f87b0f-88d6-49e9-b797-60d36c9df497 -wi-a----- 128.00m
xleases          64f87b0f-88d6-49e9-b797-60d36c9df497 -wi-a-----   1.00g
Delete a storage domain
Deleting a storage domain frees up an amount of capacity on the disk equal to the capacity that the deleted domain consumed.
Migrate a storage domain
Migrating a storage domain does not use additional storage capacity. For more information about migrating storage domains, see Migrating Storage Domains Between Data Centers in the Same Environment in the Administration Guide.
Move a virtual disk to another storage domain

Migrating a virtual disk requires enough free space to be available on the target storage domain. You can see the target domain’s approximate free space in the Administration Portal.

The storage types in the move process affect the visible capacity. For example, if you move a preallocated disk from block storage to file storage, the resulting free space may be considerably smaller than the initial free space.

Live migrating a virtual disk to another storage domain also creates a snapshot, which is automatically merged after the migration is complete. To learn more about moving virtual disks, see Moving a Virtual Disk in the Administration Guide.

Pause a storage domain
Pausing a storage domain does not use any additional storage capacity.
Create a snapshot of a virtual machine

Creating a snapshot of a virtual machine can affect the storage domain capacity.

  • Creating a live snapshot uses memory snapshots by default and generates two additional volumes per virtual machine. The first volume is the sum of the memory, video memory, and 200 MB of buffer. The second volume contains the virtual machine configuration, which is several MB in size. When using block storage, rounding up occurs to the nearest unit Red Hat Virtualization can provide.
  • Creating an offline snapshot initially consumes 1 GB of block storage and is dynamic up to the size of the disk.
  • Cloning a snapshot creates a new disk the same size as the original disk.
  • Committing a snapshot removes all child volumes, depending on where in the chain the commit occurs.
  • Deleting a snapshot eventually removes the child volume for each disk and is only supported with a running virtual machine.
  • Previewing a snapshot creates a temporary volume per disk, so sufficient capacity must be available to allow the creation of the preview.
  • Undoing a snapshot preview removes the temporary volume created by the preview.
Attach and remove direct LUNs
Attaching and removing direct LUNs does not affect the storage domain since they are not a storage domain component. For more information, see Overview of Live Storage Migration in the Administration Guide.

Chapter 3. Network

3.1. Network Architecture

Red Hat Virtualization networking can be discussed in terms of basic networking, networking within a cluster, and host networking configurations. Basic networking terms cover the basic hardware and software elements that facilitate networking. Networking within a cluster includes network interactions among cluster-level objects such as hosts, logical networks, and virtual machines. Host networking configurations cover the supported configurations for networking within a host.

A well designed and built network ensures, for example, that high bandwidth tasks receive adequate bandwidth, that user interactions are not crippled by latency, and that virtual machines can be successfully migrated within a migration domain. A poorly built network can cause, for example, unacceptable latency, and migration and cloning failures resulting from network flooding.

An alternative method of managing your network is by integrating with Cisco Application Centric Infrastructure (ACI), by configuring Red Hat Virtualization on Cisco’s Application Policy Infrastructure Controller (APIC) version 3.1(1) and later according to Cisco’s documentation. On the Red Hat Virtualization side, all that is required is connecting the hosts' NICs to the network and the virtual machines' vNICs to the required network. The remaining configuration tasks are managed by Cisco ACI.

3.2. Introduction: Basic Networking Terms

Red Hat Virtualization provides networking functionality between virtual machines, virtualization hosts, and wider networks using:

  • A Network Interface Controller (NIC)
  • A Bridge
  • A Bond
  • A Virtual NIC
  • A Virtual LAN (VLAN)

NICs, bridges, and VNICs allow for network communication between hosts, virtual machines, local area networks, and the Internet. Bonds and VLANs are optionally implemented to enhance security, fault tolerance, and network capacity.

3.3. Network Interface Controller

The NIC (Network Interface Controller) is a network adapter or LAN adapter that connects a computer to a computer network. The NIC operates on both the physical and data link layers of the machine and allows network connectivity. All virtualization hosts in a Red Hat Virtualization environment have at least one NIC, though it is more common for a host to have two or more NICs.

One physical NIC can have multiple Virtual NICs (VNICs) logically connected to it. A virtual NIC acts as a physical network interface for a virtual machine. To distinguish between a VNIC and the NIC that supports it, the Red Hat Virtualization Manager assigns each VNIC a unique MAC address.

3.4. Bridge

A Bridge is a software device that uses packet forwarding in a packet-switched network. Bridging allows multiple network interface devices to share the connectivity of one NIC and appear on a network as separate physical devices. The bridge examines a packet’s source addresses to determine relevant target addresses. Once the target address is determined, the bridge adds the location to a table for future reference. This allows a host to redirect network traffic to the virtual machine VNICs that are members of a bridge.

In Red Hat Virtualization a logical network is implemented using a bridge. It is the bridge rather than the physical interface on a host that receives an IP address. The IP address associated with the bridge is not required to be within the same subnet as the virtual machines that use the bridge for connectivity. If the bridge is assigned an IP address on the same subnet as the virtual machines that use it, the host is addressable within the logical network by virtual machines. As a rule it is not recommended to run network exposed services on a virtualization host. Guests are connected to a logical network by their VNICs, and the host is connected to remote elements of the logical network using its NIC. Each guest can have the IP address of its VNIC set independently, by DHCP or statically. Bridges can connect to objects outside the host, but such a connection is not mandatory.

Custom properties can be defined for both the bridge and the Ethernet connection. VDSM passes the network definition and custom properties to the setup network hook script.

3.5. Bonds

A bond is an aggregation of multiple network interface cards into a single software-defined device. Because bonded network interfaces combine the transmission capability of the network interface cards included in the bond to act as a single network interface, they can provide greater transmission speed than that of a single network interface card. Also, because all network interface cards in the bond must fail for the bond itself to fail, bonding provides increased fault tolerance. However, one limitation is that the network interface cards that form a bonded network interface must be of the same make and model to ensure that all network interface cards in the bond support the same options and modes.

The packet dispersal algorithm for a bond is determined by the bonding mode used.

Important

Modes 1, 2, 3 and 4 support both virtual machine (bridged) and non-virtual machine (bridgeless) network types. Modes 0, 5 and 6 support non-virtual machine (bridgeless) networks only.

3.6. Bonding Modes

Red Hat Virtualization uses Mode 4 by default, but supports the following common bonding modes:

Mode 0 (round-robin policy)
Transmits packets through network interface cards in sequential order. Packets are transmitted in a loop that begins with the first available network interface card in the bond and ends with the last available network interface card in the bond. All subsequent loops then start with the first available network interface card. Mode 0 offers fault tolerance and balances the load across all network interface cards in the bond. However, Mode 0 cannot be used in conjunction with bridges, and is therefore not compatible with virtual machine logical networks.
Mode 1 (active-backup policy)
Sets all network interface cards to a backup state while one network interface card remains active. In the event of failure in the active network interface card, one of the backup network interface cards replaces that network interface card as the only active network interface card in the bond. The MAC address of the bond in Mode 1 is visible on only one port to prevent any confusion that might otherwise be caused if the MAC address of the bond changed to reflect that of the active network interface card. Mode 1 provides fault tolerance and is supported in Red Hat Virtualization.
Mode 2 (XOR policy)
Selects the network interface card through which to transmit packets based on the result of an XOR operation on the source and destination MAC addresses modulo network interface card slave count. This calculation ensures that the same network interface card is selected for each destination MAC address used. Mode 2 provides fault tolerance and load balancing and is supported in Red Hat Virtualization.
Mode 3 (broadcast policy)
Transmits all packets to all network interface cards. Mode 3 provides fault tolerance and is supported in Red Hat Virtualization.
Mode 4 (IEEE 802.3ad policy)
Creates aggregation groups in which the interfaces share the same speed and duplex settings. Mode 4 uses all network interface cards in the active aggregation group in accordance with the IEEE 802.3ad specification and is supported in Red Hat Virtualization.
Mode 5 (adaptive transmit load balancing policy)
Ensures the distribution of outgoing traffic accounts for the load on each network interface card in the bond and that the current network interface card receives all incoming traffic. If the network interface card assigned to receive traffic fails, another network interface card is assigned the role of receiving incoming traffic. Mode 5 cannot be used in conjunction with bridges, and is therefore not compatible with virtual machine logical networks.
Mode 6 (adaptive load balancing policy)
Combines Mode 5 (adaptive transmit load balancing policy) with receive load balancing for IPv4 traffic without any special switch requirements. ARP negotiation is used for balancing the receive load. Mode 6 cannot be used in conjunction with bridges, and is therefore not compatible with virtual machine logical networks.
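
On a host with a configured bond, the active bonding mode and the state of each slave can be checked through the kernel bonding driver. This is a read-only sketch; the bond device name (bond0) depends on your configuration.

# cat /proc/net/bonding/bond0

For a Mode 4 bond, the output includes a line such as "Bonding Mode: IEEE 802.3ad Dynamic link aggregation", followed by the status of each slave network interface card.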

3.7. Switch Configuration for Bonding

Switch configurations vary per the requirements of your hardware. Refer to the deployment and networking configuration guides for your operating system.

Important

For every type of switch, it is important to set up switch bonding with Link Aggregation Control Protocol (LACP), and not Cisco Port Aggregation Protocol (PAgP).

3.8. Virtual Network Interface Cards

Virtual network interface cards (vNICs) are virtual network interfaces that are based on the physical NICs of a host. Each host can have multiple NICs, and each NIC can be a base for multiple vNICs.

When you attach a vNIC to a virtual machine, the Red Hat Virtualization Manager creates several associations between the virtual machine to which the vNIC is being attached, the vNIC itself, and the physical host NIC on which the vNIC is based. Specifically, when a vNIC is attached to a virtual machine, a new vNIC and MAC address are created on the physical host NIC on which the vNIC is based. Then, the first time the virtual machine starts after that vNIC is attached, libvirt assigns the vNIC a PCI address. The MAC address and PCI address are then used to obtain the name of the vNIC (for example, eth0) in the virtual machine.

The process for assigning MAC addresses and associating those MAC addresses with PCI addresses is slightly different when creating virtual machines based on templates or snapshots:

  • If PCI addresses have already been created for a template or snapshot, the vNICs on virtual machines created based on that template or snapshot are ordered in accordance with those PCI addresses. MAC addresses are then allocated to the vNICs in that order.
  • If PCI addresses have not already been created for a template, the vNICs on virtual machines created based on that template are ordered alphabetically. MAC addresses are then allocated to the vNICs in that order.
  • If PCI addresses have not already been created for a snapshot, the Red Hat Virtualization Manager allocates new MAC addresses to the vNICs on virtual machines based on that snapshot.

Once created, vNICs are added to a network bridge device. The network bridge devices are how virtual machines are connected to virtual machine logical networks.

Running the ip addr show command on a virtualization host shows all of the vNICs that are associated with virtual machines on that host. Also visible are any network bridges that have been created to back logical networks, and any NICs used by the host.

[root@rhev-host-01 ~]# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:21:86:a2:85:cd brd ff:ff:ff:ff:ff:ff
    inet6 fe80::221:86ff:fea2:85cd/64 scope link
       valid_lft forever preferred_lft forever
3: wlan0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN qlen 1000
    link/ether 00:21:6b:cc:14:6c brd ff:ff:ff:ff:ff:ff
5: ;vdsmdummy;: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 4a:d5:52:c2:7f:4b brd ff:ff:ff:ff:ff:ff
6: bond0: <BROADCAST,MULTICAST,MASTER> mtu 1500 qdisc noop state DOWN
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
7: bond4: <BROADCAST,MULTICAST,MASTER> mtu 1500 qdisc noop state DOWN
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
8: bond1: <BROADCAST,MULTICAST,MASTER> mtu 1500 qdisc noop state DOWN
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
9: bond2: <BROADCAST,MULTICAST,MASTER> mtu 1500 qdisc noop state DOWN
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
10: bond3: <BROADCAST,MULTICAST,MASTER> mtu 1500 qdisc noop state DOWN
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
11: ovirtmgmt: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 00:21:86:a2:85:cd brd ff:ff:ff:ff:ff:ff
    inet 10.64.32.134/23 brd 10.64.33.255 scope global ovirtmgmt
    inet6 fe80::221:86ff:fea2:85cd/64 scope link
       valid_lft forever preferred_lft forever

The console output from the command shows several devices: one loopback device (lo), one Ethernet device (eth0), one wireless device (wlan0), one VDSM dummy device (;vdsmdummy;), five bond devices (bond0, bond4, bond1, bond2, bond3), and one network bridge (ovirtmgmt).

vNICs are all members of a network bridge device and logical network. Bridge membership can be displayed using the brctl show command:

[root@rhev-host-01 ~]# brctl show
bridge name	bridge id		STP enabled	interfaces
ovirtmgmt		8000.e41f13b7fdd4	no		vnet002
							vnet001
							vnet000
							eth0

The console output from the brctl show command shows that the virtio vNICs are members of the ovirtmgmt bridge. All of the virtual machines that the vNICs are associated with are connected to the ovirtmgmt logical network. The eth0 NIC is also a member of the ovirtmgmt bridge. The eth0 device is cabled to a switch that provides connectivity beyond the host.

3.9. Virtual LAN (VLAN)

A VLAN (Virtual LAN) is an attribute that can be applied to network packets. Network packets can be "tagged" into a numbered VLAN. A VLAN is a security feature used to completely isolate network traffic at the switch level. VLANs are completely separate and mutually exclusive. The Red Hat Virtualization Manager is VLAN aware and able to tag and redirect VLAN traffic; however, VLAN implementation requires a switch that supports VLANs.

At the switch level, ports are assigned a VLAN designation. A switch applies a VLAN tag to traffic originating from a particular port, marking the traffic as part of a VLAN, and ensures that responses carry the same VLAN tag. A VLAN can extend across multiple switches. VLAN tagged network traffic on a switch is completely undetectable except by machines connected to a port designated with the correct VLAN. A given port can be tagged into multiple VLANs, which allows traffic from multiple VLANs to be sent to a single port, to be deciphered using software on the machine that receives the traffic.
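
The following is a minimal sketch of how a VLAN-tagged interface is represented on a Linux host using iproute2. In a Red Hat Virtualization environment VDSM creates and manages these devices, so the commands are shown only for illustration; the eth0 device and VLAN ID 100 are placeholders.

[root@rhev-host-01 ~]# ip link add link eth0 name eth0.100 type vlan id 100
[root@rhev-host-01 ~]# ip link set dev eth0.100 up
[root@rhev-host-01 ~]# ip -d link show eth0.100    # the detailed output includes the 802.1Q VLAN ID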

3.10. Network Labels

Network labels can be used to greatly simplify several administrative tasks associated with creating and administering logical networks and associating those logical networks with physical host network interfaces and bonds.

A network label is a plain text, human readable label that can be attached to a logical network or a physical host network interface. There is no strict limit on the length of a label, but you must use a combination of lowercase and uppercase letters, underscores, and hyphens; no spaces or special characters are allowed.

Attaching a label to a logical network or physical host network interface creates an association with other logical networks or physical host network interfaces to which the same label has been attached, as follows:

Network Label Associations

  • When you attach a label to a logical network, that logical network will be automatically associated with any physical host network interfaces with the given label.
  • When you attach a label to a physical host network interface, any logical networks with the given label will be automatically associated with that physical host network interface.
  • Changing the label attached to a logical network or physical host network interface acts in the same way as removing a label and adding a new label. The association between related logical networks or physical host network interfaces is updated.

Network Labels and Clusters

  • When a labeled logical network is added to a cluster and there is a physical host network interface in that cluster with the same label, the logical network is automatically added to that physical host network interface.
  • When a labeled logical network is detached from a cluster and there is a physical host network interface in that cluster with the same label, the logical network is automatically detached from that physical host network interface.

Network Labels and Logical Networks With Roles

  • When a labeled logical network is assigned to act as a display network or migration network, that logical network is then configured on the physical host network interface using DHCP so that the logical network can be assigned an IP address.

    Setting a label on a role network (for instance, "a migration network" or "a display network") causes a mass deployment of that network on all hosts. Such mass additions of networks are achieved through the use of DHCP. DHCP was chosen over statically assigned addresses because manually entering many static IP addresses does not scale.

3.11. Cluster Networking

Cluster level networking objects include:

  • Clusters
  • Logical Networks

Figure 3.1. Networking within a cluster


A data center is a logical grouping of multiple clusters and each cluster is a logical group of multiple hosts. Figure 3.1, “Networking within a cluster” depicts the contents of a single cluster.

Hosts in a cluster all have access to the same storage domains. Hosts in a cluster also have logical networks applied at the cluster level. For a virtual machine logical network to become operational for use with virtual machines, the network must be defined and implemented for each host in the cluster using the Red Hat Virtualization Manager. Other logical network types can be implemented on only the hosts that use them.

Multi-host network configuration automatically applies any updated network settings to all of the hosts within the data center to which the network is assigned.

3.12. Logical Networks

Logical networking allows the Red Hat Virtualization environment to separate network traffic by type. For example, the ovirtmgmt network is created by default during the installation of Red Hat Virtualization to be used for management communication between the Manager and hosts. A typical use for logical networks is to group network traffic with similar requirements and usage together. In many cases, a storage network and a display network are created by an administrator to isolate traffic of each respective type for optimization and troubleshooting.

The types of logical network are:

  • logical networks that carry virtual machine network traffic,
  • logical networks that do not carry virtual machine network traffic,
  • optional logical networks,
  • and required networks.

All logical networks can either be required or optional.

Logical networks are defined at the data center level, and added to a host. For a required logical network to be operational, it must be implemented for every host in a given cluster.

Each virtual machine logical network in a Red Hat Virtualization environment is backed by a network bridge device on a host. So when a new virtual machine logical network is defined for a cluster, a matching bridge device must be created on each host in the cluster before the logical network can become operational to be used by virtual machines. Red Hat Virtualization Manager automatically creates required bridges for virtual machine logical networks.

The bridge device created by the Red Hat Virtualization Manager to back a virtual machine logical network is associated with a host network interface. If the host network interface that is part of a bridge has network connectivity, then any network interfaces that are subsequently included in the bridge share the network connectivity of the bridge. When virtual machines are created and placed on a particular logical network, their virtual network cards are included in the bridge for that logical network. Those virtual machines can then communicate with each other and with other objects that are connected to the bridge.

Logical networks not used for virtual machine network traffic are associated with host network interfaces directly.

Example 3.1. Example usage of a logical network.

There are two hosts called Red and White in a cluster called Pink in a data center called Purple. Both Red and White have been using the default logical network, ovirtmgmt for all networking functions. The system administrator responsible for Pink decides to isolate network testing for a web server by placing the web server and some client virtual machines on a separate logical network. She decides to call the new logical network network_testing.

First, she defines the logical network for the Purple data center. She then applies it to the Pink cluster. Logical networks must be implemented on a host in maintenance mode. So, the administrator first migrates all running virtual machines to Red, and puts White in maintenance mode. Then she edits the Network associated with the physical network interface that will be included in the bridge. The Link Status for the selected network interface will change from Down to Non-Operational. The non-operational status is because the corresponding bridge must be set up on all hosts in the cluster by adding a physical network interface on each host in the Pink cluster to the network_testing network. Next, she activates White, migrates all of the running virtual machines off of Red, and repeats the process for Red.

When both White and Red have the network_testing logical network bridged to a physical network interface, the network_testing logical network becomes Operational and is ready to be used by virtual machines.

3.13. Required Networks, Optional Networks, and Virtual Machine Networks

A required network is a logical network that must be available to all hosts in a cluster. When a host’s required network becomes non-operational, virtual machines running on that host are migrated to another host; the extent of this migration is dependent upon the chosen scheduling policy. This is beneficial if you have virtual machines running mission critical workloads.

An optional network is a logical network that has not been explicitly declared as Required. Optional networks can be implemented on only the hosts that use them. The presence or absence of optional networks does not affect the Operational status of a host. When a non-required network becomes non-operational, the virtual machines running on the network are not migrated to another host. This prevents unnecessary I/O overload caused by mass migrations. Note that when a logical network is created and added to clusters, the Required box is checked by default.

To change a network’s Required designation, from the Administration Portal, select a network, click the Cluster tab, and click the Manage Networks button.

Virtual machine networks (called a VM network in the user interface) are logical networks designated to carry only virtual machine network traffic. Virtual machine networks can be required or optional. Virtual machines that use an optional virtual machine network start only on hosts that have that network.

3.14. Virtual Machine Connectivity

In Red Hat Virtualization, a virtual machine has its NIC put on a logical network at the time that the virtual machine is created. From that point, the virtual machine is able to communicate with any other destination on the same network.

From the host perspective, when a virtual machine is put on a logical network, the vNIC that backs the virtual machine’s NIC is added as a member to the bridge device for the logical network. For example, if a virtual machine is on the ovirtmgmt logical network, its vNIC is added as a member of the ovirtmgmt bridge of the host on which that virtual machine runs.

3.15. Port Mirroring

Port mirroring copies layer 3 network traffic on a given logical network and host to a virtual interface on a virtual machine. This virtual machine can be used for network debugging and tuning, intrusion detection, and monitoring the behavior of other virtual machines on the same host and logical network.

The only traffic copied is internal to one logical network on one host. There is no increase in traffic on the network external to the host; however, a virtual machine with port mirroring enabled uses more host CPU and RAM than other virtual machines.

Port mirroring is enabled or disabled in the vNIC profiles of logical networks, and has the following limitations:

  • Hot plugging vNICs with a profile that has port mirroring enabled is not supported.
  • Port mirroring cannot be altered when the vNIC profile is attached to a virtual machine.

Given the above limitations, it is recommended that you enable port mirroring on an additional, dedicated vNIC profile.

Important

Enabling port mirroring reduces the privacy of other network users.

3.16. Host Networking Configurations

Common types of networking configurations for virtualization hosts include:

  • Bridge and NIC configuration.

    This configuration uses a bridge to connect one or more virtual machines (or guests) to the host’s NIC.

    An example of this configuration is the automatic creation of the ovirtmgmt network when installing Red Hat Virtualization Manager. Then, during host installation, the Red Hat Virtualization Manager installs VDSM on the host. The VDSM installation process creates the ovirtmgmt bridge, which obtains the host’s IP address to enable communication with the Manager.

    Important

    Set all hosts in a cluster to use the same IP stack for their management network; either IPv4 or IPv6 only. Dual stack is not supported.

  • Bridge, VLAN, and NIC configuration.

    A VLAN can be included in the bridge and NIC configuration to provide a secure channel for data transfer over the network and also to support the option to connect multiple bridges to a single NIC using multiple VLANs.

  • Bridge, Bond, and VLAN configuration.

    A bond creates a logical link that combines two or more physical Ethernet links. The resultant benefits include NIC fault tolerance and potential bandwidth extension, depending on the bonding mode (see the inspection sketch after this list).

  • Multiple Bridge, Multiple VLAN, and NIC configuration.

    This configuration connects a NIC to multiple VLANs.

    For example, to connect a single NIC to two VLANs, the network switch can be configured to pass network traffic that has been tagged into one of the two VLANs to one NIC on the host. The host uses two vNICs to separate VLAN traffic, one for each VLAN. Traffic tagged into either VLAN then connects to a separate bridge by having the appropriate vNIC as a bridge member. Each bridge, in turn, connects to multiple virtual machines.

    Note

    You can also bond multiple NICs to facilitate a connection with multiple VLANs. Each VLAN in this configuration is defined over the bond comprising the multiple NICs. Each VLAN connects to an individual bridge and each bridge connects to one or more guests.
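
The following sketch shows commands that can be used to inspect these configurations on a host. The bond0 device name is illustrative; the devices present depend on how the host's networks were configured.

[root@rhev-host-01 ~]# cat /proc/net/bonding/bond0    # bonding mode, slave NICs, and link state
[root@rhev-host-01 ~]# ip -d link show                # detailed view of bonds, VLAN devices, and bridges
[root@rhev-host-01 ~]# brctl show                     # bridge membership, as in the earlier example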

Chapter 4. Power Management

4.1. Introduction to Power Management and Fencing

The Red Hat Virtualization environment is most flexible and resilient when power management and fencing have been configured. Power management allows the Red Hat Virtualization Manager to control host power cycle operations, most importantly to reboot hosts on which problems have been detected. Fencing is used to isolate problem hosts from a functional Red Hat Virtualization environment by rebooting them, in order to prevent performance degradation. Fenced hosts can then be returned to responsive status through administrator action and be reintegrated into the environment.

Power management and fencing make use of special dedicated hardware in order to restart hosts independently of host operating systems. The Red Hat Virtualization Manager connects to power management devices using a network IP address or hostname. In the context of Red Hat Virtualization, a power management device and a fencing device are the same thing.

4.2. Power Management by Proxy in Red Hat Virtualization

The Red Hat Virtualization Manager does not communicate directly with fence agents. Instead, the Manager uses a proxy to send power management commands to a host power management device. The Manager uses VDSM to execute power management device actions, so another host in the environment is used as a fencing proxy.

You can select between:

  • Any host in the same cluster as the host requiring fencing.
  • Any host in the same data center as the host requiring fencing.

A viable fencing proxy host has a status of either UP or Maintenance.

4.3. Power Management

The Red Hat Virtualization Manager is capable of rebooting hosts that have entered a non-operational or non-responsive state, as well as preparing to power off under-utilized hosts to save power. This functionality depends on a properly configured power management device. The Red Hat Virtualization environment supports the following power management devices:

  • American Power Conversion (apc)
  • IBM Bladecenter (Bladecenter)
  • Cisco Unified Computing System (cisco_ucs)
  • Dell Remote Access Card 5 (drac5)
  • Dell Remote Access Card 7 (drac7)
  • Electronic Power Switch (eps)
  • HP BladeSystem (hpblade)
  • Integrated Lights Out (ilo, ilo2, ilo3, ilo4)
  • Intelligent Platform Management Interface (ipmilan)
  • Remote Supervisor Adapter (rsa)
  • Fujitsu-Siemens RSB (rsb)
  • Western Telematic, Inc (wti)

Red Hat recommends that HP servers use ilo3 or ilo4, Dell servers use drac5 or Integrated Dell Remote Access Controllers (idrac), and IBM servers use ipmilan. Integrated Management Module (IMM) uses the IPMI protocol, and therefore IMM users can use ipmilan.

Note

APC 5.x power management devices are not supported by the apc fence agent. Use the apc_snmp fence agent instead.

In order to communicate with the listed power management devices, the Red Hat Virtualization Manager makes use of fence agents. The Red Hat Virtualization Manager allows administrators to configure a fence agent for the power management device in their environment with parameters the device will accept and respond to. Basic configuration options can be configured using the graphical user interface. Special configuration options can also be entered, and are passed un-parsed to the fence device. Special configuration options are specific to a given fence device, while basic configuration options are for functionalities provided by all supported power management devices. The basic functionalities provided by all power management devices are:

  • Status: check the status of the host.
  • Start: power on the host.
  • Stop: power down the host.
  • Restart: restart the host. Actually implemented as stop, wait, status, start, wait, status.

Best practice is to test the power management configuration once when initially configuring it, and occasionally after that to ensure continued functionality.
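
As an illustrative sketch of such a test, a fence agent can be run manually from a host that will act as a fencing proxy to query the status of a power management device. The agent, address, and credentials below are placeholders; the correct agent and options depend on the device in use.

[root@rhev-host-01 ~]# fence_ipmilan --ip=mgmt.example.com --username=admin --password=example --action=status

A successful status query confirms that the proxy host can reach and authenticate to the power management device.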

Resilience is provided by properly configured power management devices in all of the hosts in an environment. Fencing agents allow the Red Hat Virtualization Manager to communicate with host power management devices to bypass the operating system on a problem host, and isolate the host from the rest of its environment by rebooting it. The Manager can then reassign the SPM role, if it was held by the problem host, and safely restart any highly available virtual machines on other hosts.

4.4. Fencing

In the context of the Red Hat Virtualization environment, fencing is a host reboot initiated by the Manager using a fence agent and performed by a power management device. Fencing allows a cluster to react to unexpected host failures as well as enforce power saving, load balancing, and virtual machine availability policies.

Fencing ensures that the role of Storage Pool Manager (SPM) is always assigned to a functional host. If the fenced host was the SPM, the SPM role is relinquished and reassigned to a responsive host. Because the host with the SPM role is the only host that is able to write data domain structure metadata, a non-responsive, un-fenced SPM host causes its environment to lose the ability to create and destroy virtual disks, take snapshots, extend logical volumes, and all other actions that require changes to data domain structure metadata.

When a host becomes non-responsive, all of the virtual machines that are currently running on that host can also become non-responsive. However, the non-responsive host retains the lock on the virtual machine hard disk images for virtual machines it is running. Attempting to start a virtual machine on a second host and assign the second host write privileges for the virtual machine hard disk image can cause data corruption.

Fencing allows the Red Hat Virtualization Manager to assume that the lock on a virtual machine hard disk image has been released; the Manager can use a fence agent to confirm that the problem host has been rebooted. When this confirmation is received, the Red Hat Virtualization Manager can start a virtual machine from the problem host on another host without risking data corruption. Fencing is the basis for highly-available virtual machines. A virtual machine that has been marked highly-available cannot be safely started on an alternate host without the certainty that doing so will not cause data corruption.

When a host becomes non-responsive, the Red Hat Virtualization Manager allows a grace period of thirty (30) seconds to pass before any action is taken, to allow the host to recover from any temporary errors. If the host has not become responsive by the time the grace period has passed, the Manager automatically begins to mitigate any negative impact from the non-responsive host. The Manager uses the fencing agent for the power management card on the host to stop the host, confirm it has stopped, start the host, and confirm that the host has been started. When the host finishes booting, it attempts to rejoin the cluster that it was a part of before it was fenced. If the issue that caused the host to become non-responsive has been resolved by the reboot, then the host is automatically set to Up status and is once again capable of starting and hosting virtual machines.

4.5. Soft-Fencing Hosts

Hosts can sometimes become non-responsive due to an unexpected problem, and though VDSM is unable to respond to requests, the virtual machines that depend upon VDSM remain alive and accessible. In these situations, restarting VDSM returns VDSM to a responsive state and resolves this issue.

"SSH Soft Fencing" is a process where the Manager attempts to restart VDSM via SSH on non-responsive hosts. If the Manager fails to restart VDSM via SSH, the responsibility for fencing falls to the external fencing agent if an external fencing agent has been configured.

Soft-fencing over SSH works as follows. Fencing must be configured and enabled on the host, and a valid proxy host (a second host, in an UP state, in the data center) must exist. When the connection between the Manager and the host times out, the following happens:

  1. On the first network failure, the status of the host changes to "connecting".
  2. The Manager then makes three attempts to ask VDSM for its status, or it waits for an interval determined by the load on the host. The length of the interval is determined by the configuration values TimeoutToResetVdsInSeconds (default 60 seconds) + DelayResetPerVmInSeconds (default 0.5 seconds) × (the number of virtual machines running on the host) + DelayResetForSpmInSeconds (default 20 seconds) × (1 if the host runs as SPM, otherwise 0). To give VDSM the maximum amount of time to respond, the Manager chooses the longer of the two options (three attempts to retrieve the status of VDSM, or the interval determined by the formula); see the worked example after this list.
  3. If the host does not respond when that interval has elapsed, vdsm restart is executed via SSH.
  4. If vdsm restart does not succeed in re-establishing the connection between the host and the Manager, the status of the host changes to Non Responsive and, if power management is configured, fencing is handed off to the external fencing agent.
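
As a worked example using the default values above, assume a host that is running ten virtual machines and also holds the SPM role: the interval works out to 60 + (0.5 × 10) + (20 × 1) = 85 seconds. The same arithmetic can be reproduced with bc:

[root@rhev-host-01 ~]# echo "60 + (0.5 * 10) + (20 * 1)" | bc
85.0
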
Note

Soft-fencing over SSH can be executed on hosts that have no power management configured. This is distinct from "fencing": fencing can be executed only on hosts that have power management configured.

4.6. Using Multiple Power Management Fencing Agents

Single agents are treated as primary agents. The secondary agent is valid when there are two fencing agents, for example for dual-power hosts in which each power switch has two agents connected to the same power switch. Agents can be of the same or different types.

Having multiple fencing agents on a host increases the reliability of the fencing procedure. For example, when the sole fencing agent on a host fails, the host will remain in a non-operational state until it is manually rebooted. The virtual machines previously running on the host will be suspended, and only fail over to another host in the cluster after the original host is manually fenced. With multiple agents, if the first agent fails, the next agent can be called.

When two fencing agents are defined on a host, they can be configured to use a concurrent or sequential flow:

  • Concurrent: Both primary and secondary agents have to respond to the Stop command for the host to be stopped. If one agent responds to the Start command, the host will go up.
  • Sequential: To stop or start a host, the primary agent is used first, and if it fails, the secondary agent is used.

Chapter 5. Load Balancing, Scheduling, and Migration

5.1. Load Balancing, Scheduling, and Migration

Individual hosts have finite hardware resources, and are susceptible to failure. To mitigate against failure and resource exhaustion, hosts are grouped into clusters, which are essentially a grouping of shared resources. A Red Hat Virtualization environment responds to changes in demand for host resources using load balancing policy, scheduling, and migration. The Manager is able to ensure that no single host in a cluster is responsible for all of the virtual machines in that cluster. Conversely, the Manager is able to recognize an underutilized host, and migrate all virtual machines off of it, allowing an administrator to shut down that host to save power.

Available resources are checked as a result of three events:

  • Virtual machine start - Resources are checked to determine on which host a virtual machine will start.
  • Virtual machine migration - Resources are checked in order to determine an appropriate target host.
  • Time elapses - Resources are checked at a regular interval to determine whether individual host load is in compliance with cluster load balancing policy.

The Manager responds to changes in available resources by using the load balancing policy for a cluster to schedule the migration of virtual machines from one host in a cluster to another. The relationships between load balancing policy, scheduling, and virtual machine migration are discussed in the following sections.

5.2. Load Balancing Policy

Load balancing policy is set for a cluster, which includes one or more hosts that may each have different hardware parameters and available memory. The Red Hat Virtualization Manager uses a load balancing policy to determine which host in a cluster to start a virtual machine on. Load balancing policy also allows the Manager to determine when to move virtual machines from over-utilized hosts to under-utilized hosts.

The load balancing process runs once every minute for each cluster in a data center. It determines which hosts are over-utilized, which hosts are under-utilized, and which are valid targets for virtual machine migration. The determination is made based on the load balancing policy set by an administrator for a given cluster. The options for load balancing policies are VM_Evenly_Distributed, Evenly_Distributed, Power_Saving, Cluster_Maintenance, and None.

5.3. Load Balancing Policy: VM_Evenly_Distributed

A virtual machine evenly distributed load balancing policy distributes virtual machines evenly between hosts based on a count of the virtual machines. The high virtual machine count is the maximum number of virtual machines that can run on each host, beyond which the host is considered overloaded. The VM_Evenly_Distributed policy allows an administrator to set a high virtual machine count for hosts. The maximum inclusive difference in virtual machine count between the most highly-utilized host and the least-utilized host is also set by an administrator. The cluster is balanced when every host in the cluster has a virtual machine count that falls inside this migration threshold. The administrator also sets the number of slots for virtual machines to be reserved on SPM hosts. The SPM host will have a lower load than other hosts, so this variable defines how many fewer virtual machines than other hosts it can run. If any host is running more virtual machines than the high virtual machine count and at least one host has a virtual machine count that falls outside of the migration threshold, virtual machines are migrated one by one to the host in the cluster that has the lowest CPU utilization. One virtual machine is migrated at a time until every host in the cluster has a virtual machine count that falls within the migration threshold.

5.4. Load Balancing Policy: Evenly_Distributed

Figure 5.1. Evenly Distributed Scheduling Policy


An evenly distributed load balancing policy selects the host for a new virtual machine according to lowest CPU load or highest available memory. The maximum CPU load and minimum available memory that are allowed for hosts in a cluster for a set amount of time are defined by the evenly distributed scheduling policy’s parameters. Beyond these limits the environment’s performance will degrade. The evenly distributed policy allows an administrator to set these levels for running virtual machines. If a host has reached the defined maximum CPU load or minimum available memory and the host stays there for more than the set time, virtual machines on that host are migrated one by one to the host in the cluster that has the lowest CPU or highest available memory depending on which parameter is being utilized. Host resources are checked once per minute, and one virtual machine is migrated at a time until the host CPU load is below the defined limit or the host available memory is above the defined limit.

5.5. Load Balancing Policy: Power_Saving

Figure 5.2. Power Saving Scheduling Policy


A power saving load balancing policy selects the host for a new virtual machine according to lowest CPU or highest available memory. The maximum CPU load and minimum available memory that are allowed for hosts in a cluster for a set amount of time are defined by the power saving scheduling policy’s parameters. Beyond these limits the environment’s performance will degrade. The power saving parameters also define the minimum CPU load and maximum available memory allowed for hosts in a cluster for a set amount of time before the continued operation of a host is considered an inefficient use of electricity. If a host has reached the maximum CPU load or minimum available memory and stays there for more than the set time, the virtual machines on that host are migrated one by one to the host that has the lowest CPU or highest available memory depending on which parameter is being utilized. Host resources are checked once per minute, and one virtual machine is migrated at a time until the host CPU load is below the defined limit or the host available memory is above the defined limit. If the host’s CPU load falls below the defined minimum level or the host’s available memory rises above the defined maximum level, the virtual machines on that host are migrated to other hosts in the cluster as long as the other hosts in the cluster remain below maximum CPU load and above minimum available memory. When an under-utilized host is cleared of its remaining virtual machines, the Manager will automatically power down the host machine, and restart it again when load balancing requires it or when there are not enough free hosts in the cluster.

5.6. Load Balancing Policy: None

If no load balancing policy is selected, virtual machines are started on the host within a cluster with the lowest CPU utilization and available memory. To determine CPU utilization a combined metric is used that takes into account the virtual CPU count and the CPU usage percent. This approach is the least dynamic, as the only host selection point is when a new virtual machine is started. Virtual machines are not automatically migrated to reflect increased demand on a host.

An administrator must decide which host is an appropriate migration target for a given virtual machine. Virtual machines can also be associated with a particular host using pinning. Pinning prevents a virtual machine from being automatically migrated to other hosts. For environments where resources are highly consumed, manual migration is the best approach.

5.7. Load Balancing Policy: Cluster_Maintenance

A cluster maintenance scheduling policy limits activity in a cluster during maintenance tasks. When a cluster maintenance policy is set:

  • No new virtual machines may be started, except highly available virtual machines. (Users can create highly available virtual machines and start them manually.)
  • In the event of host failure, highly available virtual machines will restart properly and any virtual machine can migrate.

5.8. Highly Available Virtual Machine Reservation

A highly available (HA) virtual machine reservation policy enables the Red Hat Virtualization Manager to monitor cluster capacity for highly available virtual machines. The Manager has the capability to flag individual virtual machines for High Availability, meaning that in the event of a host failure, these virtual machines will be rebooted on an alternative host. This policy balances highly available virtual machines across the hosts in a cluster. If any host in the cluster fails, the remaining hosts can support the migrating load of highly available virtual machines without affecting cluster performance. When highly available virtual machine reservation is enabled, the Manager ensures that appropriate capacity exists within a cluster for HA virtual machines to migrate in the event that their existing host fails unexpectedly.

5.9. Scheduling

In Red Hat Virtualization, scheduling refers to the way the Red Hat Virtualization Manager selects a host in a cluster as the target for a new or migrated virtual machine.

For a host to be eligible to start a virtual machine or accept a migrated virtual machine from another host, it must have enough free memory and CPUs to support the requirements of the virtual machine being started on or migrated to it. A virtual machine will not start on a host with an overloaded CPU. By default, a host’s CPU is considered overloaded if it has a load of more than 80% for 5 minutes, but these values can be changed using scheduling policies. If multiple hosts are eligible targets, one will be selected based on the load balancing policy for the cluster. For example, if the Evenly_Distributed policy is in effect, the Manager chooses the host with the lowest CPU utilization. If the Power_Saving policy is in effect, the host with the lowest CPU utilization between the maximum and minimum service levels will be selected. The Storage Pool Manager (SPM) status of a given host also affects eligibility as a target for starting virtual machines or virtual machine migration. A non-SPM host is a preferred target host; for instance, the first virtual machine started in a cluster will not run on the SPM host if the SPM role is held by a host in that cluster.

5.10. Migration

The Red Hat Virtualization Manager uses migration to enforce load balancing policies for a cluster. Virtual machine migration takes place according to the load balancing policy for a cluster and current demands on hosts within a cluster. Migration can also be configured to automatically occur when a host is fenced or moved to maintenance mode. The Red Hat Virtualization Manager first migrates virtual machines with the lowest CPU utilization. This is calculated as a percentage, and does not take into account RAM usage or I/O operations, except as I/O operations affect CPU utilization. If more than one virtual machine has the same CPU usage, the one that will be migrated first is the first virtual machine returned by the database query run by the Red Hat Virtualization Manager to determine virtual machine CPU usage.

Virtual machine migration has the following limitations by default (a worked example follows the list):

  • A bandwidth limit of 52 MiBps is imposed on each virtual machine migration.
  • A migration will time out after 64 seconds per GB of virtual machine memory.
  • A migration will abort if progress is stalled for 240 seconds.
  • Concurrent outgoing migrations are limited to one per CPU core per host, or 2, whichever is smaller.
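
As a back-of-the-envelope example using the defaults above, assume a virtual machine with 8 GB of memory (the figures are illustrative): the migration is allowed 8 × 64 = 512 seconds before timing out, and one full pass over 8192 MiB of memory at the 52 MiBps bandwidth cap takes roughly 157 seconds, ignoring retransmission of pages dirtied during the copy.

[root@rhev-host-01 ~]# echo $(( 8 * 64 ))      # timeout in seconds for an 8 GB virtual machine
512
[root@rhev-host-01 ~]# echo "8192 / 52" | bc   # seconds for one full memory pass at the default bandwidth limit
157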

See https://access.redhat.com/solutions/744423 for more details about tuning migration settings.

Chapter 6. Directory Services

6.1. Directory Services

The Red Hat Virtualization platform relies on directory services for user authentication and authorization. Interactions with all Manager interfaces, including the VM Portal, Administration Portal, and REST API are limited to authenticated, authorized users. Virtual machines within the Red Hat Virtualization environment can use the same directory services to provide authentication and authorization, however they must be configured to do so. The currently supported providers of directory services for use with the Red Hat Virtualization Manager are Identity Management (IdM), Red Hat Directory Server 9 (RHDS), Active Directory (AD), and OpenLDAP. The Red Hat Virtualization Manager interfaces with the directory server for:

  • Portal logins (User, Power User, Administrator, REST API).
  • Queries to display user information.
  • Adding the Manager to a domain.

Authentication is the verification and identification of a party who generated some data, and of the integrity of the generated data. A principal is the party whose identity is verified. The verifier is the party who demands assurance of the principal’s identity. In the case of Red Hat Virtualization, the Manager is the verifier and a user is a principal. Data integrity is the assurance that the data received is the same as the data generated by the principal.

Confidentiality and authorization are closely related to authentication. Confidentiality protects data from disclosure to those not intended to receive it. Strong authentication methods can optionally provide confidentiality. Authorization determines whether a principal is allowed to perform an operation. Red Hat Virtualization uses directory services to associate users with roles and provide authorization accordingly. Authorization is usually performed after the principal has been authenticated, and may be based on information local or remote to the verifier.

During installation, a local, internal domain is automatically configured for administration of the Red Hat Virtualization environment. After the installation is complete, more domains can be added.

6.2. Local Authentication: Internal Domain

The Red Hat Virtualization Manager creates a limited, internal administration domain during installation. This domain is not the same as an AD or IdM domain, because it exists based on a key in the Red Hat Virtualization PostgreSQL database rather than as a directory service user on a directory server. The internal domain is also different from an external domain because the internal domain will only have one user: the admin@internal user. Taking this approach to initial authentication allows Red Hat Virtualization to be evaluated without requiring a complete, functional directory server, and ensures an administrative account is available to troubleshoot any issues with external directory services.

The admin@internal user is for the initial configuration of an environment. This includes installing and accepting hosts, adding external AD or IdM authentication domains, and delegating permissions to users from external domains.

6.3. Remote Authentication Using GSSAPI

In the context of Red Hat Virtualization, remote authentication refers to authentication that is handled by a remote service, not the Red Hat Virtualization Manager. Remote authentication is used for user or API connections coming to the Manager from within an AD, IdM, or RHDS domain. The Red Hat Virtualization Manager must be configured by an administrator using the engine-manage-domains tool to be a part of an RHDS, AD, or IdM domain. This requires that the Manager be provided with credentials for an account from the RHDS, AD, or IdM directory server for the domain with sufficient privileges to join a system to the domain. After domains have been added, domain users can be authenticated by the Red Hat Virtualization Manager against the directory server using a password. The Manager uses a framework called the Simple Authentication and Security Layer (SASL) which in turn uses the Generic Security Services Application Program Interface (GSSAPI) to securely verify the identity of a user, and ascertain the authorization level available to the user.

Figure 6.1. GSSAPI Authentication


Chapter 7. Templates and Pools

7.1. Templates and Pools

The Red Hat Virtualization environment provides administrators with tools to simplify the provisioning of virtual machines to users. These are templates and pools. A template is a shortcut that allows an administrator to quickly create a new virtual machine based on an existing, pre-configured virtual machine, bypassing operating system installation and configuration. This is especially helpful for virtual machines that will be used like appliances, for example web server virtual machines. If an organization uses many instances of a particular web server, an administrator can create a virtual machine that will be used as a template, installing an operating system, the web server, any supporting packages, and applying unique configuration changes. The administrator can then create a template based on the working virtual machine that will be used to create new, identical virtual machines as they are required.

Virtual machine pools are groups of virtual machines based on a given template that can be rapidly provisioned to users. Permission to use virtual machines in a pool is granted at the pool level; a user who is granted permission to use the pool will be assigned any virtual machine from the pool. Inherent in a virtual machine pool is the transitory nature of the virtual machines within it. Because users are assigned virtual machines without regard for which virtual machine in the pool they have used in the past, pools are not suited for purposes which require data persistence. Virtual machine pools are best suited for scenarios where either user data is stored in a central location and the virtual machine is a means to accessing and using that data, or data persistence is not important. The creation of a pool results in the creation of the virtual machines that populate the pool, in a stopped state. These are then started on user request.

7.2. Templates

To create a template, an administrator creates and customizes a virtual machine. Desired packages are installed, customized configurations are applied, and the virtual machine is prepared for its intended purpose in order to minimize the changes that must be made to it after deployment. An optional but recommended step before creating a template from a virtual machine is generalization. Generalization is used to remove details like system user names, passwords, and timezone information that will change upon deployment. Generalization does not affect customized configurations. Generalization of Windows and Linux guests in the Red Hat Virtualization environment is discussed further in Templates in the Virtual Machine Management Guide. Red Hat Enterprise Linux guests are generalized using sys-unconfig. Windows guests are generalized using Sysprep.

When the virtual machine that provides the basis for a template is satisfactorily configured, generalized if desired, and stopped, an administrator can create a template from the virtual machine. Creating a template from a virtual machine causes a read-only copy of the specially configured virtual disk to be created. The read-only image forms the backing image for all subsequently created virtual machines that are based on that template. In other words, a template is essentially a customized read-only virtual disk with an associated virtual hardware configuration. The hardware can be changed in virtual machines created from a template, for instance, provisioning two gigabytes of RAM for a virtual machine created from a template that has one gigabyte of RAM. The template virtual disk, however, cannot be changed as doing so would result in changes for all virtual machines based on the template.

When a template has been created, it can be used as the basis for multiple virtual machines. Virtual machines are created from a given template using a Thin provisioning method or a Clone provisioning method. Virtual machines that are cloned from templates take a complete writable copy of the template base image, sacrificing the space savings of the thin creation method in exchange for no longer depending on the presence of the template. Virtual machines that are created from a template using the thin method use the read-only image from the template as a base image, requiring that the template and all virtual machines created from it be stored on the same storage domain. Changes to data and newly generated data are stored in a copy-on-write image. Each virtual machine based on a template uses the same base read-only image, as well as a copy-on-write image that is unique to the virtual machine. This provides storage savings by limiting the number of times identical data is kept in storage. Furthermore, frequent use of the read-only backing image can cause the data being accessed to be cached, resulting in a net performance increase.
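
As a generic illustration of this copy-on-write relationship (not the exact commands or file names used by the Manager and VDSM), qemu-img can create a writable overlay backed by a read-only base image and report the dependency:

[root@rhev-host-01 ~]# qemu-img create -f qcow2 -b template-base.qcow2 vm-overlay.qcow2
[root@rhev-host-01 ~]# qemu-img info vm-overlay.qcow2    # the 'backing file' field points at the template image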

7.3. Pools

Virtual machine pools allow for rapid provisioning of numerous identical virtual machines to users as desktops. Users who have been granted permission to access and use virtual machines from a pool receive an available virtual machine based on their position in a queue of requests. Virtual machines in a pool do not allow data persistence; each time a virtual machine is assigned from a pool, it is allocated in its base state. This is ideally suited to situations where user data is stored centrally.

Virtual machine pools are created from a template. Each virtual machine in a pool uses the same backing read-only image, and uses a temporary copy-on-write image to hold changed and newly generated data. Virtual machines in a pool are different from other virtual machines in that the copy-on-write layer that holds user-generated and -changed data is lost at shutdown. The implication of this is that a virtual machine pool requires no more storage than the template that backs it, plus some space for data generated or changed during use. Virtual machine pools are an efficient way to provide computing power to users for some tasks without the storage cost of providing each user with a dedicated virtual desktop.

Example 7.1. Example Pool Usage

A technical support company employs 10 help desk staff. However, only five are working at any given time. Instead of creating ten virtual machines, one for each help desk employee, a pool of five virtual machines can be created. Help desk employees allocate themselves a virtual machine at the beginning of their shift and return it to the pool at the end.

Chapter 8. Virtual Machine Snapshots

8.1. Snapshots

Snapshots are a storage function that allows an administrator to create a restore point of a virtual machine’s operating system, applications, and data at a certain point in time. Snapshots save the data currently present in a virtual machine hard disk image as a COW volume and allow for a recovery to the data as it existed at the time the snapshot was taken. A snapshot causes a new COW layer to be created over the current layer. All write actions performed after a snapshot is taken are written to the new COW layer.

It is important to understand that a virtual machine hard disk image is a chain of one or more volumes. From the perspective of a virtual machine, these volumes appear as a single disk image. A virtual machine is oblivious to the fact that its disk is comprised of multiple volumes.

The terms COW volume and COW layer are used interchangeably; however, layer more clearly recognizes the temporal nature of snapshots. Each snapshot is created to allow an administrator to discard unsatisfactory changes made to data after the snapshot is taken. Snapshots provide similar functionality to the Undo function present in many word processors.

Note

Snapshots of virtual machine hard disks marked shareable and those that are based on Direct LUN connections are not supported, live or otherwise.

The three primary snapshot operations are:

  • Creation, which involves the first snapshot created for a virtual machine.
  • Previews, which involves previewing a snapshot to determine whether or not to restore the system data to the point in time that the snapshot was taken.
  • Deletion, which involves deleting a restoration point that is no longer required.

For task-based information about snapshot operations, see Snapshots in the Red Hat Virtualization Virtual Machine Management Guide.

8.2. Live Snapshots in Red Hat Virtualization

Snapshots of virtual machine hard disks marked shareable and those that are based on Direct LUN connections are not supported, live or otherwise.

Any other virtual machine that is not being cloned or migrated can have a snapshot taken when running, paused, or stopped.

When a live snapshot of a virtual machine is initiated, the Manager requests that the SPM host create a new volume for the virtual machine to use. When the new volume is ready, the Manager uses VDSM to communicate with libvirt and qemu on the host running the virtual machine that it should begin using the new volume for virtual machine write operations. If the virtual machine is able to write to the new volume, the snapshot operation is considered a success and the virtual machine stops writing to the previous volume. If the virtual machine is unable to write to the new volume, the snapshot operation is considered a failure, and the new volume is deleted.

The virtual machine requires access to both its current volume and the new one from the time when a live snapshot is initiated until after the new volume is ready, so both volumes are opened with read-write access.

Virtual machines with an installed guest agent that supports quiescing can ensure filesystem consistency across snapshots. Registered Red Hat Enterprise Linux guests can install the qemu-guest-agent to enable quiescing before snapshots.
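
For example, on a registered Red Hat Enterprise Linux guest the agent can be installed and started as follows. This is a sketch; package availability depends on the repositories enabled in the guest.

[root@vm-01 ~]# yum install qemu-guest-agent
[root@vm-01 ~]# systemctl enable qemu-guest-agent
[root@vm-01 ~]# systemctl start qemu-guest-agent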

If a quiescing compatible guest agent is present on a virtual machine when a snapshot is taken, VDSM uses libvirt to communicate with the agent to prepare for a snapshot. Outstanding write actions are completed, and then filesystems are frozen before a snapshot is taken. When the snapshot is complete, and libvirt has switched the virtual machine to the new volume for disk write actions, the filesystem is thawed, and writes to disk resume.

All live snapshots are attempted with quiescing enabled. If the snapshot command fails because there is no compatible guest agent present, the live snapshot is re-initiated without the use-quiescing flag. When a virtual machine is reverted to its pre-snapshot state with quiesced filesystems, it boots cleanly with no filesystem check required. Reverting to a previous snapshot with un-quiesced filesystems requires a filesystem check on boot.

8.3. Snapshot Creation

In Red Hat Virtualization the initial snapshot for a virtual machine is different from subsequent snapshots in that the initial snapshot retains its format, either QCOW2 or raw. The first snapshot for a virtual machine uses existing volumes as a base image. Additional snapshots are additional COW layers tracking the changes made to the data stored in the image since the previous snapshot.

As depicted in Figure 8.1, “Initial Snapshot Creation”, the creation of a snapshot causes the volumes that comprise a virtual disk to serve as the base image for all subsequent snapshots.

Figure 8.1. Initial Snapshot Creation


Snapshots taken after the initial snapshot result in the creation of new COW volumes in which data that is created or changed after the snapshot is taken will be stored. Each newly created COW layer contains only COW metadata. Data that is created by using and operating the virtual machine after a snapshot is taken is written to this new COW layer. When a virtual machine is used to modify data that exists in a previous COW layer, the data is read from the previous layer, and written into the newest layer. Virtual machines locate data by checking each COW layer from most recent to oldest, transparently to the virtual machine.
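
As an illustrative sketch, the chain of COW layers behind a virtual disk can be examined with qemu-img. The image path is a placeholder; on block storage domains the volumes are logical volumes rather than files.

[root@rhev-host-01 ~]# qemu-img info --backing-chain /path/to/active-layer.qcow2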

Figure 8.2. Additional Snapshot Creation


8.4. Snapshot Previews

To select which snapshot a virtual disk will be reverted to, the administrator can preview all previously created snapshots.

From the available snapshots per guest, the administrator can select a snapshot volume to preview its contents. As depicted in Figure 8.3, “Preview Snapshot”, each snapshot is saved as a COW volume, and when it is previewed, a new preview layer is copied from the snapshot being previewed. The guest interacts with the preview instead of the actual snapshot volume.

After the administrator previews the selected snapshot, the preview can be committed to restore the guest data to the state captured in the snapshot. If the administrator commits the preview, the guest is attached to the preview layer.

After a snapshot is previewed, the administrator can select Undo to discard the preview layer of the viewed snapshot. The layer that contains the snapshot itself is preserved despite the preview layer being discarded.

Figure 8.3. Preview Snapshot


8.5. Snapshot Deletion

You can delete individual snapshots that are no longer required. Deleting a snapshot removes the ability to restore a virtual disk to that particular restoration point. It does not necessarily reclaim the disk space consumed by the snapshot, nor does it delete the data. The disk space will only be reclaimed if a subsequent snapshot has overwritten the data of the deleted snapshot. For example, if the third snapshot out of five snapshots is deleted, the unchanged data in the third snapshot must be preserved on the disk for the fourth and fifth snapshots to be usable; however, if the fourth or fifth snapshot has overwritten the data of the third, then the third snapshot has been made redundant and the disk space can be reclaimed. Aside from potential disk space reclamation, deleting a snapshot may also improve the performance of the virtual machine.

Figure 8.4. Snapshot Deletion


Snapshot deletion is handled as an asynchronous block job in which VDSM maintains a record of the operation in the recovery file for the virtual machine so that the job can be tracked even if VDSM is restarted or the virtual machine is shut down during the operation. Once the operation begins, the snapshot being deleted cannot be previewed or used as a restoration point, even if the operation fails or is interrupted. In operations in which the active layer is to be merged with its parent, the operation is split into a two-stage process during which data is copied from the active layer to the parent layer, and disk writes are mirrored to both the active layer and the parent. Finally, the job is considered complete once the data in the snapshot being deleted has been merged with its parent snapshot and VDSM synchronizes the changes throughout the image chain.

Note

If the deletion fails, fix the underlying problem (for example, a failed host, an inaccessible storage device, or even a temporary network issue) and try again.

Chapter 9. Hardware Drivers and Devices

9.1. Virtualized Hardware

Red Hat Virtualization presents three distinct types of system devices to virtualized guests. All three types appear to the virtualized guest as physically attached hardware, but the device drivers behind them work in different ways.

Emulated devices
Emulated devices, sometimes referred to as virtual devices, exist entirely in software. Emulated device drivers are a translation layer between the operating system running on the host (which manages the source device) and the operating systems running on the guests. The device-level instructions directed to and from the emulated device are intercepted and translated by the hypervisor. Any device of the same type as the one being emulated that is recognized by the Linux kernel can be used as the backing source device for the emulated drivers.
Para-virtualized Devices
Para-virtualized devices require the installation of device drivers on the guest operating system, providing it with an interface for communicating with the hypervisor on the host machine. This interface allows traditionally intensive tasks, such as disk I/O, to be performed outside of the virtualized environment. Lowering the overhead inherent in virtualization in this manner is intended to bring guest operating system performance closer to that expected when running directly on physical hardware.
Physically shared devices
Certain hardware platforms allow virtualized guests to directly access various hardware devices and components. This process in virtualization is known as passthrough or device assignment. Passthrough allows devices to appear and behave as if they were physically attached to the guest operating system.

9.2. Stable Device Addresses in Red Hat Virtualization

Virtual hardware PCI address allocations are persisted in the ovirt-engine database.

PCI addresses are allocated by QEMU at virtual machine creation time, and reported to VDSM by libvirt. VDSM reports them back to the Manager, where they are stored in the ovirt-engine database.

When a virtual machine is started, the Manager sends VDSM the device addresses from the database. VDSM passes them to libvirt, which starts the virtual machine using the PCI device addresses that were allocated when the virtual machine was run for the first time.

When a device is removed from a virtual machine, all references to it, including the stable PCI address, are also removed. If a device is added to replace the removed device, it is allocated a PCI address by QEMU, which is unlikely to be the same as the device it replaced.
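The allocate-once, reuse-afterwards behavior can be summarized with a short conceptual sketch. The names are hypothetical, and the dictionary simply stands in for the ovirt-engine database.

    import itertools

    saved_addresses = {}            # stands in for the ovirt-engine database
    _slots = itertools.count(3)     # stands in for QEMU's address allocator

    def allocate(device):
        return '0000:00:{:02x}.0'.format(next(_slots))

    def start_vm(vm_id, devices):
        if vm_id in saved_addresses:
            return saved_addresses[vm_id]          # reuse the persisted addresses
        addresses = {dev: allocate(dev) for dev in devices}   # first run: allocate
        saved_addresses[vm_id] = addresses         # reported back and stored
        return addresses

    first = start_vm('vm-1', ['nic1', 'disk1'])
    again = start_vm('vm-1', ['nic1', 'disk1'])
    assert first == again                          # addresses remain stable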

9.3. Central Processing Unit (CPU)

Each host within a cluster has a number of virtual CPUs (vCPUs). The virtual CPUs are in turn exposed to guests running on the hosts. All virtual CPUs exposed by hosts within a cluster are of the type selected when the cluster was initially created via Red Hat Virtualization Manager. Mixing of virtual CPU types within a cluster is not possible.

Each available virtual CPU type has characteristics based on physical CPUs of the same name. The virtual CPU is indistinguishable from the physical CPU to the guest operating system.

Note

Support for x2APIC:

All virtual CPU models provided by Red Hat Enterprise Linux 7 hosts include support for x2APIC. This provides an Advanced Programmable Interrupt Controller (APIC) to better handle hardware interrupts.

9.4. System Devices

System devices are critical for the guest to run and cannot be removed. Each system device attached to a guest also takes up an available PCI slot. The default system devices are:

  • Host bridge
  • ISA bridge and USB bridge (The USB and ISA bridges are the same device)
  • Graphics card using the VGA or qxl driver
  • Memory balloon device

For information about how to use PCI Express and conventional PCI devices with Intel Q35-based virtual machines, see Using PCI Express and Conventional PCI Devices with the Q35 Virtual Machine.

9.5. Network Devices

Red Hat Virtualization can expose three different types of network interface controller to guests. The type of network interface controller to expose to a guest is chosen when the guest is created, but can be changed from the Red Hat Virtualization Manager.

  • The e1000 network interface controller exposes a virtualized Intel PRO/1000 (e1000) to guests.
  • The virtio network interface controller exposes a para-virtualized network device to guests.
  • The rtl8139 network interface controller exposes a virtualized Realtek Semiconductor Corp RTL8139 to guests.

Multiple network interface controllers are permitted per guest. Each controller added takes up an available PCI slot on the guest.
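As an illustration, a para-virtualized network interface controller can be added to a virtual machine through the Python SDK (ovirtsdk4). The connection details, virtual machine name, and vNIC profile selection below are placeholders; verify the calls against the SDK reference for your version.

    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    # Illustrative only: add a virtio NIC to an existing virtual machine.
    connection = sdk.Connection(
        url='https://manager.example.com/ovirt-engine/api',
        username='admin@internal',
        password='password',
        ca_file='ca.pem',
    )
    system = connection.system_service()
    vm = system.vms_service().list(search='name=myvm')[0]
    profile = system.vnic_profiles_service().list()[0]    # pick a vNIC profile

    nics_service = system.vms_service().vm_service(vm.id).nics_service()
    nics_service.add(
        types.Nic(
            name='nic1',
            interface=types.NicInterface.VIRTIO,   # or E1000 / RTL8139
            vnic_profile=types.VnicProfile(id=profile.id),
        )
    )
    connection.close()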

9.6. Graphics Devices

The SPICE or VNC graphics protocols can be used to connect to the emulated graphics devices.

You can select a Video Type in the Administration Portal:

  • QXL: Emulates a para-virtualized video card that works best with QXL guest drivers
  • VGA: Emulates a dummy VGA card with Bochs VESA extensions
Note

Virtual machines using VNC and Cirrus, which are imported from environments with a 3.6 compatibility level or earlier, will be converted to VNC and VGA automatically. You can update the graphics protocols and video types in the Administration Portal. See Virtual Machine Console Settings Explained in the Virtual Machine Management Guide for more information.

9.7. Storage Devices

Block device drivers are used to attach storage devices and storage pools to virtualized guests. Note that the storage drivers are not themselves storage devices; they attach a backing storage device, file, or storage pool volume to a virtualized guest. The backing storage can be any supported type of storage device, file, or storage pool volume.

  • The IDE driver exposes an emulated block device to guests. The emulated IDE driver can be used to attach any combination of up to four virtualized IDE hard disks or virtualized IDE CD-ROM drives to each virtualized guest. The emulated IDE driver is also used to provide virtualized DVD-ROM drives.
  • The VirtIO driver exposes a para-virtualized block device to guests. The para-virtualized block driver is a driver for all storage devices supported by the hypervisor attached to the virtualized guest (except for floppy disk drives, which must be emulated).
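As an illustration, a disk can be created and attached with the para-virtualized (VirtIO) interface through the Python SDK (ovirtsdk4). The connection details, virtual machine name, and storage domain name are placeholders; verify the calls against the SDK reference for your version.

    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    # Illustrative only: create a 10 GiB thin-provisioned disk and attach it
    # to a virtual machine using the virtio interface.
    connection = sdk.Connection(
        url='https://manager.example.com/ovirt-engine/api',
        username='admin@internal',
        password='password',
        ca_file='ca.pem',
    )
    vms_service = connection.system_service().vms_service()
    vm = vms_service.list(search='name=myvm')[0]

    attachments = vms_service.vm_service(vm.id).disk_attachments_service()
    attachments.add(
        types.DiskAttachment(
            disk=types.Disk(
                name='data_disk',
                format=types.DiskFormat.COW,
                provisioned_size=10 * 2**30,       # 10 GiB
                storage_domains=[types.StorageDomain(name='data1')],
            ),
            interface=types.DiskInterface.VIRTIO,  # or IDE for the emulated driver
            bootable=False,
            active=True,
        )
    )
    connection.close()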

9.8. Sound Devices

Two emulated sound devices are available:

  • The ac97 emulates an Intel 82801AA AC97 Audio compatible sound card.
  • The es1370 emulates an ENSONIQ AudioPCI ES1370 sound card.

9.9. Serial Driver

The para-virtualized serial driver (virtio-serial) is a bytestream-oriented, character stream driver. It provides a simple communication interface between the host’s user space and the guest’s user space where networking is not available or is unusable.

9.10. Balloon Driver

The balloon driver allows guests to express to the hypervisor how much memory they require. It allows the host to efficiently allocate memory to the guest and allows free memory to be allocated to other guests and processes.

Guests using the balloon driver can mark sections of the guest’s RAM as not in use (balloon inflation). The hypervisor can free the memory and use the memory for other host processes or other guests on that host. When the guest requires the freed memory again, the hypervisor can reallocate RAM to the guest (balloon deflation).

Appendix A. Enumerated Value Translation

The API uses Red Hat Virtualization Query Language to perform search queries. For more information, see Searches in the Introduction to the Administration Portal.

Note that certain enumerated values in the API require a different search query when using the Query Language. The following table provides a translation for these key enumerated values according to resource type.

Table A.1. Enumerated Value Translations
API Enumerable Type       API Enumerable Value          Query Language Value

data_center_states        not_operational               notoperational
host_states               non_responsive                nonresponsive
host_states               install_failed                installfailed
host_states               preparing_for_maintenance     preparingformaintenance
host_states               non_operational               nonoperational
host_states               pending_approval              pendingapproval
vm_states                 powering_up                   poweringup
vm_states                 powering_down                 poweringdown
vm_states                 migrating                     migratingfrom
vm_states                 migrating                     migratingto
vm_states                 not_responding                notresponding
vm_states                 wait_for_launch               waitforlaunch
vm_states                 reboot_in_progress            rebootinprogress
vm_states                 saving_state                  savingstate
vm_states                 restoring_state               restoringstate
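The translations in the table can be applied mechanically when composing search strings. The following sketch is illustrative only; the helper name is hypothetical.

    # Hypothetical helper: translate API enumerated values into the form the
    # Query Language expects when building a search clause. 'migrating' is
    # omitted because it translates to two values, migratingfrom and migratingto.
    QUERY_LANGUAGE_VALUES = {
        'not_operational': 'notoperational',
        'non_responsive': 'nonresponsive',
        'install_failed': 'installfailed',
        'preparing_for_maintenance': 'preparingformaintenance',
        'non_operational': 'nonoperational',
        'pending_approval': 'pendingapproval',
        'powering_up': 'poweringup',
        'powering_down': 'poweringdown',
        'not_responding': 'notresponding',
        'wait_for_launch': 'waitforlaunch',
        'reboot_in_progress': 'rebootinprogress',
        'saving_state': 'savingstate',
        'restoring_state': 'restoringstate',
    }

    def search_clause(field, api_value):
        """Build a Query Language clause such as 'status=poweringup'."""
        return '{}={}'.format(field, QUERY_LANGUAGE_VALUES.get(api_value, api_value))

    print(search_clause('status', 'powering_up'))   # status=poweringup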

Appendix B. Event Codes

This table lists all event codes.

Table B.1. Event codes
Each entry lists, in order, the event Code, Name, Severity, and Message.

0

UNASSIGNED

Info

 

1

VDC_START

Info

Starting oVirt Engine.

2

VDC_STOP

Info

Stopping oVirt Engine.

12

VDS_FAILURE

Error

Host ${VdsName} is non responsive.

13

VDS_DETECTED

Info

Status of host ${VdsName} was set to ${HostStatus}.

14

VDS_RECOVER

Info

Host ${VdsName} is rebooting.

15

VDS_MAINTENANCE

Normal

Host ${VdsName} was switched to Maintenance Mode.

16

VDS_ACTIVATE

Info

Activation of host ${VdsName} initiated by ${UserName}.

17

VDS_MAINTENANCE_FAILED

Error

Failed to switch Host ${VdsName} to Maintenance mode.

18

VDS_ACTIVATE_FAILED

Error

Failed to activate Host ${VdsName}.(User: ${UserName}).

19

VDS_RECOVER_FAILED

Error

Host ${VdsName} failed to recover.

20

USER_VDS_START

Info

Host ${VdsName} was started by ${UserName}.

21

USER_VDS_STOP

Info

Host ${VdsName} was stopped by ${UserName}.

22

IRS_FAILURE

Error

Failed to access Storage on Host ${VdsName}.

23

VDS_LOW_DISK_SPACE

Warning

Warning, Low disk space. Host ${VdsName} has less than ${DiskSpace} MB of free space left on: ${Disks}.

24

VDS_LOW_DISK_SPACE_ERROR

Error

Critical, Low disk space. Host ${VdsName} has less than ${DiskSpace} MB of free space left on: ${Disks}. Low disk space might cause an issue upgrading this host.

25

VDS_NO_SELINUX_ENFORCEMENT

Warning

Host ${VdsName} does not enforce SELinux. Current status: ${Mode}

26

IRS_DISK_SPACE_LOW

Warning

Warning, Low disk space. ${StorageDomainName} domain has ${DiskSpace} GB of free space.

27

VDS_STATUS_CHANGE_FAILED_DUE_TO_STOP_SPM_FAILURE

Warning

Failed to change status of host ${VdsName} due to a failure to stop the spm.

28

VDS_PROVISION

Warning

Installing OS on Host ${VdsName} using Hostgroup ${HostGroupName}.

29

USER_ADD_VM_TEMPLATE_SUCCESS

Info

Template ${VmTemplateName} was created successfully.

31

USER_VDC_LOGOUT

Info

User ${UserName} connected from '${SourceIP}' using session '${SessionID}' logged out.

32

USER_RUN_VM

Info

VM ${VmName} started on Host ${VdsName}

33

USER_STOP_VM

Info

VM ${VmName} powered off by ${UserName} (Host: ${VdsName})${OptionalReason}.

34

USER_ADD_VM

Info

VM ${VmName} was created by ${UserName}.

35

USER_UPDATE_VM

Info

VM ${VmName} configuration was updated by ${UserName}.

36

USER_ADD_VM_TEMPLATE_FAILURE

Error

Failed creating Template ${VmTemplateName}.

37

USER_ADD_VM_STARTED

Info

VM ${VmName} creation was initiated by ${UserName}.

38

USER_CHANGE_DISK_VM

Info

CD ${DiskName} was inserted to VM ${VmName} by ${UserName}.

39

USER_PAUSE_VM

Info

VM ${VmName} was suspended by ${UserName} (Host: ${VdsName}).

40

USER_RESUME_VM

Info

VM ${VmName} was resumed by ${UserName} (Host: ${VdsName}).

41

USER_VDS_RESTART

Info

Host ${VdsName} was restarted by ${UserName}.

42

USER_ADD_VDS

Info

Host ${VdsName} was added by ${UserName}.

43

USER_UPDATE_VDS

Info

Host ${VdsName} configuration was updated by ${UserName}.

44

USER_REMOVE_VDS

Info

Host ${VdsName} was removed by ${UserName}.

45

USER_CREATE_SNAPSHOT

Info

Snapshot '${SnapshotName}' creation for VM '${VmName}' was initiated by ${UserName}.

46

USER_TRY_BACK_TO_SNAPSHOT

Info

Snapshot-Preview ${SnapshotName} for VM ${VmName} was initiated by ${UserName}.

47

USER_RESTORE_FROM_SNAPSHOT

Info

VM ${VmName} restored from Snapshot by ${UserName}.

48

USER_ADD_VM_TEMPLATE

Info

Creation of Template ${VmTemplateName} from VM ${VmName} was initiated by ${UserName}.

49

USER_UPDATE_VM_TEMPLATE

Info

Template ${VmTemplateName} configuration was updated by ${UserName}.

50

USER_REMOVE_VM_TEMPLATE

Info

Removal of Template ${VmTemplateName} was initiated by ${UserName}.

51

USER_ADD_VM_TEMPLATE_FINISHED_SUCCESS

Info

Creation of Template ${VmTemplateName} from VM ${VmName} has been completed.

52

USER_ADD_VM_TEMPLATE_FINISHED_FAILURE

Error

Failed to complete creation of Template ${VmTemplateName} from VM ${VmName}.

53

USER_ADD_VM_FINISHED_SUCCESS

Info

VM ${VmName} creation has been completed.

54

USER_FAILED_RUN_VM

Error

Failed to run VM ${VmName}${DueToError} (User: ${UserName}).

55

USER_FAILED_PAUSE_VM

Error

Failed to suspend VM ${VmName} (Host: ${VdsName}, User: ${UserName}).

56

USER_FAILED_STOP_VM

Error

Failed to power off VM ${VmName} (Host: ${VdsName}, User: ${UserName}).

57

USER_FAILED_ADD_VM

Error

Failed to create VM ${VmName} (User: ${UserName}).

58

USER_FAILED_UPDATE_VM

Error

Failed to update VM ${VmName} (User: ${UserName}).

59

USER_FAILED_REMOVE_VM

Error

 

60

USER_ADD_VM_FINISHED_FAILURE

Error

Failed to complete VM ${VmName} creation.

61

VM_DOWN

Info

VM ${VmName} is down. ${ExitMessage}

62

VM_MIGRATION_START

Info

Migration started (VM: ${VmName}, Source: ${VdsName}, Destination: ${DestinationVdsName}, User: ${UserName}). ${OptionalReason}

63

VM_MIGRATION_DONE

Info

Migration completed (VM: ${VmName}, Source: ${VdsName}, Destination: ${DestinationVdsName}, Duration: ${Duration}, Total: ${TotalDuration}, Actual downtime: ${ActualDowntime})

64

VM_MIGRATION_ABORT

Error

Migration failed: ${MigrationError} (VM: ${VmName}, Source: ${VdsName}).

65

VM_MIGRATION_FAILED

Error

Migration failed${DueToMigrationError} (VM: ${VmName}, Source: ${VdsName}).

66

VM_FAILURE

Error

VM ${VmName} cannot be found on Host ${VdsName}.

67

VM_MIGRATION_START_SYSTEM_INITIATED

Info

Migration initiated by system (VM: ${VmName}, Source: ${VdsName}, Destination: ${DestinationVdsName}, Reason: ${OptionalReason}).

68

USER_CREATE_SNAPSHOT_FINISHED_SUCCESS

Info

Snapshot '${SnapshotName}' creation for VM '${VmName}' has been completed.

69

USER_CREATE_SNAPSHOT_FINISHED_FAILURE

Error

Failed to complete snapshot '${SnapshotName}' creation for VM '${VmName}'.

70

USER_RUN_VM_AS_STATELESS_FINISHED_FAILURE

Error

Failed to complete starting of VM ${VmName}.

71

USER_TRY_BACK_TO_SNAPSHOT_FINISH_SUCCESS

Info

Snapshot-Preview ${SnapshotName} for VM ${VmName} has been completed.

72

MERGE_SNAPSHOTS_ON_HOST

Info

Merging snapshots (${SourceSnapshot} into ${DestinationSnapshot}) of disk ${Disk} on host ${VDS}

73

USER_INITIATED_SHUTDOWN_VM

Info

VM shutdown initiated by ${UserName} on VM ${VmName} (Host: ${VdsName})${OptionalReason}.

74

USER_FAILED_SHUTDOWN_VM

Error

Failed to initiate shutdown on VM ${VmName} (Host: ${VdsName}, User: ${UserName}).

75

VDS_SOFT_RECOVER

Info

Soft fencing on host ${VdsName} was successful.

76

USER_STOPPED_VM_INSTEAD_OF_SHUTDOWN

Info

VM ${VmName} was powered off ungracefully by ${UserName} (Host: ${VdsName})${OptionalReason}.

77

USER_FAILED_STOPPING_VM_INSTEAD_OF_SHUTDOWN

Error

Failed to power off VM ${VmName} (Host: ${VdsName}, User: ${UserName}).

78

USER_ADD_DISK_TO_VM

Info

Add-Disk operation of ${DiskAlias} was initiated on VM ${VmName} by ${UserName}.

79

USER_FAILED_ADD_DISK_TO_VM

Error

Add-Disk operation failed on VM ${VmName} (User: ${UserName}).

80

USER_REMOVE_DISK_FROM_VM

Info

Disk was removed from VM ${VmName} by ${UserName}.

81

USER_FAILED_REMOVE_DISK_FROM_VM

Error

Failed to remove Disk from VM ${VmName} (User: ${UserName}).

88

USER_UPDATE_VM_DISK

Info

VM ${VmName} ${DiskAlias} disk was updated by ${UserName}.

89

USER_FAILED_UPDATE_VM_DISK

Error

Failed to update VM ${VmName} disk ${DiskAlias} (User: ${UserName}).

90

VDS_FAILED_TO_GET_HOST_HARDWARE_INFO

Warning

Could not get hardware information for host ${VdsName}

94

USER_COMMIT_RESTORE_FROM_SNAPSHOT_START

Info

Committing a Snapshot-Preview for VM ${VmName} was initialized by ${UserName}.

95

USER_COMMIT_RESTORE_FROM_SNAPSHOT_FINISH_SUCCESS

Info

Committing a Snapshot-Preview for VM ${VmName} has been completed.

96

USER_COMMIT_RESTORE_FROM_SNAPSHOT_FINISH_FAILURE

Error

Failed to commit Snapshot-Preview for VM ${VmName}.

97

USER_ADD_DISK_TO_VM_FINISHED_SUCCESS

Info

The disk ${DiskAlias} was successfully added to VM ${VmName}.

98

USER_ADD_DISK_TO_VM_FINISHED_FAILURE

Error

Add-Disk operation failed to complete on VM ${VmName}.

99

USER_TRY_BACK_TO_SNAPSHOT_FINISH_FAILURE

Error

Failed to complete Snapshot-Preview ${SnapshotName} for VM ${VmName}.

100

USER_RESTORE_FROM_SNAPSHOT_FINISH_SUCCESS

Info

VM ${VmName} restoring from Snapshot has been completed.

101

USER_RESTORE_FROM_SNAPSHOT_FINISH_FAILURE

Error

Failed to complete restoring from Snapshot of VM ${VmName}.

102

USER_FAILED_CHANGE_DISK_VM

Error

Failed to change disk in VM ${VmName} (Host: ${VdsName}, User: ${UserName}).

103

USER_FAILED_RESUME_VM

Error

Failed to resume VM ${VmName} (Host: ${VdsName}, User: ${UserName}).

104

USER_FAILED_ADD_VDS

Error

Failed to add Host ${VdsName} (User: ${UserName}).

105

USER_FAILED_UPDATE_VDS

Error

Failed to update Host ${VdsName} (User: ${UserName}).

106

USER_FAILED_REMOVE_VDS

Error

Failed to remove Host ${VdsName} (User: ${UserName}).

107

USER_FAILED_VDS_RESTART

Error

Failed to restart Host ${VdsName}, (User: ${UserName}).

108

USER_FAILED_ADD_VM_TEMPLATE

Error

Failed to initiate creation of Template ${VmTemplateName} from VM ${VmName} (User: ${UserName}).

109

USER_FAILED_UPDATE_VM_TEMPLATE

Error

Failed to update Template ${VmTemplateName} (User: ${UserName}).

110

USER_FAILED_REMOVE_VM_TEMPLATE

Error

Failed to initiate removal of Template ${VmTemplateName} (User: ${UserName}).

111

USER_STOP_SUSPENDED_VM

Info

Suspended VM ${VmName} has had its save state cleared by ${UserName}${OptionalReason}.

112

USER_STOP_SUSPENDED_VM_FAILED

Error

Failed to power off suspended VM ${VmName} (User: ${UserName}).

113

USER_REMOVE_VM_FINISHED

Info

VM ${VmName} was successfully removed.

115

USER_FAILED_TRY_BACK_TO_SNAPSHOT

Error

Failed to preview Snapshot ${SnapshotName} for VM ${VmName} (User: ${UserName}).

116

USER_FAILED_RESTORE_FROM_SNAPSHOT

Error

Failed to restore VM ${VmName} from Snapshot (User: ${UserName}).

117

USER_FAILED_CREATE_SNAPSHOT

Error

Failed to create Snapshot ${SnapshotName} for VM ${VmName} (User: ${UserName}).

118

USER_FAILED_VDS_START

Error

Failed to start Host ${VdsName}, (User: ${UserName}).

119

VM_DOWN_ERROR

Error

VM ${VmName} is down with error. ${ExitMessage}.

120

VM_MIGRATION_TO_SERVER_FAILED

Error

Migration failed${DueToMigrationError} (VM: ${VmName}, Source: ${VdsName}, Destination: ${DestinationVdsName}).

121

SYSTEM_VDS_RESTART

Info

Host ${VdsName} was restarted by the engine.

122

SYSTEM_FAILED_VDS_RESTART

Error

A restart initiated by the engine to Host ${VdsName} has failed.

123

VDS_SLOW_STORAGE_RESPONSE_TIME

Warning

Slow storage response time on Host ${VdsName}.

124

VM_IMPORT

Info

Started VM import of ${ImportedVmName} (User: ${UserName})

125

VM_IMPORT_FAILED

Error

Failed to import VM ${ImportedVmName} (User: ${UserName})

126

VM_NOT_RESPONDING

Warning

VM ${VmName} is not responding.

127

VDS_RUN_IN_NO_KVM_MODE

Error

Host ${VdsName} running without virtualization hardware acceleration

128

VM_MIGRATION_TRYING_RERUN

Warning

Failed to migrate VM ${VmName} to Host ${DestinationVdsName}${DueToMigrationError}. Trying to migrate to another Host.

129

VM_CLEARED

Info

Unused

130

USER_SUSPEND_VM_FINISH_FAILURE_WILL_TRY_AGAIN

Error

Failed to complete suspending of VM ${VmName}, will try again.

131

USER_EXPORT_VM

Info

VM ${VmName} exported to ${ExportPath} by ${UserName}

132

USER_EXPORT_VM_FAILED

Error

Failed to export VM ${VmName} to ${ExportPath} (User: ${UserName})

133

USER_EXPORT_TEMPLATE

Info

Template ${VmTemplateName} exported to ${ExportPath} by ${UserName}

134

USER_EXPORT_TEMPLATE_FAILED

Error

Failed to export Template ${VmTemplateName} to ${ExportPath} (User: ${UserName})

135

TEMPLATE_IMPORT

Info

Started Template import of ${ImportedVmTemplateName} (User: ${UserName})

136

TEMPLATE_IMPORT_FAILED

Error

Failed to import Template ${ImportedVmTemplateName} (User: ${UserName})

137

USER_FAILED_VDS_STOP

Error

Failed to stop Host ${VdsName}, (User: ${UserName}).

138

VM_PAUSED_ENOSPC

Error

VM ${VmName} has been paused due to no Storage space error.

139

VM_PAUSED_ERROR

Error

VM ${VmName} has been paused due to unknown storage error.

140

VM_MIGRATION_FAILED_DURING_MOVE_TO_MAINTENANCE

Error

Migration failed${DueToMigrationError} while Host is in 'preparing for maintenance' state.\n Consider manual intervention\: stopping/migrating Vms as Host’s state will not\n turn to maintenance while VMs are still running on it.(VM: ${VmName}, Source: ${VdsName}, Destination: ${DestinationVdsName}).

141

VDS_VERSION_NOT_SUPPORTED_FOR_CLUSTER

Error

Host ${VdsName} is installed with VDSM version (${VdsSupportedVersions}) and cannot join cluster ${ClusterName} which is compatible with VDSM versions ${CompatibilityVersion}.

142

VM_SET_TO_UNKNOWN_STATUS

Warning

VM ${VmName} was set to the Unknown status.

143

VM_WAS_SET_DOWN_DUE_TO_HOST_REBOOT_OR_MANUAL_FENCE

Info

Vm ${VmName} was shut down due to ${VdsName} host reboot or manual fence

144

VM_IMPORT_INFO

Info

Value of field ${FieldName} of imported VM ${VmName} is ${FieldValue}. The field is reset to the default value

145

VM_PAUSED_EIO

Error

VM ${VmName} has been paused due to storage I/O problem.

146

VM_PAUSED_EPERM

Error

VM ${VmName} has been paused due to storage permissions problem.

147

VM_POWER_DOWN_FAILED

Warning

Shutdown of VM ${VmName} failed.

148

VM_MEMORY_UNDER_GUARANTEED_VALUE

Error

VM ${VmName} on host ${VdsName} was guaranteed ${MemGuaranteed} MB but currently has ${MemActual} MB

149

USER_ADD

Info

User '${NewUserName}' was added successfully to the system.

150

USER_INITIATED_RUN_VM

Info

Starting VM ${VmName} was initiated by ${UserName}.

151

USER_INITIATED_RUN_VM_FAILED

Warning

Failed to run VM ${VmName} on Host ${VdsName}.

152

USER_RUN_VM_ON_NON_DEFAULT_VDS

Warning

Guest ${VmName} started on Host ${VdsName}. (Default Host parameter was ignored - assigned Host was not available).

153

USER_STARTED_VM

Info

VM ${VmName} was started by ${UserName} (Host: ${VdsName}).

154

VDS_CLUSTER_VERSION_NOT_SUPPORTED

Error

Host ${VdsName} is compatible with versions (${VdsSupportedVersions}) and cannot join Cluster ${ClusterName} which is set to version ${CompatibilityVersion}.

155

VDS_ARCHITECTURE_NOT_SUPPORTED_FOR_CLUSTER

Error

Host ${VdsName} has architecture ${VdsArchitecture} and cannot join Cluster ${ClusterName} which has architecture ${ClusterArchitecture}.

156

CPU_TYPE_UNSUPPORTED_IN_THIS_CLUSTER_VERSION

Error

Host ${VdsName} moved to Non-Operational state as host CPU type is not supported in this cluster compatibility version or is not supported at all

157

USER_REBOOT_VM

Info

User ${UserName} initiated reboot of VM ${VmName}.

158

USER_FAILED_REBOOT_VM

Error

Failed to reboot VM ${VmName} (User: ${UserName}).

159

USER_FORCE_SELECTED_SPM

Info

Host ${VdsName} was force selected by ${UserName}

160

USER_ACCOUNT_DISABLED_OR_LOCKED

Error

User ${UserName} cannot login, as it got disabled or locked. Please contact the system administrator.

161

VM_CANCEL_MIGRATION

Info

Migration cancelled (VM: ${VmName}, Source: ${VdsName}, User: ${UserName}).

162

VM_CANCEL_MIGRATION_FAILED

Error

Failed to cancel migration for VM: ${VmName}

163

VM_STATUS_RESTORED

Info

VM ${VmName} status was restored to ${VmStatus}.

164

VM_SET_TICKET

Info

User ${UserName} initiated console session for VM ${VmName}

165

VM_SET_TICKET_FAILED

Error

User ${UserName} failed to initiate a console session for VM ${VmName}

166

VM_MIGRATION_NO_VDS_TO_MIGRATE_TO

Warning

No available host was found to migrate VM ${VmName} to.

167

VM_CONSOLE_CONNECTED

Info

User ${UserName} is connected to VM ${VmName}.

168

VM_CONSOLE_DISCONNECTED

Info

User ${UserName} got disconnected from VM ${VmName}.

169

VM_FAILED_TO_PRESTART_IN_POOL

Warning

Cannot pre-start VM in pool '${VmPoolName}'. The system will continue trying.

170

USER_CREATE_LIVE_SNAPSHOT_FINISHED_FAILURE

Warning

Failed to create live snapshot '${SnapshotName}' for VM '${VmName}'. VM restart is recommended. Note that using the created snapshot might cause data inconsistency.

171

USER_RUN_VM_AS_STATELESS_WITH_DISKS_NOT_ALLOWING_SNAPSHOT

Warning

VM ${VmName} was run as stateless with one or more of disks that do not allow snapshots (User:${UserName}).

172

USER_REMOVE_VM_FINISHED_WITH_ILLEGAL_DISKS

Warning

VM ${VmName} has been removed, but the following disks could not be removed: ${DisksNames}. These disks will appear in the main disks tab in illegal state, please remove manually when possible.

173

USER_CREATE_LIVE_SNAPSHOT_NO_MEMORY_FAILURE

Error

Failed to save memory as part of Snapshot ${SnapshotName} for VM ${VmName} (User: ${UserName}).

174

VM_IMPORT_FROM_CONFIGURATION_EXECUTED_SUCCESSFULLY

Info

VM ${VmName} has been successfully imported from the given configuration.

175

VM_IMPORT_FROM_CONFIGURATION_ATTACH_DISKS_FAILED

Warning

VM ${VmName} has been imported from the given configuration but the following disk(s) failed to attach: ${DiskAliases}.

176

VM_BALLOON_DRIVER_ERROR

Error

The Balloon driver on VM ${VmName} on host ${VdsName} is requested but unavailable.

177

VM_BALLOON_DRIVER_UNCONTROLLED

Error

The Balloon device on VM ${VmName} on host ${VdsName} is inflated but the device cannot be controlled (guest agent is down).

178

VM_MEMORY_NOT_IN_RECOMMENDED_RANGE

Warning

VM ${VmName} was configured with ${VmMemInMb}MiB of memory while the recommended value range is ${VmMinMemInMb}MiB - ${VmMaxMemInMb}MiB

179

USER_INITIATED_RUN_VM_AND_PAUSE

Info

Starting in paused mode VM ${VmName} was initiated by ${UserName}.

180

TEMPLATE_IMPORT_FROM_CONFIGURATION_SUCCESS

Info

Template ${VmTemplateName} has been successfully imported from the given configuration.

181

TEMPLATE_IMPORT_FROM_CONFIGURATION_FAILED

Error

Failed to import Template ${VmTemplateName} from the given configuration.

182

USER_FAILED_ATTACH_USER_TO_VM

Error

Failed to attach User ${AdUserName} to VM ${VmName} (User: ${UserName}).

183

USER_ATTACH_TAG_TO_TEMPLATE

Info

Tag ${TagName} was attached to Templates(s) ${TemplatesNames} by ${UserName}.

184

USER_ATTACH_TAG_TO_TEMPLATE_FAILED

Error

Failed to attach Tag ${TagName} to Templates(s) ${TemplatesNames} (User: ${UserName}).

185

USER_DETACH_TEMPLATE_FROM_TAG

Info

Tag ${TagName} was detached from Template(s) ${TemplatesNames} by ${UserName}.

186

USER_DETACH_TEMPLATE_FROM_TAG_FAILED

Error

Failed to detach Tag ${TagName} from TEMPLATE(s) ${TemplatesNames} (User: ${UserName}).

187

VDS_STORAGE_CONNECTION_FAILED_BUT_LAST_VDS

Error

Failed to connect Host ${VdsName} to Data Center, due to connectivity errors with the Storage. Host ${VdsName} will remain in Up state (but inactive), as it is the last Host in the Data Center, to enable manual intervention by the Administrator.

188

VDS_STORAGES_CONNECTION_FAILED

Error

Failed to connect Host ${VdsName} to the Storage Domains ${failedStorageDomains}.

189

VDS_STORAGE_VDS_STATS_FAILED

Error

Host ${VdsName} reports about one of the Active Storage Domains as Problematic.

190

UPDATE_OVF_FOR_STORAGE_DOMAIN_FAILED

Warning

Failed to update VMs/Templates OVF data for Storage Domain ${StorageDomainName} in Data Center ${StoragePoolName}.

191

CREATE_OVF_STORE_FOR_STORAGE_DOMAIN_FAILED

Warning

Failed to create OVF store disk for Storage Domain ${StorageDomainName}.\n The Disk with the id ${DiskId} might be removed manually for automatic attempt to create new one. \n OVF updates won’t be attempted on the created disk.

192

CREATE_OVF_STORE_FOR_STORAGE_DOMAIN_INITIATE_FAILED

Warning

Failed to create OVF store disk for Storage Domain ${StorageDomainName}. \n OVF data won’t be updated meanwhile for that domain.

193

DELETE_OVF_STORE_FOR_STORAGE_DOMAIN_FAILED

Warning

Failed to delete the OVF store disk for Storage Domain ${StorageDomainName}.\n In order to detach the domain please remove it manually or try to detach the domain again for another attempt.

194

VM_CANCEL_CONVERSION

Info

Conversion cancelled (VM: ${VmName}, Source: ${VdsName}, User: ${UserName}).

195

VM_CANCEL_CONVERSION_FAILED

Error

Failed to cancel conversion for VM: ${VmName}

196

VM_RECOVERED_FROM_PAUSE_ERROR

Normal

VM ${VmName} has recovered from paused back to up.

197

SYSTEM_SSH_HOST_RESTART

Info

Host ${VdsName} was restarted using SSH by the engine.

198

SYSTEM_FAILED_SSH_HOST_RESTART

Error

A restart using SSH initiated by the engine to Host ${VdsName} has failed.

199

USER_UPDATE_OVF_STORE

Info

OVF_STORE for domain ${StorageDomainName} was updated by ${UserName}.

200

IMPORTEXPORT_GET_VMS_INFO_FAILED

Error

Failed to retrieve VM/Templates information from export domain ${StorageDomainName}

201

IRS_DISK_SPACE_LOW_ERROR

Error

Critical, Low disk space. ${StorageDomainName} domain has ${DiskSpace} GB of free space.

202

IMPORTEXPORT_GET_EXTERNAL_VMS_INFO_FAILED

Error

Failed to retrieve VMs information from external server ${URL}

204

IRS_HOSTED_ON_VDS

Info

Storage Pool Manager runs on Host ${VdsName} (Address: ${ServerIp}), Data Center ${StoragePoolName}.

205

PROVIDER_ADDED

Info

Provider ${ProviderName} was added. (User: ${UserName})

206

PROVIDER_ADDITION_FAILED

Error

Failed to add provider ${ProviderName}. (User: ${UserName})

207

PROVIDER_UPDATED

Info

Provider ${ProviderName} was updated. (User: ${UserName})

208

PROVIDER_UPDATE_FAILED

Error

Failed to update provider ${ProviderName}. (User: ${UserName})

209

PROVIDER_REMOVED

Info

Provider ${ProviderName} was removed. (User: ${UserName})

210

PROVIDER_REMOVAL_FAILED

Error

Failed to remove provider ${ProviderName}. (User: ${UserName})

213

PROVIDER_CERTIFICATE_IMPORTED

Info

Certificate for provider ${ProviderName} was imported. (User: ${UserName})

214

PROVIDER_CERTIFICATE_IMPORT_FAILED

Error

Failed importing Certificate for provider ${ProviderName}. (User: ${UserName})

215

PROVIDER_SYNCHRONIZED

Info

 

216

PROVIDER_SYNCHRONIZED_FAILED

Error

Failed to synchronize networks of Provider ${ProviderName}.

217

PROVIDER_SYNCHRONIZED_PERFORMED

Info

Networks of Provider ${ProviderName} were successfully synchronized.

218

PROVIDER_SYNCHRONIZED_PERFORMED_FAILED

Error

Networks of Provider ${ProviderName} were incompletely synchronized.

219

PROVIDER_SYNCHRONIZED_DISABLED

Error

Failed to synchronize networks of Provider ${ProviderName}, because the authentication information of the provider is invalid. Automatic synchronization is deactivated for this Provider.

250

USER_UPDATE_VM_CLUSTER_DEFAULT_HOST_CLEARED

Info

${VmName} cluster was updated by ${UserName}, Default host was reset to auto assign.

251

USER_REMOVE_VM_TEMPLATE_FINISHED

Info

Removal of Template ${VmTemplateName} has been completed.

252

SYSTEM_FAILED_UPDATE_VM

Error

Failed to Update VM ${VmName} that was initiated by system.

253

SYSTEM_UPDATE_VM

Info

VM ${VmName} configuration was updated by system.

254

VM_ALREADY_IN_REQUESTED_STATUS

Info

VM ${VmName} is already ${VmStatus}, ${Action} was skipped. User: ${UserName}.

302

USER_ADD_VM_POOL_WITH_VMS

Info

VM Pool ${VmPoolName} (containing ${VmsCount} VMs) was created by ${UserName}.

303

USER_ADD_VM_POOL_WITH_VMS_FAILED

Error

Failed to create VM Pool ${VmPoolName} (User: ${UserName}).

304

USER_REMOVE_VM_POOL

Info

VM Pool ${VmPoolName} was removed by ${UserName}.

305

USER_REMOVE_VM_POOL_FAILED

Error

Failed to remove VM Pool ${VmPoolName} (User: ${UserName}).

306

USER_ADD_VM_TO_POOL

Info

VM ${VmName} was added to VM Pool ${VmPoolName} by ${UserName}.

307

USER_ADD_VM_TO_POOL_FAILED

Error

Failed to add VM ${VmName} to VM Pool ${VmPoolName}(User: ${UserName}).

308

USER_REMOVE_VM_FROM_POOL

Info

VM ${VmName} was removed from VM Pool ${VmPoolName} by ${UserName}.

309

USER_REMOVE_VM_FROM_POOL_FAILED

Error

Failed to remove VM ${VmName} from VM Pool ${VmPoolName} (User: ${UserName}).

310

USER_ATTACH_USER_TO_POOL

Info

User ${AdUserName} was attached to VM Pool ${VmPoolName} by ${UserName}.

311

USER_ATTACH_USER_TO_POOL_FAILED

Error

Failed to attach User ${AdUserName} to VM Pool ${VmPoolName} (User: ${UserName}).

312

USER_DETACH_USER_FROM_POOL

Info

User ${AdUserName} was detached from VM Pool ${VmPoolName} by ${UserName}.

313

USER_DETACH_USER_FROM_POOL_FAILED

Error

Failed to detach User ${AdUserName} from VM Pool ${VmPoolName} (User: ${UserName}).

314

USER_UPDATE_VM_POOL

Info

VM Pool ${VmPoolName} configuration was updated by ${UserName}.

315

USER_UPDATE_VM_POOL_FAILED

Error

Failed to update VM Pool ${VmPoolName} configuration (User: ${UserName}).

316

USER_ATTACH_USER_TO_VM_FROM_POOL

Info

Attaching User ${AdUserName} to VM ${VmName} in VM Pool ${VmPoolName} was initiated by ${UserName}.

317

USER_ATTACH_USER_TO_VM_FROM_POOL_FAILED

Error

Failed to attach User ${AdUserName} to VM from VM Pool ${VmPoolName} (User: ${UserName}).

318

USER_ATTACH_USER_TO_VM_FROM_POOL_FINISHED_SUCCESS

Info

User ${AdUserName} successfully attached to VM ${VmName} in VM Pool ${VmPoolName}.

319

USER_ATTACH_USER_TO_VM_FROM_POOL_FINISHED_FAILURE

Error

Failed to attach user ${AdUserName} to VM ${VmName} in VM Pool ${VmPoolName}.

320

USER_ADD_VM_POOL_WITH_VMS_ADD_VDS_FAILED

Error

Pool ${VmPoolName} Created, but some Vms failed to create (User: ${UserName}).

321

USER_REMOVE_VM_POOL_INITIATED

Info

VM Pool ${VmPoolName} removal was initiated by ${UserName}.

325

USER_REMOVE_ADUSER

Info

User ${AdUserName} was removed by ${UserName}.

326

USER_FAILED_REMOVE_ADUSER

Error

Failed to remove User ${AdUserName} (User: ${UserName}).

327

USER_FAILED_ADD_ADUSER

Warning

Failed to add User '${NewUserName}' to the system.

342

USER_REMOVE_SNAPSHOT

Info

Snapshot '${SnapshotName}' deletion for VM '${VmName}' was initiated by ${UserName}.

343

USER_FAILED_REMOVE_SNAPSHOT

Error

Failed to remove Snapshot ${SnapshotName} for VM ${VmName} (User: ${UserName}).

344

USER_UPDATE_VM_POOL_WITH_VMS

Info

VM Pool ${VmPoolName} was updated by ${UserName}, ${VmsCount} VMs were added.

345

USER_UPDATE_VM_POOL_WITH_VMS_FAILED

Error

Failed to update VM Pool ${VmPoolName}(User: ${UserName}).

346

USER_PASSWORD_CHANGED

Info

Password changed successfully for ${UserName}

347

USER_PASSWORD_CHANGE_FAILED

Error

Failed to change password. (User: ${UserName})

348

USER_CLEAR_UNKNOWN_VMS

Info

All VMs' status on Non Responsive Host ${VdsName} were changed to 'Down' by ${UserName}

349

USER_FAILED_CLEAR_UNKNOWN_VMS

Error

Failed to clear VMs' status on Non Responsive Host ${VdsName}. (User: ${UserName}).

350

USER_ADD_BOOKMARK

Info

Bookmark ${BookmarkName} was added by ${UserName}.

351

USER_ADD_BOOKMARK_FAILED

Error

Failed to add bookmark: ${BookmarkName} (User: ${UserName}).

352

USER_UPDATE_BOOKMARK

Info

Bookmark ${BookmarkName} was updated by ${UserName}.

353

USER_UPDATE_BOOKMARK_FAILED

Error

Failed to update bookmark: ${BookmarkName} (User: ${UserName})

354

USER_REMOVE_BOOKMARK

Info

Bookmark ${BookmarkName} was removed by ${UserName}.

355

USER_REMOVE_BOOKMARK_FAILED

Error

Failed to remove bookmark ${BookmarkName} (User: ${UserName})

356

USER_REMOVE_SNAPSHOT_FINISHED_SUCCESS

Info

Snapshot '${SnapshotName}' deletion for VM '${VmName}' has been completed.

357

USER_REMOVE_SNAPSHOT_FINISHED_FAILURE

Error

Failed to delete snapshot '${SnapshotName}' for VM '${VmName}'.

358

USER_VM_POOL_MAX_SUBSEQUENT_FAILURES_REACHED

Warning

Not all VMs where successfully created in VM Pool ${VmPoolName}.

359

USER_REMOVE_SNAPSHOT_FINISHED_FAILURE_PARTIAL_SNAPSHOT

Warning

Due to partial snapshot removal, Snapshot '${SnapshotName}' of VM '${VmName}' now contains only the following disks: '${DiskAliases}'.

360

USER_DETACH_USER_FROM_VM

Info

User ${AdUserName} was detached from VM ${VmName} by ${UserName}.

361

USER_FAILED_DETACH_USER_FROM_VM

Error

Failed to detach User ${AdUserName} from VM ${VmName} (User: ${UserName}).

362

USER_REMOVE_SNAPSHOT_FINISHED_FAILURE_BASE_IMAGE_NOT_FOUND

Error

Failed to merge images of snapshot '${SnapshotName}': base volume '${BaseVolumeId}' is missing. This may have been caused by a failed attempt to remove the parent snapshot; if this is the case, please retry deletion of the parent snapshot before deleting this one.

370

USER_EXTEND_DISK_SIZE_FAILURE

Error

Failed to extend size of the disk '${DiskAlias}' to ${NewSize} GB, User: ${UserName}.

371

USER_EXTEND_DISK_SIZE_SUCCESS

Info

Size of the disk '${DiskAlias}' was successfully updated to ${NewSize} GB by ${UserName}.

372

USER_EXTEND_DISK_SIZE_UPDATE_VM_FAILURE

Warning

Failed to update VM '${VmName}' with the new volume size. VM restart is recommended.

373

USER_REMOVE_DISK_SNAPSHOT

Info

Disk '${DiskAlias}' from Snapshot(s) '${Snapshots}' of VM '${VmName}' deletion was initiated by ${UserName}.

374

USER_FAILED_REMOVE_DISK_SNAPSHOT

Error

Failed to delete Disk '${DiskAlias}' from Snapshot(s) ${Snapshots} of VM ${VmName} (User: ${UserName}).

375

USER_REMOVE_DISK_SNAPSHOT_FINISHED_SUCCESS

Info

Disk '${DiskAlias}' from Snapshot(s) '${Snapshots}' of VM '${VmName}' deletion has been completed (User: ${UserName}).

376

USER_REMOVE_DISK_SNAPSHOT_FINISHED_FAILURE

Error

Failed to complete deletion of Disk '${DiskAlias}' from snapshot(s) '${Snapshots}' of VM '${VmName}' (User: ${UserName}).

377

USER_EXTENDED_DISK_SIZE

Info

Extending disk '${DiskAlias}' to ${NewSize} GB was initiated by ${UserName}.

378

USER_REGISTER_DISK_FINISHED_SUCCESS

Info

Disk '${DiskAlias}' has been successfully registered as a floating disk.

379

USER_REGISTER_DISK_FINISHED_FAILURE

Error

Failed to register Disk '${DiskAlias}'.

380

USER_EXTEND_DISK_SIZE_UPDATE_HOST_FAILURE

Warning

Failed to refresh volume size on host '${VdsName}'. Please try the operation again.

381

USER_REGISTER_DISK_INITIATED

Info

Registering Disk '${DiskAlias}' has been initiated.

382

USER_REDUCE_DISK_FINISHED_SUCCESS

Info

Disk '${DiskAlias}' has been successfully reduced.

383

USER_REDUCE_DISK_FINISHED_FAILURE

Error

Failed to reduce Disk '${DiskAlias}'.

400

USER_ATTACH_VM_TO_AD_GROUP

Info

Group ${GroupName} was attached to VM ${VmName} by ${UserName}.

401

USER_ATTACH_VM_TO_AD_GROUP_FAILED

Error

Failed to attach Group ${GroupName} to VM ${VmName} (User: ${UserName}).

402

USER_DETACH_VM_TO_AD_GROUP

Info

Group ${GroupName} was detached from VM ${VmName} by ${UserName}.

403

USER_DETACH_VM_TO_AD_GROUP_FAILED

Error

Failed to detach Group ${GroupName} from VM ${VmName} (User: ${UserName}).

404

USER_ATTACH_VM_POOL_TO_AD_GROUP

Info

Group ${GroupName} was attached to VM Pool ${VmPoolName} by ${UserName}.

405

USER_ATTACH_VM_POOL_TO_AD_GROUP_FAILED

Error

Failed to attach Group ${GroupName} to VM Pool ${VmPoolName} (User: ${UserName}).

406

USER_DETACH_VM_POOL_TO_AD_GROUP

Info

Group ${GroupName} was detached from VM Pool ${VmPoolName} by ${UserName}.

407

USER_DETACH_VM_POOL_TO_AD_GROUP_FAILED

Error

Failed to detach Group ${GroupName} from VM Pool ${VmPoolName} (User: ${UserName}).

408

USER_REMOVE_AD_GROUP

Info

Group ${GroupName} was removed by ${UserName}.

409

USER_REMOVE_AD_GROUP_FAILED

Error

Failed to remove group ${GroupName} (User: ${UserName}).

430

USER_UPDATE_TAG

Info

Tag ${TagName} configuration was updated by ${UserName}.

431

USER_UPDATE_TAG_FAILED

Error

Failed to update Tag ${TagName} (User: ${UserName}).

432

USER_ADD_TAG

Info

New Tag ${TagName} was created by ${UserName}.

433

USER_ADD_TAG_FAILED

Error

Failed to create Tag named ${TagName} (User: ${UserName}).

434

USER_REMOVE_TAG

Info

Tag ${TagName} was removed by ${UserName}.

435

USER_REMOVE_TAG_FAILED

Error

Failed to remove Tag ${TagName} (User: ${UserName}).

436

USER_ATTACH_TAG_TO_USER

Info

Tag ${TagName} was attached to User(s) ${AttachUsersNames} by ${UserName}.

437

USER_ATTACH_TAG_TO_USER_FAILED

Error

Failed to attach Tag ${TagName} to User(s) ${AttachUsersNames} (User: ${UserName}).

438

USER_ATTACH_TAG_TO_USER_GROUP

Info

Tag ${TagName} was attached to Group(s) ${AttachGroupsNames} by ${UserName}.

439

USER_ATTACH_TAG_TO_USER_GROUP_FAILED

Error

Failed to attach Group(s) ${AttachGroupsNames} to Tag ${TagName} (User: ${UserName}).

440

USER_ATTACH_TAG_TO_VM

Info

Tag ${TagName} was attached to VM(s) ${VmsNames} by ${UserName}.

441

USER_ATTACH_TAG_TO_VM_FAILED

Error

Failed to attach Tag ${TagName} to VM(s) ${VmsNames} (User: ${UserName}).

442

USER_ATTACH_TAG_TO_VDS

Info

Tag ${TagName} was attached to Host(s) ${VdsNames} by ${UserName}.

443

USER_ATTACH_TAG_TO_VDS_FAILED

Error

Failed to attach Tag ${TagName} to Host(s) ${VdsNames} (User: ${UserName}).

444

USER_DETACH_VDS_FROM_TAG

Info

Tag ${TagName} was detached from Host(s) ${VdsNames} by ${UserName}.

445

USER_DETACH_VDS_FROM_TAG_FAILED

Error

Failed to detach Tag ${TagName} from Host(s) ${VdsNames} (User: ${UserName}).

446

USER_DETACH_VM_FROM_TAG

Info

Tag ${TagName} was detached from VM(s) ${VmsNames} by ${UserName}.

447

USER_DETACH_VM_FROM_TAG_FAILED

Error

Failed to detach Tag ${TagName} from VM(s) ${VmsNames} (User: ${UserName}).

448

USER_DETACH_USER_FROM_TAG

Info

Tag ${TagName} detached from User(s) ${DetachUsersNames} by ${UserName}.

449

USER_DETACH_USER_FROM_TAG_FAILED

Error

Failed to detach Tag ${TagName} from User(s) ${DetachUsersNames} (User: ${UserName}).

450

USER_DETACH_USER_GROUP_FROM_TAG

Info

Tag ${TagName} was detached from Group(s) ${DetachGroupsNames} by ${UserName}.

451

USER_DETACH_USER_GROUP_FROM_TAG_FAILED

Error

Failed to detach Tag ${TagName} from Group(s) ${DetachGroupsNames} (User: ${UserName}).

452

USER_ATTACH_TAG_TO_USER_EXISTS

Warning

Tag ${TagName} already attached to User(s) ${AttachUsersNamesExists}.

453

USER_ATTACH_TAG_TO_USER_GROUP_EXISTS

Warning

Tag ${TagName} already attached to Group(s) ${AttachGroupsNamesExists}.

454

USER_ATTACH_TAG_TO_VM_EXISTS

Warning

Tag ${TagName} already attached to VM(s) ${VmsNamesExists}.

455

USER_ATTACH_TAG_TO_VDS_EXISTS

Warning

Tag ${TagName} already attached to Host(s) ${VdsNamesExists}.

456

USER_LOGGED_IN_VM

Info

User ${GuestUser} logged in to VM ${VmName}.

457

USER_LOGGED_OUT_VM

Info

User ${GuestUser} logged out from VM ${VmName}.

458

USER_LOCKED_VM

Info

User ${GuestUser} locked VM ${VmName}.

459

USER_UNLOCKED_VM

Info

User ${GuestUser} unlocked VM ${VmName}.

460

USER_ATTACH_TAG_TO_TEMPLATE_EXISTS

Warning

Tag ${TagName} already attached to Template(s) ${TemplatesNamesExists}.

467

UPDATE_TAGS_VM_DEFAULT_DISPLAY_TYPE

Info

Vm ${VmName} tag default display type was updated

468

UPDATE_TAGS_VM_DEFAULT_DISPLAY_TYPE_FAILED

Info

Failed to update Vm ${VmName} tag default display type

470

USER_ATTACH_VM_POOL_TO_AD_GROUP_INTERNAL

Info

Group ${GroupName} was attached to VM Pool ${VmPoolName}.

471

USER_ATTACH_VM_POOL_TO_AD_GROUP_FAILED_INTERNAL

Error

Failed to attach Group ${GroupName} to VM Pool ${VmPoolName}.

472

USER_ATTACH_USER_TO_POOL_INTERNAL

Info

User ${AdUserName} was attached to VM Pool ${VmPoolName}.

473

USER_ATTACH_USER_TO_POOL_FAILED_INTERNAL

Error

Failed to attach User ${AdUserName} to VM Pool ${VmPoolName} (User: ${UserName}).

493

VDS_ALREADY_IN_REQUESTED_STATUS

Warning

Host ${HostName} is already ${AgentStatus}, Power Management ${Operation} operation skipped.

494

VDS_MANUAL_FENCE_STATUS

Info

Manual fence for host ${VdsName} was started.

495

VDS_MANUAL_FENCE_STATUS_FAILED

Error

Manual fence for host ${VdsName} failed.

496

VDS_FENCE_STATUS

Info

Host ${VdsName} power management was verified successfully.

497

VDS_FENCE_STATUS_FAILED

Error

Failed to verify Host ${VdsName} power management.

498

VDS_APPROVE

Info

Host ${VdsName} was successfully approved by user ${UserName}.

499

VDS_APPROVE_FAILED

Error

Failed to approve Host ${VdsName}.

500

VDS_FAILED_TO_RUN_VMS

Error

Host ${VdsName} will be switched to Error status for ${Time} minutes because it failed to run a VM.

501

USER_SUSPEND_VM

Info

Suspending VM ${VmName} was initiated by User ${UserName} (Host: ${VdsName}).

502

USER_FAILED_SUSPEND_VM

Error

Failed to suspend VM ${VmName} (Host: ${VdsName}).

503

USER_SUSPEND_VM_OK

Info

VM ${VmName} on Host ${VdsName} is suspended.

504

VDS_INSTALL

Info

Host ${VdsName} installed

505

VDS_INSTALL_FAILED

Error

Host ${VdsName} installation failed. ${FailedInstallMessage}.

506

VDS_INITIATED_RUN_VM

Info

Trying to restart VM ${VmName} on Host ${VdsName}

509

VDS_INSTALL_IN_PROGRESS

Info

Installing Host ${VdsName}. ${Message}.

510

VDS_INSTALL_IN_PROGRESS_WARNING

Warning

Host ${VdsName} installation in progress . ${Message}.

511

VDS_INSTALL_IN_PROGRESS_ERROR

Error

An error has occurred during installation of Host ${VdsName}: ${Message}.

512

USER_SUSPEND_VM_FINISH_SUCCESS

Info

Suspending VM ${VmName} has been completed.

513

VDS_RECOVER_FAILED_VMS_UNKNOWN

Error

Host ${VdsName} cannot be reached, VMs state on this host are marked as Unknown.

514

VDS_INITIALIZING

Warning

Host ${VdsName} is initializing. Message: ${ErrorMessage}

515

VDS_CPU_LOWER_THAN_CLUSTER

Warning

Host ${VdsName} moved to Non-Operational state as host does not meet the cluster’s minimum CPU level. Missing CPU features : ${CpuFlags}

516

VDS_CPU_RETRIEVE_FAILED

Warning

Failed to determine Host ${VdsName} CPU level - could not retrieve CPU flags.

517

VDS_SET_NONOPERATIONAL

Info

Host ${VdsName} moved to Non-Operational state.

518

VDS_SET_NONOPERATIONAL_FAILED

Error

Failed to move Host ${VdsName} to Non-Operational state.

519

VDS_SET_NONOPERATIONAL_NETWORK

Warning

Host ${VdsName} does not comply with the cluster ${ClusterName} networks, the following networks are missing on host: '${Networks}'

520

USER_ATTACH_USER_TO_VM

Info

User ${AdUserName} was attached to VM ${VmName} by ${UserName}.

521

USER_SUSPEND_VM_FINISH_FAILURE

Error

Failed to complete suspending of VM ${VmName}.

522

VDS_SET_NONOPERATIONAL_DOMAIN

Warning

Host ${VdsName} cannot access the Storage Domain(s) ${StorageDomainNames} attached to the Data Center ${StoragePoolName}. Setting Host state to Non-Operational.

523

VDS_SET_NONOPERATIONAL_DOMAIN_FAILED

Error

Host ${VdsName} cannot access the Storage Domain(s) ${StorageDomainNames} attached to the Data Center ${StoragePoolName}. Failed to set Host state to Non-Operational.

524

VDS_DOMAIN_DELAY_INTERVAL

Warning

Storage domain ${StorageDomainName} experienced a high latency of ${Delay} seconds from host ${VdsName}. This may cause performance and functional issues. Please consult your Storage Administrator.

525

VDS_INITIATED_RUN_AS_STATELESS_VM_NOT_YET_RUNNING

Info

Starting VM ${VmName} as stateless was initiated.

528

USER_EJECT_VM_DISK

Info

CD was ejected from VM ${VmName} by ${UserName}.

530

VDS_MANUAL_FENCE_FAILED_CALL_FENCE_SPM

Warning

Manual fence did not revoke the selected SPM (${VdsName}) since the master storage domain\n was not active or could not use another host for the fence operation.

531

VDS_LOW_MEM

Warning

Available memory of host ${HostName} in cluster ${Cluster} [${AvailableMemory} MB] is under defined threshold [${Threshold} MB].

532

VDS_HIGH_MEM_USE

Warning

Used memory of host ${HostName} in cluster ${Cluster} [${UsedMemory}%] exceeded defined threshold [${Threshold}%].

533

VDS_HIGH_NETWORK_USE

Warning

 

534

VDS_HIGH_CPU_USE

Warning

Used CPU of host ${HostName} [${UsedCpu}%] exceeded defined threshold [${Threshold}%].

535

VDS_HIGH_SWAP_USE

Warning

Used swap memory of host ${HostName} [${UsedSwap}%] exceeded defined threshold [${Threshold}%].

536

VDS_LOW_SWAP

Warning

Available swap memory of host ${HostName} [${AvailableSwapMemory} MB] is under defined threshold [${Threshold} MB].

537

VDS_INITIATED_RUN_VM_AS_STATELESS

Info

VM ${VmName} was restarted on Host ${VdsName} as stateless

538

USER_RUN_VM_AS_STATELESS

Info

VM ${VmName} started on Host ${VdsName} as stateless

539

VDS_AUTO_FENCE_STATUS

Info

Auto fence for host ${VdsName} was started.

540

VDS_AUTO_FENCE_STATUS_FAILED

Error

Auto fence for host ${VdsName} failed.

541

VDS_AUTO_FENCE_FAILED_CALL_FENCE_SPM

Warning

Auto fence did not revoke the selected SPM (${VdsName}) since the master storage domain\n was not active or could not use another host for the fence operation.

550

VDS_PACKAGES_IN_PROGRESS

Info

Package update Host ${VdsName}. ${Message}.

551

VDS_PACKAGES_IN_PROGRESS_WARNING

Warning

Host ${VdsName} update packages in progress . ${Message}.

552

VDS_PACKAGES_IN_PROGRESS_ERROR

Error

Failed to update packages Host ${VdsName}. ${Message}.

555

USER_MOVE_TAG

Info

Tag ${TagName} was moved from ${OldParnetTagName} to ${NewParentTagName} by ${UserName}.

556

USER_MOVE_TAG_FAILED

Error

Failed to move Tag ${TagName} from ${OldParnetTagName} to ${NewParentTagName} (User: ${UserName}).

560

VDS_ANSIBLE_INSTALL_STARTED

Info

Ansible host-deploy playbook execution has started on host ${VdsName}.

561

VDS_ANSIBLE_INSTALL_FINISHED

Info

Ansible host-deploy playbook execution has successfully finished on host ${VdsName}.

562

VDS_ANSIBLE_HOST_REMOVE_STARTED

Info

Ansible host-remove playbook execution started on host ${VdsName}.

563

VDS_ANSIBLE_HOST_REMOVE_FINISHED

Info

Ansible host-remove playbook execution has successfully finished on host ${VdsName}. For more details check log ${LogFile}

564

VDS_ANSIBLE_HOST_REMOVE_FAILED

Warning

Ansible host-remove playbook execution failed on host ${VdsName}. For more details please check log ${LogFile}

565

VDS_ANSIBLE_HOST_REMOVE_EXECUTION_FAILED

Info

Ansible host-remove playbook execution failed on host ${VdsName} with message: ${Message}

600

USER_VDS_MAINTENANCE

Info

Host ${VdsName} was switched to Maintenance mode by ${UserName} (Reason: ${Reason}).

601

CPU_FLAGS_NX_IS_MISSING

Warning

Host ${VdsName} is missing the NX cpu flag. This flag can be enabled via the host BIOS. Please set Disable Execute (XD) for an Intel host, or No Execute (NX) for AMD. Please make sure to completely power off the host for this change to take effect.

602

USER_VDS_MAINTENANCE_MIGRATION_FAILED

Warning

Host ${VdsName} cannot change into maintenance mode - not all Vms have been migrated successfully. Consider manual intervention: stopping/migrating Vms: ${failedVms} (User: ${UserName}).

603

VDS_SET_NONOPERATIONAL_IFACE_DOWN

Warning

Host ${VdsName} moved to Non-Operational state because interfaces which are down are needed by required networks in the current cluster: '${NicsWithNetworks}'.

604

VDS_TIME_DRIFT_ALERT

Warning

Host ${VdsName} has time-drift of ${Actual} seconds while maximum configured value is ${Max} seconds.

605

PROXY_HOST_SELECTION

Info

Host ${Proxy} from ${Origin} was chosen as a proxy to execute fencing on Host ${VdsName}.

606

HOST_REFRESHED_CAPABILITIES

Info

Successfully refreshed the capabilities of host ${VdsName}.

607

HOST_REFRESH_CAPABILITIES_FAILED

Error

Failed to refresh the capabilities of host ${VdsName}.

608

HOST_INTERFACE_HIGH_NETWORK_USE

Warning

Host ${HostName} has network interface which exceeded the defined threshold [${Threshold}%] (${InterfaceName}: transmit rate[${TransmitRate}%], receive rate [${ReceiveRate}%])

609

HOST_INTERFACE_STATE_UP

Normal

Interface ${InterfaceName} on host ${VdsName}, changed state to up

610

HOST_INTERFACE_STATE_DOWN

Warning

Interface ${InterfaceName} on host ${VdsName}, changed state to down

611

HOST_BOND_SLAVE_STATE_UP

Normal

Slave ${SlaveName} of bond ${BondName} on host ${VdsName}, changed state to up

612

HOST_BOND_SLAVE_STATE_DOWN

Warning

Slave ${SlaveName} of bond ${BondName} on host ${VdsName}, changed state to down

613

FENCE_KDUMP_LISTENER_IS_NOT_ALIVE

Error

Unable to determine if Kdump is in progress on host ${VdsName}, because fence_kdump listener is not running.

614

KDUMP_FLOW_DETECTED_ON_VDS

Info

Kdump flow is in progress on host ${VdsName}.

615

KDUMP_FLOW_NOT_DETECTED_ON_VDS

Info

Kdump flow is not in progress on host ${VdsName}.

616

KDUMP_FLOW_FINISHED_ON_VDS

Info

Kdump flow finished on host ${VdsName}.

617

KDUMP_DETECTION_NOT_CONFIGURED_ON_VDS

Warning

Kdump integration is enabled for host ${VdsName}, but kdump is not configured properly on host.

618

HOST_REGISTRATION_FAILED_INVALID_CLUSTER

Info

No default or valid cluster was found, Host ${VdsName} registration failed

619

HOST_PROTOCOL_INCOMPATIBLE_WITH_CLUSTER

Warning

Host ${VdsName} uses not compatible protocol during activation (xmlrpc instead of jsonrpc). Please examine installation logs and VDSM logs for failures and reinstall the host.

620

USER_VDS_MAINTENANCE_WITHOUT_REASON

Info

Host ${VdsName} was switched to Maintenance mode by ${UserName}.

650

USER_UNDO_RESTORE_FROM_SNAPSHOT_START

Info

Undoing a Snapshot-Preview for VM ${VmName} was initialized by ${UserName}.

651

USER_UNDO_RESTORE_FROM_SNAPSHOT_FINISH_SUCCESS

Info

Undoing a Snapshot-Preview for VM ${VmName} has been completed.

652

USER_UNDO_RESTORE_FROM_SNAPSHOT_FINISH_FAILURE

Error

Failed to undo Snapshot-Preview for VM ${VmName}.

700

DISK_ALIGNMENT_SCAN_START

Info

Starting alignment scan of disk '${DiskAlias}'.

701

DISK_ALIGNMENT_SCAN_FAILURE

Warning

Alignment scan of disk '${DiskAlias}' failed.

702

DISK_ALIGNMENT_SCAN_SUCCESS

Info

Alignment scan of disk '${DiskAlias}' is complete.

809

USER_ADD_CLUSTER

Info

Cluster ${ClusterName} was added by ${UserName}

810

USER_ADD_CLUSTER_FAILED

Error

Failed to add Host cluster (User: ${UserName})

811

USER_UPDATE_CLUSTER

Info

Host cluster ${ClusterName} was updated by ${UserName}

812

USER_UPDATE_CLUSTER_FAILED

Error

Failed to update Host cluster (User: ${UserName})

813

USER_REMOVE_CLUSTER

Info

Host cluster ${ClusterName} was removed by ${UserName}

814

USER_REMOVE_CLUSTER_FAILED

Error

Failed to remove Host cluster (User: ${UserName})

815

USER_VDC_LOGOUT_FAILED

Error

Failed to log out user ${UserName} connected from '${SourceIP}' using session '${SessionID}'.

816

MAC_POOL_EMPTY

Warning

No MAC addresses left in the MAC Address Pool.

817

CERTIFICATE_FILE_NOT_FOUND

Error

Could not find oVirt Engine Certificate file.

818

RUN_VM_FAILED

Error

Cannot run VM ${VmName} on Host ${VdsName}. Error: ${ErrMsg}

819

VDS_REGISTER_ERROR_UPDATING_HOST

Error

Host registration failed - cannot update Host Name for Host ${VdsName2}. (Host: ${VdsName1})

820

VDS_REGISTER_ERROR_UPDATING_HOST_ALL_TAKEN

Error

Host registration failed - all available Host Names are taken. (Host: ${VdsName1})

821

VDS_REGISTER_HOST_IS_ACTIVE

Error

Host registration failed - cannot change Host Name of active Host ${VdsName2}. (Host: ${VdsName1})

822

VDS_REGISTER_ERROR_UPDATING_NAME

Error

Host registration failed - cannot update Host Name for Host ${VdsName2}. (Host: ${VdsName1})

823

VDS_REGISTER_ERROR_UPDATING_NAMES_ALL_TAKEN

Error

Host registration failed - all available Host Names are taken. (Host: ${VdsName1})

824

VDS_REGISTER_NAME_IS_ACTIVE

Error

Host registration failed - cannot change Host Name of active Host ${VdsName2}. (Host: ${VdsName1})

825

VDS_REGISTER_AUTO_APPROVE_PATTERN

Error

Host registration failed - auto approve pattern error. (Host: ${VdsName1})

826

VDS_REGISTER_FAILED

Error

Host registration failed. (Host: ${VdsName1})

827

VDS_REGISTER_EXISTING_VDS_UPDATE_FAILED

Error

Host registration failed - cannot update existing Host. (Host: ${VdsName1})

828

VDS_REGISTER_SUCCEEDED

Info

Host ${VdsName1} registered.

829

VM_MIGRATION_ON_CONNECT_CHECK_FAILED

Error

VM migration logic failed. (VM name: ${VmName})

830

VM_MIGRATION_ON_CONNECT_CHECK_SUCCEEDED

Info

Migration check failed to execute.

831

USER_VDC_SESSION_TERMINATED

Info

User ${UserName} forcibly logged out user ${TerminatedSessionUsername} connected from '${SourceIP}' using session '${SessionID}'.

832

USER_VDC_SESSION_TERMINATION_FAILED

Error

User ${UserName} failed to forcibly log out user ${TerminatedSessionUsername} connected from '${SourceIP}' using session '${SessionID}'.

833

MAC_ADDRESS_IS_IN_USE

Warning

Network Interface ${IfaceName} has MAC address ${MACAddr} which is in use.

834

VDS_REGISTER_EMPTY_ID

Warning

Host registration failed, empty host id (Host: ${VdsHostName})

835

SYSTEM_UPDATE_CLUSTER

Info

Host cluster ${ClusterName} was updated by system

836

SYSTEM_UPDATE_CLUSTER_FAILED

Info

Failed to update Host cluster by system

837

MAC_ADDRESSES_POOL_NOT_INITIALIZED

Warning

Mac Address Pool is not initialized. ${Message}

838

MAC_ADDRESS_IS_IN_USE_UNPLUG

Warning

Network Interface ${IfaceName} has MAC address ${MACAddr} which is in use, therefore it is being unplugged from VM ${VmName}.

839

HOST_AVAILABLE_UPDATES_FAILED

Error

Failed to check for available updates on host ${VdsName} with message '${Message}'.

840

HOST_UPGRADE_STARTED

Info

Host ${VdsName} upgrade was started (User: ${UserName}).

841

HOST_UPGRADE_FAILED

Error

Failed to upgrade Host ${VdsName} (User: ${UserName}).

842

HOST_UPGRADE_FINISHED

Info

Host ${VdsName} upgrade was completed successfully.

845

HOST_CERTIFICATION_IS_ABOUT_TO_EXPIRE

Warning

Host ${VdsName} certification is about to expire at ${ExpirationDate}. Please renew the host’s certification.

846

ENGINE_CERTIFICATION_HAS_EXPIRED

Info

Engine’s certification has expired at ${ExpirationDate}. Please renew the engine’s certification.

847

ENGINE_CERTIFICATION_IS_ABOUT_TO_EXPIRE

Warning

Engine’s certification is about to expire at ${ExpirationDate}. Please renew the engine’s certification.

848

ENGINE_CA_CERTIFICATION_HAS_EXPIRED

Info

Engine’s CA certification has expired at ${ExpirationDate}.

849

ENGINE_CA_CERTIFICATION_IS_ABOUT_TO_EXPIRE

Warning

Engine’s CA certification is about to expire at ${ExpirationDate}.

850

USER_ADD_PERMISSION

Info

User/Group ${SubjectName}, Namespace ${Namespace}, Authorization provider: ${Authz} was granted permission for Role ${RoleName} on ${VdcObjectType} ${VdcObjectName}, by ${UserName}.

851

USER_ADD_PERMISSION_FAILED

Error

User ${UserName} failed to grant permission for Role ${RoleName} on ${VdcObjectType} ${VdcObjectName} to User/Group ${SubjectName}.

852

USER_REMOVE_PERMISSION

Info

User/Group ${SubjectName} Role ${RoleName} permission was removed from ${VdcObjectType} ${VdcObjectName} by ${UserName}

853

USER_REMOVE_PERMISSION_FAILED

Error

User ${UserName} failed to remove permission for Role ${RoleName} from ${VdcObjectType} ${VdcObjectName} to User/Group ${SubjectName}

854

USER_ADD_ROLE

Info

Role ${RoleName} granted to ${UserName}

855

USER_ADD_ROLE_FAILED

Error

Failed to grant role ${RoleName} (User ${UserName})

856

USER_UPDATE_ROLE

Info

${UserName} Role was updated to the ${RoleName} Role

857

USER_UPDATE_ROLE_FAILED

Error

Failed to update role ${RoleName} to ${UserName}

858

USER_REMOVE_ROLE

Info

Role ${RoleName} removed from ${UserName}

859

USER_REMOVE_ROLE_FAILED

Error

Failed to remove role ${RoleName} (User ${UserName})

860

USER_ATTACHED_ACTION_GROUP_TO_ROLE

Info

Action group ${ActionGroup} was attached to Role ${RoleName} by ${UserName}

861

USER_ATTACHED_ACTION_GROUP_TO_ROLE_FAILED

Error

Failed to attach Action group ${ActionGroup} to Role ${RoleName} (User: ${UserName})

862

USER_DETACHED_ACTION_GROUP_FROM_ROLE

Info

Action group ${ActionGroup} was detached from Role ${RoleName} by ${UserName}

863

USER_DETACHED_ACTION_GROUP_FROM_ROLE_FAILED

Error

Failed to detach Action group ${ActionGroup} from Role ${RoleName} (User: ${UserName})

864

USER_ADD_ROLE_WITH_ACTION_GROUP

Info

Role ${RoleName} was added by ${UserName}

865

USER_ADD_ROLE_WITH_ACTION_GROUP_FAILED

Error

Failed to add role ${RoleName}

866

USER_ADD_SYSTEM_PERMISSION

Info

User/Group ${SubjectName} was granted permission for Role ${RoleName} on ${VdcObjectType} by ${UserName}.

867

USER_ADD_SYSTEM_PERMISSION_FAILED

Error

User ${UserName} failed to grant permission for Role ${RoleName} on ${VdcObjectType} to User/Group ${SubjectName}.

868

USER_REMOVE_SYSTEM_PERMISSION

Info

User/Group ${SubjectName} Role ${RoleName} permission was removed from ${VdcObjectType} by ${UserName}

869

USER_REMOVE_SYSTEM_PERMISSION_FAILED

Error

User ${UserName} failed to remove permission for Role ${RoleName} from ${VdcObjectType} to User/Group ${SubjectName}

870

USER_ADD_PROFILE

Info

Profile created for ${UserName}

871

USER_ADD_PROFILE_FAILED

Error

Failed to create profile for ${UserName}

872

USER_UPDATE_PROFILE

Info

Updated profile for ${UserName}

873

USER_UPDATE_PROFILE_FAILED

Error

Failed to update profile for ${UserName}

874

USER_REMOVE_PROFILE

Info

Removed profile for ${UserName}

875

USER_REMOVE_PROFILE_FAILED

Error

Failed to remove profile for ${UserName}

876

HOST_CERTIFICATION_IS_INVALID

Error

Host ${VdsName} certification is invalid. The certification has no peer certificates.

877

HOST_CERTIFICATION_HAS_EXPIRED

Info

Host ${VdsName} certification has expired at ${ExpirationDate}. Please renew the host’s certification.

878

ENGINE_CERTIFICATION_IS_ABOUT_TO_EXPIRE_ALERT

Info

Engine’s certification is about to expire at ${ExpirationDate}. Please renew the engine’s certification.

879

HOST_CERTIFICATION_IS_ABOUT_TO_EXPIRE_ALERT

Info

Host ${VdsName} certification is about to expire at ${ExpirationDate}. Please renew the host’s certification.

880

HOST_CERTIFICATION_ENROLLMENT_STARTED

Normal

Enrolling certificate for host ${VdsName} was started (User: ${UserName}).

881

HOST_CERTIFICATION_ENROLLMENT_FINISHED

Normal

Enrolling certificate for host ${VdsName} was completed successfully (User: ${UserName}).

882

HOST_CERTIFICATION_ENROLLMENT_FAILED

Error

Failed to enroll certificate for host ${VdsName} (User: ${UserName}).

883

ENGINE_CA_CERTIFICATION_IS_ABOUT_TO_EXPIRE_ALERT

Info

Engine’s CA certification is about to expire at ${ExpirationDate}.

884

HOST_AVAILABLE_UPDATES_STARTED

Info

Started to check for available updates on host ${VdsName}.

885

HOST_AVAILABLE_UPDATES_FINISHED

Info

Check for available updates on host ${VdsName} was completed successfully with message '${Message}'.

886

HOST_AVAILABLE_UPDATES_PROCESS_IS_ALREADY_RUNNING

Warning

Failed to check for available updates on host ${VdsName}: Another process is already running.

887

HOST_AVAILABLE_UPDATES_SKIPPED_UNSUPPORTED_STATUS

Warning

Failed to check for available updates on host ${VdsName}: Unsupported host status.

890

HOST_UPGRADE_FINISHED_MANUAL_HA

Warning

Host ${VdsName} upgrade was completed successfully, but the Hosted Engine HA service may still be in maintenance mode. If necessary, please correct this manually.

900

AD_COMPUTER_ACCOUNT_SUCCEEDED

Info

Account creation successful.

901

AD_COMPUTER_ACCOUNT_FAILED

Error

Account creation failed.

918

USER_FORCE_REMOVE_STORAGE_POOL

Info

Data Center ${StoragePoolName} was forcibly removed by ${UserName}

919

USER_FORCE_REMOVE_STORAGE_POOL_FAILED

Error

Failed to forcibly remove Data Center ${StoragePoolName}. (User: ${UserName})

925

MAC_ADDRESS_IS_EXTERNAL

Warning

VM ${VmName} has MAC address(es) ${MACAddr}, which is/are out of its MAC pool definitions.

926

NETWORK_REMOVE_BOND

Info

Remove bond: ${BondName} for Host: ${VdsName} (User:${UserName}).

927

NETWORK_REMOVE_BOND_FAILED

Error

Failed to remove bond: ${BondName} for Host: ${VdsName} (User:${UserName}).

928

NETWORK_VDS_NETWORK_MATCH_CLUSTER

Info

Vds ${VdsName} network matches cluster ${ClusterName}

929

NETWORK_VDS_NETWORK_NOT_MATCH_CLUSTER

Error

Vds ${VdsName} network does not match cluster ${ClusterName}

930

NETWORK_REMOVE_VM_INTERFACE

Info

Interface ${InterfaceName} (${InterfaceType}) was removed from VM ${VmName}. (User: ${UserName})

931

NETWORK_REMOVE_VM_INTERFACE_FAILED

Error

Failed to remove Interface ${InterfaceName} (${InterfaceType}) from VM ${VmName}. (User: ${UserName})

932

NETWORK_ADD_VM_INTERFACE

Info

Interface ${InterfaceName} (${InterfaceType}) was added to VM ${VmName}. (User: ${UserName})

933

NETWORK_ADD_VM_INTERFACE_FAILED

Error

Failed to add Interface ${InterfaceName} (${InterfaceType}) to VM ${VmName}. (User: ${UserName})

934

NETWORK_UPDATE_VM_INTERFACE

Info

Interface ${InterfaceName} (${InterfaceType}) was updated for VM ${VmName}. ${LinkState} (User: ${UserName})

935

NETWORK_UPDATE_VM_INTERFACE_FAILED

Error

Failed to update Interface ${InterfaceName} (${InterfaceType}) for VM ${VmName}. (User: ${UserName})

936

NETWORK_ADD_TEMPLATE_INTERFACE

Info

Interface ${InterfaceName} (${InterfaceType}) was added to Template ${VmTemplateName}. (User: ${UserName})

937

NETWORK_ADD_TEMPLATE_INTERFACE_FAILED

Error

Failed to add Interface ${InterfaceName} (${InterfaceType}) to Template ${VmTemplateName}. (User: ${UserName})

938

NETWORK_REMOVE_TEMPLATE_INTERFACE

Info

Interface ${InterfaceName} (${InterfaceType}) was removed from Template ${VmTemplateName}. (User: ${UserName})

939

NETWORK_REMOVE_TEMPLATE_INTERFACE_FAILED

Error

Failed to remove Interface ${InterfaceName} (${InterfaceType}) from Template ${VmTemplateName}. (User: ${UserName})

940

NETWORK_UPDATE_TEMPLATE_INTERFACE

Info

Interface ${InterfaceName} (${InterfaceType}) was updated for Template ${VmTemplateName}. (User: ${UserName})

941

NETWORK_UPDATE_TEMPLATE_INTERFACE_FAILED

Error

Failed to update Interface ${InterfaceName} (${InterfaceType}) for Template ${VmTemplateName}. (User: ${UserName})

942

NETWORK_ADD_NETWORK

Info

Network ${NetworkName} was added to Data Center: ${StoragePoolName}

943

NETWORK_ADD_NETWORK_FAILED

Error

Failed to add Network ${NetworkName} to Data Center: ${StoragePoolName}

944

NETWORK_REMOVE_NETWORK

Info

Network ${NetworkName} was removed from Data Center: ${StoragePoolName}

945

NETWORK_REMOVE_NETWORK_FAILED

Error

Failed to remove Network ${NetworkName} from Data Center: ${StoragePoolName}

946

NETWORK_ATTACH_NETWORK_TO_CLUSTER

Info

Network ${NetworkName} attached to Cluster ${ClusterName}

947

NETWORK_ATTACH_NETWORK_TO_CLUSTER_FAILED

Error

Failed to attach Network ${NetworkName} to Cluster ${ClusterName}

948

NETWORK_DETACH_NETWORK_TO_CLUSTER

Info

Network ${NetworkName} detached from Cluster ${ClusterName}

949

NETWORK_DETACH_NETWORK_TO_CLUSTER_FAILED

Error

Failed to detach Network ${NetworkName} from Cluster ${ClusterName}

950

USER_ADD_STORAGE_POOL

Info

Data Center ${StoragePoolName}, Compatibility Version ${CompatibilityVersion} and Quota Type ${QuotaEnforcementType} was added by ${UserName}

951

USER_ADD_STORAGE_POOL_FAILED

Error

Failed to add Data Center ${StoragePoolName}. (User: ${UserName})

952

USER_UPDATE_STORAGE_POOL

Info

Data Center ${StoragePoolName} was updated by ${UserName}

953

USER_UPDATE_STORAGE_POOL_FAILED

Error

Failed to update Data Center ${StoragePoolName}. (User: ${UserName})

954

USER_REMOVE_STORAGE_POOL

Info

Data Center ${StoragePoolName} was removed by ${UserName}

955

USER_REMOVE_STORAGE_POOL_FAILED

Error

Failed to remove Data Center ${StoragePoolName}. (User: ${UserName})

956

USER_ADD_STORAGE_DOMAIN

Info

Storage Domain ${StorageDomainName} was added by ${UserName}

957

USER_ADD_STORAGE_DOMAIN_FAILED

Error

Failed to add Storage Domain ${StorageDomainName}. (User: ${UserName})

958

USER_UPDATE_STORAGE_DOMAIN

Info

Storage Domain ${StorageDomainName} was updated by ${UserName}

959

USER_UPDATE_STORAGE_DOMAIN_FAILED

Error

Failed to update Storage Domain ${StorageDomainName}. (User: ${UserName})

960

USER_REMOVE_STORAGE_DOMAIN

Info

Storage Domain ${StorageDomainName} was removed by ${UserName}

961

USER_REMOVE_STORAGE_DOMAIN_FAILED

Error

Failed to remove Storage Domain ${StorageDomainName}. (User: ${UserName})

962

USER_ATTACH_STORAGE_DOMAIN_TO_POOL

Info

Storage Domain ${StorageDomainName} was attached to Data Center ${StoragePoolName} by ${UserName}

963

USER_ATTACH_STORAGE_DOMAIN_TO_POOL_FAILED

Error

Failed to attach Storage Domain ${StorageDomainName} to Data Center ${StoragePoolName}. (User: ${UserName})

964

USER_DETACH_STORAGE_DOMAIN_FROM_POOL

Info

Storage Domain ${StorageDomainName} was detached from Data Center ${StoragePoolName} by ${UserName}

965

USER_DETACH_STORAGE_DOMAIN_FROM_POOL_FAILED

Error

Failed to detach Storage Domain ${StorageDomainName} from Data Center ${StoragePoolName}. (User: ${UserName})

966

USER_ACTIVATED_STORAGE_DOMAIN

Info

Storage Domain ${StorageDomainName} (Data Center ${StoragePoolName}) was activated by ${UserName}

967

USER_ACTIVATE_STORAGE_DOMAIN_FAILED

Error

Failed to activate Storage Domain ${StorageDomainName} (Data Center ${StoragePoolName}) by ${UserName}

968

USER_DEACTIVATED_STORAGE_DOMAIN

Info

Storage Domain ${StorageDomainName} (Data Center ${StoragePoolName}) was deactivated and has moved to 'Preparing for maintenance' until it is no longer accessed by any Host of the Data Center.

969

USER_DEACTIVATE_STORAGE_DOMAIN_FAILED

Error

Failed to deactivate Storage Domain ${StorageDomainName} (Data Center ${StoragePoolName}).

970

SYSTEM_DEACTIVATED_STORAGE_DOMAIN

Warning

Storage Domain ${StorageDomainName} (Data Center ${StoragePoolName}) was deactivated by system because it is not visible to any of the hosts.

971

SYSTEM_DEACTIVATE_STORAGE_DOMAIN_FAILED

Error

Failed to deactivate Storage Domain ${StorageDomainName} (Data Center ${StoragePoolName}).

972

USER_EXTENDED_STORAGE_DOMAIN

Info

Storage ${StorageDomainName} has been extended by ${UserName}. Please wait for refresh.

973

USER_EXTENDED_STORAGE_DOMAIN_FAILED

Error

Failed to extend Storage Domain ${StorageDomainName}. (User: ${UserName})

974

USER_REMOVE_VG

Info

Volume group ${VgId} was removed by ${UserName}.

975

USER_REMOVE_VG_FAILED

Error

Failed to remove Volume group ${VgId}. (User: ${UserName})

976

USER_ACTIVATE_STORAGE_POOL

Info

Data Center ${StoragePoolName} was activated. (User: ${UserName})

977

USER_ACTIVATE_STORAGE_POOL_FAILED

Error

Failed to activate Data Center ${StoragePoolName}. (User: ${UserName})

978

SYSTEM_FAILED_CHANGE_STORAGE_POOL_STATUS

Error

Failed to change Data Center ${StoragePoolName} status.

979

SYSTEM_CHANGE_STORAGE_POOL_STATUS_NO_HOST_FOR_SPM

Error

Fencing failed on Storage Pool Manager ${VdsName} for Data Center ${StoragePoolName}. Setting status to Non-Operational.

980

SYSTEM_CHANGE_STORAGE_POOL_STATUS_PROBLEMATIC

Warning

Invalid status on Data Center ${StoragePoolName}. Setting status to Non Responsive.

981

USER_FORCE_REMOVE_STORAGE_DOMAIN

Info

Storage Domain ${StorageDomainName} was forcibly removed by ${UserName}

982

USER_FORCE_REMOVE_STORAGE_DOMAIN_FAILED

Error

Failed to forcibly remove Storage Domain ${StorageDomainName}. (User: ${UserName})

983

RECONSTRUCT_MASTER_FAILED_NO_MASTER

Warning

No valid Data Storage Domains are available in Data Center ${StoragePoolName} (please check your storage infrastructure).

984

RECONSTRUCT_MASTER_DONE

Info

Reconstruct Master Domain for Data Center ${StoragePoolName} completed.

985

RECONSTRUCT_MASTER_FAILED

Error

Failed to Reconstruct Master Domain for Data Center ${StoragePoolName}.

986

SYSTEM_CHANGE_STORAGE_POOL_STATUS_PROBLEMATIC_SEARCHING_NEW_SPM

Warning

Data Center is being initialized, please wait for initialization to complete.

987

SYSTEM_CHANGE_STORAGE_POOL_STATUS_PROBLEMATIC_WITH_ERROR

Warning

Invalid status on Data Center ${StoragePoolName}. Setting Data Center status to Non Responsive (On host ${VdsName}, Error: ${Error}).

988

USER_CONNECT_HOSTS_TO_LUN_FAILED

Error

Failed to connect Host ${VdsName} to device. (User: ${UserName})

989

SYSTEM_CHANGE_STORAGE_POOL_STATUS_PROBLEMATIC_FROM_NON_OPERATIONAL

Info

Try to recover Data Center ${StoragePoolName}. Setting status to Non Responsive.

990

SYSTEM_MASTER_DOMAIN_NOT_IN_SYNC

Warning

Sync Error on Master Domain between Host ${VdsName} and oVirt Engine. Domain: ${StorageDomainName} is marked as Master in oVirt Engine database but not on the Storage side. Please consult with Support on how to fix this issue.

991

RECOVERY_STORAGE_POOL

Info

Data Center ${StoragePoolName} was recovered by ${UserName}

992

RECOVERY_STORAGE_POOL_FAILED

Error

Failed to recover Data Center ${StoragePoolName} (User:${UserName})

993

SYSTEM_CHANGE_STORAGE_POOL_STATUS_RESET_IRS

Info

Data Center ${StoragePoolName} was reset. Setting status to Non Responsive (Elect new Storage Pool Manager).

994

CONNECT_STORAGE_SERVERS_FAILED

Warning

Failed to connect Host ${VdsName} to Storage Servers

995

CONNECT_STORAGE_POOL_FAILED

Warning

Failed to connect Host ${VdsName} to Storage Pool ${StoragePoolName}

996

STORAGE_DOMAIN_ERROR

Error

The error message for connection ${Connection} returned by VDSM was: ${ErrorMessage}

997

REFRESH_REPOSITORY_IMAGE_LIST_FAILED

Error

Refresh image list failed for domain(s): ${imageDomains}. Please check domain activity.

998

REFRESH_REPOSITORY_IMAGE_LIST_SUCCEEDED

Info

Refresh image list succeeded for domain(s): ${imageDomains}

999

STORAGE_ALERT_VG_METADATA_CRITICALLY_FULL

Error

The system has reached the 80% watermark on the VG metadata area size on ${StorageDomainName}. This is due to a high number of Vdisks or large Vdisk sizes allocated on this specific VG.

1000

STORAGE_ALERT_SMALL_VG_METADATA

Warning

The allocated VG metadata area size is smaller than 50MB on ${StorageDomainName}, which might limit its capacity (the number of Vdisks and/or their size).

1001

USER_RUN_VM_FAILURE_STATELESS_SNAPSHOT_LEFT

Error

Failed to start VM ${VmName} because a snapshot for the stateless state exists. The snapshot will be deleted.

1002

USER_ATTACH_STORAGE_DOMAINS_TO_POOL

Info

Storage Domains were attached to Data Center ${StoragePoolName} by ${UserName}

1003

USER_ATTACH_STORAGE_DOMAINS_TO_POOL_FAILED

Error

Failed to attach Storage Domains to Data Center ${StoragePoolName}. (User: ${UserName})

1004

STORAGE_DOMAIN_TASKS_ERROR

Warning

Storage Domain ${StorageDomainName} is down while there are tasks running on it. These tasks may fail.

1005

UPDATE_OVF_FOR_STORAGE_POOL_FAILED

Warning

Failed to update VMs/Templates OVF data in Data Center ${StoragePoolName}.

1006

UPGRADE_STORAGE_POOL_ENCOUNTERED_PROBLEMS

Warning

Data Center ${StoragePoolName} has encountered problems during upgrade process.

1007

REFRESH_REPOSITORY_IMAGE_LIST_INCOMPLETE

Warning

Refresh image list probably incomplete for domain ${imageDomain}, only ${imageListSize} images discovered.

1008

NUMBER_OF_LVS_ON_STORAGE_DOMAIN_EXCEEDED_THRESHOLD

Warning

The number of LVs on the domain ${storageDomainName} exceeded ${maxNumOfLVs}; you are approaching the limit where performance may degrade.

1009

USER_DEACTIVATE_STORAGE_DOMAIN_OVF_UPDATE_INCOMPLETE

Warning

Failed to deactivate Storage Domain ${StorageDomainName} as the engine was restarted during the operation, please retry. (Data Center ${StoragePoolName}).

1010

RELOAD_CONFIGURATIONS_SUCCESS

Info

System Configurations reloaded successfully.

1011

RELOAD_CONFIGURATIONS_FAILURE

Error

System Configurations failed to reload.

1012

NETWORK_ACTIVATE_VM_INTERFACE_SUCCESS

Info

Network Interface ${InterfaceName} (${InterfaceType}) was plugged to VM ${VmName}. (User: ${UserName})

1013

NETWORK_ACTIVATE_VM_INTERFACE_FAILURE

Error

Failed to plug Network Interface ${InterfaceName} (${InterfaceType}) to VM ${VmName}. (User: ${UserName})

1014

NETWORK_DEACTIVATE_VM_INTERFACE_SUCCESS

Info

Network Interface ${InterfaceName} (${InterfaceType}) was unplugged from VM ${VmName}. (User: ${UserName})

1015

NETWORK_DEACTIVATE_VM_INTERFACE_FAILURE

Error

Failed to unplug Network Interface ${InterfaceName} (${InterfaceType}) from VM ${VmName}. (User: ${UserName})

1016

UPDATE_FOR_OVF_STORES_FAILED

Warning

Failed to update OVF disks ${DisksIds}, OVF data isn’t updated on those OVF stores (Data Center ${DataCenterName}, Storage Domain ${StorageDomainName}).

1017

RETRIEVE_OVF_STORE_FAILED

Warning

Failed to retrieve VMs and Templates from the OVF disk of Storage Domain ${StorageDomainName}.

1018

OVF_STORE_DOES_NOT_EXISTS

Warning

This Data center compatibility version does not support importing a data domain with its entities (VMs and Templates). The imported domain will be imported without them.

1019

UPDATE_DESCRIPTION_FOR_DISK_FAILED

Error

Failed to update the meta data description of disk ${DiskName} (Data Center ${DataCenterName}, Storage Domain ${StorageDomainName}).

1020

UPDATE_DESCRIPTION_FOR_DISK_SKIPPED_SINCE_STORAGE_DOMAIN_NOT_ACTIVE

Warning

Not updating the metadata of Disk ${DiskName} (Data Center ${DataCenterName}), since the Storage Domain ${StorageDomainName} is not active.

1022

USER_REFRESH_LUN_STORAGE_DOMAIN

Info

Resize LUNs operation succeeded.

1023

USER_REFRESH_LUN_STORAGE_DOMAIN_FAILED

Error

Failed to resize LUNs.

1024

USER_REFRESH_LUN_STORAGE_DIFFERENT_SIZE_DOMAIN_FAILED

Error

Failed to resize LUNs. Not all the hosts are seeing the same LUN size.

1025

VM_PAUSED

Info

VM ${VmName} has been paused.

1026

FAILED_TO_STORE_ENTIRE_DISK_FIELD_IN_DISK_DESCRIPTION_METADATA

Warning

Failed to store field ${DiskFieldName} as a part of ${DiskAlias}'s description metadata due to storage space limitations. The field ${DiskFieldName} will be truncated.

1027

FAILED_TO_STORE_ENTIRE_DISK_FIELD_AND_REST_OF_FIELDS_IN_DISK_DESCRIPTION_METADATA

Warning

Failed to store field ${DiskFieldName} as a part of ${DiskAlias}'s description metadata due to storage space limitations. The value will be truncated and the following fields will not be stored at all: ${DiskFieldsNames}.

1028

FAILED_TO_STORE_DISK_FIELDS_IN_DISK_DESCRIPTION_METADATA

Warning

Failed to store the following fields in the description metadata of disk ${DiskAlias} due to storage space limitations: ${DiskFieldsNames}.

1029

STORAGE_DOMAIN_MOVED_TO_MAINTENANCE

Info

Storage Domain ${StorageDomainName} (Data Center ${StoragePoolName}) successfully moved to Maintenance as it’s no longer accessed by any Host of the Data Center.

1030

USER_DEACTIVATED_LAST_MASTER_STORAGE_DOMAIN

Info

Storage Domain ${StorageDomainName} (Data Center ${StoragePoolName}) was deactivated.

1031

TRANSFER_IMAGE_INITIATED

Info

Image ${TransferType} with disk ${DiskAlias} was initiated by ${UserName}.

1032

TRANSFER_IMAGE_SUCCEEDED

Info

Image ${TransferType} with disk ${DiskAlias} succeeded.

1033

TRANSFER_IMAGE_CANCELLED

Info

Image ${TransferType} with disk ${DiskAlias} was cancelled.

1034

TRANSFER_IMAGE_FAILED

Error

Image ${TransferType} with disk ${DiskAlias} failed.

1035

TRANSFER_IMAGE_TEARDOWN_FAILED

Info

Failed to tear down image ${DiskAlias} after image transfer session.

1036

USER_SCAN_STORAGE_DOMAIN_FOR_UNREGISTERED_DISKS

Info

Storage Domain ${StorageDomainName} has finished scanning for unregistered disks (initiated by ${UserName}).

1037

USER_SCAN_STORAGE_DOMAIN_FOR_UNREGISTERED_DISKS_FAILED

Error

Storage Domain ${StorageDomainName} failed to scan for unregistered disks (initiated by ${UserName}).

1039

LUNS_BROKE_SD_PASS_DISCARD_SUPPORT

Warning

Luns with IDs: [${LunsIds}] were updated in the DB but caused the storage domain ${StorageDomainName} (ID ${storageDomainId}) to stop supporting passing discard from the guest to the underlying storage. Please configure these luns' discard support in the underlying storage or disable 'Enable Discard' for vm disks on this storage domain.

1040

DISKS_WITH_ILLEGAL_PASS_DISCARD_EXIST

Warning

Disks with IDs: [${DisksIds}] have their 'Enable Discard' on even though the underlying storage does not support it. Please configure the underlying storage to support discard or disable 'Enable Discard' for these disks.

1041

USER_REMOVE_DEVICE_FROM_STORAGE_DOMAIN_FAILED

Error

Failed to remove ${LunId} from Storage Domain ${StorageDomainName}. (User: ${UserName})

1042

USER_REMOVE_DEVICE_FROM_STORAGE_DOMAIN

Info

${LunId} was removed from Storage Domain ${StorageDomainName}. (User: ${UserName})

1043

USER_REMOVE_DEVICE_FROM_STORAGE_DOMAIN_STARTED

Info

Started to remove ${LunId} from Storage Domain ${StorageDomainName}. (User: ${UserName})

1044

ILLEGAL_STORAGE_DOMAIN_DISCARD_AFTER_DELETE

Warning

The storage domain with id ${storageDomainId} has its 'Discard After Delete' enabled even though the underlying storage does not support discard. Therefore, disks and snapshots on this storage domain will not be discarded before they are removed.

1045

LUNS_BROKE_SD_DISCARD_AFTER_DELETE_SUPPORT

Warning

Luns with IDs: [${LunsIds}] were updated in the DB but caused the storage domain ${StorageDomainName} (ID ${storageDomainId}) to stop supporting discard after delete. Please configure these luns' discard support in the underlying storage or disable 'Discard After Delete' for this storage domain.

1046

STORAGE_DOMAINS_COULD_NOT_BE_SYNCED

Info

Storage domains with IDs [${StorageDomainsIds}] could not be synchronized. To synchronize them, please move them to maintenance and then activate.

1048

DIRECT_LUNS_COULD_NOT_BE_SYNCED

Info

Direct LUN disks with IDs [${DirectLunDisksIds}] could not be synchronized because there was no active host in the data center. Please synchronize them to get their latest information from the storage.

1052

OVF_STORES_UPDATE_IGNORED

Normal

OVFs update was ignored - nothing to update for storage domain '${StorageDomainName}'

1060

UPLOAD_IMAGE_CLIENT_ERROR

Error

Unable to upload image to disk ${DiskId} due to a client error. Make sure the selected file is readable.

1061

UPLOAD_IMAGE_XHR_TIMEOUT_ERROR

Error

Unable to upload image to disk ${DiskId} due to a request timeout error. The upload bandwidth might be too slow. Please try to reduce the chunk size: 'engine-config -s UploadImageChunkSizeKB

1062

UPLOAD_IMAGE_NETWORK_ERROR

Error

Unable to upload image to disk ${DiskId} due to a network error. Ensure that ovirt-imageio-proxy service is installed and configured and that ovirt-engine’s CA certificate is registered as a trusted CA in the browser. The certificate can be fetched from ${EngineUrl}/ovirt-engine/services/pki-resource?resource

1063

DOWNLOAD_IMAGE_NETWORK_ERROR

Error

Unable to download disk ${DiskId} due to a network error. Make sure ovirt-imageio-proxy service is installed and configured, and ovirt-engine’s certificate is registered as a valid CA in the browser. The certificate can be fetched from https://<engine_url>/ovirt-engine/services/pki-resource?resource

1064

TRANSFER_IMAGE_STOPPED_BY_SYSTEM_TICKET_RENEW_FAILURE

Error

Transfer was stopped by system. Reason: failure in transfer image ticket renewal.

1065

TRANSFER_IMAGE_STOPPED_BY_SYSTEM_MISSING_TICKET

Error

Transfer was stopped by system. Reason: missing transfer image ticket.

1067

TRANSFER_IMAGE_STOPPED_BY_SYSTEM_MISSING_HOST

Error

Transfer was stopped by system. Reason: Could not find a suitable host for image data transfer.

1068

TRANSFER_IMAGE_STOPPED_BY_SYSTEM_FAILED_TO_CREATE_TICKET

Error

Transfer was stopped by system. Reason: failed to create a signed image ticket.

1069

TRANSFER_IMAGE_STOPPED_BY_SYSTEM_FAILED_TO_ADD_TICKET_TO_DAEMON

Error

Transfer was stopped by system. Reason: failed to add image ticket to ovirt-imageio-daemon.

1070

TRANSFER_IMAGE_STOPPED_BY_SYSTEM_FAILED_TO_ADD_TICKET_TO_PROXY

Error

Transfer was stopped by system. Reason: failed to add image ticket to ovirt-imageio-proxy.

1071

UPLOAD_IMAGE_PAUSED_BY_SYSTEM_TIMEOUT

Error

Upload was paused by system. Reason: timeout due to transfer inactivity.

1072

DOWNLOAD_IMAGE_CANCELED_TIMEOUT

Error

Download was canceled by system. Reason: timeout due to transfer inactivity.

1073

TRANSFER_IMAGE_PAUSED_BY_USER

Normal

Image transfer was paused by user (${UserName}).

1074

TRANSFER_IMAGE_RESUMED_BY_USER

Normal

Image transfer was resumed by user (${UserName}).

1098

NETWORK_UPDATE_DISPLAY_FOR_HOST_WITH_ACTIVE_VM

Warning

Display Network was updated on Host ${VdsName} with active VMs attached. The change will be applied to those VMs after their next reboot. Running VMs might lose display connectivity until then.

1099

NETWORK_UPDATE_DISPLAY_FOR_CLUSTER_WITH_ACTIVE_VM

Warning

Display Network (${NetworkName}) was updated for Cluster ${ClusterName} with active VMs attached. The change will be applied to those VMs after their next reboot.

1100

NETWORK_UPDATE_DISPLAY_TO_CLUSTER

Info

Update Display Network (${NetworkName}) for Cluster ${ClusterName}. (User: ${UserName})

1101

NETWORK_UPDATE_DISPLAY_TO_CLUSTER_FAILED

Error

Failed to update Display Network (${NetworkName}) for Cluster ${ClusterName}. (User: ${UserName})

1102

NETWORK_UPDATE_NETWORK_TO_VDS_INTERFACE

Info

Update Network ${NetworkName} in Host ${VdsName}. (User: ${UserName})

1103

NETWORK_UPDATE_NETWORK_TO_VDS_INTERFACE_FAILED

Error

Failed to update Network ${NetworkName} in Host ${VdsName}. (User: ${UserName})

1104

NETWORK_COMMINT_NETWORK_CHANGES

Info

Network changes were saved on host ${VdsName}

1105

NETWORK_COMMINT_NETWORK_CHANGES_FAILED

Error

Failed to commit network changes on ${VdsName}

1106

NETWORK_HOST_USING_WRONG_CLUSER_VLAN

Warning

${VdsName} has the wrong VLAN ID: ${VlanIdHost}; expected VLAN ID: ${VlanIdCluster}

1107

NETWORK_HOST_MISSING_CLUSER_VLAN

Warning

${VdsName} is missing VLAN ID ${VlanIdCluster}, which is expected by the cluster

1108

VDS_NETWORK_MTU_DIFFER_FROM_LOGICAL_NETWORK

Info

 

1109

BRIDGED_NETWORK_OVER_MULTIPLE_INTERFACES

Warning

Bridged network ${NetworkName} is attached to multiple interfaces: ${Interfaces} on Host ${VdsName}.

1110

VDS_NETWORKS_OUT_OF_SYNC

Warning

Host ${VdsName}'s following network(s) are not synchronized with their Logical Network configuration: ${Networks}.

1111

VM_MIGRATION_FAILED_DURING_MOVE_TO_MAINTENANCE_NO_DESTINATION_VDS

Error

Migration failed${DueToMigrationError} while the Source Host is in 'preparing for maintenance' state. Consider manual intervention: stopping/migrating VMs, as the Host’s state will not turn to maintenance while VMs are still running on it. (VM: ${VmName}, Source: ${VdsName}).

1112

NETWORK_UPDTAE_NETWORK_ON_CLUSTER

Info

Network ${NetworkName} on Cluster ${ClusterName} updated.

1113

NETWORK_UPDTAE_NETWORK_ON_CLUSTER_FAILED

Error

Failed to update Network ${NetworkName} on Cluster ${ClusterName}.

1114

NETWORK_UPDATE_NETWORK

Info

Network ${NetworkName} was updated on Data Center: ${StoragePoolName}

1115

NETWORK_UPDATE_NETWORK_FAILED

Error

Failed to update Network ${NetworkName} on Data Center: ${StoragePoolName}

1116

NETWORK_UPDATE_VM_INTERFACE_LINK_UP

Info

Link State is UP.

1117

NETWORK_UPDATE_VM_INTERFACE_LINK_DOWN

Info

Link State is DOWN.

1118

INVALID_BOND_INTERFACE_FOR_MANAGEMENT_NETWORK_CONFIGURATION

Error

Failed to configure management network on host ${VdsName}. Host ${VdsName} has an invalid bond interface (${InterfaceName} contains less than 2 active slaves) for the management network configuration.

1119

VLAN_ID_MISMATCH_FOR_MANAGEMENT_NETWORK_CONFIGURATION

Error

Failed to configure management network on host ${VdsName}. Host ${VdsName} has an interface ${InterfaceName} for the management network configuration with VLAN-ID (${VlanId}), which is different from data-center definition (${MgmtVlanId}).

1120

SETUP_NETWORK_FAILED_FOR_MANAGEMENT_NETWORK_CONFIGURATION

Error

Failed to configure management network on host ${VdsName} due to setup networks failure.

1121

PERSIST_NETWORK_FAILED_FOR_MANAGEMENT_NETWORK

Warning

Failed to configure management network on host ${VdsName} due to failure in persisting the management network configuration.

1122

ADD_VNIC_PROFILE

Info

VM network interface profile ${VnicProfileName} was added to network ${NetworkName} in Data Center: ${DataCenterName}. (User: ${UserName})

1123

ADD_VNIC_PROFILE_FAILED

Error

Failed to add VM network interface profile ${VnicProfileName} to network ${NetworkName} in Data Center: ${DataCenterName} (User: ${UserName})

1124

UPDATE_VNIC_PROFILE

Info

VM network interface profile ${VnicProfileName} was updated for network ${NetworkName} in Data Center: ${DataCenterName}. (User: ${UserName})

1125

UPDATE_VNIC_PROFILE_FAILED

Error

Failed to update VM network interface profile ${VnicProfileName} for network ${NetworkName} in Data Center: ${DataCenterName}. (User: ${UserName})

1126

REMOVE_VNIC_PROFILE

Info

VM network interface profile ${VnicProfileName} was removed from network ${NetworkName} in Data Center: ${DataCenterName}. (User: ${UserName})

1127

REMOVE_VNIC_PROFILE_FAILED

Error

Failed to remove VM network interface profile ${VnicProfileName} from network ${NetworkName} in Data Center: ${DataCenterName}. (User: ${UserName})

1128

NETWORK_WITHOUT_INTERFACES

Warning

Network ${NetworkName} is not attached to any interface on host ${VdsName}.

1129

VNIC_PROFILE_UNSUPPORTED_FEATURES

Warning

VM ${VmName} has network interface ${NicName} which is using profile ${VnicProfile} with unsupported feature(s) '${UnsupportedFeatures}' by VM cluster ${ClusterName} (version ${CompatibilityVersion}).

1131

REMOVE_NETWORK_BY_LABEL_FAILED

Error

Network ${Network} cannot be removed from the following hosts: ${HostNames} in data-center ${StoragePoolName}.

1132

LABEL_NETWORK

Info

Network ${NetworkName} was labeled ${Label} in data-center ${StoragePoolName}.

1133

LABEL_NETWORK_FAILED

Error

Failed to label network ${NetworkName} with label ${Label} in data-center ${StoragePoolName}.

1134

UNLABEL_NETWORK

Info

Network ${NetworkName} was unlabeled in data-center ${StoragePoolName}.

1135

UNLABEL_NETWORK_FAILED

Error

Failed to unlabel network ${NetworkName} in data-center ${StoragePoolName}.

1136

LABEL_NIC

Info

Network interface card ${NicName} was labeled ${Label} on host ${VdsName}.

1137

LABEL_NIC_FAILED

Error

Failed to label network interface card ${NicName} with label ${Label} on host ${VdsName}.

1138

UNLABEL_NIC

Info

Label ${Label} was removed from network interface card ${NicName} on host ${VdsName}.

1139

UNLABEL_NIC_FAILED

Error

Failed to remove label ${Label} from network interface card ${NicName} on host ${VdsName}.

1140

SUBNET_REMOVED

Info

Subnet ${SubnetName} was removed from provider ${ProviderName}. (User: ${UserName})

1141

SUBNET_REMOVAL_FAILED

Error

Failed to remove subnet ${SubnetName} from provider ${ProviderName}. (User: ${UserName})

1142

SUBNET_ADDED

Info

Subnet ${SubnetName} was added on provider ${ProviderName}. (User: ${UserName})

1143

SUBNET_ADDITION_FAILED

Error

Failed to add subnet ${SubnetName} on provider ${ProviderName}. (User: ${UserName})

1144

CONFIGURE_NETWORK_BY_LABELS_WHEN_CHANGING_CLUSTER_FAILED

Error

Failed to configure networks on host ${VdsName} while changing its cluster.

1145

PERSIST_NETWORK_ON_HOST

Info

(${Sequence}/${Total}): Applying changes for network(s) ${NetworkNames} on host ${VdsName}. (User: ${UserName})

1146

PERSIST_NETWORK_ON_HOST_FINISHED

Info

(${Sequence}/${Total}): Successfully applied changes for network(s) ${NetworkNames} on host ${VdsName}. (User: ${UserName})

1147

PERSIST_NETWORK_ON_HOST_FAILED

Error

(${Sequence}/${Total}): Failed to apply changes for network(s) ${NetworkNames} on host ${VdsName}. (User: ${UserName})

1148

MULTI_UPDATE_NETWORK_NOT_POSSIBLE

Warning

Cannot apply network ${NetworkName} changes to hosts on unsupported data center ${StoragePoolName}. (User: ${UserName})

1149

REMOVE_PORT_FROM_EXTERNAL_PROVIDER_FAILED

Warning

Failed to remove vNIC ${NicName} from external network provider ${ProviderName}. The vNIC can be identified on the provider by device id ${NicId}.

1150

IMPORTEXPORT_EXPORT_VM

Info

Vm ${VmName} was exported successfully to ${StorageDomainName}

1151

IMPORTEXPORT_EXPORT_VM_FAILED

Error

Failed to export Vm ${VmName} to ${StorageDomainName}

1152

IMPORTEXPORT_IMPORT_VM

Info

Vm ${VmName} was imported successfully to Data Center ${StoragePoolName}, Cluster ${ClusterName}

1153

IMPORTEXPORT_IMPORT_VM_FAILED

Error

Failed to import Vm ${VmName} to Data Center ${StoragePoolName}, Cluster ${ClusterName}

1154

IMPORTEXPORT_REMOVE_TEMPLATE

Info

Template ${VmTemplateName} was removed from ${StorageDomainName}

1155

IMPORTEXPORT_REMOVE_TEMPLATE_FAILED

Error

Failed to remove Template ${VmTemplateName} from ${StorageDomainName}

1156

IMPORTEXPORT_EXPORT_TEMPLATE

Info

Template ${VmTemplateName} was exported successfully to ${StorageDomainName}

1157

IMPORTEXPORT_EXPORT_TEMPLATE_FAILED

Error

Failed to export Template ${VmTemplateName} to ${StorageDomainName}

1158

IMPORTEXPORT_IMPORT_TEMPLATE

Info

Template ${VmTemplateName} was imported successfully to Data Center ${StoragePoolName}, Cluster ${ClusterName}

1159

IMPORTEXPORT_IMPORT_TEMPLATE_FAILED

Error

Failed to import Template ${VmTemplateName} to Data Center ${StoragePoolName}, Cluster ${ClusterName}

1160

IMPORTEXPORT_REMOVE_VM

Info

Vm ${VmName} was removed from ${StorageDomainName}

1161

IMPORTEXPORT_REMOVE_VM_FAILED

Error

Failed to remove Vm ${VmName} from ${StorageDomainName}

1162

IMPORTEXPORT_STARTING_EXPORT_VM

Info

Starting to export Vm ${VmName} to ${StorageDomainName}

1163

IMPORTEXPORT_STARTING_IMPORT_TEMPLATE

Info

Starting to import Template ${VmTemplateName} to Data Center ${StoragePoolName}, Cluster ${ClusterName}

1164

IMPORTEXPORT_STARTING_EXPORT_TEMPLATE

Info

Starting to export Template ${VmTemplateName} to ${StorageDomainName}

1165

IMPORTEXPORT_STARTING_IMPORT_VM

Info

Starting to import Vm ${VmName} to Data Center ${StoragePoolName}, Cluster ${ClusterName}

1166

IMPORTEXPORT_STARTING_REMOVE_TEMPLATE

Info

Starting to remove Template ${VmTemplateName} from ${StorageDomainName}

1167

IMPORTEXPORT_STARTING_REMOVE_VM

Info

Starting to remove Vm ${VmName} from ${StorageDomainName}

1168

IMPORTEXPORT_FAILED_TO_IMPORT_VM

Warning

Failed to read VM '${ImportedVmName}' OVF, it may be corrupted. Underlying error message: ${ErrorMessage}

1169

IMPORTEXPORT_FAILED_TO_IMPORT_TEMPLATE

Warning

Failed to read Template '${Template}' OVF, it may be corrupted. Underlying error message: ${ErrorMessage}

1170

IMPORTEXPORT_IMPORT_TEMPLATE_INVALID_INTERFACES

Normal

While importing Template ${EntityName}, the Network/s ${Networks} were found to be Non-VM Networks or do not exist in Cluster. Network Name was not set in the Interface/s ${Interfaces}.

1171

USER_ACCOUNT_PASSWORD_EXPIRED

Error

User ${UserName} cannot login, as the user account password has expired. Please contact the system administrator.

1172

AUTH_FAILED_INVALID_CREDENTIALS

Error

User ${UserName} cannot login, please verify the username and password.

1173

AUTH_FAILED_CLOCK_SKEW_TOO_GREAT

Error

User ${UserName} cannot login, the engine clock is not synchronized with directory services. Please contact the system administrator.

1174

AUTH_FAILED_NO_KDCS_FOUND

Error

User ${UserName} cannot login, authentication domain cannot be found. Please contact the system administrator.

1175

AUTH_FAILED_DNS_ERROR

Error

User ${UserName} cannot login, there’s an error in DNS configuration. Please contact the system administrator.

1176

AUTH_FAILED_OTHER

Error

User ${UserName} cannot login, unknown kerberos error. Please contact the system administrator.

1177

AUTH_FAILED_DNS_COMMUNICATION_ERROR

Error

User ${UserName} cannot login, cannot lookup DNS for SRV records. Please contact the system administrator.

1178

AUTH_FAILED_CONNECTION_TIMED_OUT

Error

User ${UserName} cannot login, connection to LDAP server has timed out. Please contact the system administrator.

1179

AUTH_FAILED_WRONG_REALM

Error

User ${UserName} cannot login, please verify your domain name.

1180

AUTH_FAILED_CONNECTION_ERROR

Error

User ${UserName} cannot login, connection refused or some configuration problems exist. Possible DNS error. Please contact the system administrator.

1181

AUTH_FAILED_CANNOT_FIND_LDAP_SERVER_FOR_DOMAIN

Error

User ${UserName} cannot login, cannot find valid LDAP server for domain. Please contact the system administrator.

1182

AUTH_FAILED_NO_USER_INFORMATION_WAS_FOUND

Error

User ${UserName} cannot login, no user information was found. Please contact the system administrator.

1183

AUTH_FAILED_CLIENT_NOT_FOUND_IN_KERBEROS_DATABASE

Error

User ${UserName} cannot login, user was not found in domain. Please contact the system administrator.

1184

AUTH_FAILED_INTERNAL_KERBEROS_ERROR

Error

User ${UserName} cannot login, an internal error has occurred in the Kerberos implementation of the JVM. Please contact the system administrator.

1185

USER_ACCOUNT_EXPIRED

Error

The account for ${UserName} has expired. Please contact the system administrator.

1186

IMPORTEXPORT_NO_PROXY_HOST_AVAILABLE_IN_DC

Error

No Host in Data Center '${StoragePoolName}' can serve as a proxy to retrieve remote VMs information (User: ${UserName}).

1187

IMPORTEXPORT_HOST_CANNOT_SERVE_AS_PROXY

Error

Host ${VdsName} cannot be used as a proxy to retrieve remote VMs information since it is not up (User: ${UserName}).

1188

IMPORTEXPORT_PARTIAL_VM_MISSING_ENTITIES

Warning

The following entities could not be verified and will not be part of the imported VM ${VmName}: '${MissingEntities}' (User: ${UserName}).

1189

IMPORTEXPORT_IMPORT_VM_FAILED_UPDATING_OVF

Error

Failed to import Vm ${VmName} to Data Center ${StoragePoolName}, Cluster ${ClusterName}, could not update VM data in export.

1190

USER_RESTORE_FROM_SNAPSHOT_START

Info

Restoring VM ${VmName} from snapshot started by user ${UserName}.

1191

VM_DISK_ALREADY_CHANGED

Info

CD ${DiskName} is already inserted to VM ${VmName}, disk change action was skipped. User: ${UserName}.

1192

VM_DISK_ALREADY_EJECTED

Info

CD is already ejected from VM ${VmName}, disk change action was skipped. User: ${UserName}.

1193

IMPORTEXPORT_STARTING_CONVERT_VM

Info

Starting to convert Vm ${VmName}

1194

IMPORTEXPORT_CONVERT_FAILED

Info

Failed to convert Vm ${VmName}

1195

IMPORTEXPORT_CANNOT_GET_OVF

Info

Failed to get the configuration of converted Vm ${VmName}

1196

IMPORTEXPORT_INVALID_OVF

Info

Failed to process the configuration of converted Vm ${VmName}

1197

IMPORTEXPORT_PARTIAL_TEMPLATE_MISSING_ENTITIES

Warning

The following entities could not be verified and will not be part of the imported Template ${VmTemplateName}: '${MissingEntities}' (User: ${UserName}).

1200

ENTITY_RENAMED

Info

${EntityType} ${OldEntityName} was renamed from ${OldEntityName} to ${NewEntityName} by ${UserName}.

1201

UPDATE_HOST_NIC_VFS_CONFIG

Info

The VFs configuration of network interface card ${NicName} on host ${VdsName} was updated.

1202

UPDATE_HOST_NIC_VFS_CONFIG_FAILED

Error

Failed to update the VFs configuration of network interface card ${NicName} on host ${VdsName}.

1203

ADD_VFS_CONFIG_NETWORK

Info

Network ${NetworkName} was added to the VFs configuration of network interface card ${NicName} on host ${VdsName}.

1204

ADD_VFS_CONFIG_NETWORK_FAILED

Info

Failed to add ${NetworkName} to the VFs configuration of network interface card ${NicName} on host ${VdsName}.

1205

REMOVE_VFS_CONFIG_NETWORK

Info

Network ${NetworkName} was removed from the VFs configuration of network interface card ${NicName} on host ${VdsName}.

1206

REMOVE_VFS_CONFIG_NETWORK_FAILED

Info

Failed to remove ${NetworkName} from the VFs configuration of network interface card ${NicName} on host ${VdsName}.

1207

ADD_VFS_CONFIG_LABEL

Info

Label ${Label} was added to the VFs configuration of network interface card ${NicName} on host ${VdsName}.

1208

ADD_VFS_CONFIG_LABEL_FAILED

Info

Failed to add ${Label} to the VFs configuration of network interface card ${NicName} on host ${VdsName}.

1209

REMOVE_VFS_CONFIG_LABEL

Info

Label ${Label} was removed from the VFs configuration of network interface card ${NicName} on host ${VdsName}.

1210

REMOVE_VFS_CONFIG_LABEL_FAILED

Info

Failed to remove ${Label} from the VFs configuration of network interface card ${NicName} on host ${VdsName}.

1211

USER_REDUCE_DOMAIN_DEVICES_STARTED

Info

Started to reduce Storage ${StorageDomainName} devices. (User: ${UserName}).

1212

USER_REDUCE_DOMAIN_DEVICES_FAILED_METADATA_DEVICES

Error

Failed to reduce Storage ${StorageDomainName}. The following devices contain the domain metadata: ${deviceIds}, and cannot be removed from the domain. (User: ${UserName}).

1213

USER_REDUCE_DOMAIN_DEVICES_FAILED

Error

Failed to reduce Storage ${StorageDomainName}. (User: ${UserName}).

1214

USER_REDUCE_DOMAIN_DEVICES_SUCCEEDED

Info

Storage ${StorageDomainName} has been reduced. (User: ${UserName}).

1215

USER_REDUCE_DOMAIN_DEVICES_FAILED_NO_FREE_SPACE

Error

Can’t reduce Storage ${StorageDomainName}. There is not enough space on the destination devices of the storage domain. (User: ${UserName}).

1216

USER_REDUCE_DOMAIN_DEVICES_FAILED_TO_GET_DOMAIN_INFO

Error

Can’t reduce Storage ${StorageDomainName}. Failed to get the domain info. (User: ${UserName}).

1217

CANNOT_IMPORT_VM_WITH_LEASE_COMPAT_VERSION

Warning

The VM ${VmName} has a VM lease defined yet will be imported without it as the VM compatibility version does not support VM leases.

1218

CANNOT_IMPORT_VM_WITH_LEASE_STORAGE_DOMAIN

Warning

The VM ${VmName} has a VM lease defined yet will be imported without it as the Storage Domain for the lease does not exist or is not active.

1219

FAILED_DETERMINE_STORAGE_DOMAIN_METADATA_DEVICES

Error

Failed to determine the metadata devices of Storage Domain ${StorageDomainName}.

1220

HOT_PLUG_LEASE_FAILED

Error

Failed to hot plug lease to the VM ${VmName}. The VM is running without a VM lease.

1221

HOT_UNPLUG_LEASE_FAILED

Error

Failed to hot unplug lease from the VM ${VmName}.

1222

DETACH_DOMAIN_WITH_VMS_AND_TEMPLATES_LEASES

Warning

The deactivated domain ${storageDomainName} contained leases for the following VMs/Templates: ${entitiesNames}. Some of those VMs will not run and will require manual removal of the VM leases.

1223

IMPORTEXPORT_STARTING_EXPORT_VM_TO_OVA

Info

Starting to export Vm ${VmName} as a Virtual Appliance

1224

IMPORTEXPORT_EXPORT_VM_TO_OVA

Info

Vm ${VmName} was exported successfully as a Virtual Appliance to path ${OvaPath} on Host ${VdsName}

1225

IMPORTEXPORT_EXPORT_VM_TO_OVA_FAILED

Error

Failed to export Vm ${VmName} as a Virtual Appliance to path ${OvaPath} on Host ${VdsName}

1226

IMPORTEXPORT_STARTING_EXPORT_TEMPLATE_TO_OVA

Info

Starting to export Template ${VmTemplateName} as a Virtual Appliance

1227

IMPORTEXPORT_EXPORT_TEMPLATE_TO_OVA

Info

Template ${VmTemplateName} was exported successfully as a Virtual Appliance to path ${OvaPath} on Host ${VdsName}

1228

IMPORTEXPORT_EXPORT_TEMPLATE_TO_OVA_FAILED

Error

Failed to export Template ${VmTemplateName} as a Virtual Appliance to path ${OvaPath} on Host ${VdsName}

1300

NUMA_ADD_VM_NUMA_NODE_SUCCESS

Info

Add VM NUMA node successfully.

1301

NUMA_ADD_VM_NUMA_NODE_FAILED

Error

Add VM NUMA node failed.

1310

NUMA_UPDATE_VM_NUMA_NODE_SUCCESS

Info

Update VM NUMA node successfully.

1311

NUMA_UPDATE_VM_NUMA_NODE_FAILED

Error

Update VM NUMA node failed.

1320

NUMA_REMOVE_VM_NUMA_NODE_SUCCESS

Info

Remove VM NUMA node successfully.

1321

NUMA_REMOVE_VM_NUMA_NODE_FAILED

Error

Remove VM NUMA node failed.

1322

USER_ADD_VM_TEMPLATE_CREATE_TEMPLATE_FAILURE

Error

Failed to create Template ${VmTemplateName} or its disks from VM ${VmName}.

1323

USER_ADD_VM_TEMPLATE_ASSIGN_ILLEGAL_FAILURE

Error

Failed preparing Template ${VmTemplateName} for sealing (VM: ${VmName}).

1324

USER_ADD_VM_TEMPLATE_SEAL_FAILURE

Error

Failed to seal Template ${VmTemplateName} (VM: ${VmName}).

1325

USER_SPARSIFY_IMAGE_START

Info

Started to sparsify ${DiskAlias}

1326

USER_SPARSIFY_IMAGE_FINISH_SUCCESS

Info

${DiskAlias} sparsified successfully.

1327

USER_SPARSIFY_IMAGE_FINISH_FAILURE

Error

Failed to sparsify ${DiskAlias}.

1328

USER_AMEND_IMAGE_START

Info

Started to amend ${DiskAlias}

1329

USER_AMEND_IMAGE_FINISH_SUCCESS

Info

${DiskAlias} has been amended successfully.

1330

USER_AMEND_IMAGE_FINISH_FAILURE

Error

Failed to amend ${DiskAlias}.

1340

VM_DOES_NOT_FIT_TO_SINGLE_NUMA_NODE

Warning

VM ${VmName} does not fit into a single NUMA node on host ${HostName}. This may negatively impact its performance. Consider using vNUMA and NUMA pinning for this VM.

1400

ENTITY_RENAMED_INTERNALLY

Info

${EntityType} ${OldEntityName} was renamed from ${OldEntityName} to ${NewEntityName}.

1402

USER_LOGIN_ON_BEHALF_FAILED

Error

Failed to execute login on behalf - ${LoginOnBehalfLogInfo}.

1403

IRS_CONFIRMED_DISK_SPACE_LOW

Warning

Warning, low confirmed disk space. ${StorageDomainName} domain has ${DiskSpace} GB of confirmed free space.

2000

USER_HOTPLUG_DISK

Info

VM ${VmName} disk ${DiskAlias} was plugged by ${UserName}.

2001

USER_FAILED_HOTPLUG_DISK

Error

Failed to plug disk ${DiskAlias} to VM ${VmName} (User: ${UserName}).

2002

USER_HOTUNPLUG_DISK

Info

VM ${VmName} disk ${DiskAlias} was unplugged by ${UserName}.

2003

USER_FAILED_HOTUNPLUG_DISK

Error

Failed to unplug disk ${DiskAlias} from VM ${VmName} (User: ${UserName}).

2004

USER_COPIED_DISK

Info

User ${UserName} is copying disk ${DiskAlias} to domain ${StorageDomainName}.

2005

USER_FAILED_COPY_DISK

Error

User ${UserName} failed to copy disk ${DiskAlias} to domain ${StorageDomainName}.

2006

USER_COPIED_DISK_FINISHED_SUCCESS

Info

User ${UserName} finished copying disk ${DiskAlias} to domain ${StorageDomainName}.

2007

USER_COPIED_DISK_FINISHED_FAILURE

Error

User ${UserName} finished with error copying disk ${DiskAlias} to domain ${StorageDomainName}.

2008

USER_MOVED_DISK

Info

User ${UserName} is moving disk ${DiskAlias} to domain ${StorageDomainName}.

2009

USER_FAILED_MOVED_VM_DISK

Error

User ${UserName} failed to move disk ${DiskAlias} to domain ${StorageDomainName}.

2010

USER_MOVED_DISK_FINISHED_SUCCESS

Info

User ${UserName} finished moving disk ${DiskAlias} to domain ${StorageDomainName}.

2011

USER_MOVED_DISK_FINISHED_FAILURE

Error

User ${UserName} has failed to move disk ${DiskAlias} to domain ${StorageDomainName}.

2012

USER_FINISHED_REMOVE_DISK_NO_DOMAIN

Info

Disk ${DiskAlias} was successfully removed (User ${UserName}).

2013

USER_FINISHED_FAILED_REMOVE_DISK_NO_DOMAIN

Warning

Failed to remove disk ${DiskAlias} (User ${UserName}).

2014

USER_FINISHED_REMOVE_DISK

Info

Disk ${DiskAlias} was successfully removed from domain ${StorageDomainName} (User ${UserName}).

2015

USER_FINISHED_FAILED_REMOVE_DISK

Warning

Failed to remove disk ${DiskAlias} from storage domain ${StorageDomainName} (User: ${UserName}).

2016

USER_ATTACH_DISK_TO_VM

Info

Disk ${DiskAlias} was successfully attached to VM ${VmName} by ${UserName}.

2017

USER_FAILED_ATTACH_DISK_TO_VM

Error

Failed to attach Disk ${DiskAlias} to VM ${VmName} (User: ${UserName}).

2018

USER_DETACH_DISK_FROM_VM

Info

Disk ${DiskAlias} was successfully detached from VM ${VmName} by ${UserName}.

2019

USER_FAILED_DETACH_DISK_FROM_VM

Error

Failed to detach Disk ${DiskAlias} from VM ${VmName} (User: ${UserName}).

2020

USER_ADD_DISK

Info

Add-Disk operation of '${DiskAlias}' was initiated by ${UserName}.

2021

USER_ADD_DISK_FINISHED_SUCCESS

Info

The disk '${DiskAlias}' was successfully added.

2022

USER_ADD_DISK_FINISHED_FAILURE

Error

Add-Disk operation failed to complete.

2023

USER_FAILED_ADD_DISK

Error

Add-Disk operation failed (User: ${UserName}).

2024

USER_RUN_UNLOCK_ENTITY_SCRIPT

Info

 

2025

USER_MOVE_IMAGE_GROUP_FAILED_TO_DELETE_SRC_IMAGE

Warning

Possible failure while deleting ${DiskAlias} from the source Storage Domain ${StorageDomainName} during the move operation. The Storage Domain may be manually cleaned-up from possible leftovers (User:${UserName}).

2026

USER_MOVE_IMAGE_GROUP_FAILED_TO_DELETE_DST_IMAGE

Warning

Possible failure while clearing possible leftovers of ${DiskAlias} from the target Storage Domain ${StorageDomainName} after the move operation failed to copy the image to it properly. The Storage Domain may be manually cleaned-up from possible leftovers (User:${UserName}).

2027

USER_IMPORT_IMAGE

Info

User ${UserName} is importing image ${RepoImageName} to domain ${StorageDomainName}.

2028

USER_IMPORT_IMAGE_FINISHED_SUCCESS

Info

User ${UserName} successfully imported image ${RepoImageName} to domain ${StorageDomainName}.

2029

USER_IMPORT_IMAGE_FINISHED_FAILURE

Error

User ${UserName} failed to import image ${RepoImageName} to domain ${StorageDomainName}.

2030

USER_EXPORT_IMAGE

Info

User ${UserName} is exporting image ${RepoImageName} to domain ${DestinationStorageDomainName}.

2031

USER_EXPORT_IMAGE_FINISHED_SUCCESS

Info

User ${UserName} successfully exported image ${RepoImageName} to domain ${DestinationStorageDomainName}.

2032

USER_EXPORT_IMAGE_FINISHED_FAILURE

Error

User ${UserName} failed to export image ${RepoImageName} to domain ${DestinationStorageDomainName}.

2033

HOT_SET_NUMBER_OF_CPUS

Info

Hotplug CPU: changed the number of CPUs on VM ${vmName} from ${previousNumberOfCpus} to ${numberOfCpus}

2034

FAILED_HOT_SET_NUMBER_OF_CPUS

Error

Failed to hot set the number of CPUs on VM ${vmName}. Underlying error message: ${ErrorMessage}

2035

USER_ISCSI_BOND_HOST_RESTART_WARNING

Warning

The following networks have been removed from the iSCSI bond ${IscsiBondName}: ${NetworkNames}. For these changes to take effect, the hosts must be moved to maintenance and activated again.

2036

ADD_DISK_INTERNAL

Info

Add-Disk operation of '${DiskAlias}' was initiated by the system.

2037

ADD_DISK_INTERNAL_FAILURE

Info

Add-Disk operation of '${DiskAlias}' failed to complete.

2038

USER_REMOVE_DISK_INITIATED

Info

Removal of Disk ${DiskAlias} from domain ${StorageDomainName} was initiated by ${UserName}.

2039

HOT_SET_MEMORY

Info

Hotset memory: changed the amount of memory on VM ${vmName} from ${previousMem} to ${newMem}

2040

FAILED_HOT_SET_MEMORY

Error

Failed to hot set memory to VM ${vmName}. Underlying error message: ${ErrorMessage}

2041

DISK_PREALLOCATION_FAILED

Error

 

2042

USER_FINISHED_REMOVE_DISK_ATTACHED_TO_VMS

Info

Disk ${DiskAlias} associated to the VMs ${VmNames} was successfully removed from domain ${StorageDomainName} (User ${UserName}).

2043

USER_FINISHED_REMOVE_DISK_ATTACHED_TO_VMS_NO_DOMAIN

Info

Disk ${DiskAlias} associated to the VMs ${VmNames} was successfully removed (User ${UserName}).

2044

USER_REMOVE_DISK_ATTACHED_TO_VMS_INITIATED

Info

Removal of Disk ${DiskAlias} associated to the VMs ${VmNames} from domain ${StorageDomainName} was initiated by ${UserName}.

2045

USER_COPY_IMAGE_GROUP_FAILED_TO_DELETE_DST_IMAGE

Warning

Possible failure while clearing possible leftovers of ${DiskAlias} from the target Storage Domain ${StorageDomainName} after the operation failed. The Storage Domain may be manually cleaned-up from possible leftovers (User:${UserName}).

2046

MEMORY_HOT_UNPLUG_SUCCESSFULLY_REQUESTED

Info

Hot unplug of memory device (${deviceId}) of size ${memoryDeviceSizeMb}MB was successfully requested on VM '${vmName}'. Physical memory guaranteed updated from ${oldMinMemoryMb}MB to ${newMinMemoryMb}MB.

2047

MEMORY_HOT_UNPLUG_FAILED

Error

Failed to hot unplug memory device (${deviceId}) of size ${memoryDeviceSizeMb}MiB out of VM '${vmName}': ${errorMessage}

2048

FAILED_HOT_SET_MEMORY_NOT_DIVIDABLE

Error

Failed to hot plug memory to VM ${vmName}. Amount of added memory (${memoryAdded}MiB) is not dividable by ${requiredFactor}MiB.

2049

MEMORY_HOT_UNPLUG_SUCCESSFULLY_REQUESTED_PLUS_MEMORY_INFO

Info

Hot unplug of memory device (${deviceId}) of size ${memoryDeviceSizeMb}MiB was successfully requested on VM '${vmName}'. Defined Memory updated from ${oldMemoryMb}MiB to ${newMemoryMb}MiB. Physical memory guaranteed updated from ${oldMinMemoryMb}MiB to ${newMinMemoryMb}MiB.

2050

NO_MEMORY_DEVICE_TO_HOT_UNPLUG

Info

Defined memory can’t be decreased. There are no hot plugged memory devices on VM ${vmName}.

2051

NO_SUITABLE_MEMORY_DEVICE_TO_HOT_UNPLUG

Info

There is no memory device to hot unplug to satisfy request to decrement memory from ${oldMemoryMb}MiB to ${newMemoryMB}MiB on VM ${vmName}. Available memory devices (decremented memory sizes): ${memoryHotUnplugOptions}.

3000

USER_ADD_QUOTA

Info

Quota ${QuotaName} has been added by ${UserName}.

3001

USER_FAILED_ADD_QUOTA

Error

Failed to add Quota ${QuotaName}. The operation was initiated by ${UserName}.

3002

USER_UPDATE_QUOTA

Info

Quota ${QuotaName} has been updated by ${UserName}.

3003

USER_FAILED_UPDATE_QUOTA

Error

Failed to update Quota ${QuotaName}. The operation was initiated by ${UserName}.

3004

USER_DELETE_QUOTA

Info

Quota ${QuotaName} has been deleted by ${UserName}.

3005

USER_FAILED_DELETE_QUOTA

Error

Failed to delete Quota ${QuotaName}. The operation was initiated by ${UserName}.

3006

USER_EXCEEDED_QUOTA_CLUSTER_GRACE_LIMIT

Error

Cluster-Quota ${QuotaName} limit exceeded and operation was blocked. Utilization: ${Utilization}, Requested: ${Requested} - Please select a different quota or contact your administrator to extend the quota.

3007

USER_EXCEEDED_QUOTA_CLUSTER_LIMIT

Warning

Cluster-Quota ${QuotaName} limit exceeded and entered the grace zone. Utilization: ${Utilization} (It is advised to select a different quota or contact your administrator to extend the quota).

3008

USER_EXCEEDED_QUOTA_CLUSTER_THRESHOLD

Warning

Cluster-Quota ${QuotaName} is about to exceed. Utilization: ${Utilization}

3009

USER_EXCEEDED_QUOTA_STORAGE_GRACE_LIMIT

Error

Storage-Quota ${QuotaName} limit exceeded and operation was blocked. Utilization (used/requested): ${CurrentStorage}%/${Requested}% - Please select a different quota or contact your administrator to extend the quota.

3010

USER_EXCEEDED_QUOTA_STORAGE_LIMIT

Warning

Storage-Quota ${QuotaName} limit exceeded and entered the grace zone. Utilization: ${CurrentStorage}% (It is advised to select a different quota or contact your administrator to extend the quota).

3011

USER_EXCEEDED_QUOTA_STORAGE_THRESHOLD

Warning

Storage-Quota ${QuotaName} is about to exceed. Utilization: ${CurrentStorage}%

3012

QUOTA_STORAGE_RESIZE_LOWER_THEN_CONSUMPTION

Warning

Storage-Quota ${QuotaName}: the new size set for this quota is less than current disk utilization.

3013

MISSING_QUOTA_STORAGE_PARAMETERS_PERMISSIVE_MODE

Warning

Missing Quota for Disk, proceeding since in Permissive (Audit) mode.

3014

MISSING_QUOTA_CLUSTER_PARAMETERS_PERMISSIVE_MODE

Warning

Missing Quota for VM ${VmName}, proceeding since in Permissive (Audit) mode.

3015

USER_EXCEEDED_QUOTA_CLUSTER_GRACE_LIMIT_PERMISSIVE_MODE

Warning

Cluster-Quota ${QuotaName} limit exceeded, proceeding since in Permissive (Audit) mode. Utilization: ${Utilization}, Requested: ${Requested} - Please select a different quota or contact your administrator to extend the quota.

3016

USER_EXCEEDED_QUOTA_STORAGE_GRACE_LIMIT_PERMISSIVE_MODE

Warning

Storage-Quota ${QuotaName} limit exceeded, proceeding since in Permissive (Audit) mode. Utilization (used/requested): ${CurrentStorage}%/${Requested}% - Please select a different quota or contact your administrator to extend the quota.

3017

USER_IMPORT_IMAGE_AS_TEMPLATE

Info

User ${UserName} importing image ${RepoImageName} as template ${TemplateName} to domain ${StorageDomainName}.

3018

USER_IMPORT_IMAGE_AS_TEMPLATE_FINISHED_SUCCESS

Info

User ${UserName} successfully imported image ${RepoImageName} as template ${TemplateName} to domain ${StorageDomainName}.

3019

USER_IMPORT_IMAGE_AS_TEMPLATE_FINISHED_FAILURE

Error

User ${UserName} failed to import image ${RepoImageName} as template ${TemplateName} to domain ${StorageDomainName}.

4000

GLUSTER_VOLUME_CREATE

Info

Gluster Volume ${glusterVolumeName} created on cluster ${clusterName}.

4001

GLUSTER_VOLUME_CREATE_FAILED

Error

Creation of Gluster Volume ${glusterVolumeName} failed on cluster ${clusterName}.

4002

GLUSTER_VOLUME_OPTION_ADDED

Info

Volume Option ${Key}

4003

GLUSTER_VOLUME_OPTION_SET_FAILED

Error

Volume Option ${Key}

4004

GLUSTER_VOLUME_START

Info

Gluster Volume ${glusterVolumeName} of cluster ${clusterName} started.

4005

GLUSTER_VOLUME_START_FAILED

Error

Could not start Gluster Volume ${glusterVolumeName} of cluster ${clusterName}.

4006

GLUSTER_VOLUME_STOP

Info

Gluster Volume ${glusterVolumeName} stopped on cluster ${clusterName}.

4007

GLUSTER_VOLUME_STOP_FAILED

Error

Could not stop Gluster Volume ${glusterVolumeName} on cluster ${clusterName}.

4008

GLUSTER_VOLUME_OPTIONS_RESET

Info

Volume Option ${Key}

4009

GLUSTER_VOLUME_OPTIONS_RESET_FAILED

Error

Could not reset Gluster Volume ${glusterVolumeName} Options on cluster ${clusterName}.

4010

GLUSTER_VOLUME_DELETE

Info

Gluster Volume ${glusterVolumeName} deleted on cluster ${clusterName}.

4011

GLUSTER_VOLUME_DELETE_FAILED

Error

Could not delete Gluster Volume ${glusterVolumeName} on cluster ${clusterName}.

4012

GLUSTER_VOLUME_REBALANCE_START

Info

Gluster Volume ${glusterVolumeName} rebalance started on cluster ${clusterName}.

4013

GLUSTER_VOLUME_REBALANCE_START_FAILED

Error

Could not start Gluster Volume ${glusterVolumeName} rebalance on cluster ${clusterName}.

4014

GLUSTER_VOLUME_REMOVE_BRICKS

Info

Bricks removed from Gluster Volume ${glusterVolumeName} of cluster ${clusterName}.

4015

GLUSTER_VOLUME_REMOVE_BRICKS_FAILED

Error

Could not remove bricks from Gluster Volume ${glusterVolumeName} of cluster ${clusterName}.

4016

GLUSTER_VOLUME_REPLACE_BRICK_FAILED

Error

Replace Gluster Volume ${glusterVolumeName} Brick failed on cluster ${clusterName}

4017

GLUSTER_VOLUME_REPLACE_BRICK_START

Info

Gluster Volume ${glusterVolumeName} Replace Brick started on cluster ${clusterName}.

4018

GLUSTER_VOLUME_REPLACE_BRICK_START_FAILED

Error

Could not start Gluster Volume ${glusterVolumeName} Replace Brick on cluster ${clusterName}.

4019

GLUSTER_VOLUME_ADD_BRICK

Info

${NoOfBricks} brick(s) added to volume ${glusterVolumeName} of cluster ${clusterName}.

4020

GLUSTER_VOLUME_ADD_BRICK_FAILED

Error

Failed to add bricks to the Gluster Volume ${glusterVolumeName} of cluster ${clusterName}.

4021

GLUSTER_SERVER_REMOVE_FAILED

Error

Failed to remove host ${VdsName} from Cluster ${ClusterName}.

4022

GLUSTER_VOLUME_PROFILE_START

Info

Gluster Volume ${glusterVolumeName} profiling started on cluster ${clusterName}.

4023

GLUSTER_VOLUME_PROFILE_START_FAILED

Error

Could not start profiling on gluster volume ${glusterVolumeName} of cluster ${clusterName}

4024

GLUSTER_VOLUME_PROFILE_STOP

Info

Gluster Volume ${glusterVolumeName} profiling stopped on cluster ${clusterName}.

4025

GLUSTER_VOLUME_PROFILE_STOP_FAILED

Error

Could not stop Profiling on gluster volume ${glusterVolumeName} of cluster ${clusterName}.

4026

GLUSTER_VOLUME_CREATED_FROM_CLI

Warning

Detected new volume ${glusterVolumeName} on cluster ${ClusterName}, and added it to engine DB.

4027

GLUSTER_VOLUME_DELETED_FROM_CLI

Info

Detected deletion of volume ${glusterVolumeName} on cluster ${ClusterName}, and deleted it from engine DB.

4028

GLUSTER_VOLUME_OPTION_SET_FROM_CLI

Warning

Detected new option ${key}

4029

GLUSTER_VOLUME_OPTION_RESET_FROM_CLI

Warning

Detected option ${key}

4030

GLUSTER_VOLUME_PROPERTIES_CHANGED_FROM_CLI

Warning

Detected changes in properties of volume ${glusterVolumeName} of cluster ${ClusterName}, and updated the same in engine DB.

4031

GLUSTER_VOLUME_BRICK_ADDED_FROM_CLI

Warning

Detected new brick ${brick} on volume ${glusterVolumeName} of cluster ${ClusterName}, and added it to engine DB.

4032

GLUSTER_VOLUME_BRICK_REMOVED_FROM_CLI

Info

Detected brick ${brick} removed from Volume ${glusterVolumeName} of cluster ${ClusterName}, and removed it from engine DB.

4033

GLUSTER_SERVER_REMOVED_FROM_CLI

Info

Detected server ${VdsName} removed from Cluster ${ClusterName}, and removed it from engine DB.

4034

GLUSTER_VOLUME_INFO_FAILED

Error

Failed to fetch gluster volume list from server ${VdsName}.

4035

GLUSTER_COMMAND_FAILED

Error

Gluster command [${Command}] failed on server ${Server}.

4038

GLUSTER_SERVER_REMOVE

Info

Host ${VdsName} removed from Cluster ${ClusterName}.

4039

GLUSTER_VOLUME_STARTED_FROM_CLI

Warning

Detected that Volume ${glusterVolumeName} of Cluster ${ClusterName} was started, and updated engine DB with its new status.

4040

GLUSTER_VOLUME_STOPPED_FROM_CLI

Warning

Detected that Volume ${glusterVolumeName} of Cluster ${ClusterName} was stopped, and updated engine DB with its new status.

4041

GLUSTER_VOLUME_OPTION_CHANGED_FROM_CLI

Info

Detected change in value of option ${key} from ${oldValue} to ${newValue} on volume ${glusterVolumeName} of cluster ${ClusterName}, and updated it to engine DB.

4042

GLUSTER_HOOK_ENABLE

Info

Gluster Hook ${GlusterHookName} enabled on cluster ${ClusterName}.

4043

GLUSTER_HOOK_ENABLE_FAILED

Error

Failed to enable Gluster Hook ${GlusterHookName} on cluster ${ClusterName}. ${FailureMessage}

4044

GLUSTER_HOOK_ENABLE_PARTIAL

Warning

Gluster Hook ${GlusterHookName} enabled on some of the servers on cluster ${ClusterName}. ${FailureMessage}

4045

GLUSTER_HOOK_DISABLE

Info

Gluster Hook ${GlusterHookName} disabled on cluster ${ClusterName}.

4046

GLUSTER_HOOK_DISABLE_FAILED

Error

Failed to disable Gluster Hook ${GlusterHookName} on cluster ${ClusterName}. ${FailureMessage}

4047

GLUSTER_HOOK_DISABLE_PARTIAL

Warning

Gluster Hook ${GlusterHookName} disabled on some of the servers on cluster ${ClusterName}. ${FailureMessage}

4048

GLUSTER_HOOK_LIST_FAILED

Error

Failed to retrieve hook list from ${VdsName} of Cluster ${ClusterName}.

4049

GLUSTER_HOOK_CONFLICT_DETECTED

Warning

Detected conflict in hook ${HookName} of Cluster ${ClusterName}.

4050

GLUSTER_HOOK_DETECTED_NEW

Info

Detected new hook ${HookName} in Cluster ${ClusterName}.

4051

GLUSTER_HOOK_DETECTED_DELETE

Info

Detected removal of hook ${HookName} in Cluster ${ClusterName}.

4052

GLUSTER_VOLUME_OPTION_MODIFIED

Info

Volume Option ${Key} changed to ${Value} from ${oldvalue} on ${glusterVolumeName} of cluster ${clusterName}.

4053

GLUSTER_HOOK_GETCONTENT_FAILED

Error

Failed to read content of hook ${HookName} in Cluster ${ClusterName}.

4054

GLUSTER_SERVICES_LIST_FAILED

Error

Could not fetch statuses of services from server ${VdsName}. Updating statuses of all services on this server to UNKNOWN.

4055

GLUSTER_SERVICE_TYPE_ADDED_TO_CLUSTER

Info

Service type ${ServiceType} was not mapped to cluster ${ClusterName}. Mapped it now.

4056

GLUSTER_CLUSTER_SERVICE_STATUS_CHANGED

Info

Status of service type ${ServiceType} changed from ${OldStatus} to ${NewStatus} on cluster ${ClusterName}

4057

GLUSTER_SERVICE_ADDED_TO_SERVER

Info

Service ${ServiceName} was not mapped to server ${VdsName}. Mapped it now.

4058

GLUSTER_SERVER_SERVICE_STATUS_CHANGED

Info

Status of service ${ServiceName} on server ${VdsName} changed from ${OldStatus} to ${NewStatus}. Updating in engine now.

4059

GLUSTER_HOOK_UPDATED

Info

Gluster Hook ${GlusterHookName} updated on conflicting servers.

4060

GLUSTER_HOOK_UPDATE_FAILED

Error

Failed to update Gluster Hook ${GlusterHookName} on conflicting servers. ${FailureMessage}

4061

GLUSTER_HOOK_ADDED

Info

Gluster Hook ${GlusterHookName} added on conflicting servers.

4062

GLUSTER_HOOK_ADD_FAILED

Error

Failed to add Gluster Hook ${GlusterHookName} on conflicting servers. ${FailureMessage}

4063

GLUSTER_HOOK_REMOVED

Info

Gluster Hook ${GlusterHookName} removed from all servers in cluster ${ClusterName}.

4064

GLUSTER_HOOK_REMOVE_FAILED

Error

Failed to remove Gluster Hook ${GlusterHookName} from cluster ${ClusterName}. ${FailureMessage}

4065

GLUSTER_HOOK_REFRESH

Info

Refreshed gluster hooks in Cluster ${ClusterName}.

4066

GLUSTER_HOOK_REFRESH_FAILED

Error

Failed to refresh gluster hooks in Cluster ${ClusterName}.

4067

GLUSTER_SERVICE_STARTED

Info

${servicetype} service started on host ${VdsName} of cluster ${ClusterName}.

4068

GLUSTER_SERVICE_START_FAILED

Error

Could not start ${servicetype} service on host ${VdsName} of cluster ${ClusterName}.

4069

GLUSTER_SERVICE_STOPPED

Info

${servicetype} services stopped on host ${VdsName} of cluster ${ClusterName}.

4070

GLUSTER_SERVICE_STOP_FAILED

Error

Could not stop ${servicetype} service on host ${VdsName} of cluster ${ClusterName}.

4071

GLUSTER_SERVICES_LIST_NOT_FETCHED

Info

Could not fetch list of services from ${ServiceGroupType} named ${ServiceGroupName}.

4072

GLUSTER_SERVICE_RESTARTED

Info

${servicetype} service re-started on host ${VdsName} of cluster ${ClusterName}.

4073

GLUSTER_SERVICE_RESTART_FAILED

Error

Could not re-start ${servicetype} service on host ${VdsName} of cluster ${ClusterName}.

4074

GLUSTER_VOLUME_OPTIONS_RESET_ALL

Info

All Volume Options reset on ${glusterVolumeName} of cluster ${clusterName}.

4075

GLUSTER_HOST_UUID_NOT_FOUND

Error

Could not find gluster uuid of server ${VdsName} on Cluster ${ClusterName}.

4076

GLUSTER_VOLUME_BRICK_ADDED

Info

Brick [${brickpath}] on host [${servername}] added to volume [${glusterVolumeName}] of cluster ${clusterName}

4077

GLUSTER_CLUSTER_SERVICE_STATUS_ADDED

Info

Status of service type ${ServiceType} set to ${NewStatus} on cluster ${ClusterName}

4078

GLUSTER_VOLUME_REBALANCE_STOP

Info

Gluster Volume ${glusterVolumeName} rebalance stopped on cluster ${clusterName}.

4079

GLUSTER_VOLUME_REBALANCE_STOP_FAILED

Error

Could not stop rebalance of gluster volume ${glusterVolumeName} of cluster ${clusterName}.

4080

START_REMOVING_GLUSTER_VOLUME_BRICKS

Info

Started removing bricks from Volume ${glusterVolumeName} of cluster ${clusterName}

4081

START_REMOVING_GLUSTER_VOLUME_BRICKS_FAILED

Error

Could not start removing bricks from Volume ${glusterVolumeName} of cluster ${clusterName}

4082

GLUSTER_VOLUME_REMOVE_BRICKS_STOP

Info

Stopped removing bricks from Volume ${glusterVolumeName} of cluster ${clusterName}

4083

GLUSTER_VOLUME_REMOVE_BRICKS_STOP_FAILED

Error

Failed to stop removing bricks from Volume ${glusterVolumeName} of cluster ${clusterName}

4084

GLUSTER_VOLUME_REMOVE_BRICKS_COMMIT

Info

Gluster volume ${glusterVolumeName} remove bricks committed on cluster ${clusterName}. ${NoOfBricks} brick(s) removed from volume ${glusterVolumeName}.

4085

GLUSTER_VOLUME_REMOVE_BRICKS_COMMIT_FAILED

Error

Gluster volume ${glusterVolumeName} remove bricks could not be committed on cluster ${clusterName}

4086

GLUSTER_BRICK_STATUS_CHANGED

Warning

Detected change in status of brick ${brickpath} of volume ${glusterVolumeName} of cluster ${clusterName} from ${oldValue} to ${newValue} via ${source}.

4087

GLUSTER_VOLUME_REBALANCE_FINISHED

Info

${action} ${status} on volume ${glusterVolumeName} of cluster ${clusterName}.

4088

GLUSTER_VOLUME_MIGRATE_BRICK_DATA_FINISHED

Info

${action} ${status} for brick(s) on volume ${glusterVolumeName} of cluster ${clusterName}. Please review to abort or commit.

4089

GLUSTER_VOLUME_REBALANCE_START_DETECTED_FROM_CLI

Info

Detected start of rebalance on volume ${glusterVolumeName} of Cluster ${ClusterName} from CLI.

4090

START_REMOVING_GLUSTER_VOLUME_BRICKS_DETECTED_FROM_CLI

Info

Detected start of brick removal for bricks ${brick} on volume ${glusterVolumeName} of Cluster ${ClusterName} from CLI.

4091

GLUSTER_VOLUME_REBALANCE_NOT_FOUND_FROM_CLI

Warning

Could not find information for rebalance on volume ${glusterVolumeName} of Cluster ${ClusterName} from CLI. Marking it as unknown.

4092

REMOVE_GLUSTER_VOLUME_BRICKS_NOT_FOUND_FROM_CLI

Warning

Could not find information for remove brick on volume ${glusterVolumeName} of Cluster ${ClusterName} from CLI. Marking it as unknown.

4093

GLUSTER_VOLUME_DETAILS_REFRESH

Info

Refreshed details of the volume ${glusterVolumeName} of cluster ${clusterName}.

4094

GLUSTER_VOLUME_DETAILS_REFRESH_FAILED

Error

Failed to refresh the details of volume ${glusterVolumeName} of cluster ${clusterName}.

4095

GLUSTER_HOST_UUID_ALREADY_EXISTS

Error

Gluster UUID of host ${VdsName} on Cluster ${ClusterName} already exists.

4096

USER_FORCE_SELECTED_SPM_STOP_FAILED

Error

Failed to force select ${VdsName} as the SPM due to a failure to stop the current SPM.

4097

GLUSTER_GEOREP_SESSION_DELETED_FROM_CLI

Warning

Detected deletion of geo-replication session ${geoRepSessionKey} from volume ${glusterVolumeName} of cluster ${clusterName}

4098

GLUSTER_GEOREP_SESSION_DETECTED_FROM_CLI

Warning

Detected new geo-replication session ${geoRepSessionKey} for volume ${glusterVolumeName} of cluster ${clusterName}. Adding it to engine.

4099

GLUSTER_GEOREP_SESSION_REFRESH

Info

Refreshed geo-replication sessions for volume ${glusterVolumeName} of cluster ${clusterName}.

4100

GLUSTER_GEOREP_SESSION_REFRESH_FAILED

Error

Failed to refresh geo-replication sessions for volume ${glusterVolumeName} of cluster ${clusterName}.

4101

GEOREP_SESSION_STOP

Info

Geo-replication session on volume ${glusterVolumeName} of cluster ${clusterName} has been stopped.

4102

GEOREP_SESSION_STOP_FAILED

Error

Failed to stop geo-replication session on volume ${glusterVolumeName} of cluster ${clusterName}

4103

GEOREP_SESSION_DELETED

Info

Geo-replication session deleted on volume ${glusterVolumeName} of cluster ${clusterName}

4104

GEOREP_SESSION_DELETE_FAILED

Error

Failed to delete geo-replication session on volume ${glusterVolumeName} of cluster ${clusterName}

4105

GLUSTER_GEOREP_CONFIG_SET

Info

Configuration ${key} has been set to ${value} on the geo-rep session ${geoRepSessionKey}.

4106

GLUSTER_GEOREP_CONFIG_SET_FAILED

Error

Failed to set the configuration ${key} to ${value} on geo-rep session ${geoRepSessionKey}.

4107

GLUSTER_GEOREP_CONFIG_LIST

Info

Refreshed configuration options for geo-replication session ${geoRepSessionKey}

4108

GLUSTER_GEOREP_CONFIG_LIST_FAILED

Error

Failed to refresh configuration options for geo-replication session ${geoRepSessionKey}

4109

GLUSTER_GEOREP_CONFIG_SET_DEFAULT

Info

Configuration of ${key} of session ${geoRepSessionKey} reset to its default value.

4110

GLUSTER_GEOREP_CONFIG_SET_DEFAULT_FAILED

Error

Failed to set ${key} of session ${geoRepSessionKey} to its default value.

4111

GLUSTER_VOLUME_SNAPSHOT_DELETED

Info

Gluster volume snapshot ${snapname} deleted.

4112

GLUSTER_VOLUME_SNAPSHOT_DELETE_FAILED

Error

Failed to delete gluster volume snapshot ${snapname}.

4113

GLUSTER_VOLUME_ALL_SNAPSHOTS_DELETED

Info

Deleted all the gluster volume snapshots for the volume ${glusterVolumeName} of cluster ${clusterName}.

4114

GLUSTER_VOLUME_ALL_SNAPSHOTS_DELETE_FAILED

Error

Failed to delete all the gluster volume snapshots for the volume ${glusterVolumeName} of cluster ${clusterName}.

4115

GLUSTER_VOLUME_SNAPSHOT_ACTIVATED

Info

Activated the gluster volume snapshot ${snapname} on volume ${glusterVolumeName} of cluster ${clusterName}.

4116

GLUSTER_VOLUME_SNAPSHOT_ACTIVATE_FAILED

Error

Failed to activate the gluster volume snapshot ${snapname} on volume ${glusterVolumeName} of cluster ${clusterName}.

4117

GLUSTER_VOLUME_SNAPSHOT_DEACTIVATED

Info

De-activated the gluster volume snapshot ${snapname} on volume ${glusterVolumeName} of cluster ${clusterName}.

4118

GLUSTER_VOLUME_SNAPSHOT_DEACTIVATE_FAILED

Error

Failed to de-activate gluster volume snapshot ${snapname} on volume ${glusterVolumeName} of cluster ${clusterName}.

4119

GLUSTER_VOLUME_SNAPSHOT_RESTORED

Info

Restored the volume ${glusterVolumeName} of cluster ${clusterName} to the state of gluster volume snapshot ${snapname}.

4120

GLUSTER_VOLUME_SNAPSHOT_RESTORE_FAILED

Error

Failed to restore the volume ${glusterVolumeName} of cluster ${clusterName} to the state of gluster volume snapshot ${snapname}.

4121

GLUSTER_VOLUME_SNAPSHOT_CONFIG_UPDATED

Info

Updated Gluster volume snapshot configuration(s).

4122

GLUSTER_VOLUME_SNAPSHOT_CONFIG_UPDATE_FAILED

Error

Failed to update gluster volume snapshot configuration(s).

4123

GLUSTER_VOLUME_SNAPSHOT_CONFIG_UPDATE_FAILED_PARTIALLY

Error

Failed to update gluster volume snapshot configuration(s) ${failedSnapshotConfigs}.

4124

NEW_STORAGE_DEVICE_DETECTED

Info

Found new storage device ${storageDevice} on host ${VdsName}, and added it to engine DB.

4125

STORAGE_DEVICE_REMOVED_FROM_THE_HOST

Info

Detected deletion of storage device ${storageDevice} on host ${VdsName}, and deleting it from engine DB.

4126

SYNC_STORAGE_DEVICES_IN_HOST

Info

Manually synced the storage devices from host ${VdsName}

4127

SYNC_STORAGE_DEVICES_IN_HOST_FAILED

Error

Failed to sync storage devices from host ${VdsName}

4128

GEOREP_OPTION_SET_FROM_CLI

Warning

Detected new option ${key}

4129

GEOREP_OPTION_CHANGED_FROM_CLI

Warning

Detected change in value of option ${key} from ${oldValue} to ${value} for geo-replication session on volume ${glusterVolumeName} of cluster ${ClusterName}, and updated it to engine.

4130

GLUSTER_MASTER_VOLUME_STOP_FAILED_DURING_SNAPSHOT_RESTORE

Error

Could not stop master volume ${glusterVolumeName} of cluster ${clusterName} during snapshot restore.

4131

GLUSTER_MASTER_VOLUME_SNAPSHOT_RESTORE_FAILED

Error

Could not restore master volume ${glusterVolumeName} of cluster ${clusterName}.

4132

GLUSTER_VOLUME_SNAPSHOT_CREATED

Info

Snapshot ${snapname} created for volume ${glusterVolumeName} of cluster ${clusterName}.

4133

GLUSTER_VOLUME_SNAPSHOT_CREATE_FAILED

Error

Could not create snapshot for volume ${glusterVolumeName} of cluster ${clusterName}.

4134

GLUSTER_VOLUME_SNAPSHOT_SCHEDULED

Info

Snapshots scheduled on volume ${glusterVolumeName} of cluster ${clusterName}.

4135

GLUSTER_VOLUME_SNAPSHOT_SCHEDULE_FAILED

Error

Failed to schedule snapshots on the volume ${glusterVolumeName} of cluster ${clusterName}.

4136

GLUSTER_VOLUME_SNAPSHOT_RESCHEDULED

Info

Rescheduled snapshots on volume ${glusterVolumeName} of cluster ${clusterName}.

4137

GLUSTER_VOLUME_SNAPSHOT_RESCHEDULE_FAILED

Error

Failed to reschedule snapshots on volume ${glusterVolumeName} of cluster ${clusterName}.

4138

CREATE_GLUSTER_BRICK

Info

Brick ${brickName} created successfully on host ${vdsName} of cluster ${clusterName}.

4139

CREATE_GLUSTER_BRICK_FAILED

Error

Failed to create brick ${brickName} on host ${vdsName} of cluster ${clusterName}.

4140

GLUSTER_GEO_REP_PUB_KEY_FETCH_FAILED

Error

Failed to fetch public keys.

4141

GLUSTER_GET_PUB_KEY

Info

Public key fetched.

4142

GLUSTER_GEOREP_PUBLIC_KEY_WRITE_FAILED

Error

Failed to write public keys to ${VdsName}

4143

GLUSTER_WRITE_PUB_KEYS

Info

Public keys written to ${VdsName}

4144

GLUSTER_GEOREP_SETUP_MOUNT_BROKER_FAILED

Error

Failed to set up geo-replication mount broker for user ${geoRepUserName} on the slave volume ${geoRepSlaveVolumeName}.

4145

GLUSTER_SETUP_GEOREP_MOUNT_BROKER

Info

Geo-replication mount broker has been set up for user ${geoRepUserName} on the slave volume ${geoRepSlaveVolumeName}.

4146

GLUSTER_GEOREP_SESSION_CREATE_FAILED

Error

Failed to create geo-replication session between master volume ${glusterVolumeName} of cluster ${clusterName} and slave volume ${geoRepSlaveVolumeName} for the user ${geoRepUserName}.

4147

CREATE_GLUSTER_VOLUME_GEOREP_SESSION

Info

Created geo-replication session between master volume ${glusterVolumeName} of cluster ${clusterName} and slave volume ${geoRepSlaveVolumeName} for the user ${geoRepUserName}.

4148

GLUSTER_VOLUME_SNAPSHOT_SOFT_LIMIT_REACHED

Info

Gluster Volume Snapshot soft limit reached for the volume ${glusterVolumeName} on cluster ${clusterName}.

4149

HOST_FEATURES_INCOMPATIBILE_WITH_CLUSTER

Error

Host ${VdsName} does not comply with the list of features supported by cluster ${ClusterName}. ${UnSupportedFeature} is not supported by the Host

4150

GLUSTER_VOLUME_SNAPSHOT_SCHEDULE_DELETED

Info

Snapshot schedule deleted for volume ${glusterVolumeName} of cluster ${clusterName}.

4151

GLUSTER_BRICK_STATUS_DOWN

Info

Status of brick ${brickpath} of volume ${glusterVolumeName} on cluster ${ClusterName} is down.

4152

GLUSTER_VOLUME_SNAPSHOT_DETECTED_NEW

Info

Found new gluster volume snapshot ${snapname} for volume ${glusterVolumeName} on cluster ${ClusterName}, and added it to engine DB.

4153

GLUSTER_VOLUME_SNAPSHOT_DELETED_FROM_CLI

Info

Detected deletion of gluster volume snapshot ${snapname} for volume ${glusterVolumeName} on cluster ${ClusterName}, and deleting it from engine DB.

4154

GLUSTER_VOLUME_SNAPSHOT_CLUSTER_CONFIG_DETECTED_NEW

Info

Found new gluster volume snapshot configuration ${snapConfigName} with value ${snapConfigValue} on cluster ${ClusterName}, and added it to engine DB.

4155

GLUSTER_VOLUME_SNAPSHOT_VOLUME_CONFIG_DETECTED_NEW

Info

Found new gluster volume snapshot configuration ${snapConfigName} with value ${snapConfigValue} for volume ${glusterVolumeName} on cluster ${ClusterName}, and added it to engine DB.

4156

GLUSTER_VOLUME_SNAPSHOT_HARD_LIMIT_REACHED

Info

Gluster Volume Snapshot hard limit reached for the volume ${glusterVolumeName} on cluster ${clusterName}.

4157

GLUSTER_CLI_SNAPSHOT_SCHEDULE_DISABLE_FAILED

Error

Failed to disable gluster CLI based snapshot schedule on cluster ${clusterName}.

4158

GLUSTER_CLI_SNAPSHOT_SCHEDULE_DISABLED

Info

Disabled gluster CLI based scheduling successfully on cluster ${clusterName}.

4159

SET_UP_PASSWORDLESS_SSH

Info

Password-less SSH has been set up for user ${geoRepUserName} on the nodes of remote volume ${geoRepSlaveVolumeName} from the nodes of the volume ${glusterVolumeName}.

4160

SET_UP_PASSWORDLESS_SSH_FAILED

Error

Failed to set up password-less SSH for user ${geoRepUserName} on the nodes of remote volume ${geoRepSlaveVolumeName} from the nodes of the volume ${glusterVolumeName}.

4161

GLUSTER_VOLUME_TYPE_UNSUPPORTED

Warning

Detected a volume ${glusterVolumeName} with type ${glusterVolumeType} on cluster ${Cluster}, which is not fully supported by the engine.

4162

GLUSTER_VOLUME_BRICK_REPLACED

Info

Replaced brick '${brick}' with new brick '${newBrick}' of Gluster Volume ${glusterVolumeName} on cluster ${clusterName}

4163

GLUSTER_SERVER_STATUS_DISCONNECTED

Info

Gluster server ${vdsName} set to DISCONNECTED on cluster ${clusterName}.

4164

GLUSTER_STORAGE_DOMAIN_SYNC_FAILED

Info

Failed to synchronize data from storage domain ${storageDomainName} to remote location.

4165

GLUSTER_STORAGE_DOMAIN_SYNCED

Info

Successfully synchronized data from storage domain ${storageDomainName} to remote location.

4166

GLUSTER_STORAGE_DOMAIN_SYNC_STARTED

Info

Successfully started data synchronization from storage domain ${storageDomainName} to remote location.

4167

STORAGE_DOMAIN_DR_DELETED

Error

Deleted the data synchronization schedule for storage domain ${storageDomainName} as the underlying geo-replication session ${geoRepSessionKey} has been deleted.

4168

GLUSTER_WEBHOOK_ADDED

Info

Added webhook on ${clusterName}

4169

GLUSTER_WEBHOOK_ADD_FAILED

Error

Failed to add webhook on ${clusterName}

4170

GLUSTER_VOLUME_RESET_BRICK_FAILED

Error

 

4171

GLUSTER_VOLUME_BRICK_RESETED

Info

 

4172

GLUSTER_VOLUME_CONFIRMED_SPACE_LOW

Warning

Warning! Low confirmed free space on gluster volume ${glusterVolumeName}

4436

GLUSTER_SERVER_ADD_FAILED

Error

Failed to add host ${VdsName} into Cluster ${ClusterName}. ${ErrorMessage}

4437

GLUSTER_SERVERS_LIST_FAILED

Error

Failed to fetch gluster peer list from server ${VdsName} on Cluster ${ClusterName}. ${ErrorMessage}

4595

GLUSTER_VOLUME_GEO_REP_START_FAILED_EXCEPTION

Error

Failed to start geo-replication session on volume ${glusterVolumeName} of cluster ${clusterName}

4596

GLUSTER_VOLUME_GEO_REP_START

Info

Geo-replication session on volume ${glusterVolumeName} of cluster ${clusterName} has been started.

4597

GLUSTER_VOLUME_GEO_REP_PAUSE_FAILED

Error

Failed to pause geo-replication session on volume ${glusterVolumeName} of cluster ${clusterName}

4598

GLUSTER_VOLUME_GEO_REP_RESUME_FAILED

Error

Failed to resume geo-replication session on volume ${glusterVolumeName} of cluster ${clusterName}

4599

GLUSTER_VOLUME_GEO_REP_RESUME

Info

Geo-replication session on volume ${glusterVolumeName} of cluster ${clusterName} has been resumed.

4600

GLUSTER_VOLUME_GEO_REP_PAUSE

Info

Geo-replication session on volume ${glusterVolumeName} of cluster ${clusterName} has been paused.

9000

VDS_ALERT_FENCE_IS_NOT_CONFIGURED

Info

Failed to verify Power Management configuration for Host ${VdsName}.

9001

VDS_ALERT_FENCE_TEST_FAILED

Info

Power Management test failed for Host ${VdsName}.${Reason}

9002

VDS_ALERT_FENCE_OPERATION_FAILED

Info

Failed to power fence host ${VdsName}. Please check the host status and its power management settings, and then manually reboot it and click "Confirm Host Has Been Rebooted".

9003

VDS_ALERT_FENCE_OPERATION_SKIPPED

Info

Host ${VdsName} became non responsive. Fence operation skipped as the system is still initializing and this is not the host on which the hosted engine was previously running.

9004

VDS_ALERT_FENCE_NO_PROXY_HOST

Info

There is no other host in the data center that can be used to test the power management settings.

9005

VDS_ALERT_FENCE_STATUS_VERIFICATION_FAILED

Info

Failed to verify Host ${Host} ${Status} status. Please ${Status} Host ${Host} manually.

9006

CANNOT_HIBERNATE_RUNNING_VMS_AFTER_CLUSTER_CPU_UPGRADE

Warning

Hibernation of VMs after CPU upgrade of Cluster ${Cluster} is not supported. Please stop and restart those VMs if you wish to hibernate them.

9007

VDS_ALERT_SECONDARY_AGENT_USED_FOR_FENCE_OPERATION

Info

Secondary fence agent was used to ${Operation} Host ${VdsName}

9008

VDS_HOST_NOT_RESPONDING_CONNECTING

Warning

Host ${VdsName} is not responding. It will stay in Connecting state for a grace period of ${Seconds} seconds and after that an attempt to fence the host will be issued.

9009

VDS_ALERT_PM_HEALTH_CHECK_FENCE_AGENT_NON_RESPONSIVE

Info

Health check on Host ${VdsName} indicates that Fence-Agent ${AgentId} is non-responsive.

9010

VDS_ALERT_PM_HEALTH_CHECK_START_MIGHT_FAIL

Info

Health check on Host ${VdsName} indicates that future attempts to Start this host using Power-Management are expected to fail.

9011

VDS_ALERT_PM_HEALTH_CHECK_STOP_MIGHT_FAIL

Info

Health check on Host ${VdsName} indicates that future attempts to Stop this host using Power-Management are expected to fail.

9012

VDS_ALERT_PM_HEALTH_CHECK_RESTART_MIGHT_FAIL

Info

Health check on Host ${VdsName} indicates that future attempts to Restart this host using Power-Management are expected to fail.

9013

VDS_ALERT_FENCE_OPERATION_SKIPPED_BROKEN_CONNECTIVITY

Info

Host ${VdsName} became non responsive and was not restarted due to Fencing Policy: ${Percents} percent of the Hosts in the Cluster have connectivity issues.

9014

VDS_ALERT_NOT_RESTARTED_DUE_TO_POLICY

Info

Host ${VdsName} became non responsive and was not restarted due to the Cluster Fencing Policy.

9015

VDS_ALERT_FENCE_DISABLED_BY_CLUSTER_POLICY

Info

Host ${VdsName} became Non Responsive and was not restarted due to disabled fencing in the Cluster Fencing Policy.

9016

FENCE_DISABLED_IN_CLUSTER_POLICY

Info

Fencing is disabled in Fencing Policy of the Cluster ${ClusterName}, so HA VMs running on a non-responsive host will not be restarted elsewhere.

9017

FENCE_OPERATION_STARTED

Info

Power management ${Action} of Host ${VdsName} initiated.

9018

FENCE_OPERATION_SUCCEEDED

Info

Power management ${Action} of Host ${VdsName} succeeded.

9019

FENCE_OPERATION_FAILED

Error

Power management ${Action} of Host ${VdsName} failed.

9020

FENCE_OPERATION_USING_AGENT_AND_PROXY_STARTED

Info

Executing power management ${Action} on Host ${Host} using Proxy Host ${ProxyHost} and Fence Agent ${AgentType}:${AgentIp}.

9021

FENCE_OPERATION_USING_AGENT_AND_PROXY_FAILED

Warning

Execution of power management ${Action} on Host ${Host} using Proxy Host ${ProxyHost} and Fence Agent ${AgentType}:${AgentIp} failed.

9022

ENGINE_NO_FULL_BACKUP

Info

There is no full backup available. Please run engine-backup to prevent data loss in case of corruption.

9023

ENGINE_NO_WARM_BACKUP

Info

Full backup was created on ${Date} and is too old. Please run engine-backup to prevent data loss in case of corruption.

9024

ENGINE_BACKUP_STARTED

Normal

Engine backup started.

9025

ENGINE_BACKUP_COMPLETED

Normal

Engine backup completed successfully.