2.2. Hypervisor Requirements


2.2.1. CPU Requirements

All CPUs must support the Intel® 64 or AMD64 CPU extensions and have the AMD-V™ or Intel VT® hardware virtualization extensions enabled. Support for the No eXecute (NX) flag is also required.
Table 2.4. Supported Hypervisor CPU Models
AMD: AMD Opteron G1, AMD Opteron G2, AMD Opteron G3, AMD Opteron G4, AMD Opteron G5
Intel: Intel Conroe, Intel Penryn, Intel Nehalem, Intel Westmere, Intel Sandybridge, Intel Haswell
IBM: IBM POWER8

Procedure 2.1. Checking if a Processor Supports the Required Flags

You must enable virtualization in the BIOS. Power off and reboot the host after making this change to ensure that it is applied.
  1. At the host's boot screen, press any key and select the Boot or Boot with serial console entry from the list.
  2. Press Tab to edit the kernel parameters for the selected option.
  3. Ensure there is a space after the last kernel parameter listed, and append the rescue parameter.
  4. Press Enter to boot into rescue mode.
  5. At the prompt which appears, determine that your processor has the required extensions and that they are enabled by running this command:
    # grep -E 'svm|vmx' /proc/cpuinfo | grep nx
    If any output is shown, then the processor is hardware virtualization capable. If no output is shown, then it is still possible that your processor supports hardware virtualization. In some circumstances manufacturers disable the virtualization extensions in the BIOS. If you believe this to be the case, consult the system's BIOS and the motherboard manual provided by the manufacturer.
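As a supplementary check that is not part of the procedure above, you can also verify on a running host that the KVM kernel modules are available and loaded. The module name depends on the CPU vendor (kvm_intel or kvm_amd):
    # lsmod | grep kvm
If the modules are loaded, the output lists kvm together with kvm_intel or kvm_amd. If nothing is listed even though the extensions are present and enabled, the appropriate module can usually be loaded manually, for example with modprobe kvm_intel on an Intel system.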

2.2.2. Memory Requirements

The amount of RAM required varies depending on guest operating system requirements, guest application requirements, and the memory activity and usage of guests. You must also take into account that KVM can overcommit physical RAM for virtualized guests. This allows you to provision guests with more RAM than is physically present, on the assumption that the guests are not all at peak load concurrently. KVM does this by allocating RAM to guests only as required and shifting underutilized guests into swap.
Table 2.5. Memory Requirements
Minimum: 2 GB of RAM
Maximum: 2 TB of RAM
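As a quick check of how much physical memory and swap space a host currently has before provisioning guests, you can run the free command. This is a general Linux check rather than a step required by this guide:
    # free -m
The output shows, in megabytes, the total installed RAM and the configured swap space, which you can compare against the limits above and the combined requirements of your guests.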

2.2.3. Storage Requirements

Hypervisor hosts require local storage to store configuration, logs, kernel dumps, and for use as swap space. The minimum storage requirements of the Red Hat Enterprise Virtualization Hypervisor (RHEV-H) and Red Hat Virtualization Host (RHVH) are documented in this section. The storage requirements for Red Hat Enterprise Linux hosts vary based on the amount of disk space used by their existing configuration but are expected to be greater than those of the RHEV-H and RHVH.
The following table lists the minimum supported internal storage for each version of the Hypervisor:
Table 2.6. Hypervisor Minimum Storage Requirements
Red Hat Enterprise Virtualization Hypervisor 6: Root and RootBackup partitions 512 MB; configuration partition 8 MB; logging partition 2048 MB; data partition 512 MB; swap partition 8 MB; minimum total 3.5 GB
Red Hat Enterprise Virtualization Hypervisor 7: Root and RootBackup partitions 8600 MB; configuration partition 8 MB; logging partition 2048 MB; data partition 10240 MB; swap partition 8 MB; minimum total 20.4 GB
Red Hat Virtualization Host: Root and RootBackup partitions 6 GB; configuration partition N/A; logging partition 8 GB; data partition 15 GB; swap partition 1 GB; minimum total 32 GB

Important

If you are also installing the RHEV-M Virtual Appliance on RHEV-H, the data partition must be at least 60 GB.
By default, all disk space remaining after allocation of swap space will be allocated to the data partition.
For the recommended swap size, see https://access.redhat.com/solutions/15244.
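After installing a Hypervisor, you can verify the resulting file system and swap allocation with standard tools; these commands are offered as a convenience and are not part of the documented requirements:
    # df -h
    # swapon -s
df -h lists the mounted file systems with their sizes and mount points, and swapon -s lists the active swap devices and their sizes.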

Important

The Red Hat Enterprise Virtualization Hypervisor does not support installation on fakeraid devices. Where a fakeraid device is present, it must be reconfigured so that it no longer runs in RAID mode.
  1. Access the RAID controller's BIOS and remove all logical drives from it.
  2. Change controller mode to be non-RAID. This may be referred to as compatibility or JBOD mode.
Consult the documentation provided by the manufacturer for further information about the specific device in use.
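If you are unsure whether a disk is part of a fakeraid set, the dmraid utility, where installed, can report BIOS RAID metadata discovered on the system. This check is offered as a convenience and is not part of the official procedure:
    # dmraid -r
If the command lists any RAID sets, the corresponding disks are configured as fakeraid devices and must be reconfigured as described above.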

2.2.4. PCI Device Requirements

Virtualization hosts must have at least one network interface with a minimum bandwidth of 1 Gbps. It is recommended that each virtualization host have two network interfaces, with one dedicated to network-intensive activities such as virtual machine migration. The performance of such operations is limited by the available bandwidth.
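To confirm that an interface has negotiated a link speed of at least 1 Gbps, you can query it with ethtool. The interface name em1 below is only an example and will differ on your host:
    # ethtool em1 | grep Speed
The Speed field reports the current link speed, for example Speed: 1000Mb/s.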

2.2.5. Hardware Considerations For Device Assignment

If you plan to implement device assignment and PCI passthrough so that a virtual machine can use a specific PCIe device from a host, ensure the following requirements are met:
  • CPU must support IOMMU (for example, VT-d or AMD-Vi). IBM POWER8 supports IOMMU by default.
  • Firmware must support IOMMU.
  • CPU root ports used must support ACS or ACS-equivalent capability.
  • PCIe device must support ACS or ACS-equivalent capability.
  • It is recommended that all PCIe switches and bridges between the PCIe device and the root port support ACS. For example, if a switch does not support ACS, all devices behind that switch share the same IOMMU group, and can only be assigned to the same virtual machine.
  • For GPU support, Red Hat Enterprise Linux 7 supports PCI device assignment of NVIDIA K-Series Quadro (model 2000 series or higher), GRID, and Tesla as non-VGA graphics devices. Currently up to two GPUs may be attached to a virtual machine in addition to one of the standard, emulated VGA interfaces. The emulated VGA is used for pre-boot and installation and the NVIDIA GPU takes over when the NVIDIA graphics drivers are loaded. Note that the NVIDIA Quadro 2000 is not supported, nor is the Quadro K420 card.
Refer to vendor specifications and datasheets to confirm that your hardware meets these requirements. After you have installed a hypervisor host, see Appendix G, Configuring a Hypervisor Host for PCI Passthrough for more information on how to enable the hypervisor hardware and software for device passthrough.
To implement SR-IOV, see Hardware Considerations for Implementing SR-IOV for more information.
The lspci -v command can be used to print information for PCI devices already installed on a system.
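In addition to lspci -v, the following general checks can help confirm that the kernel has detected and enabled the IOMMU and show how devices are grouped; the exact messages vary by platform and kernel version:
    # dmesg | grep -i -e DMAR -e IOMMU
    # find /sys/kernel/iommu_groups/ -type l
The first command prints IOMMU initialization messages (DMAR entries on Intel systems), and the second lists the devices in each IOMMU group. Devices that share an IOMMU group can only be assigned to the same virtual machine.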