
2.2. Hypervisor Requirements


2.2.1. CPU Requirements

All CPUs must support the Intel® 64 or AMD64 CPU extensions and have the AMD-V™ or Intel VT® hardware virtualization extensions enabled. Support for the No eXecute (NX) flag is also required.
Table 2.4. Supported Hypervisor CPU Models
AMD              Intel               IBM
AMD Opteron G1   Intel Conroe        IBM POWER8
AMD Opteron G2   Intel Penryn
AMD Opteron G3   Intel Nehalem
AMD Opteron G4   Intel Westmere
AMD Opteron G5   Intel Sandybridge
                 Intel Haswell

Procedure 2.1. Checking if a Processor Supports the Required Flags

You must enable Virtualization in the BIOS. Power off and reboot the host after this change to ensure that the change is applied.
  1. At the host's boot screen, press any key and select the Boot or Boot with serial console entry from the list.
  2. Press Tab to edit the kernel parameters for the selected option.
  3. Ensure there is a space after the last kernel parameter listed, then append the rescue parameter.
  4. Press Enter to boot into rescue mode.
  5. At the prompt that appears, verify that your processor has the required extensions and that they are enabled by running this command:
    # grep -E 'svm|vmx' /proc/cpuinfo | grep nx
    If any output is shown, then the processor is hardware virtualization capable. If no output is shown, then it is still possible that your processor supports hardware virtualization. In some circumstances manufacturers disable the virtualization extensions in the BIOS. If you believe this to be the case, consult the system's BIOS and the motherboard manual provided by the manufacturer.
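The check in step 5 can also be wrapped in a small script that reports each relevant flag separately, which makes it easier to see whether the missing piece is the virtualization extension or NX. This is a sketch based on the flag names documented in /proc/cpuinfo:

```shell
#!/bin/sh
# Report hardware virtualization and NX support from /proc/cpuinfo.
if grep -qw vmx /proc/cpuinfo; then
    echo "Intel VT-x (vmx): present"
elif grep -qw svm /proc/cpuinfo; then
    echo "AMD-V (svm): present"
else
    echo "No virtualization extensions found (they may be disabled in the BIOS)"
fi

if grep -qw nx /proc/cpuinfo; then
    echo "No eXecute (nx): present"
else
    echo "No eXecute (nx): not found"
fi
```

As with the single grep above, an absent flag here can mean the extension is disabled in the BIOS rather than unsupported by the processor.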

2.2.2. Memory Requirements

The amount of RAM required varies depending on guest operating system requirements, guest application requirements, and memory activity and usage of guests. You also need to take into account that KVM is able to overcommit physical RAM for virtualized guests. This allows for provisioning of guests with RAM requirements greater than what is physically present, on the basis that the guests are not all concurrently at peak load. KVM does this by only allocating RAM for guests as required and shifting underutilized guests into swap.
Table 2.5. Memory Requirements
Minimum       Maximum
2 GB of RAM   2 TB of RAM
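Overcommit can be illustrated by comparing the host's physical RAM with the total RAM provisioned to guests. The guest sizes in GUEST_RAM_MB below are hypothetical example values, not a recommendation:

```shell
#!/bin/sh
# Illustration: compare physical host memory with the total RAM
# provisioned to guests. The guest sizes below are hypothetical.
GUEST_RAM_MB="2048 4096 2048"   # RAM assigned to each guest, in MB

# Host physical RAM in MB, from /proc/meminfo.
host_mb=$(awk '/^MemTotal:/ {print int($2 / 1024)}' /proc/meminfo)

total=0
for g in $GUEST_RAM_MB; do
    total=$((total + g))
done

echo "Host RAM:              ${host_mb} MB"
echo "Provisioned to guests: ${total} MB"
if [ "$total" -gt "$host_mb" ]; then
    echo "RAM is overcommitted; underutilized guest pages may be swapped."
fi
```

Whether a given overcommit ratio is safe depends on the actual working sets of the guests, not just the provisioned totals.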

2.2.3. Storage Requirements

Hypervisor hosts require local storage to store configuration, logs, kernel dumps, and for use as swap space. The minimum storage requirements of the Red Hat Enterprise Virtualization Hypervisor (RHEV-H) and Red Hat Virtualization Host (RHVH) are documented in this section. The storage requirements for Red Hat Enterprise Linux hosts vary based on the amount of disk space used by their existing configuration but are expected to be greater than those of the RHEV-H and RHVH.
For RHEV-H and RHVH requirements, see the following table for the minimum supported internal storage for each version of the Hypervisor:
Table 2.6. Hypervisor Minimum Storage Requirements
Version                                          Root and RootBackup Partitions   Configuration Partition   Logging Partition   Data Partition   Swap Partition   Minimum Total
Red Hat Enterprise Virtualization Hypervisor 6   512 MB                           8 MB                      2048 MB             512 MB           8 MB             3.5 GB
Red Hat Enterprise Virtualization Hypervisor 7   8600 MB                          8 MB                      2048 MB             10240 MB         8 MB             20.4 GB
Red Hat Virtualization Host                      6 GB                             NA                        8 GB                15 GB            1 GB             32 GB
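Before installation, the capacity of the target disk can be compared against the documented minimum. This is a sketch: the device name and the roughly 20.4 GB figure for Red Hat Enterprise Virtualization Hypervisor 7 are example values to adjust for the target host.

```shell
#!/bin/sh
# Compare a disk's capacity against a documented minimum, in MB.
# DISK and MIN_MB are example values; adjust for the target host.
DISK=/dev/sda
MIN_MB=20890   # roughly the 20.4 GB minimum for RHEV-H 7

# lsblk: -b bytes, -n no header, -d no child devices, -o SIZE column only.
size_bytes=$(lsblk -bndo SIZE "$DISK" 2>/dev/null || true)
if [ -n "$size_bytes" ]; then
    size_mb=$((size_bytes / 1024 / 1024))
    if [ "$size_mb" -ge "$MIN_MB" ]; then
        echo "$DISK: ${size_mb} MB, meets the ${MIN_MB} MB minimum"
    else
        echo "$DISK: ${size_mb} MB, below the ${MIN_MB} MB minimum"
    fi
else
    echo "$DISK not found on this system"
fi
```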

Important

If you are also installing the RHEV-M Virtual Appliance on RHEV-H, the minimum data partition is 60 GB.
By default, all disk space remaining after allocation of swap space will be allocated to the data partition.
For the recommended swap size, see https://access.redhat.com/solutions/15244.

Important

The Red Hat Enterprise Virtualization Hypervisor does not support installation on fakeraid devices. Where a fakeraid device is present, it must be reconfigured so that it no longer runs in RAID mode.
  1. Access the RAID controller's BIOS and remove all logical drives from it.
  2. Change the controller mode to non-RAID. This may be referred to as compatibility or JBOD mode.
Consult the manufacturer-provided documentation for further information about the specific device in use.

2.2.4. PCI Device Requirements

Virtualization hosts must have at least one network interface with a minimum bandwidth of 1 Gbps. It is recommended that each virtualization host have two network interfaces, with one dedicated to network-intensive activities such as virtual machine migration. The performance of such operations is limited by the available bandwidth.
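The negotiated link speed of each interface can be read from sysfs to confirm the 1 Gbps minimum. A sketch; interface names vary per host, and the speed attribute is empty while a link is down:

```shell
#!/bin/sh
# List each network interface and its link speed (Mb/s) from sysfs.
# A speed of 1000 or more meets the 1 Gbps minimum.
for dev in /sys/class/net/*; do
    name=$(basename "$dev")
    [ "$name" = "lo" ] && continue                    # skip loopback
    speed=$(cat "$dev/speed" 2>/dev/null || true)     # empty if link is down
    echo "${name}: ${speed:-unknown} Mb/s"
done
```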

2.2.5. Hardware Considerations For Device Assignment

If you plan to implement device assignment and PCI passthrough so that a virtual machine can use a specific PCIe device from a host, ensure the following requirements are met:
  • CPU must support IOMMU (for example, VT-d or AMD-Vi). IBM POWER8 supports IOMMU by default.
  • Firmware must support IOMMU.
  • CPU root ports used must support ACS or ACS-equivalent capability.
  • PCIe device must support ACS or ACS-equivalent capability.
  • It is recommended that all PCIe switches and bridges between the PCIe device and the root port support ACS. For example, if a switch does not support ACS, all devices behind that switch share the same IOMMU group and can only be assigned to the same virtual machine.
  • For GPU support, Red Hat Enterprise Linux 7 supports PCI device assignment of NVIDIA K-Series Quadro (model 2000 series or higher), GRID, and Tesla as non-VGA graphics devices. Currently up to two GPUs may be attached to a virtual machine in addition to one of the standard, emulated VGA interfaces. The emulated VGA is used for pre-boot and installation and the NVIDIA GPU takes over when the NVIDIA graphics drivers are loaded. Note that the NVIDIA Quadro 2000 is not supported, nor is the Quadro K420 card.
Refer to vendor specifications and datasheets to confirm that the hardware meets these requirements. After you have installed a hypervisor host, see Appendix G, Configuring a Hypervisor Host for PCI Passthrough for more information on how to enable the hypervisor hardware and software for device passthrough.
To implement SR-IOV, see Hardware Considerations for Implementing SR-IOV for more information.
The lspci -v command can be used to print information for PCI devices already installed on a system.
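Whether an IOMMU is active, and how devices are grouped for assignment, can be checked from sysfs. A sketch; an empty /sys/kernel/iommu_groups directory usually means the IOMMU is disabled in the firmware or not enabled in the kernel:

```shell
#!/bin/sh
# Show whether the kernel has an IOMMU enabled and, if so, which PCI
# devices share each IOMMU group. Devices in the same group can only
# be assigned to the same virtual machine.
if [ -d /sys/kernel/iommu_groups ] && \
   [ -n "$(ls -A /sys/kernel/iommu_groups 2>/dev/null)" ]; then
    for group in /sys/kernel/iommu_groups/*; do
        echo "IOMMU group $(basename "$group"):"
        for dev in "$group"/devices/*; do
            echo "  $(basename "$dev")"
        done
    done
else
    echo "No IOMMU groups found; the IOMMU may be disabled in firmware or kernel."
fi
```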