6. Virtualization


This section contains information about updates made to the Red Hat Enterprise Linux suite of virtualization tools.

6.1. Feature Updates

  • The blktap (block tap) userspace toolkit has been updated, adding the ability to monitor the transfer statistics of blktap-backed virtualized guests.
  • Support was added for the Intel Extended Page Table (EPT) feature, improving performance of fully virtualized guests on hardware that supports EPT.
  • e1000 network device emulation for guests has been added in this update, supporting only Windows 2003 guests on the ia64 architecture. To use e1000 emulation, the xm command must be used, as in the example below.
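    As an illustrative sketch, a guest configuration file can request the e1000 model on a virtual interface, and the guest is then started with xm create (the MAC address shown is an example only):
    vif = [ 'type=ioemu, mac=00:16:3e:00:00:01, model=e1000' ]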
  • Drivers for virtio, the platform for I/O virtualization in KVM, have been backported to Red Hat Enterprise Linux 5.3 from Linux kernel 2.6.27. These drivers enable KVM guests to achieve higher levels of I/O performance. Various userspace components, such as anaconda, kudzu, lvm, selinux, and mkinitrd, have also been updated to support virtio devices.
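    As an illustrative sketch (the memory size and image path are examples only), a KVM guest can be given virtio disk and network devices on the qemu-kvm command line:
    qemu-kvm -m 1024 -drive file=/var/lib/libvirt/images/guest.img,if=virtio -net nic,model=virtio -net tap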
  • The native Linux kernel supports vmcoreinfo automatically, but setting up kdump on dom0 domains previously required the kernel-xen-debuginfo package. With this release, the kernel and the hypervisor have been modified to support vmcoreinfo natively. Users needing kdump for debugging or other investigations on dom0 domains can now do so without installing the debuginfo or debuginfo-common packages.
  • Fully virtualized Red Hat Enterprise Linux 5 guests encountered suboptimal performance when using emulated disk and network devices. In this update, the kmod-xenpv package has been included to simplify the use of paravirtualized disks and networks in fully virtualized guests.
    Using these drivers can significantly improve the performance and functionality of fully virtualized guests. Bug fixes made to the netfront and blkfront drivers are immediately realized, as the drivers are kept synchronized with the kernel package.
  • Guests now have the ability to utilize 2MB backing pages for memory, which can improve system performance.

6.2. Resolved Issues

6.2.1. All Architectures

  • Shutting down a paravirtualized guest may have caused the dom0 to stop responding for a period of time. Delays of several seconds were experienced on guests with large amounts of memory (i.e. 12GB and above). In this update, the virtualized kernel makes the shutdown of a large paravirtualized guest preemptible, which resolves this issue.
  • The crash utility was unable to read the relocation address of the hypervisor from a vmcore file. Consequently, opening a virtualized kernel vmcore file with crash would fail, resulting in the error:
    crash: cannot resolve "idle_pg_table_4"
    
    In this update, the hypervisor now saves the address correctly, which resolves this issue.
  • Previously, paravirtualized guests could only have a maximum of 16 disk devices. In this update, this limit has been increased to a maximum of 256 disk devices.
  • The amount of memory reserved for the kdump kernel was incorrect, resulting in unusable crash dumps. In this update, the memory reservation is now correct, allowing proper crash dumps to be generated.
  • Attaching a disk with a specific name (e.g. /dev/xvdaa, /dev/xvdab, or /dev/xvdbc) to a paravirtualized guest resulted in a corrupted /dev device inside the guest. This update resolves the issue so that attaching disks with these names to a paravirtualized guest creates the proper /dev device inside the guest.
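    For example, a disk with one of these device names can be attached using xm block-attach (the domain name and backing device are illustrative):
    xm block-attach myguest phy:/dev/VolGroup00/guestdisk xvdaa w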
  • Previously, the number of loopback devices was limited to 4. Consequently, this limited the ability to create bridges on systems with more than 4 network interfaces. In this update, the netloop driver now creates additional loopback devices as required.
  • A race condition could occur when creating and destroying virtual network devices. In some circumstances — especially high load situations — this would cause the virtual device to not respond. In this update, the state of the virtual device is checked to prevent the race condition from occurring.
  • A memory leak in virt-manager was encountered if the application was left running. Consequently, the application would constantly consume more resources, which may have led to memory starvation. In this update, the leak has been fixed, which resolves this issue.
  • The crash utility could not analyze x86_64 vmcores from systems running kernel-xen because the Red Hat Enterprise Linux hypervisor is relocatable and the relocated physical base address was not passed in the vmcore file's ELF header. The new --xen_phys_start command line option for the crash utility allows the user to pass the relocated base physical address to crash.
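    For example (the address and file names are illustrative):
    crash --xen_phys_start 0x7e400000 vmlinux vmcore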
  • Not all mouse events were being captured and processed by the Paravirtual Frame Buffer (PVFB). Consequently, the scroll wheel did not function when interacting with a paravirtualized guest with the Virtual Machine Console. In this update, scroll wheel mouse events are now handled correctly, which resolves this issue.
  • Using Virtualization on a machine with a large number of CPUs may have caused the hypervisor to crash during guest installation. In this update, this issue has been resolved.
  • On Intel processors that return a CPUID family value of 6, only one performance counter register was enabled in kernel-xen. Consequently, only counter 0 provided samples. In this update, this issue has been resolved.

6.2.2. x86 Architectures

  • On systems with newer CPUs, the CPU APIC ID differs from the CPU ID. Consequently, the virtualized kernel was unable to initialize CPU frequency scaling. In this update, the virtualized kernel now retrieves the CPU APIC ID from the hypervisor, allowing CPU frequency scaling to be initialized properly.
  • When running an x86 paravirtualized guest, if a process accessed invalid memory, it would run in a loop instead of receiving a SEGV signal. This was caused by a flaw in the way Exec Shield checks were performed under the hypervisor. In this update, this issue has been resolved.

6.2.3. ia64 Architecture

  • A xend bug that previously caused guest installation failures has been fixed.
  • The evtchn event channel device lacked locks and memory barriers, which could lead to xenstore becoming unresponsive. In this update, this issue has been resolved.
  • Non-Uniform Memory Access (NUMA) information was not being displayed by the xm info command. Consequently, the node_to_cpu value for each node was incorrectly returned as no cpus. In this update, this issue has been resolved.
  • Previously, creating a guest on a Hardware Virtual Machine (HVM) would fail on processors that include the VT-i2 technology. In this update, this issue has been resolved.

6.2.4. x86_64 Architectures

  • When the dynamic IRQs available for guest virtual machines were exhausted, the dom0 kernel would crash. In this update, the crash condition has been fixed and the number of available IRQs has been increased, which resolves this issue.
  • On systems with newer CPUs, the CPU APIC ID differs from the CPU ID. Consequently, the virtualized kernel was unable to initialize CPU frequency scaling. In this update, the virtualized kernel now retrieves the CPU APIC ID from the hypervisor, allowing CPU frequency scaling to be initialized properly.

6.3. Known Issues

6.3.1. All Architectures

  • Diskette drive media will not be accessible when using the virtualized kernel. To work around this, use a USB-attached diskette drive instead.
    Note that diskette drive media works as expected with non-virtualized kernels.
  • In live migrations of paravirtualized guests, time-dependent guest processes may function improperly if the corresponding hosts' (dom0) times are not synchronized. Use NTP to synchronize system times for all corresponding hosts before migration.
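    For example, on each host (the time server is illustrative):
    ntpdate pool.ntp.org
    service ntpd start
    chkconfig ntpd on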
  • Repeated live migration of paravirtualized guests between two hosts may cause one host to panic. If a host is rebooted after migrating a guest out of the system and before migrating the same guest back, the panic will not occur.
  • Formatting a disk while running Windows 2008 or Windows Vista as a guest can crash the guest when it has been booted with multiple virtual CPUs. To work around this, boot the guest with a single virtual CPU when formatting.
  • Fully virtualized guests created through virt-manager may sometimes prevent the mouse from moving freely throughout the screen. To work around this, use virt-manager to configure a USB tablet device for the guest.
  • On systems with 128 or more CPUs, the number of CPUs used must be restricted to fewer than 128; the maximum supported at this time is 126. Use the maxcpus=126 hypervisor argument to limit the hypervisor to 126 CPUs, as shown below.
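    For example, on the hypervisor line of /boot/grub/grub.conf (the hypervisor version shown is illustrative):
    kernel /xen.gz-2.6.18-128.el5 maxcpus=126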
  • Fully virtualized guests cannot correct for time lost due to the domain being paused and unpaused. Being able to correctly track the time across pause and unpause events is one of the advantages of paravirtualized kernels. This issue is being addressed upstream with replaceable timers, so fully virtualized guests will have paravirtualized timers. Currently, this code is under development upstream and should be available in later versions of Red Hat Enterprise Linux.
  • Repeated migration of paravirtualized guests may result in bad mpa messages on the dom0 console. In some cases, the hypervisor may also panic.
    To prevent a hypervisor kernel panic, restart the migrated guests once the bad mpa messages appear.
  • When setting up interface bonding on dom0, the default network-bridge script may cause bonded network interfaces to alternately switch between unavailable and available. This occurrence is commonly known as flapping.
    To prevent this, replace the standard network-script line in /etc/xen/xend-config.sxp with the following line:
    (network-script network-bridge-bonding netdev=bond0)
    
    Doing so will disable the netloop device, which prevents Address Resolution Protocol (ARP) monitoring from failing during the address transfer process.
  • When running multiple guest domains, guest networking may temporarily stop working, resulting in the following error being reported in the dom0 logs:
    Memory squeeze in netback driver
    
    To work around this, raise the amount of memory available to the dom0 with the dom0_mem hypervisor command line option.
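    For example, in /boot/grub/grub.conf (the hypervisor version and memory amount are illustrative):
    kernel /xen.gz-2.6.18-128.el5 dom0_mem=1024M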

6.3.2. x86 Architectures

  • Migrating paravirtualized guests through xm migrate [domain] [dom0 IP address] does not work.
  • When installing Red Hat Enterprise Linux 5 on a fully virtualized SMP guest, the installation may freeze. This can occur when the host (dom0) is running Red Hat Enterprise Linux 5.2.
    To prevent this, set the guest to use a single processor during the install. You can do this by using the --vcpus=1 option in virt-install, as in the example below. Once the installation is completed, you can set the guest to SMP by modifying the allocated vcpus in virt-manager.
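    A minimal illustrative invocation (the guest name, sizes, and paths are examples only):
    virt-install --name=rhel5fv --ram=1024 --vcpus=1 --hvm --file=/var/lib/xen/images/rhel5fv.img --cdrom=/path/to/install.iso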

6.3.3. x86_64 Architectures

  • Migrating paravirtualized guests through xm migrate [domain] [dom0 IP address] does not work.
  • Installing the Virtualization feature may cause a "time went backwards" warning on HP systems with model numbers xw9300 and xw9400.
    To work around this issue for xw9400 machines, configure the BIOS settings to enable the HPET timer. Note that this option is not available on xw9300 machines.
  • Installing Red Hat Enterprise Linux 3.9 on a fully virtualized guest may be extremely slow. In addition, booting up the guest after installation may result in hda: lost interrupt errors.
    To avoid this bootup error, configure the guest to use the SMP kernel.
  • Upgrading a host (dom0) system to Red Hat Enterprise Linux 5.2 may render existing Red Hat Enterprise Linux 4.5 SMP paravirtualized guests unbootable. This is more likely to occur when the host system has more than 4GB of RAM.
    To work around this, boot each Red Hat Enterprise Linux 4.5 guest in single CPU mode and upgrade its kernel to the latest version (for Red Hat Enterprise Linux 4.5.z).

6.3.4. ia64 Architecture

  • Migrating paravirtualized guests through xm migrate [domain] [dom0 IP address] does not work.
  • On some Itanium systems configured for console output to VGA, the dom0 virtualized kernel may fail to boot. This is because the virtualized kernel fails to properly detect the default console device from the Extensible Firmware Interface (EFI) settings.
    When this occurs, add the boot parameter console=tty to the kernel boot options in /boot/efi/elilo.conf.
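    For example, in the append line of the relevant image stanza (the other options shown are illustrative; the key addition is console=tty):
    append="-- quiet rhgb console=tty"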
  • On some Itanium systems (such as the Hitachi Cold Fusion 3e), the serial port cannot be detected in dom0 when VGA is enabled by the EFI Maintenance Manager. As such, you need to supply the following serial port information to the dom0 kernel:
    • Speed in bits/second
    • Number of data bits
    • Parity
    • io_base address
    These details must be specified in the append= line of the dom0 kernel in /boot/efi/elilo.conf. For example:
    append="com1=19200,8n1,0x3f8 -- quiet rhgb console=tty0 console=ttyS0,19200n8"
    In this example, com1 is the serial port, 19200 is the speed (in bits/second), 8n1 specifies the number of data bits/parity settings, and 0x3f8 is the io_base address.
  • Virtualization does not work on some architectures that use Non-Uniform Memory Access (NUMA). As such, installing the virtualized kernel on systems that use NUMA will result in a boot failure.
    Some installation numbers install the virtualized kernel by default. If you have such an installation number and your system uses NUMA and does not work with kernel-xen, deselect the Virtualization option during installation.
  • Currently, live migration of fully virtualized guests is not supported on this architecture. In addition, kexec and kdump are also not supported for virtualization on this architecture.