Chapter 4. Getting started with virtualization on IBM Z

You can use KVM virtualization when using RHEL 8 on IBM Z hardware. However, enabling the KVM hypervisor on your system requires extra steps compared to virtualization on AMD64 and Intel 64 architectures. Certain RHEL 8 virtualization features also have different or restricted functionality on IBM Z.

Apart from the differences described in the following sections, virtualization on IBM Z works the same as on AMD64 and Intel 64. Therefore, you can refer to other RHEL 8 virtualization documentation for further information when using virtualization on IBM Z.

Note

Running KVM on the z/VM OS is not supported.

4.1. Enabling virtualization on IBM Z

To set up a KVM hypervisor and create virtual machines (VMs) on an IBM Z system running RHEL 8, follow the instructions below.

Prerequisites

  • RHEL 8.6 or later is installed and registered on your host machine.

    Important

    If you already enabled virtualization on an IBM Z machine by using RHEL 8.5 or earlier, you should instead reconfigure your virtualization module and update your system. For instructions, see Updating virtualization on IBM Z from RHEL 8.5 to RHEL 8.6 or later.

  • The following minimum system resources are available:

    • 6 GB free disk space for the host, plus another 6 GB for each intended VM.
    • 2 GB of RAM for the host, plus another 2 GB for each intended VM.
    • 4 CPUs on the host. VMs can generally run with a single assigned vCPU, but Red Hat recommends assigning 2 or more vCPUs per VM to avoid VMs becoming unresponsive during high load.
  • Your IBM Z host system is using a z13 CPU or later.
  • RHEL 8 is installed on a logical partition (LPAR). In addition, the LPAR supports the start-interpretive execution (SIE) virtualization functions.

    To verify this, search for sie in your /proc/cpuinfo file.

    # grep sie /proc/cpuinfo
    features        : esan3 zarch stfle msa ldisp eimm dfp edat etf3eh highgprs te sie

Procedure

  1. Load the KVM kernel module:

    # modprobe kvm
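
    Loading the module this way does not persist across reboots. If you want the kvm module to load automatically at boot time, one common approach (an optional addition, not part of this procedure) is to create a modules-load.d entry:

    # echo kvm > /etc/modules-load.d/kvm.conf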
  2. Verify that the KVM kernel module is loaded:

    # lsmod | grep kvm

    If KVM loaded successfully, the output of this command includes kvm.

  3. Install the packages in the virt:rhel/common module:

    # yum module install virt:rhel/common
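
    To confirm that the module was installed, you can optionally list the installed modules:

    # yum module list --installed | grep virt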
  4. Start the virtualization services:

    # for drv in qemu network nodedev nwfilter secret storage interface; do systemctl start virt${drv}d{,-ro,-admin}.socket; done
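
    This starts the socket units for the virtqemud, virtnetworkd, virtnodedevd, virtnwfilterd, virtsecretd, virtstoraged, and virtinterfaced services. To spot-check that they started, you can, for example, query one of the sockets:

    # systemctl is-active virtqemud.socket
    active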

Verification

  1. Verify that your system is prepared to be a virtualization host.

    # virt-host-validate
    [...]
    QEMU: Checking if device /dev/kvm is accessible             : PASS
    QEMU: Checking if device /dev/vhost-net exists              : PASS
    QEMU: Checking if device /dev/net/tun exists                : PASS
    QEMU: Checking for cgroup 'memory' controller support       : PASS
    QEMU: Checking for cgroup 'memory' controller mount-point   : PASS
    [...]
  2. If all virt-host-validate checks return a PASS value, your system is prepared for creating VMs.

    If any of the checks return a FAIL value, follow the displayed instructions to fix the problem.

    If any of the checks return a WARN value, consider following the displayed instructions to improve virtualization capabilities.

Troubleshooting

  • If KVM virtualization is not supported by your host CPU, virt-host-validate generates the following output:

    QEMU: Checking for hardware virtualization: FAIL (Only emulated CPUs are available, performance will be significantly limited)

    However, VMs on such a host system fail to boot entirely, rather than merely having performance problems.

    To work around this, you can change the <domain type> value in the XML configuration of the VM to qemu. Note, however, that Red Hat does not support VMs that use the qemu domain type, and setting this is highly discouraged in production environments.
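
    For illustration, a minimal sketch of that change in the VM's XML configuration (only the type attribute changes; the rest of the domain definition stays as it is):

    <!-- Unsupported: full emulation, suitable for testing only -->
    <domain type='qemu'>
      [...]
    </domain>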

4.2. Updating virtualization on IBM Z from RHEL 8.5 to RHEL 8.6 or later

If you installed RHEL 8 on IBM Z hardware prior to RHEL 8.6, you had to obtain virtualization RPMs from the AV stream, separate from the base RPM stream of RHEL 8. Starting with RHEL 8.6, virtualization RPMs previously available only from the AV stream are available on the base RHEL stream. In addition, the AV stream will be discontinued in a future minor release of RHEL 8. Therefore, using the AV stream is no longer recommended.

The following instructions deactivate the AV stream on your system and enable access to the virtualization RPMs available in RHEL 8.6 and later versions.

Prerequisites

  • You are using RHEL 8.5 on IBM Z, with the virt:av module installed. To confirm that this is the case:

    # hostnamectl | grep "Operating System"
    Operating System: Red Hat Enterprise Linux 8.5 (Ootpa)
    # yum module list --installed
    [...]
    Advanced Virtualization for RHEL 8 IBM Z Systems (RPMs)
    Name                Stream                  Profiles                  Summary
    virt                av [e]                common [i]                Virtualization module

Procedure

  1. Disable the virt:av module.

    # yum module disable virt:av
  2. Reset the state of the pre-existing virt module on your system.

    # yum module reset virt -y
  3. Upgrade your packages to their latest RHEL versions.

    # yum update

    This also automatically enables the virt:rhel module on your system.

Verification

  • Ensure the virt module on your system is provided by the rhel stream.

    # yum module info virt
    
    Name             : virt
    Stream           : rhel [d][e][a]
    Version          : 8050020211203195115
    [...]

4.3. How virtualization on IBM Z differs from AMD64 and Intel 64

KVM virtualization in RHEL 8 on IBM Z systems differs from KVM on AMD64 and Intel 64 systems in the following ways:

PCI and USB devices

Virtual PCI and USB devices are not supported on IBM Z. This also means that virtio-*-pci devices are unsupported, and virtio-*-ccw devices should be used instead. For example, use virtio-net-ccw instead of virtio-net-pci.
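
For example, a network interface with a CCW address can be defined as follows in the VM's XML configuration (a minimal sketch; with this address type, the VM uses the virtio-net-ccw device):

<interface type='network'>
  <source network='default'/>
  <model type='virtio'/>
  <address type='ccw' cssid='0xfe' ssid='0x0' devno='0x0001'/>
</interface>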

Note that direct attachment of PCI devices, also known as PCI passthrough, is supported.

Supported guest operating system
Red Hat only supports VMs hosted on IBM Z if they use RHEL 7, 8, or 9 as their guest operating system.
Device boot order

IBM Z does not support the <boot dev='device'> XML configuration element. To define device boot order, use the <boot order='number'> element in the <devices> section of the XML.

In addition, you can select the required boot entry by using the architecture-specific loadparm attribute in the <boot> element. For example, the following configuration places the disk first in the boot sequence and, if a Linux distribution is available on the disk, selects its second boot entry:

<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/path/to/qcow2'/>
  <target dev='vda' bus='virtio'/>
  <address type='ccw' cssid='0xfe' ssid='0x0' devno='0x0000'/>
  <boot order='1' loadparm='2'/>
</disk>
Note

Using <boot order='number'> for boot order management is also preferred on AMD64 and Intel 64 hosts.

Memory hot plug
Adding memory to a running VM (memory hot plug) is not possible on IBM Z. Note that removing memory from a running VM (memory hot unplug) is not possible either, on IBM Z as well as on AMD64 and Intel 64.
NUMA topology
Non-Uniform Memory Access (NUMA) topology for CPUs is not supported by libvirt on IBM Z. Therefore, tuning vCPU performance by using NUMA is not possible on these systems.
GPU devices
Assigning GPU devices is not supported on IBM Z systems.
vfio-ap
VMs on an IBM Z host can use the vfio-ap cryptographic device passthrough, which is not supported on any other architecture.
vfio-ccw
VMs on an IBM Z host can use the vfio-ccw disk device passthrough, which is not supported on any other architecture.
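
A minimal sketch of a vfio-ccw definition in the VM's XML configuration, assuming a mediated device has already been created on the host (the UUID is a hypothetical placeholder):

<hostdev mode='subsystem' type='mdev' model='vfio-ccw'>
  <source>
    <!-- UUID of the previously created mediated device (placeholder) -->
    <address uuid='90c6c135-ad44-41d0-b1b7-bae47de48627'/>
  </source>
</hostdev>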
SMBIOS
SMBIOS configuration is not available on IBM Z.
Watchdog devices

If using watchdog devices in your VM on an IBM Z host, use the diag288 model. For example:

<devices>
  <watchdog model='diag288' action='poweroff'/>
</devices>
kvm-clock
The kvm-clock service is specific to AMD64 and Intel 64 systems, and does not have to be configured for VM time management on IBM Z.
v2v and p2v
The virt-v2v and virt-p2v utilities are supported only on the AMD64 and Intel 64 architecture, and are not provided on IBM Z.
Nested virtualization
Creating nested VMs requires different settings on IBM Z than on AMD64 and Intel 64. For details, see Creating nested virtual machines.
No graphical output in earlier releases
When using RHEL 8.3 or an earlier minor version on your host, displaying the VM graphical output is not possible when connecting to the VM by using the VNC protocol. This is because the gnome-desktop utility was not supported in earlier RHEL versions on IBM Z. In addition, the SPICE display protocol does not work on IBM Z.
Migrations

To successfully migrate to a later host model (for example from IBM z14 to z15), or to update the hypervisor, use the host-model CPU mode. The host-passthrough and maximum CPU modes are not recommended, as they are generally not migration-safe.

If you want to specify an explicit CPU model in the custom CPU mode, follow these guidelines:

  • Do not use CPU models that end with -base.
  • Do not use the qemu, max or host CPU model.

To successfully migrate to an older host model (such as from z15 to z14), or to an earlier version of QEMU, KVM, or the RHEL kernel, use the CPU type of the oldest available host model without -base at the end.
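
For example, a custom CPU definition that follows these guidelines can look as follows (a minimal sketch; z14 here stands in for the oldest host model in your migration path):

<cpu mode='custom' match='exact'>
  <model>z14</model>
</cpu>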

PXE installation and booting

When using PXE to run a VM on IBM Z, a specific configuration is required for the pxelinux.cfg/default file. For example:

# pxelinux
default linux
label linux
kernel kernel.img
initrd initrd.img
append ip=dhcp inst.repo=example.com/redhat/BaseOS/s390x/os/
Secure Execution
You can boot a VM with a prepared secure guest image by defining <launchSecurity type="s390-pv"/> in the XML configuration of the VM. This encrypts the VM’s memory to protect it from unwanted access by the hypervisor.
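
For example, a minimal sketch of where the element belongs in the domain definition (the VM name is a hypothetical placeholder):

<domain type='kvm'>
  <name>secure-guest</name>
  [...]
  <launchSecurity type="s390-pv"/>
  [...]
</domain>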

Note that the following features are not supported when running a VM in secure execution mode:

  • Device passthrough by using vfio
  • Obtaining memory information by using virsh domstats and virsh memstat
  • The memballoon and virtio-rng virtual devices
  • Memory backing by using huge pages
  • Live and non-live VM migrations
  • Saving and restoring VMs
  • VM snapshots, including memory snapshots (using the --memspec option)
  • Full memory dumps. Instead, specify the --memory-only option for the virsh dump command.
  • 248 or more vCPUs. The vCPU limit for secure guests is 247.
  • Nested virtualization

4.4. Next steps

  • When setting up a VM on an IBM Z system, it is recommended to protect the guest OS from the "Spectre" vulnerability. To do so, use the virsh edit command to modify the VM’s XML configuration and configure its CPU in one of the following ways:

    • Use the host CPU model:

      <cpu mode='host-model' check='partial'>
        <model fallback='allow'/>
      </cpu>

      This makes the ppa15 and bpb features available to the guest if the host supports them.

    • If using a specific host model, add the ppa15 and bpb features. The following example uses the zEC12 CPU model:

      <cpu mode='custom' match='exact' check='partial'>
          <model fallback='allow'>zEC12</model>
          <feature policy='force' name='ppa15'/>
          <feature policy='force' name='bpb'/>
      </cpu>

      Note that when using the ppa15 feature with the z114 and z196 CPU models on a host machine that uses a z12 CPU, you also need to use the latest microcode level (bundle 95 or later).
