7.11. Advanced virtual machine management
7.11.1. Automating management tasks
You can automate OpenShift Virtualization management tasks by using Red Hat Ansible Automation Platform. Learn the basics by using an Ansible Playbook to create a new virtual machine.
7.11.1.1. About Red Hat Ansible Automation
Ansible is an automation tool used to configure systems, deploy software, and perform rolling updates. Ansible includes support for OpenShift Virtualization, and Ansible modules enable you to automate cluster management tasks such as template, persistent volume claim, and virtual machine operations.
Ansible provides a way to automate OpenShift Virtualization management, which you can also accomplish by using the oc CLI tool or APIs. Ansible is unique because it allows you to integrate KubeVirt modules with other Ansible modules.
7.11.1.2. Automating virtual machine creation
You can use the kubevirt_vm Ansible Playbook to create virtual machines in your OpenShift Container Platform cluster using Red Hat Ansible Automation Platform.
Prerequisites
- Red Hat Ansible Engine version 2.8 or newer
Procedure
Edit an Ansible Playbook YAML file so that it includes the kubevirt_vm task.

Note: The snippet shown after the next step only includes the kubevirt_vm portion of the playbook.

Edit the values to reflect the virtual machine you want to create, including the namespace, the number of cpu_cores, the memory, and the disks.
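For example, a minimal sketch of the kubevirt_vm task. The memory size, disk name, and container disk image shown here are illustrative assumptions, not values taken from this documentation; replace them with your own:

  kubevirt_vm:
    namespace: default
    name: vm1
    cpu_cores: 1
    memory: 64Mi                  # assumed value; size the guest memory for your workload
    disks:
      - name: containerdisk       # assumed disk name
        volume:
          containerDisk:
            image: kubevirt/cirros-container-disk-demo:latest   # assumed demo image
        disk:
          bus: virtio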
If you want the virtual machine to boot immediately after creation, add state: running to the YAML file. For example:

  kubevirt_vm:
    namespace: default
    name: vm1
    state: running
    cpu_cores: 1

Changing the state value to state: absent deletes the virtual machine, if it already exists.
Run the ansible-playbook command, using your playbook’s file name as the only argument:

$ ansible-playbook create-vm.yaml
Review the output to determine if the play was successful.

If you did not include state: running in your playbook file and you want to boot the VM now, edit the file so that it includes state: running and run the playbook again:

$ ansible-playbook create-vm.yaml
To verify that the virtual machine was created, try to access the VM console.
7.11.1.3. Example: Ansible Playbook for creating virtual machines

You can use the kubevirt_vm Ansible Playbook to automate virtual machine creation.

The following YAML file is an example of the kubevirt_vm playbook. It includes sample values that you must replace with your own information if you run the playbook.
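A minimal sketch of such a playbook, assuming it runs against the cluster from localhost. The play name, task name, memory size, disk name, and container disk image are illustrative assumptions:

  ---
  - name: Create a virtual machine with kubevirt_vm    # assumed play name
    hosts: localhost
    connection: local
    tasks:
      - name: Create my first VM                       # assumed task name
        kubevirt_vm:
          namespace: default
          name: vm1
          state: running
          cpu_cores: 1
          memory: 64Mi                                 # assumed value
          disks:
            - name: containerdisk                      # assumed disk name
              volume:
                containerDisk:
                  image: kubevirt/cirros-container-disk-demo:latest   # assumed demo image
              disk:
                bus: virtio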
7.11.2. Configuring PXE booting for virtual machines
PXE booting, or network booting, is available in OpenShift Virtualization. Network booting allows a computer to boot and load an operating system or other program without requiring a locally attached storage device. For example, you can use it to choose your desired OS image from a PXE server when deploying a new host.
7.11.2.1. Prerequisites
- A Linux bridge must be connected.
- The PXE server must be connected to the same VLAN as the bridge.
7.11.2.2. OpenShift Virtualization networking glossary
OpenShift Virtualization provides advanced networking functionality by using custom resources and plug-ins.
The following terms are used throughout OpenShift Virtualization documentation:
- Container Network Interface (CNI)
- a Cloud Native Computing Foundation project, focused on container network connectivity. OpenShift Virtualization uses CNI plug-ins to build upon the basic Kubernetes networking functionality.
- Multus
- a "meta" CNI plug-in that allows multiple CNIs to exist so that a Pod or virtual machine can use the interfaces it needs.
- Custom Resource Definition (CRD)
- a Kubernetes API resource that allows you to define custom resources, or an object defined by using the CRD API resource.
- NetworkAttachmentDefinition
- a CRD introduced by the Multus project that allows you to attach pods, virtual machines, and virtual machine instances to one or more networks.
- Preboot eXecution Environment (PXE)
- an interface that enables an administrator to boot a client machine from a server over the network. Network booting allows you to remotely load operating systems and other software onto the client.
7.11.2.3. PXE booting with a specified MAC address
As an administrator, you can boot a client over the network by first creating a NetworkAttachmentDefinition object for your PXE network. Then, reference the NetworkAttachmentDefinition in your virtual machine instance configuration file before you start the virtual machine instance. You can also specify a MAC address in the virtual machine instance configuration file, if required by the PXE server.
Prerequisites
- A Linux bridge must be connected.
- The PXE server must be connected to the same VLAN as the bridge.
Procedure
Configure a PXE network on the cluster:
Create the NetworkAttachmentDefinition file for PXE network pxe-net-conf, as in the sketch below.

Note: The virtual machine instance will be attached to the bridge br1 through an access port with the requested VLAN.
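A hedged sketch of such a file (pxe-net-conf.yaml). The bridge name br1 comes from the note above; the cniVersion, the VLAN ID, and the cnv-tuning plug-in entry are assumptions that you should adjust for your environment:

  apiVersion: "k8s.cni.cncf.io/v1"
  kind: NetworkAttachmentDefinition
  metadata:
    name: pxe-net-conf
  spec:
    config: '{
      "cniVersion": "0.3.1",
      "name": "pxe-net-conf",
      "plugins": [
        {
          "type": "cnv-bridge",
          "bridge": "br1",
          "vlan": 1
        },
        {
          "type": "cnv-tuning"
        }
      ]
    }'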
Create the NetworkAttachmentDefinition object by using the file you created in the previous step:
$ oc create -f pxe-net-conf.yaml
Edit the virtual machine instance configuration file to include the details of the interface and network, as in the sketch after these steps.
Specify the network and MAC address, if required by the PXE server. If the MAC address is not specified, a value is assigned automatically. However, note that at this time, MAC addresses assigned automatically are not persistent.
Ensure that bootOrder is set to 1 so that the interface boots first. In this example, the interface is connected to a network called <pxe-net>.

Note: Boot order is global for interfaces and disks.
Assign a boot device number to the disk to ensure proper booting after operating system provisioning.
Set the disk bootOrder value to 2.

Specify that the network is connected to the previously created NetworkAttachmentDefinition. In this scenario, <pxe-net> is connected to the NetworkAttachmentDefinition called <pxe-net-conf>.
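A hedged sketch of the virtual machine instance configuration (vmi-pxe-boot.yaml) that combines these steps. The MAC address is taken from the verification output later in this procedure; the apiVersion, memory request, volume, and container disk image are assumptions:

  apiVersion: kubevirt.io/v1alpha3            # assumed API version
  kind: VirtualMachineInstance
  metadata:
    name: vmi-pxe-boot
  spec:
    domain:
      devices:
        disks:
          - name: containerdisk
            disk:
              bus: virtio
            bootOrder: 2                      # boot from the disk after provisioning
        interfaces:
          - name: default
            masquerade: {}
          - name: pxe-net
            bridge: {}
            macAddress: de:00:00:00:00:de     # MAC address expected by the PXE server
            bootOrder: 1                      # PXE interface boots first
      resources:
        requests:
          memory: 1024M                       # assumed memory request
    networks:
      - name: default
        pod: {}
      - name: pxe-net
        multus:
          networkName: pxe-net-conf           # the NetworkAttachmentDefinition created earlier
    volumes:
      - name: containerdisk
        containerDisk:
          image: kubevirt/fedora-cloud-container-disk-demo   # assumed demo image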
Create the virtual machine instance:
$ oc create -f vmi-pxe-boot.yaml
Example output
virtualmachineinstance.kubevirt.io "vmi-pxe-boot" created
Wait for the virtual machine instance to run:
$ oc get vmi vmi-pxe-boot -o yaml | grep -i phase
phase: Running
View the virtual machine instance using VNC:
$ virtctl vnc vmi-pxe-boot

- Watch the boot screen to verify that the PXE boot is successful.
Log in to the virtual machine instance:
$ virtctl console vmi-pxe-boot

Verify the interfaces and MAC address on the virtual machine and that the interface connected to the bridge has the specified MAC address. In this case, we used eth1 for the PXE boot, without an IP address. The other interface, eth0, got an IP address from OpenShift Container Platform.

$ ip addr

Example output

  ...
  3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
     link/ether de:00:00:00:00:de brd ff:ff:ff:ff:ff:ff
7.11.3. Managing guest memory
If you want to adjust guest memory settings to suit a specific use case, you can do so by editing the guest’s YAML configuration file. OpenShift Virtualization allows you to configure guest memory overcommitment and disable guest memory overhead accounting.
The following procedures increase the chance that virtual machine processes will be killed due to memory pressure. Proceed only if you understand the risks.
7.11.3.1. Configuring guest memory overcommitment
If your virtual workload requires more memory than available, you can use memory overcommitment to allocate all or most of the host’s memory to your virtual machine instances (VMIs). Enabling memory overcommitment means that you can maximize resources that are normally reserved for the host.
For example, if the host has 32 GB RAM, you can use memory overcommitment to fit 8 virtual machines (VMs) with 4 GB RAM each. This allocation works under the assumption that the virtual machines will not use all of their memory at the same time.
Memory overcommitment increases the potential for virtual machine processes to be killed due to memory pressure (OOM killed).
The potential for a VM to be OOM killed varies based on your specific configuration, node memory, available swap space, virtual machine memory consumption, the use of kernel same-page merging (KSM), and other factors.
Procedure
To explicitly tell the virtual machine instance that it has more memory available than was requested from the cluster, edit the virtual machine configuration file and set spec.domain.memory.guest to a higher value than spec.domain.resources.requests.memory. This process is called memory overcommitment.

In this example, 1024M is requested from the cluster, but the virtual machine instance is told that it has 2048M available. As long as there is enough free memory available on the node, the virtual machine instance will consume up to 2048M.

Note: The same eviction rules as those for pods apply to the virtual machine instance if the node is under memory pressure.
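A minimal sketch of the relevant part of such a configuration. The memory fields follow the values described above; the kind, apiVersion, and name are assumptions, and other required fields such as devices and volumes are omitted:

  apiVersion: kubevirt.io/v1alpha3      # assumed API version
  kind: VirtualMachineInstance
  metadata:
    name: example-overcommit-vmi        # assumed name
  spec:
    domain:
      memory:
        guest: 2048M                    # memory the guest is told it has available
      resources:
        requests:
          memory: 1024M                 # memory actually requested from the cluster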
Create the virtual machine:
$ oc create -f <file_name>.yaml
7.11.3.2. Disabling guest memory overhead accounting
A small amount of memory is requested by each virtual machine instance in addition to the amount that you request. This additional memory is used for the infrastructure that wraps each VirtualMachineInstance process.
Though it is not usually advisable, it is possible to increase the virtual machine instance density on the node by disabling guest memory overhead accounting.
Disabling guest memory overhead accounting increases the potential for virtual machine processes to be killed due to memory pressure (OOM killed).
The potential for a VM to be OOM killed varies based on your specific configuration, node memory, available swap space, virtual machine memory consumption, the use of kernel same-page merging (KSM), and other factors.
Procedure
To disable guest memory overhead accounting, edit the YAML configuration file and set the overcommitGuestOverhead value to true. This parameter is disabled by default.

Note: If overcommitGuestOverhead is enabled, it adds the guest overhead to memory limits, if present.
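A minimal sketch of the relevant configuration. Only the overcommitGuestOverhead field comes from this procedure; the kind, apiVersion, name, and memory request are assumptions, and other required fields are omitted:

  apiVersion: kubevirt.io/v1alpha3      # assumed API version
  kind: VirtualMachineInstance
  metadata:
    name: example-overhead-vmi          # assumed name
  spec:
    domain:
      resources:
        overcommitGuestOverhead: true   # disable guest memory overhead accounting
        requests:
          memory: 1024M                 # assumed memory request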
Create the virtual machine:
$ oc create -f <file_name>.yaml
7.11.4. Using huge pages with virtual machines
You can use huge pages as backing memory for virtual machines in your cluster.
7.11.4.1. Prerequisites
- Nodes must have pre-allocated huge pages configured.
7.11.4.2. What huge pages do
Memory is managed in blocks known as pages. On most systems, a page is 4Ki. 1Mi of memory is equal to 256 pages; 1Gi of memory is 262,144 pages, and so on. CPUs have a built-in memory management unit that manages a list of these pages in hardware. The Translation Lookaside Buffer (TLB) is a small hardware cache of virtual-to-physical page mappings. If the virtual address passed in a hardware instruction can be found in the TLB, the mapping can be determined quickly. If not, a TLB miss occurs, and the system falls back to slower, software-based address translation, resulting in performance issues. Since the size of the TLB is fixed, the only way to reduce the chance of a TLB miss is to increase the page size.
A huge page is a memory page that is larger than 4Ki. On x86_64 architectures, there are two common huge page sizes: 2Mi and 1Gi. Sizes vary on other architectures. To use huge pages, code must be written so that applications are aware of them. Transparent Huge Pages (THP) attempt to automate the management of huge pages without application knowledge, but they have limitations. In particular, they are limited to 2Mi page sizes. THP can lead to performance degradation on nodes with high memory utilization or fragmentation because of the defragmenting efforts of THP, which can lock memory pages. For this reason, some applications may be designed to use (or recommend using) pre-allocated huge pages instead of THP.
In OpenShift Virtualization, virtual machines can be configured to consume pre-allocated huge pages.
7.11.4.3. Configuring huge pages for virtual machines
You can configure virtual machines to use pre-allocated huge pages by including the memory.hugepages.pageSize and resources.requests.memory parameters in your virtual machine configuration.

The memory request must be divisible by the page size. For example, you cannot request 500Mi memory with a page size of 1Gi.
The memory layouts of the host and the guest OS are unrelated. Huge pages requested in the virtual machine manifest apply to QEMU. Huge pages inside the guest can only be configured based on the amount of available memory of the virtual machine instance.
If you edit a running virtual machine, the virtual machine must be rebooted for the changes to take effect.
Prerequisites
- Nodes must have pre-allocated huge pages configured.
Procedure
In your virtual machine configuration, add the resources.requests.memory and memory.hugepages.pageSize parameters to the spec.domain. The following configuration snippet is for a virtual machine that requests a total of 4Gi memory with a page size of 1Gi.
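A hedged sketch of such a configuration. The pageSize and memory request come from this step; the kind, apiVersion, and name are assumptions, and other required fields such as devices and volumes are omitted:

  apiVersion: kubevirt.io/v1alpha3      # assumed API version
  kind: VirtualMachineInstance
  metadata:
    name: example-hugepages-vmi         # assumed name
  spec:
    domain:
      resources:
        requests:
          memory: "4Gi"                 # total memory requested; must be divisible by pageSize
      memory:
        hugepages:
          pageSize: "1Gi"               # pre-allocated huge page size backing the guest memory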
Apply the virtual machine configuration:
$ oc apply -f <virtual_machine>.yaml
7.11.5. Enabling dedicated resources for virtual machines

Virtual machines can have resources of a node, such as CPU, dedicated to them in order to improve performance.
7.11.5.1. About dedicated resources
When you enable dedicated resources for your virtual machine, your virtual machine’s workload is scheduled on CPUs that will not be used by other processes. By using dedicated resources, you can improve the performance of the virtual machine and the accuracy of latency predictions.
7.11.5.2. Prerequisites
- The CPU Manager must be configured on the node. Verify that the node has the cpumanager = true label before scheduling virtual machine workloads.
You can enable dedicated resources for a virtual machine in the Virtual Machine Overview page of the web console.
Procedure
- Click Workloads → Virtual Machines from the side menu.
- Select a virtual machine to open the Virtual Machine Overview page.
- Click the Details tab.
- Click the pencil icon to the right of the Dedicated Resources field to open the Dedicated Resources window.
- Select Schedule this workload with dedicated resources (guaranteed policy).
- Click Save.
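This procedure describes only the web console. As a point of reference, the console option is expected to correspond to the dedicatedCpuPlacement field in the virtual machine's domain specification; the following sketch shows that field, and the surrounding names and values are assumptions rather than output of this procedure:

  apiVersion: kubevirt.io/v1alpha3            # assumed API version
  kind: VirtualMachine
  metadata:
    name: example-dedicated-vm                # assumed name
  spec:
    template:
      spec:
        domain:
          cpu:
            dedicatedCpuPlacement: true       # schedule the workload on dedicated CPUs (guaranteed policy)
          resources:
            requests:
              memory: 1Gi                     # assumed memory request; other required fields omitted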