Virtualization
OpenShift Virtualization installation, usage, and release notes
Abstract
Chapter 1. About
1.1. About OpenShift Virtualization
Learn about OpenShift Virtualization’s capabilities and support scope.
1.1.1. What you can do with OpenShift Virtualization
OpenShift Virtualization is an add-on to OpenShift Container Platform that allows you to run and manage virtual machine workloads alongside container workloads.
OpenShift Virtualization adds new objects into your OpenShift Container Platform cluster by using Kubernetes custom resources to enable virtualization tasks. These tasks include:
- Creating and managing Linux and Windows virtual machines (VMs)
- Running pod and VM workloads alongside each other in a cluster
- Connecting to virtual machines through a variety of consoles and CLI tools
- Importing and cloning existing virtual machines
- Managing network interface controllers and storage disks attached to virtual machines
- Live migrating virtual machines between nodes
An enhanced web console provides a graphical portal to manage these virtualized resources alongside the OpenShift Container Platform cluster containers and infrastructure.
OpenShift Virtualization is designed and tested to work well with Red Hat OpenShift Data Foundation features.
When you deploy OpenShift Virtualization with OpenShift Data Foundation, you must create a dedicated storage class for Windows virtual machine disks. See Optimizing ODF PersistentVolumes for Windows VMs for details.
You can use OpenShift Virtualization with OVN-Kubernetes, OpenShift SDN, or one of the other certified network plugins listed in Certified OpenShift CNI Plug-ins.
You can check your OpenShift Virtualization cluster for compliance issues by installing the Compliance Operator and running a scan with the ocp4-moderate and ocp4-moderate-node profiles.
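For reference, a minimal ScanSettingBinding that runs both profiles with the default scan settings might look like the following sketch (it assumes the Compliance Operator is installed in the openshift-compliance namespace; the binding name is a placeholder):
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSettingBinding
metadata:
  name: virt-moderate        # placeholder name
  namespace: openshift-compliance
profiles:
- apiGroup: compliance.openshift.io/v1alpha1
  kind: Profile
  name: ocp4-moderate
- apiGroup: compliance.openshift.io/v1alpha1
  kind: Profile
  name: ocp4-moderate-node
settingsRef:
  apiGroup: compliance.openshift.io/v1alpha1
  kind: ScanSetting
  name: default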
1.1.1.1. OpenShift Virtualization supported cluster version
The latest stable release of OpenShift Virtualization 4.14 is 4.14.17.
OpenShift Virtualization 4.14 is supported for use on OpenShift Container Platform 4.14 clusters. To use the latest z-stream release of OpenShift Virtualization, you must first upgrade to the latest version of OpenShift Container Platform.
1.1.2. About volume and access modes for virtual machine disks
If you use the storage API with known storage providers, the volume and access modes are selected automatically. However, if you use a storage class that does not have a storage profile, you must configure the volume and access mode.
For best results, use the ReadWriteMany (RWX) access mode and the Block volume mode:
- The ReadWriteMany (RWX) access mode is required for live migration.
- The Block volume mode performs significantly better than the Filesystem volume mode. This is because the Filesystem volume mode uses more storage layers, including a file system layer and a disk image file. These layers are not necessary for VM disk storage.
For example, if you use Red Hat OpenShift Data Foundation, Ceph RBD volumes are preferable to CephFS volumes.
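For example, a data volume that explicitly requests these modes might look like the following sketch (the name, size, and storage class are placeholders; with a configured storage profile, CDI selects these values for you):
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: example-vm-disk      # placeholder name
spec:
  source:
    blank: {}
  storage:
    accessModes:
    - ReadWriteMany
    volumeMode: Block
    resources:
      requests:
        storage: 30Gi        # example size
    storageClassName: <storage_class>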
You cannot live migrate virtual machines with the following configurations:
- Storage volume with ReadWriteOnce (RWO) access mode
- Passthrough features such as GPUs
Do not set the evictionStrategy field to LiveMigrate for these virtual machines.
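For a VM that does support live migration, the eviction strategy is set in the VM specification. A minimal sketch (the VM name is a placeholder and only the relevant field is shown):
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm           # placeholder name
spec:
  template:
    spec:
      evictionStrategy: LiveMigrate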
1.1.3. Single-node OpenShift differences
You can install OpenShift Virtualization on single-node OpenShift.
However, you should be aware that Single-node OpenShift does not support the following features:
- High availability
- Pod disruption
- Live migration
- Virtual machines or templates that have an eviction strategy configured
1.2. Supported limits
You can refer to tested object maximums when planning your OpenShift Container Platform environment for OpenShift Virtualization. However, approaching the maximum values can reduce performance and increase latency. Ensure that you plan for your specific use case and consider all factors that can impact cluster scaling.
For more information about cluster configuration and options that impact performance, see the OpenShift Virtualization - Tuning & Scaling Guide in the Red Hat Knowledgebase.
1.2.1. Tested maximums for OpenShift Virtualization
The following limits apply to a large-scale OpenShift Virtualization 4.x environment. They are based on a single cluster of the largest possible size. When you plan an environment, remember that multiple smaller clusters might be the best option for your use case.
1.2.1.1. Virtual machine maximums
The following maximums apply to virtual machines (VMs) running on OpenShift Virtualization. These values are subject to the limits specified in Virtualization limits for Red Hat Enterprise Linux with KVM.
| Objective (per VM) | Tested limit | Theoretical limit |
|---|---|---|
| Virtual CPUs | 216 vCPUs | 255 vCPUs |
| Memory | 6 TB | 16 TB |
| Single disk size | 20 TB | 100 TB |
| Hot-pluggable disks | 255 disks | N/A |
Each VM must have at least 512 MB of memory.
1.2.1.2. Host maximums
The following maximums apply to the OpenShift Container Platform hosts used for OpenShift Virtualization.
| Objective (per host) | Tested limit | Theoretical limit |
|---|---|---|
| Logical CPU cores or threads | Same as Red Hat Enterprise Linux (RHEL) | N/A |
| RAM | Same as RHEL | N/A |
| Simultaneous live migrations | Defaults to 2 outbound migrations per node, and 5 concurrent migrations per cluster | Depends on NIC bandwidth |
| Live migration bandwidth | No default limit | Depends on NIC bandwidth |
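These defaults can be tuned cluster-wide through the liveMigrationConfig stanza of the HyperConverged CR. A hedged sketch (values are examples, not recommendations):
spec:
  liveMigrationConfig:
    parallelMigrationsPerCluster: 5        # example value
    parallelOutboundMigrationsPerNode: 2   # example value
    bandwidthPerMigration: 64Mi            # example value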
1.2.1.3. Cluster maximums
The following maximums apply to objects defined in OpenShift Virtualization.
| Objective (per cluster) | Tested limit | Theoretical limit |
|---|---|---|
| Number of attached PVs per node | N/A | CSI storage provider dependent |
| Maximum PV size | N/A | CSI storage provider dependent |
| Hosts | 500 hosts (100 or fewer recommended) [1] | Same as OpenShift Container Platform |
| Defined VMs | 10,000 VMs [2] | Same as OpenShift Container Platform |
If you use more than 100 nodes, consider using Red Hat Advanced Cluster Management (RHACM) to manage multiple clusters instead of scaling out a single control plane. Larger clusters add complexity, require longer updates, and depending on node size and total object density, they can increase control plane stress.
Using multiple clusters can be beneficial in areas like per-cluster isolation and high availability.
The maximum number of VMs per node depends on the host hardware and resource capacity. It is also limited by the following parameters:
- Settings that limit the number of pods that can be scheduled to a node, for example, maxPods (a KubeletConfig example follows this list).
- The default number of KVM devices, for example, devices.kubevirt.io/kvm: 1k.
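A minimal sketch of raising maxPods with a KubeletConfig (the selector targets the standard worker machine config pool; the name and value are examples only):
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: worker-max-pods      # placeholder name
spec:
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/worker: ""
  kubeletConfig:
    maxPods: 500             # example value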
1.3. Security policies
Learn about OpenShift Virtualization security and authorization.
Key points
- OpenShift Virtualization adheres to the restricted Kubernetes pod security standards profile, which aims to enforce the current best practices for pod security.
- Virtual machine (VM) workloads run as unprivileged pods.
- Security context constraints (SCCs) are defined for the kubevirt-controller service account.
- TLS certificates for OpenShift Virtualization components are renewed and rotated automatically.
1.3.1. About workload security
By default, virtual machine (VM) workloads do not run with root privileges in OpenShift Virtualization, and there are no supported OpenShift Virtualization features that require root privileges.
For each VM, a virt-launcher pod runs an instance of libvirt in session mode to manage the VM process. In session mode, the libvirt daemon runs as a non-root user.
1.3.2. TLS certificates
TLS certificates for OpenShift Virtualization components are renewed and rotated automatically. You are not required to refresh them manually.
Automatic renewal schedules
TLS certificates are automatically deleted and replaced according to the following schedule:
- KubeVirt certificates are renewed daily.
- Containerized Data Importer controller (CDI) certificates are renewed every 15 days.
- MAC pool certificates are renewed every year.
Automatic TLS certificate rotation does not disrupt any operations. For example, the following operations continue to function without any disruption:
- Migrations
- Image uploads
- VNC and console connections
1.3.3. Authorization
OpenShift Virtualization uses role-based access control (RBAC) to define permissions for human users and service accounts. The permissions defined for service accounts control the actions that OpenShift Virtualization components can perform.
You can also use RBAC roles to manage user access to virtualization features. For example, an administrator can create an RBAC role that provides the permissions required to launch a virtual machine. The administrator can then restrict access by binding the role to specific users.
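A minimal sketch of such a role (the role name is a placeholder; KubeVirt serves the start action as a subresource in the subresources.kubevirt.io API group):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: vm-starter           # placeholder name
rules:
- apiGroups:
  - subresources.kubevirt.io
  resources:
  - virtualmachines/start
  verbs:
  - update
Binding this role to specific users with a RoleBinding then limits the start permission to a single namespace.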
1.3.3.1. Default cluster roles for OpenShift Virtualization
By using cluster role aggregation, OpenShift Virtualization extends the default OpenShift Container Platform cluster roles to include permissions for accessing virtualization objects.
| Default cluster role | OpenShift Virtualization cluster role | OpenShift Virtualization cluster role description |
|---|---|---|
| view | kubevirt.io:view | A user that can view all OpenShift Virtualization resources in the cluster but cannot create, delete, modify, or access them. For example, the user can see that a virtual machine (VM) is running but cannot shut it down or gain access to its console. |
| edit | kubevirt.io:edit | A user that can modify all OpenShift Virtualization resources in the cluster. For example, the user can create VMs, access VM consoles, and delete VMs. |
| admin | kubevirt.io:admin | A user that has full permissions to all OpenShift Virtualization resources, including the ability to delete collections of resources. The user can also view and modify the OpenShift Virtualization runtime configuration, which is located in the HyperConverged custom resource. |
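For example, to grant a user cluster-wide edit permissions by using the aggregated role, you might run the following command (the user name is a placeholder):
$ oc adm policy add-cluster-role-to-user kubevirt.io:edit <user_name>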
1.3.3.2. RBAC roles for storage features in OpenShift Virtualization
The following permissions are granted to the Containerized Data Importer (CDI), including the cdi-operator and cdi-controller service accounts.
1.3.3.2.1. Cluster-wide RBAC roles
| CDI cluster role | Resources | Verbs |
|---|---|---|
|
|
|
|
|
|
| |
|
|
|
|
|
|
| |
|
|
|
|
|
|
| |
|
|
|
|
| API group | Resources | Verbs |
|---|---|---|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Allow list:
|
|
|
|
Allow list:
|
|
|
|
|
|
| API group | Resources | Verbs |
|---|---|---|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
1.3.3.2.2. Namespaced RBAC roles
| API group | Resources | Verbs |
|---|---|---|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| API group | Resources | Verbs |
|---|---|---|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
1.3.3.3. Additional SCCs and permissions for the kubevirt-controller service account
Security context constraints (SCCs) control permissions for pods. These permissions include actions that a pod, a collection of containers, can perform and what resources it can access. You can use SCCs to define a set of conditions that a pod must run with to be accepted into the system.
The virt-controller is a cluster controller that creates the virt-launcher pods for virtual machines in the cluster.
By default, each VirtualMachineInstance runs in a virt-launcher pod that uses the default service account of its namespace.
The kubevirt-controller service account is granted additional SCCs and Linux capabilities so that it can create virt-launcher pods with the appropriate permissions. These extended privileges allow virtual machines to use OpenShift Virtualization features that are beyond the scope of typical pods.
The kubevirt-controller service account is granted the following SCCs:
- scc.AllowHostDirVolumePlugin = true
  This allows virtual machines to use the hostpath volume plugin.
- scc.AllowPrivilegedContainer = false
  This ensures the virt-launcher pod is not run as a privileged container.
- scc.AllowedCapabilities = []corev1.Capability{"SYS_NICE", "NET_BIND_SERVICE"}
  - SYS_NICE allows setting the CPU affinity.
  - NET_BIND_SERVICE allows DHCP and Slirp operations.
Viewing the SCC and RBAC definitions for the kubevirt-controller
You can view the SecurityContextConstraints definition for the kubevirt-controller by using the oc tool:
$ oc get scc kubevirt-controller -o yaml
You can view the RBAC definition for the kubevirt-controller cluster role by using the oc tool:
$ oc get clusterrole kubevirt-controller -o yaml
1.4. OpenShift Virtualization Architecture
The Operator Lifecycle Manager (OLM) deploys operator pods for each component of OpenShift Virtualization:
- Compute: virt-operator
- Storage: cdi-operator
- Network: cluster-network-addons-operator
- Scaling: ssp-operator
- Templating: tekton-tasks-operator
OLM also deploys the hyperconverged-cluster-operator pod, which is responsible for the deployment, configuration, and life cycle of other components, and several helper pods: hco-webhook and hyperconverged-cluster-cli-download.
After all operator pods are successfully deployed, you should create the HyperConverged custom resource (CR). The configurations set in the HyperConverged CR serve as the single source of truth and the entrypoint for OpenShift Virtualization, and they guide the behavior of the CRs.
The HyperConverged CR creates corresponding CRs for the operators of all other components. For example, the KubeVirt CR triggers the deployment of the virt-controller, virt-handler, and virt-api pods that provide the core virtualization functionality.
The OLM deploys the Hostpath Provisioner (HPP) Operator, but it is not functional until you create a hostpath-provisioner CR.
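A minimal HyperConverged CR that accepts the default configuration for every component looks roughly like this (the name and namespace are the values used by a standard OpenShift Virtualization deployment):
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec: {}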
1.4.1. About the HyperConverged Operator (HCO)
The HCO, hco-operator, provides a single entry point for deploying and managing OpenShift Virtualization and several helper operators with an opinionated default. It also creates custom resources (CRs) for those operators.
| Component | Description |
|---|---|
| deployment/hco-webhook | Validates the HyperConverged custom resource contents. |
| deployment/hyperconverged-cluster-cli-download | Provides the virtctl tool binaries to the cluster so that you can download them directly from the cluster. |
| KubeVirt CR | Contains all operators, CRs, and objects needed by OpenShift Virtualization. |
| SSP CR | A Scheduling, Scale, and Performance (SSP) CR. This is automatically created by the HCO. |
| CDI CR | A Containerized Data Importer (CDI) CR. This is automatically created by the HCO. |
| NetworkAddonsConfig CR | A CR that instructs and is managed by the cluster-network-addons-operator. |
1.4.2. About the Containerized Data Importer (CDI) Operator
The CDI Operator, cdi-operator, manages the Containerized Data Importer (CDI) and its related resources. CDI imports a virtual machine (VM) image into a persistent volume claim (PVC) by using a data volume.
| Component | Description |
|---|---|
| deployment/cdi-apiserver | Manages the authorization to upload VM disks into PVCs by issuing secure upload tokens. |
| deployment/cdi-uploadproxy | Directs external disk upload traffic to the appropriate upload server pod so that it can be written to the correct PVC. Requires a valid upload token. |
| pod/cdi-importer | Helper pod that imports a virtual machine image into a PVC when creating a data volume. |
1.4.3. About the Cluster Network Addons Operator
The Cluster Network Addons Operator, cluster-network-addons-operator, deploys networking components on a cluster and manages the related resources for extended network functionality.
| Component | Description |
|---|---|
| deployment/kubemacpool-cert-manager | Manages TLS certificates of Kubemacpool’s webhooks. |
| deployment/kubemacpool-mac-controller-manager | Provides a MAC address pooling service for virtual machine (VM) network interface cards (NICs). |
| daemonset/bridge-marker | Marks network bridges available on nodes as node resources. |
| daemonset/kube-cni-linux-bridge-plugin | Installs Container Network Interface (CNI) plugins on cluster nodes, enabling the attachment of VMs to Linux bridges through network attachment definitions. |
1.4.4. About the Hostpath Provisioner (HPP) Operator
The HPP Operator, hostpath-provisioner-operator, deploys and manages the multi-node HPP and related resources.
| Component | Description |
|---|---|
|
| Provides a worker for each node where the HPP is designated to run. The pods mount the specified backing storage on the node. |
|
| Implements the Container Storage Interface (CSI) driver interface of the HPP. |
|
| Implements the legacy driver interface of the HPP. |
1.4.5. About the Scheduling, Scale, and Performance (SSP) Operator
The SSP Operator, ssp-operator, deploys the common templates, the related default boot sources, the pipeline tasks, and the template validator.
| Component | Description |
|---|---|
| create-vm-from-template | Creates a VM from a template. |
| copy-template | Copies a VM template. |
| modify-vm-template | Creates or removes a VM template. |
| modify-data-object | Creates or removes data volumes or data sources. |
| cleanup-vm | Runs a script or a command on a VM, then stops or deletes the VM afterward. |
| disk-virt-customize | Runs a virt-customize script on a target PVC. |
| disk-virt-sysprep | Runs a virt-sysprep script on a target PVC. |
| wait-for-vmi-status | Waits for a specific virtual machine instance (VMI) status, then fails or succeeds according to that status. |
| create-vm-from-manifest | Creates a VM from a manifest. |
1.4.6. About the OpenShift Virtualization Operator
The OpenShift Virtualization Operator, virt-operator, deploys, upgrades, and manages OpenShift Virtualization without disrupting current VM workloads.
| Component | Description |
|---|---|
| deployment/virt-api | HTTP API server that serves as the entry point for all virtualization-related flows. |
| deployment/virt-controller | Observes the creation of a new VM instance object and creates a corresponding pod. When the pod is scheduled on a node, virt-controller updates the VM with the node name. |
| daemonset/virt-handler | Monitors any changes to a VM and instructs virt-launcher to perform the required operations. |
| pod/virt-launcher | Contains the VM that was created by the user as implemented by libvirt and qemu. |
Chapter 2. Release notes
2.1. OpenShift Virtualization release notes
2.1.1. Providing documentation feedback
To report an error or to improve our documentation, log in to your Red Hat Jira account and submit a Jira issue.
2.1.2. About Red Hat OpenShift Virtualization
Red Hat OpenShift Virtualization enables you to bring traditional virtual machines (VMs) into OpenShift Container Platform where they run alongside containers, and are managed as native Kubernetes objects.
OpenShift Virtualization is represented by the OpenShift Virtualization icon in the web console.
You can use OpenShift Virtualization with either the OVN-Kubernetes or the OpenShift SDN default Container Network Interface (CNI) network provider.
Learn more about what you can do with OpenShift Virtualization.
Learn more about OpenShift Virtualization architecture and deployments.
Prepare your cluster for OpenShift Virtualization.
2.1.2.1. OpenShift Virtualization supported cluster version
The latest stable release of OpenShift Virtualization 4.14 is 4.14.17.
OpenShift Virtualization 4.14 is supported for use on OpenShift Container Platform 4.14 clusters. To use the latest z-stream release of OpenShift Virtualization, you must first upgrade to the latest version of OpenShift Container Platform.
2.1.2.2. Supported guest operating systems
To view the supported guest operating systems for OpenShift Virtualization, see Certified Guest Operating Systems in Red Hat OpenStack Platform, Red Hat Virtualization, OpenShift Virtualization and Red Hat Enterprise Linux with KVM.
2.1.3. New and changed features
OpenShift Virtualization is certified in Microsoft’s Windows Server Virtualization Validation Program (SVVP) to run Windows Server workloads.
The SVVP Certification applies to:
- Red Hat Enterprise Linux CoreOS workers. In the Microsoft SVVP Catalog, they are named Red Hat OpenShift Container Platform 4.14.
- Intel and AMD CPUs.
- Creating hosted control plane clusters on OpenShift Virtualization was previously Technology Preview and is now generally available. For more information, see Managing hosted control plane clusters on OpenShift Virtualization in the Red Hat Advanced Cluster Management (RHACM) documentation.
Using OpenShift Virtualization on Amazon Web Services (AWS) bare-metal OpenShift Container Platform clusters was previously Technology Preview and is now generally available.
In addition, OpenShift Virtualization is now supported on Red Hat OpenShift Service on AWS Classic clusters.
For more information, see OpenShift Virtualization on AWS bare metal.
- Using the NVIDIA GPU Operator to provision worker nodes for GPU-enabled VMs was previously Technology Preview and is now generally available. For more information, see Configuring the NVIDIA GPU Operator.
- As a cluster administrator, you can back up and restore applications running on OpenShift Virtualization by using the OpenShift API for Data Protection (OADP).
- You can add a static authorized SSH key to a project by using the web console. The key is then added to all VMs that you create in the project.
- OpenShift Virtualization now supports persisting the virtual Trusted Platform Module (vTPM) device state by using Persistent Volume Claims (PVCs) for VMs. You must specify the storage class to be used by the PVC by setting the vmStateStorageClass attribute in the HyperConverged custom resource (CR).
- You can enable dynamic SSH key injection for RHEL 9 VMs. Then, you can update the authorized SSH keys at runtime.
- You can now enable volume snapshots as boot sources.
The access mode and volume mode fields in storage profiles are populated automatically with their optimal values for the following additional Container Storage Interface (CSI) provisioners:
- Dell PowerFlex
- Dell PowerMax
- Dell PowerScale
- Dell Unity
- Dell PowerStore
- Hitachi Virtual Storage Platform
- IBM Fusion Hyper-Converged Infrastructure
- IBM Fusion HCI with Fusion Data Foundation or Fusion Global Data Platform
- IBM Fusion Software-Defined Storage
- IBM FlashSystems
- Hewlett Packard Enterprise 3PAR
- Hewlett Packard Enterprise Nimble
- Hewlett Packard Enterprise Alletra
- Hewlett Packard Enterprise Primera
- You can use a custom scheduler to schedule a virtual machine (VM) on a node.
- Garbage collection for data volumes is disabled by default.
The following runbooks have been changed:
- SingleStackIPv6Unsupported and VMStorageClassWarning have been added.
- KubeMacPoolDown has been renamed KubemacpoolDown.
- KubevirtHyperconvergedClusterOperatorInstallationNotCompletedAlert has been renamed HCOInstallationIncomplete.
- KubevirtHyperconvergedClusterOperatorCRModification has been renamed KubeVirtCRModified.
- KubevirtHyperconvergedClusterOperatorUSModification has been renamed UnsupportedHCOModification.
- SSPOperatorDown has been renamed SSPDown.
2.1.3.1. Quick starts
- Quick start tours are available for several OpenShift Virtualization features. To view the tours, click the Help icon ? in the menu bar on the header of the OpenShift Virtualization console and then select Quick Starts. You can filter the available tours by entering the virtualization keyword in the Filter field.
2.1.3.2. Networking
- You can connect a virtual machine (VM) to an OVN-Kubernetes secondary network by using the web console or the CLI.
2.1.3.3. Web console
- Cluster administrators can now enable automatic subscription for Red Hat Enterprise Linux (RHEL) virtual machines in the OpenShift Virtualization web console.
- You can now force stop an unresponsive VM from the action menu. To force stop a VM, select Stop and then Force stop from the action menu.
- The DataSources and the Bootable volumes pages have been merged into the Bootable volumes page so that you can manage these similar resources in a single location.
- Cluster administrators can enable or disable Technology Preview features on the Settings tab on the Virtualization → Overview page.
- You can now generate a temporary token to access the VNC of a VM.
2.1.4. Deprecated and removed features
2.1.4.1. Deprecated features
Deprecated features are included in the current release and supported. However, they will be removed in a future release and are not recommended for new deployments.
- The kubevirt-virtctl RHEL 8 RPM is deprecated. Download the virtctl binary from the OpenShift Container Platform web console instead of using the command line. The RPM will be removed in a future release.
- The tekton-tasks-operator is deprecated and Tekton tasks and example pipelines are now deployed by the ssp-operator.
- The copy-template, modify-vm-template, and create-vm-from-template tasks are deprecated.
- Many OpenShift Virtualization metrics have changed or will change in a future version. These changes could affect your custom dashboards. See OpenShift Virtualization 4.14 metric changes for details. (BZ#2179660)
- Support for Windows Server 2012 R2 templates is deprecated.
2.1.4.2. Removed features
Removed features are not supported in the current release.
- Support for the legacy HPP custom resource, and the associated storage class, has been removed for all new deployments. In OpenShift Virtualization 4.14, the HPP Operator uses the Kubernetes Container Storage Interface (CSI) driver to configure local storage. A legacy HPP custom resource is supported only if it had been installed on a previous version of OpenShift Virtualization.
- Installing the virtctl client as an RPM is no longer supported for Red Hat Enterprise Linux (RHEL) 7 and RHEL 9.
- CentOS 7 and CentOS Stream 8 are now in the End of Life phase. As a consequence, the container images for these operating systems have been removed from OpenShift Virtualization and are no longer community supported.
2.1.5. Technology Preview features
Some features in this release are currently in Technology Preview. These experimental features are not intended for production use. Note the following scope of support on the Red Hat Customer Portal for these features:
Technology Preview Features Support Scope
- You can now install and edit customized instance types and preferences to create a VM from a volume or PersistentVolumeClaim (PVC).
- You can now configure a VM eviction strategy for the entire cluster.
- You can hot plug a bridge network interface to a running virtual machine (VM). Hot plugging and hot unplugging is supported only for VMs created with OpenShift Virtualization 4.14 or later.
2.1.6. Bug fixes
- The mediated devices configuration API in the HyperConverged custom resource (CR) has been updated to improve consistency. The field that was previously named mediatedDevicesTypes is now named mediatedDeviceTypes to align with the naming convention used for the nodeMediatedDeviceTypes field. (BZ#2054863)
- Virtual machines created from common templates on a Single Node OpenShift (SNO) cluster no longer display a VMCannotBeEvicted alert when the cluster-level eviction strategy is None for SNO. (BZ#2092412)
- Windows 11 virtual machines now boot on clusters running in FIPS mode. (BZ#2089301)
- When you use two pods with different SELinux contexts, VMs with the ocs-storagecluster-cephfs storage class no longer fail to migrate. (BZ#2092271)
- If you stop a node on a cluster and then use the Node Health Check Operator to bring the node back up, connectivity to Multus is retained. (OCPBUGS-8398)
- When restoring a VM snapshot for storage whose binding mode is WaitForFirstConsumer, the restored PVCs no longer remain in the Pending state and the restore operation proceeds. (BZ#2149654)
2.1.7. Known issues
Monitoring
The Pod Disruption Budget (PDB) prevents pod disruptions for migratable virtual machine images. If the PDB detects pod disruption, then openshift-monitoring sends a PodDisruptionBudgetAtLimit alert every 60 minutes for virtual machine images that use the LiveMigrate eviction strategy. (BZ#2026733)
- As a workaround, silence alerts.
Networking
If your OpenShift Container Platform cluster uses OVN-Kubernetes as the default Container Network Interface (CNI) provider, you cannot attach a Linux bridge or bonding device to a host’s default interface because of a change in the host network topology of OVN-Kubernetes. (BZ#1885605)
- As a workaround, you can use a secondary network interface connected to your host, or switch to the OpenShift SDN default CNI provider.
- You cannot SSH into a VM when using the networkType: OVNKubernetes option in your install-config.yaml file. (BZ#2165895)
- You cannot run OpenShift Virtualization on a single-stack IPv6 cluster. (BZ#2193267)
When you update from OpenShift Container Platform 4.12 to a newer minor version, VMs that use the cnv-bridge Container Network Interface (CNI) fail to live migrate. (https://access.redhat.com/solutions/7069807)
- As a workaround, change the spec.config.type field in your NetworkAttachmentDefinition manifest from cnv-bridge to bridge before performing the update, as shown in the following example.
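A hedged sketch of a NetworkAttachmentDefinition after the change (the network name and bridge device are placeholders; the type field is the point of the workaround):
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: example-bridge-network   # placeholder name
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "example-bridge-network",
      "type": "bridge",
      "bridge": "br1"
    }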
Nodes
- Uninstalling OpenShift Virtualization does not remove the feature.node.kubevirt.io node labels created by OpenShift Virtualization. You must remove the labels manually. (CNV-22036)
- In a heterogeneous cluster with different compute nodes, virtual machines that have HyperV reenlightenment enabled cannot be scheduled on nodes that do not support timestamp-counter scaling (TSC) or have the appropriate TSC frequency. (BZ#2151169)
Storage
In some instances, multiple virtual machines can mount the same PVC in read-write mode, which might result in data corruption. (BZ#1992753)
- As a workaround, avoid using a single PVC in read-write mode with multiple VMs.
If you clone more than 100 VMs using the csi-clone cloning strategy, then the Ceph CSI might not purge the clones. Manually deleting the clones might also fail. (BZ#2055595)
- As a workaround, you can restart the ceph-mgr to purge the VM clones.
If you use Portworx as your storage solution on AWS and create a VM disk image, the created image might be smaller than expected due to the filesystem overhead being accounted for twice. (BZ#2237287)
- As a workaround, you can manually expand the Persistent Volume Claim (PVC) to increase the available space after the initial provisioning process completes.
If you simultaneously clone more than 1000 VMs using the provided DataSources in the openshift-virtualization-os-images namespace, it is possible that not all of the VMs will move to a running state. (BZ#2216038)
- As a workaround, deploy VMs in smaller batches.
- Live migration cannot be enabled for a virtual machine instance (VMI) after a hotplug volume has been added and removed. (BZ#2247593)
Virtualization
- Live migration fails if the VM name exceeds 47 characters. (CNV-61066)
OpenShift Virtualization links a service account token in use by a pod to that specific pod. OpenShift Virtualization implements a service account volume by creating a disk image that contains a token. If you migrate a VM, then the service account volume becomes invalid. (BZ#2037611)
- As a workaround, use user accounts rather than service accounts because user account tokens are not bound to a specific pod.
With the release of the RHSA-2023:3722 advisory, the TLS Extended Master Secret (EMS) extension (RFC 7627) is mandatory for TLS 1.2 connections on FIPS-enabled RHEL 9 systems. This is in accordance with FIPS-140-3 requirements. TLS 1.3 is not affected. (BZ#2157951)
Legacy OpenSSL clients that do not support EMS or TLS 1.3 now cannot connect to FIPS servers running on RHEL 9. Similarly, RHEL 9 clients in FIPS mode cannot connect to servers that only support TLS 1.2 without EMS. In practice, this means that these clients cannot connect to servers on RHEL 6, RHEL 7, and non-RHEL legacy operating systems, because the legacy 1.0.x versions of OpenSSL do not support EMS or TLS 1.3. For more information, see TLS Extension "Extended Master Secret" enforced with Red Hat Enterprise Linux 9.2.
As a workaround, upgrade legacy OpenSSL clients to a version that supports TLS 1.3 and configure OpenShift Virtualization to use TLS 1.3, with the Modern TLS security profile type, for FIPS mode.
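A hedged sketch of selecting the Modern profile in the HyperConverged CR (field layout per the tlsSecurityProfile API; only the relevant stanza is shown):
spec:
  tlsSecurityProfile:
    type: Modern
    modern: {}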
Web console
If you upgrade OpenShift Container Platform 4.13 to 4.14 without upgrading OpenShift Virtualization, the Virtualization pages of the web console crash. (OCPBUGS-22853)
You must upgrade the OpenShift Virtualization Operator to 4.14 manually or set your subscription approval strategy to "Automatic."
Chapter 3. Getting started
3.1. Getting started with OpenShift Virtualization
You can explore the features and functionalities of OpenShift Virtualization by installing and configuring a basic environment.
Cluster configuration procedures require cluster-admin permissions.
3.1.1. Planning and installing OpenShift Virtualization
Plan and install OpenShift Virtualization on an OpenShift Container Platform cluster:
Planning and installation resources
3.1.2. Creating and managing virtual machines
Create a virtual machine (VM):
Create a VM from a Red Hat image.
You can create a VM by using a Red Hat template or an instance type.
Important: Creating a VM from an instance type is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Create a VM from a custom image.
You can create a VM by importing a custom image from a container registry or a web page, by uploading an image from your local machine, or by cloning a persistent volume claim (PVC).
Connect a VM to a secondary network:
- Linux bridge network.
- Open Virtual Network (OVN)-Kubernetes secondary network.
Single Root I/O Virtualization (SR-IOV) network.
Note: VMs are connected to the pod network by default.
Connect to a VM:
- Connect to the serial console or VNC console of a VM.
- Connect to a VM by using SSH.
- Connect to the desktop viewer for Windows VMs.
Manage a VM:
3.1.3. Next steps
3.2. Using the virtctl and libguestfs CLI tools
You can manage OpenShift Virtualization resources by using the virtctl command-line tool.
You can access and modify virtual machine (VM) disk images by using the libguestfs command-line tool. You deploy libguestfs by using the virtctl guestfs command.
3.2.1. Installing virtctl
To install virtctl on RHEL 9, Linux, Windows, or macOS, you download the virtctl binary file.
To install virtctl on RHEL 8, you enable the OpenShift Virtualization repository and then install the kubevirt-virtctl package.
3.2.1.1. Installing the virtctl binary on RHEL 9, Linux, Windows, or macOS
You can download the virtctl binary from the OpenShift Container Platform web console.
Procedure
- Navigate to the Virtualization → Overview page in the web console.
- Click the Download virtctl link to download the virtctl binary for your operating system.
- Install virtctl:
For RHEL 9 and other Linux operating systems:
Decompress the archive file:
$ tar -xvf <virtctl-version-distribution.arch>.tar.gz
Run the following command to make the virtctl binary executable:
$ chmod +x <path/virtctl-file-name>
Move the virtctl binary to a directory in your PATH environment variable.
You can check your path by running the following command:
$ echo $PATH
Set the KUBECONFIG environment variable:
$ export KUBECONFIG=/home/<user>/clusters/current/auth/kubeconfig
For Windows:
- Decompress the archive file.
- Navigate the extracted folder hierarchy and double-click the virtctl executable file to install the client.
Move the virtctl binary to a directory in your PATH environment variable.
You can check your path by running the following command:
C:\> path
For macOS:
- Decompress the archive file.
Move the virtctl binary to a directory in your PATH environment variable.
You can check your path by running the following command:
echo $PATH
3.2.1.2. Installing the virtctl RPM on RHEL 8
You can install the virtctl RPM package on Red Hat Enterprise Linux (RHEL) 8 by enabling the OpenShift Virtualization repository and installing the kubevirt-virtctl package.
Prerequisites
- Each host in your cluster must be registered with Red Hat Subscription Manager (RHSM) and have an active OpenShift Container Platform subscription.
Procedure
Enable the OpenShift Virtualization repository by using the subscription-manager CLI tool to run the following command:
# subscription-manager repos --enable cnv-4.14-for-rhel-8-x86_64-rpms
Install the kubevirt-virtctl package by running the following command:
# yum install kubevirt-virtctl
3.2.2. virtctl commands
The virtctl client is a command-line utility for managing OpenShift Virtualization resources.
The virtual machine (VM) commands also apply to virtual machine instances (VMIs) unless otherwise specified.
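For example, a typical flow with a few of the commands from the tables below (the VM name and namespace are placeholders):
$ virtctl start example-vm -n example-namespace
$ virtctl console example-vm -n example-namespace
$ virtctl stop example-vm -n example-namespace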
3.2.2.1. virtctl information commands
You use virtctl information commands to view information about the virtctl client.
| Command | Description |
|---|---|
|
| View the
|
|
| View a list of
|
|
| View a list of options for a specific command. |
|
| View a list of global command options for any
|
3.2.2.2. VM information commands
You can use virtctl to view information about VMs and VMIs.
| Command | Description |
|---|---|
|
| View the file systems available on a guest machine. |
|
| View information about the operating systems on a guest machine. |
|
| View the logged-in users on a guest machine. |
3.2.2.3. VM management commands
You use the following virtctl commands to manage VMs.
| Command | Description |
|---|---|
|
| Create a
|
|
| Start a VM. |
|
| Start a VM in a paused state. This option enables you to interrupt the boot process from the VNC console. |
|
| Stop a VM. |
|
| Force stop a VM. This option might cause data inconsistency or data loss. |
|
| Pause a VM. The machine state is kept in memory. |
|
| Unpause a VM. |
|
| Migrate a VM. |
|
| Cancel a VM migration. |
|
| Restart a VM. |
|
| Create an
|
|
| Create a
|
3.2.2.4. VM connection commands
You use the following virtctl commands to connect to a VM.
| Command | Description |
|---|---|
|
| Connect to the serial console of a VM. |
|
| Create a service that forwards a designated port of a VM and expose the service on the specified port of the node. Example:
|
|
| Copy a file from your machine to a VM. This command uses the private key of an SSH key pair. The VM must be configured with the public key. |
|
| Copy a file from a VM to your machine. This command uses the private key of an SSH key pair. The VM must be configured with the public key. |
|
| Open an SSH connection with a VM. This command uses the private key of an SSH key pair. The VM must be configured with the public key. |
|
| Connect to the VNC console of a VM. You must have
|
|
| Display the port number and connect manually to a VM by using any viewer through the VNC connection. |
|
| Specify a port number to run the proxy on the specified port, if that port is available. If a port number is not specified, the proxy runs on a random port. |
3.2.2.5. VM export commands
Use virtctl vmexport commands to create, download, or delete a volume exported from a VM, a VM snapshot, or a persistent volume claim (PVC).
| Command | Description |
|---|---|
|
| Create a
|
|
| Delete a
|
|
| Download the volume defined in a
Optional:
|
|
| Create a
|
|
| Retrieve the manifest for an existing export. The manifest does not include the header secret. |
|
| Create a VM export for a VM example, and retrieve the manifest. The manifest does not include the header secret. |
|
| Create a VM export for a VM snapshot example, and retrieve the manifest. The manifest does not include the header secret. |
|
| Retrieve the manifest for an existing export. The manifest includes the header secret. |
|
| Retrieve the manifest for an existing export in json format. The manifest does not include the header secret. |
|
| Retrieve the manifest for an existing export. The manifest includes the header secret and writes it to the file specified. |
3.2.2.6. VM memory dump commands
You can use the virtctl memory-dump command to output a VM memory dump on a PVC. You can specify an existing PVC or use the --create-claim flag to create a new PVC.
Prerequisites
- The PVC volume mode must be FileSystem.
FileSystem The PVC must be large enough to contain the memory dump.
The formula for calculating the PVC size is (VMMemorySize + 100Mi) * FileSystemOverhead, where 100Mi is the memory dump overhead. A worked example follows these prerequisites.
You must enable the hot plug feature gate in the HyperConverged custom resource by running the following command:
$ oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv \ --type json -p '[{"op": "add", "path": "/spec/featureGates", \ "value": "HotplugVolumes"}]'
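As a rough worked example of the sizing formula above, assuming FileSystemOverhead resolves to a multiplicative factor of about 1.055 (the CDI default overhead of 5.5%): a VM with 32Gi of memory needs roughly (32Gi + 100Mi) x 1.055 ≈ 33.9Gi, so a 34Gi PVC is a safe choice.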
Downloading the memory dump
You must use the virtctl vmexport download command to download the memory dump:
$ virtctl vmexport download <vmexport_name> --vm|pvc=<object_name> \
--volume=<volume_name> --output=<output_file>
| Command | Description |
|---|---|
|
| Save the memory dump of a VM on a PVC. The memory dump status is displayed in the
Optional:
|
|
| Rerun the
This command overwrites the previous memory dump. |
|
| Remove a memory dump. You must remove a memory dump manually if you want to change the target PVC. This command removes the association between the VM and the PVC, so that the memory dump is not displayed in the
|
3.2.2.7. Hot plug and hot unplug commands
You use virtctl to hot plug and hot unplug resources to or from a running VM.
| Command | Description |
|---|---|
|
| Hot plug a data volume or persistent volume claim (PVC). Optional:
|
|
| Hot unplug a virtual disk. |
|
| Hot plug a Linux bridge network interface. |
|
| Hot unplug a Linux bridge network interface. |
3.2.2.8. Image upload commands
You use the virtctl image-upload commands to upload a VM image to a data volume.
| Command | Description |
|---|---|
|
| Upload a VM image to a data volume that already exists. |
|
| Upload a VM image to a new data volume of a specified requested size. |
3.2.3. Deploying libguestfs by using virtctl
You can use the virtctl guestfs command to deploy an interactive container with libguestfs-tools and a persistent volume claim (PVC) attached to it.
Procedure
To deploy a container with libguestfs-tools, mount the PVC, and attach a shell to it, run the following command:
$ virtctl guestfs -n <namespace> <pvc_name>
Important: The <pvc_name> argument is required. If you do not include it, an error message appears.
3.2.3.1. Libguestfs and virtctl guestfs commands
libguestfs is a set of tools for accessing and modifying virtual machine (VM) disk images.
You can also use the virtctl guestfs command to deploy an interactive shell and then run virt- commands against the attached PVC.
| Command | Description |
|---|---|
|
| Edit a file interactively in your terminal. |
|
| Inject an ssh key into the guest and create a login. |
|
| See how much disk space is used by a VM. |
|
| See the full list of all RPMs installed on a guest by creating an output file containing the full list. |
|
| Display the output file list of all RPMs created using the
|
|
| Seal a virtual machine disk image to be used as a template. |
By default, virtctl guestfs creates a session with everything needed to manage the PVC. The command also supports the following flag options:
| Flag Option | Description |
|---|---|
|
| Provides help for
|
|
| To use a PVC from a specific namespace. If you do not use the
If you do not include a
|
|
| Lists the
You can configure the container to use a custom image by using the
|
|
| Indicates that
By default,
If a cluster does not have any
If not set, the
|
|
| Shows the pull policy for the
You can also overwrite the image’s pull policy by setting the
|
The command also checks if a PVC is in use by another pod, in which case an error message appears. However, after the libguestfs-tools pod starts, the setup cannot prevent a new pod from using the same PVC. You must verify that there are no active virtctl guestfs pods before starting a VM that accesses the same PVC.
The virtctl guestfs command accepts only a single PVC attached to the interactive pod.
3.3. Web console overview
The Virtualization section of the OpenShift Container Platform web console contains the following pages for managing and monitoring your OpenShift Virtualization environment.
| Page | Description |
|---|---|
| Manage and monitor the OpenShift Virtualization environment. | |
| Create virtual machines from a catalog of templates. | |
| Create and manage virtual machines. | |
| Create and manage templates. | |
| Create and manage virtual machine instance types. | |
| Create and manage virtual machine preferences. | |
| Create and manage DataSources for bootable volumes. | |
| Create and manage migration policies for workloads. |
| Icon | Description |
|---|---|
|
| Edit icon |
|
| Link icon |
3.3.1. Overview page
The Overview page displays resources, metrics, migration progress, and cluster-level settings.
Example 3.1. Overview page
| Element | Description |
|---|---|
| Download virtctl | Download the virtctl command-line tool. |
| Resources, usage, alerts, and status. | |
| Top consumers of CPU, memory, and storage resources. | |
| Status of live migrations. | |
| The Settings tab contains the Cluster tab and the User tab. | |
| Settings → Cluster tab | OpenShift Virtualization version, update status, live migration, templates project, preview features, and load balancer service settings. |
| Settings → User tab | Authorized SSH keys, user permissions, and welcome information settings. |
3.3.1.1. Overview tab
The Overview tab displays resources, usage, alerts, and status.
Example 3.2. Overview tab
| Element | Description |
|---|---|
| Getting started resources card |
|
| Memory tile | Memory usage, with a chart showing the last 7 days' trend. |
| Storage tile | Storage usage, with a chart showing the last 7 days' trend. |
| VirtualMachines tile | Number of virtual machines, with a chart showing the last 7 days' trend. |
| vCPU usage tile | vCPU usage, with a chart showing the last 7 days' trend. |
| VirtualMachine statuses tile | Number of virtual machines, grouped by status. |
| Alerts tile | OpenShift Virtualization alerts, grouped by severity. |
| VirtualMachines per resource chart | Number of virtual machines created from templates and instance types. |
3.3.1.2. Top consumers tab
The Top consumers tab displays the top consumers of CPU, memory, and storage.
Example 3.3. Top consumers tab
| Element | Description |
|---|---|
|
View virtualization dashboard | Link to Observe → Dashboards, which displays the top consumers for OpenShift Virtualization. |
| Time period list | Select a time period to filter the results. |
| Top consumers list | Select the number of top consumers to filter the results. |
| CPU chart | Virtual machines with the highest CPU usage. |
| Memory chart | Virtual machines with the highest memory usage. |
| Memory swap traffic chart | Virtual machines with the highest memory swap traffic. |
| vCPU wait chart | Virtual machines with the highest vCPU wait periods. |
| Storage throughput chart | Virtual machines with the highest storage throughput usage. |
| Storage IOPS chart | Virtual machines with the highest storage input/output operations per second usage. |
3.3.1.3. Migrations tab
The Migrations tab displays the status of virtual machine migrations.
Example 3.4. Migrations tab
| Element | Description |
|---|---|
| Time period list | Select a time period to filter virtual machine migrations. |
| VirtualMachineInstanceMigrations information table | List of virtual machine migrations. |
3.3.1.4. Settings tab
The Settings tab displays cluster-wide settings.
Example 3.5. Tabs on the Settings tab
| Tab | Description |
|---|---|
| OpenShift Virtualization version and update status, live migration, templates project, preview features, and load balancer service settings. | |
| Authorized SSH key management, user permissions, and welcome information settings. |
3.3.1.4.1. Cluster tab
The Cluster tab displays the OpenShift Virtualization version and update status. You configure preview features, live migration, and other settings on the Cluster tab.
Example 3.6. Cluster tab
| Element | Description |
|---|---|
| Installed version | OpenShift Virtualization version. |
| Update status | OpenShift Virtualization update status. |
| Channel | OpenShift Virtualization update channel. |
| Preview features section | Expand this section to manage preview features. Preview features are disabled by default and must not be enabled in production environments. |
| Live Migration section | Expand this section to configure live migration settings. |
| Live Migration → Max. migrations per cluster field | Select the maximum number of live migrations per cluster. |
| Live Migration → Max. migrations per node field | Select the maximum number of live migrations per node. |
| Live Migration → Live migration network list | Select a dedicated secondary network for live migration. |
| Automatic subscription of new RHEL VirtualMachines section | Expand this section to enable automatic subscription for Red Hat Enterprise Linux (RHEL) virtual machines. To enable this feature, you need cluster administrator permissions, an organization ID, and an activation key. |
| LoadBalancer section | Expand this section to enable the creation of load balancer services for SSH access to virtual machines. The cluster must have a load balancer configured. |
| Template project section | Expand this section to select a project for Red Hat templates. The default project is openshift.
To store Red Hat templates in multiple projects, clone the template and then select a project for the cloned template. |
3.3.1.4.2. User tab
You view user permissions and manage authorized SSH keys and welcome information on the User tab.
Example 3.7. User tab
| Element | Description |
|---|---|
| Manage SSH keys section | Expand this section to add authorized SSH keys to a project. The keys are added automatically to all virtual machines that you subsequently create in the selected project. |
| Permissions section | Expand this section to view cluster-wide user permissions. |
| Welcome information section | Expand this section to show or hide the Welcome information dialog. |
3.3.2. Catalog page
You create a virtual machine from a template or instance type on the Catalog page.
Example 3.8. Catalog page
| Element | Description |
|---|---|
| Displays a catalog of templates for creating a virtual machine. | |
| Displays bootable volumes and instance types for creating a virtual machine. |
3.3.2.1. Template catalog tab
You select a template on the Template catalog tab to create a virtual machine.
Example 3.9. Template catalog tab
| Element | Description |
|---|---|
| Template project list | Select the project in which Red Hat templates are located. By default, Red Hat templates are stored in the openshift project. |
| All items|Default templates | Click All items to display all available templates. |
| Boot source available checkbox | Select the checkbox to display templates with an available boot source. |
| Operating system checkboxes | Select checkboxes to display templates with selected operating systems. |
| Workload checkboxes | Select checkboxes to display templates with selected workloads. |
| Search field | Search templates by keyword. |
| Template tiles | Click a template tile to view template details and to create a virtual machine. |
3.3.2.2. InstanceTypes tab
You create a virtual machine from an instance type on the InstanceTypes tab.
Creating a virtual machine from an instance type is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
| Element | Description |
|---|---|
| Volumes project field | Project in which bootable volumes are stored. The default is openshift-virtualization-os-images. |
| Add volume button | Click to upload a new volume or to use an existing persistent volume claim. |
| Filter field | Filter boot sources by operating system or resource. |
| Search field | Search boot sources by name. |
| Manage columns icon | Select up to 9 columns to display in the table. |
| Volume table | Select a bootable volume for your virtual machine. |
| Red Hat provided tab | Select an instance type provided by Red Hat. |
| User provided tab | Select an instance type that you created on the InstanceType page. |
| VirtualMachine details pane | Displays the virtual machine settings. |
| Name field | Optional: Enter the virtual machine name. |
| SSH key name | Click the edit icon to add a public SSH key. |
| Start this VirtualMachine after creation checkbox | Clear this checkbox to prevent the virtual machine from starting automatically. |
| Create VirtualMachine button | Creates a virtual machine. |
| YAML & CLI button | Displays the YAML configuration file and the
|
3.3.3. VirtualMachines page
You create and manage virtual machines on the VirtualMachines page.
Example 3.10. VirtualMachines page
| Element | Description |
|---|---|
| Create button | Create a virtual machine from a template, volume, or YAML configuration file. |
| Filter field | Filter virtual machines by status, template, operating system, or node. |
| Search field | Search for virtual machines by name or by label. |
| Manage columns icon | Select up to 9 columns to display in the table. The Namespace column is only displayed when All Projects is selected from the Projects list. |
| Virtual machines table | List of virtual machines.
Click the actions menu
Click a virtual machine to navigate to the VirtualMachine details page. |
3.3.3.1. VirtualMachine details page
You configure a virtual machine on the VirtualMachine details page.
Example 3.11. VirtualMachine details page
| Element | Description |
|---|---|
| Actions menu | Click the Actions menu to select Stop, Restart, Pause, Clone, Migrate, Copy SSH command, Edit labels, Edit annotations, or Delete. If you select Stop, Force stop replaces Stop in the action menu. Use Force stop to initiate an immediate shutdown if the operating system becomes unresponsive. |
| Resource usage, alerts, disks, and devices. | |
| Virtual machine details and configurations. | |
| Memory, CPU, storage, network, and migration metrics. | |
| Virtual machine YAML configuration file. | |
| Contains the Disks, Network interfaces, Scheduling, Environment, and Scripts tabs. | |
| Disks. | |
| Network interfaces. | |
| Scheduling a virtual machine to run on specific nodes. | |
| Config map, secret, and service account management. | |
| Cloud-init settings, authorized SSH key and dynamic key injection for Linux virtual machines, Sysprep settings for Windows virtual machines. | |
| Virtual machine event stream. | |
| Console session management. | |
| Snapshot management. | |
| Status conditions and volume snapshot status. |
3.3.3.1.1. Overview tab
The Overview tab displays resource usage, alerts, and configuration information.
Example 3.12. Overview tab
| Element | Description |
|---|---|
| Details tile | General virtual machine information. |
| Utilization tile | CPU, Memory, Storage, and Network transfer charts. By default, Network transfer displays the sum of all networks. To view the breakdown for a specific network, click Breakdown by network. |
| Hardware devices tile | GPU and host devices. |
| Alerts tile | OpenShift Virtualization alerts, grouped by severity. |
| Snapshots tile |
Take snapshot |
| Network interfaces tile | Network interfaces table. |
| Disks tile | Disks table. |
3.3.3.1.2. Details tab
You view information about the virtual machine and edit labels, annotations, and other metadata and on the Details tab.
Example 3.13. Details tab
| Element | Description |
|---|---|
| YAML switch | Set to ON to view your live changes in the YAML configuration file. |
| Name | Virtual machine name. |
| Namespace | Virtual machine namespace or project. |
| Labels | Click the edit icon to edit the labels. |
| Annotations | Click the edit icon to edit the annotations. |
| Description | Click the edit icon to enter a description. |
| Operating system | Operating system name. |
| CPU|Memory | Click the edit icon to edit the CPU|Memory request. Restart the virtual machine to apply the change. The number of CPUs is calculated by using the following formula: sockets × cores × threads. |
| Machine type | Machine type. |
| Boot mode | Click the edit icon to edit the boot mode. Restart the virtual machine to apply the change. |
| Start in pause mode | Click the edit icon to enable this setting. Restart the virtual machine to apply the change. |
| Template | Name of the template used to create the virtual machine. |
| Created at | Virtual machine creation date. |
| Owner | Virtual machine owner. |
| Status | Virtual machine status. |
| Pod | virt-launcher pod name. |
| VirtualMachineInstance | Virtual machine instance name. |
| Boot order | Click the edit icon to select a boot source. Restart the virtual machine to apply the change. |
| IP address | IP address of the virtual machine. |
| Hostname | Hostname of the virtual machine. Restart the virtual machine to apply the change. |
| Time zone | Time zone of the virtual machine. |
| Node | Node on which the virtual machine is running. |
| Workload profile | Click the edit icon to edit the workload profile. |
| SSH access | These settings apply to Linux. |
| SSH using virtctl | Click the copy icon to copy the virtctl ssh command to the clipboard. |
| SSH service type | Select SSH over LoadBalancer. After you create a service, the SSH command is displayed. Click the copy icon to copy the command to the clipboard. |
| GPU devices | Click the edit icon to add a GPU device. Restart the virtual machine to apply the change. |
| Host devices | Click the edit icon to add a host device. Restart the virtual machine to apply the change. |
| Headless mode | Click the edit icon to set headless mode to ON and to disable VNC console. Restart the virtual machine to apply the change. |
| Services | Displays a list of services if QEMU guest agent is installed. |
| Active users | Displays a list of active users if QEMU guest agent is installed. |
3.3.3.1.3. Metrics tab
The Metrics tab displays memory, CPU, storage, network, and migration usage charts.
Example 3.14. Metrics tab
| Element | Description |
|---|---|
| Time range list | Select a time range to filter the results. |
| Virtualization dashboard | Link to the Workloads tab of the current project. |
| Utilization | Memory and CPU charts. |
| Storage | Storage total read/write and Storage IOPS total read/write charts. |
| Network | Network in, Network out, Network bandwidth, and Network interface charts. Select All networks or a specific network from the Network interface list. |
| Migration | Migration and KV data transfer rate charts. |
3.3.3.1.4. YAML tab
You configure the virtual machine by editing the YAML file on the YAML tab.
Example 3.15. YAML tab
| Element | Description |
|---|---|
| Save button | Save changes to the YAML file. |
| Reload button | Discard your changes and reload the YAML file. |
| Cancel button | Exit the YAML tab. |
| Download button | Download the YAML file to your local machine. |
3.3.3.1.5. Configuration tab
You configure scheduling, network interfaces, disks, and other options on the Configuration tab.
Example 3.16. Tabs on the Configuration tab
| Element | Description |
|---|---|
| YAML switch | Set to ON to view your live changes in the YAML configuration file. |
| Disks. | |
| Network interfaces. | |
| Scheduling and resource requirements. | |
| Config maps, secrets, and service accounts. | |
| Cloud-init settings, authorized SSH key for Linux virtual machines, Sysprep answer file for Windows virtual machines. |
3.3.3.1.5.1. Disks tab
You manage disks on the Disks tab.
Example 3.17. Disks tab
| Setting | Description |
|---|---|
| Add disk button | Add a disk to the virtual machine. |
| Filter field | Filter by disk type. |
| Search field | Search for a disk by name. |
| Mount Windows drivers disk checkbox | Select to mount a virtio-win container disk that contains the Windows drivers. |
| Disks table | List of virtual machine disks. Click the actions menu to select an action. |
| File systems table | List of virtual machine file systems. |
3.3.3.1.5.2. Network interfaces tab
You manage network interfaces on the Network interfaces tab.
Example 3.18. Network interfaces tab
| Setting | Description |
|---|---|
| Add network interface button | Add a network interface to the virtual machine. |
| Filter field | Filter by interface type. |
| Search field | Search for a network interface by name or by label. |
| Network interface table | List of network interfaces. Click the actions menu to select an action. |
3.3.3.1.5.3. Scheduling tab
You configure virtual machines to run on specific nodes on the Scheduling tab.
Restart the virtual machine to apply changes.
Example 3.19. Scheduling tab
| Setting | Description |
|---|---|
| Node selector | Click the edit icon to add a label to specify qualifying nodes. |
| Tolerations | Click the edit icon to add a toleration to specify qualifying nodes. |
| Affinity rules | Click the edit icon to add an affinity rule. |
| Descheduler switch | Enable or disable the descheduler. The descheduler evicts a running pod so that the pod can be rescheduled onto a more suitable node. This field is disabled if the virtual machine cannot be live migrated. |
| Dedicated resources | Click the edit icon to select Schedule this workload with dedicated resources (guaranteed policy). |
| Eviction strategy | Click the edit icon to select LiveMigrate as the virtual machine eviction strategy. |
3.3.3.1.5.4. Environment tab
You manage config maps, secrets, and service accounts on the Environment tab.
Example 3.20. Environment tab
| Element | Description |
|---|---|
| Add Config Map, Secret or Service Account | Click the link and select a config map, secret, or service account from the resource list. |
3.3.3.1.5.5. Scripts tab
You manage cloud-init settings, add SSH keys, or configure Sysprep for Windows virtual machines on the Scripts tab.
Restart the virtual machine to apply changes.
Example 3.21. Scripts tab
| Element | Description |
|---|---|
| Cloud-init | Click the edit icon to edit the cloud-init settings. |
| Authorized SSH key | Click the edit icon to add a public SSH key to a Linux virtual machine. The key is added as a cloud-init data source at first boot. |
| Dynamic SSH key injection switch | Set Dynamic SSH key injection to on to enable dynamic public SSH key injection. Then, you can add or revoke the key at runtime. Dynamic SSH key injection is only supported by Red Hat Enterprise Linux (RHEL) 9. If you manually disable this setting, the virtual machine inherits the SSH key settings of the image from which it was created. |
| Sysprep | Click the edit icon to upload an autounattend.xml or unattend.xml answer file. |
3.3.3.1.6. Events tab
The Events tab displays a list of virtual machine events.
3.3.3.1.7. Console tab
You can open a console session to the virtual machine on the Console tab.
Example 3.22. Console tab
| Element | Description |
|---|---|
| Guest login credentials section | Expand Guest login credentials to view the credentials created with cloud-init. |
| Console list | Select VNC console or Serial console. The Desktop viewer option is displayed for Windows virtual machines. You must install an RDP client on a machine on the same network. |
| Send key list | Select a key-stroke combination to send to the console. |
| Disconnect button | Disconnect the console connection. You must manually disconnect the console connection if you open a new console session. Otherwise, the first console session continues to run in the background. |
| Paste button | Paste a string from your clipboard to the VNC console. |
3.3.3.1.8. Snapshots tab
You create snapshots and restore virtual machines from snapshots on the Snapshots tab.
Example 3.23. Snapshots tab
| Element | Description |
|---|---|
| Take snapshot button | Create a snapshot. |
| Filter field | Filter snapshots by status. |
| Search field | Search for snapshots by name or by label. |
| Snapshot table | List of snapshots. Click the snapshot name to edit the labels or annotations. Click the actions menu to select an action. |
3.3.3.1.9. Diagnostics tab
You view the status conditions and volume snapshot status on the Diagnostics tab.
Example 3.24. Diagnostics tab
| Element | Description |
|---|---|
| Status conditions table | Display a list of conditions that are reported for the virtual machine. |
| Filter field | Filter status conditions by category and condition. |
| Search field | Search status conditions by reason. |
| Manage columns icon | Select up to 9 columns to display in the table. |
| Volume snapshot status table | List of volumes, their snapshot enablement status, and reason. |
3.3.4. Templates page
You create, edit, and clone virtual machine templates on the VirtualMachine Templates page.
You cannot edit a Red Hat template. However, you can clone a Red Hat template and edit it to create a custom template.
Example 3.25. VirtualMachine Templates page
| Element | Description |
|---|---|
| Create Template button | Create a template by editing a YAML configuration file. |
| Filter field | Filter templates by type, boot source, template provider, or operating system. |
| Search field | Search for templates by name or by label. |
| Manage columns icon | Select up to 9 columns to display in the table. The Namespace column is only displayed when All Projects is selected from the Projects list. |
| Virtual machine templates table | List of virtual machine templates. Click the actions menu to select an action. |
3.3.4.1. Template details page
You view template settings and edit custom templates on the Template details page.
Example 3.26. Template details page
| Element | Description |
|---|---|
| YAML switch | Set to ON to view your live changes in the YAML configuration file. |
| Actions menu | Click the Actions menu to select Edit, Clone, Edit boot source, Edit boot source reference, Edit labels, Edit annotations, or Delete. |
| Details tab | Template settings and configurations. |
| YAML tab | YAML configuration file. |
| Scheduling tab | Scheduling configurations. |
| Network interfaces tab | Network interface management. |
| Disks tab | Disk management. |
| Scripts tab | Cloud-init, SSH key, and Sysprep management. |
| Parameters tab | Name and cloud user password management. |
3.3.4.1.1. Details tab
You configure a custom template on the Details tab.
Example 3.27. Details tab
| Element | Description |
|---|---|
| Name | Template name. |
| Namespace | Template namespace. |
| Labels | Click the edit icon to edit the labels. |
| Annotations | Click the edit icon to edit the annotations. |
| Display name | Click the edit icon to edit the display name. |
| Description | Click the edit icon to enter a description. |
| Operating system | Operating system name. |
| CPU|Memory | Click the edit icon to edit the CPU|Memory request. The number of CPUs is calculated by using the following formula: sockets × cores × threads. |
| Machine type | Template machine type. |
| Boot mode | Click the edit icon to edit the boot mode. |
| Base template | Name of the base template used to create this template. |
| Created at | Template creation date. |
| Owner | Template owner. |
| Boot order | Template boot order. |
| Boot source | Boot source availability. |
| Provider | Template provider. |
| Support | Template support level. |
| GPU devices | Click the edit icon to add a GPU device. |
| Host devices | Click the edit icon to add a host device. |
| Headless mode | Click the edit icon to set headless mode to ON and to disable VNC console. |
3.3.4.1.2. YAML tab
You configure a custom template by editing the YAML file on the YAML tab.
Example 3.28. YAML tab
| Element | Description |
|---|---|
| Save button | Save changes to the YAML file. |
| Reload button | Discard your changes and reload the YAML file. |
| Cancel button | Exit the YAML tab. |
| Download button | Download the YAML file to your local machine. |
3.3.4.1.3. Scheduling tab
You configure scheduling on the Scheduling tab.
Example 3.29. Scheduling tab
| Setting | Description |
|---|---|
| Node selector | Click the edit icon to add a label to specify qualifying nodes. |
| Tolerations | Click the edit icon to add a toleration to specify qualifying nodes. |
| Affinity rules | Click the edit icon to add an affinity rule. |
| Descheduler switch | Enable or disable the descheduler. The descheduler evicts a running pod so that the pod can be rescheduled onto a more suitable node. |
| Dedicated resources | Click the edit icon to select Schedule this workload with dedicated resources (guaranteed policy). |
| Eviction strategy | Click the edit icon to select LiveMigrate as the virtual machine eviction strategy. |
3.3.4.1.4. Network interfaces tab
You manage network interfaces on the Network interfaces tab.
Example 3.30. Network interfaces tab
| Setting | Description |
|---|---|
| Add network interface button | Add a network interface to the template. |
| Filter field | Filter by interface type. |
| Search field | Search for a network interface by name or by label. |
| Network interface table | List of network interfaces. Click the actions menu to select an action. |
3.3.4.1.5. Disks tab
You manage disks on the Disks tab.
Example 3.31. Disks tab
| Setting | Description |
|---|---|
| Add disk button | Add a disk to the template. |
| Filter field | Filter by disk type. |
| Search field | Search for a disk by name. |
| Disks table | List of template disks. Click the actions menu to select an action. |
3.3.4.1.6. Scripts tab
You manage the cloud-init settings, SSH keys, and Sysprep answer files on the Scripts tab.
Example 3.32. Scripts tab
| Element | Description |
|---|---|
| Cloud-init | Click the edit icon to edit the cloud-init settings. |
| Authorized SSH key | Click the edit icon to create a new secret or to attach an existing secret to a Linux virtual machine. |
| Sysprep | Click the edit icon to upload an autounattend.xml or unattend.xml answer file. |
3.3.4.1.7. Parameters tab
You edit selected template settings on the Parameters tab.
Example 3.33. Parameters tab
| Element | Description |
|---|---|
| NAME | Set the name parameters for a virtual machine created from this template. |
| CLOUD_USER_PASSWORD | Set the cloud user password parameters for a virtual machine created from this template. |
3.3.5. InstanceTypes page
You view and manage virtual machine instance types on the InstanceTypes page.
Example 3.34. VirtualMachineClusterInstancetypes page
| Element | Description |
|---|---|
| Create button | Create an instance type by editing a YAML configuration file. |
| Search field | Search for an instance type by name or by label. |
| Manage columns icon | Select up to 9 columns to display in the table. The Namespace column is only displayed when All Projects is selected from the Projects list. |
| Instance types table | List of instance types. Click the actions menu to select an action. |
Click an instance type to view the VirtualMachineClusterInstancetypes details page.
3.3.5.1. VirtualMachineClusterInstancetypes details page
You configure an instance type on the VirtualMachineClusterInstancetypes details page.
Example 3.35. VirtualMachineClusterInstancetypes details page
| Element | Description |
|---|---|
| Details tab | Configure an instance type by editing a form. |
| YAML tab | Configure an instance type by editing a YAML configuration file. |
| Actions menu | Select Edit labels, Edit annotations, Edit VirtualMachineClusterInstancetype, or Delete VirtualMachineClusterInstancetype. |
3.3.5.1.1. Details tab
You configure an instance type by editing a form on the Details tab.
Example 3.36. Details tab
| Element | Description |
|---|---|
| Name | VirtualMachineClusterInstancetype name. |
| Labels | Click the edit icon to edit the labels. |
| Annotations | Click the edit icon to edit the annotations. |
| Created at | Instance type creation date. |
| Owner | Instance type owner. |
3.3.5.1.2. YAML tab
You configure an instance type by editing the YAML file on the YAML tab.
Example 3.37. YAML tab
| Element | Description |
|---|---|
| Save button | Save changes to the YAML file. |
| Reload button | Discard your changes and reload the YAML file. |
| Cancel button | Exit the YAML tab. |
| Download button | Download the YAML file to your local machine. |
3.3.6. Preferences page
You view and manage virtual machine preferences on the Preferences page.
Example 3.38. VirtualMachineClusterPreferences page
| Element | Description |
|---|---|
| Create button | Create a preference by editing a YAML configuration file. |
| Search field | Search for a preference by name or by label. |
| Manage columns icon | Select up to 9 columns to display in the table. The Namespace column is only displayed when All Projects is selected from the Projects list. |
| Preferences table | List of preferences. Click the actions menu to select an action. |
Click a preference to view the VirtualMachineClusterPreference details page.
3.3.6.1. VirtualMachineClusterPreference details page
You configure a preference on the VirtualMachineClusterPreference details page.
Example 3.39. VirtualMachineClusterPreference details page
| Element | Description |
|---|---|
| Details tab | Configure a preference by editing a form. |
| YAML tab | Configure a preference by editing a YAML configuration file. |
| Actions menu | Select Edit labels, Edit annotations, Edit VirtualMachineClusterPreference, or Delete VirtualMachineClusterPreference. |
3.3.6.1.1. Details tab
You configure a preference by editing a form on the Details tab.
Example 3.40. Details tab
| Element | Description |
|---|---|
| Name | VirtualMachineClusterPreference name. |
| Labels | Click the edit icon to edit the labels. |
| Annotations | Click the edit icon to edit the annotations. |
| Created at | Preference creation date. |
| Owner | Preference owner. |
3.3.6.1.2. YAML tab
You configure a preference by editing the YAML file on the YAML tab.
Example 3.41. YAML tab
| Element | Description |
|---|---|
| Save button | Save changes to the YAML file. |
| Reload button | Discard your changes and reload the YAML file. |
| Cancel button | Exit the YAML tab. |
| Download button | Download the YAML file to your local machine. |
3.3.7. Bootable volumes page
You view and manage available bootable volumes on the Bootable volumes page.
Example 3.42. Bootable volumes page
| Element | Description |
|---|---|
| Add volume button | Add a bootable volume by completing a form or by editing a YAML configuration file. |
| Filter field | Filter bootable volumes by operating system and resource type. |
| Search field | Search for bootable volumes by name or by label. |
| Manage columns icon | Select up to 9 columns to display in the table. The Namespace column is only displayed when All Projects is selected from the Projects list. |
| Bootable volumes table | List of bootable volumes. Click the actions menu to select an action. |
Click a bootable volume to view the PersistentVolumeClaim details page.
3.3.7.1. PersistentVolumeClaim details page
You configure the persistent volume claim (PVC) of a bootable volume on the PersistentVolumeClaim details page.
Example 3.43. PersistentVolumeClaim details page
| Element | Description |
|---|---|
| Details tab | Configure the PVC by editing a form. |
| YAML tab | Configure the PVC by editing a YAML configuration file. |
| Events tab | The Events tab displays a list of PVC events. |
| VolumeSnapshots tab | The VolumeSnapshots tab displays a list of volume snapshots. |
| Actions menu | Select Expand PVC, Create snapshot, Clone PVC, Edit labels, Edit annotations, Edit PersistentVolumeClaim or Delete PersistentVolumeClaim. |
3.3.7.1.1. Details tab
You configure the persistent volume claim (PVC) of the bootable volume by editing a form on the Details tab.
Example 3.44. Details tab
| Element | Description |
|---|---|
| Name | PVC name. |
| Namespace | PVC namespace. |
| Labels | Click the edit icon to edit the labels. |
| Annotations | Click the edit icon to edit the annotations. |
| Created at | PVC creation date. |
| Owner | PVC owner. |
| Status | Status of the PVC, for example, Bound. |
| Requested capacity | Requested capacity of the PVC. |
| Capacity | Capacity of the PVC. |
| Used | Used space of the PVC. |
| Access modes | PVC access modes. |
| Volume mode | PVC volume mode. |
| StorageClasses | PVC storage class. |
| PersistentVolumes | Persistent volume associated with the PVC. |
| Conditions table | Displays the status of the PVC. |
3.3.7.1.2. YAML tab
You configure the persistent volume claim of the bootable volume by editing the YAML file on the YAML tab.
Example 3.45. YAML tab
| Element | Description |
|---|---|
| Save button | Save changes to the YAML file. |
| Reload button | Discard your changes and reload the YAML file. |
| Cancel button | Exit the YAML tab. |
| Download button | Download the YAML file to your local machine. |
3.3.8. MigrationPolicies page
You manage migration policies for workloads on the MigrationPolicies page.
Example 3.46. MigrationPolicies page
| Element | Description |
|---|---|
| Create MigrationPolicy | Create a migration policy by entering configurations and labels in a form or by editing a YAML file. |
| Search field | Search for a migration policy by name or by label. |
| Manage columns icon | Select up to 9 columns to display in the table. The Namespace column is only displayed when All Projects is selected from the Projects list. |
| MigrationPolicies table | List of migration policies. Click the actions menu to select an action. |
Click a migration policy to view the MigrationPolicy details page.
3.3.8.1. MigrationPolicy details page
You configure a migration policy on the MigrationPolicy details page.
Example 3.47. MigrationPolicy details page
| Element | Description |
|---|---|
| Details tab | Configure a migration policy by editing a form. |
| YAML tab | Configure a migration policy by editing a YAML configuration file. |
| Actions menu | Select Edit or Delete. |
3.3.8.1.1. Details tab
You configure a migration policy on the Details tab.
Example 3.48. Details tab
| Element | Description |
|---|---|
| Name | Migration policy name. |
| Description | Migration policy description. |
| Configurations | Click the edit icon to update the migration policy configurations. |
| Bandwidth per migration | Bandwidth request per migration. For unlimited bandwidth, set the value to 0. |
| Auto converge | When auto converge is enabled, the performance and availability of the virtual machines might be reduced to ensure that migration is successful. |
| Post-copy | Post-copy policy. |
| Completion timeout | Completion timeout value in seconds. |
| Project labels | Click Edit to edit the project labels. |
| VirtualMachine labels | Click Edit to edit the virtual machine labels. |
3.3.8.1.2. YAML tab
You configure the migration policy by editing the YAML file on the YAML tab.
Example 3.49. YAML tab
| Element | Description |
|---|---|
| Save button | Save changes to the YAML file. |
| Reload button | Discard your changes and reload the YAML file. |
| Cancel button | Exit the YAML tab. |
| Download button | Download the YAML file to your local machine. |
Chapter 4. Installing
4.1. Preparing your cluster for OpenShift Virtualization
Review this section before you install OpenShift Virtualization to ensure that your cluster meets the requirements.
- Installation method considerations
- You can use any installation method, including user-provisioned, installer-provisioned, or assisted installer, to deploy OpenShift Container Platform. However, the installation method and the cluster topology might affect OpenShift Virtualization functionality, such as snapshots or live migration.
- Red Hat OpenShift Data Foundation
- If you deploy OpenShift Virtualization with Red Hat OpenShift Data Foundation, you must create a dedicated storage class for Windows virtual machine disks. See Optimizing ODF PersistentVolumes for Windows VMs for details.
- IPv6
- You cannot run OpenShift Virtualization on a single-stack IPv6 cluster.
FIPS mode
If you install your cluster in FIPS mode, no additional setup is required for OpenShift Virtualization.
4.1.1. Supported platforms
You can use the following platforms with OpenShift Virtualization:
- On-premise bare metal servers. See Planning a bare metal cluster for OpenShift Virtualization.
IBM Cloud® Bare Metal Servers. See Deploy OpenShift Virtualization on IBM Cloud® Bare Metal nodes.
ImportantInstalling OpenShift Virtualization on IBM Cloud® Bare Metal Servers is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Bare metal instances or servers offered by other cloud providers are not supported.
4.1.1.1. OpenShift Virtualization on AWS bare metal
You can run OpenShift Virtualization on an Amazon Web Services (AWS) bare-metal OpenShift Container Platform cluster.
OpenShift Virtualization is also supported on Red Hat OpenShift Service on AWS (ROSA) Classic clusters, which have the same configuration requirements as AWS bare-metal clusters.
Before you set up your cluster, review the following summary of supported features and limitations:
- Installing
You can install the cluster by using installer-provisioned infrastructure, ensuring that you specify bare-metal instance types for the worker nodes by editing the install-config.yaml file. For example, you can use the c5n.metal type value for a machine based on x86_64 architecture. For more information, see the OpenShift Container Platform documentation about installing on AWS.
- Accessing virtual machines (VMs)
- There is no change to how you access VMs by using the virtctl CLI tool or the OpenShift Container Platform web console.
- You can expose VMs by using a NodePort or LoadBalancer service.
Note: The load balancer approach is preferable because OpenShift Container Platform automatically creates the load balancer in AWS and manages its lifecycle. A security group is also created for the load balancer, and you can use annotations to attach existing security groups. When you remove the service, OpenShift Container Platform removes the load balancer and its associated resources.
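As an illustration of the load balancer approach, the following hedged sketch uses virtctl to expose SSH on a running VM through a LoadBalancer service. The VM name fedora-vm and the service name are placeholders, not values from this document:
$ virtctl expose vm fedora-vm --name fedora-vm-ssh --type LoadBalancer --port 22
When the cloud provider provisions the load balancer, the external address appears in the EXTERNAL-IP column of the service.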
- Networking
- You cannot use Single Root I/O Virtualization (SR-IOV) or bridge Container Network Interface (CNI) networks, including virtual LAN (VLAN). If your application requires a flat layer 2 network or control over the IP pool, consider using OVN-Kubernetes secondary overlay networks.
- Storage
You can use any storage solution that is certified by the storage vendor to work with the underlying platform.
ImportantAWS bare-metal and ROSA clusters might have different supported storage solutions. Ensure that you confirm support with your storage vendor.
- Using Amazon Elastic File System (EFS) or Amazon Elastic Block Store (EBS) with OpenShift Virtualization might cause performance and functionality limitations. Consider using CSI storage, which supports ReadWriteMany (RWX), cloning, and snapshots, to enable live migration, fast VM creation, and VM snapshot capabilities.
- Hosted control planes (HCPs)
- HCPs for OpenShift Virtualization are not currently supported on AWS infrastructure.
4.1.2. Hardware and operating system requirements
Review the following hardware and operating system requirements for OpenShift Virtualization.
4.1.2.1. CPU requirements
Supported by Red Hat Enterprise Linux (RHEL) 9.
See Red Hat Ecosystem Catalog for supported CPUs.
NoteIf your worker nodes have different CPUs, live migration failures might occur because different CPUs have different capabilities. You can mitigate this issue by ensuring that your worker nodes have CPUs with the appropriate capacity and by configuring node affinity rules for your virtual machines.
See Configuring a required node affinity rule for details.
- Support for AMD and Intel 64-bit architectures (x86-64-v2).
- Support for Intel 64 or AMD64 CPU extensions.
- Intel VT or AMD-V hardware virtualization extensions enabled.
- NX (no execute) flag enabled.
4.1.2.2. Operating system requirements
Red Hat Enterprise Linux CoreOS (RHCOS) installed on worker nodes.
See About RHCOS for details.
NoteRHEL worker nodes are not supported.
4.1.2.3. Storage requirements
- Supported by OpenShift Container Platform. See Optimizing storage.
- You must create a default OpenShift Virtualization or OpenShift Container Platform storage class. The purpose of this is to address the unique storage needs of VM workloads and offer optimized performance, reliability, and user experience. If both OpenShift Virtualization and OpenShift Container Platform default storage classes exist, the OpenShift Virtualization class takes precedence when creating VM disks.
You must specify a default storage class for the cluster. See Managing the default storage class. If the default storage class provisioner supports the ReadWriteMany (RWX) access mode, use RWX for virtual machine disks, because it is required for live migration.
If the storage provisioner supports snapshots, there must be a VolumeSnapshotClass object associated with the default storage class.
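For example, you can mark an existing storage class as the cluster default with the standard Kubernetes annotation. The storage class name below is a placeholder:
$ oc patch storageclass <storage_class_name> -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'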
4.1.2.3.1. About volume and access modes for virtual machine disks
If you use the storage API with known storage providers, the volume and access modes are selected automatically. However, if you use a storage class that does not have a storage profile, you must configure the volume and access mode.
For best results, use the ReadWriteMany (RWX) access mode and the Block volume mode for the following reasons:
- ReadWriteMany (RWX) access mode is required for live migration.
- The Block volume mode performs significantly better than the Filesystem volume mode. This is because the Filesystem volume mode uses more storage layers, including a file system layer and a disk image file. These layers are not necessary for VM disk storage. For example, if you use Red Hat OpenShift Data Foundation, Ceph RBD volumes are preferable to CephFS volumes.
You cannot live migrate virtual machines with the following configurations:
- Storage volume with ReadWriteOnce (RWO) access mode
- Passthrough features such as GPUs
Do not set the evictionStrategy field to LiveMigrate for these virtual machines.
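As a sketch of these recommendations, the following hypothetical DataVolume requests a blank VM disk with the RWX access mode and the Block volume mode. The name, size, and storage class are placeholders:
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: example-vm-disk
spec:
  source:
    blank: {}
  storage:
    accessModes:
      - ReadWriteMany
    volumeMode: Block
    resources:
      requests:
        storage: 30Gi
    storageClassName: example-rwx-block-storage-class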
4.1.3. Live migration requirements
- Shared storage with ReadWriteMany (RWX) access mode.
- Sufficient RAM and network bandwidth.
NoteYou must ensure that there is enough memory request capacity in the cluster to support node drains that result in live migrations. You can determine the approximate required spare memory by using the following calculation:
Product of (Maximum number of nodes that can drain in parallel) and (Highest total VM memory request allocations across nodes)
The default number of migrations that can run in parallel in the cluster is 5.
- If the virtual machine uses a host model CPU, the nodes must support the virtual machine’s host model CPU.
A dedicated Multus network for live migration is highly recommended. A dedicated network minimizes the effects of network saturation on tenant workloads during migration.
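Migration behavior, including the number of parallel migrations, can be tuned through the liveMigrationConfig stanza of the HyperConverged CR. The following is a minimal sketch; the values shown are illustrative and should be sized against your own capacity planning:
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  liveMigrationConfig:
    parallelMigrationsPerCluster: 5
    parallelOutboundMigrationsPerNode: 2
    bandwidthPerMigration: 64Mi
    completionTimeoutPerGiB: 800
    progressTimeout: 150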
4.1.4. Physical resource overhead requirements
OpenShift Virtualization is an add-on to OpenShift Container Platform and imposes additional overhead that you must account for when planning a cluster. Each cluster machine must accommodate the following overhead requirements in addition to the OpenShift Container Platform requirements. Oversubscribing the physical resources in a cluster can affect performance.
The numbers noted in this documentation are based on Red Hat’s test methodology and setup. These numbers can vary based on your own individual setup and environments.
Memory overhead
Calculate the memory overhead values for OpenShift Virtualization by using the equations below.
Cluster memory overhead
Memory overhead per infrastructure node ≈ 150 MiB
Memory overhead per worker node ≈ 360 MiB
Additionally, OpenShift Virtualization environment resources require a total of 2179 MiB of RAM that is spread across all infrastructure nodes.
Virtual machine memory overhead
Memory overhead per virtual machine ≈ (0.002 × requested memory) \
+ 218 MiB \
+ 8 MiB × (number of vCPUs) \
+ 16 MiB × (number of graphics devices) \
+ (additional memory overhead)
Where:
- 218 MiB is required by the processes that run in the virt-launcher pod.
- 8 MiB × (number of vCPUs) is additional overhead for each virtual CPU requested by the virtual machine.
- 16 MiB × (number of graphics devices) is additional overhead for each virtual graphics device requested by the virtual machine.
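For example, assuming a virtual machine that requests 4 GiB of memory, 2 vCPUs, and one graphics device, with no additional memory overhead, the formula above gives approximately:
(0.002 × 4096 MiB) + 218 MiB + (8 MiB × 2) + (16 MiB × 1)
≈ 8 MiB + 218 MiB + 16 MiB + 16 MiB
≈ 258 MiB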
CPU overhead
Calculate the cluster processor overhead requirements for OpenShift Virtualization by using the equation below. The CPU overhead per virtual machine depends on your individual setup.
Cluster CPU overhead
CPU overhead for infrastructure nodes ≈ 4 cores
OpenShift Virtualization increases the overall utilization of cluster level services such as logging, routing, and monitoring. To account for this workload, ensure that nodes that host infrastructure components have capacity allocated for 4 additional cores (4000 millicores) distributed across those nodes.
CPU overhead for worker nodes ≈ 2 cores + CPU overhead per virtual machine
Each worker node that hosts virtual machines must have capacity for 2 additional cores (2000 millicores) for OpenShift Virtualization management workloads in addition to the CPUs required for virtual machine workloads.
Virtual machine CPU overhead
If dedicated CPUs are requested, there is a 1:1 impact on the cluster CPU overhead requirement. Otherwise, there are no specific rules about how many CPUs a virtual machine requires.
Storage overhead
Use the guidelines below to estimate storage overhead requirements for your OpenShift Virtualization environment.
Cluster storage overhead
Aggregated storage overhead per node ≈ 10 GiB
10 GiB is the estimated on-disk storage impact for each node in the cluster when you install OpenShift Virtualization.
Virtual machine storage overhead
Storage overhead per virtual machine depends on specific requests for resource allocation within the virtual machine. The request could be for ephemeral storage on the node or storage resources hosted elsewhere in the cluster. OpenShift Virtualization does not currently allocate any additional ephemeral storage for the running container itself.
Example
As a cluster administrator, if you plan to host 10 virtual machines in the cluster, each with 1 GiB of RAM and 2 vCPUs, the memory impact across the cluster is 11.68 GiB. The estimated on-disk storage impact for each node in the cluster is 10 GiB and the CPU impact for worker nodes that host virtual machine workloads is a minimum of 2 cores.
4.1.5. Single-node OpenShift differences
You can install OpenShift Virtualization on single-node OpenShift.
However, single-node OpenShift does not support the following features:
- High availability
- Pod disruption
- Live migration
- Virtual machines or templates that have an eviction strategy configured
4.1.6. Object maximums
You must consider the following tested object maximums when planning your cluster:
4.1.7. Cluster high-availability options
You can configure one of the following high-availability (HA) options for your cluster:
Automatic high availability for installer-provisioned infrastructure (IPI) is available by deploying machine health checks.
Note: In OpenShift Container Platform clusters installed using installer-provisioned infrastructure and with a properly configured MachineHealthCheck resource, if a node fails the machine health check and becomes unavailable to the cluster, it is recycled. What happens next with VMs that ran on the failed node depends on a series of conditions. See Run strategies for more detailed information about the potential outcomes and how run strategies affect those outcomes.
- Automatic high availability for both IPI and non-IPI is available by using the Node Health Check Operator on the OpenShift Container Platform cluster to deploy the NodeHealthCheck controller. The controller identifies unhealthy nodes and uses a remediation provider, such as the Self Node Remediation Operator or Fence Agents Remediation Operator, to remediate the unhealthy nodes. For more information on remediation, fencing, and maintaining nodes, see the Workload Availability for Red Hat OpenShift documentation.
- High availability for any platform is available by using either a monitoring system or a qualified human to monitor node availability. When a node is lost, shut it down and run oc delete node <lost_node>.
Note: Without an external monitoring system or a qualified human monitoring node health, virtual machines lose high availability.
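For the machine health check option described above, a minimal sketch of a MachineHealthCheck manifest is shown below. The name, label selector, timeouts, and maxUnhealthy threshold are illustrative placeholders:
apiVersion: machine.openshift.io/v1beta1
kind: MachineHealthCheck
metadata:
  name: example-worker-healthcheck
  namespace: openshift-machine-api
spec:
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-machine-role: worker
  unhealthyConditions:
    - type: Ready
      status: "False"
      timeout: 300s
    - type: Ready
      status: "Unknown"
      timeout: 300s
  maxUnhealthy: "40%"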
4.2. Installing OpenShift Virtualization
Install OpenShift Virtualization to add virtualization functionality to your OpenShift Container Platform cluster.
If you install OpenShift Virtualization in a restricted environment with no internet connectivity, you must configure Operator Lifecycle Manager (OLM) for restricted networks.
If you have limited internet connectivity, you can configure proxy support in OLM to access the OperatorHub.
4.2.1. Installing the OpenShift Virtualization Operator
Install the OpenShift Virtualization Operator by using the OpenShift Container Platform web console or the command line.
4.2.1.1. Installing the OpenShift Virtualization Operator by using the web console
You can deploy the OpenShift Virtualization Operator by using the OpenShift Container Platform web console.
Prerequisites
- Install OpenShift Container Platform 4.14 on your cluster.
- Log in to the OpenShift Container Platform web console as a user with cluster-admin permissions.
Procedure
- From the Administrator perspective, click Operators → OperatorHub.
- In the Filter by keyword field, type Virtualization.
- Select the OpenShift Virtualization Operator tile with the Red Hat source label.
- Read the information about the Operator and click Install.
On the Install Operator page:
- Select stable from the list of available Update Channel options. This ensures that you install the version of OpenShift Virtualization that is compatible with your OpenShift Container Platform version.
- For Installed Namespace, ensure that the Operator recommended namespace option is selected. This installs the Operator in the mandatory openshift-cnv namespace, which is automatically created if it does not exist.
Warning: Attempting to install the OpenShift Virtualization Operator in a namespace other than openshift-cnv causes the installation to fail.
- For Approval Strategy, it is highly recommended that you select Automatic, which is the default value, so that OpenShift Virtualization automatically updates when a new version is available in the stable update channel.
While it is possible to select the Manual approval strategy, this is inadvisable because of the high risk that it presents to the supportability and functionality of your cluster. Only select Manual if you fully understand these risks and cannot use Automatic.
WarningBecause OpenShift Virtualization is only supported when used with the corresponding OpenShift Container Platform version, missing OpenShift Virtualization updates can cause your cluster to become unsupported.
- Click Install to make the Operator available to the openshift-cnv namespace.
- When the Operator installs successfully, click Create HyperConverged.
- Optional: Configure Infra and Workloads node placement options for OpenShift Virtualization components.
- Click Create to launch OpenShift Virtualization.
Verification
- Navigate to the Workloads → Pods page and monitor the OpenShift Virtualization pods until they are all Running. After all the pods display the Running state, you can use OpenShift Virtualization.
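Equivalently, you can watch the same pods from the command line, for example:
$ oc get pods -n openshift-cnv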
4.2.1.2. Installing the OpenShift Virtualization Operator by using the command line
Subscribe to the OpenShift Virtualization catalog and install the OpenShift Virtualization Operator by applying manifests to your cluster.
4.2.1.2.1. Subscribing to the OpenShift Virtualization catalog by using the CLI
Before you install OpenShift Virtualization, you must subscribe to the OpenShift Virtualization catalog. Subscribing gives the openshift-cnv namespace access to the OpenShift Virtualization Operators.
To subscribe, configure Namespace, OperatorGroup, and Subscription objects by applying a single manifest to your cluster.
Prerequisites
- Install OpenShift Container Platform 4.14 on your cluster.
- Install the OpenShift CLI (oc).
- Log in as a user with cluster-admin privileges.
Procedure
Create a YAML file that contains the following manifest:
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-cnv
  labels:
    openshift.io/cluster-monitoring: "true"
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: kubevirt-hyperconverged-group
  namespace: openshift-cnv
spec:
  targetNamespaces:
    - openshift-cnv
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: hco-operatorhub
  namespace: openshift-cnv
spec:
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  name: kubevirt-hyperconverged
  startingCSV: kubevirt-hyperconverged-operator.v4.14.17
  channel: "stable"
Using the stable channel ensures that you install the version of OpenShift Virtualization that is compatible with your OpenShift Container Platform version.
Create the required Namespace, OperatorGroup, and Subscription objects for OpenShift Virtualization by running the following command:
$ oc apply -f <filename>.yaml
Verification
You must verify that the subscription creation was successful before you can proceed with installing OpenShift Virtualization.
Check that the ClusterServiceVersion (CSV) object was created successfully. Run the following command and verify the output:
$ oc get csv -n openshift-cnv
If the CSV was created successfully, the output shows an entry that contains a NAME value of kubevirt-hyperconverged-operator-*, a DISPLAY value of OpenShift Virtualization, and a PHASE value of Succeeded, as shown in the following example output:
Example output:
NAME                                        DISPLAY                    VERSION   REPLACES                                    PHASE
kubevirt-hyperconverged-operator.v4.14.17   OpenShift Virtualization   4.14.17   kubevirt-hyperconverged-operator.v4.13.0    Succeeded
Check that the HyperConverged custom resource (CR) has the correct version. Run the following command and verify the output:
$ oc get hco -n openshift-cnv kubevirt-hyperconverged -o json | jq .status.versions
Example output:
{ "name": "operator", "version": "4.14.17" }
Verify the HyperConverged CR conditions. Run the following command and check the output:
$ oc get hco kubevirt-hyperconverged -n openshift-cnv -o json | jq -r '.status.conditions[] | {type,status}'
Example output:
{ "type": "ReconcileComplete", "status": "True" }
{ "type": "Available", "status": "True" }
{ "type": "Progressing", "status": "False" }
{ "type": "Degraded", "status": "False" }
{ "type": "Upgradeable", "status": "True" }
You can configure certificate rotation parameters in the YAML file.
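Certificate rotation is controlled through the certConfig stanza of the HyperConverged custom resource. The following is a minimal sketch; the durations shown are illustrative:
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  certConfig:
    ca:
      duration: 48h0m0s
      renewBefore: 24h0m0s
    server:
      duration: 24h0m0s
      renewBefore: 12h0m0s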
4.2.1.2.2. Deploying the OpenShift Virtualization Operator by using the CLI
You can deploy the OpenShift Virtualization Operator by using the OpenShift CLI (oc).
Prerequisites
- Subscribe to the OpenShift Virtualization catalog in the openshift-cnv namespace.
- Log in as a user with cluster-admin privileges.
Procedure
Create a YAML file that contains the following manifest:
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
Deploy the OpenShift Virtualization Operator by running the following command:
$ oc apply -f <file_name>.yaml
Verification
Ensure that OpenShift Virtualization deployed successfully by watching the PHASE of the cluster service version (CSV) in the openshift-cnv namespace. Run the following command:
$ watch oc get csv -n openshift-cnv
The following output displays if deployment was successful:
Example output
NAME                                        DISPLAY                    VERSION   REPLACES   PHASE
kubevirt-hyperconverged-operator.v4.14.17   OpenShift Virtualization   4.14.17              Succeeded
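Alternatively, you can block until the HyperConverged CR reports that it is available; the timeout value here is arbitrary:
$ oc wait hco/kubevirt-hyperconverged -n openshift-cnv --for=condition=Available --timeout=10m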
4.2.2. Next steps
- The hostpath provisioner is a local storage provisioner designed for OpenShift Virtualization. If you want to configure local storage for virtual machines, you must enable the hostpath provisioner first.
4.3. Uninstalling OpenShift Virtualization
You uninstall OpenShift Virtualization by using the web console or the command-line interface (CLI) to delete the OpenShift Virtualization workloads, the Operator, and its resources.
4.3.1. Uninstalling OpenShift Virtualization by using the web console
You uninstall OpenShift Virtualization by using the web console to perform the following tasks:
You must first delete all virtual machines and virtual machine instances.
You cannot uninstall OpenShift Virtualization while its workloads remain on the cluster.
4.3.1.1. Deleting the HyperConverged custom resource
To uninstall OpenShift Virtualization, you first delete the HyperConverged custom resource.
Prerequisites
- You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.
Procedure
- Navigate to the Operators → Installed Operators page.
- Select the OpenShift Virtualization Operator.
- Click the OpenShift Virtualization Deployment tab.
- Click the Options menu beside kubevirt-hyperconverged and select Delete HyperConverged.
4.3.1.2. Deleting Operators from a cluster using the web console
Cluster administrators can delete installed Operators from a selected namespace by using the web console.
Prerequisites
- You have access to an OpenShift Container Platform cluster web console using an account with cluster-admin permissions.
Procedure
- Navigate to the Operators → Installed Operators page.
- Scroll or enter a keyword into the Filter by name field to find the Operator that you want to remove. Then, click on it.
On the right side of the Operator Details page, select Uninstall Operator from the Actions list.
An Uninstall Operator? dialog box is displayed.
Select Uninstall to remove the Operator, Operator deployments, and pods. Following this action, the Operator stops running and no longer receives updates.
NoteThis action does not remove resources managed by the Operator, including custom resource definitions (CRDs) and custom resources (CRs). Dashboards and navigation items enabled by the web console and off-cluster resources that continue to run might need manual clean up. To remove these after uninstalling the Operator, you might need to manually delete the Operator CRDs.
4.3.1.3. Deleting a namespace using the web console
You can delete a namespace by using the OpenShift Container Platform web console.
Prerequisites
- You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.
Procedure
- Navigate to Administration → Namespaces.
- Locate the namespace that you want to delete in the list of namespaces.
- On the far right side of the namespace listing, select Delete Namespace from the Options menu.
- When the Delete Namespace pane opens, enter the name of the namespace that you want to delete in the field.
- Click Delete.
4.3.1.4. Deleting OpenShift Virtualization custom resource definitions
You can delete the OpenShift Virtualization custom resource definitions (CRDs) by using the web console.
Prerequisites
- You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.
Procedure
- Navigate to Administration → CustomResourceDefinitions.
- Select the Label filter and enter operators.coreos.com/kubevirt-hyperconverged.openshift-cnv in the Search field to display the OpenShift Virtualization CRDs.
- Click the Options menu beside each CRD and select Delete CustomResourceDefinition.
4.3.2. Uninstalling OpenShift Virtualization by using the CLI
You can uninstall OpenShift Virtualization by using the OpenShift CLI (oc).
Prerequisites
- You have access to the OpenShift Container Platform cluster using an account with cluster-admin permissions.
- You have installed the OpenShift CLI (oc).
- You have deleted all virtual machines and virtual machine instances. You cannot uninstall OpenShift Virtualization while its workloads remain on the cluster.
Procedure
Delete the HyperConverged custom resource:
$ oc delete HyperConverged kubevirt-hyperconverged -n openshift-cnv
Delete the OpenShift Virtualization Operator subscription:
$ oc delete subscription hco-operatorhub -n openshift-cnv
Delete the OpenShift Virtualization ClusterServiceVersion resource:
$ oc delete csv -n openshift-cnv -l operators.coreos.com/kubevirt-hyperconverged.openshift-cnv
Delete the OpenShift Virtualization namespace:
$ oc delete namespace openshift-cnv
List the OpenShift Virtualization custom resource definitions (CRDs) by running the oc delete crd command with the dry-run option:
$ oc delete crd --dry-run=client -l operators.coreos.com/kubevirt-hyperconverged.openshift-cnv
Example output
customresourcedefinition.apiextensions.k8s.io "cdis.cdi.kubevirt.io" deleted (dry run)
customresourcedefinition.apiextensions.k8s.io "hostpathprovisioners.hostpathprovisioner.kubevirt.io" deleted (dry run)
customresourcedefinition.apiextensions.k8s.io "hyperconvergeds.hco.kubevirt.io" deleted (dry run)
customresourcedefinition.apiextensions.k8s.io "kubevirts.kubevirt.io" deleted (dry run)
customresourcedefinition.apiextensions.k8s.io "networkaddonsconfigs.networkaddonsoperator.network.kubevirt.io" deleted (dry run)
customresourcedefinition.apiextensions.k8s.io "ssps.ssp.kubevirt.io" deleted (dry run)
customresourcedefinition.apiextensions.k8s.io "tektontasks.tektontasks.kubevirt.io" deleted (dry run)
command without theoc delete crdoption:dry-run$ oc delete crd -l operators.coreos.com/kubevirt-hyperconverged.openshift-cnv
Chapter 5. Postinstallation configuration
5.1. Postinstallation configuration
The following procedures are typically performed after OpenShift Virtualization is installed. You can configure the components that are relevant for your environment:
- Node placement rules for OpenShift Virtualization Operators, workloads, and controllers
- Installing the Kubernetes NMState and SR-IOV Operators
- Configuring a Linux bridge network for external access to virtual machines (VMs)
- Configuring a dedicated secondary network for live migration
- Configuring an SR-IOV network
- Enabling the creation of load balancer services by using the OpenShift Container Platform web console
- Defining a default storage class for the Container Storage Interface (CSI)
- Configuring local storage by using the Hostpath Provisioner (HPP)
5.2. Specifying nodes for OpenShift Virtualization components
The default scheduling for virtual machines (VMs) on bare metal nodes is appropriate. Optionally, you can specify the nodes where you want to deploy OpenShift Virtualization Operators, workloads, and controllers by configuring node placement rules.
You can configure node placement rules for some components after installing OpenShift Virtualization, but virtual machines cannot be present if you want to configure node placement rules for workloads.
5.2.1. About node placement rules for OpenShift Virtualization components
You can use node placement rules for the following tasks:
- Deploy virtual machines only on nodes intended for virtualization workloads.
- Deploy Operators only on infrastructure nodes.
- Maintain separation between workloads.
Depending on the object, you can use one or more of the following rule types:
nodeSelector- Allows pods to be scheduled on nodes that are labeled with the key-value pair or pairs that you specify in this field. The node must have labels that exactly match all listed pairs.
affinity- Enables you to use more expressive syntax to set rules that match nodes with pods. Affinity also allows for more nuance in how the rules are applied. For example, you can specify that a rule is a preference, not a requirement. If a rule is a preference, pods are still scheduled when the rule is not satisfied.
tolerations- Allows pods to be scheduled on nodes that have matching taints. If a taint is applied to a node, that node only accepts pods that tolerate the taint.
5.2.2. Applying node placement rules
You can apply node placement rules by editing a Subscription, HyperConverged, or HostPathProvisioner object.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You are logged in with cluster administrator permissions.
Procedure
Edit the object in your default editor by running the following command:
$ oc edit <resource_type> <resource_name> -n openshift-cnv
- Save the file to apply the changes.
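For example, to edit the HyperConverged CR by using the hco short name that appears elsewhere in this document:
$ oc edit hco kubevirt-hyperconverged -n openshift-cnv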
5.2.3. Node placement rule examples
You can specify node placement rules for an OpenShift Virtualization component by editing a Subscription, HyperConverged, or HostPathProvisioner object.
5.2.3.1. Subscription object node placement rule examples
To specify the nodes where OLM deploys the OpenShift Virtualization Operators, edit the Subscription object during OpenShift Virtualization installation.
Currently, you cannot configure node placement rules for the Subscription object by using the web console.
The Subscription object does not support the affinity node placement rule.
Example Subscription object with nodeSelector rule
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
name: hco-operatorhub
namespace: openshift-cnv
spec:
source: redhat-operators
sourceNamespace: openshift-marketplace
name: kubevirt-hyperconverged
startingCSV: kubevirt-hyperconverged-operator.v4.14.17
channel: "stable"
config:
nodeSelector:
example.io/example-infra-key: example-infra-value
OLM deploys the OpenShift Virtualization Operators on nodes labeled example.io/example-infra-key = example-infra-value.
Example Subscription object with tolerations rule
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
name: hco-operatorhub
namespace: openshift-cnv
spec:
source: redhat-operators
sourceNamespace: openshift-marketplace
name: kubevirt-hyperconverged
startingCSV: kubevirt-hyperconverged-operator.v4.14.17
channel: "stable"
config:
tolerations:
- key: "key"
operator: "Equal"
value: "virtualization"
effect: "NoSchedule"
OLM deploys the OpenShift Virtualization Operators on nodes that have the key = virtualization:NoSchedule taint. Only pods with a matching toleration are scheduled on those nodes.
5.2.3.2. HyperConverged object node placement rule example
To specify the nodes where OpenShift Virtualization deploys its components, you can edit the nodePlacement object in the HyperConverged custom resource (CR) that you create when you install OpenShift Virtualization.
Example HyperConverged object with nodeSelector rule
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
name: kubevirt-hyperconverged
namespace: openshift-cnv
spec:
infra:
nodePlacement:
nodeSelector:
example.io/example-infra-key: example-infra-value
workloads:
nodePlacement:
nodeSelector:
example.io/example-workloads-key: example-workloads-value
- Infrastructure resources are placed on nodes labeled example.io/example-infra-key = example-infra-value.
- Workloads are placed on nodes labeled example.io/example-workloads-key = example-workloads-value.
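After the change rolls out, you can spot-check where the Operator and workload pods landed. A minimal sketch, assuming the example labels shown above:
$ oc get pods -n openshift-cnv -o wide
$ oc get nodes -l example.io/example-workloads-key=example-workloads-value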
Example HyperConverged object with affinity rule
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
name: kubevirt-hyperconverged
namespace: openshift-cnv
spec:
infra:
nodePlacement:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: example.io/example-infra-key
operator: In
values:
- example-infra-value
workloads:
nodePlacement:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: example.io/example-workloads-key
operator: In
values:
- example-workloads-value
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
preference:
matchExpressions:
- key: example.io/num-cpus
operator: Gt
values:
- 8
- Infrastructure resources are placed on nodes labeled example.io/example-infra-key = example-infra-value.
- Workloads are placed on nodes labeled example.io/example-workloads-key = example-workloads-value.
- Nodes that have more than eight CPUs are preferred for workloads, but if they are not available, pods are still scheduled.
Example HyperConverged object with tolerations rule
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
name: kubevirt-hyperconverged
namespace: openshift-cnv
spec:
workloads:
nodePlacement:
tolerations:
- key: "key"
operator: "Equal"
value: "virtualization"
effect: "NoSchedule"
Nodes reserved for OpenShift Virtualization components are labeled with the key = virtualization:NoSchedule taint. Only pods with a matching toleration are scheduled on those nodes.
5.2.3.3. HostPathProvisioner object node placement rule example
You can edit the HostPathProvisioner object to configure node placement rules for the hostpath provisioner (HPP).
You must schedule the hostpath provisioner (HPP) and the OpenShift Virtualization components on the same nodes. Otherwise, virtualization pods that use the hostpath provisioner cannot run. You cannot run virtual machines.
After you deploy a virtual machine (VM) with the HPP storage class, you can remove the hostpath provisioner pod from the same node by using the node selector. However, you must first revert that change, at least for that specific node, and wait for the pod to run before trying to delete the VM.
You can configure node placement rules by specifying nodeSelector, affinity, or tolerations for the spec.workload field of the HostPathProvisioner object.
Example HostPathProvisioner object with nodeSelector rule
apiVersion: hostpathprovisioner.kubevirt.io/v1beta1
kind: HostPathProvisioner
metadata:
name: hostpath-provisioner
spec:
imagePullPolicy: IfNotPresent
pathConfig:
path: "</path/to/backing/directory>"
useNamingPrefix: false
workload:
nodeSelector:
example.io/example-workloads-key: example-workloads-value
Workloads are placed on nodes labeled example.io/example-workloads-key = example-workloads-value.
5.3. Postinstallation network configuration
By default, OpenShift Virtualization is installed with a single, internal pod network.
After you install OpenShift Virtualization, you can install networking Operators and configure additional networks.
5.3.1. Installing networking Operators
You must install the Kubernetes NMState Operator to configure a Linux bridge network for live migration or external access to virtual machines (VMs). For installation instructions, see Installing the Kubernetes NMState Operator by using the web console.
You can install the SR-IOV Operator to manage SR-IOV network devices and network attachments. For installation instructions, see Installing the SR-IOV Network Operator.
You can add the MetalLB Operator to manage the lifecycle for an instance of MetalLB on your cluster. For installation instructions, see Installing the MetalLB Operator from the OperatorHub using the web console.
5.3.2. Configuring a Linux bridge network
After you install the Kubernetes NMState Operator, you can configure a Linux bridge network for live migration or external access to virtual machines (VMs).
5.3.2.1. Creating a Linux bridge NNCP
You can create a NodeNetworkConfigurationPolicy (NNCP) manifest for a Linux bridge network.
Prerequisites
- You have installed the Kubernetes NMState Operator.
Procedure
Create the NodeNetworkConfigurationPolicy manifest. This example includes sample values that you must replace with your own information.
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: br1-eth1-policy
spec:
  desiredState:
    interfaces:
    - name: br1
      description: Linux bridge with eth1 as a port
      type: linux-bridge
      state: up
      ipv4:
        enabled: false
      bridge:
        options:
          stp:
            enabled: false
        port:
        - name: eth1
- metadata.name defines the name of the node network configuration policy.
- spec.desiredState.interfaces.name defines the name of the new Linux bridge.
- spec.desiredState.interfaces.description is an optional field that can be used to define a human-readable description for the bridge.
- spec.desiredState.interfaces.type defines the interface type. In this example, the type is a Linux bridge.
- spec.desiredState.interfaces.state defines the requested state for the interface after creation.
- spec.desiredState.interfaces.ipv4.enabled defines whether the ipv4 protocol is active. Setting this to false disables IPv4 addressing on this bridge.
- spec.desiredState.interfaces.bridge.options.stp.enabled defines whether STP is active. Setting this to false disables STP on this bridge.
- spec.desiredState.interfaces.bridge.port.name defines the node NIC to which the bridge is attached.
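After you define the policy, you can apply it and watch the enactments roll out. A minimal sketch, assuming the manifest above was saved as br1-eth1-policy.yaml (a hypothetical file name):
$ oc apply -f br1-eth1-policy.yaml
$ oc get nncp br1-eth1-policy
$ oc get nnce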
5.3.2.2. Creating a Linux bridge NAD by using the web console
You can create a network attachment definition (NAD) to provide layer-2 networking to pods and virtual machines by using the OpenShift Container Platform web console.
A Linux bridge network attachment definition is the most efficient method for connecting a virtual machine to a VLAN.
Configuring IP address management (IPAM) in a network attachment definition for virtual machines is not supported.
Procedure
- In the web console, click Networking → NetworkAttachmentDefinitions.
Click Create Network Attachment Definition.
Note: The network attachment definition must be in the same namespace as the pod or virtual machine.
- Enter a unique Name and optional Description.
- Select CNV Linux bridge from the Network Type list.
- Enter the name of the bridge in the Bridge Name field.
- Optional: If the resource has VLAN IDs configured, enter the ID numbers in the VLAN Tag Number field.
- Optional: Select MAC Spoof Check to enable MAC spoof filtering. This feature provides security against a MAC spoofing attack by allowing only a single MAC address to exit the pod.
- Click Create.
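For reference, the console generates a NetworkAttachmentDefinition object similar to the following sketch. The NAD name, bridge name br1, and VLAN tag 100 are assumptions for illustration only:
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: bridge-network
spec:
  config: '{
    "cniVersion": "0.3.1",
    "name": "bridge-network",
    "type": "cnv-bridge",
    "bridge": "br1",
    "vlan": 100,
    "macspoofchk": true
  }'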
5.3.3. Configuring a network for live migration
After you have configured a Linux bridge network, you can configure a dedicated network for live migration. A dedicated network minimizes the effects of network saturation on tenant workloads during live migration.
5.3.3.1. Configuring a dedicated secondary network for live migration
To configure a dedicated secondary network for live migration, you must first create a bridge network attachment definition (NAD) by using the CLI. Then, you add the name of the NetworkAttachmentDefinition object to the HyperConverged custom resource (CR).
Prerequisites
- You installed the OpenShift CLI (oc).
- You logged in to the cluster as a user with the cluster-admin role.
- Each node has at least two Network Interface Cards (NICs).
- The NICs for live migration are connected to the same VLAN.
Procedure
Create a NetworkAttachmentDefinition manifest according to the following example:
Example configuration file
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: my-secondary-network
  namespace: openshift-cnv
spec:
  config: '{
    "cniVersion": "0.3.1",
    "name": "migration-bridge",
    "type": "macvlan",
    "master": "eth1",
    "mode": "bridge",
    "ipam": {
      "type": "whereabouts",
      "range": "10.200.5.0/24"
    }
  }'
- metadata.name specifies the name of the NetworkAttachmentDefinition object.
- config.master specifies the name of the NIC to be used for live migration.
- config.type specifies the name of the CNI plugin that provides the network for the NAD.
- config.range specifies an IP address range for the secondary network. This range must not overlap the IP addresses of the main network.
Open the HyperConverged CR in your default editor by running the following command:
$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
Add the name of the NetworkAttachmentDefinition object to the spec.liveMigrationConfig stanza of the HyperConverged CR:
Example HyperConverged manifest
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  liveMigrationConfig:
    completionTimeoutPerGiB: 800
    network: <network>
    parallelMigrationsPerCluster: 5
    parallelOutboundMigrationsPerNode: 2
    progressTimeout: 150
# ...
- spec.liveMigrationConfig.network specifies the name of the Multus NetworkAttachmentDefinition object to be used for live migrations.
Save your changes and exit the editor. The virt-handler pods restart and connect to the secondary network.
Verification
When the node that the virtual machine runs on is placed into maintenance mode, the VM automatically migrates to another node in the cluster. You can verify that the migration occurred over the secondary network and not the default pod network by checking the target IP address in the virtual machine instance (VMI) metadata.
$ oc get vmi <vmi_name> -o jsonpath='{.status.migrationState.targetNodeAddress}'
5.3.3.2. Selecting a dedicated network by using the web console
You can select a dedicated network for live migration by using the OpenShift Container Platform web console.
Prerequisites
- You configured a Multus network for live migration.
- You created a network attachment definition for the network.
Procedure
- Go to Virtualization > Overview in the OpenShift Container Platform web console.
- Click the Settings tab and then click Live migration.
- Select the network from the Live migration network list.
5.3.4. Configuring an SR-IOV network
After you install the SR-IOV Operator, you can configure an SR-IOV network.
5.3.4.1. Configuring SR-IOV network devices
The SR-IOV Network Operator adds the SriovNetworkNodePolicy.sriovnetwork.openshift.io CustomResourceDefinition to your cluster. You can configure an SR-IOV network device by creating a SriovNetworkNodePolicy custom resource (CR).
Note: When applying the configuration specified in a SriovNetworkNodePolicy object, the SR-IOV Operator might drain the nodes, and in some cases, reboot nodes.
It might take several minutes for a configuration change to apply.
Prerequisites
- You installed the OpenShift CLI (oc).
- You have access to the cluster as a user with the cluster-admin role.
- You have installed the SR-IOV Network Operator.
- You have enough available nodes in your cluster to handle the evicted workload from drained nodes.
- You have not selected any control plane nodes for SR-IOV network device configuration.
Procedure
Create an SriovNetworkNodePolicy object, and then save the YAML in the <name>-sriov-node-network.yaml file. Replace <name> with the name for this configuration.
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: <name>
  namespace: openshift-sriov-network-operator
spec:
  resourceName: <sriov_resource_name>
  nodeSelector:
    feature.node.kubernetes.io/network-sriov.capable: "true"
  priority: <priority>
  mtu: <mtu>
  numVfs: <num>
  nicSelector:
    vendor: "<vendor_code>"
    deviceID: "<device_id>"
    pfNames: ["<pf_name>", ...]
    rootDevices: ["<pci_bus_id>", "..."]
  deviceType: vfio-pci
  isRdma: false
- metadata.name specifies a name for the SriovNetworkNodePolicy object.
- metadata.namespace specifies the namespace where the SR-IOV Network Operator is installed.
- spec.resourceName specifies the resource name of the SR-IOV device plugin. You can create multiple SriovNetworkNodePolicy objects for a resource name.
- spec.nodeSelector specifies the node selector to select which nodes are configured. Only SR-IOV network devices on selected nodes are configured. The SR-IOV Container Network Interface (CNI) plugin and device plugin are deployed only on selected nodes.
- spec.priority is an optional field that specifies an integer value between 0 and 99. A smaller number gets higher priority, so a priority of 10 is higher than a priority of 99. The default value is 99.
- spec.mtu is an optional field that specifies a value for the maximum transmission unit (MTU) of the virtual function. The maximum MTU value can vary for different NIC models.
- spec.numVfs specifies the number of the virtual functions (VF) to create for the SR-IOV physical network device. For an Intel network interface controller (NIC), the number of VFs cannot be larger than the total VFs supported by the device. For a Mellanox NIC, the number of VFs cannot be larger than 127.
- spec.nicSelector selects the Ethernet device for the Operator to configure. You do not need to specify values for all the parameters.
  Note: It is recommended to identify the Ethernet adapter with enough precision to minimize the possibility of selecting an Ethernet device unintentionally. If you specify rootDevices, you must also specify a value for vendor, deviceID, or pfNames. If you specify both pfNames and rootDevices at the same time, ensure that they point to an identical device.
- spec.nicSelector.vendor is an optional field that specifies the vendor hex code of the SR-IOV network device. The only allowed values are 8086 and 15b3.
- spec.nicSelector.deviceID is an optional field that specifies the device hex code of the SR-IOV network device. The only allowed values are 158b, 1015, and 1017.
- spec.nicSelector.pfNames is an optional field that specifies an array of one or more physical function (PF) names for the Ethernet device.
- spec.nicSelector.rootDevices is an optional field that specifies an array of one or more PCI bus addresses for the physical function of the Ethernet device. Provide the address in the following format: 0000:02:00.1.
- spec.deviceType specifies the driver type. The vfio-pci driver type is required for virtual functions in OpenShift Virtualization.
- spec.isRdma is an optional field that specifies whether to enable remote direct memory access (RDMA) mode. For a Mellanox card, set isRdma to false. The default value is false.
  Note: If the isRdma flag is set to true, you can continue to use the RDMA enabled VF as a normal network device. A device can be used in either mode.
Optional: Label the SR-IOV capable cluster nodes with SriovNetworkNodePolicy.Spec.NodeSelector if they are not already labeled. For more information about labeling nodes, see "Understanding how to update labels on nodes".
object. When running the following command, replaceSriovNetworkNodePolicywith the name for this configuration:<name>$ oc create -f <name>-sriov-node-network.yamlAfter applying the configuration update, all the pods in
namespace transition to thesriov-network-operatorstatus.RunningTo verify that the SR-IOV network device is configured, enter the following command. Replace
with the name of a node with the SR-IOV network device that you just configured.<node_name>$ oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name> -o jsonpath='{.status.syncStatus}'
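Once the Operator has finished syncing the node, you can expect output similar to the following (a hedged expectation, not a guaranteed value):
Succeeded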
5.3.5. Enabling load balancer service creation by using the web console
You can enable the creation of load balancer services for a virtual machine (VM) by using the OpenShift Container Platform web console.
Prerequisites
- You have configured a load balancer for the cluster.
- You have logged in as a user with the cluster-admin role.
- You created a network attachment definition for the network.
Procedure
- Go to Virtualization → Overview.
- On the Settings tab, click Cluster.
- Expand LoadBalancer service and select Enable the creation of LoadBalancer services for SSH connections to VirtualMachines.
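With this setting enabled, an SSH load balancer service can also be created from the CLI. A minimal sketch, in which the service name and ports are assumptions for illustration:
$ virtctl expose vm <vm_name> --name=<vm_name>-ssh --type=LoadBalancer --port=22 --target-port=22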
5.4. Postinstallation storage configuration
The following storage configuration tasks are mandatory:
- You must configure a default storage class for your cluster. Otherwise, the cluster cannot receive automated boot source updates.
- You must configure storage profiles if your storage provider is not recognized by CDI. A storage profile provides recommended storage settings based on the associated storage class.
Optional: You can configure local storage by using the hostpath provisioner (HPP).
See the storage configuration overview for more options, including configuring the Containerized Data Importer (CDI), data volumes, and automatic boot source updates.
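As a quick check of the default storage class requirement above, you can list the storage classes and, if needed, mark one as the cluster default. A minimal sketch, assuming a storage class named <storage_class_name>:
$ oc get storageclass
$ oc patch storageclass <storage_class_name> \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'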
5.4.1. Configuring local storage by using the HPP
When you install the OpenShift Virtualization Operator, the Hostpath Provisioner (HPP) Operator is automatically installed. The HPP Operator creates the HPP provisioner.
The HPP is a local storage provisioner designed for OpenShift Virtualization. To use the HPP, you must create an HPP custom resource (CR).
HPP storage pools must not be in the same partition as the operating system. Otherwise, the storage pools might fill the operating system partition. If the operating system partition is full, performance can be affected or the node can become unstable or unusable.
5.4.1.1. Creating a storage class for the CSI driver with the storagePools stanza
To use the hostpath provisioner (HPP), you must create an associated storage class for the Container Storage Interface (CSI) driver.
When you create a storage class, you set parameters that affect the dynamic provisioning of persistent volumes (PVs) that belong to that storage class. You cannot update a StorageClass object's parameters after you create it.
Virtual machines use data volumes that are based on local PVs. Local PVs are bound to specific nodes. While a disk image is prepared for consumption by the virtual machine, it is possible that the virtual machine cannot be scheduled to the node where the local storage PV was previously pinned.
To solve this problem, use the Kubernetes pod scheduler to bind the persistent volume claim (PVC) to a PV on the correct node. By using a StorageClass with the volumeBindingMode parameter set to WaitForFirstConsumer, the binding and provisioning of the PV is delayed until a pod is created using the PVC.
Procedure
Create a storageclass_csi.yaml file to define the storage class:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hostpath-csi
provisioner: kubevirt.io.hostpath-provisioner
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
parameters:
  storagePool: my-storage-pool
- reclaimPolicy specifies whether the underlying storage is deleted or retained when a user deletes a PVC. The two possible reclaimPolicy values are Delete and Retain. If you do not specify a value, the default value is Delete.
- volumeBindingMode specifies the timing of PV creation. The WaitForFirstConsumer configuration in this example means that PV creation is delayed until a pod is scheduled to a specific node.
- parameters.storagePool specifies the name of the storage pool defined in the HPP custom resource (CR).
- Save the file and exit.
Create the StorageClass object by running the following command:
$ oc create -f storageclass_csi.yaml
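The storagePool name in the storage class must match a pool defined in the HPP custom resource. The following is a minimal sketch of such a CR, in which the backing directory /var/myvolumes is an assumption for illustration:
apiVersion: hostpathprovisioner.kubevirt.io/v1beta1
kind: HostPathProvisioner
metadata:
  name: hostpath-provisioner
spec:
  imagePullPolicy: IfNotPresent
  storagePools:
  - name: my-storage-pool
    path: "/var/myvolumes"
  workload:
    nodeSelector:
      kubernetes.io/os: linux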
Chapter 6. Updating
6.1. Updating OpenShift Virtualization
Learn how to keep OpenShift Virtualization updated and compatible with OpenShift Container Platform.
6.1.1. About updating OpenShift Virtualization
When you install OpenShift Virtualization, you select an update channel and an approval strategy. The update channel determines the version that OpenShift Virtualization is updated to. The approval strategy setting determines whether updates occur automatically or require manual approval. Both settings can impact supportability.
6.1.1.1. Recommended settings
To maintain a supportable environment, use the following settings:
- Update channel: stable
- Approval strategy: Automatic
The stable release channel and the Automatic approval strategy are recommended for most OpenShift Virtualization installations. Use other settings only if you understand the risks.
With these settings, the update process automatically starts when a new version of the Operator is available in the stable channel. This ensures that your OpenShift Virtualization and OpenShift Container Platform versions remain compatible, and that your version of OpenShift Virtualization is suitable for production environments.
Each minor version of OpenShift Virtualization is supported only if you run the corresponding OpenShift Container Platform version. For example, you must run OpenShift Virtualization 4.14 on OpenShift Container Platform 4.14.
6.1.1.2. What to expect
- The amount of time an update takes to complete depends on your network connection. Most automatic updates complete within fifteen minutes.
- Updating OpenShift Virtualization does not interrupt network connections.
- Data volumes and their associated persistent volume claims are preserved during an update.
If you have virtual machines running that use hostpath provisioner storage, they cannot be live migrated and might block an OpenShift Container Platform cluster update.
As a workaround, you can reconfigure the virtual machines so that they can be powered off automatically during a cluster update. Remove the evictionStrategy: LiveMigrate field and set the runStrategy field to Always.
6.1.1.3. How updates work
- Operator Lifecycle Manager (OLM) manages the lifecycle of the OpenShift Virtualization Operator. The Marketplace Operator, which is deployed during OpenShift Container Platform installation, makes external Operators available to your cluster.
- OLM provides z-stream and minor version updates for OpenShift Virtualization. Minor version updates become available when you update OpenShift Container Platform to the next minor version. You cannot update OpenShift Virtualization to the next minor version without first updating OpenShift Container Platform.
6.1.1.4. Changing update settings
You can change the update channel and approval strategy for your OpenShift Virtualization Operator subscription by using the web console.
Prerequisites
- You have installed the OpenShift Virtualization Operator.
- You have administrator permissions.
Procedure
- Click Operators → Installed Operators.
- Select OpenShift Virtualization from the list.
- Click the Subscription tab.
- In the Subscription details section, click the setting that you want to change. For example, to change the approval strategy from Manual to Automatic, click Manual.
- In the window that opens, select the new update channel or approval strategy.
- Click Save.
6.1.1.5. Manual approval strategy
If you use the Manual approval strategy, you must manually approve every pending update. If OpenShift Container Platform and OpenShift Virtualization updates are out of sync, your cluster becomes unsupported. To avoid risking the supportability and functionality of your cluster, use the Automatic approval strategy.
If you must use the Manual approval strategy, maintain a supportable cluster by approving pending Operator updates as soon as they become available.
6.1.1.6. Manually approving a pending Operator update
If an installed Operator has the approval strategy in its subscription set to Manual, when new updates are released in its current update channel, the update must be manually approved before installation can begin.
Prerequisites
- An Operator previously installed using Operator Lifecycle Manager (OLM).
Procedure
- In the Administrator perspective of the OpenShift Container Platform web console, navigate to Operators → Installed Operators.
- Operators that have a pending update display a status with Upgrade available. Click the name of the Operator you want to update.
- Click the Subscription tab. Any updates requiring approval are displayed next to Upgrade status. For example, it might display 1 requires approval.
- Click 1 requires approval, then click Preview Install Plan.
- Review the resources that are listed as available for update. When satisfied, click Approve.
- Navigate back to the Operators → Installed Operators page to monitor the progress of the update. When complete, the status changes to Succeeded and Up to date.
6.1.2. RHEL 9 compatibility
OpenShift Virtualization 4.14 is based on Red Hat Enterprise Linux (RHEL) 9. You can update to OpenShift Virtualization 4.14 from a version that was based on RHEL 8 by following the standard OpenShift Virtualization update procedure. No additional steps are required.
As in previous versions, you can perform the update without disrupting running workloads. OpenShift Virtualization 4.14 supports live migration from RHEL 8 nodes to RHEL 9 nodes.
6.1.2.1. RHEL 9 machine type
All VM templates that are included with OpenShift Virtualization now use the RHEL 9 machine type by default: machineType: pc-q35-rhel9.<y>.0, where <y> is a single digit corresponding to a minor release of RHEL 9. For example, pc-q35-rhel9.2.0 corresponds to RHEL 9.2.
Updating OpenShift Virtualization does not change the machineType value of any existing VMs. These VMs continue to function as they did before the update.
Before you change a VM's machineType value, you must shut down the VM.
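If you want to confirm which machine type a VM currently uses before planning a change, you can read it from the VM spec. A minimal sketch:
$ oc get vm <vm_name> -o jsonpath='{.spec.template.spec.domain.machine.type}'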
6.1.3. Monitoring update status
To monitor the status of an OpenShift Virtualization Operator update, watch the cluster service version (CSV) PHASE field. The PHASE value is an approximation that is based on available information.
Prerequisites
- Log in to the cluster as a user with the cluster-admin role.
- Install the OpenShift CLI (oc).
Procedure
Run the following command:
$ oc get csv -n openshift-cnv
Review the output, checking the PHASE field. For example:
Example output
VERSION  REPLACES                                   PHASE
4.9.0    kubevirt-hyperconverged-operator.v4.8.2    Installing
4.9.0    kubevirt-hyperconverged-operator.v4.9.0    Replacing
Optional: Monitor the aggregated status of all OpenShift Virtualization component conditions by running the following command:
$ oc get hyperconverged kubevirt-hyperconverged -n openshift-cnv \
  -o=jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.status}{"\t"}{.message}{"\n"}{end}'
A successful upgrade results in the following output:
Example output
ReconcileComplete  True   Reconcile completed successfully
Available          True   Reconcile completed successfully
Progressing        False  Reconcile completed successfully
Degraded           False  Reconcile completed successfully
Upgradeable        True   Reconcile completed successfully
6.1.4. VM workload updates
When you update OpenShift Virtualization, virtual machine workloads, including libvirt, virt-launcher, and qemu, update automatically if they support live migration.
Each virtual machine has a virt-launcher pod that runs the virtual machine instance (VMI). The virt-launcher pod runs an instance of libvirt, which is used to manage the virtual machine (VM) process.
You can configure how workloads are updated by editing the spec.workloadUpdateStrategy stanza of the HyperConverged custom resource (CR). There are two available workload update methods: LiveMigrate and Evict.
Because the Evict method shuts down VMI pods, only the LiveMigrate update strategy is enabled by default.
When LiveMigrate is the only update strategy enabled:
- VMIs that support live migration are migrated during the update process. The VM guest moves into a new pod with the updated components enabled.
- VMIs that do not support live migration are not disrupted or updated. If a VMI has the LiveMigrate eviction strategy but does not support live migration, it is not updated.
If you enable both LiveMigrate and Evict:
- VMIs that support live migration use the LiveMigrate update strategy.
- VMIs that do not support live migration use the Evict update strategy. If a VMI is controlled by a VirtualMachine object that has runStrategy: Always set, a new VMI is created in a new pod with updated components.
Migration attempts and timeouts
When updating workloads, live migration fails if a pod is in the Pending state for the following periods:
- 5 minutes: If the pod is pending because it is Unschedulable.
- 15 minutes: If the pod is stuck in the pending state for any reason.
When a VMI fails to migrate, the virt-controller tries to migrate it again. It repeats this process until all migratable VMIs are running on new virt-launcher pods.
Each attempt corresponds to a migration object. Only the five most recent attempts are held in a buffer. This prevents migration objects from accumulating on the system while retaining information for debugging.
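To inspect those migration attempts, you can list the VirtualMachineInstanceMigration objects in the VM's namespace. A minimal sketch:
$ oc get vmim -n <namespace>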
6.1.4.1. Configuring workload update methods
You can configure workload update methods by editing the HyperConverged custom resource (CR).
Prerequisites
- To use live migration as an update method, you must first enable live migration in the cluster.
  Note: If a VirtualMachineInstance CR contains evictionStrategy: LiveMigrate and the virtual machine instance (VMI) does not support live migration, the VMI will not update.
- You have installed the OpenShift CLI (oc).
Procedure
To open the HyperConverged CR in your default editor, run the following command:
$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
Edit the workloadUpdateStrategy stanza of the HyperConverged CR. For example:
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
spec:
  workloadUpdateStrategy:
    workloadUpdateMethods:
    - LiveMigrate
    - Evict
    batchEvictionSize: 10
    batchEvictionInterval: "1m0s"
# ...
spec.workloadUpdateStrategy.workloadUpdateMethodsandLiveMigrate. If you enable both options as shown in this example, updates useEvictfor VMIs that support live migration andLiveMigratefor any VMIs that do not support live migration. To disable automatic workload updates, you can either remove theEvictstanza or setworkloadUpdateStrategyto leave the array empty.workloadUpdateMethods: []-
is the least disruptive update method. VMIs that support live migration are updated by migrating the virtual machine (VM) guest into a new pod with the updated components enabled. If
LiveMigrateis the only workload update method listed, VMIs that do not support live migration are not disrupted or updated.LiveMigrate -
is a disruptive method that shuts down VMI pods during upgrade.
Evictis the only update method available if live migration is not enabled in the cluster. If a VMI is controlled by aEvictobject that hasVirtualMachineconfigured, a new VMI is created in a new pod with updated components.runStrategy: Always
-
-
defines the number of VMIs that can be forced to be updated at a time by using the
spec.workloadUpdateStrategy.batchEvictionSizemethod. This does not apply to theEvictmethod.LiveMigrate - defines the interval to wait before evicting the next batch of workloads. This does not apply to the
spec.workloadUpdateStrategy.batchEvictionIntervalmethod.LiveMigrateNoteYou can configure live migration limits and timeouts by editing the
stanza of thespec.liveMigrationConfigCR.HyperConverged
- To apply your changes, save and exit the editor.
6.1.4.2. Viewing outdated VM workloads
You can view a list of outdated virtual machine (VM) workloads by using the CLI.
If there are outdated virtualization pods in your cluster, the OutdatedVirtualMachineInstanceWorkloads alert fires.
Procedure
To view a list of outdated virtual machine instances (VMIs), run the following command:
$ oc get vmi -l kubevirt.io/outdatedLauncherImage --all-namespaces
6.1.5. Control Plane Only updates
Every even-numbered minor version of OpenShift Container Platform is an Extended Update Support (EUS) version. However, Kubernetes design mandates serial minor version updates, so you cannot directly update from one EUS version to the next. An EUS-to-EUS update starts with updating OpenShift Virtualization to the latest z-stream of the next odd-numbered minor version. Next, update OpenShift Container Platform to the target EUS version. When the OpenShift Container Platform update succeeds, the corresponding update for OpenShift Virtualization becomes available. You can now update OpenShift Virtualization to the target EUS version.
You can directly update OpenShift Virtualization to the latest z-stream release of your current minor version without applying each intermediate z-stream update.
For more information about EUS versions, see the Red Hat OpenShift Container Platform Life Cycle Policy.
6.1.5.1. Prerequisites
Before beginning a Control Plane Only update, you must:
- Pause worker nodes' machine config pools before you start a Control Plane Only update so that the workers are not rebooted twice.
- Disable automatic workload updates before you begin the update process. This is to prevent OpenShift Virtualization from migrating or evicting your virtual machines (VMs) until you update to your target EUS version.
By default, OpenShift Virtualization automatically updates workloads, such as the virt-launcher pods, when you update the OpenShift Virtualization Operator. You can configure this behavior in the spec.workloadUpdateStrategy stanza of the HyperConverged custom resource.
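Pausing the worker machine config pools, as described in the first prerequisite, is done on the MachineConfigPool resource. A minimal sketch, assuming a pool named worker:
$ oc patch machineconfigpool worker --type merge -p '{"spec":{"paused":true}}'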
6.1.5.2. Preventing workload updates during a Control Plane Only update
When you update from one Extended Update Support (EUS) version to the next, you must manually disable automatic workload updates to prevent OpenShift Virtualization from migrating or evicting workloads during the update process.
Prerequisites
- You are running an EUS version of OpenShift Container Platform and want to update to the next EUS version. You have not yet updated to the odd-numbered version in between.
- You read "Preparing to perform a Control Plane Only update" and learned the caveats and requirements that pertain to your OpenShift Container Platform cluster.
- You paused the worker nodes' machine config pools as directed by the OpenShift Container Platform documentation.
- It is recommended that you use the default Automatic approval strategy. If you use the Manual approval strategy, you must approve all pending updates in the web console. For more details, refer to the "Manually approving a pending Operator update" section.
Procedure
Run the following command and record the workloadUpdateMethods configuration:
$ oc get kv kubevirt-kubevirt-hyperconverged \
  -n openshift-cnv -o jsonpath='{.spec.workloadUpdateStrategy.workloadUpdateMethods}'
Turn off all workload update methods by running the following command:
$ oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv \
  --type json -p '[{"op":"replace","path":"/spec/workloadUpdateStrategy/workloadUpdateMethods", "value":[]}]'
Example output
hyperconverged.hco.kubevirt.io/kubevirt-hyperconverged patched
Ensure that the HyperConverged Operator is Upgradeable before you continue. Enter the following command and monitor the output:
$ oc get hyperconverged kubevirt-hyperconverged -n openshift-cnv -o json | jq ".status.conditions"
Example 6.1. Example output
[ { "lastTransitionTime": "2022-12-09T16:29:11Z", "message": "Reconcile completed successfully", "observedGeneration": 3, "reason": "ReconcileCompleted", "status": "True", "type": "ReconcileComplete" }, { "lastTransitionTime": "2022-12-09T20:30:10Z", "message": "Reconcile completed successfully", "observedGeneration": 3, "reason": "ReconcileCompleted", "status": "True", "type": "Available" }, { "lastTransitionTime": "2022-12-09T20:30:10Z", "message": "Reconcile completed successfully", "observedGeneration": 3, "reason": "ReconcileCompleted", "status": "False", "type": "Progressing" }, { "lastTransitionTime": "2022-12-09T16:39:11Z", "message": "Reconcile completed successfully", "observedGeneration": 3, "reason": "ReconcileCompleted", "status": "False", "type": "Degraded" }, { "lastTransitionTime": "2022-12-09T20:30:10Z", "message": "Reconcile completed successfully", "observedGeneration": 3, "reason": "ReconcileCompleted", "status": "True", "type": "Upgradeable" } ]The OpenShift Virtualization Operator has the
status.UpgradeableManually update your cluster from the source EUS version to the next minor version of OpenShift Container Platform:
$ oc adm upgradeVerification
Check the current version by running the following command:
$ oc get clusterversion
Note: Updating OpenShift Container Platform to the next version is a prerequisite for updating OpenShift Virtualization. For more details, refer to the "Updating clusters" section of the OpenShift Container Platform documentation.
Update OpenShift Virtualization.
- With the default Automatic approval strategy, OpenShift Virtualization automatically updates to the corresponding version after you update OpenShift Container Platform.
- If you use the Manual approval strategy, approve the pending updates by using the web console.
Monitor the OpenShift Virtualization update by running the following command:
$ oc get csv -n openshift-cnv
Confirm that OpenShift Virtualization successfully updated to the latest z-stream release of the non-EUS version by running the following command:
$ oc get hyperconverged kubevirt-hyperconverged -n openshift-cnv -o json | jq ".status.versions"
Example output
[
  {
    "name": "operator",
    "version": "4.14.17"
  }
]
Wait until the HyperConverged Operator has the Upgradeable status before you perform the next update. Enter the following command and monitor the output:
$ oc get hyperconverged kubevirt-hyperconverged -n openshift-cnv -o json | jq ".status.conditions"
- Update OpenShift Container Platform to the target EUS version.
Confirm that the update succeeded by checking the cluster version:
$ oc get clusterversion
Update OpenShift Virtualization to the target EUS version.
- With the default Automatic approval strategy, OpenShift Virtualization automatically updates to the corresponding version after you update OpenShift Container Platform.
- If you use the Manual approval strategy, approve the pending updates by using the web console.
Monitor the OpenShift Virtualization update by running the following command:
$ oc get csv -n openshift-cnv
The update completes when the VERSION field matches the target EUS version and the PHASE field reads Succeeded.
Restore the workloadUpdateMethods configuration that you recorded from step 1 with the following command:
$ oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type json -p \
  "[{\"op\":\"add\",\"path\":\"/spec/workloadUpdateStrategy/workloadUpdateMethods\", \"value\":{WorkloadUpdateMethodConfig}}]"
Example output
hyperconverged.hco.kubevirt.io/kubevirt-hyperconverged patched
Verification
Check the status of VM migration by running the following command:
$ oc get vmim -A
Next steps
- You can now unpause the worker nodes' machine config pools.
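Unpausing the worker machine config pools mirrors the earlier pause step. A minimal sketch, assuming a pool named worker:
$ oc patch machineconfigpool worker --type merge -p '{"spec":{"paused":false}}'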
Chapter 7. Virtual machines
7.1. Creating VMs from Red Hat images
7.1.1. Creating virtual machines from Red Hat images overview
Red Hat images are golden images. They are published as container disks in a secure registry. The Containerized Data Importer (CDI) polls and imports the container disks into your cluster and stores them in the openshift-virtualization-os-images project.
Red Hat images are automatically updated. You can disable and re-enable automatic updates for these images. See Managing Red Hat boot source updates.
Cluster administrators can enable automatic subscription for Red Hat Enterprise Linux (RHEL) virtual machines in the OpenShift Virtualization web console.
You can create virtual machines (VMs) from operating system images provided by Red Hat by using one of the following methods:
Do not create VMs in the default openshift-* namespaces. Instead, create a new namespace or use an existing namespace without the openshift prefix.
7.1.1.1. About golden images
A golden image is a preconfigured snapshot of a virtual machine (VM) that you can use as a resource to deploy new VMs. For example, you can use golden images to provision the same system environment consistently and deploy systems more quickly and efficiently.
7.1.1.1.1. How do golden images work?
Golden images are created by installing and configuring an operating system and software applications on a reference machine or virtual machine. This includes setting up the system, installing required drivers, applying patches and updates, and configuring specific options and preferences.
After the golden image is created, it is saved as a template or image file that can be replicated and deployed across multiple clusters. The golden image can be updated by its maintainer periodically to incorporate necessary software updates and patches, ensuring that the image remains up to date and secure, and newly created VMs are based on this updated image.
7.1.1.1.2. Red Hat implementation of golden images
Red Hat publishes golden images as container disks in the registry for versions of Red Hat Enterprise Linux (RHEL). Container disks are virtual machine images that are stored as a container image in a container image registry. Any published image will automatically be made available in connected clusters after the installation of OpenShift Virtualization. After the images are available in a cluster, they are ready to use to create VMs.
7.1.1.2. About VM boot sources
Virtual machines (VMs) consist of a VM definition and one or more disks that are backed by data volumes. VM templates enable you to create VMs using predefined specifications.
Every template requires a boot source, which is a fully configured disk image including configured drivers. Each template contains a VM definition with a pointer to the boot source. Each boot source has a predefined name and namespace. For some operating systems, a boot source is automatically provided. If it is not provided, then an administrator must prepare a custom boot source.
Provided boot sources are updated automatically to the latest version of the operating system. For auto-updated boot sources, persistent volume claims (PVCs) and volume snapshots are created with the cluster’s default storage class. If you select a different default storage class after configuration, you must delete the existing boot sources in the cluster namespace that are configured with the previous default storage class.
7.1.2. Creating virtual machines from templates
You can create virtual machines (VMs) from Red Hat templates by using the OpenShift Container Platform web console.
7.1.2.1. About VM templates
You can use VM templates to help you easily create VMs.
- Expedite creation with boot sources
You can expedite VM creation by using templates that have an available boot source. Templates with a boot source are labeled Available boot source if they do not have a custom label.
Templates without a boot source are labeled Boot source required. See Managing automatic boot source updates for details.
- Customize before starting the VM
You can customize the disk source and VM parameters before you start the VM.
Note: If you copy a VM template with all its labels and annotations, your version of the template is marked as deprecated when a new version of the Scheduling, Scale, and Performance (SSP) Operator is deployed. You can remove this designation. See Customizing a VM template by using the web console.
- Single-node OpenShift
Due to differences in storage behavior, some templates are incompatible with single-node OpenShift. To ensure compatibility, do not set the evictionStrategy field for templates or VMs that use data volumes or storage profiles.
7.1.2.2. Creating a VM from a template
You can create a virtual machine (VM) from a template with an available boot source by using the OpenShift Container Platform web console.
Optional: You can customize template or VM parameters, such as data sources, cloud-init, or SSH keys, before you start the VM.
Procedure
- Navigate to Virtualization → Catalog in the web console.
Click Boot source available to filter templates with boot sources.
The catalog displays the default templates. Click All Items to view all available templates for your filters.
- Click a template tile to view its details.
Click Quick create VirtualMachine to create a VM from the template.
Optional: Customize the template or VM parameters:
- Click Customize VirtualMachine.
- Expand Storage or Optional parameters to edit data source settings.
Click Customize VirtualMachine parameters.
The Customize and create VirtualMachine pane displays the Overview, YAML, Scheduling, Environment, Network interfaces, Disks, Scripts, and Metadata tabs.
- Edit the parameters that must be set before the VM boots, such as cloud-init or a static SSH key.
Click Create VirtualMachine.
The VirtualMachine details page displays the provisioning status.
7.1.2.3. Customizing a VM template by using the web console
You can customize an existing virtual machine (VM) template by modifying the VM or template parameters, such as data sources, cloud-init, or SSH keys, before you start the VM. If you customize a template by copying it and including all of its labels and annotations, the customized template is marked as deprecated when a new version of the Scheduling, Scale, and Performance (SSP) Operator is deployed.
You can remove the deprecated designation from the customized template.
Procedure
- Navigate to Virtualization → Templates in the web console.
- From the list of VM templates, click the template marked as deprecated.
- Click Edit next to the pencil icon beside Labels.
Remove the following two labels:
- template.kubevirt.io/type: "base"
- template.kubevirt.io/version: "version"
- Click Save.
- Click the pencil icon beside the number of existing Annotations.
Remove the following annotation:
- template.kubevirt.io/deprecated
- Click Save.
7.1.2.3.1. Creating a custom VM template in the web console
You create a virtual machine template by editing a YAML file example in the OpenShift Container Platform web console.
Procedure
- In the web console, click Virtualization → Templates in the side menu.
- Optional: Use the Project drop-down menu to change the project associated with the new template. All templates are saved to the openshift project by default.
- Click Create Template.
- Specify the template parameters by editing the YAML file.
Click Create.
The template is displayed on the Templates page.
- Optional: Click Download to download and save the YAML file.
7.1.2.3.2. Enabling dedicated resources for a virtual machine template
You can enable dedicated resources for a virtual machine (VM) template in the OpenShift Container Platform web console. VMs that are created from this template will be scheduled with dedicated resources.
Procedure
- In the OpenShift Container Platform web console, click Virtualization → Templates in the side menu.
- Select the template that you want to edit to open the Template details page.
- On the Scheduling tab, click the edit icon beside Dedicated Resources.
- Select Schedule this workload with dedicated resources (guaranteed policy).
- Click Save.
7.1.3. Creating virtual machines from instance types
You can create virtual machines (VMs) from instance types by using the OpenShift Container Platform web console.
Creating a VM from an instance type is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
7.1.3.1. Creating a VM from an instance type
You can create a virtual machine (VM) from an instance type by using the OpenShift Container Platform web console.
Procedure
- In the web console, navigate to Virtualization → Catalog and click the InstanceTypes tab.
Select a bootable volume.
Note: The volume table only lists volumes in the openshift-virtualization-os-images namespace that have the instancetype.kubevirt.io/default-preference label.
- If you have not already added a public SSH key to your project, click the edit icon beside Authorized SSH key in the VirtualMachine details section.
Select one of the following options:
- Use existing: Select a secret from the secrets list.
Add new:
- Browse to the public SSH key file or paste the file in the key field.
- Enter the secret name.
- Optional: Select Automatically apply this key to any new VirtualMachine you create in this project.
- Click Save.
- Optional: Click View YAML & CLI to view the YAML file. Click CLI to view the CLI commands. You can also download or copy either the YAML file contents or the CLI commands.
- Click Create VirtualMachine.
After the VM is created, you can monitor the status on the VirtualMachine details page.
7.1.4. Creating virtual machines from the command line
You can create virtual machines (VMs) from the command line by editing or creating a VirtualMachine manifest.
7.1.4.1. Creating a VM from a VirtualMachine manifest
You can create a virtual machine (VM) from a VirtualMachine manifest.
Procedure
Edit the VirtualMachine manifest for your VM. The following example configures a Red Hat Enterprise Linux (RHEL) VM:
Example 7.1. Example manifest for a RHEL VM
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  labels:
    app: <vm_name>
  name: <vm_name>
spec:
  dataVolumeTemplates:
  - apiVersion: cdi.kubevirt.io/v1beta1
    kind: DataVolume
    metadata:
      name: <vm_name>
    spec:
      sourceRef:
        kind: DataSource
        name: rhel9
        namespace: openshift-virtualization-os-images
      storage:
        resources:
          requests:
            storage: 30Gi
  running: false
  template:
    metadata:
      labels:
        kubevirt.io/domain: <vm_name>
    spec:
      domain:
        cpu:
          cores: 1
          sockets: 2
          threads: 1
        devices:
          disks:
          - disk:
              bus: virtio
            name: rootdisk
          - disk:
              bus: virtio
            name: cloudinitdisk
          interfaces:
          - masquerade: {}
            name: default
          rng: {}
        features:
          smm:
            enabled: true
        firmware:
          bootloader:
            efi: {}
        resources:
          requests:
            memory: 8Gi
      evictionStrategy: LiveMigrate
      networks:
      - name: default
        pod: {}
      volumes:
      - dataVolume:
          name: <vm_name>
        name: rootdisk
      - cloudInitNoCloud:
          userData: |-
            #cloud-config
            user: cloud-user
            password: '<password>'
            chpasswd: { expire: False }
        name: cloudinitdisk
$ oc create -f <vm_manifest_file>.yaml
Optional: Start the virtual machine:
$ virtctl start <vm_name> -n <namespace>
7.2. Creating VMs from custom images
7.2.1. Creating virtual machines from custom images overview
You can create virtual machines (VMs) from custom operating system images by using one of the following methods:
Importing the image as a container disk from a registry.
Optional: You can enable auto updates for your container disks. See Managing automatic boot source updates for details.
- Importing the image from a web page.
- Uploading the image from a local machine.
- Cloning a persistent volume claim (PVC) that contains the image.
The Containerized Data Importer (CDI) imports the image into a PVC by using a data volume. You add the PVC to the VM by using the OpenShift Container Platform web console or command line.
You must install the QEMU guest agent on VMs created from operating system images that are not provided by Red Hat.
You must also install VirtIO drivers on Windows VMs.
The QEMU guest agent is included with Red Hat images.
7.2.2. Creating VMs by using container disks
You can create virtual machines (VMs) by using container disks built from operating system images.
You can enable auto updates for your container disks. See Managing automatic boot source updates for details.
If the container disks are large, the I/O traffic might increase and cause worker nodes to be unavailable. You can perform the following tasks to resolve this issue:
You create a VM from a container disk by performing the following steps:
- Build an operating system image into a container disk and upload it to your container registry.
- If your container registry does not have TLS, configure your environment to disable TLS for your registry.
- Create a VM with the container disk as the disk source by using the web console or the command line.
You must install the QEMU guest agent on VMs created from operating system images that are not provided by Red Hat.
7.2.2.1. Building and uploading a container disk
You can build a virtual machine (VM) image into a container disk and upload it to a registry.
The size of a container disk is limited by the maximum layer size of the registry where the container disk is hosted.
For Red Hat Quay, you can change the maximum layer size by editing the YAML configuration file that is created when Red Hat Quay is first deployed.
Prerequisites
- You must have podman installed.
- You must have a QCOW2 or RAW image file.
Procedure
Create a Dockerfile to build the VM image into a container image. The VM image must be owned by QEMU, which has a UID of 107, and placed in the /disk/ directory inside the container. Permissions for the /disk/ directory must then be set to 0440.
The following example uses the Red Hat Universal Base Image (UBI) to handle these configuration changes in the first stage, and uses the minimal scratch image in the second stage to store the result:
$ cat > Dockerfile << EOF
FROM registry.access.redhat.com/ubi8/ubi:latest AS builder
ADD --chown=107:107 <vm_image>.qcow2 /disk/
RUN chmod 0440 /disk/*

FROM scratch
COPY --from=builder /disk/* /disk/
EOF
where:
<vm_image> specifies the image in either QCOW2 or RAW format. If you use a remote image, replace <vm_image>.qcow2 with the complete URL.
Build and tag the container:
$ podman build -t <registry>/<container_disk_name>:latest .
Push the container image to the registry:
$ podman push <registry>/<container_disk_name>:latest
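Once pushed, the image can be referenced directly as a containerDisk volume in a VM definition. The following is a minimal sketch of the relevant fragment of a VirtualMachine template spec; the disk and volume names are assumptions for illustration:
spec:
  template:
    spec:
      domain:
        devices:
          disks:
          - disk:
              bus: virtio
            name: containerdisk
      volumes:
      - containerDisk:
          image: <registry>/<container_disk_name>:latest
        name: containerdisk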
7.2.2.2. Disabling TLS for a container registry
You can disable TLS (transport layer security) for one or more container registries by editing the insecureRegistries field of the HyperConverged custom resource.
Prerequisites
Open the HyperConverged CR in your default editor by running the following command:
$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
Add a list of insecure registries to the spec.storageImport.insecureRegistries field.
Example HyperConverged custom resource
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  storageImport:
    insecureRegistries:
    - "private-registry-example-1:5000"
    - "private-registry-example-2:5000"
- Replace the examples in this list with valid registry hostnames.
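To confirm that the setting was applied, you can read the field back with a JSONPath query. This is a convenience check, not a required step:
$ oc get hyperconverged kubevirt-hyperconverged -n openshift-cnv \
  -o jsonpath='{.spec.storageImport.insecureRegistries}'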
7.2.2.3. Creating a VM from a container disk by using the web console
You can create a virtual machine (VM) by importing a container disk from a container registry by using the OpenShift Container Platform web console.
Procedure
- Navigate to Virtualization → Catalog in the web console.
- Click a template tile without an available boot source.
- Click Customize VirtualMachine.
- On the Customize template parameters page, expand Storage and select Registry (creates PVC) from the Disk source list.
-
Enter the container image URL. Example: https://mirror.arizona.edu/fedora/linux/releases/38/Cloud/x86_64/images/Fedora-Cloud-Base-38-1.6.x86_64.qcow2
- Set the disk size.
- Click Next.
- Click Create VirtualMachine.
7.2.2.4. Creating a VM from a container disk by using the command line
You can create a virtual machine (VM) from a container disk by using the command line.
When the virtual machine (VM) is created, the data volume with the container disk is imported into persistent storage.
Prerequisites
- You must have access credentials for the container registry that contains the container disk.
Procedure
If the container registry requires authentication, create a Secret manifest, specifying the credentials, and save it as a data-source-secret.yaml file:
apiVersion: v1
kind: Secret
metadata:
  name: data-source-secret
  labels:
    app: containerized-data-importer
type: Opaque
data:
  accessKeyId: "" 1
  secretKey: "" 2
Apply the Secret manifest by running the following command:
$ oc apply -f data-source-secret.yaml
If the VM must communicate with servers that use self-signed certificates or certificates that are not signed by the system CA bundle, create a config map in the same namespace as the VM:
$ oc create configmap tls-certs --from-file=</path/to/file/ca.pem>
Edit the VirtualMachine manifest and save it as a vm-fedora-datavolume.yaml file:
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  creationTimestamp: null
  labels:
    kubevirt.io/vm: vm-fedora-datavolume
  name: vm-fedora-datavolume 1
spec:
  dataVolumeTemplates:
  - metadata:
      creationTimestamp: null
      name: fedora-dv 2
    spec:
      storage:
        resources:
          requests:
            storage: 10Gi 3
        storageClassName: <storage_class> 4
      source:
        registry:
          url: "docker://kubevirt/fedora-cloud-container-disk-demo:latest" 5
          secretRef: data-source-secret 6
          certConfigMap: tls-certs 7
    status: {}
  running: true
  template:
    metadata:
      creationTimestamp: null
      labels:
        kubevirt.io/vm: vm-fedora-datavolume
    spec:
      domain:
        devices:
          disks:
          - disk:
              bus: virtio
            name: datavolumedisk1
        machine:
          type: ""
        resources:
          requests:
            memory: 1.5Gi
      terminationGracePeriodSeconds: 180
      volumes:
      - dataVolume:
          name: fedora-dv
        name: datavolumedisk1
status: {}
- 1
- Specify the name of the VM.
- 2
- Specify the name of the data volume.
- 3
- Specify the size of the storage requested for the data volume.
- 4
- Optional: If you do not specify a storage class, the default storage class is used.
- 5
- Specify the URL of the container registry.
- 6
- Optional: Specify the secret name if you created a secret for the container registry access credentials.
- 7
- Optional: Specify a CA certificate config map.
Create the VM by running the following command:
$ oc create -f vm-fedora-datavolume.yaml
The oc create command creates the data volume and the VM. The CDI controller creates an underlying PVC with the correct annotation, and the import process begins. When the import is complete, the data volume status changes to Succeeded. You can start the VM.
Data volume provisioning happens in the background, so there is no need to monitor the process.
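If you prefer to block until the import finishes instead of polling, you can wait on the data volume's Ready condition. This is an optional convenience; it assumes the default CDI-managed conditions, and the 10-minute timeout is chosen only for illustration:
$ oc wait dv fedora-dv --for=condition=Ready --timeout=10m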
Verification
The importer pod downloads the container disk from the specified URL and stores it on the provisioned persistent volume. View the status of the importer pod by running the following command:
$ oc get pods
Monitor the data volume until its status is Succeeded by running the following command:
$ oc describe dv fedora-dv 1
- 1
- Specify the data volume name that you defined in the VirtualMachine manifest.
Verify that provisioning is complete and that the VM has started by accessing its serial console:
$ virtctl console vm-fedora-datavolume
7.2.3. Creating VMs by importing images from web pages
You can create virtual machines (VMs) by importing operating system images from web pages.
You must install the QEMU guest agent on VMs created from operating system images that are not provided by Red Hat.
7.2.3.1. Creating a VM from an image on a web page by using the web console
You can create a virtual machine (VM) by importing an image from a web page by using the OpenShift Container Platform web console.
Prerequisites
- You must have access to the web page that contains the image.
Procedure
- Navigate to Virtualization → Catalog in the web console.
- Click a template tile without an available boot source.
- Click Customize VirtualMachine.
- On the Customize template parameters page, expand Storage and select URL (creates PVC) from the Disk source list.
-
Enter the image URL. Examples: https://access.redhat.com/downloads/content/69/ver=/rhel---7/7.9/x86_64/product-software or https://mirror.arizona.edu/fedora/linux/releases/38/Cloud/x86_64/images/Fedora-Cloud-Base-38-1.6.x86_64.qcow2
- Set the disk size.
- Click Next.
- Click Create VirtualMachine.
7.2.3.2. Creating a VM from an image on a web page by using the command line
You can create a virtual machine (VM) from an image on a web page by using the command line.
When the virtual machine (VM) is created, the data volume with the image is imported into persistent storage.
Prerequisites
- You must have access credentials for the web page that contains the image.
Procedure
If the web page requires authentication, create a Secret manifest, specifying the credentials, and save it as a data-source-secret.yaml file:
apiVersion: v1
kind: Secret
metadata:
  name: data-source-secret
  labels:
    app: containerized-data-importer
type: Opaque
data:
  accessKeyId: "" 1
  secretKey: "" 2
Apply the Secret manifest by running the following command:
$ oc apply -f data-source-secret.yaml
If the VM must communicate with servers that use self-signed certificates or certificates that are not signed by the system CA bundle, create a config map in the same namespace as the VM:
$ oc create configmap tls-certs --from-file=</path/to/file/ca.pem>
Edit the VirtualMachine manifest and save it as a vm-fedora-datavolume.yaml file:
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  creationTimestamp: null
  labels:
    kubevirt.io/vm: vm-fedora-datavolume
  name: vm-fedora-datavolume 1
spec:
  dataVolumeTemplates:
  - metadata:
      creationTimestamp: null
      name: fedora-dv 2
    spec:
      storage:
        resources:
          requests:
            storage: 10Gi 3
        storageClassName: <storage_class> 4
      source:
        http:
          url: "https://mirror.arizona.edu/fedora/linux/releases/35/Cloud/x86_64/images/Fedora-Cloud-Base-35-1.2.x86_64.qcow2" 5
        registry:
          url: "docker://kubevirt/fedora-cloud-container-disk-demo:latest" 6
        secretRef: data-source-secret 7
        certConfigMap: tls-certs 8
    status: {}
  running: true
  template:
    metadata:
      creationTimestamp: null
      labels:
        kubevirt.io/vm: vm-fedora-datavolume
    spec:
      domain:
        devices:
          disks:
          - disk:
              bus: virtio
            name: datavolumedisk1
        machine:
          type: ""
        resources:
          requests:
            memory: 1.5Gi
      terminationGracePeriodSeconds: 180
      volumes:
      - dataVolume:
          name: fedora-dv
        name: datavolumedisk1
status: {}
- 1
- Specify the name of the VM.
- 2
- Specify the name of the data volume.
- 3
- Specify the size of the storage requested for the data volume.
- 4
- Optional: If you do not specify a storage class, the default storage class is used.
- 5 6
- Specify the URL of the web page.
- 7
- Optional: Specify the secret name if you created a secret for the web page access credentials.
- 8
- Optional: Specify a CA certificate config map.
Create the VM by running the following command:
$ oc create -f vm-fedora-datavolume.yaml
The oc create command creates the data volume and the VM. The CDI controller creates an underlying PVC with the correct annotation, and the import process begins. When the import is complete, the data volume status changes to Succeeded. You can start the VM.
Data volume provisioning happens in the background, so there is no need to monitor the process.
Verification
The importer pod downloads the image from the specified URL and stores it on the provisioned persistent volume. View the status of the importer pod by running the following command:
$ oc get pods
Monitor the data volume until its status is Succeeded by running the following command:
$ oc describe dv fedora-dv 1
- 1
- Specify the data volume name that you defined in the VirtualMachine manifest.
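If the import appears to stall, you can follow the importer pod logs. CDI importer pods are typically named importer- followed by the data volume name, so for this example the command would look like the following; the exact pod name on your cluster may differ:
$ oc logs -f importer-fedora-dv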
Verify that provisioning is complete and that the VM has started by accessing its serial console:
$ virtctl console vm-fedora-datavolume
7.2.4. Creating VMs by uploading images
You can create virtual machines (VMs) by uploading operating system images from your local machine.
You can create a Windows VM by uploading a Windows image to a PVC. Then you clone the PVC when you create the VM.
You must install the QEMU guest agent on VMs created from operating system images that are not provided by Red Hat.
You must also install VirtIO drivers on Windows VMs.
7.2.4.1. Creating a VM from an uploaded image by using the web console
You can create a virtual machine (VM) from an uploaded operating system image by using the OpenShift Container Platform web console.
Prerequisites
-
You must have an IMG, ISO, or QCOW2 image file.
Procedure
- Navigate to Virtualization → Catalog in the web console.
- Click a template tile without an available boot source.
- Click Customize VirtualMachine.
- On the Customize template parameters page, expand Storage and select Upload (Upload a new file to a PVC) from the Disk source list.
- Browse to the image on your local machine and set the disk size.
- Click Customize VirtualMachine.
- Click Create VirtualMachine.
7.2.4.2. Creating a Windows VM
You can create a Windows virtual machine (VM) by uploading a Windows image to a persistent volume claim (PVC) and then cloning the PVC when you create a VM by using the OpenShift Container Platform web console.
Prerequisites
- You created a Windows installation DVD or USB with the Windows Media Creation Tool. See Create Windows 10 installation media in the Microsoft documentation.
-
You created an autounattend.xml answer file. See Answer files (unattend.xml) in the Microsoft documentation.
Procedure
Upload the Windows image as a new PVC:
- Navigate to Storage → PersistentVolumeClaims in the web console.
- Click Create PersistentVolumeClaim → With Data upload form.
- Browse to the Windows image and select it.
Enter the PVC name, select the storage class and size and then click Upload.
The Windows image is uploaded to a PVC.
Configure a new VM by cloning the uploaded PVC:
- Navigate to Virtualization → Catalog.
- Select a Windows template tile and click Customize VirtualMachine.
- Select Clone (clone PVC) from the Disk source list.
- Select the PVC project, the Windows image PVC, and the disk size.
Apply the answer file to the VM:
- Click Customize VirtualMachine parameters.
- On the Sysprep section of the Scripts tab, click Edit.
-
Browse to the autounattend.xml answer file and click Save.
Set the run strategy of the VM:
- Clear Start this VirtualMachine after creation so that the VM does not start immediately.
- Click Create VirtualMachine.
-
On the YAML tab, replace running: false with runStrategy: RerunOnFailure and click Save.
Click the options menu and select Start.
The VM boots from the sysprep disk containing the autounattend.xml answer file.
7.2.4.2.1. Generalizing a Windows VM image
You can generalize a Windows operating system image to remove all system-specific configuration data before you use the image to create a new virtual machine (VM).
Before generalizing the VM, you must ensure that the sysprep tool cannot detect an answer file after the unattended Windows installation.
Prerequisites
- A running Windows VM with the QEMU guest agent installed.
Procedure
- In the OpenShift Container Platform console, click Virtualization → VirtualMachines.
- Select a Windows VM to open the VirtualMachine details page.
- Click Configuration → Disks.
-
Click the Options menu beside the sysprep disk and select Detach.
- Click Detach.
-
Rename C:\Windows\Panther\unattend.xml to avoid detection by the sysprep tool.
Start the sysprep program by running the following command:
%WINDIR%\System32\Sysprep\sysprep.exe /generalize /shutdown /oobe /mode:vm
- After the sysprep tool completes, the Windows VM shuts down. The disk image of the VM is now available to use as an installation image for Windows VMs.
You can now specialize the VM.
7.2.4.2.2. Specializing a Windows VM image
Specializing a Windows virtual machine (VM) configures the computer-specific information from a generalized Windows image onto the VM.
Prerequisites
- You must have a generalized Windows disk image.
-
You must create an unattend.xml answer file. See the Microsoft documentation for details.
Procedure
- In the OpenShift Container Platform console, click Virtualization → Catalog.
- Select a Windows template and click Customize VirtualMachine.
- Select PVC (clone PVC) from the Disk source list.
- Select the PVC project and PVC name of the generalized Windows image.
- Click Customize VirtualMachine parameters.
- Click the Scripts tab.
-
In the Sysprep section, click Edit, browse to the unattend.xml answer file, and click Save.
- Click Create VirtualMachine.
During the initial boot, Windows uses the unattend.xml answer file to specialize the VM.
7.2.4.3. Creating a VM from an uploaded image by using the command line
You can upload an operating system image by using the virtctl command-line tool.
Prerequisites
-
You must have an ISO, IMG, or QCOW2 operating system image file.
For best performance, compress the image file by using the virt-sparsify tool or the or
xzutilities.gzip -
You must have installed.
virtctl - The client machine must be configured to trust the OpenShift Container Platform router’s certificate.
Procedure
Upload the image by running the
command:virtctl image-upload$ virtctl image-upload dv <datavolume_name> \ --size=<datavolume_size> \ --image-path=</path/to/image><datavolume_name>- The name of the data volume.
<datavolume_size>-
The size of the data volume. For example:
--size=500Mi,--size=1G </path/to/image>The file path of the image.
Note-
If you do not want to create a new data volume, omit the parameter and include the
--sizeflag.--no-create - When uploading a disk image to a PVC, the PVC size must be larger than the size of the uncompressed virtual disk.
-
To allow insecure server connections when using HTTPS, use the parameter. When you use the
--insecureflag, the authenticity of the upload endpoint is not verified.--insecure
-
If you do not want to create a new data volume, omit the
Optional. To verify that a data volume was created, view all data volumes by running the following command:
$ oc get dvs
7.2.5. Creating VMs by cloning PVCs
You can create virtual machines (VMs) by cloning existing persistent volume claims (PVCs) with custom images.
You clone a PVC by creating a data volume that references a source PVC.
You must install the QEMU guest agent on VMs created from operating system images that are not provided by Red Hat.
7.2.5.1. Creating a VM from a PVC by using the web console
You can create a virtual machine (VM) by cloning a persistent volume claim (PVC) by using the OpenShift Container Platform web console.
Prerequisites
- You must have access to the namespace that contains the source PVC.
Procedure
- Navigate to Virtualization → Catalog in the web console.
- Click a template tile without an available boot source.
- Click Customize VirtualMachine.
- On the Customize template parameters page, expand Storage and select PVC (clone PVC) from the Disk source list.
-
Select the PVC project and the PVC name.
- Set the disk size.
- Click Next.
- Click Create VirtualMachine.
7.2.5.2. Creating a VM from a PVC by using the command line
You can create a virtual machine (VM) by cloning the persistent volume claim (PVC) of an existing VM by using the command line.
You can clone a PVC by using one of the following options:
Cloning a PVC to a new data volume.
This method creates a data volume whose lifecycle is independent of the original VM. Deleting the original VM does not affect the new data volume or its associated PVC.
Cloning a PVC by creating a VirtualMachine manifest with a dataVolumeTemplates stanza.
manifest with aVirtualMachinestanza.dataVolumeTemplatesThis method creates a data volume whose lifecycle is dependent on the original VM. Deleting the original VM deletes the cloned data volume and its associated PVC.
7.2.5.2.1. Optimizing clone performance at scale in OpenShift Data Foundation
When you use OpenShift Data Foundation, the storage profile configures the default cloning strategy as csi-clone.
To improve performance when creating hundreds of clones from a single source PVC, use a VolumeSnapshot as the source for the data volumes instead of the csi-clone cloning strategy.
Procedure
Create a VolumeSnapshot object that references the source PVC:
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
name: golden-volumesnapshot
namespace: golden-ns
spec:
volumeSnapshotClassName: ocs-storagecluster-rbdplugin-snapclass
source:
persistentVolumeClaimName: golden-snap-source
-
Add the spec.source.snapshot stanza to reference the VolumeSnapshot as the source for the DataVolume clone:
spec:
source:
snapshot:
namespace: golden-ns
name: golden-volumesnapshot
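Putting the two pieces together, a complete DataVolume that clones from the snapshot might look like the following. This is a sketch: the data volume name and storage size are placeholders, while the snapshot name and namespace come from the example above:
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: <clone_datavolume>
spec:
  source:
    snapshot:
      namespace: golden-ns
      name: golden-volumesnapshot
  storage:
    resources:
      requests:
        storage: 10Gi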
7.2.5.2.2. Cloning a PVC to a data volume
You can clone the persistent volume claim (PVC) of an existing virtual machine (VM) disk to a data volume by using the command line.
You create a data volume that references the original source PVC. The lifecycle of the new data volume is independent of the original VM. Deleting the original VM does not affect the new data volume or its associated PVC.
Cloning between different volume modes is supported for host-assisted cloning, such as cloning from a block persistent volume (PV) to a file system PV, as long as the source and target PVs belong to the kubevirt content type.
Smart-cloning is faster and more efficient than host-assisted cloning because it uses snapshots to clone PVCs. Smart-cloning is supported by storage providers that support snapshots, such as Red Hat OpenShift Data Foundation.
Cloning between different volume modes is not supported for smart-cloning.
Prerequisites
- The VM with the source PVC must be powered down.
- If you clone a PVC to a different namespace, you must have permissions to create resources in the target namespace.
Additional prerequisites for smart-cloning:
- Your storage provider must support snapshots.
- The source and target PVCs must have the same storage provider and volume mode.
The value of the driver key of the VolumeSnapshotClass object must match the value of the provisioner key of the StorageClass object, as shown in the following example:
Example VolumeSnapshotClass object:
kind: VolumeSnapshotClass
apiVersion: snapshot.storage.k8s.io/v1
driver: openshift-storage.rbd.csi.ceph.com
# ...
Example StorageClass object:
kind: StorageClass
apiVersion: storage.k8s.io/v1
# ...
provisioner: openshift-storage.rbd.csi.ceph.com
Procedure
Create a DataVolume manifest as shown in the following example:
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: <datavolume> 1
spec:
  source:
    pvc:
      namespace: "<source_namespace>" 2
      name: "<my_vm_disk>" 3
  storage: {}
Create the data volume by running the following command:
$ oc create -f <datavolume>.yaml
Note
Data volumes prevent a VM from starting before the PVC is prepared. You can create a VM that references the new data volume while the PVC is being cloned.
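To watch the clone progress without blocking the VM creation, you can monitor the data volume phase; the phase changes to Succeeded when the clone is complete. For example:
$ oc get dv <datavolume> -w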
7.2.5.2.3. Creating a VM from a cloned PVC by using a data volume template
You can create a virtual machine (VM) that clones the persistent volume claim (PVC) of an existing VM by using a data volume template.
This method creates a data volume whose lifecycle is dependent on the original VM. Deleting the original VM deletes the cloned data volume and its associated PVC.
Prerequisites
- The VM with the source PVC must be powered down.
Procedure
Create a VirtualMachine manifest as shown in the following example:
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  labels:
    kubevirt.io/vm: vm-dv-clone
  name: vm-dv-clone 1
spec:
  running: false
  template:
    metadata:
      labels:
        kubevirt.io/vm: vm-dv-clone
    spec:
      domain:
        devices:
          disks:
          - disk:
              bus: virtio
            name: root-disk
        resources:
          requests:
            memory: 64M
      volumes:
      - dataVolume:
          name: favorite-clone
        name: root-disk
  dataVolumeTemplates:
  - metadata:
      name: favorite-clone
    spec:
      storage:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 2Gi
      source:
        pvc:
          namespace: <source_namespace> 2
          name: "<source_pvc>" 3
Create the virtual machine with the PVC-cloned data volume:
$ oc create -f <vm-clone-datavolumetemplate>.yaml
7.2.6. Installing the QEMU guest agent and VirtIO drivers
The QEMU guest agent is a daemon that runs on the virtual machine (VM) and passes information to the host about the VM, users, file systems, and secondary networks.
You must install the QEMU guest agent on VMs created from operating system images that are not provided by Red Hat.
7.2.6.1. Installing the QEMU guest agent
7.2.6.1.1. Installing the QEMU guest agent on a Linux VM
The qemu-guest-agent is widely available and available by default in Red Hat Enterprise Linux (RHEL) virtual machines.
To create snapshots of an online (Running state) VM with the highest integrity, install the QEMU guest agent.
The QEMU guest agent takes a consistent snapshot by attempting to quiesce the VM file system as much as possible, depending on the system workload. This ensures that in-flight I/O is written to the disk before the snapshot is taken. If the guest agent is not present, quiescing is not possible and a best-effort snapshot is taken. The conditions under which the snapshot was taken are reflected in the snapshot indications that are displayed in the web console or CLI.
Procedure
- Log in to the VM by using a console or SSH.
Install the QEMU guest agent by running the following command:
$ yum install -y qemu-guest-agent
Ensure the service is persistent and start it:
$ systemctl enable --now qemu-guest-agent
Verification
Run the following command to verify that AgentConnected is listed in the VM spec:
$ oc get vm <vm_name>
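As an additional check, you can query the guest agent directly. The virtctl guestosinfo subcommand returns operating system details only when the agent is connected, so a successful response confirms the installation:
$ virtctl guestosinfo <vm_name>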
7.2.6.1.2. Installing the QEMU guest agent on a Windows VM
For Windows virtual machines (VMs), the QEMU guest agent is included in the VirtIO drivers. You can install the drivers during a Windows installation or on an existing Windows VM.
To create snapshots of an online (Running state) VM with the highest integrity, install the QEMU guest agent.
The QEMU guest agent takes a consistent snapshot by attempting to quiesce the VM file system as much as possible, depending on the system workload. This ensures that in-flight I/O is written to the disk before the snapshot is taken. If the guest agent is not present, quiescing is not possible and a best-effort snapshot is taken. The conditions under which the snapshot was taken are reflected in the snapshot indications that are displayed in the web console or CLI.
Procedure
-
In the Windows guest operating system, use the File Explorer to navigate to the guest-agent directory in the virtio-win CD drive.
Run the installer.
qemu-ga-x86_64.msi
Verification
Obtain a list of network services by running the following command:
$ net start
- Verify that the output contains the QEMU Guest Agent.
7.2.6.2. Installing VirtIO drivers on Windows VMs
VirtIO drivers are paravirtualized device drivers required for Microsoft Windows virtual machines (VMs) to run in OpenShift Virtualization. The drivers are shipped with the rest of the images and do not require a separate download.
The container-native-virtualization/virtio-win container disk must be attached to the VM as a SATA CD drive to enable driver installation. You can install VirtIO drivers during Windows installation or add them to an existing Windows installation.
After the drivers are installed, the container-native-virtualization/virtio-win container disk can be removed from the VM.
| Driver name | Hardware ID | Description |
|---|---|---|
| viostor | VEN_1AF4&DEV_1001 | The block driver. Sometimes labeled as an SCSI Controller in the Other devices group. |
| viorng | VEN_1AF4&DEV_1005 | The entropy source driver. Sometimes labeled as a PCI Device in the Other devices group. |
| NetKVM | VEN_1AF4&DEV_1000 | The network driver. Sometimes labeled as an Ethernet Controller in the Other devices group. Available only if a VirtIO NIC is configured. |
7.2.6.2.1. Attaching VirtIO container disk to Windows VMs during installation
You must attach the VirtIO container disk to the Windows VM to install the necessary Windows drivers. This can be done during creation of the VM.
Procedure
- When creating a Windows VM from a template, click Customize VirtualMachine.
- Select Mount Windows drivers disk.
- Click the Customize VirtualMachine parameters.
- Click Create VirtualMachine.
After the VM is created, the virtio-win container disk is attached to the VM as a SATA CD drive.
7.2.6.2.2. Attaching VirtIO container disk to an existing Windows VM
You must attach the VirtIO container disk to the Windows VM to install the necessary Windows drivers. This can be done to an existing VM.
Procedure
- Navigate to the existing Windows VM, and click Actions → Stop.
- Go to VM Details → Configuration → Disks and click Add disk.
-
Add windows-driver-disk from the container source, set the Type to CD-ROM, and then set the Interface to SATA.
- Click Save.
- Start the VM, and connect to a graphical console.
7.2.6.2.3. Installing VirtIO drivers during Windows installation
You can install the VirtIO drivers while installing Windows on a virtual machine (VM).
This procedure uses a generic approach to the Windows installation and the installation method might differ between versions of Windows. See the documentation for the version of Windows that you are installing.
Prerequisites
-
A storage device containing the virtio drivers must be attached to the VM.
Procedure
-
In the Windows operating system, use the File Explorer to navigate to the virtio-win CD drive.
Double-click the drive to run the appropriate installer for your VM.
For a 64-bit vCPU, select the virtio-win-gt-x64 installer. 32-bit vCPUs are no longer supported.
- Optional: During the Custom Setup step of the installer, select the device drivers you want to install. The recommended driver set is selected by default.
- After the installation is complete, select Finish.
- Reboot the VM.
Verification
-
Open the system disk on the PC. This is typically C:.
- Navigate to Program Files → Virtio-Win.
If the Virtio-Win directory is present and contains a sub-directory for each driver, the installation was successful.
7.2.6.2.4. Installing VirtIO drivers from a SATA CD drive on an existing Windows VM
You can install the VirtIO drivers from a SATA CD drive on an existing Windows virtual machine (VM).
This procedure uses a generic approach to adding drivers to Windows. See the installation documentation for your version of Windows for specific installation steps.
Prerequisites
- A storage device containing the virtio drivers must be attached to the VM as a SATA CD drive.
Procedure
- Start the VM and connect to a graphical console.
- Log in to a Windows user session.
Open Device Manager and expand Other devices to list any Unknown device.
- Open the Device Properties to identify the unknown device.
- Right-click the device and select Properties.
- Click the Details tab and select Hardware Ids in the Property list.
- Compare the Value for the Hardware Ids with the supported VirtIO drivers.
- Right-click the device and select Update Driver Software.
- Click Browse my computer for driver software and browse to the attached SATA CD drive, where the VirtIO drivers are located. The drivers are arranged hierarchically according to their driver type, operating system, and CPU architecture.
- Click Next to install the driver.
- Repeat this process for all the necessary VirtIO drivers.
- After the driver installs, click Close to close the window.
- Reboot the VM to complete the driver installation.
7.2.6.2.5. Installing VirtIO drivers from a container disk added as a SATA CD drive
You can install VirtIO drivers from a container disk that you add to a Windows virtual machine (VM) as a SATA CD drive.
Downloading the container-native-virtualization/virtio-win container disk from the Red Hat Ecosystem Catalog is not mandatory, because the container disk is downloaded from the Red Hat registry if it is not already present in the cluster.
Prerequisites
-
You must have access to the Red Hat registry or to the downloaded container-native-virtualization/virtio-win container disk in a restricted environment.
Procedure
Add the container-native-virtualization/virtio-win container disk as a CD drive by editing the VirtualMachine manifest:
# ...
spec:
  domain:
    devices:
      disks:
      - name: virtiocontainerdisk
        bootOrder: 2
        cdrom:
          bus: sata
  volumes:
  - containerDisk:
      image: container-native-virtualization/virtio-win
    name: virtiocontainerdisk
OpenShift Virtualization boots the VM disks in the order defined in the VirtualMachine manifest. You can either define other VM disks that boot before the container-native-virtualization/virtio-win container disk, or use the optional bootOrder parameter to ensure the VM boots from the correct disk. If you configure the boot order for a disk, you must configure the boot order for the other disks.
Apply the changes:
If the VM is not running, run the following command:
$ virtctl start <vm> -n <namespace>
If the VM is running, reboot the VM or run the following command:
$ oc apply -f <vm.yaml>
- After the VM has started, install the VirtIO drivers from the SATA CD drive.
7.2.6.3. Updating VirtIO drivers
7.2.6.3.1. Updating VirtIO drivers on a Windows VM
Update the virtio drivers on a Windows virtual machine (VM) by using the Windows Update service.
Prerequisites
- The cluster must be connected to the internet. Disconnected clusters cannot reach the Windows Update service.
Procedure
- In the Windows Guest operating system, click the Windows key and select Settings.
- Navigate to Windows Update → Advanced Options → Optional Updates.
- Install all updates from Red Hat, Inc..
- Reboot the VM.
Verification
- On the Windows VM, navigate to the Device Manager.
- Select a device.
- Select the Driver tab.
-
Click Driver Details and confirm that the virtio driver details display the correct version.
7.3. Connecting to virtual machine consoles
You can connect to the following consoles to access running virtual machines (VMs):
7.3.1. Connecting to the VNC console
You can connect to the VNC console of a virtual machine by using the OpenShift Container Platform web console or the virtctl command-line tool.
7.3.1.1. Connecting to the VNC console by using the web console
You can connect to the VNC console of a virtual machine (VM) by using the OpenShift Container Platform web console.
If you connect to a Windows VM with a vGPU assigned as a mediated device, you can switch between the default display and the vGPU display.
Procedure
- On the Virtualization → VirtualMachines page, click a VM to open the VirtualMachine details page.
- Click the Console tab. The VNC console session starts automatically.
Optional: To switch to the vGPU display of a Windows VM, select Ctl + Alt + 2 from the Send key list.
- Select Ctl + Alt + 1 from the Send key list to restore the default display.
- To end the console session, click outside the console pane and then click Disconnect.
7.3.1.2. Connecting to the VNC console by using virtctl
You can use the virtctl command-line tool to connect to the VNC console of a running virtual machine.
If you run the virtctl vnc command on a remote machine over an SSH connection, you must forward the X session to your local machine by running the ssh command with the -X or -Y flags.
Prerequisites
-
You must install the virt-viewer package.
Procedure
Run the following command to start the console session:
$ virtctl vnc <vm_name>
If the connection fails, run the following command to collect troubleshooting information:
$ virtctl vnc <vm_name> -v 4
7.3.1.3. Generating a temporary token for the VNC console
Generate a temporary authentication bearer token for the Kubernetes API to access the VNC of a virtual machine (VM).
Kubernetes also supports authentication using client certificates, instead of a bearer token, by modifying the curl command.
Prerequisites
-
A running VM with OpenShift Virtualization 4.14 or later and ssp-operator 4.14 or later.
Procedure
Enable the feature gate in the HyperConverged (HCO) custom resource (CR):
$ oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type json -p '[{"op": "replace", "path": "/spec/featureGates/deployVmConsoleProxy", "value": true}]'
Generate a token by running the following command:
$ curl --header "Authorization: Bearer ${TOKEN}" \ "https://api.<cluster_fqdn>/apis/token.kubevirt.io/v1alpha1/namespaces/<namespace>/virtualmachines/<vm_name>/vnc?duration=<duration>"1 - 1
- Duration can be in hours and minutes, with a minimum duration of 10 minutes. Example:
5h30m. The token is valid for 10 minutes by default if this parameter is not set.
Sample output:
{ "token": "eyJhb..." }Optional: Use the token provided in the output to create a variable:
$ export VNC_TOKEN="<token>"
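If jq is available on your workstation, you can combine the request and the variable assignment into one step. This is an optional convenience that assumes the same placeholders as the commands above:
$ export VNC_TOKEN="$(curl --header "Authorization: Bearer ${TOKEN}" \
  "https://api.<cluster_fqdn>/apis/token.kubevirt.io/v1alpha1/namespaces/<namespace>/virtualmachines/<vm_name>/vnc?duration=<duration>" \
  | jq -r .token)"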
You can now use the token to access the VNC console of a VM.
Verification
Log in to the cluster by running the following command:
$ oc login --token ${VNC_TOKEN}
Use virtctl to test access to the VNC console of the VM by running the following command:
$ virtctl vnc <vm_name> -n <namespace>
7.3.2. Connecting to the serial console
You can connect to the serial console of a virtual machine by using the OpenShift Container Platform web console or the virtctl command-line tool.
Running concurrent VNC connections to a single virtual machine is not currently supported.
7.3.2.1. Connecting to the serial console by using the web console
You can connect to the serial console of a virtual machine (VM) by using the OpenShift Container Platform web console.
If you connect to a Windows VM with a vGPU assigned as a mediated device, you can switch between the default display and the vGPU display.
Procedure
- On the Virtualization → VirtualMachines page, click a VM to open the VirtualMachine details page.
- Click the Console tab. The VNC console session starts automatically.
- Click Disconnect to end the VNC console session. Otherwise, the VNC console session continues to run in the background.
- Select Serial console from the console list.
Optional: To switch to the vGPU display of a Windows VM, select Ctl + Alt + 2 from the Send key list.
- Select Ctl + Alt + 1 from the Send key list to restore the default display.
- To end the console session, click outside the console pane and then click Disconnect.
7.3.2.2. Connecting to the serial console by using virtctl
You can use the virtctl command-line tool to connect to the serial console of a virtual machine.
Procedure
Run the following command to start the console session:
$ virtctl console <vm_name>
Press Ctrl+] to end the console session.
7.3.3. Connecting to the desktop viewer
You can connect to a Windows virtual machine (VM) by using the desktop viewer and the Remote Desktop Protocol (RDP).
7.3.3.1. Connecting to the desktop viewer by using the web console
You can connect to the desktop viewer of a Windows virtual machine (VM) by using the OpenShift Container Platform web console.
If you connect to a Windows VM with a vGPU assigned as a mediated device, you can switch between the default display and the vGPU display.
Prerequisites
- You installed the QEMU guest agent on the Windows VM.
- You have an RDP client installed.
Procedure
- On the Virtualization → VirtualMachines page, click a VM to open the VirtualMachine details page.
- Click the Console tab. The VNC console session starts automatically.
- Click Disconnect to end the VNC console session. Otherwise, the VNC console session continues to run in the background.
- Select Desktop viewer from the console list.
- Click Create RDP Service to open the RDP Service dialog.
- Select Expose RDP Service and click Save to create a node port service.
-
Click Launch Remote Desktop to download an .rdp file and launch the desktop viewer.
Optional: To switch to the vGPU display of a Windows VM, select Ctl + Alt + 2 from the Send key list.
- Select Ctl + Alt + 1 from the Send key list to restore the default display.
- To end the console session, click outside the console pane and then click Disconnect.
7.4. Configuring SSH access to virtual machines
You can configure SSH access to virtual machines (VMs) by using the following methods:
You create an SSH key pair, add the public key to a VM, and connect to the VM by running the virtctl ssh command with the private key.
You can add public SSH keys to Red Hat Enterprise Linux (RHEL) 9 VMs at runtime or at first boot to VMs with guest operating systems that can be configured by using a cloud-init data source.
You add the virtctl port-forward command to your .ssh/config file and connect to the VM by using OpenSSH.
You create a service, associate the service with the VM, and connect to the IP address and port exposed by the service.
You configure a secondary network, attach a virtual machine (VM) to the secondary network interface, and connect to the DHCP-allocated IP address.
7.4.1. Access configuration considerations
Each method for configuring access to a virtual machine (VM) has advantages and limitations, depending on the traffic load and client requirements.
Services provide excellent performance and are recommended for applications that are accessed from outside the cluster.
If the internal cluster network cannot handle the traffic load, you can configure a secondary network.
virtctl ssh and virtctl port-forward commands
- Simple to configure.
- Recommended for troubleshooting VMs.
-
virtctl port-forward is recommended for automated configuration of VMs with Ansible.
- Dynamic public SSH keys can be used to provision VMs with Ansible.
- Not recommended for high-traffic applications like Rsync or Remote Desktop Protocol because of the burden on the API server.
- The API server must be able to handle the traffic load.
- The clients must be able to access the API server.
- The clients must have access credentials for the cluster.
- Cluster IP service
- The internal cluster network must be able to handle the traffic load.
- The clients must be able to access an internal cluster IP address.
- Node port service
- The internal cluster network must be able to handle the traffic load.
- The clients must be able to access at least one node.
- Load balancer service
- A load balancer must be configured.
- Each node must be able to handle the traffic load of one or more load balancer services.
- Secondary network
- Excellent performance because traffic does not go through the internal cluster network.
- Allows a flexible approach to network topology.
- Guest operating system must be configured with appropriate security because the VM is exposed directly to the secondary network. If a VM is compromised, an intruder could gain access to the secondary network.
7.4.2. Using virtctl ssh
You can add a public SSH key to a virtual machine (VM) and connect to the VM by running the virtctl ssh command.
This method is simple to configure. However, it is not recommended for high traffic loads because it places a burden on the API server.
7.4.2.1. About static and dynamic SSH key management
You can add public SSH keys to virtual machines (VMs) statically at first boot or dynamically at runtime.
Only Red Hat Enterprise Linux (RHEL) 9 supports dynamic key injection.
Static SSH key management
You can add a statically managed SSH key to a VM with a guest operating system that supports configuration by using a cloud-init data source. The key is added to the virtual machine (VM) at first boot.
You can add the key by using one of the following methods:
- Add a key to a single VM when you create it by using the web console or the command line.
- Add a key to a project by using the web console. Afterwards, the key is automatically added to the VMs that you create in this project.
Use cases
- As a VM owner, you can provision all your newly created VMs with a single key.
Dynamic SSH key management
You can enable dynamic SSH key management for a VM with Red Hat Enterprise Linux (RHEL) 9 installed. Afterwards, you can update the key during runtime. The key is added by the QEMU guest agent, which is installed with Red Hat boot sources.
When dynamic key management is disabled, the default key management setting of a VM is determined by the image used for the VM.
Use cases
-
Granting or revoking access to VMs: As a cluster administrator, you can grant or revoke remote VM access by adding or removing the keys of individual users from a Secret object that is applied to all VMs in a namespace.
- User access: You can add your access credentials to all VMs that you create and manage.
Ansible provisioning:
- As an operations team member, you can create a single secret that contains all the keys used for Ansible provisioning.
- As a VM owner, you can create a VM and attach the keys used for Ansible provisioning.
Key rotation:
- As a cluster administrator, you can rotate the Ansible provisioner keys used by VMs in a namespace.
- As a workload owner, you can rotate the key for the VMs that you manage.
7.4.2.2. Static key management
You can add a statically managed public SSH key when you create a virtual machine (VM) by using the OpenShift Container Platform web console or the command line. The key is added as a cloud-init data source when the VM boots for the first time.
You can also add a public SSH key to a project when you create a VM by using the web console. The key is saved as a secret and is added automatically to all VMs that you create.
If you add a secret to a project and then delete the VM, the secret is retained because it is a namespace resource. You must delete the secret manually.
7.4.2.2.1. Adding a key when creating a VM from a template
You can add a statically managed public SSH key when you create a virtual machine (VM) by using the OpenShift Container Platform web console. The key is added to the VM as a cloud-init data source at first boot. This method does not affect cloud-init user data.
Optional: You can add a key to a project. Afterwards, this key is added automatically to VMs that you create in the project.
Prerequisites
-
You generated an SSH key pair by running the ssh-keygen command.
Procedure
- Navigate to Virtualization → Catalog in the web console.
Click a template tile.
The guest operating system must support configuration from a cloud-init data source.
- Click Customize VirtualMachine.
- Click Next.
- Click the Scripts tab.
If you have not already added a public SSH key to your project, click the edit icon beside Authorized SSH key and select one of the following options:
- Use existing: Select a secret from the secrets list.
Add new:
- Browse to the SSH key file or paste the file in the key field.
- Enter the secret name.
- Optional: Select Automatically apply this key to any new VirtualMachine you create in this project.
- Click Save.
Click Create VirtualMachine.
The VirtualMachine details page displays the progress of the VM creation.
Verification
Click the Scripts tab on the Configuration tab.
The secret name is displayed in the Authorized SSH key section.
7.4.2.2.2. Adding a key when creating a VM from an instance type
You can add a statically managed SSH key when you create a virtual machine (VM) from an instance type by using the OpenShift Container Platform web console. The key is added to the VM as a cloud-init data source at first boot. This method does not affect cloud-init user data.
Procedure
- In the web console, navigate to Virtualization → Catalog and click the InstanceTypes tab.
Select a bootable volume.
Note
The volume table only lists volumes in the openshift-virtualization-os-images namespace that have the instancetype.kubevirt.io/default-preference label.
- Click an instance type tile and select the configuration appropriate for your workload.
- If you have not already added a public SSH key to your project, click the edit icon beside Authorized SSH key in the VirtualMachine details section.
Select one of the following options:
- Use existing: Select a secret from the secrets list.
Add new:
- Browse to the public SSH key file or paste the file in the key field.
- Enter the secret name.
- Optional: Select Automatically apply this key to any new VirtualMachine you create in this project.
- Click Save.
- Optional: Click View YAML & CLI to view the YAML file. Click CLI to view the CLI commands. You can also download or copy either the YAML file contents or the CLI commands.
- Click Create VirtualMachine.
After the VM is created, you can monitor the status on the VirtualMachine details page.
7.4.2.2.3. Adding a key when creating a VM by using the command line
You can add a statically managed public SSH key when you create a virtual machine (VM) by using the command line. The key is added to the VM at first boot.
The key is added to the VM as a cloud-init data source. This method separates the access credentials from the application data in the cloud-init user data. This method does not affect cloud-init user data.
Prerequisites
-
You generated an SSH key pair by running the ssh-keygen command.
Procedure
Create a manifest file for a
object and aVirtualMachineobject:SecretapiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: dataVolumeTemplates: - apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: example-vm-disk spec: sourceRef: kind: DataSource name: rhel9 namespace: openshift-virtualization-os-images storage: resources: requests: storage: 30Gi running: false template: metadata: labels: kubevirt.io/domain: example-vm spec: domain: cpu: cores: 1 sockets: 2 threads: 1 devices: disks: - disk: bus: virtio name: rootdisk - disk: bus: virtio name: cloudinitdisk interfaces: - masquerade: {} name: default rng: {} features: smm: enabled: true firmware: bootloader: efi: {} resources: requests: memory: 8Gi evictionStrategy: LiveMigrate networks: - name: default pod: {} volumes: - dataVolume: name: example-volume name: example-vm-disk - cloudInitConfigDrive: <.> userData: |- #cloud-config user: cloud-user password: <password> chpasswd: { expire: False } name: cloudinitdisk accessCredentials: - sshPublicKey: propagationMethod: configDrive: {} source: secret: secretName: authorized-keys <.> --- apiVersion: v1 kind: Secret metadata: name: authorized-keys data: key: | MIIEpQIBAAKCAQEAulqb/Y... <.><.> Specify
cloudInitConfigDrive to create a configuration drive. <.> Specify the Secret object name. <.> Paste the public SSH key.
Create the VirtualMachine and Secret objects:
$ oc create -f <manifest_file>.yaml
Start the VM:
$ virtctl start vm example-vm -n example-namespace
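If you prefer not to embed the base64-encoded key in the manifest, you can create the Secret from the public key file instead and reference it from accessCredentials. This is an alternative sketch; the file path is only an example:
$ oc create secret generic authorized-keys \
  --from-file=key=/home/user/.ssh/id_rsa.pub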
Verification
Get the VM configuration:
$ oc describe vm example-vm -n example-namespace
Example output
apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: template: spec: accessCredentials: - sshPublicKey: propagationMethod: configDrive: {} source: secret: secretName: authorized-keys # ...
7.4.2.3. Dynamic key management
You can enable dynamic key injection for a virtual machine (VM) by using the OpenShift Container Platform web console or the command line. Then, you can update the key at runtime.
Only Red Hat Enterprise Linux (RHEL) 9 supports dynamic key injection.
If you disable dynamic key injection, the VM inherits the key management method of the image from which it was created.
7.4.2.3.1. Enabling dynamic key injection when creating a VM from a template
You can enable dynamic public SSH key injection when you create a virtual machine (VM) from a template by using the OpenShift Container Platform web console. Then, you can update the key at runtime.
Only Red Hat Enterprise Linux (RHEL) 9 supports dynamic key injection.
The key is added to the VM by the QEMU guest agent, which is installed with RHEL 9.
Prerequisites
-
You generated an SSH key pair by running the ssh-keygen command.
Procedure
- Navigate to Virtualization → Catalog in the web console.
- Click the Red Hat Enterprise Linux 9 VM tile.
- Click Customize VirtualMachine.
- Click Next.
- Click the Scripts tab.
If you have not already added a public SSH key to your project, click the edit icon beside Authorized SSH key and select one of the following options:
- Use existing: Select a secret from the secrets list.
Add new:
- Browse to the SSH key file or paste the file in the key field.
- Enter the secret name.
- Optional: Select Automatically apply this key to any new VirtualMachine you create in this project.
- Set Dynamic SSH key injection to on.
- Click Save.
Click Create VirtualMachine.
The VirtualMachine details page displays the progress of the VM creation.
Verification
Click the Scripts tab on the Configuration tab.
The secret name is displayed in the Authorized SSH key section.
7.4.2.3.2. Enabling dynamic key injection when creating a VM from an instance type
You can enable dynamic SSH key injection when you create a virtual machine (VM) from an instance type by using the OpenShift Container Platform web console. Then, you can add or revoke the key at runtime.
Only Red Hat Enterprise Linux (RHEL) 9 supports dynamic key injection.
The key is added to the VM by the QEMU guest agent, which is installed with RHEL 9.
Procedure
- In the web console, navigate to Virtualization → Catalog and click the InstanceTypes tab.
Select a bootable volume.
Note
The volume table only lists volumes in the openshift-virtualization-os-images namespace that have the instancetype.kubevirt.io/default-preference label.
- Click an instance type tile and select the configuration appropriate for your workload.
- Click the Red Hat Enterprise Linux 9 VM tile.
- If you have not already added a public SSH key to your project, click the edit icon beside Authorized SSH key in the VirtualMachine details section.
Select one of the following options:
- Use existing: Select a secret from the secrets list.
Add new:
- Browse to the public SSH key file or paste the file in the key field.
- Enter the secret name.
- Optional: Select Automatically apply this key to any new VirtualMachine you create in this project.
- Click Save.
- Set Dynamic SSH key injection in the VirtualMachine details section to on.
- Optional: Click View YAML & CLI to view the YAML file. Click CLI to view the CLI commands. You can also download or copy either the YAML file contents or the CLI commands.
- Click Create VirtualMachine.
After the VM is created, you can monitor the status on the VirtualMachine details page.
7.4.2.3.3. Enabling dynamic SSH key injection by using the web console
You can enable dynamic key injection for a virtual machine (VM) by using the OpenShift Container Platform web console. Then, you can update the public SSH key at runtime.
The key is added to the VM by the QEMU guest agent, which is installed with Red Hat Enterprise Linux (RHEL) 9.
Prerequisites
- The guest operating system is RHEL 9.
Procedure
- Navigate to Virtualization → VirtualMachines in the web console.
- Select a VM to open the VirtualMachine details page.
- On the Configuration tab, click Scripts.
If you have not already added a public SSH key to your project, click the edit icon beside Authorized SSH key and select one of the following options:
- Use existing: Select a secret from the secrets list.
Add new:
- Browse to the SSH key file or paste the file in the key field.
- Enter the secret name.
- Optional: Select Automatically apply this key to any new VirtualMachine you create in this project.
- Set Dynamic SSH key injection to on.
- Click Save.
7.4.2.3.4. Enabling dynamic key injection by using the command line
You can enable dynamic key injection for a virtual machine (VM) by using the command line. Then, you can update the public SSH key at runtime.
Only Red Hat Enterprise Linux (RHEL) 9 supports dynamic key injection.
The key is added to the VM by the QEMU guest agent, which is installed automatically with RHEL 9.
Prerequisites
-
You generated an SSH key pair by running the ssh-keygen command.
Procedure
Create a manifest file for a VirtualMachine object and a Secret object:
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm
  namespace: example-namespace
spec:
  dataVolumeTemplates:
    - apiVersion: cdi.kubevirt.io/v1beta1
      kind: DataVolume
      metadata:
        name: example-vm-disk
      spec:
        sourceRef:
          kind: DataSource
          name: rhel9
          namespace: openshift-virtualization-os-images
        storage:
          resources:
            requests:
              storage: 30Gi
  running: false
  template:
    metadata:
      labels:
        kubevirt.io/domain: example-vm
    spec:
      domain:
        cpu:
          cores: 1
          sockets: 2
          threads: 1
        devices:
          disks:
            - disk:
                bus: virtio
              name: rootdisk
            - disk:
                bus: virtio
              name: cloudinitdisk
          interfaces:
            - masquerade: {}
              name: default
          rng: {}
        features:
          smm:
            enabled: true
        firmware:
          bootloader:
            efi: {}
        resources:
          requests:
            memory: 8Gi
      evictionStrategy: LiveMigrate
      networks:
        - name: default
          pod: {}
      volumes:
        - dataVolume:
            name: example-volume
          name: example-vm-disk
        - cloudInitConfigDrive:
            userData: |-
              #cloud-config
              user: cloud-user
              password: <password>
              chpasswd: { expire: False }
              runcmd:
                - [ setsebool, -P, virt_qemu_ga_manage_ssh, on ]
          name: cloudinitdisk
      accessCredentials:
        - sshPublicKey:
            propagationMethod:
              qemuGuestAgent:
                users: ["user1","user2","fedora"]
            source:
              secret:
                secretName: authorized-keys
---
apiVersion: v1
kind: Secret
metadata:
  name: authorized-keys
data:
  key: |
    MIIEpQIBAAKCAQEAulqb/Y...
Specify cloudInitConfigDrive to create a configuration drive. Specify the user names. Specify the Secret object name. Paste the public SSH key.
Create the VirtualMachine and Secret objects:
$ oc create -f <manifest_file>.yaml
Start the VM:
$ virtctl start vm example-vm -n example-namespace
Verification
Get the VM configuration:
$ oc describe vm example-vm -n example-namespace
Example output
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm
  namespace: example-namespace
spec:
  template:
    spec:
      accessCredentials:
        - sshPublicKey:
            propagationMethod:
              qemuGuestAgent:
                users: ["user1","user2","fedora"]
            source:
              secret:
                secretName: authorized-keys
# ...
7.4.2.4. Using the virtctl ssh command
You can access a running virtual machine (VM) by using the virtctl ssh command.
Prerequisites
- You installed the virtctl command-line tool.
- You added a public SSH key to the VM.
- You have an SSH client installed.
- The environment where you installed the virtctl tool has the cluster permissions required to access the VM. For example, you ran oc login or you set the KUBECONFIG environment variable.
Procedure
Run the virtctl ssh command:
$ virtctl -n <namespace> ssh <username>@example-vm -i <ssh_key>
Specify the namespace, user name, and the SSH private key. The default SSH key location is /home/user/.ssh. If you save the key in a different location, you must specify the path.
Example
$ virtctl -n my-namespace ssh cloud-user@example-vm -i my-key
You can copy the virtctl ssh command in the web console by selecting Copy SSH command from the options menu beside a VM on the VirtualMachines page.
7.4.3. Using the virtctl port-forward command
You can use your local OpenSSH client and the virtctl port-forward command to connect to a virtual machine (VM).
This method is recommended for low-traffic applications because port-forwarding traffic is sent over the control plane. This method is not recommended for high-traffic applications such as Rsync or Remote Desktop Protocol because it places a heavy burden on the API server.
Prerequisites
- You have installed the virtctl client.
- The virtual machine you want to access is running.
- The environment where you installed the virtctl tool has the cluster permissions required to access the VM. For example, you ran oc login or you set the KUBECONFIG environment variable.
Procedure
Add the following text to the ~/.ssh/config file on your client machine:
Host vm/*
  ProxyCommand virtctl port-forward --stdio=true %h %p
Connect to the VM by running the following command:
$ ssh <user>@vm/<vm_name>.<namespace>
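If you only need to forward a single local port instead of configuring the SSH client, you can also run virtctl port-forward directly. The following is a minimal sketch; the VM name, namespace, and port pair are illustrative, and the exact argument forms may vary by virtctl version, so confirm them with virtctl port-forward --help:
$ virtctl -n my-namespace port-forward vm/example-vm 22022:22
$ ssh -p 22022 cloud-user@127.0.0.1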
7.4.4. Using a service for SSH access
You can create a service for a virtual machine (VM) and connect to the IP address and port exposed by the service.
Services provide excellent performance and are recommended for applications that are accessed from outside the cluster or within the cluster. Ingress traffic is protected by firewalls.
If the cluster network cannot handle the traffic load, consider using a secondary network for VM access.
7.4.4.1. About services
A Kubernetes service exposes network access for clients to an application running on a set of pods. Services offer abstraction, load balancing, and, in the case of the NodePort and LoadBalancer service types, exposure to the outside world.
- ClusterIP
  Exposes the service on an internal IP address and as a DNS name to other applications within the cluster. A single service can map to multiple virtual machines. When a client tries to connect to the service, the client’s request is load balanced among available backends. ClusterIP is the default service type.
- NodePort
  Exposes the service on the same port of each selected node in the cluster. NodePort makes a port accessible from outside the cluster, as long as the node itself is externally accessible to the client.
- LoadBalancer
  Creates an external load balancer in the current cloud (if supported) and assigns a fixed, external IP address to the service.
For on-premise clusters, you can configure a load-balancing service by deploying the MetalLB Operator.
7.4.4.2. Creating a service
You can create a service to expose a virtual machine (VM) by using the OpenShift Container Platform web console, the virtctl command-line tool, or a YAML file.
7.4.4.2.1. Enabling load balancer service creation by using the web console
You can enable the creation of load balancer services for a virtual machine (VM) by using the OpenShift Container Platform web console.
Prerequisites
- You have configured a load balancer for the cluster.
- You have logged in as a user with the cluster-admin role.
- You created a network attachment definition for the network.
Procedure
- Go to Virtualization → Overview.
- On the Settings tab, click Cluster.
- Expand LoadBalancer service and select Enable the creation of LoadBalancer services for SSH connections to VirtualMachines.
7.4.4.2.2. Creating a service by using the web console
You can create a node port or load balancer service for a virtual machine (VM) by using the OpenShift Container Platform web console.
Prerequisites
- You configured the cluster network to support either a load balancer or a node port.
- To create a load balancer service, you enabled the creation of load balancer services.
Procedure
- Navigate to VirtualMachines and select a virtual machine to view the VirtualMachine details page.
- On the Details tab, select SSH over LoadBalancer from the SSH service type list.
- Optional: Click the copy icon to copy the SSH command to your clipboard.
Verification
- Check the Services pane on the Details tab to view the new service.
7.4.4.2.3. Creating a service by using virtctl
You can create a service for a virtual machine (VM) by using the virtctl command-line tool.
Prerequisites
- You installed the virtctl command-line tool.
- You configured the cluster network to support the service.
- The environment where you installed virtctl has the cluster permissions required to access the VM. For example, you ran oc login or you set the KUBECONFIG environment variable.
Procedure
Create a service by running the following command:
$ virtctl expose vm <vm_name> --name <service_name> --type <service_type> --port <port>
Specify the ClusterIP, NodePort, or LoadBalancer service type.
Example
$ virtctl expose vm example-vm --name example-service --type NodePort --port 22
Verification
Verify the service by running the following command:
$ oc get service
Next steps
After you create a service with virtctl, you must add the special: key label to the spec.template.metadata.labels stanza of the VirtualMachine manifest.
7.4.4.2.4. Creating a service by using the command line
You can create a service and associate it with a virtual machine (VM) by using the command line.
Prerequisites
- You configured the cluster network to support the service.
Procedure
Edit the VirtualMachine manifest to add the label for service creation:
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm
  namespace: example-namespace
spec:
  running: false
  template:
    metadata:
      labels:
        special: key
# ...
Add special: key to the spec.template.metadata.labels stanza.
Note
Labels on a virtual machine are passed through to the pod. The special: key label must match the label in the spec.selector attribute of the Service manifest.
Save the VirtualMachine manifest file to apply your changes.
Create a Service manifest to expose the VM:
apiVersion: v1
kind: Service
metadata:
  name: example-service
  namespace: example-namespace
spec:
  # ...
  selector:
    special: key
  type: NodePort
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
      nodePort: 30000
Save the Service manifest file.
$ oc create -f example-service.yaml- Restart the VM to apply the changes.
Verification
Query the Service object to verify that it is available:
$ oc get service -n example-namespace
7.4.4.3. Connecting to a VM exposed by a service by using SSH
You can connect to a virtual machine (VM) that is exposed by a service by using SSH.
Prerequisites
- You created a service to expose the VM.
- You have an SSH client installed.
- You are logged in to the cluster.
Procedure
Run the following command to access the VM:
$ ssh <user_name>@<ip_address> -p <port>
Specify the cluster IP for a cluster IP service, the node IP for a node port service, or the external IP address for a load balancer service.
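For example, to reach a VM through the NodePort service created earlier in this section, you might run the following; the node address is a placeholder and port 30000 is taken from the example manifest:
$ ssh cloud-user@<node_ip_address> -p 30000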
7.4.5. Using a secondary network for SSH access
You can configure a secondary network, attach a virtual machine (VM) to the secondary network interface, and connect to the DHCP-allocated IP address by using SSH.
Secondary networks provide excellent performance because the traffic is not handled by the cluster network stack. However, the VMs are exposed directly to the secondary network and are not protected by firewalls. If a VM is compromised, an intruder could gain access to the secondary network. You must configure appropriate security within the operating system of the VM if you use this method.
See the Multus and SR-IOV documentation in the OpenShift Virtualization Tuning & Scaling Guide for additional information about networking options.
Prerequisites
- You configured a secondary network such as Linux bridge or SR-IOV.
- You created a network attachment definition for a Linux bridge network, or the SR-IOV Network Operator created a network attachment definition when you created an SriovNetwork object.
7.4.5.1. Configuring a VM network interface by using the web console
You can configure a network interface for a virtual machine (VM) by using the OpenShift Container Platform web console.
Prerequisites
- You created a network attachment definition for the network.
Procedure
- Navigate to Virtualization → VirtualMachines.
- Click a VM to view the VirtualMachine details page.
- On the Configuration tab, click the Network interfaces tab.
- Click Add network interface.
- Enter the interface name and select the network attachment definition from the Network list.
- Click Save.
- Restart the VM to apply the changes.
7.4.5.2. Connecting to a VM attached to a secondary network by using SSH
You can connect to a virtual machine (VM) attached to a secondary network by using SSH.
Prerequisites
- You attached a VM to a secondary network with a DHCP server.
- You have an SSH client installed.
Procedure
Obtain the IP address of the VM by running the following command:
$ oc describe vm <vm_name> -n <namespace>
Example output
# ...
Interfaces:
  Interface Name:  eth0
  Ip Address:      10.244.0.37/24
  Ip Addresses:
    10.244.0.37/24
    fe80::858:aff:fef4:25/64
  Mac:             0a:58:0a:f4:00:25
  Name:            default
# ...
Connect to the VM by running the following command:
$ ssh <user_name>@<ip_address> -i <ssh_key>
Example
$ ssh cloud-user@10.244.0.37 -i ~/.ssh/id_rsa_cloud-user
7.5. Editing virtual machines
You can update a virtual machine (VM) configuration by using the OpenShift Container Platform web console. You can update the YAML file or the VirtualMachine details page.
You can also edit a VM by using the command line.
7.5.1. Editing a virtual machine by using the command line
You can edit a virtual machine (VM) by using the command line.
Prerequisites
- You installed the oc CLI.
Procedure
Obtain the virtual machine configuration by running the following command:
$ oc edit vm <vm_name>
Edit the YAML configuration.
If you edit a running virtual machine, you need to do one of the following:
- Restart the virtual machine.
Run the following command for the new configuration to take effect:
$ oc apply vm <vm_name> -n <namespace>
7.5.2. Adding a disk to a virtual machine
You can add a virtual disk to a virtual machine (VM) by using the OpenShift Container Platform web console.
Procedure
- Navigate to Virtualization → VirtualMachines in the web console.
- Select a VM to open the VirtualMachine details page.
- On the Disks tab, click Add disk.
Specify the Source, Name, Size, Type, Interface, and Storage Class.
- Optional: You can enable preallocation if you use a blank disk source and require maximum write performance when creating data volumes. To do so, select the Enable preallocation checkbox.
- Optional: You can clear Apply optimized StorageProfile settings to change the Volume Mode and Access Mode for the virtual disk. If you do not specify these parameters, the system uses the default values from the kubevirt-storage-class-defaults config map.
- Click Add.
If the VM is running, you must restart the VM to apply the change.
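For reference, a disk added in the web console ultimately corresponds to a disk entry, a volume, and usually a data volume template in the VirtualMachine manifest. The following is a minimal sketch, assuming a blank 30Gi disk named example-added-disk with the virtio interface; all names and sizes are illustrative:
spec:
  dataVolumeTemplates:
    - metadata:
        name: example-added-disk
      spec:
        source:
          blank: {}
        storage:
          resources:
            requests:
              storage: 30Gi
  template:
    spec:
      domain:
        devices:
          disks:
            - disk:
                bus: virtio
              name: example-added-disk
      volumes:
        - dataVolume:
            name: example-added-disk
          name: example-added-disk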
7.5.2.1. Storage fields
| Field | Description |
|---|---|
| Blank (creates PVC) | Create an empty disk. |
| Import via URL (creates PVC) | Import content via URL (HTTP or HTTPS endpoint). |
| Use an existing PVC | Use a PVC that is already available in the cluster. |
| Clone existing PVC (creates PVC) | Select an existing PVC available in the cluster and clone it. |
| Import via Registry (creates PVC) | Import content via container registry. |
| Container (ephemeral) | Upload content from a container located in a registry accessible from the cluster. The container disk should be used only for read-only filesystems such as CD-ROMs or temporary virtual machines. |
| Name | Name of the disk. The name can contain lowercase letters (a-z), numbers, and hyphens (-). |
| Size | Size of the disk in GiB. |
| Type | Type of disk. Example: Disk or CD-ROM |
| Interface | Type of disk device. Supported interfaces are virtIO, SATA, and SCSI. |
| Storage Class | The storage class that is used to create the disk. |
Advanced storage settings
The following advanced storage settings are optional and available for Blank, Import via URL, and Clone existing PVC disks.
If you do not specify these parameters, the system uses the default storage profile values.
| Parameter | Option | Parameter description |
|---|---|---|
| Volume Mode | Filesystem | Stores the virtual disk on a file system-based volume. |
| Block | Stores the virtual disk directly on the block volume. Only use Block if the underlying storage supports it. |
| Access Mode | ReadWriteOnce (RWO) | Volume can be mounted as read-write by a single node. |
| ReadWriteMany (RWX) | Volume can be mounted as read-write by many nodes at one time. Note This mode is required for live migration. |
7.5.3. Adding a secret, config map, or service account to a virtual machine
You add a secret, config map, or service account to a virtual machine by using the OpenShift Container Platform web console.
These resources are added to the virtual machine as disks. You then mount the secret, config map, or service account as you would mount any other disk.
If the virtual machine is running, changes do not take effect until you restart the virtual machine. The newly added resources are marked as pending changes at the top of the page.
Prerequisites
- The secret, config map, or service account that you want to add must exist in the same namespace as the target virtual machine.
Procedure
- Click Virtualization → VirtualMachines from the side menu.
- Select a virtual machine to open the VirtualMachine details page.
- Click Configuration → Environment.
- Click Add Config Map, Secret or Service Account.
- Click Select a resource and select a resource from the list. A six character serial number is automatically generated for the selected resource.
- Optional: Click Reload to revert the environment to its last saved state.
- Click Save.
Verification
- On the VirtualMachine details page, click Configuration → Disks and verify that the resource is displayed in the list of disks.
- Restart the virtual machine by clicking Actions → Restart.
You can now mount the secret, config map, or service account as you would mount any other disk.
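Inside a Linux guest, the added resource appears as an additional small disk that you can mount like any other block device. The following is a minimal sketch, assuming the new disk shows up as /dev/vdb; the device name and mount point are illustrative and depend on the VM's disk layout:
$ lsblk
$ sudo mkdir -p /mnt/app-config
$ sudo mount /dev/vdb /mnt/app-config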
7.6. Editing boot order
You can update the values for a boot order list by using the web console or the CLI.
With Boot Order in the Virtual Machine Overview page, you can:
- Select a disk or network interface controller (NIC) and add it to the boot order list.
- Edit the order of the disks or NICs in the boot order list.
- Remove a disk or NIC from the boot order list, and return it back to the inventory of bootable sources.
7.6.1. Adding items to a boot order list in the web console
Add items to a boot order list by using the web console.
Procedure
- Click Virtualization → VirtualMachines from the side menu.
- Select a virtual machine to open the VirtualMachine details page.
- Click the Details tab.
- Click the pencil icon that is located on the right side of Boot Order. If a YAML configuration does not exist, or if this is the first time that you are creating a boot order list, the following message displays: No resource selected. VM will attempt to boot from disks by order of appearance in YAML file.
- Click Add Source and select a bootable disk or network interface controller (NIC) for the virtual machine.
- Add any additional disks or NICs to the boot order list.
- Click Save.
If the virtual machine is running, changes to Boot Order will not take effect until you restart the virtual machine.
You can view pending changes by clicking View Pending Changes on the right side of the Boot Order field. The Pending Changes banner at the top of the page displays a list of all changes that will be applied when the virtual machine restarts.
7.6.2. Editing a boot order list in the web console
Edit the boot order list in the web console.
Procedure
- Click Virtualization → VirtualMachines from the side menu.
- Select a virtual machine to open the VirtualMachine details page.
- Click the Details tab.
- Click the pencil icon that is located on the right side of Boot Order.
Choose the appropriate method to move the item in the boot order list:
- If you do not use a screen reader, hover over the arrow icon next to the item that you want to move, drag the item up or down, and drop it in a location of your choice.
- If you use a screen reader, press the Up Arrow key or Down Arrow key to move the item in the boot order list. Then, press the Tab key to drop the item in a location of your choice.
- Click Save.
If the virtual machine is running, changes to the boot order list will not take effect until you restart the virtual machine.
You can view pending changes by clicking View Pending Changes on the right side of the Boot Order field. The Pending Changes banner at the top of the page displays a list of all changes that will be applied when the virtual machine restarts.
7.6.3. Editing a boot order list in the YAML configuration file
Edit the boot order list in a YAML configuration file by using the CLI.
Procedure
Open the YAML configuration file for the virtual machine by running the following command:
$ oc edit vm <vm_name> -n <namespace>Edit the YAML file and modify the values for the boot order associated with a disk or network interface controller (NIC). For example:
disks:
  - bootOrder: 1
    disk:
      bus: virtio
    name: containerdisk
  - disk:
      bus: virtio
    name: cloudinitdisk
  - cdrom:
      bus: virtio
    name: cd-drive-1
interfaces:
  - bootOrder: 2
    macAddress: '02:96:c4:00:00'
    masquerade: {}
    name: default
Save the YAML file.
7.6.4. Removing items from a boot order list in the web console
Remove items from a boot order list by using the web console.
Procedure
- Click Virtualization → VirtualMachines from the side menu.
- Select a virtual machine to open the VirtualMachine details page.
- Click the Details tab.
- Click the pencil icon that is located on the right side of Boot Order.
- Click the Remove icon next to the item. The item is removed from the boot order list and saved in the list of available boot sources. If you remove all items from the boot order list, the following message displays: No resource selected. VM will attempt to boot from disks by order of appearance in YAML file.
If the virtual machine is running, changes to Boot Order will not take effect until you restart the virtual machine.
You can view pending changes by clicking View Pending Changes on the right side of the Boot Order field. The Pending Changes banner at the top of the page displays a list of all changes that will be applied when the virtual machine restarts.
7.7. Deleting virtual machines
You can delete a virtual machine from the web console or by using the oc command-line interface.
7.7.1. Deleting a virtual machine using the web console
Deleting a virtual machine permanently removes it from the cluster.
Procedure
- In the OpenShift Container Platform console, click Virtualization → VirtualMachines from the side menu.
Click the Options menu beside a virtual machine and select Delete.
Alternatively, click the virtual machine name to open the VirtualMachine details page and click Actions → Delete.
- Optional: Select With grace period or clear Delete disks.
- Click Delete to permanently delete the virtual machine.
7.7.2. Deleting a virtual machine by using the CLI
You can delete a virtual machine by using the oc command-line interface (CLI). The oc client enables you to perform actions on multiple virtual machines.
Prerequisites
- Identify the name of the virtual machine that you want to delete.
Procedure
Delete the virtual machine by running the following command:
$ oc delete vm <vm_name>
Note
This command only deletes a VM in the current project. Specify the -n <project_name> option if the VM you want to delete is in a different project or namespace.
7.8. Exporting virtual machines
You can export a virtual machine (VM) and its associated disks in order to import a VM into another cluster or to analyze the volume for forensic purposes.
You create a VirtualMachineExport custom resource (CR) by using the command line.
Alternatively, you can use the virtctl vmexport command to create a VirtualMachineExport CR and to download exported volumes.
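For example, the following sketch downloads an exported volume with virtctl vmexport; the export name, VM name, and output file are illustrative, and the exact subcommands and flags should be confirmed with virtctl vmexport --help for your version:
$ virtctl vmexport download example-export --vm=example-vm --output=example-disk.img.gz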
7.8.1. Creating a VirtualMachineExport custom resource
You can create a VirtualMachineExport custom resource (CR) to export the following objects:
- Virtual machine (VM): Exports the persistent volume claims (PVCs) of a specified VM.
- VM snapshot: Exports PVCs contained in a VirtualMachineSnapshot CR.
- PVC: Exports a PVC. If the PVC is used by another pod, such as the virt-launcher pod, the export remains in a Pending state until the PVC is no longer in use.
The VirtualMachineExport CR creates internal and external links for the exported volumes. Internal links are valid within the cluster. External links can be accessed by using an Ingress or Route.
The export server supports the following file formats:
- raw: Raw disk image file.
- gzip: Compressed disk image file.
- dir: PVC directory and files.
- tar.gz: Compressed PVC file.
Prerequisites
- The VM must be shut down for a VM export.
Procedure
Create a VirtualMachineExport manifest to export a volume from a VirtualMachine, VirtualMachineSnapshot, or PersistentVolumeClaim CR according to the following example and save it as example-export.yaml:
VirtualMachineExport example
apiVersion: export.kubevirt.io/v1alpha1
kind: VirtualMachineExport
metadata:
  name: example-export
spec:
  source:
    apiGroup: "kubevirt.io"
    kind: VirtualMachine
    name: example-vm
  ttlDuration: 1h
Create the VirtualMachineExport CR:
$ oc create -f example-export.yaml
Get the VirtualMachineExport CR:
$ oc get vmexport example-export -o yaml
The internal and external links for the exported volumes are displayed in the status stanza:
Output example
apiVersion: export.kubevirt.io/v1alpha1 kind: VirtualMachineExport metadata: name: example-export namespace: example spec: source: apiGroup: "" kind: PersistentVolumeClaim name: example-pvc tokenSecretRef: example-token status: conditions: - lastProbeTime: null lastTransitionTime: "2022-06-21T14:10:09Z" reason: podReady status: "True" type: Ready - lastProbeTime: null lastTransitionTime: "2022-06-21T14:09:02Z" reason: pvcBound status: "True" type: PVCReady links: external:1 cert: |- -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- volumes: - formats: - format: raw url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1alpha1/namespaces/example/virtualmachineexports/example-export/volumes/example-disk/disk.img - format: gzip url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1alpha1/namespaces/example/virtualmachineexports/example-export/volumes/example-disk/disk.img.gz name: example-disk internal:2 cert: |- -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- volumes: - formats: - format: raw url: https://virt-export-example-export.example.svc/volumes/example-disk/disk.img - format: gzip url: https://virt-export-example-export.example.svc/volumes/example-disk/disk.img.gz name: example-disk phase: Ready serviceName: virt-export-example-export
7.8.2. Accessing exported virtual machine manifests
After you export a virtual machine (VM) or snapshot, you can get the VirtualMachine manifest and related information from the export server.
Prerequisites
You exported a virtual machine or VM snapshot by creating a VirtualMachineExport custom resource (CR).
Note
VirtualMachineExport objects that have the spec.source.kind: PersistentVolumeClaim parameter do not generate virtual machine manifests.
Procedure
To access the manifests, you must first copy the certificates from the source cluster to the target cluster.
- Log in to the source cluster.
Save the certificates to the cacert.crt file by running the following command:
$ oc get vmexport <export_name> -o jsonpath={.status.links.external.cert} > cacert.crt
Replace <export_name> with the metadata.name value from the VirtualMachineExport object.
Copy the cacert.crt file to the target cluster.
Decode the token in the source cluster and save it to the token_decode file by running the following command:
$ oc get secret export-token-<export_name> -o jsonpath={.data.token} | base64 --decode > token_decode
Replace <export_name> with the metadata.name value from the VirtualMachineExport object.
Copy the token_decode file to the target cluster.
custom resource by running the following command:VirtualMachineExport$ oc get vmexport <export_name> -o yamlReview the
stanza, which is divided intostatus.linksandexternalsections. Note theinternalfields within each section:manifests.urlExample output
apiVersion: export.kubevirt.io/v1alpha1 kind: VirtualMachineExport metadata: name: example-export spec: source: apiGroup: "kubevirt.io" kind: VirtualMachine name: example-vm tokenSecretRef: example-token status: #... links: external: #... manifests: - type: all url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1alpha1/namespaces/example/virtualmachineexports/example-export/external/manifests/all - type: auth-header-secret url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1alpha1/namespaces/example/virtualmachineexports/example-export/external/manifests/secret internal: #... manifests: - type: all url: https://virt-export-export-pvc.default.svc/internal/manifests/all - type: auth-header-secret url: https://virt-export-export-pvc.default.svc/internal/manifests/secret phase: Ready serviceName: virt-export-example-export-
where the
status.links.external.manifests.urlistypecontains theallmanifest,VirtualMachinemanifest, if present, and aDataVolumemanifest that contains the public certificate for the external URL’s ingress or route.ConfigMap -
where the
status.links.external.manifests.urlistypecontains a secret containing a header that is compatible with Containerized Data Importer (CDI). The header contains a text version of the export token.auth-header-secret
-
- Log in to the target cluster.
Get the Secret manifest by running the following command:
$ curl --cacert cacert.crt <secret_manifest_url> -H "x-kubevirt-export-token:token_decode" -H "Accept:application/yaml"
Replace <secret_manifest_url> with an auth-header-secret URL from the VirtualMachineExport YAML output.
Reference the token_decode file that you created earlier.
For example:
$ curl --cacert cacert.crt https://vmexport-proxy.test.net/api/export.kubevirt.io/v1alpha1/namespaces/example/virtualmachineexports/example-export/external/manifests/secret -H "x-kubevirt-export-token:token_decode" -H "Accept:application/yaml"
-
Replace
Get the type: all manifests, such as the ConfigMap and VirtualMachine manifests, by running the following command:
$ curl --cacert cacert.crt <all_manifest_url> -H "x-kubevirt-export-token:token_decode" -H "Accept:application/yaml"
Replace <all_manifest_url> with a URL from the VirtualMachineExport YAML output.
Reference the token_decode file that you created earlier.
For example:
$ curl --cacert cacert.crt https://vmexport-proxy.test.net/api/export.kubevirt.io/v1alpha1/namespaces/example/virtualmachineexports/example-export/external/manifests/all -H "x-kubevirt-export-token:token_decode" -H "Accept:application/yaml"
-
Replace
Next steps
- You can now create the ConfigMap and VirtualMachine objects on the target cluster by using the exported manifests.
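For example, you can pipe the retrieved manifests straight into oc on the target cluster. This sketch reuses the curl form shown earlier; the URL placeholder and token header follow the preceding steps and are illustrative:
$ curl --cacert cacert.crt <all_manifest_url> -H "x-kubevirt-export-token:token_decode" -H "Accept:application/yaml" | oc create -f -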
7.9. Managing virtual machine instances
If you have standalone virtual machine instances (VMIs) that were created independently outside of the OpenShift Virtualization environment, you can manage them by using the web console or by using the oc or virtctl commands from the command-line interface (CLI).
The virtctl command-line tool provides virtualization-specific commands in addition to the commands that you use with the oc client.
7.9.1. About virtual machine instances
A virtual machine instance (VMI) is a representation of a running virtual machine (VM). When a VMI is owned by a VM or by another object, you manage it through its owner in the web console or by using the oc command-line interface (CLI).
A standalone VMI is created and started independently with a script, through automation, or by using other methods in the CLI. In your environment, you might have standalone VMIs that were developed and started outside of the OpenShift Virtualization environment. You can continue to manage those standalone VMIs by using the CLI. You can also use the web console for specific tasks associated with standalone VMIs:
- List standalone VMIs and their details.
- Edit labels and annotations for a standalone VMI.
- Delete a standalone VMI.
When you delete a VM, the associated VMI is automatically deleted. You delete a standalone VMI directly because it is not owned by VMs or other objects.
Before you uninstall OpenShift Virtualization, list and view the standalone VMIs by using the CLI or the web console. Then, delete any outstanding VMIs.
7.9.2. Listing all virtual machine instances using the CLI
You can list all virtual machine instances (VMIs) in your cluster, including standalone VMIs and those owned by virtual machines, by using the oc command-line interface (CLI).
Procedure
List all VMIs by running the following command:
$ oc get vmis -A
7.9.3. Listing standalone virtual machine instances using the web console
Using the web console, you can list and view standalone virtual machine instances (VMIs) in your cluster that are not owned by virtual machines (VMs).
VMIs that are owned by VMs or other objects are not displayed in the web console. The web console displays only standalone VMIs. If you want to list all VMIs in your cluster, you must use the CLI.
Procedure
Click Virtualization → VirtualMachines from the side menu.
You can identify a standalone VMI by a dark colored badge next to its name.
7.9.4. Editing a standalone virtual machine instance using the web console
You can edit the annotations and labels of a standalone virtual machine instance (VMI) using the web console. Other fields are not editable.
Procedure
- In the OpenShift Container Platform console, click Virtualization → VirtualMachines from the side menu.
- Select a standalone VMI to open the VirtualMachineInstance details page.
- On the Details tab, click the pencil icon beside Annotations or Labels.
- Make the relevant changes and click Save.
7.9.5. Deleting a standalone virtual machine instance using the CLI
You can delete a standalone virtual machine instance (VMI) by using the oc command-line interface (CLI).
Prerequisites
- Identify the name of the VMI that you want to delete.
Procedure
Delete the VMI by running the following command:
$ oc delete vmi <vmi_name>
7.9.6. Deleting a standalone virtual machine instance using the web console
Delete a standalone virtual machine instance (VMI) from the web console.
Procedure
- In the OpenShift Container Platform web console, click Virtualization → VirtualMachines from the side menu.
- Click Actions → Delete VirtualMachineInstance.
- In the confirmation pop-up window, click Delete to permanently delete the standalone VMI.
7.10. Controlling virtual machine states
You can stop, start, restart, and unpause virtual machines from the web console.
You can use virtctl to manage virtual machine states and perform other actions from the CLI. For example, you can use virtctl to force stop a VM or expose a port.
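As a quick sketch of common state operations from the CLI, the following commands use a placeholder VM name; confirm the exact subcommands and flags with virtctl --help for your version:
$ virtctl start example-vm
$ virtctl stop example-vm
$ virtctl restart example-vm
$ virtctl pause vm example-vm
$ virtctl unpause vm example-vm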
7.10.1. Starting a virtual machine
You can start a virtual machine from the web console.
Procedure
- Click Virtualization → VirtualMachines from the side menu.
- Find the row that contains the virtual machine that you want to start.
Navigate to the appropriate menu for your use case:
To stay on this page, where you can perform actions on multiple virtual machines:
- Click the Options menu located at the far right end of the row and click Start VirtualMachine.
To view comprehensive information about the selected virtual machine before you start it:
- Access the VirtualMachine details page by clicking the name of the virtual machine.
- Click Actions → Start.
When you start a virtual machine that is provisioned from a URL source for the first time, the virtual machine has a status of Importing while OpenShift Virtualization imports the container disk from the URL endpoint. Depending on the size of the image, this process might take several minutes.
7.10.2. Stopping a virtual machine
You can stop a virtual machine from the web console.
Procedure
- Click Virtualization → VirtualMachines from the side menu.
- Find the row that contains the virtual machine that you want to stop.
Navigate to the appropriate menu for your use case:
To stay on this page, where you can perform actions on multiple virtual machines:
- Click the Options menu located at the far right end of the row and click Stop VirtualMachine.
To view comprehensive information about the selected virtual machine before you stop it:
- Access the VirtualMachine details page by clicking the name of the virtual machine.
- Click Actions → Stop.
7.10.3. Restarting a virtual machine
You can restart a running virtual machine from the web console.
To avoid errors, do not restart a virtual machine while it has a status of Importing.
Procedure
- Click Virtualization → VirtualMachines from the side menu.
- Find the row that contains the virtual machine that you want to restart.
Navigate to the appropriate menu for your use case:
To stay on this page, where you can perform actions on multiple virtual machines:
- Click the Options menu located at the far right end of the row and click Restart.
To view comprehensive information about the selected virtual machine before you restart it:
- Access the VirtualMachine details page by clicking the name of the virtual machine.
- Click Actions → Restart.
7.10.4. Pausing a virtual machine
You can pause a virtual machine from the web console.
Procedure
- Click Virtualization → VirtualMachines from the side menu.
- Find the row that contains the virtual machine that you want to pause.
Navigate to the appropriate menu for your use case:
To stay on this page, where you can perform actions on multiple virtual machines:
- Click the Options menu located at the far right end of the row and click Pause VirtualMachine.
To view comprehensive information about the selected virtual machine before you pause it:
- Access the VirtualMachine details page by clicking the name of the virtual machine.
- Click Actions → Pause.
7.10.5. Unpausing a virtual machine
You can unpause a paused virtual machine from the web console.
Prerequisites
- At least one of your virtual machines must have a status of Paused.
Procedure
- Click Virtualization → VirtualMachines from the side menu.
- Find the row that contains the virtual machine that you want to unpause.
Navigate to the appropriate menu for your use case:
To stay on this page, where you can perform actions on multiple virtual machines:
- Click the Options menu located at the far right end of the row and click Unpause VirtualMachine.
To view comprehensive information about the selected virtual machine before you unpause it:
- Access the VirtualMachine details page by clicking the name of the virtual machine.
- Click Actions → Unpause.
7.11. Using virtual Trusted Platform Module devices
Add a virtual Trusted Platform Module (vTPM) device to a new or existing virtual machine by editing the VirtualMachine (VM) or VirtualMachineInstance (VMI) manifest.
7.11.1. About vTPM devices
A virtual Trusted Platform Module (vTPM) device functions like a physical Trusted Platform Module (TPM) hardware chip.
You can use a vTPM device with any operating system, but Windows 11 requires the presence of a TPM chip to install or boot. A vTPM device allows VMs created from a Windows 11 image to function without a physical TPM chip.
If you do not enable vTPM, then the VM does not recognize a TPM device, even if the node has one.
A vTPM device also protects virtual machines by storing secrets without physical hardware. OpenShift Virtualization supports persisting vTPM device state by using Persistent Volume Claims (PVCs) for VMs. You must specify the storage class to be used by the PVC by setting the vmStateStorageClass attribute in the HyperConverged custom resource (CR):
kind: HyperConverged
metadata:
name: kubevirt-hyperconverged
spec:
vmStateStorageClass: <storage_class_name>
# ...
The storage class must be of type Filesystem and support the ReadWriteMany (RWX) access mode.
7.11.2. Adding a vTPM device to a virtual machine
Adding a virtual Trusted Platform Module (vTPM) device to a virtual machine (VM) allows you to run a VM created from a Windows 11 image without a physical TPM device. A vTPM device also stores secrets for that VM.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have configured a Persistent Volume Claim (PVC) to use a storage class of type Filesystem that supports the ReadWriteMany (RWX) access mode. This is necessary for the vTPM device data to persist across VM reboots.
Procedure
Run the following command to update the VM configuration:
$ oc edit vm <vm_name> -n <namespace>
Edit the VM specification to add the vTPM device. For example:
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm
spec:
  template:
    spec:
      domain:
        devices:
          tpm:
            persistent: true
# ...
spec.template.spec.domain.devices.tpm specifies the vTPM device to add to the VM.
spec.template.spec.domain.devices.tpm.persistent specifies that the vTPM device state persists after the VM is shut down. The default value is false.
- To apply your changes, save and exit the editor.
- Optional: If you edited a running virtual machine, you must restart it for the changes to take effect.
7.12. Managing virtual machines with OpenShift Pipelines
Red Hat OpenShift Pipelines is a Kubernetes-native CI/CD framework that allows developers to design and run each step of the CI/CD pipeline in its own container.
The Scheduling, Scale, and Performance (SSP) Operator integrates OpenShift Virtualization with OpenShift Pipelines. The SSP Operator includes tasks and example pipelines that allow you to:
- Create and manage virtual machines (VMs), persistent volume claims (PVCs), and data volumes
- Run commands in VMs
- Manipulate disk images with libguestfs tools
Managing virtual machines with Red Hat OpenShift Pipelines is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
7.12.1. Prerequisites
- You have access to an OpenShift Container Platform cluster with cluster-admin permissions.
- You have installed the OpenShift CLI (oc).
- You have installed OpenShift Pipelines.
7.12.2. Deploying the Scheduling, Scale, and Performance (SSP) resources
The SSP Operator example Tekton Tasks and Pipelines are not deployed by default when you install OpenShift Virtualization. To deploy the SSP Operator’s Tekton resources, enable the deployTektonTaskResources feature gate in the HyperConverged custom resource (CR).
Procedure
Open the HyperConverged CR in your default editor by running the following command:
$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
Set the spec.featureGates.deployTektonTaskResources field to true.
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: kubevirt-hyperconverged
spec:
  tektonPipelinesNamespace: <user_namespace>
  featureGates:
    deployTektonTaskResources: true
# ...
Note
The tasks and example pipelines remain available even if you disable the feature gate later.
- Save your changes and exit the editor.
7.12.3. Virtual machine tasks supported by the SSP Operator
The following table shows the tasks that are included as part of the SSP Operator.
| Task | Description |
|---|---|
|
| Create a virtual machine from a provided manifest or with
|
|
| Create a virtual machine from a template. |
|
| Copy a virtual machine template. |
|
| Modify a virtual machine template. |
|
| Create or delete data volumes or data sources. |
|
| Run a script or a command in a virtual machine and stop or delete the virtual machine afterward. |
|
| Use the
|
|
| Use the
|
|
| Wait for a specific status of a virtual machine instance and fail or succeed based on the status. |
Virtual machine creation in pipelines now uses ClusterInstanceType and ClusterPreference instead of template-based tasks, which have been deprecated. The create-vm-from-template, copy-template, and modify-vm-template tasks remain available but are deprecated.
7.12.4. Example pipelines
The SSP Operator includes the following example Pipeline manifests. You can run the example pipelines by using the web console or the CLI.
You might have to run more than one installer pipeline if you need multiple versions of Windows. If you run more than one installer pipeline, each one requires unique parameters, such as the autounattend config map and the base image name.
- Windows EFI installer pipeline
- This pipeline installs Windows 11 or Windows Server 2022 into a new data volume from a Windows installation image (ISO file). A custom answer file is used to run the installation process.
- Windows BIOS installer pipeline
- This pipeline installs Windows 10 into a new data volume from a Windows installation image, also called an ISO file. A custom answer file is used to run the installation process.
- Windows customize pipeline
- This pipeline clones the data volume of a basic Windows 10, 11, or Windows Server 2022 installation, customizes it by installing Microsoft SQL Server Express or Microsoft Visual Studio Code, and then creates a new image and template.
The example pipelines use a config map file with sysprep data predefined by OpenShift Virtualization.
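If you need your own answer file, one way to supply it is as a config map created from a local file. This is only a sketch; the config map name, file name, and namespace are illustrative and must match whatever parameters your pipeline run expects:
$ oc create configmap custom-sysprep --from-file=autounattend.xml=./autounattend.xml -n <namespace>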
7.12.4.1. Running the example pipelines using the web console
You can run the example pipelines from the Pipelines menu in the web console.
Procedure
- Click Pipelines → Pipelines in the side menu.
- Select a pipeline to open the Pipeline details page.
- From the Actions list, select Start. The Start Pipeline dialog is displayed.
- Keep the default values for the parameters and then click Start to run the pipeline. The Details tab tracks the progress of each task and displays the pipeline status.
7.12.4.2. Running the example pipelines using the CLI
Use a PipelineRun resource to run the example pipelines. A PipelineRun object is the running instance of a pipeline. It instantiates a pipeline for execution with specific inputs, outputs, and execution parameters on a cluster. It also creates a TaskRun object for each task in the pipeline.
Procedure
To run the Windows 10 installer pipeline, create the following
manifest:PipelineRunapiVersion: tekton.dev/v1beta1 kind: PipelineRun metadata: generateName: windows10-installer-run- labels: pipelinerun: windows10-installer-run spec: params: - name: winImageDownloadURL value: <link_to_windows_10_iso> pipelineRef: name: windows10-installer taskRunSpecs: - pipelineTaskName: copy-template taskServiceAccountName: copy-template-task - pipelineTaskName: modify-vm-template taskServiceAccountName: modify-vm-template-task - pipelineTaskName: create-vm-from-template taskServiceAccountName: create-vm-from-template-task - pipelineTaskName: wait-for-vmi-status taskServiceAccountName: wait-for-vmi-status-task - pipelineTaskName: create-base-dv taskServiceAccountName: modify-data-object-task - pipelineTaskName: cleanup-vm taskServiceAccountName: cleanup-vm-task status: {}Where
is the URL for the Windows 10 64-bit ISO file. The product language must be English (United States).<link_to_windows_10_iso>Apply the
manifest:PipelineRun$ oc apply -f windows10-installer-run.yamlTo run the Windows 10 customize pipeline, create the following
manifest:PipelineRunapiVersion: tekton.dev/v1beta1 kind: PipelineRun metadata: generateName: windows10-customize-run- labels: pipelinerun: windows10-customize-run spec: params: - name: allowReplaceGoldenTemplate value: true - name: allowReplaceCustomizationTemplate value: true pipelineRef: name: windows10-customize taskRunSpecs: - pipelineTaskName: copy-template-customize taskServiceAccountName: copy-template-task - pipelineTaskName: modify-vm-template-customize taskServiceAccountName: modify-vm-template-task - pipelineTaskName: create-vm-from-template taskServiceAccountName: create-vm-from-template-task - pipelineTaskName: wait-for-vmi-status taskServiceAccountName: wait-for-vmi-status-task - pipelineTaskName: create-base-dv taskServiceAccountName: modify-data-object-task - pipelineTaskName: cleanup-vm taskServiceAccountName: cleanup-vm-task - pipelineTaskName: copy-template-golden taskServiceAccountName: copy-template-task - pipelineTaskName: modify-vm-template-golden taskServiceAccountName: modify-vm-template-task status: {}Apply the
manifest:PipelineRun$ oc apply -f windows10-customize-run.yaml
7.13. Advanced virtual machine management
7.13.1. Working with resource quotas for virtual machines
Create and manage resource quotas for virtual machines.
7.13.1.1. Setting resource quota limits for virtual machines
Resource quotas that only use requests automatically work with virtual machines (VMs). If your resource quota uses limits, you must manually set resource limits on VMs. Resource limits must be at least 100 MiB larger than resource requests.
Procedure
Set limits for a VM by editing the VirtualMachine manifest. For example:
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: with-limits
spec:
  running: false
  template:
    spec:
      domain:
        # ...
        resources:
          requests:
            memory: 128Mi
          limits:
            memory: 256Mi
This configuration is supported because the limits.memory value is at least 100Mi larger than the requests.memory value.
- Save the VirtualMachine manifest.
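For context, a namespace quota that enforces limits might look like the following sketch; the quota name and values are illustrative. When such a quota is present, every VM in the namespace must declare resource limits as shown above:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: example-quota
  namespace: example-namespace
spec:
  hard:
    limits.cpu: "8"
    limits.memory: 16Gi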
7.13.2. Specifying nodes for virtual machines
You can place virtual machines (VMs) on specific nodes by using node placement rules.
7.13.2.1. About node placement for virtual machines
To ensure that virtual machines (VMs) run on appropriate nodes, you can configure node placement rules. You might want to do this if:
- You have several VMs. To ensure fault tolerance, you want them to run on different nodes.
- You have two chatty VMs. To avoid redundant inter-node routing, you want the VMs to run on the same node.
- Your VMs require specific hardware features that are not present on all available nodes.
- You have a pod that adds capabilities to a node, and you want to place a VM on that node so that it can use those capabilities.
Virtual machine placement relies on any existing node placement rules for workloads. If workloads are excluded from specific nodes on the component level, virtual machines cannot be placed on those nodes.
You can use the following rule types in the spec field of a VirtualMachine manifest:
nodeSelector
  Allows virtual machines to be scheduled on nodes that are labeled with the key-value pair or pairs that you specify in this field. The node must have labels that exactly match all listed pairs.
affinity
  Enables you to use more expressive syntax to set rules that match nodes with virtual machines. For example, you can specify that a rule is a preference, rather than a hard requirement, so that virtual machines are still scheduled if the rule is not satisfied. Pod affinity, pod anti-affinity, and node affinity are supported for virtual machine placement. Pod affinity works for virtual machines because the VirtualMachine workload type is based on the Pod object.
tolerations
  Allows virtual machines to be scheduled on nodes that have matching taints. If a taint is applied to a node, that node only accepts virtual machines that tolerate the taint.
NoteAffinity rules only apply during scheduling. OpenShift Container Platform does not reschedule running workloads if the constraints are no longer met.
7.13.2.2. Node placement examples
The following example YAML file snippets use the nodePlacement, affinity, and tolerations fields to optimize node placement for virtual machines.
7.13.2.2.1. Example: VM node placement with nodeSelector
In this example, the virtual machine requires a node that has metadata containing both the example-key-1 = example-value-1 and example-key-2 = example-value-2 labels.
If there are no nodes that fit this description, the virtual machine is not scheduled.
Example VM manifest
metadata:
name: example-vm-node-selector
apiVersion: kubevirt.io/v1
kind: VirtualMachine
spec:
template:
spec:
nodeSelector:
example-key-1: example-value-1
example-key-2: example-value-2
# ...
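For this example to schedule, a node must carry both labels. The following is a minimal sketch of applying them; the node name is a placeholder:
$ oc label node <node_name> example-key-1=example-value-1 example-key-2=example-value-2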
7.13.2.2.2. Example: VM node placement with pod affinity and pod anti-affinity
In this example, the VM must be scheduled on a node that has a running pod with the label example-key-1 = example-value-1.
If possible, the VM is not scheduled on a node that has any pod with the label example-key-2 = example-value-2.
Example VM manifest
metadata:
name: example-vm-pod-affinity
apiVersion: kubevirt.io/v1
kind: VirtualMachine
spec:
template:
spec:
affinity:
podAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: example-key-1
operator: In
values:
- example-value-1
topologyKey: kubernetes.io/hostname
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: example-key-2
operator: In
values:
- example-value-2
topologyKey: kubernetes.io/hostname
# ...
If you use the requiredDuringSchedulingIgnoredDuringExecution rule type, the VM is not scheduled if the constraint is not met.
If you use the preferredDuringSchedulingIgnoredDuringExecution rule type, the VM is still scheduled if the constraint is not met, as long as all required constraints are met.
7.13.2.2.3. Example: VM node placement with node affinity
In this example, the VM must be scheduled on a node that has the label example.io/example-key = example-value-1 or the label example.io/example-key = example-value-2. The constraint is met if only one of the labels is present on the node. If possible, the scheduler avoids nodes that have the label example-node-label-key = example-node-label-value.
Example VM manifest
metadata:
name: example-vm-node-affinity
apiVersion: kubevirt.io/v1
kind: VirtualMachine
spec:
template:
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: example.io/example-key
operator: In
values:
- example-value-1
- example-value-2
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
preference:
matchExpressions:
- key: example-node-label-key
operator: In
values:
- example-node-label-value
# ...
If you use the requiredDuringSchedulingIgnoredDuringExecution rule type, the VM is not scheduled if the constraint is not met.
If you use the preferredDuringSchedulingIgnoredDuringExecution rule type, the VM is still scheduled if the constraint is not met, as long as all required constraints are met.
7.13.2.2.4. Example: VM node placement with tolerations
In this example, nodes that are reserved for virtual machines are already labeled with the key=virtualization:NoSchedule taint. Because this virtual machine has matching tolerations, it can schedule onto the tainted nodes.
A virtual machine that tolerates a taint is not required to schedule onto a node with that taint.
Example VM manifest
metadata:
name: example-vm-tolerations
apiVersion: kubevirt.io/v1
kind: VirtualMachine
spec:
tolerations:
- key: "key"
operator: "Equal"
value: "virtualization"
effect: "NoSchedule"
# ...
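For completeness, the taint that this example expects can be applied to a node as follows; the node name is a placeholder:
$ oc adm taint nodes <node_name> key=virtualization:NoSchedule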
7.13.3. Configuring certificate rotation
Configure certificate rotation parameters to replace existing certificates.
7.13.3.1. Configuring certificate rotation
You can do this during OpenShift Virtualization installation in the web console or after installation in the HyperConverged custom resource (CR).
Procedure
Open the HyperConverged CR by running the following command:
$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
Edit the spec.certConfig fields as shown in the following example. To avoid overloading the system, ensure that all values are greater than or equal to 10 minutes. Express all values as strings that comply with the golang ParseDuration format.
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  certConfig:
    ca:
      duration: 48h0m0s
      renewBefore: 24h0m0s
    server:
      duration: 24h0m0s
      renewBefore: 12h0m0s
Apply the YAML file to your cluster.
7.13.3.2. Troubleshooting certificate rotation parameters
Deleting one or more certConfig values causes them to revert to the default values, unless the default values conflict with one of the following conditions:
- The value of ca.renewBefore must be less than or equal to the value of ca.duration.
- The value of server.duration must be less than or equal to the value of ca.duration.
- The value of server.renewBefore must be less than or equal to the value of server.duration.
If the default values conflict with these conditions, you will receive an error.
If you remove the server.duration value, the default value of 24h0m0s is greater than the ca.duration value, which conflicts with the required conditions.
Example
certConfig:
ca:
duration: 4h0m0s
renewBefore: 1h0m0s
server:
duration: 4h0m0s
renewBefore: 4h0m0s
This results in the following error message:
error: hyperconvergeds.hco.kubevirt.io "kubevirt-hyperconverged" could not be patched: admission webhook "validate-hco.kubevirt.io" denied the request: spec.certConfig: ca.duration is smaller than server.duration
The error message only mentions the first conflict. Review all certConfig values before you proceed.
7.13.4. Configuring the default CPU model
Use the defaultCPUModel setting in the HyperConverged CR to define a cluster-wide default CPU model.
The virtual machine (VM) CPU model depends on the availability of CPU models within the VM and the cluster.
If the VM does not have a defined CPU model:
- The defaultCPUModel is automatically set using the CPU model defined at the cluster-wide level.
If both the VM and the cluster have a defined CPU model:
- The VM’s CPU model takes precedence.
If neither the VM nor the cluster have a defined CPU model:
- The host-model is automatically set using the CPU model defined at the host level.
7.13.4.1. Configuring the default CPU model
Configure the defaultCPUModel by editing the HyperConverged custom resource (CR). The defaultCPUModel applies cluster-wide and is used by any VM that does not define its own CPU model.
Prerequisites
- Install the OpenShift CLI (oc).
Procedure
Open the HyperConverged CR by running the following command:

$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv

Add the defaultCPUModel field to the CR and set the value to the name of a CPU model that exists in the cluster:

apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  defaultCPUModel: "EPYC"

- Apply the YAML file to your cluster.
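To confirm that the CPU model you chose (for example, "EPYC") is actually reported by the cluster nodes, you can inspect the node labels. This is an illustrative sketch, assuming the cluster labels nodes with cpu-model.node.kubevirt.io/ labels as KubeVirt normally does; it is not part of the original procedure:

$ oc get nodes -o json | jq '.items[].metadata.labels
  | with_entries(select(.key | startswith("cpu-model.node.kubevirt.io/")))'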
7.13.5. Using UEFI mode for virtual machines
You can boot a virtual machine (VM) in Unified Extensible Firmware Interface (UEFI) mode.
7.13.5.1. About UEFI mode for virtual machines
Unified Extensible Firmware Interface (UEFI), like legacy BIOS, initializes hardware components and operating system image files when a computer starts. UEFI supports more modern features and customization options than BIOS, enabling faster boot times.
It stores all the information about initialization and startup in a file with a .efi extension, which is stored on a special partition called EFI System Partition (ESP).
7.13.5.2. Booting virtual machines in UEFI mode
You can configure a virtual machine to boot in UEFI mode by editing the VirtualMachine manifest.
Prerequisites
- Install the OpenShift CLI (oc).
Procedure
Edit or create a VirtualMachine manifest file. Use the spec.firmware.bootloader stanza to configure UEFI mode:

Booting in UEFI mode with Secure Boot active

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  labels:
    special: vm-secureboot
  name: vm-secureboot
spec:
  template:
    metadata:
      labels:
        special: vm-secureboot
    spec:
      domain:
        devices:
          disks:
          - disk:
              bus: virtio
            name: containerdisk
        features:
          acpi: {}
          smm:
            enabled: true  # (1)
        firmware:
          bootloader:
            efi:
              secureBoot: true  # (2)
# ...

- 1: OpenShift Virtualization requires System Management Mode (SMM) to be enabled for Secure Boot in UEFI mode to occur.
- 2: OpenShift Virtualization supports a VM with or without Secure Boot when using UEFI mode. If Secure Boot is enabled, then UEFI mode is required. However, UEFI mode can be enabled without using Secure Boot.
Apply the manifest to your cluster by running the following command:
$ oc create -f <file_name>.yaml
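Because Secure Boot is optional in UEFI mode (see callout 2), a VM can also boot with UEFI alone. The following stanza is a minimal sketch of that variant and is not taken from the original document:

spec:
  template:
    spec:
      domain:
        firmware:
          bootloader:
            efi:
              secureBoot: false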
7.13.6. Configuring PXE booting for virtual machines
PXE booting, or network booting, is available in OpenShift Virtualization. Network booting allows a computer to boot and load an operating system or other program without requiring a locally attached storage device. For example, you can use it to choose your desired OS image from a PXE server when deploying a new host.
7.13.6.1. Prerequisites
- A Linux bridge must be connected.
- The PXE server must be connected to the same VLAN as the bridge.
7.13.6.2. PXE booting with a specified MAC address
As an administrator, you can boot a client over the network by first creating a NetworkAttachmentDefinition object for your PXE network, and then referencing the network attachment definition in your virtual machine instance configuration before you start the virtual machine instance. You can also specify a MAC address, if required by the PXE server.
Prerequisites
- A Linux bridge must be connected.
- The PXE server must be connected to the same VLAN as the bridge.
Procedure
Configure a PXE network on the cluster:
Create the network attachment definition file for PXE network pxe-net-conf:

apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: pxe-net-conf
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "pxe-net-conf",
      "type": "bridge",
      "bridge": "bridge-interface",
      "macspoofchk": false,
      "vlan": 100,
      "preserveDefaultVlan": false
    }

- metadata.name specifies the name for the NetworkAttachmentDefinition object.
- spec.config.name specifies the name for the configuration. It is recommended to match the configuration name to the name value of the network attachment definition.
- spec.config.type specifies the actual name of the Container Network Interface (CNI) plugin that provides the network for this network attachment definition. This example uses a Linux bridge CNI plugin. You can also use an OVN-Kubernetes localnet or an SR-IOV CNI plugin.
- spec.config.bridge specifies the name of the Linux bridge configured on the node.
- spec.config.macspoofchk is an optional flag to enable the MAC spoof check. When set to true, you cannot change the MAC address of the pod or guest interface. This attribute allows only a single MAC address to exit the pod, which provides security against a MAC spoofing attack.
- spec.config.vlan is an optional VLAN tag. No additional VLAN configuration is required on the node network configuration policy.
- spec.config.preserveDefaultVlan is an optional flag that indicates whether the VM connects to the bridge through the default VLAN. The default value is true.
Create the network attachment definition by using the file you created in the previous step:
$ oc create -f pxe-net-conf.yamlEdit the virtual machine instance configuration file to include the details of the interface and network.
Specify the network and MAC address, if required by the PXE server. If the MAC address is not specified, a value is assigned automatically.
Ensure that
is set tobootOrderso that the interface boots first. In this example, the interface is connected to a network called1:<pxe-net>interfaces: - masquerade: {} name: default - bridge: {} name: pxe-net macAddress: de:00:00:00:00:de bootOrder: 1NoteBoot order is global for interfaces and disks.
Assign a boot device number to the disk to ensure proper booting after operating system provisioning.
Set the disk
value tobootOrder:2devices: disks: - disk: bus: virtio name: containerdisk bootOrder: 2Specify that the network is connected to the previously created network attachment definition. In this scenario,
is connected to the network attachment definition called<pxe-net>:<pxe-net-conf>networks: - name: default pod: {} - name: pxe-net multus: networkName: pxe-net-conf
Create the virtual machine instance:
$ oc create -f vmi-pxe-boot.yamlExample output
virtualmachineinstance.kubevirt.io "vmi-pxe-boot" createdWait for the virtual machine instance to run:
$ oc get vmi vmi-pxe-boot -o yaml | grep -i phase phase: RunningView the virtual machine instance using VNC:
$ virtctl vnc vmi-pxe-boot- Watch the boot screen to verify that the PXE boot is successful.
Log in to the virtual machine instance:
$ virtctl console vmi-pxe-boot
Verification
Verify the interfaces and MAC address on the virtual machine and that the interface connected to the bridge has the specified MAC address. In this case, we used
for the PXE boot, without an IP address. The other interface,eth1, got an IP address from OpenShift Container Platform.eth0$ ip addrExample output
... 3. eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000 link/ether de:00:00:00:00:de brd ff:ff:ff:ff:ff:ff
7.13.6.3. OpenShift Virtualization networking glossary
The following terms are used throughout OpenShift Virtualization documentation:
- Container Network Interface (CNI)
- A Cloud Native Computing Foundation project, focused on container network connectivity. OpenShift Virtualization uses CNI plugins to build upon the basic Kubernetes networking functionality.
- Multus
- A "meta" CNI plugin that allows multiple CNIs to exist so that a pod or virtual machine can use the interfaces it needs.
- Custom resource definition (CRD)
- A Kubernetes API resource that allows you to define custom resources, or an object defined by using the CRD API resource.
- Network attachment definition (NAD)
- A CRD introduced by the Multus project that allows you to attach pods, virtual machines, and virtual machine instances to one or more networks.
- Node network configuration policy (NNCP)
-
A CRD introduced by the nmstate project, describing the requested network configuration on nodes. You update the node network configuration, including adding and removing interfaces, by applying a
NodeNetworkConfigurationPolicymanifest to the cluster.
7.13.7. Using huge pages with virtual machines
You can use huge pages as backing memory for virtual machines in your cluster.
7.13.7.1. Prerequisites
- Nodes must have pre-allocated huge pages configured.
7.13.7.2. What huge pages do
Memory is managed in blocks known as pages. On most systems, a page is 4Ki. 1Mi of memory is equal to 256 pages; 1Gi of memory is 256,000 pages, and so on. CPUs have a built-in memory management unit that manages a list of these pages in hardware. The Translation Lookaside Buffer (TLB) is a small hardware cache of virtual-to-physical page mappings. If the virtual address passed in a hardware instruction can be found in the TLB, the mapping can be determined quickly. If not, a TLB miss occurs, and the system falls back to slower, software-based address translation, resulting in performance issues. Since the size of the TLB is fixed, the only way to reduce the chance of a TLB miss is to increase the page size.
A huge page is a memory page that is larger than 4Ki. On x86_64 architectures, there are two common huge page sizes: 2Mi and 1Gi. Sizes vary on other architectures. To use huge pages, code must be written so that applications are aware of them. Transparent Huge Pages (THP) attempt to automate the management of huge pages without application knowledge, but they have limitations. In particular, they are limited to 2Mi page sizes. THP can lead to performance degradation on nodes with high memory utilization or fragmentation due to defragmenting efforts of THP, which can lock memory pages. For this reason, some applications may be designed to (or recommend) usage of pre-allocated huge pages instead of THP.
In OpenShift Virtualization, virtual machines can be configured to consume pre-allocated huge pages.
7.13.7.3. Configuring huge pages for virtual machines
You can configure virtual machines to use pre-allocated huge pages by including the memory.hugepages.pageSize and resources.requests.memory parameters in your virtual machine configuration.
The memory request must be divisible by the page size. For example, you cannot request 500Mi of memory with a page size of 1Gi.
The memory layouts of the host and the guest OS are unrelated. Huge pages requested in the virtual machine manifest apply to QEMU. Huge pages inside the guest can only be configured based on the amount of available memory of the virtual machine instance.
If you edit a running virtual machine, the virtual machine must be rebooted for the changes to take effect.
Prerequisites
- Nodes must have pre-allocated huge pages configured.
Procedure
In your virtual machine configuration, add the resources.requests.memory and memory.hugepages.pageSize parameters to the spec.domain stanza. The following configuration snippet is for a virtual machine that requests a total of 4Gi memory with a page size of 1Gi:

kind: VirtualMachine
# ...
spec:
  domain:
    resources:
      requests:
        memory: "4Gi"
    memory:
      hugepages:
        pageSize: "1Gi"
# ...

Apply the virtual machine configuration:
$ oc apply -f <virtual_machine>.yaml
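As an optional sanity check, not part of the original procedure, you can confirm that the target node actually reports pre-allocated 1Gi huge pages before or after applying the VM configuration:

$ oc describe node <node_name> | grep hugepages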
7.13.8. Enabling dedicated resources for virtual machines
To improve performance, you can dedicate node resources, such as CPU, to a virtual machine.
7.13.8.1. About dedicated resources
When you enable dedicated resources for your virtual machine, your virtual machine’s workload is scheduled on CPUs that will not be used by other processes. By using dedicated resources, you can improve the performance of the virtual machine and the accuracy of latency predictions.
7.13.8.2. Prerequisites
- The CPU Manager must be configured on the node. Verify that the node has the cpumanager = true label before scheduling virtual machine workloads.
- The virtual machine must be powered off.
7.13.8.3. Enabling dedicated resources for a virtual machine
You enable dedicated resources for a virtual machine in the Details tab. Virtual machines that were created from a Red Hat template can be configured with dedicated resources.
Procedure
- In the OpenShift Container Platform console, click Virtualization → VirtualMachines from the side menu.
- Select a virtual machine to open the VirtualMachine details page.
- On the Configuration → Scheduling tab, click the edit icon beside Dedicated Resources.
- Select Schedule this workload with dedicated resources (guaranteed policy).
- Click Save.
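The web console steps above map to a single field in the VM definition. If you prefer to edit YAML directly, a minimal sketch of the equivalent setting is shown below; this CLI variant is not part of the original procedure:

spec:
  template:
    spec:
      domain:
        cpu:
          dedicatedCpuPlacement: true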
7.13.9. Scheduling virtual machines
You can schedule a virtual machine (VM) on a node by ensuring that the VM’s CPU model and policy attribute are matched for compatibility with the CPU models and policy attributes supported by the node.
7.13.9.1. Policy attributes
You can schedule a virtual machine (VM) by specifying a policy attribute and a CPU feature that is matched for compatibility when the VM is scheduled on a node. A policy attribute specified for a VM determines how that VM is scheduled on a node.
| Policy attribute | Description |
|---|---|
| force | The VM is forced to be scheduled on a node. This is true even if the host CPU does not support the VM’s CPU. |
| require | Default policy that applies to a VM if the VM is not configured with a specific CPU model and feature specification. If a node is not configured to support CPU node discovery with this default policy attribute or any one of the other policy attributes, VMs are not scheduled on that node. Either the host CPU must support the VM’s CPU or the hypervisor must be able to emulate the supported CPU model. |
| optional | The VM is added to a node if that VM is supported by the host’s physical machine CPU. |
| disable | The VM cannot be scheduled with CPU node discovery. |
| forbid | The VM is not scheduled even if the feature is supported by the host CPU and CPU node discovery is enabled. |
7.13.9.2. Setting a policy attribute and CPU feature
You can set a policy attribute and CPU feature for each virtual machine (VM) to ensure that it is scheduled on a node according to policy and feature. The CPU feature that you set is verified to ensure that it is supported by the host CPU or emulated by the hypervisor.
Procedure
Edit the domain spec of your VM configuration file. The following example sets the CPU feature and the require policy for a virtual machine (VM):

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: myvm
spec:
  template:
    spec:
      domain:
        cpu:
          features:
          - name: apic
            policy: require

- spec.template.spec.domain.cpu.features.name defines the name of the CPU feature for the VM.
- spec.template.spec.domain.cpu.features.policy defines the policy attribute for the VM.
7.13.9.3. Scheduling virtual machines with the supported CPU model
You can configure a CPU model for a virtual machine (VM) to schedule it on a node where its CPU model is supported.
Procedure
Edit the domain spec of your virtual machine configuration file. The following example shows a specific CPU model defined for a VM:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: myvm
spec:
  template:
    spec:
      domain:
        cpu:
          model: Conroe
# ...

- spec.template.spec.domain.cpu.model defines the CPU model for the VM.
7.13.9.4. Scheduling virtual machines with the host model
When the CPU model for a virtual machine (VM) is set to host-model, the VM inherits the CPU model of the node where it is scheduled.
Procedure
Edit the domain spec of your VM configuration file. The following example shows host-model being specified for the virtual machine:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: myvm
spec:
  template:
    spec:
      domain:
        cpu:
          model: host-model

- spec.template.spec.domain.cpu.model set to host-model defines that the VM inherits the CPU model of the node where it is scheduled.
7.13.9.5. Scheduling virtual machines with a custom scheduler
You can use a custom scheduler to schedule a virtual machine (VM) on a node.
Prerequisites
- A secondary scheduler is configured for your cluster.
Procedure
Add the custom scheduler to the VM configuration by editing the VirtualMachine manifest. For example:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: vm-fedora
spec:
  running: true
  template:
    spec:
      schedulerName: my-scheduler
      domain:
        devices:
          disks:
          - name: containerdisk
            disk:
              bus: virtio
# ...

- schedulerName: The name of the custom scheduler. If the schedulerName value does not match an existing scheduler, the virt-launcher pod stays in a Pending state until the specified scheduler is found.
Verification
Verify that the VM is using the custom scheduler specified in the
manifest by checking theVirtualMachinepod events:virt-launcherView the list of pods in your cluster by entering the following command:
$ oc get podsExample output
NAME READY STATUS RESTARTS AGE virt-launcher-vm-fedora-dpc87 2/2 Running 0 24mRun the following command to display the pod events:
$ oc describe pod virt-launcher-vm-fedora-dpc87The value of the
field in the output verifies that the scheduler name matches the custom scheduler specified in theFrommanifest:VirtualMachineExample output
[...] Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 21m my-scheduler Successfully assigned default/virt-launcher-vm-fedora-dpc87 to node01 [...]
7.13.10. Configuring PCI passthrough
The Peripheral Component Interconnect (PCI) passthrough feature enables you to access and manage hardware devices from a virtual machine (VM). When PCI passthrough is configured, the PCI devices function as if they were physically attached to the guest operating system.
Cluster administrators can expose and manage host devices that are permitted to be used in the cluster by using the oc command-line interface (CLI).
7.13.10.1. Preparing nodes for GPU passthrough
You can prevent GPU operands from deploying on worker nodes that you designated for GPU passthrough.
7.13.10.1.1. Preventing NVIDIA GPU operands from deploying on nodes
If you use the NVIDIA GPU Operator in your cluster, you can apply the nvidia.com/gpu.deploy.operands=false label to a node to prevent the Operator from deploying its GPU operands on that node.
Prerequisites
- The OpenShift CLI (oc) is installed.
Procedure
Label the node by running the following command:
$ oc label node <node_name> nvidia.com/gpu.deploy.operands=falsewhere:
<node_name>- Specifies the name of a node where you do not want to install the NVIDIA GPU operands.
Verification
Verify that the label was added to the node by running the following command:
$ oc describe node <node_name>Optional: If GPU operands were previously deployed on the node, verify their removal.
Check the status of the pods in the
namespace by running the following command:nvidia-gpu-operator$ oc get pods -n nvidia-gpu-operatorExample output
NAME READY STATUS RESTARTS AGE gpu-operator-59469b8c5c-hw9wj 1/1 Running 0 8d nvidia-sandbox-validator-7hx98 1/1 Running 0 8d nvidia-sandbox-validator-hdb7p 1/1 Running 0 8d nvidia-sandbox-validator-kxwj7 1/1 Terminating 0 9d nvidia-vfio-manager-7w9fs 1/1 Running 0 8d nvidia-vfio-manager-866pz 1/1 Running 0 8d nvidia-vfio-manager-zqtck 1/1 Terminating 0 9dMonitor the pod status until the pods with
status are removed:Terminating$ oc get pods -n nvidia-gpu-operatorExample output
NAME READY STATUS RESTARTS AGE gpu-operator-59469b8c5c-hw9wj 1/1 Running 0 8d nvidia-sandbox-validator-7hx98 1/1 Running 0 8d nvidia-sandbox-validator-hdb7p 1/1 Running 0 8d nvidia-vfio-manager-7w9fs 1/1 Running 0 8d nvidia-vfio-manager-866pz 1/1 Running 0 8d
7.13.10.2. Preparing host devices for PCI passthrough
7.13.10.2.1. About preparing a host device for PCI passthrough
To prepare a host device for PCI passthrough by using the CLI, create a MachineConfig object that enables the IOMMU driver, bind the PCI device to the Virtual Function I/O (VFIO) driver, and then expose the device in the cluster by adding it to the permittedHostDevices field of the HyperConverged custom resource (CR). The permittedHostDevices list is empty when you first install the OpenShift Virtualization Operator.

To remove a PCI host device from the cluster by using the CLI, delete the PCI device information from the HyperConverged CR.
7.13.10.2.2. Adding kernel arguments to enable the IOMMU driver
To enable the IOMMU driver in the kernel, create the MachineConfig object and add the kernel arguments.
Prerequisites
- You have cluster administrator permissions.
- Your CPU hardware is Intel or AMD.
- You enabled Intel Virtualization Technology for Directed I/O extensions or AMD IOMMU in the BIOS.
Procedure
Create a MachineConfig object that identifies the kernel argument. The following example shows a kernel argument for an Intel CPU.

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 100-worker-iommu
spec:
  config:
    ignition:
      version: 3.2.0
  kernelArguments:
  - intel_iommu=on
# ...

- metadata.labels.machineconfiguration.openshift.io/role specifies that the new kernel argument is applied only to worker nodes.
- metadata.name specifies the ranking of this kernel argument (100) among the machine configs and its purpose. If you have an AMD CPU, specify the kernel argument as amd_iommu=on.
- spec.kernelArguments specifies the kernel argument as intel_iommu for an Intel CPU.
Create the new
object:MachineConfig$ oc create -f 100-worker-kernel-arg-iommu.yaml
Verification
Verify that the new
object was added.MachineConfig$ oc get MachineConfig
7.13.10.2.3. Binding PCI devices to the VFIO driver
To bind PCI devices to the VFIO (Virtual Function I/O) driver, obtain the vendor-ID and device-ID values from each device and create a list with the values. Add this list to the MachineConfig object. The MachineConfig Operator generates the /etc/modprobe.d/vfio.conf file on the nodes with the PCI devices, and binds the PCI devices to the VFIO driver.
Prerequisites
- You added kernel arguments to enable IOMMU for the CPU.
Procedure
Run the
command to obtain thelspciand thevendor-IDfor the PCI device.device-ID$ lspci -nnv | grep -i nvidiaExample output
02:01.0 3D controller [0302]: NVIDIA Corporation GV100GL [Tesla V100 PCIe 32GB] [10de:1eb8] (rev a1)Create a Butane config file,
, binding the PCI device to the VFIO driver.100-worker-vfiopci.buNoteThe Butane version you specify in the config file should match the OpenShift Container Platform version and always ends in
. For example,0. See "Creating machine configs with Butane" for information about Butane.4.14.0Example
variant: openshift version: 4.14.0 metadata: name: 100-worker-vfiopci labels: machineconfiguration.openshift.io/role: worker storage: files: - path: /etc/modprobe.d/vfio.conf mode: 0644 overwrite: true contents: inline: | options vfio-pci ids=10de:1eb81 - path: /etc/modules-load.d/vfio-pci.conf2 mode: 0644 overwrite: true contents: inline: vfio-pci-
- metadata.labels.machineconfiguration.openshift.io/role: worker specifies that the new kernel argument is applied only to worker nodes.
- storage.files.contents.inline, where the path is /etc/modprobe.d/vfio.conf, specifies the previously determined 10de value (vendor-ID) and the 1eb8 value (device-ID) to bind a single device to the VFIO driver. You can add a list of multiple devices with their vendor and device information.
- storage.files.path, where the contents.inline is vfio-pci, specifies the file that loads the vfio-pci kernel module on the worker nodes.
Use Butane to generate a
object file,MachineConfig, containing the configuration to be delivered to the worker nodes:100-worker-vfiopci.yaml$ butane 100-worker-vfiopci.bu -o 100-worker-vfiopci.yamlApply the
object to the worker nodes:MachineConfig$ oc apply -f 100-worker-vfiopci.yamlVerify that the
object was added.MachineConfig$ oc get MachineConfigExample output
NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 00-worker d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-master-container-runtime d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-master-kubelet d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-worker-container-runtime d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-worker-kubelet d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 100-worker-iommu 3.2.0 30s 100-worker-vfiopci-configuration 3.2.0 30s
Verification
Verify that the VFIO driver is loaded.
$ lspci -nnk -d 10de:The output confirms that the VFIO driver is being used.
Example output
04:00.0 3D controller [0302]: NVIDIA Corporation GP102GL [Tesla P40] [10de:1eb8] (rev a1) Subsystem: NVIDIA Corporation Device [10de:1eb8] Kernel driver in use: vfio-pci Kernel modules: nouveau
7.13.10.2.4. Exposing PCI host devices in the cluster using the CLI
To expose PCI host devices in the cluster, add details about the PCI devices to the spec.permittedHostDevices.pciHostDevices array of the HyperConverged custom resource (CR).
Procedure
Edit the
CR in your default editor by running the following command:HyperConverged$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnvAdd the PCI device information to the
array. For example:spec.permittedHostDevices.pciHostDevicesExample configuration file
apiVersion: hco.kubevirt.io/v1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: permittedHostDevices: pciHostDevices: - pciDeviceSelector: "10DE:1DB6" resourceName: "nvidia.com/GV100GL_Tesla_V100" - pciDeviceSelector: "10DE:1EB8" resourceName: "nvidia.com/TU104GL_Tesla_T4" - pciDeviceSelector: "8086:6F54" resourceName: "intel.com/qat" externalResourceProvider: true # ...-
- spec.permittedHostDevices specifies the host devices that are permitted to be used in the cluster.
- spec.permittedHostDevices.pciHostDevices specifies the list of PCI devices available on the node.
- spec.permittedHostDevices.pciHostDevices.pciDeviceSelector specifies the vendor-ID and the device-ID required to identify the PCI device.
- spec.permittedHostDevices.pciHostDevices.resourceName specifies the name of a PCI host device.
- spec.permittedHostDevices.pciHostDevices.externalResourceProvider is an optional setting. Setting this field to true indicates that the resource is provided by an external device plugin. OpenShift Virtualization allows the usage of this device in the cluster but leaves the allocation and monitoring to an external device plugin.

Note: The above example snippet shows two PCI host devices that are named nvidia.com/GV100GL_Tesla_V100 and nvidia.com/TU104GL_Tesla_T4 added to the list of permitted host devices in the HyperConverged CR. These devices have been tested and verified to work with OpenShift Virtualization.
- Save your changes and exit the editor.
Verification
Verify that the PCI host devices were added to the node by running the following command. The example output shows that there is one device each associated with the
,nvidia.com/GV100GL_Tesla_V100, andnvidia.com/TU104GL_Tesla_T4resource names.intel.com/qat$ oc describe node <node_name>Example output
Capacity: cpu: 64 devices.kubevirt.io/kvm: 110 devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 915128Mi hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 131395264Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 1 pods: 250 Allocatable: cpu: 63500m devices.kubevirt.io/kvm: 110 devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 863623130526 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 130244288Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 1 pods: 250
7.13.10.2.5. Removing PCI host devices from the cluster using the CLI
To remove a PCI host device from the cluster, delete the information for that device from the HyperConverged custom resource (CR).
Procedure
Edit the
CR in your default editor by running the following command:HyperConverged$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnvRemove the PCI device information from the
array by deleting thespec.permittedHostDevices.pciHostDevices,pciDeviceSelectorandresourceName(if applicable) fields for the appropriate device. In this example, theexternalResourceProviderresource has been deleted.intel.com/qatExample configuration file
apiVersion: hco.kubevirt.io/v1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: permittedHostDevices: pciHostDevices: - pciDeviceSelector: "10DE:1DB6" resourceName: "nvidia.com/GV100GL_Tesla_V100" - pciDeviceSelector: "10DE:1EB8" resourceName: "nvidia.com/TU104GL_Tesla_T4" # ...- Save your changes and exit the editor.
Verification
Verify that the PCI host device was removed from the node by running the following command. The example output shows that there are zero devices associated with the
resource name.intel.com/qat$ oc describe node <node_name>Example output
Capacity: cpu: 64 devices.kubevirt.io/kvm: 110 devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 915128Mi hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 131395264Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 0 pods: 250 Allocatable: cpu: 63500m devices.kubevirt.io/kvm: 110 devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 863623130526 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 130244288Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 0 pods: 250
7.13.10.3. Configuring virtual machines for PCI passthrough
After the PCI devices have been added to the cluster, you can assign them to virtual machines. The PCI devices are now available as if they are physically connected to the virtual machines.
7.13.10.3.1. Assigning a PCI device to a virtual machine
When a PCI device is available in a cluster, you can assign it to a virtual machine and enable PCI passthrough.
Procedure
Assign the PCI device to a virtual machine as a host device.
Example
apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: domain: devices: hostDevices: - deviceName: nvidia.com/TU104GL_Tesla_T4 name: hostdevices1-
- spec.template.spec.domain.devices.hostDevices.deviceName specifies the name of the PCI device that is permitted on the cluster as a host device. The virtual machine can access this host device.
Verification
Use the following command to verify that the host device is available from the virtual machine.
$ lspci -nnk | grep NVIDIAExample output
$ 02:01.0 3D controller [0302]: NVIDIA Corporation GV100GL [Tesla V100 PCIe 32GB] [10de:1eb8] (rev a1)
7.13.11. Configuring virtual GPUs
If you have graphics processing unit (GPU) cards, OpenShift Virtualization can automatically create virtual GPUs (vGPUs) that you can assign to virtual machines (VMs).
7.13.11.1. About using virtual GPUs with OpenShift Virtualization
Some graphics processing unit (GPU) cards support the creation of virtual GPUs (vGPUs). OpenShift Virtualization can automatically create vGPUs and other mediated devices if an administrator provides configuration details in the HyperConverged custom resource (CR).
Refer to your hardware vendor’s documentation for functionality and support details.
- Mediated device
- A physical device that is divided into one or more virtual devices. A vGPU is a type of mediated device (mdev); the performance of the physical GPU is divided among the virtual devices. You can assign mediated devices to one or more virtual machines (VMs), but the number of guests must be compatible with your GPU. Some GPUs do not support multiple guests.
7.13.11.2. Preparing hosts for mediated devices
You must enable the Input-Output Memory Management Unit (IOMMU) driver before you can configure mediated devices.
7.13.11.2.1. Adding kernel arguments to enable the IOMMU driver
To enable the IOMMU driver in the kernel, create the MachineConfig object and add the kernel arguments.
Prerequisites
- You have cluster administrator permissions.
- Your CPU hardware is Intel or AMD.
- You enabled Intel Virtualization Technology for Directed I/O extensions or AMD IOMMU in the BIOS.
Procedure
Create a MachineConfig object that identifies the kernel argument. The following example shows a kernel argument for an Intel CPU.

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 100-worker-iommu
spec:
  config:
    ignition:
      version: 3.2.0
  kernelArguments:
  - intel_iommu=on
# ...

- metadata.labels.machineconfiguration.openshift.io/role specifies that the new kernel argument is applied only to worker nodes.
- metadata.name specifies the ranking of this kernel argument (100) among the machine configs and its purpose. If you have an AMD CPU, specify the kernel argument as amd_iommu=on.
- spec.kernelArguments specifies the kernel argument as intel_iommu for an Intel CPU.
Create the new
object:MachineConfig$ oc create -f 100-worker-kernel-arg-iommu.yaml
Verification
Verify that the new
object was added.MachineConfig$ oc get MachineConfig
7.13.11.3. Configuring the NVIDIA GPU Operator
You can use the NVIDIA GPU Operator to provision worker nodes for running GPU-accelerated virtual machines (VMs) in OpenShift Virtualization.
The NVIDIA GPU Operator is supported only by NVIDIA. For more information, see Obtaining Support from NVIDIA in the Red Hat Knowledgebase.
7.13.11.3.1. About using the NVIDIA GPU Operator
You can use the NVIDIA GPU Operator with OpenShift Virtualization to rapidly provision worker nodes for running GPU-enabled virtual machines (VMs). The NVIDIA GPU Operator manages NVIDIA GPU resources in an OpenShift Container Platform cluster and automates tasks that are required when preparing nodes for GPU workloads.
Before you can deploy application workloads to a GPU resource, you must install components such as the NVIDIA drivers that enable the compute unified device architecture (CUDA), Kubernetes device plugin, container runtime, and other features, such as automatic node labeling and monitoring. By automating these tasks, you can quickly scale the GPU capacity of your infrastructure. The NVIDIA GPU Operator can especially facilitate provisioning complex artificial intelligence and machine learning (AI/ML) workloads.
7.13.11.3.2. Options for configuring mediated devices
There are two available methods for configuring mediated devices when using the NVIDIA GPU Operator. The method that Red Hat tests uses OpenShift Virtualization features to schedule mediated devices, while the NVIDIA method only uses the GPU Operator.
- Using the NVIDIA GPU Operator to configure mediated devices
- This method exclusively uses the NVIDIA GPU Operator to configure mediated devices. To use this method, refer to NVIDIA GPU Operator with OpenShift Virtualization in the NVIDIA documentation.
- Using OpenShift Virtualization to configure mediated devices
This method, which is tested by Red Hat, uses OpenShift Virtualization’s capabilities to configure mediated devices. In this case, the NVIDIA GPU Operator is only used for installing drivers with the NVIDIA vGPU Manager. The GPU Operator does not configure mediated devices.
When using the OpenShift Virtualization method, you still configure the GPU Operator by following the NVIDIA documentation. However, this method differs from the NVIDIA documentation in the following ways:
You must not overwrite the default disableMDEVConfiguration: false setting in the HyperConverged custom resource (CR).

Important: Setting this feature gate as described in the NVIDIA documentation prevents OpenShift Virtualization from configuring mediated devices.
You must configure your ClusterPolicy manifest so that it matches the following example:

Example manifest
kind: ClusterPolicy apiVersion: nvidia.com/v1 metadata: name: gpu-cluster-policy spec: operator: defaultRuntime: crio use_ocp_driver_toolkit: true initContainer: {} sandboxWorkloads: enabled: true defaultWorkload: vm-vgpu driver: enabled: false dcgmExporter: {} dcgm: enabled: true daemonsets: {} devicePlugin: {} gfd: {} migManager: enabled: true nodeStatusExporter: enabled: true mig: strategy: single toolkit: enabled: true validator: plugin: env: - name: WITH_WORKLOAD value: "true" vgpuManager: enabled: true repository: <vgpu_container_registry> image: <vgpu_image_name> version: <nvidia_vgpu_manager_version> vgpuDeviceManager: enabled: false sandboxDevicePlugin: enabled: false vfioManager: enabled: false-
- spec.driver.enabled is set to false. This is not required for VMs.
- spec.vgpuManager.enabled is set to true. This is required if you want to use vGPUs with VMs.
- spec.vgpuManager.repository is set to your registry value.
- spec.vgpuManager.version is set to the version of the vGPU driver you have downloaded from the NVIDIA website and used to build the image.
- spec.vgpuDeviceManager.enabled is set to false to allow OpenShift Virtualization to configure mediated devices instead of the NVIDIA GPU Operator.
- spec.sandboxDevicePlugin.enabled is set to false to prevent discovery and advertising of the vGPU devices to the kubelet.
- spec.vfioManager.enabled is set to false to prevent loading the vfio-pci driver. Instead, follow the OpenShift Virtualization documentation to configure PCI passthrough.
7.13.11.4. How vGPUs are assigned to nodes
For each physical device, OpenShift Virtualization configures the following values:
- A single mdev type.
- The maximum number of instances of the selected mdev type.
The cluster architecture affects how devices are created and assigned to nodes.
- Large cluster with multiple cards per node
On nodes with multiple cards that can support similar vGPU types, the relevant device types are created in a round-robin manner. For example:
# ... mediatedDevicesConfiguration: mediatedDeviceTypes: - nvidia-222 - nvidia-228 - nvidia-105 - nvidia-108 # ...In this scenario, each node has two cards, both of which support the following vGPU types:
nvidia-105 # ... nvidia-108 nvidia-217 nvidia-299 # ...On each node, OpenShift Virtualization creates the following vGPUs:
- 16 vGPUs of type nvidia-105 on the first card.
- 2 vGPUs of type nvidia-108 on the second card.
- One node has a single card that supports more than one requested vGPU type
OpenShift Virtualization uses the supported type that comes first on the configured mediatedDeviceTypes list.

For example, the card on a node supports nvidia-223 and nvidia-224. The following mediatedDeviceTypes list is configured:

# ...
mediatedDevicesConfiguration:
  mediatedDeviceTypes:
  - nvidia-22
  - nvidia-223
  - nvidia-224
# ...

In this example, OpenShift Virtualization uses the nvidia-223 type.
7.13.11.5. Managing mediated devices
Before you can assign mediated devices to virtual machines, you must create the devices and expose them to the cluster. You can also reconfigure and remove mediated devices.
7.13.11.5.1. Creating and exposing mediated devices
As an administrator, you can create mediated devices and expose them to the cluster by editing the HyperConverged custom resource (CR).
Prerequisites
- You enabled the Input-Output Memory Management Unit (IOMMU) driver.
If your hardware vendor provides drivers, you installed them on the nodes where you want to create mediated devices.
- If you use NVIDIA cards, you installed the NVIDIA GRID driver.
Procedure
Open the
CR in your default editor by running the following command:HyperConverged$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnvExample 7.2. Example configuration file with mediated devices configured
apiVersion: hco.kubevirt.io/v1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: mediatedDevicesConfiguration: mediatedDeviceTypes: - nvidia-231 nodeMediatedDeviceTypes: - mediatedDeviceTypes: - nvidia-233 nodeSelector: kubernetes.io/hostname: node-11.redhat.com permittedHostDevices: mediatedDevices: - mdevNameSelector: GRID T4-2Q resourceName: nvidia.com/GRID_T4-2Q - mdevNameSelector: GRID T4-8Q resourceName: nvidia.com/GRID_T4-8Q # ...Create mediated devices by adding them to the
stanza:spec.mediatedDevicesConfigurationExample YAML snippet
# ... spec: mediatedDevicesConfiguration: mediatedDeviceTypes:1 - <device_type> nodeMediatedDeviceTypes:2 - mediatedDeviceTypes:3 - <device_type> nodeSelector:4 <node_selector_key>: <node_selector_value> # ...- 1 1 1
- Required: Configures global settings for the cluster.
- 2 1 2 2
- Optional: Overrides the global configuration for a specific node or group of nodes. Must be used with the global
mediatedDeviceTypesconfiguration. - 3 2 3 3
- Required if you use
nodeMediatedDeviceTypes. Overrides the globalmediatedDeviceTypesconfiguration for the specified nodes. - 4
- Required if you use
nodeMediatedDeviceTypes. Must include akey:valuepair.
Important: Before OpenShift Virtualization 4.14, the mediatedDeviceTypes field was named mediatedDevicesTypes. Ensure that you use the correct field name when configuring mediated devices.
CR in the next step.HyperConvergedFind the
value by running the following command:resourceName$ oc get $NODE -o json \ | jq '.status.allocatable \ | with_entries(select(.key | startswith("nvidia.com/"))) \ | with_entries(select(.value != "0"))'Find the
value by viewing the contents ofmdevNameSelector, substituting the correct values for your system./sys/bus/pci/devices/<slot>:<bus>:<domain>.<function>/mdev_supported_types/<type>/nameFor example, the name file for the
type contains the selector stringnvidia-231. UsingGRID T4-2Qas theGRID T4-2Qvalue allows nodes to use themdevNameSelectortype.nvidia-231
Expose the mediated devices to the cluster by adding the mdevNameSelector and resourceName values to the spec.permittedHostDevices.mediatedDevices stanza of the HyperConverged CR:

Example YAML snippet
# ... permittedHostDevices: mediatedDevices: - mdevNameSelector: GRID T4-2Q1 resourceName: nvidia.com/GRID_T4-2Q2 # ...- Save your changes and exit the editor.
Verification
Optional: Confirm that a device was added to a specific node by running the following command:
$ oc describe node <node_name>
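For illustration only, the relevant part of the output might resemble the following for the GRID T4-2Q example used earlier; this excerpt is hypothetical and the exact counts depend on your hardware:

Capacity:
  nvidia.com/GRID_T4-2Q:  2
Allocatable:
  nvidia.com/GRID_T4-2Q:  2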
7.13.11.5.2. About changing and removing mediated devices
You can reconfigure or remove mediated devices in several ways:
- Edit the HyperConverged CR and change the contents of the mediatedDeviceTypes stanza.
- Change the node labels that match the nodeMediatedDeviceTypes node selector.
- Remove the device information from the spec.mediatedDevicesConfiguration and spec.permittedHostDevices stanzas of the HyperConverged CR.

Note: If you remove the device information from the spec.permittedHostDevices stanza without also removing it from the spec.mediatedDevicesConfiguration stanza, you cannot create a new mediated device type on the same node. To properly remove mediated devices, remove the device information from both stanzas.
7.13.11.5.3. Removing mediated devices from the cluster
To remove a mediated device from the cluster, delete the information for that device from the HyperConverged custom resource (CR).
Procedure
Edit the
CR in your default editor by running the following command:HyperConverged$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnvRemove the device information from the
andspec.mediatedDevicesConfigurationstanzas of thespec.permittedHostDevicesCR. Removing both entries ensures that you can later create a new mediated device type on the same node. For example:HyperConvergedExample configuration file
apiVersion: hco.kubevirt.io/v1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: mediatedDevicesConfiguration: mediatedDeviceTypes: - nvidia-231 permittedHostDevices: mediatedDevices: - mdevNameSelector: GRID T4-2Q resourceName: nvidia.com/GRID_T4-2Q-
To remove the device type, delete it from the
nvidia-231array.mediatedDeviceTypes -
To remove the device, delete the
GRID T4-2Qfield and its correspondingmdevNameSelectorfield.resourceName
-
To remove the
- Save your changes and exit the editor.
7.13.11.6. Using mediated devices
You can assign mediated devices to one or more virtual machines.
7.13.11.6.1. Assigning a vGPU to a VM by using the CLI
Assign mediated devices such as virtual GPUs (vGPUs) to virtual machines (VMs).
Prerequisites
- The mediated device is configured in the HyperConverged custom resource.
HyperConverged - The virtual machine (VM) is stopped.
Procedure
Assign the mediated device to a VM by editing the spec.domain.devices.gpus stanza of the VirtualMachine manifest.

Example virtual machine manifest:
apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: domain: devices: gpus: - deviceName: nvidia.com/TU104GL_Tesla_T4 name: gpu1 - deviceName: nvidia.com/GRID_T4-2Q name: gpu2-
- spec.template.spec.domain.devices.gpus.deviceName specifies the resource name associated with the mediated device.
- spec.template.spec.domain.devices.gpus.name specifies a name to identify the device on the VM.
Verification
To verify that the device is available from the virtual machine, run the following command, substituting <device_name> with the deviceName value from the VirtualMachine manifest:

$ lspci -nnk | grep <device_name>
7.13.11.6.2. Assigning a vGPU to a VM by using the web console
You can assign virtual GPUs to virtual machines by using the OpenShift Container Platform web console.
You can add hardware devices to virtual machines created from customized templates or a YAML file. You cannot add devices to pre-supplied boot source templates for specific operating systems.
Prerequisites
The vGPU is configured as a mediated device in your cluster.
- To view the devices that are connected to your cluster, click Compute → Hardware Devices from the side menu.
- The VM is stopped.
Procedure
- In the OpenShift Container Platform web console, click Virtualization → VirtualMachines from the side menu.
- Select the VM that you want to assign the device to.
- On the Details tab, click GPU devices.
- Click Add GPU device.
- Enter an identifying value in the Name field.
- From the Device name list, select the device that you want to add to the VM.
- Click Save.
Verification
- To confirm that the devices were added to the VM, click the YAML tab and review the configuration. Mediated devices are added to the spec.domain.devices stanza of the VirtualMachine configuration.
7.13.12. Enabling descheduler evictions on virtual machines
You can use the descheduler to evict pods so that the pods can be rescheduled onto more appropriate nodes. If the pod is a virtual machine, the pod eviction causes the virtual machine to be live migrated to another node.
Descheduler eviction for virtual machines is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
7.13.12.1. Descheduler profiles
Use the Technology Preview DevPreviewLongLifecycle profile to enable the descheduler on a VM. This is the only descheduler profile currently available for OpenShift Virtualization.

The DevPreviewLongLifecycle profile balances resource usage between nodes and enables the following strategies:
- RemovePodsHavingTooManyRestarts: removes pods whose containers have been restarted too many times and pods where the sum of restarts over all containers (including Init Containers) is more than 100. Restarting the VM guest operating system does not increase this count.
- LowNodeUtilization: evicts pods from overutilized nodes when there are any underutilized nodes. The destination node for the evicted pod will be determined by the scheduler.
LowNodeUtilization- A node is considered underutilized if its usage is below 20% for all thresholds (CPU, memory, and number of pods).
- A node is considered overutilized if its usage is above 50% for any of the thresholds (CPU, memory, and number of pods).
-
7.13.12.2. Installing the descheduler
The descheduler is not available by default. To enable the descheduler, you must install the Kube Descheduler Operator from OperatorHub and enable one or more descheduler profiles.
By default, the descheduler runs in predictive mode, which means that it only simulates pod evictions. You must change the mode to automatic for the descheduler to perform the pod evictions.
If you have enabled hosted control planes in your cluster, set a custom priority threshold to lower the chance that pods in the hosted control plane namespaces are evicted. Set the priority threshold class name to hypershift-control-plane, because it has the lowest priority value (100000000) of the hosted control plane priority classes.
Prerequisites
- You are logged in to OpenShift Container Platform as a user with the cluster-admin role.
cluster-admin - Access to the OpenShift Container Platform web console.
Procedure
- Log in to the OpenShift Container Platform web console.
Create the required namespace for the Kube Descheduler Operator.
- Navigate to Administration → Namespaces and click Create Namespace.
-
Enter in the Name field, enter
openshift-kube-descheduler-operatorin the Labels field to enable descheduler metrics, and click Create.openshift.io/cluster-monitoring=true
Install the Kube Descheduler Operator.
- Navigate to Operators → OperatorHub.
- Type Kube Descheduler Operator into the filter box.
- Select the Kube Descheduler Operator and click Install.
- On the Install Operator page, select A specific namespace on the cluster. Select openshift-kube-descheduler-operator from the drop-down menu.
- Adjust the values for the Update Channel and Approval Strategy to the desired values.
- Click Install.
Create a descheduler instance.
- From the Operators → Installed Operators page, click the Kube Descheduler Operator.
- Select the Kube Descheduler tab and click Create KubeDescheduler.
Edit the settings as necessary.
- To evict pods instead of simulating the evictions, change the Mode field to Automatic.
Expand the Profiles section and select DevPreviewLongLifecycle. The AffinityAndTaints profile is enabled by default.

Important: The only profile currently available for OpenShift Virtualization is DevPreviewLongLifecycle.
You can also configure the profiles and settings for the descheduler later by using the OpenShift CLI (oc).
7.13.12.3. Enabling descheduler evictions on a virtual machine (VM)
After the descheduler is installed, you can enable descheduler evictions on your VM by adding an annotation to the VirtualMachine custom resource (CR).
Prerequisites
- Install the descheduler in the OpenShift Container Platform web console or OpenShift CLI (oc).
- Ensure that the VM is not running.
Procedure
Before starting the VM, add the descheduler.alpha.kubernetes.io/evict annotation to the VirtualMachine CR:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
spec:
  template:
    metadata:
      annotations:
        descheduler.alpha.kubernetes.io/evict: "true"

If you did not already set the DevPreviewLongLifecycle profile in the web console during installation, specify DevPreviewLongLifecycle in the spec.profiles section of the KubeDescheduler object:
profile in the web console during installation, specify theDevPreviewLongLifecyclein theDevPreviewLongLifecyclesection of thespec.profileobject:KubeDeschedulerapiVersion: operator.openshift.io/v1 kind: KubeDescheduler metadata: name: cluster namespace: openshift-kube-descheduler-operator spec: deschedulingIntervalSeconds: 3600 profiles: - DevPreviewLongLifecycle mode: Predictive1 - 1
- By default, the descheduler does not evict pods. To evict pods, set
modetoAutomatic.
The descheduler is now enabled on the VM.
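As an optional check, not part of the original procedure, you can confirm that the annotation is present in the VM template before starting the VM:

$ oc get vm <vm_name> -o jsonpath='{.spec.template.metadata.annotations}'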
7.13.13. About high availability for virtual machines
You can enable high availability for virtual machines (VMs) by manually deleting a failed node to trigger VM failover or by configuring remediating nodes.
Manually deleting a failed node
If a node fails and machine health checks are not deployed on your cluster, virtual machines with runStrategy: Always configured are not automatically relocated to healthy nodes. To trigger VM failover, you must manually delete the Node object.
See Deleting a failed node to trigger virtual machine failover.
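For reference, manually triggering failover comes down to deleting the Node object. The following command is a minimal sketch of that step (it is destructive, so use it only on a node that has actually failed) and is not part of the original text; <node_name> is a placeholder:

$ oc delete node <node_name>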
Configuring remediating nodes
You can configure remediating nodes by installing the Self Node Remediation Operator or the Fence Agents Remediation Operator from the OperatorHub and enabling machine health checks or node remediation checks.
For more information on remediation, fencing, and maintaining nodes, see the Workload Availability for Red Hat OpenShift documentation.
7.13.14. Virtual machine control plane tuning
OpenShift Virtualization offers the following tuning options at the control-plane level:
- The highBurst profile, which uses fixed QPS and burst rates, to create hundreds of virtual machines (VMs) in one batch
- Migration setting adjustment based on workload type
7.13.14.1. Configuring a highBurst profile
Use the highBurst profile to create and maintain a large number of virtual machines (VMs) in one cluster.
Procedure
Apply the following patch to enable the
tuning policy profile:highBurst$ oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv \ --type=json -p='[{"op": "add", "path": "/spec/tuningPolicy", \ "value": "highBurst"}]'
Verification
Run the following command to verify the highBurst tuning policy profile is enabled:

$ oc get kubevirt.kubevirt.io/kubevirt-kubevirt-hyperconverged \
  -n openshift-cnv -o go-template --template='{{range $config, \
  $value := .spec.configuration}} {{if eq $config "apiConfiguration" \
  "webhookConfiguration" "controllerConfiguration" "handlerConfiguration"}} \
  {{"\n"}} {{$config}} = {{$value}} {{end}} {{end}} {{"\n"}}'
7.13.15. Assigning compute resources
In OpenShift Virtualization, compute resources assigned to virtual machines (VMs) are backed by either guaranteed CPUs or time-sliced CPU shares.
Guaranteed CPUs, also known as CPU reservation, dedicate CPU cores or threads to a specific workload, which makes them unavailable to any other workload. Assigning guaranteed CPUs to a VM ensures that the VM will have sole access to a reserved physical CPU. Enable dedicated resources for VMs to use a guaranteed CPU.
Time-sliced CPUs dedicate a slice of time on a shared physical CPU to each workload. You can specify the size of the slice during VM creation, or when the VM is offline. By default, each vCPU receives 100 milliseconds, or 1/10 of a second, of physical CPU time.
The type of CPU reservation depends on the instance type or VM configuration.
7.13.15.1. Overcommitting CPU resources
Time-slicing allows multiple virtual CPUs (vCPUs) to share a single physical CPU. This is known as CPU overcommitment. Guaranteed VMs cannot be overcommitted.
Configure CPU overcommitment to prioritize VM density over performance when assigning CPUs to VMs. With a higher CPU over-commitment of vCPUs, more VMs fit onto a given node.
7.13.15.2. Setting the CPU allocation ratio
The CPU Allocation Ratio specifies the degree of overcommitment by mapping vCPUs to time slices of physical CPUs.
For example, a mapping or ratio of 10:1 maps 10 virtual CPUs to 1 physical CPU by using time slices.
To change the default number of vCPUs mapped to each physical CPU, set the vmiCPUAllocationRatio value in the HyperConverged CR. The pod CPU request is calculated by multiplying the number of vCPUs by the reciprocal of the vmiCPUAllocationRatio value.
Procedure
Set the vmiCPUAllocationRatio in the HyperConverged CR.
Open the HyperConverged CR in your default editor by running the following command:
$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
Set the vmiCPUAllocationRatio:
... spec: resourceRequirements: vmiCPUAllocationRatio: 1 # ...
When vmiCPUAllocationRatio is set to 1, the maximum amount of vCPUs are requested for the pod.
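As a rough worked example, assuming the pod CPU request is the number of vCPUs multiplied by the reciprocal of the ratio, a setting of 10 lets up to 10 vCPUs share each physical CPU:
spec:
  resourceRequirements:
    vmiCPUAllocationRatio: 10   # a VM with 2 vCPUs then requests about 200m (0.2) CPU from the scheduler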
7.14. VM disks Copiar o linkLink copiado para a área de transferência!
7.14.1. Hot-plugging VM disks Copiar o linkLink copiado para a área de transferência!
You can add or remove virtual disks without stopping your virtual machine (VM) or virtual machine instance (VMI).
Only data volumes and persistent volume claims (PVCs) can be hot plugged and hot unplugged. You cannot hot plug or hot unplug container disks.
A hot plugged disk remains attached to the VM even after reboot. You must detach the disk to remove it from the VM.
You can make a hot plugged disk persistent so that it is permanently mounted on the VM.
Each VM has a virtio-scsi controller so that hot plugged disks can use the scsi bus. The virtio-scsi controller overcomes the limitations of virtio while retaining its performance advantages. Regular virtio is not available for hot plugged disks because each virtio disk uses one of the limited PCI slots in the VM, and those slots must be reserved in advance.
7.14.1.1. Hot plugging and hot unplugging a disk by using the web console Copiar o linkLink copiado para a área de transferência!
You can hot plug a disk by attaching it to a virtual machine (VM) while the VM is running by using the OpenShift Container Platform web console.
The hot plugged disk remains attached to the VM until you unplug it.
You can make a hot plugged disk persistent so that it is permanently mounted on the VM.
Prerequisites
- You must have a data volume or persistent volume claim (PVC) available for hot plugging.
Procedure
- Navigate to Virtualization → VirtualMachines in the web console.
- Select a running VM to view its details.
- On the VirtualMachine details page, click Configuration → Disks.
Add a hot plugged disk:
- Click Add disk.
- In the Add disk (hot plugged) window, select the disk from the Source list and click Save.
Optional: Unplug a hot plugged disk:
- Click the options menu beside the disk and select Detach.
- Click Detach.
Optional: Make a hot plugged disk persistent:
- Click the options menu beside the disk and select Make persistent.
- Reboot the VM to apply the change.
7.14.1.2. Hot plugging and hot unplugging a disk by using the command line Copiar o linkLink copiado para a área de transferência!
You can hot plug and hot unplug a disk while a virtual machine (VM) is running by using the command line.
You can make a hot plugged disk persistent so that it is permanently mounted on the VM.
Prerequisites
- You must have at least one data volume or persistent volume claim (PVC) available for hot plugging.
Procedure
Hot plug a disk by running the following command:
$ virtctl addvolume <virtual-machine|virtual-machine-instance> \ --volume-name=<datavolume|PVC> \ [--persist] [--serial=<label-name>]-
Use the optional --persist flag to add the hot plugged disk to the virtual machine specification as a permanently mounted virtual disk. Stop, restart, or reboot the virtual machine to permanently mount the virtual disk. After specifying the --persist flag, you can no longer hot plug or hot unplug the virtual disk. The --persist flag applies to virtual machines, not virtual machine instances.
- Use the optional --serial flag to add an alphanumeric string label of your choice. This helps you to identify the hot plugged disk in a guest virtual machine. If you do not specify this option, the label defaults to the name of the hot plugged data volume or PVC.
Hot unplug a disk by running the following command:
$ virtctl removevolume <virtual-machine|virtual-machine-instance> \ --volume-name=<datavolume|PVC>
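For example, assuming a running VM named rhel-9-vm and an available PVC named example-data-pvc (both names are illustrative), the following commands hot plug the disk with a custom serial label and later hot unplug it; omitting --persist keeps the disk removable at runtime:
$ virtctl addvolume rhel-9-vm --volume-name=example-data-pvc --serial=DATA1
$ virtctl removevolume rhel-9-vm --volume-name=example-data-pvc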
7.14.2. Expanding virtual machine disks Copiar o linkLink copiado para a área de transferência!
You can increase the size of a virtual machine (VM) disk by expanding the persistent volume claim (PVC) of the disk.
If your storage provider does not support volume expansion, you can expand the available virtual storage of a VM by adding blank data volumes.
You cannot reduce the size of a VM disk.
7.14.2.1. Expanding a VM disk PVC by using the CLI Copiar o linkLink copiado para a área de transferência!
You can increase the size of a virtual machine (VM) disk by expanding the persistent volume claim (PVC) of the disk. To specify the increased PVC volume, you can edit the PersistentVolumeClaim manifest of the disk by using the oc command-line interface.
If the PVC uses the file system volume mode, the disk image file expands to the available size while reserving some space for file system overhead.
Prerequisites
- You have installed the OpenShift CLI (oc).
Procedure
Edit the PersistentVolumeClaim manifest of the VM disk that you want to expand:
$ oc edit pvc <pvc_name>
Update the disk size:
apiVersion: v1 kind: PersistentVolumeClaim metadata: name: vm-disk-expand spec: accessModes: - ReadWriteMany resources: requests: storage: 3Gi # ...-
spec.resources.requests.storage specifies the new disk size.
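If you prefer not to open an editor, a merge patch can set the same field. This is a sketch that reuses the example PVC name above and an assumed target size of 5Gi:
$ oc patch pvc vm-disk-expand --type=merge -p '{"spec":{"resources":{"requests":{"storage":"5Gi"}}}}'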
7.14.2.2. Expanding available virtual storage by adding blank data volumes Copiar o linkLink copiado para a área de transferência!
You can expand the available storage of a virtual machine (VM) by adding blank data volumes.
Prerequisites
- You must have at least one persistent volume.
Procedure
Create a DataVolume manifest as shown in the following example:
Example DataVolume manifest
apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: blank-image-datavolume spec: source: blank: {} storage: resources: requests: storage: <2Gi> storageClassName: "<storage_class>"
- spec.storage.resources.requests.storage specifies the amount of available space requested for the data volume.
- spec.storageClassName is an optional field that specifies a storage class. If you do not specify a storage class, the default storage class is used.
Create the data volume by running the following command:
$ oc create -f <blank-image-datavolume>.yaml
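Assuming the VM supports disk hot plug, you could then attach the new blank data volume to a running VM with virtctl; the VM name below is a placeholder:
$ virtctl addvolume <vm_name> --volume-name=blank-image-datavolume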
Chapter 8. Networking Copiar o linkLink copiado para a área de transferência!
8.1. Networking overview Copiar o linkLink copiado para a área de transferência!
OpenShift Virtualization provides advanced networking functionality by using custom resources and plugins. Virtual machines (VMs) are integrated with OpenShift Container Platform networking and its ecosystem.
8.1.1. OpenShift Virtualization networking glossary Copiar o linkLink copiado para a área de transferência!
The following terms are used throughout OpenShift Virtualization documentation:
- Container Network Interface (CNI)
- A Cloud Native Computing Foundation project, focused on container network connectivity. OpenShift Virtualization uses CNI plugins to build upon the basic Kubernetes networking functionality.
- Multus
- A "meta" CNI plugin that allows multiple CNIs to exist so that a pod or virtual machine can use the interfaces it needs.
- Custom resource definition (CRD)
- A Kubernetes API resource that allows you to define custom resources, or an object defined by using the CRD API resource.
- Network attachment definition (NAD)
- A CRD introduced by the Multus project that allows you to attach pods, virtual machines, and virtual machine instances to one or more networks.
- Node network configuration policy (NNCP)
-
A CRD introduced by the nmstate project, describing the requested network configuration on nodes. You update the node network configuration, including adding and removing interfaces, by applying a NodeNetworkConfigurationPolicy manifest to the cluster.
8.1.2. Using the default pod network Copiar o linkLink copiado para a área de transferência!
- Connecting a virtual machine to the default pod network
- Each VM is connected by default to the default internal pod network. You can add or remove network interfaces by editing the VM specification.
- Exposing a virtual machine as a service
-
You can expose a VM within the cluster or outside the cluster by creating a Service object. For on-premise clusters, you can configure a load balancing service by using the MetalLB Operator. You can install the MetalLB Operator by using the OpenShift Container Platform web console or the CLI.
8.1.3. Configuring VM secondary network interfaces Copiar o linkLink copiado para a área de transferência!
- Connecting a virtual machine to a Linux bridge network
Install the Kubernetes NMState Operator to configure Linux bridges, VLANs, and bondings for your secondary networks.
You can create a Linux bridge network and attach a VM to the network by performing the following steps:
- Configure a Linux bridge network device by creating a NodeNetworkConfigurationPolicy custom resource definition (CRD).
- Configure a Linux bridge network by creating a NetworkAttachmentDefinition CRD.
- Connect the VM to the Linux bridge network by including the network details in the VM configuration.
- Connecting a virtual machine to an SR-IOV network
You can use Single Root I/O Virtualization (SR-IOV) network devices with additional networks on your OpenShift Container Platform cluster installed on bare metal or Red Hat OpenStack Platform (RHOSP) infrastructure for applications that require high bandwidth or low latency.
You must install the SR-IOV Network Operator on your cluster to manage SR-IOV network devices and network attachments.
You can connect a VM to an SR-IOV network by performing the following steps:
- Configure an SR-IOV network device by creating an SriovNetworkNodePolicy CRD.
- Configure an SR-IOV network by creating an SriovNetwork object.
- Connect the VM to the SR-IOV network by including the network details in the VM configuration.
- Connecting a virtual machine to an OVN-Kubernetes secondary network
You can connect a VM to an Open Virtual Network (OVN)-Kubernetes secondary network. To configure an OVN-Kubernetes secondary network and attach a VM to that network, perform the following steps:
- Configure an OVN-Kubernetes secondary network by creating a NetworkAttachmentDefinition CRD.
- Connect the VM to the OVN-Kubernetes secondary network by adding the network details to the VM specification.
- Hot plugging secondary network interfaces
- You can add or remove secondary network interfaces without stopping your VM. OpenShift Virtualization supports hot plugging and hot unplugging for Linux bridge interfaces that use the VirtIO device driver.
- Using DPDK with SR-IOV
- The Data Plane Development Kit (DPDK) provides a set of libraries and drivers for fast packet processing. You can configure clusters and VMs to run DPDK workloads over SR-IOV networks.
- Configuring a dedicated network for live migration
- You can configure a dedicated Multus network for live migration. A dedicated network minimizes the effects of network saturation on tenant workloads during live migration.
- Accessing a virtual machine by using the cluster FQDN
- You can access a VM that is attached to a secondary network interface from outside the cluster by using its fully qualified domain name (FQDN).
- Configuring and viewing IP addresses
- You can configure an IP address of a secondary network interface when you create a VM. The IP address is provisioned with cloud-init. You can view the IP address of a VM by using the OpenShift Container Platform web console or the command line. The network information is collected by the QEMU guest agent.
8.1.4. Integrating with OpenShift Service Mesh Copiar o linkLink copiado para a área de transferência!
- Connecting a virtual machine to a service mesh
- OpenShift Virtualization is integrated with OpenShift Service Mesh. You can monitor, visualize, and control traffic between pods and virtual machines.
8.1.5. Managing MAC address pools Copiar o linkLink copiado para a área de transferência!
- Managing MAC address pools for network interfaces
- The KubeMacPool component allocates MAC addresses for VM network interfaces from a shared MAC address pool. This ensures that each network interface is assigned a unique MAC address. A virtual machine instance created from that VM retains the assigned MAC address across reboots.
8.1.6. Configuring SSH access Copiar o linkLink copiado para a área de transferência!
- Configuring SSH access to virtual machines
You can configure SSH access to VMs by using the following methods:
You create an SSH key pair, add the public key to a VM, and connect to the VM by running the virtctl ssh command with the private key.
You can add public SSH keys to Red Hat Enterprise Linux (RHEL) 9 VMs at runtime or at first boot to VMs with guest operating systems that can be configured by using a cloud-init data source.
You add the virtctl port-forward command to your .ssh/config file and connect to the VM by using OpenSSH (see the example after this list).
You create a service, associate the service with the VM, and connect to the IP address and port exposed by the service.
You configure a secondary network, attach a VM to the secondary network interface, and connect to its allocated IP address.
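The following sketch illustrates the OpenSSH approach; the host pattern, user, VM, and namespace names are illustrative and assume the virtctl port-forward method listed above:
# ~/.ssh/config
Host vm/*
  ProxyCommand virtctl port-forward --stdio=true %h %p
You can then connect with a command such as:
$ ssh user@vm/example-vm.example-namespace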
8.2. Connecting a virtual machine to the default pod network Copiar o linkLink copiado para a área de transferência!
You can connect a virtual machine to the default internal pod network by configuring its network interface to use the masquerade binding mode.
Traffic passing through network interfaces to the default pod network is interrupted during live migration.
8.2.1. Configuring masquerade mode from the command line Copiar o linkLink copiado para a área de transferência!
You can use masquerade mode to hide a virtual machine’s outgoing traffic behind the pod IP address. Masquerade mode uses Network Address Translation (NAT) to connect virtual machines to the pod network backend through a Linux bridge.
Enable masquerade mode and allow traffic to enter the virtual machine by editing your virtual machine configuration file.
Prerequisites
- The virtual machine must be configured to use DHCP to acquire IPv4 addresses.
Procedure
Edit the interfaces spec of your virtual machine configuration file:
apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm spec: template: spec: domain: devices: interfaces: - name: default masquerade: {}1 ports:2 - port: 80 # ... networks: - name: default pod: {}
- 1
- Connect using masquerade mode.
- 2
- Optional: List the ports that you want to expose from the virtual machine, each specified by the port field. The port value must be a number between 0 and 65536. When the ports array is not used, all ports in the valid range are open to incoming traffic. In this example, incoming traffic is allowed on port 80.
NotePorts 49152 and 49153 are reserved for use by the libvirt platform and all other incoming traffic to these ports is dropped.
Create the virtual machine:
$ oc create -f <vm-name>.yaml
8.2.2. Configuring masquerade mode with dual-stack (IPv4 and IPv6) Copiar o linkLink copiado para a área de transferência!
You can configure a new virtual machine (VM) to use both IPv6 and IPv4 on the default pod network by using cloud-init.
The Network.pod.vmIPv6NetworkCIDR field in the virtual machine instance configuration determines the static IPv6 address of the VM and the gateway IP address. The default value of Network.pod.vmIPv6NetworkCIDR is fd10:0:2::2/120.
When the virtual machine is running, incoming and outgoing traffic for the virtual machine is routed to both the IPv4 address and the unique IPv6 address of the virt-launcher pod. The virt-launcher pod then routes the IPv4 traffic to the DHCP address of the virtual machine, and the IPv6 traffic to the statically set IPv6 address of the virtual machine.
Prerequisites
- The OpenShift Container Platform cluster must use the OVN-Kubernetes Container Network Interface (CNI) network plugin configured for dual-stack.
Procedure
In a new virtual machine configuration, include an interface with masquerade mode and configure the IPv6 address and default gateway by using cloud-init.
apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm-ipv6 spec: template: spec: domain: devices: interfaces: - name: default masquerade: {}1 ports: - port: 802 # ... networks: - name: default pod: {} volumes: - cloudInitNoCloud: networkData: | version: 2 ethernets: eth0: dhcp4: true addresses: [ fd10:0:2::2/120 ]3 gateway6: fd10:0:2::14
- 1
- Connect using masquerade mode.
- 2
- Allows incoming traffic on port 80 to the virtual machine.
- 3
- The static IPv6 address as determined by the Network.pod.vmIPv6NetworkCIDR field in the virtual machine instance configuration. The default value is fd10:0:2::2/120.
- The gateway IP address as determined by the
Network.pod.vmIPv6NetworkCIDRfield in the virtual machine instance configuration. The default value isfd10:0:2::1.
Create the virtual machine in the namespace:
$ oc create -f example-vm-ipv6.yaml
Verification
- To verify that IPv6 has been configured, start the virtual machine and view the interface status of the virtual machine instance to ensure it has an IPv6 address:
$ oc get vmi <vmi-name> -o jsonpath="{.status.interfaces[*].ipAddresses}"
8.2.3. About jumbo frames support Copiar o linkLink copiado para a área de transferência!
When using the OVN-Kubernetes CNI plugin, you can send unfragmented jumbo frame packets between two virtual machines (VMs) that are connected on the default pod network. Jumbo frames have a maximum transmission unit (MTU) value greater than 1500 bytes.
The VM automatically gets the MTU value of the cluster network, set by the cluster administrator, in one of the following ways:
- libvirt: If the guest OS has the latest version of the VirtIO driver that can interpret incoming data via a Peripheral Component Interconnect (PCI) config register in the emulated device.
- DHCP: If the guest DHCP client can read the MTU value from the DHCP server response.
For Windows VMs that do not have a VirtIO driver, you must set the MTU manually by using netsh.
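For example, on a Windows guest you might set the MTU with a command of the following form, where the interface name and MTU value are illustrative:
C:\> netsh interface ipv4 set subinterface "Ethernet" mtu=8900 store=persistent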
8.3. Exposing a virtual machine by using a service Copiar o linkLink copiado para a área de transferência!
You can expose a virtual machine within the cluster or outside the cluster by creating a Service object.
8.3.1. About services Copiar o linkLink copiado para a área de transferência!
A Kubernetes service exposes network access for clients to an application running on a set of pods. Services offer abstraction, load balancing, and, in the case of the NodePort and LoadBalancer service types, exposure to the outside world.
- ClusterIP
-
Exposes the service on an internal IP address and as a DNS name to other applications within the cluster. A single service can map to multiple virtual machines. When a client tries to connect to the service, the client’s request is load balanced among available backends. ClusterIP is the default service type.
ClusterIPis the default service type. - NodePort
-
Exposes the service on the same port of each selected node in the cluster. NodePort makes a port accessible from outside the cluster, as long as the node itself is externally accessible to the client.
NodePortmakes a port accessible from outside the cluster, as long as the node itself is externally accessible to the client. - LoadBalancer
- Creates an external load balancer in the current cloud (if supported) and assigns a fixed, external IP address to the service.
For on-premise clusters, you can configure a load-balancing service by deploying the MetalLB Operator.
8.3.2. Dual-stack support Copiar o linkLink copiado para a área de transferência!
If IPv4 and IPv6 dual-stack networking is enabled for your cluster, you can create a service that uses IPv4, IPv6, or both, by defining the spec.ipFamilyPolicy and the spec.ipFamilies fields in the Service object.
The spec.ipFamilyPolicy field can be set to one of the following values:
- SingleStack
- The control plane assigns a cluster IP address for the service based on the first configured service cluster IP range.
- PreferDualStack
- The control plane assigns both IPv4 and IPv6 cluster IP addresses for the service on clusters that have dual-stack configured.
- RequireDualStack
-
This option fails for clusters that do not have dual-stack networking enabled. For clusters that have dual-stack configured, the behavior is the same as when the value is set to PreferDualStack. The control plane allocates cluster IP addresses from both IPv4 and IPv6 address ranges.
You can define which IP family to use for single-stack or define the order of IP families for dual-stack by setting the spec.ipFamilies field to one of the following array values:
-
[IPv4] -
[IPv6] -
[IPv4, IPv6] -
[IPv6, IPv4]
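For example, a dual-stack service that selects VMs carrying a label such as special: key might look like the following sketch; the service name is illustrative:
apiVersion: v1
kind: Service
metadata:
  name: example-dual-stack-service
spec:
  ipFamilyPolicy: PreferDualStack   # request both IPv4 and IPv6 cluster IPs when available
  ipFamilies:
    - IPv6
    - IPv4
  selector:
    special: key
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376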
8.3.3. Creating a service by using the command line Copiar o linkLink copiado para a área de transferência!
You can create a service and associate it with a virtual machine (VM) by using the command line.
Prerequisites
- You configured the cluster network to support the service.
Procedure
Edit the VirtualMachine manifest to add the label for service creation:
apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: running: false template: metadata: labels: special: key1 # ...
- 1
- Add special: key to the spec.template.metadata.labels stanza.
Note
Labels on a virtual machine are passed through to the pod. The special: key label must match the label in the spec.selector attribute of the Service manifest.
- Save the VirtualMachine manifest file to apply your changes.
Create a Service manifest to expose the VM:
apiVersion: v1 kind: Service metadata: name: example-service namespace: example-namespace spec: # ... selector: special: key1 type: NodePort2 ports:3 protocol: TCP port: 80 targetPort: 9376 nodePort: 30000
- Save the Service manifest file.
Create the service by running the following command:
$ oc create -f example-service.yaml
- Restart the VM to apply the changes.
Verification
Query the Service object to verify that it is available:
$ oc get service -n example-namespace
8.4. Connecting a virtual machine to a Linux bridge network Copiar o linkLink copiado para a área de transferência!
By default, OpenShift Virtualization is installed with a single, internal pod network.
You can create a Linux bridge network and attach a virtual machine (VM) to the network by performing the following steps:
- Create a Linux bridge node network configuration policy (NNCP).
- Create a Linux bridge network attachment definition (NAD) by using the web console or the command line.
- Configure the VM to recognize the NAD by using the web console or the command line.
OpenShift Virtualization does not support Linux bridge bonding modes 0, 5, and 6. For more information, see Which bonding modes work when used with a bridge that virtual machine guests or containers connect to?.
8.4.1. Creating a Linux bridge NNCP Copiar o linkLink copiado para a área de transferência!
You can create a NodeNetworkConfigurationPolicy (NNCP) manifest for a Linux bridge network.
Prerequisites
- You have installed the Kubernetes NMState Operator.
Procedure
Create the NodeNetworkConfigurationPolicy manifest. This example includes sample values that you must replace with your own information.
apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: br1-eth1-policy spec: desiredState: interfaces: - name: br1 description: Linux bridge with eth1 as a port type: linux-bridge state: up ipv4: enabled: false bridge: options: stp: enabled: false port: - name: eth1
- metadata.name defines the name of the node network configuration policy.
- spec.desiredState.interfaces.name defines the name of the new Linux bridge.
- spec.desiredState.interfaces.description is an optional field that can be used to define a human-readable description for the bridge.
- spec.desiredState.interfaces.type defines the interface type. In this example, the type is a Linux bridge.
- spec.desiredState.interfaces.state defines the requested state for the interface after creation.
- spec.desiredState.interfaces.ipv4.enabled defines whether the ipv4 protocol is active. Setting this to false disables IPv4 addressing on this bridge.
- spec.desiredState.interfaces.bridge.options.stp.enabled defines whether STP is active. Setting this to false disables STP on this bridge.
- spec.desiredState.interfaces.bridge.port.name defines the node NIC to which the bridge is attached.
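To check that the policy was applied, you can list node network configuration policies and their status, for example:
$ oc get nncp br1-eth1-policy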
8.4.2. Creating a Linux bridge NAD Copiar o linkLink copiado para a área de transferência!
You can create a Linux bridge network attachment definition (NAD) by using the OpenShift Container Platform web console or command line.
8.4.2.1. Creating a Linux bridge NAD by using the web console Copiar o linkLink copiado para a área de transferência!
You can create a network attachment definition (NAD) to provide layer-2 networking to pods and virtual machines by using the OpenShift Container Platform web console.
A Linux bridge network attachment definition is the most efficient method for connecting a virtual machine to a VLAN.
Configuring IP address management (IPAM) in a network attachment definition for virtual machines is not supported.
Procedure
- In the web console, click Networking → NetworkAttachmentDefinitions.
Click Create Network Attachment Definition.
NoteThe network attachment definition must be in the same namespace as the pod or virtual machine.
- Enter a unique Name and optional Description.
- Select CNV Linux bridge from the Network Type list.
- Enter the name of the bridge in the Bridge Name field.
- Optional: If the resource has VLAN IDs configured, enter the ID numbers in the VLAN Tag Number field.
- Optional: Select MAC Spoof Check to enable MAC spoof filtering. This feature provides security against a MAC spoofing attack by allowing only a single MAC address to exit the pod.
- Click Create.
8.4.2.2. Creating a Linux bridge NAD by using the command line Copiar o linkLink copiado para a área de transferência!
You can create a network attachment definition (NAD) to provide layer-2 networking to pods and virtual machines (VMs) by using the command line.
The NAD and the VM must be in the same namespace.
Configuring IP address management (IPAM) in a network attachment definition for virtual machines is not supported.
Prerequisites
- The node must support nftables and the nft binary must be deployed to enable MAC spoof check.
Procedure
Add the VM to the NetworkAttachmentDefinition configuration, as in the following example:
apiVersion: "k8s.cni.cncf.io/v1" kind: NetworkAttachmentDefinition metadata: name: bridge-network1 annotations: k8s.v1.cni.cncf.io/resourceName: bridge.network.kubevirt.io/br12 spec: config: | { "cniVersion": "0.3.1", "name": "bridge-network",3 "type": "bridge",4 "bridge": "br1",5 "macspoofchk": false,6 "vlan": 100,7 "preserveDefaultVlan": false8 }
- 1
- The name for the NetworkAttachmentDefinition object.
- 2
- Optional: Annotation key-value pair for node selection for the bridge configured on some nodes. If you add this annotation to your network attachment definition, your virtual machine instances will only run on the nodes that have the defined bridge connected.
- 3
- The name for the configuration. It is recommended to match the configuration name to the name value of the network attachment definition.
- 4
- The actual name of the Container Network Interface (CNI) plugin that provides the network for this network attachment definition. Do not change this field unless you want to use a different CNI.
- 5
- The name of the Linux bridge configured on the node. The name should match the interface bridge name defined in the NodeNetworkConfigurationPolicy manifest.
- 6
- Optional: A flag to enable the MAC spoof check. When set to true, you cannot change the MAC address of the pod or guest interface. This attribute allows only a single MAC address to exit the pod, which provides security against a MAC spoofing attack.
- 7
- Optional: The VLAN tag. No additional VLAN configuration is required on the node network configuration policy.
- 8
- Optional: Indicates whether the VM connects to the bridge through the default VLAN. The default value is true.
NoteA Linux bridge network attachment definition is the most efficient method for connecting a virtual machine to a VLAN.
Create the network attachment definition:
$ oc create -f network-attachment-definition.yaml1 - 1
- Where network-attachment-definition.yaml is the file name of the network attachment definition manifest.
Verification
Verify that the network attachment definition was created by running the following command:
$ oc get network-attachment-definition bridge-network
8.4.3. Configuring a VM network interface Copiar o linkLink copiado para a área de transferência!
You can configure a virtual machine (VM) network interface by using the OpenShift Container Platform web console or command line.
8.4.3.1. Configuring a VM network interface by using the web console Copiar o linkLink copiado para a área de transferência!
You can configure a network interface for a virtual machine (VM) by using the OpenShift Container Platform web console.
Prerequisites
- You created a network attachment definition for the network.
Procedure
- Navigate to Virtualization → VirtualMachines.
- Click a VM to view the VirtualMachine details page.
- On the Configuration tab, click the Network interfaces tab.
- Click Add network interface.
- Enter the interface name and select the network attachment definition from the Network list.
- Click Save.
- Restart the VM to apply the changes.
Networking fields
| Name | Description |
|---|---|
| Name | Name for the network interface controller. |
| Model | Indicates the model of the network interface controller. Supported values are e1000e and virtio. |
| Network | List of available network attachment definitions. |
| Type | List of available binding methods. Select the binding method suitable for the network interface:
|
| MAC Address | MAC address for the network interface controller. If a MAC address is not specified, one is assigned automatically. |
8.4.3.2. Configuring a VM network interface by using the command line Copiar o linkLink copiado para a área de transferência!
You can configure a virtual machine (VM) network interface for a bridge network by using the command line.
Prerequisites
- Shut down the virtual machine before editing the configuration. If you edit a running virtual machine, you must restart the virtual machine for the changes to take effect.
Procedure
Add the bridge interface and the network attachment definition to the VM configuration as in the following example:
apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm spec: template: spec: domain: devices: interfaces: - masquerade: {} name: default - bridge: {} name: bridge-net1 # ... networks: - name: default pod: {} - name: bridge-net2 multus: networkName: a-bridge-network3 Apply the configuration:
$ oc apply -f example-vm.yaml- Optional: If you edited a running virtual machine, you must restart it for the changes to take effect.
8.5. Connecting a virtual machine to an SR-IOV network Copiar o linkLink copiado para a área de transferência!
You can connect a virtual machine (VM) to a Single Root I/O Virtualization (SR-IOV) network by performing the following steps:
8.5.1. Configuring SR-IOV network devices Copiar o linkLink copiado para a área de transferência!
The SR-IOV Network Operator adds the SriovNetworkNodePolicy.sriovnetwork.openshift.io custom resource definition to your cluster. You can configure an SR-IOV network device by creating an SriovNetworkNodePolicy custom resource (CR).
When applying the configuration specified in an SriovNetworkNodePolicy object, the SR-IOV Operator might drain the nodes, and in some cases, reboot nodes.
It might take several minutes for a configuration change to apply.
Prerequisites
- You installed the OpenShift CLI (oc).
- You have access to the cluster as a user with the cluster-admin role.
- You have not selected any control plane nodes for SR-IOV network device configuration.
Procedure
Create an SriovNetworkNodePolicy object, and then save the YAML in the <name>-sriov-node-network.yaml file. Replace <name> with the name for this configuration.
apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: <name> namespace: openshift-sriov-network-operator spec: resourceName: <sriov_resource_name> nodeSelector: feature.node.kubernetes.io/network-sriov.capable: "true" priority: <priority> mtu: <mtu> numVfs: <num> nicSelector: vendor: "<vendor_code>" deviceID: "<device_id>" pfNames: ["<pf_name>", ...] rootDevices: ["<pci_bus_id>", "..."] deviceType: vfio-pci isRdma: false
- metadata.name specifies a name for the SriovNetworkNodePolicy object.
- metadata.namespace specifies the namespace where the SR-IOV Network Operator is installed.
- spec.resourceName specifies the resource name of the SR-IOV device plugin. You can create multiple SriovNetworkNodePolicy objects for a resource name.
- spec.nodeSelector.feature.node.kubernetes.io/network-sriov.capable specifies the node selector to select which nodes are configured. Only SR-IOV network devices on selected nodes are configured. The SR-IOV Container Network Interface (CNI) plugin and device plugin are deployed only on selected nodes.
- spec.priority is an optional field that specifies an integer value between 0 and 99. A smaller number gets higher priority, so a priority of 10 is higher than a priority of 99. The default value is 99.
- spec.mtu is an optional field that specifies a value for the maximum transmission unit (MTU) of the virtual function. The maximum MTU value can vary for different NIC models.
- spec.numVfs specifies the number of the virtual functions (VF) to create for the SR-IOV physical network device. For an Intel network interface controller (NIC), the number of VFs cannot be larger than the total VFs supported by the device. For a Mellanox NIC, the number of VFs cannot be larger than 127.
- spec.nicSelector selects the Ethernet device for the Operator to configure. You do not need to specify values for all the parameters.
Note
It is recommended to identify the Ethernet adapter with enough precision to minimize the possibility of selecting an Ethernet device unintentionally. If you specify rootDevices, you must also specify a value for vendor, deviceID, or pfNames. If you specify both pfNames and rootDevices at the same time, ensure that they point to an identical device.
- spec.nicSelector.vendor is an optional field that specifies the vendor hex code of the SR-IOV network device. The only allowed values are either 8086 or 15b3.
- spec.nicSelector.deviceID is an optional field that specifies the device hex code of the SR-IOV network device. The only allowed values are 158b, 1015, and 1017.
- spec.nicSelector.pfNames is an optional field that specifies an array of one or more physical function (PF) names for the Ethernet device.
- spec.nicSelector.rootDevices is an optional field that specifies an array of one or more PCI bus addresses for the physical function of the Ethernet device. Provide the address in the following format: 0000:02:00.1.
- spec.deviceType specifies the driver type. The vfio-pci driver type is required for virtual functions in OpenShift Virtualization.
- spec.isRdma is an optional field that specifies whether to enable remote direct memory access (RDMA) mode. For a Mellanox card, set isRdma to false. The default value is false.
Note
If the isRdma flag is set to true, you can continue to use the RDMA enabled VF as a normal network device. A device can be used in either mode.
-
-
Optional: Label the SR-IOV capable cluster nodes with SriovNetworkNodePolicy.Spec.NodeSelector if they are not already labeled. For more information about labeling nodes, see "Understanding how to update labels on nodes".
object. When running the following command, replaceSriovNetworkNodePolicywith the name for this configuration:<name>$ oc create -f <name>-sriov-node-network.yamlAfter applying the configuration update, all the pods in
namespace transition to thesriov-network-operatorstatus.RunningTo verify that the SR-IOV network device is configured, enter the following command. Replace
with the name of a node with the SR-IOV network device that you just configured.<node_name>$ oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name> -o jsonpath='{.status.syncStatus}'
8.5.2. Configuring SR-IOV additional network Copiar o linkLink copiado para a área de transferência!
You can configure an additional network that uses SR-IOV hardware by creating an SriovNetwork object.
When you create an SriovNetwork object, the SR-IOV Network Operator automatically creates a NetworkAttachmentDefinition object.
Do not modify or delete an SriovNetwork object if it is attached to any pods or virtual machines in a running state.
Prerequisites
- Install the OpenShift CLI (oc).
- Log in as a user with cluster-admin privileges.
Procedure
Create the following SriovNetwork object, and then save the YAML in the <name>-sriov-network.yaml file. Replace <name> with a name for this additional network.
apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: <name> namespace: openshift-sriov-network-operator spec: resourceName: <sriov_resource_name> networkNamespace: <target_namespace> vlan: <vlan> spoofChk: "<spoof_check>" linkState: <link_state> maxTxRate: <max_tx_rate> minTxRate: <min_rx_rate> vlanQoS: <vlan_qos> trust: "<trust_vf>" capabilities: <capabilities>
metadata.name
- Specify a name for the SriovNetwork object. The SR-IOV Network Operator creates a NetworkAttachmentDefinition object with the same name.
metadata.namespace
- Specify the namespace where the SR-IOV Network Operator is installed.
spec.resourceName
- Specify the value of the .spec.resourceName parameter in the SriovNetworkNodePolicy object that defines the SR-IOV hardware for this additional network.
spec.networkNamespace
- Specify the target namespace for the SriovNetwork object. Only pods or virtual machines in the target namespace can attach to the SriovNetwork object.
spec.vlan
- Optional: Specify a Virtual LAN (VLAN) ID for the additional network. The integer value must be from 0 to 4095. The default value is 0.
spec.spoofChk
- Optional: Specify the spoof check mode of the VF. The allowed values are the strings "on" and "off".
Important
You must enclose the value you specify in quotes or the CR is rejected by the SR-IOV Network Operator.
spec.linkState
- Optional: Specify the link state of virtual function (VF). Allowed values are enable, disable and auto.
spec.maxTxRate
- Optional: Specify the maximum transmission rate, in Mbps, for the VF.
spec.minTxRate
- Optional: Specify the minimum transmission rate, in Mbps, for the VF. This value should always be less than or equal to the maximum transmission rate.
Note
spec.vlanQoS
- Optional: Specify the IEEE 802.1p priority level for the VF. The default value is 0.
spec.trust
- Optional: Specify the trust mode of the VF. The allowed values are the strings "on" and "off".
Important
You must enclose the value you specify in quotes or the CR is rejected by the SR-IOV Network Operator.
spec.capabilities
- Optional: Specify the capabilities to configure for this network.
To create the object, enter the following command. Replace <name> with a name for this additional network.
$ oc create -f <name>-sriov-network.yaml
Optional: To confirm that the NetworkAttachmentDefinition object associated with the SriovNetwork object that you created in the previous step exists, enter the following command. Replace <namespace> with the namespace you specified in the SriovNetwork object.
$ oc get net-attach-def -n <namespace>
8.5.3. Connecting a virtual machine to an SR-IOV network Copiar o linkLink copiado para a área de transferência!
You can connect the virtual machine (VM) to the SR-IOV network by including the network details in the VM configuration.
Procedure
Add the SR-IOV network details to the spec.domain.devices.interfaces and spec.networks stanzas of the VM configuration as in the following example:
apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm spec: domain: devices: interfaces: - name: default masquerade: {} - name: nic1 sriov: {} networks: - name: default pod: {} - name: nic1 multus: networkName: sriov-network # ...
- spec.template.spec.domain.devices.interfaces.name specifies a unique name for the SR-IOV interface.
- spec.template.spec.networks.name specifies the name of the SR-IOV interface. This must be the same as the interfaces.name that you defined earlier.
- spec.template.spec.networks.multus.networkName specifies the name of the SR-IOV network attachment definition.
Apply the virtual machine configuration:
$ oc apply -f <vm_sriov>.yaml
where:
<vm_sriov>
- Specifies the name of the virtual machine YAML file.
8.6. Using DPDK with SR-IOV Copiar o linkLink copiado para a área de transferência!
The Data Plane Development Kit (DPDK) provides a set of libraries and drivers for fast packet processing.
You can configure clusters and virtual machines (VMs) to run DPDK workloads over SR-IOV networks.
Running DPDK workloads is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
8.6.1. Configuring a cluster for DPDK workloads Copiar o linkLink copiado para a área de transferência!
You can configure an OpenShift Container Platform cluster to run Data Plane Development Kit (DPDK) workloads for improved network performance.
Prerequisites
-
You have access to the cluster as a user with permissions.
cluster-admin -
You have installed the OpenShift CLI ().
oc - You have installed the SR-IOV Network Operator.
- You have installed the Node Tuning Operator.
Procedure
- Map your compute nodes topology to determine which Non-Uniform Memory Access (NUMA) CPUs are isolated for DPDK applications and which ones are reserved for the operating system (OS).
Label a subset of the compute nodes with a custom role; for example,
:worker-dpdk$ oc label node <node_name> node-role.kubernetes.io/worker-dpdk=""Create a new
manifest that contains theMachineConfigPoollabel in theworker-dpdkobject:spec.machineConfigSelectorExample
MachineConfigPoolmanifestapiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker-dpdk labels: machineconfiguration.openshift.io/role: worker-dpdk spec: machineConfigSelector: matchExpressions: - key: machineconfiguration.openshift.io/role operator: In values: - worker - worker-dpdk nodeSelector: matchLabels: node-role.kubernetes.io/worker-dpdk: ""Create a
manifest that applies to the labeled nodes and the machine config pool that you created in the previous steps. The performance profile specifies the CPUs that are isolated for DPDK applications and the CPUs that are reserved for house keeping.PerformanceProfileExample
PerformanceProfilemanifestapiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: profile-1 spec: cpu: isolated: 4-39,44-79 reserved: 0-3,40-43 globallyDisableIrqLoadBalancing: true hugepages: defaultHugepagesSize: 1G pages: - count: 8 node: 0 size: 1G net: userLevelNetworking: true nodeSelector: node-role.kubernetes.io/worker-dpdk: "" numa: topologyPolicy: single-numa-nodeNoteThe compute nodes automatically restart after you apply the
andMachineConfigPoolmanifests.PerformanceProfileRetrieve the name of the generated
resource from theRuntimeClassfield of thestatus.runtimeClassobject:PerformanceProfile$ oc get performanceprofiles.performance.openshift.io profile-1 -o=jsonpath='{.status.runtimeClass}{"\n"}'Set the previously obtained
name as the default container runtime class for theRuntimeClasspods by editing thevirt-launchercustom resource (CR):HyperConverged$ oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv \ --type='json' -p='[{"op": "add", "path": "/spec/defaultRuntimeClass", "value":"<runtimeclass-name>"}]'NoteEditing the
CR changes a global setting that affects all VMs that are created after the change is applied.HyperConvergedCreate an
object with theSriovNetworkNodePolicyfield set tospec.deviceType:vfio-pciExample
SriovNetworkNodePolicymanifestapiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policy-1 namespace: openshift-sriov-network-operator spec: resourceName: intel_nics_dpdk deviceType: vfio-pci mtu: 9000 numVfs: 4 priority: 99 nicSelector: vendor: "8086" deviceID: "1572" pfNames: - eno3 rootDevices: - "0000:19:00.2" nodeSelector: feature.node.kubernetes.io/network-sriov.capable: "true"
8.6.2. Configuring a project for DPDK workloads Copiar o linkLink copiado para a área de transferência!
You can configure the project to run DPDK workloads on SR-IOV hardware.
Prerequisites
- Your cluster is configured to run DPDK workloads.
Procedure
Create a namespace for your DPDK applications:
$ oc create ns dpdk-checkup-ns
Create an SriovNetwork object that references the SriovNetworkNodePolicy object. When you create an SriovNetwork object, the SR-IOV Network Operator automatically creates a NetworkAttachmentDefinition object.
Example SriovNetwork manifest
apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: dpdk-sriovnetwork namespace: openshift-sriov-network-operator spec: ipam: | { "type": "host-local", "subnet": "10.56.217.0/24", "rangeStart": "10.56.217.171", "rangeEnd": "10.56.217.181", "routes": [{ "dst": "0.0.0.0/0" }], "gateway": "10.56.217.1" } networkNamespace: dpdk-checkup-ns1 resourceName: intel_nics_dpdk2 spoofChk: "off" trust: "on" vlan: 1019
- Optional: Run the DPDK checkup to verify that the namespace is ready for DPDK workloads.
8.6.3. Configuring a virtual machine for DPDK workloads Copiar o linkLink copiado para a área de transferência!
You can run Data Plane Development Kit (DPDK) workloads on virtual machines (VMs) to achieve lower latency and higher throughput for faster packet processing in the user space. DPDK uses the SR-IOV network for hardware-based I/O sharing.
Prerequisites
- Your cluster is configured to run DPDK workloads.
- You have created and configured the project in which the VM will run.
Procedure
Edit the
manifest to include information about the SR-IOV network interface, CPU topology, CRI-O annotations, and huge pages:VirtualMachineExample
VirtualMachinemanifestapiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: rhel-dpdk-vm spec: running: true template: metadata: annotations: cpu-load-balancing.crio.io: disable1 cpu-quota.crio.io: disable2 irq-load-balancing.crio.io: disable3 spec: domain: cpu: sockets: 14 cores: 55 threads: 2 dedicatedCpuPlacement: true isolateEmulatorThread: true interfaces: - masquerade: {} name: default - model: virtio name: nic-east pciAddress: '0000:07:00.0' sriov: {} networkInterfaceMultiqueue: true rng: {} memory: hugepages: pageSize: 1Gi6 guest: 8Gi networks: - name: default pod: {} - multus: networkName: dpdk-net7 name: nic-east # ...- 1
- This annotation specifies that load balancing is disabled for CPUs that are used by the container.
- 2
- This annotation specifies that the CPU quota is disabled for CPUs that are used by the container.
- 3
- This annotation specifies that Interrupt Request (IRQ) load balancing is disabled for CPUs that are used by the container.
- 4
- The number of sockets inside the VM. This field must be set to 1 for the CPUs to be scheduled from the same Non-Uniform Memory Access (NUMA) node.
- 5
- The number of cores inside the VM. This must be a value greater than or equal to 1. In this example, the VM is scheduled with 5 hyper-threads or 10 CPUs.
- 6
- The size of the huge pages. The possible values for x86-64 architecture are 1Gi and 2Mi. In this example, the request is for 8 huge pages of size 1Gi.
- 7
- The name of the SR-IOV NetworkAttachmentDefinition object.
- Save and exit the editor.
Apply the VirtualMachine manifest:
$ oc apply -f <file_name>.yaml
Configure the guest operating system. The following example shows the configuration steps for RHEL 8 OS:
Configure huge pages by using the GRUB bootloader command-line interface. In the following example, 8 1G huge pages are specified.
$ grubby --update-kernel=ALL --args="default_hugepagesz=1GB hugepagesz=1G hugepages=8"
To achieve low-latency tuning by using the cpu-partitioning profile in the TuneD application, run the following commands:
$ dnf install -y tuned-profiles-cpu-partitioning
$ echo isolated_cores=2-9 > /etc/tuned/cpu-partitioning-variables.conf
The first two CPUs (0 and 1) are set aside for house keeping tasks and the rest are isolated for the DPDK application.
$ tuned-adm profile cpu-partitioning
Override the SR-IOV NIC driver by using the driverctl device driver control utility:
$ dnf install -y driverctl
$ driverctl set-override 0000:07:00.0 vfio-pci
- Restart the VM to apply the changes.
8.7. Connecting a virtual machine to an OVN-Kubernetes secondary network Copiar o linkLink copiado para a área de transferência!
You can connect a virtual machine (VM) to an Open Virtual Network (OVN)-Kubernetes secondary network. The OVN-Kubernetes Container Network Interface (CNI) plug-in uses the Geneve (Generic Network Virtualization Encapsulation) protocol to create an overlay network between nodes.
OpenShift Virtualization currently supports the flat layer 2 topology. This topology connects workloads by a cluster-wide logical switch. You can use this overlay network to connect VMs on different nodes, without having to configure any additional physical networking infrastructure.
To configure an OVN-Kubernetes secondary network and attach a VM to that network, perform the following steps:
8.7.1. Creating an OVN-Kubernetes NAD Copiar o linkLink copiado para a área de transferência!
You can create an OVN-Kubernetes flat layer 2 network attachment definition (NAD) by using the OpenShift Container Platform web console or the CLI.
Configuring IP address management (IPAM) by specifying the spec.config.ipam.subnet attribute in a network attachment definition for virtual machines is not supported.
8.7.1.1. Creating a NAD for flat layer 2 topology by using the CLI Copiar o linkLink copiado para a área de transferência!
You can create a network attachment definition (NAD) which describes how to attach a pod to the layer 2 overlay network.
Prerequisites
- You have access to the cluster as a user with cluster-admin privileges.
- You have installed the OpenShift CLI (oc).
Procedure
Create a NetworkAttachmentDefinition object:
apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: l2-network namespace: my-namespace spec: config: |2 { "cniVersion": "0.3.1",1 "name": "my-namespace-l2-network",2 "type": "ovn-k8s-cni-overlay",3 "topology":"layer2",4 "mtu": 1400,5 "netAttachDefName": "my-namespace/l2-network"6 }
- The Container Network Interface (CNI) specification version. The required value is
0.3.1. - 2
- The name of the network. This attribute is not namespaced. For example, you can have a network named
l2-networkreferenced from two differentNetworkAttachmentDefinitionobjects that exist in two different namespaces. This feature is useful to connect VMs in different namespaces. - 3
- The name of the CNI plugin. The required value is
ovn-k8s-cni-overlay. - 4
- The topological configuration for the network. The required value is
layer2. - 5
- Optional: The maximum transmission unit (MTU) value. If you do not set a value, the Cluster Network Operator (CNO) sets a default MTU value by calculating the difference among the underlay MTU of the primary network interface, the overlay MTU of the pod network, such as the Geneve (Generic Network Virtualization Encapsulation), and byte capacity of any enabled features, such as IPsec.
- 6
- The value of the namespace and name fields in the metadata stanza of the NetworkAttachmentDefinition object.
NoteThe previous example configures a cluster-wide overlay without a subnet defined. This means that the logical switch implementing the network only provides layer 2 communication. You must configure an IP address when you create the virtual machine by either setting a static IP address or by deploying a DHCP server on the network for a dynamic IP address.
Apply the manifest by running the following command:
$ oc apply -f <filename>.yaml
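You can confirm that the network attachment definition was created, for example:
$ oc get network-attachment-definition l2-network -n my-namespace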
8.7.2. Attaching a virtual machine to the OVN-Kubernetes secondary network Copiar o linkLink copiado para a área de transferência!
You can attach a virtual machine (VM) to the OVN-Kubernetes secondary network interface by using the OpenShift Container Platform web console or the CLI.
8.7.2.1. Attaching a virtual machine to an OVN-Kubernetes secondary network using the CLI Copiar o linkLink copiado para a área de transferência!
You can connect a virtual machine (VM) to the OVN-Kubernetes secondary network by including the network details in the VM configuration.
Prerequisites
-
You have access to the cluster as a user with privileges.
cluster-admin -
You have installed the OpenShift CLI ().
oc
Procedure
Edit the VirtualMachine manifest to add the OVN-Kubernetes secondary network interface details, as in the following example:
apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-server spec: running: true template: spec: domain: devices: interfaces: - name: default masquerade: {} - name: secondary bridge: {} resources: requests: memory: 1024Mi networks: - name: default pod: {} - name: secondary multus: networkName: l2-network # ...
- spec.template.spec.domain.devices.interfaces.name specifies the name of the OVN-Kubernetes secondary interface.
- spec.template.spec.networks.name specifies the name of the network. This must match the value of the spec.template.spec.domain.devices.interfaces.name field.
- spec.template.spec.networks.multus.networkName specifies the name of the NetworkAttachmentDefinition object.
-
Apply the VirtualMachine manifest:
$ oc apply -f <filename>.yaml
- Optional: If you edited a running virtual machine, you must restart it for the changes to take effect.
8.8. Hot plugging secondary network interfaces Copiar o linkLink copiado para a área de transferência!
You can add or remove secondary network interfaces without stopping your virtual machine (VM). OpenShift Virtualization supports hot plugging and hot unplugging for Linux bridge interfaces that use the VirtIO device driver.
Hot plugging and hot unplugging bridge network interfaces is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
8.8.1. VirtIO limitations Copiar o linkLink copiado para a área de transferência!
Each VirtIO interface uses one of the limited Peripheral Connect Interface (PCI) slots in the VM. There are a total of 32 slots available. The PCI slots are also used by other devices and must be reserved in advance, therefore slots might not be available on demand. OpenShift Virtualization reserves up to four slots for hot plugging interfaces. This includes any existing plugged network interfaces. For example, if your VM has two existing plugged interfaces, you can hot plug two more network interfaces.
The actual number of slots available for hot plugging also depends on the machine type. For example, the default PCI topology for the q35 machine type supports hot plugging one additional PCIe device. For more information on PCI topology and hot plug support, see the libvirt documentation.
If you restart the VM after hot plugging an interface, that interface becomes part of the standard network interfaces.
8.8.2. Hot plugging a bridge network interface using the CLI Copiar o linkLink copiado para a área de transferência!
Hot plug a bridge network interface to a virtual machine (VM) while the VM is running.
Prerequisites
- A network attachment definition is configured in the same namespace as your VM.
- You have installed the virtctl tool.
Procedure
If the VM to which you want to hot plug the network interface is not running, start it by using the following command:
$ virtctl start <vm_name> -n <namespace>
Use the following command to hot plug a new network interface to the running VM. The virtctl addinterface command adds the new network interface to the VM and virtual machine instance (VMI) specification but does not attach it to the running VM.
$ virtctl addinterface <vm_name> --network-attachment-definition-name <net_attach_dev_namespace>/<net_attach_def_name> --name <interface_name>
where:
- <vm_name>
-
The name of the
VirtualMachineobject. - <net_attach_def_name>
-
The name of the
NetworkAttachmentDefinitionobject. - <net_attach_dev_namespace>
-
An identifier for the namespace associated with the
NetworkAttachmentDefinitionobject. The supported values aredefaultor the name of the namespace where the VM is located. - <interface_name>
- The name of the new network interface.
To attach the network interface to the running VM, live migrate the VM by using the following command:
$ virtctl migrate <vm_name>
Verification
Verify that the VM live migration is successful by using the following command:
$ oc get VirtualMachineInstanceMigration -w

Example output
NAME                        PHASE             VMI
kubevirt-migrate-vm-lj62q   Scheduling        vm-fedora
kubevirt-migrate-vm-lj62q   Scheduled         vm-fedora
kubevirt-migrate-vm-lj62q   PreparingTarget   vm-fedora
kubevirt-migrate-vm-lj62q   TargetReady       vm-fedora
kubevirt-migrate-vm-lj62q   Running           vm-fedora
kubevirt-migrate-vm-lj62q   Succeeded         vm-fedora

Verify that the new interface is added to the VM by checking the VMI status:
$ oc get vmi vm-fedora -ojsonpath="{ @.status.interfaces }"

Example output
[ { "infoSource": "domain, guest-agent", "interfaceName": "eth0", "ipAddress": "10.130.0.195", "ipAddresses": [ "10.130.0.195", "fd02:0:0:3::43c" ], "mac": "52:54:00:0e:ab:25", "name": "default", "queueCount": 1 }, { "infoSource": "domain, guest-agent, multus-status", "interfaceName": "eth1", "mac": "02:d8:b8:00:00:2a", "name": "bridge-interface", "queueCount": 1 } ]The hot plugged interface appears in the VMI status.
8.8.3. Hot unplugging a bridge network interface using the CLI Copiar o linkLink copiado para a área de transferência!
You can remove a bridge network interface from a running virtual machine (VM).
Prerequisites
- Your VM must be running.
- The VM must be created on a cluster running OpenShift Virtualization 4.14 or later.
- The VM must have a bridge network interface attached.
Procedure
Hot unplug a bridge network interface by running the following command. The virtctl removeinterface command detaches the network interface from the guest, but the interface still exists in the pod.

$ virtctl removeinterface <vm_name> --name <interface_name>

Remove the interface from the pod by migrating the VM:
$ virtctl migrate <vm_name>
8.9. Connecting a virtual machine to a service mesh Copiar o linkLink copiado para a área de transferência!
OpenShift Virtualization is now integrated with OpenShift Service Mesh. You can monitor, visualize, and control traffic between pods that run virtual machine workloads on the default pod network with IPv4.
8.9.1. Adding a virtual machine to a service mesh Copiar o linkLink copiado para a área de transferência!
To add a virtual machine (VM) workload to a service mesh, enable automatic sidecar injection in the VM configuration file by setting the sidecar.istio.io/inject annotation to true.
To avoid port conflicts, do not use ports used by the Istio sidecar proxy. These include ports 15000, 15001, 15006, 15008, 15020, 15021, and 15090.
Prerequisites
- You installed the Service Mesh Operators.
- You created the Service Mesh control plane.
- You added the VM project to the Service Mesh member roll.
Procedure
Edit the VM configuration file to add the sidecar.istio.io/inject: "true" annotation:

Example configuration file
apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: kubevirt.io/vm: vm-istio name: vm-istio spec: runStrategy: Always template: metadata: labels: kubevirt.io/vm: vm-istio app: vm-istio annotations: sidecar.istio.io/inject: "true" spec: domain: devices: interfaces: - name: default masquerade: {} disks: - disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk resources: requests: memory: 1024M networks: - name: default pod: {} terminationGracePeriodSeconds: 180 volumes: - containerDisk: image: registry:5000/kubevirt/fedora-cloud-container-disk-demo:devel name: containerdisk-
- spec.template.metadata.labels.app specifies the key/value pair (label) that must be matched to the service selector attribute.
- spec.template.metadata.annotations.sidecar.istio.io/inject is the annotation to enable automatic sidecar injection.
- spec.template.spec.domain.devices.interfaces.masquerade is the binding method (masquerade mode) for use with the default pod network.
- Run the following command to apply the VM configuration:

$ oc apply -f <vm_name>.yaml

where <vm_name> specifies the name of the virtual machine YAML file.
Create a Service object to expose your VM to the service mesh:

apiVersion: v1
kind: Service
metadata:
  name: vm-istio
spec:
  selector:
    app: vm-istio
  ports:
  - port: 8080
    name: http
    protocol: TCP

- spec.selector.app specifies the service selector that determines the set of pods targeted by a service. This attribute corresponds to the spec.metadata.labels field in the VM configuration file. In the above example, the Service object named vm-istio targets TCP port 8080 on any pod with the label app=vm-istio.
- Run the following command to create the service:

$ oc create -f <service_name>.yaml

where <service_name> specifies the name of the service YAML file.
8.10. Configuring a dedicated network for live migration Copiar o linkLink copiado para a área de transferência!
You can configure a dedicated Multus network for live migration. A dedicated network minimizes the effects of network saturation on tenant workloads during live migration.
8.10.1. Configuring a dedicated secondary network for live migration Copiar o linkLink copiado para a área de transferência!
To configure a dedicated secondary network for live migration, you must first create a bridge network attachment definition (NAD) by using the CLI. Then, you add the name of the NetworkAttachmentDefinition object to the HyperConverged custom resource (CR).
Prerequisites
- You installed the OpenShift CLI (oc).
- You logged in to the cluster as a user with the cluster-admin role.
- Each node has at least two Network Interface Cards (NICs).
- The NICs for live migration are connected to the same VLAN.
Procedure
Create a NetworkAttachmentDefinition manifest according to the following example:

Example configuration file

apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: my-secondary-network
  namespace: openshift-cnv
spec:
  config: '{
    "cniVersion": "0.3.1",
    "name": "migration-bridge",
    "type": "macvlan",
    "master": "eth1",
    "mode": "bridge",
    "ipam": {
      "type": "whereabouts",
      "range": "10.200.5.0/24"
    }
  }'
- metadata.name specifies the name of the NetworkAttachmentDefinition object.
- config.master specifies the name of the NIC to be used for live migration.
- config.type specifies the name of the CNI plugin that provides the network for the NAD.
- config.range specifies an IP address range for the secondary network. This range must not overlap the IP addresses of the main network.
Open the HyperConverged CR in your default editor by running the following command:

$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv

Add the name of the NetworkAttachmentDefinition object to the spec.liveMigrationConfig stanza of the HyperConverged CR:

Example HyperConverged manifest

apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  liveMigrationConfig:
    completionTimeoutPerGiB: 800
    network: <network>
    parallelMigrationsPerCluster: 5
    parallelOutboundMigrationsPerNode: 2
    progressTimeout: 150
# ...

- spec.liveMigrationConfig.network specifies the name of the Multus NetworkAttachmentDefinition object to be used for live migrations.
- Save your changes and exit the editor. The virt-handler pods restart and connect to the secondary network.
Verification
When the node that the virtual machine runs on is placed into maintenance mode, the VM automatically migrates to another node in the cluster. You can verify that the migration occurred over the secondary network and not the default pod network by checking the target IP address in the virtual machine instance (VMI) metadata.
$ oc get vmi <vmi_name> -o jsonpath='{.status.migrationState.targetNodeAddress}'
8.10.2. Selecting a dedicated network by using the web console Copiar o linkLink copiado para a área de transferência!
You can select a dedicated network for live migration by using the OpenShift Container Platform web console.
Prerequisites
- You configured a Multus network for live migration.
- You created a network attachment definition for the network.
Procedure
- Go to Virtualization > Overview in the OpenShift Container Platform web console.
- Click the Settings tab and then click Live migration.
- Select the network from the Live migration network list.
8.11. Configuring and viewing IP addresses Copiar o linkLink copiado para a área de transferência!
You can configure an IP address when you create a virtual machine (VM). The IP address is provisioned with cloud-init.
You can view the IP address of a VM by using the OpenShift Container Platform web console or the command line. The network information is collected by the QEMU guest agent.
8.11.1. Configuring IP addresses for virtual machines Copiar o linkLink copiado para a área de transferência!
You can configure a static IP address when you create a virtual machine (VM) by using the web console or the command line.
You can configure a dynamic IP address when you create a VM by using the command line.
The IP address is provisioned with cloud-init.
8.11.1.1. Configuring a static IP address when creating a virtual machine by using the web console Copiar o linkLink copiado para a área de transferência!
You can configure a static IP address when you create a virtual machine (VM) by using the web console. The IP address is provisioned with cloud-init.
If the VM is connected to the pod network, the pod network interface is the default route unless you update it.
Prerequisites
- The virtual machine is connected to a secondary network.
Procedure
- Navigate to Virtualization → Catalog in the web console.
- Click a template tile.
- Click Customize VirtualMachine.
- Click Next.
- On the Scripts tab, click the edit icon beside Cloud-init.
- Select the Add network data checkbox.
- Enter the ethernet name, one or more IP addresses separated by commas, and the gateway address.
- Click Apply.
- Click Create VirtualMachine.
8.11.1.2. Configuring an IP address when creating a virtual machine by using the command line Copiar o linkLink copiado para a área de transferência!
You can configure a static or dynamic IP address when you create a virtual machine (VM). The IP address is provisioned with cloud-init.
If the VM is connected to the pod network, the pod network interface is the default route unless you update it.
Prerequisites
- The virtual machine is connected to a secondary network.
- You have a DHCP server available on the secondary network to configure a dynamic IP for the virtual machine.
Procedure
Edit the spec.template.spec.volumes.cloudInitNoCloud.networkData stanza of the virtual machine configuration:

To configure a dynamic IP address, specify the interface name and enable DHCP:

kind: VirtualMachine
spec:
# ...
  template:
  # ...
    spec:
      volumes:
      - cloudInitNoCloud:
          networkData: |
            version: 2
            ethernets:
              eth1:
                dhcp4: true

Specify the interface name (eth1 in this example).

To configure a static IP, specify the interface name and the IP address:

kind: VirtualMachine
spec:
# ...
  template:
  # ...
    spec:
      volumes:
      - cloudInitNoCloud:
          networkData: |
            version: 2
            ethernets:
              eth1:
                addresses:
                - 10.10.10.14/24
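You can combine a static address with a gateway and DNS servers in the same networkData stanza. The following sketch uses assumed example values for the gateway and nameserver; adjust them for your secondary network:

kind: VirtualMachine
spec:
# ...
  template:
  # ...
    spec:
      volumes:
      - cloudInitNoCloud:
          networkData: |
            version: 2
            ethernets:
              eth1:
                addresses:
                - 10.10.10.14/24
                gateway4: 10.10.10.1
                nameservers:
                  addresses:
                  - 10.10.10.2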
8.11.2. Viewing IP addresses of virtual machines Copiar o linkLink copiado para a área de transferência!
You can view the IP address of a VM by using the OpenShift Container Platform web console or the command line.
The network information is collected by the QEMU guest agent.
8.11.2.1. Viewing the IP address of a virtual machine by using the web console Copiar o linkLink copiado para a área de transferência!
You can view the IP address of a virtual machine (VM) by using the OpenShift Container Platform web console.
You must install the QEMU guest agent on a VM to view the IP address of a secondary network interface. A pod network interface does not require the QEMU guest agent.
Procedure
- In the OpenShift Container Platform console, click Virtualization → VirtualMachines from the side menu.
- Select a VM to open the VirtualMachine details page.
- Click the Details tab to view the IP address.
8.11.2.2. Viewing the IP address of a virtual machine by using the command line Copiar o linkLink copiado para a área de transferência!
You can view the IP address of a virtual machine (VM) by using the command line.
You must install the QEMU guest agent on a VM to view the IP address of a secondary network interface. A pod network interface does not require the QEMU guest agent.
Procedure
Obtain the virtual machine instance configuration by running the following command:
$ oc describe vmi <vmi_name>Example output
# ... Interfaces: Interface Name: eth0 Ip Address: 10.244.0.37/24 Ip Addresses: 10.244.0.37/24 fe80::858:aff:fef4:25/64 Mac: 0a:58:0a:f4:00:25 Name: default Interface Name: v2 Ip Address: 1.1.1.7/24 Ip Addresses: 1.1.1.7/24 fe80::f4d9:70ff:fe13:9089/64 Mac: f6:d9:70:13:90:89 Interface Name: v1 Ip Address: 1.1.1.1/24 Ip Addresses: 1.1.1.1/24 1.1.1.2/24 1.1.1.4/24 2001:de7:0:f101::1/64 2001:db8:0:f101::1/64 fe80::1420:84ff:fe10:17aa/64 Mac: 16:20:84:10:17:aa
8.12. Accessing a virtual machine by using the cluster FQDN Copiar o linkLink copiado para a área de transferência!
You can access a virtual machine (VM) that is attached to a secondary network interface from outside the cluster by using the fully qualified domain name (FQDN) of the cluster.
Accessing VMs by using the cluster FQDN is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
8.12.1. Configuring a DNS server for secondary networks Copiar o linkLink copiado para a área de transferência!
The Cluster Network Addons Operator (CNAO) deploys a Domain Name Server (DNS) server and monitoring components when you enable the deployKubeSecondaryDNS feature gate in the HyperConverged custom resource (CR).
Prerequisites
- You installed the OpenShift CLI (oc).
- You configured a load balancer for the cluster.
- You logged in to the cluster with cluster-admin permissions.
Procedure
Create a load balancer service to expose the DNS server outside the cluster by running the oc expose command according to the following example:

$ oc expose -n openshift-cnv deployment/secondary-dns --name=dns-lb \
  --type=LoadBalancer --port=53 --target-port=5353 --protocol='UDP'

Retrieve the external IP address by running the following command:
$ oc get service -n openshift-cnv

Example output
NAME     TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
dns-lb   LoadBalancer   172.30.27.5   10.46.41.94   53:31829/TCP   5s

Edit the HyperConverged CR in your default editor by running the following command:

$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv

Enable the DNS server and monitoring components according to the following example:

apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  featureGates:
    deployKubeSecondaryDNS: true
  kubeSecondaryDNSNameServerIP: "10.46.41.94"
# ...
- Specify the external IP address exposed by the load balancer service.
- Save the file and exit the editor.
Retrieve the cluster FQDN by running the following command:
$ oc get dnses.config.openshift.io cluster -o jsonpath='{.spec.baseDomain}'

Example output

openshift.example.com

Point to the DNS server by using one of the following methods:

- Add the kubeSecondaryDNSNameServerIP value to the resolv.conf file on your local machine.

  Note: Editing the resolv.conf file overwrites existing DNS settings.

- Add the kubeSecondaryDNSNameServerIP value and the cluster FQDN to the enterprise DNS server records. For example:

  vm.<FQDN>. IN NS ns.vm.<FQDN>.
  ns.vm.<FQDN>. IN A 10.46.41.94
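For example, on a Linux client the first method amounts to adding a nameserver entry such as the following to resolv.conf, using the example external IP address shown earlier:

nameserver 10.46.41.94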
8.12.2. Connecting to a VM on a secondary network by using the cluster FQDN Copiar o linkLink copiado para a área de transferência!
You can access a running virtual machine (VM) attached to a secondary network interface by using the fully qualified domain name (FQDN) of the cluster.
Prerequisites
- You installed the QEMU guest agent on the VM.
- The IP address of the VM is public.
- You configured the DNS server for secondary networks.
- You retrieved the fully qualified domain name (FQDN) of the cluster.
Procedure
Retrieve the network interface name from the VM configuration by running the following command:
$ oc get vm -n <namespace> <vm_name> -o yamlExample output
apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: running: true template: spec: domain: devices: interfaces: - bridge: {} name: example-nic # ... networks: - multus: networkName: bridge-conf name: example-nic1 - 1
- Note the name of the network interface.
Connect to the VM by using the ssh command:

$ ssh <user_name>@<interface_name>.<vm_name>.<namespace>.vm.<cluster_fqdn>
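For example, using the interface, VM, and namespace names shown in the example above and the example cluster FQDN openshift.example.com, the command might look like the following. The user name is illustrative:

$ ssh cloud-user@example-nic.example-vm.example-namespace.vm.openshift.example.com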
8.13. Managing MAC address pools for network interfaces Copiar o linkLink copiado para a área de transferência!
The KubeMacPool component allocates MAC addresses for virtual machine (VM) network interfaces from a shared MAC address pool. This ensures that each network interface is assigned a unique MAC address.
A virtual machine instance created from that VM retains the assigned MAC address across reboots.
KubeMacPool does not handle virtual machine instances created independently from a virtual machine.
8.13.1. Managing KubeMacPool by using the command line Copiar o linkLink copiado para a área de transferência!
You can disable and re-enable KubeMacPool by using the command line.
KubeMacPool is enabled by default.
Procedure
To disable KubeMacPool in two namespaces, run the following command:
$ oc label namespace <namespace1> <namespace2> mutatevirtualmachines.kubemacpool.io=ignore

To re-enable KubeMacPool in two namespaces, run the following command:
$ oc label namespace <namespace1> <namespace2> mutatevirtualmachines.kubemacpool.io-
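You can confirm whether the label is set by listing the namespace labels. This is a standard oc invocation, shown here for convenience:

$ oc get namespace <namespace1> --show-labels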
Chapter 9. Storage Copiar o linkLink copiado para a área de transferência!
9.1. Storage configuration overview Copiar o linkLink copiado para a área de transferência!
You can configure a default storage class, storage profiles, Containerized Data Importer (CDI), data volumes (DVs), and automatic boot source updates.
9.1.1. Storage Copiar o linkLink copiado para a área de transferência!
The following storage configuration tasks are mandatory:
- Configure a default storage class
-
You must configure a default storage class for the cluster. Otherwise, OpenShift Virtualization cannot automatically import boot source images.
DataVolume objects (DVs) and PersistentVolumeClaim objects (PVCs) that do not explicitly specify a storage class remain in the Pending state until you set a default storage class.

- Configure storage profiles
- You must configure storage profiles if your storage provider is not recognized by CDI. A storage profile provides recommended storage settings based on the associated storage class.
The following storage configuration tasks are optional:
- Reserve additional PVC space for file system overhead
- By default, 5.5% of a file system PVC is reserved for overhead, reducing the space available for VM disks by that amount. You can configure a different overhead value.
- Configure local storage by using the hostpath provisioner
- You can configure local storage for virtual machines by using the hostpath provisioner (HPP). When you install the OpenShift Virtualization Operator, the HPP Operator is automatically installed.
- Configure user permissions to clone data volumes between namespaces
- You can configure RBAC roles to enable users to clone data volumes between namespaces.
9.1.2. Containerized Data Importer Copiar o linkLink copiado para a área de transferência!
You can perform the following Containerized Data Importer (CDI) configuration tasks:
- Override the resource request limits of a namespace
- You can configure CDI to import, upload, and clone VM disks into namespaces that are subject to CPU and memory resource restrictions.
- Configure CDI scratch space
- CDI requires scratch space (temporary storage) to complete some operations, such as importing and uploading VM images. During this process, CDI provisions a scratch space PVC equal to the size of the PVC backing the destination data volume (DV).
9.1.3. Data volumes Copiar o linkLink copiado para a área de transferência!
You can perform the following data volume configuration tasks:
- Enable preallocation for data volumes
- CDI can preallocate disk space to improve write performance when creating data volumes. You can enable preallocation for specific data volumes.
- Manage data volume annotations
- Data volume annotations allow you to manage pod behavior. You can add one or more annotations to a data volume, which then propagates to the created importer pods.
9.1.4. Boot source updates Copiar o linkLink copiado para a área de transferência!
You can perform the following boot source update configuration task:
- Manage automatic boot source updates
- Boot sources can make virtual machine (VM) creation more accessible and efficient for users. If automatic boot source updates are enabled, CDI imports, polls, and updates the images so that they are ready to be cloned for new VMs. By default, CDI automatically updates Red Hat boot sources. You can enable automatic updates for custom boot sources.
9.2. Configuring storage profiles Copiar o linkLink copiado para a área de transferência!
A storage profile provides recommended storage settings based on the associated storage class. A storage profile is allocated for each storage class.
The Containerized Data Importer (CDI) recognizes a storage provider if it has been configured to identify and interact with the storage provider’s capabilities.
For recognized storage types, the CDI provides values that optimize the creation of PVCs. You can also configure automatic settings for the storage class by customizing the storage profile. If the CDI does not recognize your storage provider, you must configure storage profiles.
When using OpenShift Virtualization with Red Hat OpenShift Data Foundation, specify RBD block mode persistent volume claims (PVCs) when creating virtual machine disks. RBD block mode volumes are more efficient and provide better performance than Ceph FS or RBD filesystem-mode PVCs.
To specify RBD block mode PVCs, use the ocs-storagecluster-ceph-rbd storage class and VolumeMode: Block.
9.2.1. Customizing the storage profile Copiar o linkLink copiado para a área de transferência!
You can specify default parameters by editing the
StorageProfile
DataVolume
An empty
status
If you create a data volume and omit YAML attributes and these attributes are not defined in the storage profile, then the requested storage will not be allocated and the underlying persistent volume claim (PVC) will not be created.
Prerequisites
- Ensure that your planned configuration is supported by the storage class and its provider. Specifying an incompatible configuration in a storage profile causes volume provisioning to fail.
Procedure
Edit the storage profile. In this example, the provisioner is not recognized by CDI.
$ oc edit storageprofile <storage_class>

Example storage profile

apiVersion: cdi.kubevirt.io/v1beta1
kind: StorageProfile
metadata:
  name: <unknown_provisioner_class>
# ...
spec: {}
status:
  provisioner: <unknown_provisioner>
  storageClass: <unknown_provisioner_class>

Provide the needed attribute values in the storage profile:
Example storage profile
apiVersion: cdi.kubevirt.io/v1beta1
kind: StorageProfile
metadata:
  name: <unknown_provisioner_class>
# ...
spec:
  claimPropertySets:
  - accessModes:
    - ReadWriteOnce
    volumeMode: Filesystem
status:
  provisioner: <unknown_provisioner>
  storageClass: <unknown_provisioner_class>

After you save your changes, the selected values appear in the storage profile status element.
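If you prefer not to open an editor, the same claimPropertySets values can be applied with a merge patch. This is a sketch that assumes the storage class name used above:

$ oc patch storageprofile <storage_class> --type merge -p '{"spec": {"claimPropertySets": [{"accessModes": ["ReadWriteOnce"], "volumeMode": "Filesystem"}]}}'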
9.2.1.1. Setting a default cloning strategy using a storage profile Copiar o linkLink copiado para a área de transferência!
You can use storage profiles to set a default cloning method for a storage class, creating a cloning strategy. Setting cloning strategies can be helpful, for example, if your storage vendor only supports certain cloning methods. It also allows you to select a method that limits resource usage or maximizes performance.
Cloning strategies can be specified by setting the cloneStrategy attribute in a storage profile to one of the following values:

- snapshot is used by default when snapshots are configured. The CDI will use the snapshot method if it recognizes the storage provider and the provider supports Container Storage Interface (CSI) snapshots. This cloning strategy uses a temporary volume snapshot to clone the volume.
- copy uses a source pod and a target pod to copy data from the source volume to the target volume. Host-assisted cloning is the least efficient method of cloning.
- csi-clone uses the CSI clone API to efficiently clone an existing volume without using an interim volume snapshot. Unlike snapshot or copy, which are used by default if no storage profile is defined, CSI volume cloning is only used when you specify it in the StorageProfile object for the provisioner's storage class.
You can also set clone strategies using the CLI without modifying the default claimPropertySets in your YAML spec, as shown in the patch example after the following manifest.
Example storage profile
apiVersion: cdi.kubevirt.io/v1beta1
kind: StorageProfile
metadata:
name: <provisioner_class>
# ...
spec:
claimPropertySets:
- accessModes:
- ReadWriteOnce
volumeMode:
Filesystem
cloneStrategy: csi-clone
status:
provisioner: <provisioner>
storageClass: <provisioner_class>
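As noted above, you can also set the cloning strategy from the CLI. A minimal sketch using a merge patch, assuming the provisioner's storage class name:

$ oc patch storageprofile <provisioner_class> --type merge -p '{"spec": {"cloneStrategy": "csi-clone"}}'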
| Storage provider | Default behavior |
|---|---|
| rook-ceph.rbd.csi.ceph.com | Snapshot |
| openshift-storage.rbd.csi.ceph.com | Snapshot |
| csi-vxflexos.dellemc.com | CSI Clone |
| csi-isilon.dellemc.com | CSI Clone |
| csi-powermax.dellemc.com | CSI Clone |
| csi-powerstore.dellemc.com | CSI Clone |
| hspc.csi.hitachi.com | CSI Clone |
| csi.hpe.com | CSI Clone |
| spectrumscale.csi.ibm.com | CSI Clone |
| rook-ceph.rbd.csi.ceph.com | CSI Clone |
| openshift-storage.rbd.csi.ceph.com | CSI Clone |
| cephfs.csi.ceph.com | CSI Clone |
| openshift-storage.cephfs.csi.ceph.com | CSI Clone |
9.3. Managing automatic boot source updates Copiar o linkLink copiado para a área de transferência!
You can manage automatic updates for the following boot sources:
Boot sources can make virtual machine (VM) creation more accessible and efficient for users. If automatic boot source updates are enabled, the Containerized Data Importer (CDI) imports, polls, and updates the images so that they are ready to be cloned for new VMs. By default, CDI automatically updates Red Hat boot sources.
9.3.1. Managing Red Hat boot source updates Copiar o linkLink copiado para a área de transferência!
You can opt out of automatic updates for all system-defined boot sources by disabling the
enableCommonBootImageImport
DataImportCron
When the
enableCommonBootImageImport
DataSource
DataSource
9.3.1.1. Managing automatic updates for all system-defined boot sources Copiar o linkLink copiado para a área de transferência!
Disabling automatic boot source imports and updates can lower resource usage. In disconnected environments, disabling automatic boot source updates prevents CDIDataImportCronOutdated alerts from filling up logs.
To disable automatic updates for all system-defined boot sources, turn off the enableCommonBootImageImport feature gate by setting the value to false. Setting it to true re-enables the feature gate and turns automatic boot source updates back on.
Custom boot sources are not affected by this setting.
Procedure
Toggle the feature gate for automatic boot source updates by editing the HyperConverged custom resource (CR).

To disable automatic boot source updates, set the spec.featureGates.enableCommonBootImageImport field in the HyperConverged CR to false. For example:

$ oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type json -p '[{"op": "replace", "path": "/spec/featureGates/enableCommonBootImageImport", "value": false}]'

To re-enable automatic boot source updates, set the spec.featureGates.enableCommonBootImageImport field in the HyperConverged CR to true. For example:

$ oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type json -p '[{"op": "replace", "path": "/spec/featureGates/enableCommonBootImageImport", "value": true}]'
9.3.2. Managing custom boot source updates Copiar o linkLink copiado para a área de transferência!
Custom boot sources that are not provided by OpenShift Virtualization are not controlled by the feature gate. You must manage them individually by editing the HyperConverged custom resource (CR).
You must configure a storage class. Otherwise, the cluster cannot receive automated updates for custom boot sources. See Defining a storage class for details.
9.3.2.1. Configuring a storage class for custom boot source updates Copiar o linkLink copiado para a área de transferência!
You can override the default storage class by editing the HyperConverged custom resource (CR).
Boot sources are created from storage using the default storage class. If your cluster does not have a default storage class, you must define one before configuring automatic updates for custom boot sources.
Procedure
Open the HyperConverged CR in your default editor by running the following command:

$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv

Define a new storage class by entering a value in the storageClassName field:

apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
spec:
  dataImportCronTemplates:
  - metadata:
      name: rhel8-image-cron
    spec:
      template:
        spec:
          storageClassName: <new_storage_class>
      schedule: "0 */12 * * *"
      managedDataSource: <data_source>
# ...

- spec.dataImportCronTemplates.spec.template.spec.storageClassName specifies the storage class.
- spec.dataImportCronTemplates.spec.schedule is a required field that specifies the schedule for the job in cron format.
- spec.dataImportCronTemplates.spec.managedDataSource is a required field that specifies the data source to use.

Note: For the custom image to be detected as an available boot source, the value of the spec.dataVolumeTemplates.spec.sourceRef.name parameter in the VM template must match this value.
-
Remove the storageclass.kubernetes.io/is-default-class annotation from the current default storage class.

Retrieve the name of the current default storage class by running the following command:

$ oc get storageclass

Example output

NAME                           PROVISIONER                        RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
csi-manila-ceph                manila.csi.openstack.org           Delete          Immediate              false                  11d
hostpath-csi-basic (default)   kubevirt.io.hostpath-provisioner   Delete          WaitForFirstConsumer   false                  11d

In this example, the current default storage class is named hostpath-csi-basic.

Remove the annotation from the current default storage class by running the following command:
$ oc patch storageclass <current_default_storage_class> -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'

Replace <current_default_storage_class> with the storageClassName value of the default storage class.
Set the new storage class as the default by running the following command:
$ oc patch storageclass <new_storage_class> -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

Replace <new_storage_class> with the storageClassName value that you added to the HyperConverged CR.
9.3.2.2. Enabling automatic updates for custom boot sources Copiar o linkLink copiado para a área de transferência!
OpenShift Virtualization automatically updates system-defined boot sources by default, but does not automatically update custom boot sources. You must manually enable automatic updates by editing the HyperConverged custom resource (CR).
Prerequisites
- The cluster has a default storage class.
Procedure
Open the HyperConverged CR in your default editor by running the following command:

$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv

Edit the HyperConverged CR, adding the appropriate template and boot source in the dataImportCronTemplates section. For example:

Example custom resource

apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
spec:
  dataImportCronTemplates:
  - metadata:
      name: centos7-image-cron
      annotations:
        cdi.kubevirt.io/storage.bind.immediate.requested: "true"
    spec:
      schedule: "0 */12 * * *"
      template:
        spec:
          source:
            registry:
              url: docker://quay.io/containerdisks/centos:7-2009
          storage:
            resources:
              requests:
                storage: 10Gi
      managedDataSource: centos7
      retentionPolicy: "None"

- spec.dataImportCronTemplates.metadata.annotations specifies a required annotation for storage classes with volumeBindingMode set to WaitForFirstConsumer.
- spec.dataImportCronTemplates.spec.schedule specifies the schedule for the job, specified in cron format.
- spec.dataImportCronTemplates.spec.template.spec.source.registry specifies the registry source to use to create a data volume. Use the default pod pullMethod and not node pullMethod, which is based on the node docker cache. The node docker cache is useful when a registry image is available via Container.Image, but the CDI importer is not authorized to access it.
- spec.dataImportCronTemplates.spec.managedDataSource specifies the name of the managed data source. For the custom image to be detected as an available boot source, the name of the image's managedDataSource must match the name of the template's DataSource, which is found under spec.dataVolumeTemplates.spec.sourceRef.name in the VM template YAML file.
- spec.dataImportCronTemplates.spec.retentionPolicy specifies whether to retain data volumes and data sources after the cron job is deleted. Use All to retain data volumes and data sources. Use None to delete data volumes and data sources.
-
- Save the file.
9.3.2.3. Enabling volume snapshot boot sources Copiar o linkLink copiado para a área de transferência!
Enable volume snapshot boot sources by setting the parameter in the
StorageProfile
DataImportCron
VolumeSnapshot
Use volume snapshots on a storage profile that is proven to scale better when cloning from a single snapshot.
Prerequisites
- You must have access to a volume snapshot with the operating system image.
- The storage must support snapshotting.
Procedure
Open the storage profile object that corresponds to the storage class used to provision boot sources by running the following command:
$ oc edit storageprofile <storage_class>-
Review the dataImportCronSourceFormat specification of the StorageProfile to confirm whether or not the VM is using PVC or volume snapshot by default.

Edit the storage profile, if needed, by updating the dataImportCronSourceFormat specification to snapshot.

Example storage profile

apiVersion: cdi.kubevirt.io/v1beta1
kind: StorageProfile
metadata:
# ...
spec:
  dataImportCronSourceFormat: snapshot
Verification
Open the storage profile object that corresponds to the storage class used to provision boot sources.
$ oc get storageprofile <storage_class> -oyaml-
Confirm that the dataImportCronSourceFormat specification of the StorageProfile is set to 'snapshot', and that any DataSource objects that the DataImportCron points to now reference volume snapshots.
You can now use these boot sources to create virtual machines.
9.3.3. Disabling automatic updates for a single boot source Copiar o linkLink copiado para a área de transferência!
You can disable automatic updates for an individual boot source, whether it is custom or system-defined, by editing the HyperConverged custom resource (CR).
Procedure
Open the HyperConverged CR in your default editor by running the following command:

$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv

Disable automatic updates for an individual boot source by editing the spec.dataImportCronTemplates field.

- Custom boot source
  Remove the boot source from the spec.dataImportCronTemplates field. Automatic updates are disabled for custom boot sources by default.
- System-defined boot source
  Add the boot source to spec.dataImportCronTemplates.
  Note: Automatic updates are enabled by default for system-defined boot sources, but these boot sources are not listed in the CR unless you add them.
  Set the value of the dataimportcrontemplate.kubevirt.io/enable annotation to 'false'.

For example:
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
spec:
  dataImportCronTemplates:
  - metadata:
      annotations:
        dataimportcrontemplate.kubevirt.io/enable: 'false'
      name: rhel8-image-cron
# ...
- Save the file.
9.3.4. Verifying the status of a boot source Copiar o linkLink copiado para a área de transferência!
You can determine if a boot source is system-defined or custom by viewing the HyperConverged custom resource (CR).
Procedure
View the contents of the HyperConverged CR by running the following command:

$ oc get hyperconverged kubevirt-hyperconverged -n openshift-cnv -o yaml

Example output
apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: # ... status: # ... dataImportCronTemplates: - metadata: annotations: cdi.kubevirt.io/storage.bind.immediate.requested: "true" name: centos-7-image-cron spec: garbageCollect: Outdated managedDataSource: centos7 schedule: 55 8/12 * * * template: metadata: {} spec: source: registry: url: docker://quay.io/containerdisks/centos:7-2009 storage: resources: requests: storage: 30Gi status: {} status: commonTemplate: true # ... - metadata: annotations: cdi.kubevirt.io/storage.bind.immediate.requested: "true" name: user-defined-dic spec: garbageCollect: Outdated managedDataSource: user-defined-centos-stream8 schedule: 55 8/12 * * * template: metadata: {} spec: source: registry: pullMethod: node url: docker://quay.io/containerdisks/centos-stream:8 storage: resources: requests: storage: 30Gi status: {} status: {} # ...-
- status.dataImportCronTemplates.status.commonTemplate specifies a system-defined boot source.
- status.dataImportCronTemplates.status specifies a custom boot source.

Verify the status of the boot source by reviewing the status.dataImportCronTemplates.status field:

- If the field contains commonTemplate: true, it is a system-defined boot source.
- If the status.dataImportCronTemplates.status field has the value {}, it is a custom boot source.
9.4. Reserving PVC space for file system overhead Copiar o linkLink copiado para a área de transferência!
When you add a virtual machine disk to a persistent volume claim (PVC) that uses the Filesystem volume mode, you must ensure that there is enough space on the PVC for the VM disk and for file system overhead, such as metadata.
By default, OpenShift Virtualization reserves 5.5% of the PVC space for overhead, reducing the space available for virtual machine disks by that amount.
You can configure a different overhead value by editing the HCO object.
9.4.1. Overriding the default file system overhead value Copiar o linkLink copiado para a área de transferência!
Change the amount of persistent volume claim (PVC) space that OpenShift Virtualization reserves for file system overhead by editing the spec.filesystemOverhead attribute of the HCO object.
Prerequisites
- Install the OpenShift CLI (oc).
Procedure
Open the HCO object for editing by running the following command:

$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv

Edit the spec.filesystemOverhead fields, populating them with your chosen values:

# ...
spec:
  filesystemOverhead:
    global: "<new_global_value>"
    storageClass:
      <storage_class_name>: "<new_value_for_this_storage_class>"

- spec.filesystemOverhead.global specifies the default file system overhead percentage used for any storage classes that do not already have a set value. For example, global: "0.07" reserves 7% of the PVC for file system overhead.
- spec.filesystemOverhead.storageClass specifies the file system overhead percentage for the specified storage class. For example, mystorageclass: "0.04" changes the default overhead value for PVCs in the mystorageclass storage class to 4%.
-
- Save and exit the editor to update the HCO object.
Verification
View the CDIConfig status and verify your changes by running one of the following commands:

To generally verify changes to CDIConfig:

$ oc get cdiconfig -o yaml

To view your specific changes to CDIConfig:

$ oc get cdiconfig -o jsonpath='{.items..status.filesystemOverhead}'
9.5. Configuring local storage by using the hostpath provisioner Copiar o linkLink copiado para a área de transferência!
You can configure local storage for virtual machines by using the hostpath provisioner (HPP).
When you install the OpenShift Virtualization Operator, the Hostpath Provisioner Operator is automatically installed. HPP is a local storage provisioner designed for OpenShift Virtualization that is created by the Hostpath Provisioner Operator. To use HPP, you create an HPP custom resource (CR) with a basic storage pool.
9.5.1. Creating a hostpath provisioner with a basic storage pool Copiar o linkLink copiado para a área de transferência!
You configure a hostpath provisioner (HPP) with a basic storage pool by creating an HPP custom resource (CR) with a storagePools stanza.
Do not create storage pools in the same partition as the operating system. Otherwise, the operating system partition might become filled to capacity, which will impact performance or cause the node to become unstable or unusable.
Prerequisites
- The directories specified in spec.storagePools.path must have read/write access.
Procedure
Create an hpp_cr.yaml file with a storagePools stanza as in the following example:

apiVersion: hostpathprovisioner.kubevirt.io/v1beta1
kind: HostPathProvisioner
metadata:
  name: hostpath-provisioner
spec:
  imagePullPolicy: IfNotPresent
  storagePools:
  - name: any_name
    path: "/var/myvolumes"
  workload:
    nodeSelector:
      kubernetes.io/os: linux

- Save the file and exit.
Create the HPP by running the following command:
$ oc create -f hpp_cr.yaml
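To confirm that the Operator accepted the configuration, you can inspect the CR that you created. This is a routine command, shown as an illustration and assuming the CR name used above:

$ oc get hostpathprovisioner hostpath-provisioner -o yaml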
9.5.1.1. About creating storage classes Copiar o linkLink copiado para a área de transferência!
When you create a storage class, you set parameters that affect the dynamic provisioning of persistent volumes (PVs) that belong to that storage class. You cannot update a StorageClass object's parameters after you create it.
In order to use the hostpath provisioner (HPP), you must create an associated storage class for the CSI driver with the storagePools stanza.
Virtual machines use data volumes that are based on local PVs. Local PVs are bound to specific nodes. While the disk image is prepared for consumption by the virtual machine, it is possible that the virtual machine cannot be scheduled to the node where the local storage PV was previously pinned.
To solve this problem, use the Kubernetes pod scheduler to bind the persistent volume claim (PVC) to a PV on the correct node. By using a StorageClass with the volumeBindingMode parameter set to WaitForFirstConsumer, the binding and provisioning of the PV is delayed until a pod is created using the PVC.
9.5.1.2. Creating a storage class for the CSI driver with the storagePools stanza Copiar o linkLink copiado para a área de transferência!
To use the hostpath provisioner (HPP) you must create an associated storage class for the Container Storage Interface (CSI) driver.
When you create a storage class, you set parameters that affect the dynamic provisioning of persistent volumes (PVs) that belong to that storage class. You cannot update a StorageClass object's parameters after you create it.
Virtual machines use data volumes that are based on local PVs. Local PVs are bound to specific nodes. While a disk image is prepared for consumption by the virtual machine, it is possible that the virtual machine cannot be scheduled to the node where the local storage PV was previously pinned.
To solve this problem, use the Kubernetes pod scheduler to bind the persistent volume claim (PVC) to a PV on the correct node. By using a StorageClass with the volumeBindingMode parameter set to WaitForFirstConsumer, the binding and provisioning of the PV is delayed until a pod is created using the PVC.
Procedure
Create a storageclass_csi.yaml file to define the storage class:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hostpath-csi
provisioner: kubevirt.io.hostpath-provisioner
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
parameters:
  storagePool: my-storage-pool

- reclaimPolicy specifies whether the underlying storage is deleted or retained when a user deletes a PVC. The two possible reclaimPolicy values are Delete and Retain. If you do not specify a value, the default value is Delete.
- volumeBindingMode specifies the timing of PV creation. The WaitForFirstConsumer configuration in this example means that PV creation is delayed until a pod is scheduled to a specific node.
- parameters.storagePool specifies the name of the storage pool defined in the HPP custom resource (CR).

- Save the file and exit.
Create the
object by running the following command:StorageClass$ oc create -f storageclass_csi.yaml
9.5.2. About storage pools created with PVC templates Copiar o linkLink copiado para a área de transferência!
If you have a single, large persistent volume (PV), you can create a storage pool by defining a PVC template in the hostpath provisioner (HPP) custom resource (CR).
A storage pool created with a PVC template can contain multiple HPP volumes. Splitting a PV into smaller volumes provides greater flexibility for data allocation.
The PVC template is based on the spec section of a PersistentVolumeClaim object.
Example PersistentVolumeClaim object
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: iso-pvc
spec:
volumeMode: Block
storageClassName: my-storage-class
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
The
spec.volumeMode
You define a storage pool using a
pvcTemplate
pvcTemplate
You can combine basic storage pools with storage pools created from PVC templates.
9.5.2.1. Creating a storage pool with a PVC template Copiar o linkLink copiado para a área de transferência!
You can create a storage pool for multiple hostpath provisioner (HPP) volumes by specifying a PVC template in the HPP custom resource (CR).
Do not create storage pools in the same partition as the operating system. Otherwise, the operating system partition might become filled to capacity, which will impact performance or cause the node to become unstable or unusable.
Prerequisites
- The directories specified in spec.storagePools.path must have read/write access.
Procedure
Create an hpp_pvc_template_pool.yaml file for the HPP CR that specifies a persistent volume claim (PVC) template in the storagePools stanza according to the following example:

apiVersion: hostpathprovisioner.kubevirt.io/v1beta1
kind: HostPathProvisioner
metadata:
  name: hostpath-provisioner
spec:
  imagePullPolicy: IfNotPresent
  storagePools:
  - name: my-storage-pool
    path: "/var/myvolumes"
    pvcTemplate:
      volumeMode: Block
      storageClassName: my-storage-class
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 5Gi
  workload:
    nodeSelector:
      kubernetes.io/os: linux
- The
storagePoolsstanza is an array that can contain both basic and PVC template storage pools. - 2 2
- Specify the storage pool directories under this node path.
- 3 3
- Optional: The
volumeModeparameter can be eitherBlockorFilesystemas long as it matches the provisioned volume format. If no value is specified, the default isFilesystem. If thevolumeModeisBlock, the mounting pod creates an XFS file system on the block volume before mounting it. - 4
- If the
storageClassNameparameter is omitted, the default storage class is used to create PVCs. If you omitstorageClassName, ensure that the HPP storage class is not the default storage class. - 5
- You can specify statically or dynamically provisioned storage. In either case, ensure the requested storage size is appropriate for the volume you want to virtually divide or the PVC cannot be bound to the large PV. If the storage class you are using uses dynamically provisioned storage, pick an allocation size that matches the size of a typical request.
- Save the file and exit.
Create the HPP with a storage pool by running the following command:
$ oc create -f hpp_pvc_template_pool.yaml
9.6. Enabling user permissions to clone data volumes across namespaces Copiar o linkLink copiado para a área de transferência!
The isolating nature of namespaces means that users cannot by default clone resources between namespaces.
To enable a user to clone a virtual machine to another namespace, a user with the cluster-admin role must create the required RBAC resources.
9.6.1. Creating RBAC resources for cloning data volumes Copiar o linkLink copiado para a área de transferência!
Create a new cluster role that enables permissions for all actions for the datavolumes resource.
Prerequisites
- You must have cluster admin privileges.
If you are a non-admin user who is an administrator for both the source and target namespaces, you can create a Role instead of a ClusterRole.
Procedure
Create a ClusterRole manifest:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: <datavolume-cloner>
rules:
- apiGroups: ["cdi.kubevirt.io"]
  resources: ["datavolumes/source"]
  verbs: ["*"]

where <datavolume-cloner> is a unique name for the cluster role.

Create the cluster role in the cluster:

$ oc create -f <datavolume-cloner.yaml>

where <datavolume-cloner.yaml> is the file name of the ClusterRole manifest created in the previous step.
Create a RoleBinding manifest that applies to both the source and destination namespaces and references the cluster role created in the previous step.

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: <allow-clone-to-user>
  namespace: <Source namespace>
subjects:
- kind: ServiceAccount
  name: default
  namespace: <Destination namespace>
roleRef:
  kind: ClusterRole
  name: datavolume-cloner
  apiGroup: rbac.authorization.k8s.io

Create the role binding in the cluster:

$ oc create -f <datavolume-cloner.yaml>

where <datavolume-cloner.yaml> is the file name of the RoleBinding manifest created in the previous step.
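With the cluster role and role binding in place, a data volume in the destination namespace can reference a PVC in the source namespace. The following is a minimal sketch; the PVC name is illustrative:

apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: cloned-datavolume
  namespace: <Destination namespace>
spec:
  source:
    pvc:
      namespace: <Source namespace>
      name: source-vm-disk
  storage:
    resources:
      requests:
        storage: 10Gi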
9.7. Configuring CDI to override CPU and memory quotas Copiar o linkLink copiado para a área de transferência!
You can configure the Containerized Data Importer (CDI) to import, upload, and clone virtual machine disks into namespaces that are subject to CPU and memory resource restrictions.
9.7.1. About CPU and memory quotas in a namespace Copiar o linkLink copiado para a área de transferência!
A resource quota, defined by the ResourceQuota object, imposes restrictions on a namespace that limit the total amount of compute resources that can be consumed by resources within that namespace. The HyperConverged custom resource (CR) defines the user configuration for the Containerized Data Importer (CDI). The CPU and memory request and limit values are set to a default value of 0. This ensures that pods created by CDI that do not specify compute resource requirements are given the default values and are allowed to run in a namespace that is restricted with a quota.
9.7.2. Overriding CPU and memory defaults Copiar o linkLink copiado para a área de transferência!
Modify the default settings for CPU and memory requests and limits for your use case by adding the spec.resourceRequirements.storageWorkloads stanza to the HyperConverged custom resource (CR).
Prerequisites
- Install the OpenShift CLI (oc).
Procedure
Edit the HyperConverged CR by running the following command:

$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv

Add the spec.resourceRequirements.storageWorkloads stanza to the CR, setting the values based on your use case. For example:

apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
spec:
  resourceRequirements:
    storageWorkloads:
      limits:
        cpu: "500m"
        memory: "2Gi"
      requests:
        cpu: "250m"
        memory: "1Gi"
- Save and exit the editor to update the HyperConverged CR.
9.8. Preparing CDI scratch space Copiar o linkLink copiado para a área de transferência!
9.8.1. About scratch space Copiar o linkLink copiado para a área de transferência!
The Containerized Data Importer (CDI) requires scratch space (temporary storage) to complete some operations, such as importing and uploading virtual machine images. During this process, CDI provisions a scratch space PVC equal to the size of the PVC backing the destination data volume (DV). The scratch space PVC is deleted after the operation completes or aborts.
You can define the storage class that is used to bind the scratch space PVC in the spec.scratchSpaceStorageClass field of the HyperConverged custom resource (CR).
If the defined storage class does not match a storage class in the cluster, then the default storage class defined for the cluster is used. If there is no default storage class defined in the cluster, the storage class used to provision the original DV or PVC is used.
CDI requires requesting scratch space with a file volume mode, regardless of the PVC backing the origin data volume. If the origin PVC is backed by block volume mode, you must define a storage class capable of provisioning file volume mode PVCs.
Manual provisioning
If there are no storage classes, CDI uses any PVCs in the project that match the size requirements for the image. If there are no PVCs that match these requirements, the CDI import pod remains in a Pending state until an appropriate PVC is made available or until a timeout function kills the pod.
9.8.2. CDI operations that require scratch space Copiar o linkLink copiado para a área de transferência!
| Type | Reason |
|---|---|
| Registry imports | CDI must download the image to a scratch space and extract the layers to find the image file. The image file is then passed to QEMU-IMG for conversion to a raw disk. |
| Upload image | QEMU-IMG does not accept input from STDIN. Instead, the image to upload is saved in scratch space before it can be passed to QEMU-IMG for conversion. |
| HTTP imports of archived images | QEMU-IMG does not know how to handle the archive formats CDI supports. Instead, the image is unarchived and saved into scratch space before it is passed to QEMU-IMG. |
| HTTP imports of authenticated images | QEMU-IMG inadequately handles authentication. Instead, the image is saved to scratch space and authenticated before it is passed to QEMU-IMG. |
| HTTP imports of custom certificates | QEMU-IMG inadequately handles custom certificates of HTTPS endpoints. Instead, CDI downloads the image to scratch space before passing the file to QEMU-IMG. |
9.8.3. Defining a storage class Copiar o linkLink copiado para a área de transferência!
You can define the storage class that the Containerized Data Importer (CDI) uses when allocating scratch space by adding the spec.scratchSpaceStorageClass field to the HyperConverged custom resource (CR).
Prerequisites
- Install the OpenShift CLI (oc).
Procedure
Edit the HyperConverged CR by running the following command:

$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv

Add the spec.scratchSpaceStorageClass field to the CR, setting the value to the name of a storage class that exists in the cluster:

apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
spec:
  scratchSpaceStorageClass: "<storage_class>"
- If you do not specify a storage class, CDI uses the storage class of the persistent volume claim that is being populated.
- Save and exit your default editor to update the HyperConverged CR.
9.8.4. CDI supported operations matrix Copiar o linkLink copiado para a área de transferência!
This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space.
| Content types | HTTP | HTTPS | HTTP basic auth | Registry | Upload |
|---|---|---|---|---|---|
| KubeVirt (QCOW2) |
✓ QCOW2 |
✓ QCOW2** |
✓ QCOW2 |
✓ QCOW2* |
✓ QCOW2* |
| KubeVirt (RAW) |
✓ RAW |
✓ RAW |
✓ RAW |
✓ RAW* |
✓ RAW* |
✓ Supported operation
□ Unsupported operation
* Requires scratch space
** Requires scratch space if a custom certificate authority is required
9.9. Using preallocation for data volumes Copiar o linkLink copiado para a área de transferência!
The Containerized Data Importer can preallocate disk space to improve write performance when creating data volumes.
You can enable preallocation for specific data volumes.
9.9.1. About preallocation Copiar o linkLink copiado para a área de transferência!
The Containerized Data Importer (CDI) can use the QEMU preallocate mode for data volumes to improve write performance. You can use preallocation mode for importing and uploading operations and when creating blank data volumes.
If preallocation is enabled, CDI uses the better preallocation method depending on the underlying file system and device type:
- fallocate: If the file system supports it, CDI uses the operating system's fallocate call to preallocate space by using the posix_fallocate function, which allocates blocks and marks them as uninitialized.
- full: If fallocate mode cannot be used, full mode allocates space for the image by writing data to the underlying storage. Depending on the storage location, all the empty allocated space might be zeroed.
9.9.2. Enabling preallocation for a data volume Copiar o linkLink copiado para a área de transferência!
You can enable preallocation for specific data volumes by including the spec.preallocation field in the data volume manifest. You can enable preallocation mode in the web console or by using the OpenShift CLI (oc).
Preallocation mode is supported for all CDI source types.
Procedure
Specify the spec.preallocation field in the data volume manifest:

apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: preallocated-datavolume
spec:
  source:
    registry:
      url: <image_url>
  storage:
    resources:
      requests:
        storage: 1Gi
  preallocation: true
# ...
9.10. Managing data volume annotations Copiar o linkLink copiado para a área de transferência!
Data volume (DV) annotations allow you to manage pod behavior. You can add one or more annotations to a data volume, which then propagates to the created importer pods.
9.10.1. Example: Data volume annotations Copiar o linkLink copiado para a área de transferência!
This example shows how you can configure data volume (DV) annotations to control which network the importer pod uses. The v1.multus-cni.io/default-network: bridge-network annotation causes the pod to use the multus network named bridge-network as its default network. If you want the importer pod to use both the default network from the cluster and the secondary multus network, use the k8s.v1.cni.cncf.io/networks: <network_name> annotation.
Multus network annotation example
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
name: datavolume-example
annotations:
v1.multus-cni.io/default-network: bridge-network
# ...
- 1
- Multus network annotation
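The alternative annotation mentioned above, which keeps the cluster default network and adds the secondary multus network, looks similar. The network name is reused from the example and is illustrative:

apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: datavolume-example
  annotations:
    k8s.v1.cni.cncf.io/networks: bridge-network
# ...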
Chapter 10. Live migration Copiar o linkLink copiado para a área de transferência!
10.1. About live migration Copiar o linkLink copiado para a área de transferência!
Live migration is the process of moving a running virtual machine (VM) to another node in the cluster without interrupting the virtual workload. By default, live migration traffic is encrypted using Transport Layer Security (TLS).
10.1.1. Live migration requirements
Live migration has the following requirements:
- The cluster must have shared storage with ReadWriteMany (RWX) access mode.
- The cluster must have sufficient RAM and network bandwidth.
Note: You must ensure that there is enough memory request capacity in the cluster to support node drains that result in live migrations. You can determine the approximate required spare memory by using the following calculation (a worked example follows this list):
Product of (Maximum number of nodes that can drain in parallel) and (Highest total VM memory request allocations across nodes)
The default number of migrations that can run in parallel in the cluster is 5.
- If a VM uses a host model CPU, the nodes must support the CPU.
- Configuring a dedicated Multus network for live migration is highly recommended. A dedicated network minimizes the effects of network saturation on tenant workloads during migration.
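As a hypothetical illustration of the memory calculation above: if at most 2 nodes can drain in parallel and the highest total VM memory requests on any single node is 64 GiB, you would plan for roughly 2 x 64 GiB = 128 GiB of spare memory request capacity in the cluster. These numbers are examples only; substitute the values from your own environment.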
10.1.2. Common live migration tasks
You can perform the following live migration tasks:
10.1.3. Additional resources
10.2. Configuring live migration
You can configure live migration settings to ensure that the migration processes do not overwhelm the cluster.
You can configure live migration policies to apply different migration configurations to groups of virtual machines (VMs).
10.2.1. Live migration settings
You can configure the following live migration settings:
10.2.1.1. Configuring live migration limits and timeouts
Configure live migration limits and timeouts for the cluster by updating the HyperConverged custom resource (CR), which is located in the openshift-cnv namespace.
Procedure
Edit the HyperConverged CR and add the necessary live migration parameters:
$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
Example configuration file
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  liveMigrationConfig:
    bandwidthPerMigration: 64Mi 1
    completionTimeoutPerGiB: 800 2
    parallelMigrationsPerCluster: 5 3
    parallelOutboundMigrationsPerNode: 2 4
    progressTimeout: 150 5
- 1
- Bandwidth limit of each migration, where the value is the quantity of bytes per second. For example, a value of 2048Mi means 2048 MiB/s. Default: 0, which is unlimited.
- 2
- The migration is canceled if it has not completed in this time, in seconds per GiB of memory. For example, a VM with 6GiB memory times out if it has not completed migration in 4800 seconds. If the Migration Method is BlockMigration, the size of the migrating disks is included in the calculation.
- 3
- Number of migrations running in parallel in the cluster. Default: 5.
- 4
- Maximum number of outbound migrations per node. Default: 2.
- 5
- The migration is canceled if memory copy fails to make progress in this time, in seconds. Default: 150.
You can restore the default value for any spec.liveMigrationConfig field by deleting that key/value pair and saving the file. For example, delete progressTimeout: <value> to restore the default progressTimeout: 150.
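If you want to review the live migration settings that are currently in effect, one option is to read the liveMigrationConfig stanza back from the HyperConverged CR. This is a minimal sketch; the jsonpath expression simply prints the stanza:
$ oc get hyperconverged kubevirt-hyperconverged -n openshift-cnv -o jsonpath='{.spec.liveMigrationConfig}'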
10.2.2. Live migration policies
You can create live migration policies to apply different migration configurations to groups of VMs that are defined by VM or project labels.
You can create live migration policies by using the web console.
10.2.2.1. Creating a live migration policy by using the command line
You can create a live migration policy by using the command line. A live migration policy is applied to selected virtual machines (VMs) by using any combination of labels:
- VM labels such as size, os, or gpu
- Project labels such as priority, bandwidth, or hpc-workload
For the policy to apply to a specific group of VMs, all labels on the group of VMs must match the labels of the policy.
If multiple live migration policies apply to a VM, the policy with the greatest number of matching labels takes precedence.
If multiple policies meet this criterion, the policies are sorted by alphabetical order of the matching label keys, and the first one in that order takes precedence.
Procedure
Create a MigrationPolicy object as in the following example:
apiVersion: migrations.kubevirt.io/v1alpha1
kind: MigrationPolicy
metadata:
  name: <migration_policy>
spec:
  selectors:
    namespaceSelector:
      hpc-workloads: "True"
      xyz-workloads-type: ""
    virtualMachineInstanceSelector:
      workload-type: "db"
      operating-system: ""
Create the migration policy by running the following command:
$ oc create -f <migration_policy>.yaml
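To confirm that the policy was created and to review its selectors, you can read the object back. This is a sketch; <migration_policy> is the name used in the manifest above:
$ oc get migrationpolicy <migration_policy> -o yaml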
10.3. Initiating and canceling live migration
You can initiate the live migration of a virtual machine (VM) to another node by using the OpenShift Container Platform web console or the command line.
You can cancel a live migration by using the web console or the command line. The VM remains on its original node.
You can also initiate and cancel live migration by using the virtctl migrate <vm_name> and virtctl migrate-cancel <vm_name> commands.
10.3.1. Initiating live migration
10.3.1.1. Initiating live migration by using the web console
You can live migrate a running virtual machine (VM) to a different node in the cluster by using the OpenShift Container Platform web console.
The Migrate action is visible to all users but only cluster administrators can initiate a live migration.
Prerequisites
- The VM must be migratable.
- If the VM is configured with a host model CPU, the cluster must have an available node that supports the CPU model.
Procedure
- Navigate to Virtualization → VirtualMachines in the web console.
- Select Migrate from the Options menu beside a VM.
- Click Migrate.
10.3.1.2. Initiating live migration by using the command line
You can initiate the live migration of a running virtual machine (VM) by using the command line to create a VirtualMachineInstanceMigration object for the VM.
Procedure
Create a VirtualMachineInstanceMigration manifest for the VM that you want to migrate:
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceMigration
metadata:
  name: <migration_name>
spec:
  vmiName: <vm_name>
Create the object by running the following command:
$ oc create -f <migration_name>.yaml
The VirtualMachineInstanceMigration object triggers a live migration of the VM. This object exists in the cluster for as long as the virtual machine instance is running, unless manually deleted.
Verification
Obtain the VM status by running the following command:
$ oc describe vmi <vm_name> -n <namespace>
Example output
# ...
Status:
  Conditions:
    Last Probe Time:       <nil>
    Last Transition Time:  <nil>
    Status:                True
    Type:                  LiveMigratable
  Migration Method:  LiveMigration
  Migration State:
    Completed:                    true
    End Timestamp:                2018-12-24T06:19:42Z
    Migration UID:                d78c8962-0743-11e9-a540-fa163e0c69f1
    Source Node:                  node2.example.com
    Start Timestamp:              2018-12-24T06:19:35Z
    Target Node:                  node1.example.com
    Target Node Address:          10.9.0.18:43891
    Target Node Domain Detected:  true
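While a migration is in progress, you can also watch the VirtualMachineInstanceMigration object itself rather than the VMI. This is a minimal sketch using the vmim short name that also appears in the cancellation procedure:
$ oc get vmim <migration_name> -n <namespace> -o yaml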
10.3.2. Canceling live migration
10.3.2.1. Canceling live migration by using the web console
You can cancel the live migration of a virtual machine (VM) by using the OpenShift Container Platform web console.
Procedure
- Navigate to Virtualization → VirtualMachines in the web console.
- Select Cancel Migration on the Options menu beside a VM.
10.3.2.2. Canceling live migration by using the command line
Cancel the live migration of a virtual machine by deleting the VirtualMachineInstanceMigration object associated with the migration.
Procedure
Delete the VirtualMachineInstanceMigration object that triggered the live migration, migration-job in this example:
$ oc delete vmim migration-job
Chapter 11. Nodes
11.1. Node maintenance
Nodes can be placed into maintenance mode by using the oc adm utility or NodeMaintenance custom resources (CRs).
Note: The node-maintenance-operator (NMO) is not shipped with OpenShift Virtualization. It is deployed as a standalone Operator from the OperatorHub in the OpenShift Container Platform web console or by using the OpenShift CLI (oc).
For more information on remediation, fencing, and maintaining nodes, see the Workload Availability for Red Hat OpenShift documentation.
Virtual machines (VMs) must have a persistent volume claim (PVC) with a shared ReadWriteMany (RWX) access mode to be live migrated during node maintenance.
The Node Maintenance Operator watches for new or deleted NodeMaintenance CRs. When a new NodeMaintenance CR is detected, no new workloads are scheduled and the node is cordoned off from the rest of the cluster. All pods that can be evicted are evicted from the node. When a NodeMaintenance CR is deleted, the node that is referenced in the CR is made available for new workloads.
Note: Using a NodeMaintenance CR for node maintenance tasks achieves the same results as the oc adm cordon and oc adm drain commands.
11.1.1. Eviction strategies
Placing a node into maintenance marks the node as unschedulable and drains all the VMs and pods from it.
You can configure eviction strategies for virtual machines (VMs) or for the cluster.
- VM eviction strategy
The VM LiveMigrate eviction strategy ensures that a virtual machine instance (VMI) is not interrupted if the node is placed into maintenance or drained. VMIs with this eviction strategy will be live migrated to another node.
You can configure eviction strategies for virtual machines (VMs) by using the web console or the command line.
Important: The default eviction strategy is LiveMigrate. A non-migratable VM with a LiveMigrate eviction strategy might prevent nodes from draining or block an infrastructure upgrade because the VM is not evicted from the node. This situation causes a migration to remain in a Pending or Scheduling state unless you shut down the VM manually.
You must set the eviction strategy of non-migratable VMs to LiveMigrateIfPossible, which does not block an upgrade, or to None, for VMs that should not be migrated.
- Cluster eviction strategy
- You can configure an eviction strategy for the cluster to prioritize workload continuity or infrastructure upgrade.
Configuring a cluster eviction strategy is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
| Eviction strategy | Description | Interrupts workflow | Blocks upgrades |
|---|---|---|---|
| LiveMigrate 1 | Prioritizes workload continuity over upgrades. | No | Yes 2 |
| LiveMigrateIfPossible | Prioritizes upgrades over workload continuity to ensure that the environment is updated. | Yes | No |
| None 3 | Shuts down VMs with no eviction strategy. | Yes | No |
1. Default eviction strategy for multi-node clusters.
2. If a VM blocks an upgrade, you must shut down the VM manually.
3. Default eviction strategy for single-node OpenShift.
11.1.1.1. Configuring a VM eviction strategy using the command line
You can configure an eviction strategy for a virtual machine (VM) by using the command line.
The default eviction strategy is LiveMigrate. A non-migratable VM with a LiveMigrate eviction strategy might prevent nodes from draining or block an infrastructure upgrade because the VM is not evicted from the node. This situation causes a migration to remain in a Pending or Scheduling state unless you shut down the VM manually.
You must set the eviction strategy of non-migratable VMs to LiveMigrateIfPossible, which does not block an upgrade, or to None, for VMs that should not be migrated.
Procedure
Edit the VirtualMachine resource by running the following command:
$ oc edit vm <vm_name> -n <namespace>
Example eviction strategy
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: <vm_name>
spec:
  template:
    spec:
      evictionStrategy: LiveMigrateIfPossible 1
# ...
- 1
- Specify the eviction strategy. The default value is LiveMigrate.
Restart the VM to apply the changes:
$ virtctl restart <vm_name> -n <namespace>
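As an alternative to opening an editor, you can apply the same change with a merge patch. This is a sketch rather than the documented procedure; it sets the same spec.template.spec.evictionStrategy field shown above:
$ oc patch vm <vm_name> -n <namespace> --type merge -p '{"spec":{"template":{"spec":{"evictionStrategy":"LiveMigrateIfPossible"}}}}'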
11.1.1.2. Configuring a cluster eviction strategy by using the command line
You can configure an eviction strategy for a cluster by using the command line.
Configuring a cluster eviction strategy is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Procedure
Edit the hyperconverged resource by running the following command:
$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
Set the cluster eviction strategy as shown in the following example:
Example cluster eviction strategy
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
spec:
  evictionStrategy: LiveMigrate
# ...
11.1.2. Run strategies
A virtual machine (VM) configured with spec.running: true is always restarted. The spec.runStrategy key provides greater flexibility for determining how a VM behaves under certain conditions.
Important: The spec.runStrategy and spec.running keys are mutually exclusive. Only one of them can be used.
A VM configuration with both keys is invalid.
11.1.2.1. Run strategies
The spec.runStrategy key has four possible values:
Always
- The virtual machine instance (VMI) is always present when a virtual machine (VM) is created on another node. A new VMI is created if the original stops for any reason. This is the same behavior as running: true.
RerunOnFailure
- The VMI is re-created on another node if the previous instance fails. The instance is not re-created if the VM stops successfully, such as when it is shut down.
Manual
- You control the VMI state manually with the start, stop, and restart virtctl client commands. The VM is not automatically restarted.
Halted
- No VMI is present when a VM is created. This is the same behavior as running: false.
Different combinations of the virtctl start, stop, and restart commands affect the run strategy.
The following table describes a VM’s transition between states. The first column shows the VM’s initial run strategy. The remaining columns show a virtctl command and the new run strategy after that command is run.
| Initial run strategy | Start | Stop | Restart |
|---|---|---|---|
| Always | - | Halted | Always |
| RerunOnFailure | RerunOnFailure | RerunOnFailure | RerunOnFailure |
| Manual | Manual | Manual | Manual |
| Halted | Always | - | - |
If a node in a cluster installed by using installer-provisioned infrastructure fails the machine health check and is unavailable, VMs with runStrategy: Always or runStrategy: RerunOnFailure are rescheduled on a new node.
11.1.2.2. Configuring a VM run strategy by using the command line
You can configure a run strategy for a virtual machine (VM) by using the command line.
Important: The spec.runStrategy and spec.running keys are mutually exclusive. A VM configuration that contains values for both keys is invalid.
Procedure
Edit the VirtualMachine resource by running the following command:
$ oc edit vm <vm_name> -n <namespace>
Example run strategy
apiVersion: kubevirt.io/v1
kind: VirtualMachine
spec:
  runStrategy: Always
# ...
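To see how the virtctl commands interact with the run strategy table above, consider this hedged example for a VM that starts with runStrategy: Always; the comments describe the expected transitions from that table:
$ virtctl stop <vm_name> -n <namespace>   # run strategy becomes Halted
$ virtctl start <vm_name> -n <namespace>  # run strategy becomes Always again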
11.1.3. Maintaining bare metal nodes
When you deploy OpenShift Container Platform on bare metal infrastructure, there are additional considerations that must be taken into account compared to deploying on cloud infrastructure. Unlike in cloud environments where the cluster nodes are considered ephemeral, re-provisioning a bare metal node requires significantly more time and effort for maintenance tasks.
When a bare metal node fails, for example, if a fatal kernel error happens or a NIC hardware failure occurs, workloads on the failed node need to be restarted elsewhere on the cluster while the problem node is repaired or replaced. Node maintenance mode allows cluster administrators to gracefully power down nodes, moving workloads to other parts of the cluster and ensuring workloads do not get interrupted. Detailed progress and node status information is provided during maintenance.
11.2. Managing node labeling for obsolete CPU models
You can schedule a virtual machine (VM) on a node as long as the VM CPU model and policy are supported by the node.
11.2.1. About node labeling for obsolete CPU models
The OpenShift Virtualization Operator uses a predefined list of obsolete CPU models to ensure that a node supports only valid CPU models for scheduled VMs.
By default, the following CPU models are eliminated from the list of labels generated for the node:
Example 11.1. Obsolete CPU models
"486"
Conroe
athlon
core2duo
coreduo
kvm32
kvm64
n270
pentium
pentium2
pentium3
pentiumpro
phenom
qemu32
qemu64
This predefined list is not visible in the HyperConverged CR. You can add more CPU models to this list by editing the spec.obsoleteCPUs.cpuModels field of the HyperConverged CR.
11.2.2. About node labeling for CPU features
Through the process of iteration, the base CPU features in the minimum CPU model are eliminated from the list of labels generated for the node.
For example:
- An environment might have two supported CPU models: Penryn and Haswell.
- If Penryn is specified as the CPU model for minCPU, each base CPU feature for Penryn is compared to the list of CPU features supported by Haswell.
Example 11.2. CPU features supported by Penryn
apic clflush cmov cx16 cx8 de fpu fxsr lahf_lm lm mca mce mmx msr mtrr nx pae pat pge pni pse pse36 sep sse sse2 sse4.1 ssse3 syscall tsc
Example 11.3. CPU features supported by Haswell
aes apic avx avx2 bmi1 bmi2 clflush cmov cx16 cx8 de erms fma fpu fsgsbase fxsr hle invpcid lahf_lm lm mca mce mmx movbe msr mtrr nx pae pat pcid pclmuldq pge pni popcnt pse pse36 rdtscp rtm sep smep sse sse2 sse4.1 sse4.2 ssse3 syscall tsc tsc-deadline x2apic xsave
If both Penryn and Haswell support a specific CPU feature, a label is not created for that feature. Labels are generated for CPU features that are supported only by Haswell and not by Penryn.
Example 11.4. Node labels created for CPU features after iteration
aes avx avx2 bmi1 bmi2 erms fma fsgsbase hle invpcid movbe pcid pclmuldq popcnt rdtscp rtm sse4.2 tsc-deadline x2apic xsave
11.2.3. Configuring obsolete CPU models
You can configure a list of obsolete CPU models by editing the HyperConverged custom resource.
Procedure
Edit the HyperConverged custom resource, specifying the obsolete CPU models in the obsoleteCPUs array. For example:
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  obsoleteCPUs:
    cpuModels: 1
      - "<obsolete_cpu_1>"
      - "<obsolete_cpu_2>"
    minCPUModel: "<minimum_cpu_model>" 2
- 1
- Replace the example values in the cpuModels array with obsolete CPU models. Any value that you specify is added to a predefined list of obsolete CPU models. The predefined list is not visible in the CR.
- 2
- Replace this value with the minimum CPU model that you want to use for basic CPU features. If you do not specify a value, Penryn is used by default.
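To check the effect of your changes, you can inspect the CPU-related labels that are applied to a node. This is a sketch; the exact label keys depend on your OpenShift Virtualization version, but the CPU model labels contain the string cpu-model:
$ oc get node <node_name> --show-labels | tr ',' '\n' | grep cpu-model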
11.3. Preventing node reconciliation
Use the skip-node annotation to prevent the node-labeller from reconciling a node.
11.3.1. Using skip-node annotation
If you want the node-labeller to skip a node, annotate that node by using the OpenShift CLI (oc).
Prerequisites
- You have installed the OpenShift CLI (oc).
Procedure
Annotate the node that you want to skip by running the following command:
$ oc annotate node <node_name> node-labeller.kubevirt.io/skip-node=true
Replace <node_name> with the name of the relevant node to skip.
Reconciliation resumes on the next cycle after the node annotation is removed or set to false.
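To let the node-labeller reconcile the node again, you can remove the annotation or set it to false. This is a minimal sketch; the trailing hyphen in the first command removes the annotation:
$ oc annotate node <node_name> node-labeller.kubevirt.io/skip-node-
$ oc annotate node <node_name> --overwrite node-labeller.kubevirt.io/skip-node=false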
11.4. Deleting a failed node to trigger virtual machine failover
If a node fails and machine health checks are not deployed on your cluster, virtual machines (VMs) with runStrategy: Always are not automatically relocated to healthy nodes. To trigger VM failover, you must manually delete the Node object.
If you installed your cluster by using installer-provisioned infrastructure and you properly configured machine health checks, the following events occur:
- Failed nodes are automatically recycled.
- Virtual machines with runStrategy set to Always or RerunOnFailure are automatically scheduled on healthy nodes.
11.4.1. Prerequisites
- A node where a virtual machine was running has the NotReady condition.
- The virtual machine that was running on the failed node has runStrategy set to Always.
- You have installed the OpenShift CLI (oc).
11.4.2. Deleting nodes from a bare metal cluster
When you delete a node using the CLI, the node object is deleted in Kubernetes, but the pods that exist on the node are not deleted. Any bare pods not backed by a replication controller become inaccessible to OpenShift Container Platform. Pods backed by replication controllers are rescheduled to other available nodes. You must delete local manifest pods.
Procedure
Delete a node from an OpenShift Container Platform cluster running on bare metal by completing the following steps:
Mark the node as unschedulable:
$ oc adm cordon <node_name>
Drain all pods on the node:
$ oc adm drain <node_name> --force=true
This step might fail if the node is offline or unresponsive. Even if the node does not respond, it might still be running a workload that writes to shared storage. To avoid data corruption, power down the physical hardware before you proceed.
Delete the node from the cluster:
$ oc delete node <node_name>
Although the node object is now deleted from the cluster, it can still rejoin the cluster after reboot or if the kubelet service is restarted. To permanently delete the node and all its data, you must decommission the node.
- If you powered down the physical hardware, turn it back on so that the node can rejoin the cluster.
11.4.3. Verifying virtual machine failover
After all resources are terminated on the unhealthy node, a new virtual machine instance (VMI) is automatically created on a healthy node for each relocated VM. To confirm that the VMI was created, view all VMIs by using the oc CLI.
11.4.3.1. Listing all virtual machine instances using the CLI
You can list all virtual machine instances (VMIs) in your cluster, including standalone VMIs and those owned by virtual machines, by using the oc CLI.
Procedure
List all VMIs by running the following command:
$ oc get vmis -A
Chapter 12. Monitoring
12.1. Monitoring overview
You can monitor the health of your cluster and virtual machines (VMs) with the following tools:
- Monitoring OpenShift Virtualization VMs health status
- View the overall health of your OpenShift Virtualization environment in the web console by navigating to the Home → Overview page in the OpenShift Container Platform web console. The Status card displays the overall health of OpenShift Virtualization based on the alerts and conditions.
- OpenShift Container Platform cluster checkup framework
Run automated tests on your cluster with the OpenShift Container Platform cluster checkup framework to check the following conditions:
- Network connectivity and latency between two VMs attached to a secondary network interface
- VM running a Data Plane Development Kit (DPDK) workload with zero packet loss
- Prometheus queries for virtual resources
- Query vCPU, network, storage, and guest memory swapping usage and live migration progress.
- VM custom metrics
- Configure the node-exporter service to expose internal VM metrics and processes.
- Configure readiness, liveness, and guest agent ping probes and a watchdog for VMs.
- Runbooks
- Diagnose and resolve issues that trigger OpenShift Virtualization alerts in the OpenShift Container Platform web console.
12.2. OpenShift Virtualization cluster checkup framework
A checkup is an automated test workload that allows you to verify if a specific cluster functionality works as expected. The cluster checkup framework uses native Kubernetes resources to configure and execute the checkup.
The OpenShift Virtualization cluster checkup framework is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
As a developer or cluster administrator, you can use predefined checkups to improve cluster maintainability, troubleshoot unexpected behavior, minimize errors, and save time. You can review the results of the checkup and share them with experts for further analysis. Vendors can write and publish checkups for features or services that they provide and verify that their customer environments are configured correctly.
12.2.1. Running predefined latency checkups
You can use a latency checkup to verify network connectivity and measure latency between two virtual machines (VMs) that are attached to a secondary network interface. The predefined latency checkup uses the ping utility.
Before you run a latency checkup, you must first create a bridge interface on the cluster nodes to connect the VM’s secondary interface to any interface on the node. If you do not create a bridge interface, the VMs do not start and the job fails.
Running a predefined checkup in an existing namespace involves setting up a service account for the checkup and creating the Role and RoleBinding objects for the service account.
You must always:
- Verify that the checkup image is from a trustworthy source before applying it.
- Review the checkup permissions before creating the Role and RoleBinding objects.
12.2.1.1. Running a latency checkup
You run a latency checkup using the CLI by performing the following steps:
- Create a service account, roles, and rolebindings to provide cluster access permissions to the latency checkup.
- Create a config map to provide the input to run the checkup and to store the results.
- Create a job to run the checkup.
- Review the results in the config map.
- Optional: To rerun the checkup, delete the existing config map and job and then create a new config map and job.
- When you are finished, delete the latency checkup resources.
Prerequisites
-
You installed the OpenShift CLI ().
oc - The cluster has at least two worker nodes.
- You configured a network attachment definition for a namespace.
Procedure
Create a
,ServiceAccount, andRolemanifest for the latency checkup:RoleBindingExample 12.1. Example role manifest file
--- apiVersion: v1 kind: ServiceAccount metadata: name: vm-latency-checkup-sa --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: kubevirt-vm-latency-checker rules: - apiGroups: ["kubevirt.io"] resources: ["virtualmachineinstances"] verbs: ["get", "create", "delete"] - apiGroups: ["subresources.kubevirt.io"] resources: ["virtualmachineinstances/console"] verbs: ["get"] - apiGroups: ["k8s.cni.cncf.io"] resources: ["network-attachment-definitions"] verbs: ["get"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: kubevirt-vm-latency-checker subjects: - kind: ServiceAccount name: vm-latency-checkup-sa roleRef: kind: Role name: kubevirt-vm-latency-checker apiGroup: rbac.authorization.k8s.io --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: kiagnose-configmap-access rules: - apiGroups: [ "" ] resources: [ "configmaps" ] verbs: ["get", "update"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: kiagnose-configmap-access subjects: - kind: ServiceAccount name: vm-latency-checkup-sa roleRef: kind: Role name: kiagnose-configmap-access apiGroup: rbac.authorization.k8s.ioApply the
,ServiceAccount, andRolemanifest:RoleBinding$ oc apply -n <target_namespace> -f <latency_sa_roles_rolebinding>.yamlwhere:
<target_namespace>-
Specifies the namespace where the checkup is to be run. This must be an existing namespace where the
NetworkAttachmentDefinitionobject resides.
Create a
manifest that contains the input parameters for the checkup:ConfigMapExample input config map
apiVersion: v1 kind: ConfigMap metadata: name: kubevirt-vm-latency-checkup-config data: spec.timeout: 5m spec.param.networkAttachmentDefinitionNamespace: <target_namespace> spec.param.networkAttachmentDefinitionName: "blue-network" spec.param.maxDesiredLatencyMilliseconds: "10" spec.param.sampleDurationSeconds: "5" spec.param.sourceNode: "worker1" spec.param.targetNode: "worker2"where:
data.spec.param.networkAttachmentDefinitionName-
Specifies the name of the
NetworkAttachmentDefinitionobject. data.spec.param.maxDesiredLatencyMilliseconds- Optional: Specifies the maximum desired latency, in milliseconds, between the virtual machines. If the measured latency exceeds this value, the checkup fails.
data.spec.param.sampleDurationSeconds- Optional: Specifies the duration of the latency check, in seconds.
data.spec.param.sourceNode-
Optional: When specified, latency is measured from this node to the target node. If the source node is specified, the
spec.param.targetNodefield cannot be empty. data.spec.param.targetNode- Optional: When specified, latency is measured from the source node to this node.
Apply the config map manifest in the target namespace:
$ oc apply -n <target_namespace> -f <latency_config_map>.yamlCreate a
manifest to run the checkup:JobExample job manifest
apiVersion: batch/v1 kind: Job metadata: name: kubevirt-vm-latency-checkup spec: backoffLimit: 0 template: spec: serviceAccountName: vm-latency-checkup-sa restartPolicy: Never containers: - name: vm-latency-checkup image: registry.redhat.io/container-native-virtualization/vm-network-latency-checkup-rhel9:v4.14.0 securityContext: allowPrivilegeEscalation: false capabilities: drop: ["ALL"] runAsNonRoot: true seccompProfile: type: "RuntimeDefault" env: - name: CONFIGMAP_NAMESPACE value: <target_namespace> - name: CONFIGMAP_NAME value: kubevirt-vm-latency-checkup-config - name: POD_UID valueFrom: fieldRef: fieldPath: metadata.uidApply the
manifest:Job$ oc apply -n <target_namespace> -f <latency_job>.yamlWait for the job to complete:
$ oc wait job kubevirt-vm-latency-checkup -n <target_namespace> --for condition=complete --timeout 6mReview the results of the latency checkup by running the following command. If the maximum measured latency is greater than the value of the
attribute, the checkup fails and returns an error.spec.param.maxDesiredLatencyMilliseconds$ oc get configmap kubevirt-vm-latency-checkup-config -n <target_namespace> -o yamlExample output config map (success)
apiVersion: v1 kind: ConfigMap metadata: name: kubevirt-vm-latency-checkup-config namespace: <target_namespace> data: spec.timeout: 5m spec.param.networkAttachmentDefinitionNamespace: <target_namespace> spec.param.networkAttachmentDefinitionName: "blue-network" spec.param.maxDesiredLatencyMilliseconds: "10" spec.param.sampleDurationSeconds: "5" spec.param.sourceNode: "worker1" spec.param.targetNode: "worker2" status.succeeded: "true" status.failureReason: "" status.completionTimestamp: "2022-01-01T09:00:00Z" status.startTimestamp: "2022-01-01T09:00:07Z" status.result.avgLatencyNanoSec: "177000" status.result.maxLatencyNanoSec: "244000" status.result.measurementDurationSec: "5" status.result.minLatencyNanoSec: "135000" status.result.sourceNode: "worker1" status.result.targetNode: "worker2"where:
data.status.result.maxLatencyNanoSec- Specifies the maximum measured latency in nanoseconds.
Optional: To view the detailed job log in case of checkup failure, use the following command:
$ oc logs job.batch/kubevirt-vm-latency-checkup -n <target_namespace>Delete the job and config map that you previously created by running the following commands:
$ oc delete job -n <target_namespace> kubevirt-vm-latency-checkup$ oc delete config-map -n <target_namespace> kubevirt-vm-latency-checkup-configOptional: If you do not plan to run another checkup, delete the roles manifest:
$ oc delete -f <latency_sa_roles_rolebinding>.yaml
12.2.2. Running predefined DPDK checkups
You can use a DPDK checkup to verify that a node can run a VM with a Data Plane Development Kit (DPDK) workload with zero packet loss.
12.2.2.1. DPDK checkup
Use a predefined checkup to verify that your OpenShift Container Platform cluster node can run a virtual machine (VM) with a Data Plane Development Kit (DPDK) workload with zero packet loss. The DPDK checkup runs traffic between a traffic generator and a VM running a test DPDK application.
You run a DPDK checkup by performing the following steps:
- Create a service account, role, and role bindings for the DPDK checkup.
- Create a config map to provide the input to run the checkup and to store the results.
- Create a job to run the checkup.
- Review the results in the config map.
- Optional: To rerun the checkup, delete the existing config map and job and then create a new config map and job.
- When you are finished, delete the DPDK checkup resources.
Prerequisites
-
You have installed the OpenShift CLI ().
oc - The cluster is configured to run DPDK applications.
- The project is configured to run DPDK applications.
Procedure
Create a
,ServiceAccount, andRolemanifest for the DPDK checkup:RoleBindingExample 12.2. Example service account, role, and rolebinding manifest file
--- apiVersion: v1 kind: ServiceAccount metadata: name: dpdk-checkup-sa --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: kiagnose-configmap-access rules: - apiGroups: [ "" ] resources: [ "configmaps" ] verbs: [ "get", "update" ] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: kiagnose-configmap-access subjects: - kind: ServiceAccount name: dpdk-checkup-sa roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: kiagnose-configmap-access --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: kubevirt-dpdk-checker rules: - apiGroups: [ "kubevirt.io" ] resources: [ "virtualmachineinstances" ] verbs: [ "create", "get", "delete" ] - apiGroups: [ "subresources.kubevirt.io" ] resources: [ "virtualmachineinstances/console" ] verbs: [ "get" ] - apiGroups: [ "" ] resources: [ "configmaps" ] verbs: [ "create", "delete" ] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: kubevirt-dpdk-checker subjects: - kind: ServiceAccount name: dpdk-checkup-sa roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: kubevirt-dpdk-checkerApply the
,ServiceAccount, andRolemanifest:RoleBinding$ oc apply -n <target_namespace> -f <dpdk_sa_roles_rolebinding>.yamlCreate a
manifest that contains the input parameters for the checkup:ConfigMapExample input config map
apiVersion: v1 kind: ConfigMap metadata: name: dpdk-checkup-config data: spec.timeout: 10m spec.param.networkAttachmentDefinitionName: <network_name>1 spec.param.trafficGenContainerDiskImage: "quay.io/kiagnose/kubevirt-dpdk-checkup-traffic-gen:v0.2.02 spec.param.vmUnderTestContainerDiskImage: "quay.io/kiagnose/kubevirt-dpdk-checkup-vm:v0.2.0"3 - 1
- The name of the
NetworkAttachmentDefinitionobject. - 2
- The container disk image for the traffic generator. In this example, the image is pulled from the upstream Project Quay Container Registry.
- 3
- The container disk image for the VM under test. In this example, the image is pulled from the upstream Project Quay Container Registry.
Apply the
manifest in the target namespace:ConfigMap$ oc apply -n <target_namespace> -f <dpdk_config_map>.yamlCreate a
manifest to run the checkup:JobExample job manifest
apiVersion: batch/v1 kind: Job metadata: name: dpdk-checkup spec: backoffLimit: 0 template: spec: serviceAccountName: dpdk-checkup-sa restartPolicy: Never containers: - name: dpdk-checkup image: registry.redhat.io/container-native-virtualization/kubevirt-dpdk-checkup-rhel9:v4.14.0 imagePullPolicy: Always securityContext: allowPrivilegeEscalation: false capabilities: drop: ["ALL"] runAsNonRoot: true seccompProfile: type: "RuntimeDefault" env: - name: CONFIGMAP_NAMESPACE value: <target-namespace> - name: CONFIGMAP_NAME value: dpdk-checkup-config - name: POD_UID valueFrom: fieldRef: fieldPath: metadata.uidApply the
manifest:Job$ oc apply -n <target_namespace> -f <dpdk_job>.yamlWait for the job to complete:
$ oc wait job dpdk-checkup -n <target_namespace> --for condition=complete --timeout 10mReview the results of the checkup by running the following command:
$ oc get configmap dpdk-checkup-config -n <target_namespace> -o yamlExample output config map (success)
apiVersion: v1 kind: ConfigMap metadata: name: dpdk-checkup-config data: spec.timeout: 10m spec.param.NetworkAttachmentDefinitionName: "dpdk-network-1" spec.param.trafficGenContainerDiskImage: "quay.io/kiagnose/kubevirt-dpdk-checkup-traffic-gen:v0.2.0" spec.param.vmUnderTestContainerDiskImage: "quay.io/kiagnose/kubevirt-dpdk-checkup-vm:v0.2.0" status.succeeded: "true"1 status.failureReason: ""2 status.startTimestamp: "2023-07-31T13:14:38Z"3 status.completionTimestamp: "2023-07-31T13:19:41Z"4 status.result.trafficGenSentPackets: "480000000"5 status.result.trafficGenOutputErrorPackets: "0"6 status.result.trafficGenInputErrorPackets: "0"7 status.result.trafficGenActualNodeName: worker-dpdk18 status.result.vmUnderTestActualNodeName: worker-dpdk29 status.result.vmUnderTestReceivedPackets: "480000000"10 status.result.vmUnderTestRxDroppedPackets: "0"11 status.result.vmUnderTestTxDroppedPackets: "0"12 - 1
- Specifies if the checkup is successful (
true) or not (false). - 2
- The reason for failure if the checkup fails.
- 3
- The time when the checkup started, in RFC 3339 time format.
- 4
- The time when the checkup has completed, in RFC 3339 time format.
- 5
- The number of packets sent from the traffic generator.
- 6
- The number of error packets sent from the traffic generator.
- 7
- The number of error packets received by the traffic generator.
- 8
- The node on which the traffic generator VM was scheduled.
- 9
- The node on which the VM under test was scheduled.
- 10
- The number of packets received on the VM under test.
- 11
- The ingress traffic packets that were dropped by the DPDK application.
- 12
- The egress traffic packets that were dropped from the DPDK application.
Delete the job and config map that you previously created by running the following commands:
$ oc delete job -n <target_namespace> dpdk-checkup$ oc delete config-map -n <target_namespace> dpdk-checkup-configOptional: If you do not plan to run another checkup, delete the
,ServiceAccount, andRolemanifest:RoleBinding$ oc delete -f <dpdk_sa_roles_rolebinding>.yaml
12.2.2.1.1. DPDK checkup config map parameters
The following table shows the mandatory and optional parameters that you can set in the
data
ConfigMap
| Parameter | Description | Is Mandatory |
|---|---|---|
|
| The time, in minutes, before the checkup fails. | True |
|
| The name of the
| True |
|
| The container disk image for the traffic generator. The default value is
| False |
|
| The node on which the traffic generator VM is to be scheduled. The node should be configured to allow DPDK traffic. | False |
|
| The number of packets per second, in kilo (k) or million(m). The default value is 8m. | False |
|
| The container disk image for the VM under test. The default value is
| False |
|
| The node on which the VM under test is to be scheduled. The node should be configured to allow DPDK traffic. | False |
|
| The duration, in minutes, for which the traffic generator runs. The default value is 5 minutes. | False |
|
| The maximum bandwidth of the SR-IOV NIC. The default value is 10Gbps. | False |
|
| When set to
| False |
12.2.2.1.2. Building a container disk image for RHEL virtual machines
You can build a custom Red Hat Enterprise Linux (RHEL) 8 OS image in
qcow2
spec.param.vmContainerDiskImage
To build a container disk image, you must create an image builder virtual machine (VM). The image builder VM is a RHEL 8 VM that can be used to build custom RHEL images.
Prerequisites
-
The image builder VM must run RHEL 8.7 and must have a minimum of 2 CPU cores, 4 GiB RAM, and 20 GB of free space in the directory.
/var -
You have installed the image builder tool and its CLI () on the VM.
composer-cli You have installed the
tool:virt-customize# dnf install libguestfs-tools-
You have installed the Podman CLI tool ().
podman
Procedure
Verify that you can build a RHEL 8.7 image:
# composer-cli distros listNoteTo run the
commands as non-root, add your user to thecomposer-cliorweldrgroups:root# usermod -a -G weldr user$ newgrp weldrEnter the following command to create an image blueprint file in TOML format that contains the packages to be installed, kernel customizations, and the services to be disabled during boot time:
$ cat << EOF > dpdk-vm.toml name = "dpdk_image" description = "Image to use with the DPDK checkup" version = "0.0.1" distro = "rhel-87" [[packages]] name = "dpdk" [[packages]] name = "dpdk-tools" [[packages]] name = "driverctl" [[packages]] name = "tuned-profiles-cpu-partitioning" [customizations.kernel] append = "default_hugepagesz=1GB hugepagesz=1G hugepages=8 isolcpus=2-7" [customizations.services] disabled = ["NetworkManager-wait-online", "sshd"] EOFPush the blueprint file to the image builder tool by running the following command:
# composer-cli blueprints push dpdk-vm.tomlGenerate the system image by specifying the blueprint name and output file format. The Universally Unique Identifier (UUID) of the image is displayed when you start the compose process.
# composer-cli compose start dpdk_image qcow2Wait for the compose process to complete. The compose status must show
before you can continue to the next step.FINISHED# composer-cli compose statusEnter the following command to download the
image file by specifying its UUID:qcow2# composer-cli compose image <UUID>Create the customization scripts by running the following commands:
$ cat <<EOF >customize-vm echo isolated_cores=2-7 > /etc/tuned/cpu-partitioning-variables.conf tuned-adm profile cpu-partitioning echo "options vfio enable_unsafe_noiommu_mode=1" > /etc/modprobe.d/vfio-noiommu.conf EOF$ cat <<EOF >first-boot driverctl set-override 0000:06:00.0 vfio-pci driverctl set-override 0000:07:00.0 vfio-pci mkdir /mnt/huge mount /mnt/huge --source nodev -t hugetlbfs -o pagesize=1GB EOFUse the
tool to customize the image generated by the image builder tool:virt-customize$ virt-customize -a <UUID>.qcow2 --run=customize-vm --firstboot=first-boot --selinux-relabelTo create a Dockerfile that contains all the commands to build the container disk image, enter the following command:
$ cat << EOF > Dockerfile FROM scratch COPY <uuid>-disk.qcow2 /disk/ EOFwhere:
- <uuid>-disk.qcow2
-
Specifies the name of the custom image in
qcow2format.
Build and tag the container by running the following command:
$ podman build . -t dpdk-rhel:latestPush the container disk image to a registry that is accessible from your cluster by running the following command:
$ podman push dpdk-rhel:latest-
Provide a link to the container disk image in the attribute in the DPDK checkup config map.
spec.param.vmContainerDiskImage
12.3. Prometheus queries for virtual resources
OpenShift Virtualization provides metrics that you can use to monitor the consumption of cluster infrastructure resources, including vCPU, network, storage, and guest memory swapping. You can also use metrics to query live migration status.
12.3.1. Prerequisites
- To use the vCPU metric, the schedstats=enable kernel argument must be applied to the MachineConfig object. This kernel argument enables scheduler statistics used for debugging and performance tuning and adds a minor additional load to the scheduler. For more information, see Adding kernel arguments to nodes. A minimal example is sketched after this list.
- For guest memory swapping queries to return data, memory swapping must be enabled on the virtual guests.
12.3.2. Querying metrics for all projects with the OpenShift Container Platform web console
You can use the OpenShift Container Platform metrics query browser to run Prometheus Query Language (PromQL) queries to examine metrics visualized on a plot. This functionality provides information about the state of a cluster and any user-defined workloads that you are monitoring.
As a cluster administrator or as a user with view permissions for all projects, you can access metrics for all default OpenShift Container Platform and user-defined projects in the Metrics UI.
Prerequisites
-
You have access to the cluster as a user with the cluster role or with view permissions for all projects.
cluster-admin -
You have installed the OpenShift CLI ().
oc
Procedure
- From the Administrator perspective in the OpenShift Container Platform web console, select Observe → Metrics.
To add one or more queries, do any of the following:
Expand Option Description Create a custom query.
Add your Prometheus Query Language (PromQL) query to the Expression field.
As you type a PromQL expression, autocomplete suggestions appear in a drop-down list. These suggestions include functions, metrics, labels, and time tokens. You can use the keyboard arrows to select one of these suggested items and then press Enter to add the item to your expression. You can also move your mouse pointer over a suggested item to view a brief description of that item.
Add multiple queries.
Select Add query.
Duplicate an existing query.
Select the Options menu
next to the query, then choose Duplicate query.
Disable a query from being run.
Select the Options menu
next to the query and choose Disable query.
To run queries that you created, select Run queries. The metrics from the queries are visualized on the plot. If a query is invalid, the UI shows an error message.
NoteQueries that operate on large amounts of data might time out or overload the browser when drawing time series graphs. To avoid this, select Hide graph and calibrate your query using only the metrics table. Then, after finding a feasible query, enable the plot to draw the graphs.
NoteBy default, the query table shows an expanded view that lists every metric and its current value. You can select ˅ to minimize the expanded view for a query.
- Optional: Save the page URL to use this set of queries again in the future.
Explore the visualized metrics. Initially, all metrics from all enabled queries are shown on the plot. You can select which metrics are shown by doing any of the following:
Expand Option Description Hide all metrics from a query.
Click the Options menu
for the query and click Hide all series.
Hide a specific metric.
Go to the query table and click the colored square near the metric name.
Zoom into the plot and change the time range.
Either:
- Visually select the time range by clicking and dragging on the plot horizontally.
- Use the menu in the left upper corner to select the time range.
Reset the time range.
Select Reset zoom.
Display outputs for all queries at a specific point in time.
Hold the mouse cursor on the plot at that point. The query outputs will appear in a pop-up box.
Hide the plot.
Select Hide graph.
12.3.3. Querying metrics for user-defined projects with the OpenShift Container Platform web console
You can use the OpenShift Container Platform metrics query browser to run Prometheus Query Language (PromQL) queries to examine metrics visualized on a plot. This functionality provides information about any user-defined workloads that you are monitoring.
As a developer, you must specify a project name when querying metrics. You must have the required privileges to view metrics for the selected project.
In the Developer perspective, the Metrics UI includes some predefined CPU, memory, bandwidth, and network packet queries for the selected project. You can also run custom Prometheus Query Language (PromQL) queries for CPU, memory, bandwidth, network packet and application metrics for the project.
Developers can only use the Developer perspective and not the Administrator perspective. As a developer, you can only query metrics for one project at a time.
Prerequisites
- You have access to the cluster as a developer or as a user with view permissions for the project that you are viewing metrics for.
- You have enabled monitoring for user-defined projects.
- You have deployed a service in a user-defined project.
-
You have created a custom resource definition (CRD) for the service to define how the service is monitored.
ServiceMonitor
Procedure
- From the Developer perspective in the OpenShift Container Platform web console, select Observe → Metrics.
- Select the project that you want to view metrics for from the Project: list.
Select a query from the Select query list, or create a custom PromQL query based on the selected query by selecting Show PromQL. The metrics from the queries are visualized on the plot.
NoteIn the Developer perspective, you can only run one query at a time.
Explore the visualized metrics by doing any of the following:
Expand Option Description Zoom into the plot and change the time range.
Either:
- Visually select the time range by clicking and dragging on the plot horizontally.
- Use the menu in the left upper corner to select the time range.
Reset the time range.
Select Reset zoom.
Display outputs for all queries at a specific point in time.
Hold the mouse cursor on the plot at that point. The query outputs appear in a pop-up box.
12.3.4. Virtualization metrics
The following metric descriptions include example Prometheus Query Language (PromQL) queries. These metrics are not an API and might change between versions. For a complete list of virtualization metrics, see KubeVirt components metrics.
The following examples use
topk
12.3.4.1. vCPU metrics
The following query can identify virtual machines that are waiting for Input/Output (I/O):
kubevirt_vmi_vcpu_wait_seconds_total- Returns the wait time (in seconds) on I/O for vCPUs of a virtual machine. Type: Counter.
A value above '0' means that the vCPU wants to run, but the host scheduler cannot run it yet. This inability to run indicates that there is an issue with I/O.
To query the vCPU metric, the schedstats=enable kernel argument must be applied to the MachineConfig object.
Example vCPU wait time query
topk(3, sum by (name, namespace) (rate(kubevirt_vmi_vcpu_wait_seconds[6m]))) > 0
- 1
- This query returns the top 3 VMs waiting for I/O at every given moment over a six-minute time period.
12.3.4.2. Network metrics
The following queries can identify virtual machines that are saturating the network:
kubevirt_vmi_network_receive_bytes_total- Returns the total amount of traffic received (in bytes) on the virtual machine’s network. Type: Counter.
kubevirt_vmi_network_transmit_bytes_total- Returns the total amount of traffic transmitted (in bytes) on the virtual machine’s network. Type: Counter.
Example network traffic query
topk(3, sum by (name, namespace) (rate(kubevirt_vmi_network_receive_bytes_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_network_transmit_bytes_total[6m]))) > 0
- 1
- This query returns the top 3 VMs transmitting the most network traffic at every given moment over a six-minute time period.
12.3.4.3. Storage metrics
12.3.4.3.1. Storage-related traffic
The following queries can identify VMs that are writing large amounts of data:
kubevirt_vmi_storage_read_traffic_bytes_total- Returns the total amount (in bytes) of the virtual machine’s storage-related traffic. Type: Counter.
kubevirt_vmi_storage_write_traffic_bytes_total- Returns the total amount of storage writes (in bytes) of the virtual machine’s storage-related traffic. Type: Counter.
Example storage-related traffic query
topk(3, sum by (name, namespace) (rate(kubevirt_vmi_storage_read_traffic_bytes_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_storage_write_traffic_bytes_total[6m]))) > 0
- 1
- This query returns the top 3 VMs performing the most storage traffic at every given moment over a six-minute time period.
12.3.4.3.2. Storage snapshot data
kubevirt_vmsnapshot_disks_restored_from_source_total- Returns the total number of virtual machine disks restored from the source virtual machine. Type: Gauge.
kubevirt_vmsnapshot_disks_restored_from_source_bytes- Returns the amount of space in bytes restored from the source virtual machine. Type: Gauge.
Examples of storage snapshot data queries
kubevirt_vmsnapshot_disks_restored_from_source_total{vm_name="simple-vm", vm_namespace="default"}
- 1
- This query returns the total number of virtual machine disks restored from the source virtual machine.
kubevirt_vmsnapshot_disks_restored_from_source_bytes{vm_name="simple-vm", vm_namespace="default"}
- 1
- This query returns the amount of space in bytes restored from the source virtual machine.
12.3.4.3.3. I/O performance
The following queries can determine the I/O performance of storage devices:
kubevirt_vmi_storage_iops_read_total- Returns the amount of read I/O operations the virtual machine is performing per second. Type: Counter.
kubevirt_vmi_storage_iops_write_total- Returns the amount of write I/O operations the virtual machine is performing per second. Type: Counter.
Example I/O performance query
topk(3, sum by (name, namespace) (rate(kubevirt_vmi_storage_iops_read_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_storage_iops_write_total[6m]))) > 0
- 1
- This query returns the top 3 VMs performing the most I/O operations per second at every given moment over a six-minute time period.
12.3.4.4. Guest memory swapping metrics
The following queries can identify which swap-enabled guests are performing the most memory swapping:
kubevirt_vmi_memory_swap_in_traffic_bytes_total- Returns the total amount (in bytes) of memory the virtual guest is swapping in. Type: Gauge.
kubevirt_vmi_memory_swap_out_traffic_bytes_total- Returns the total amount (in bytes) of memory the virtual guest is swapping out. Type: Gauge.
Example memory swapping query
topk(3, sum by (name, namespace) (rate(kubevirt_vmi_memory_swap_in_traffic_bytes_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_memory_swap_out_traffic_bytes_total[6m]))) > 0
- 1
- This query returns the top 3 VMs where the guest is performing the most memory swapping at every given moment over a six-minute time period.
Memory swapping indicates that the virtual machine is under memory pressure. Increasing the memory allocation of the virtual machine can mitigate this issue.
12.3.4.5. Live migration metrics
The following metrics can be queried to show live migration status:
kubevirt_migrate_vmi_data_processed_bytes- The amount of guest operating system data that has migrated to the new virtual machine (VM). Type: Gauge.
kubevirt_migrate_vmi_data_remaining_bytes- The amount of guest operating system data that remains to be migrated. Type: Gauge.
kubevirt_migrate_vmi_dirty_memory_rate_bytes- The rate at which memory is becoming dirty in the guest operating system. Dirty memory is data that has been changed but not yet written to disk. Type: Gauge.
kubevirt_migrate_vmi_pending_count- The number of pending migrations. Type: Gauge.
kubevirt_migrate_vmi_scheduling_count- The number of scheduling migrations. Type: Gauge.
kubevirt_migrate_vmi_running_count- The number of running migrations. Type: Gauge.
kubevirt_migrate_vmi_succeeded- The number of successfully completed migrations. Type: Gauge.
kubevirt_migrate_vmi_failed- The number of failed migrations. Type: Gauge.
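For example, the following query is a minimal sketch that combines these metrics in the same style as the earlier storage and memory queries; it assumes the migration metrics carry the same name and namespace labels as the VMI metrics shown above, which may differ in your environment. It lists the VMs whose active migrations still have the most guest data left to transfer.

Example live migration progress query

topk(3, sum by (name, namespace) (kubevirt_migrate_vmi_data_remaining_bytes)) > 0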
12.4. Exposing custom metrics for virtual machines Copiar o linkLink copiado para a área de transferência!
OpenShift Container Platform includes a preconfigured, preinstalled, and self-updating monitoring stack that provides monitoring for core platform components. This monitoring stack is based on the Prometheus monitoring system. Prometheus is a time-series database and a rule evaluation engine for metrics.
In addition to using the OpenShift Container Platform monitoring stack, you can enable monitoring for user-defined projects by using the CLI and query custom metrics that are exposed for virtual machines through the
node-exporter
12.4.1. Configuring the node exporter service Copiar o linkLink copiado para a área de transferência!
The node-exporter agent is deployed on every virtual machine in the cluster from which you want to collect metrics. Configure the node-exporter agent as a service to expose internal metrics and processes that are associated with virtual machines.
Prerequisites
- Install the OpenShift Container Platform CLI (oc).
- Log in to the cluster as a user with cluster-admin privileges.
- Create the cluster-monitoring-config ConfigMap object in the openshift-monitoring project.
- Configure the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project by setting enableUserWorkload to true.
Procedure
Create the Service YAML file. In the following example, the file is called node-exporter-service.yaml.

kind: Service
apiVersion: v1
metadata:
  name: node-exporter-service 1
  namespace: dynamation 2
  labels:
    servicetype: metrics 3
spec:
  ports:
    - name: exmet 4
      protocol: TCP
      port: 9100 5
      targetPort: 9100 6
  type: ClusterIP
  selector:
    monitor: metrics 7

- 1
- The node-exporter service that exposes the metrics from the virtual machines.
- 2
- The namespace where the service is created.
- 3
- The label for the service. The ServiceMonitor resource uses this label to match this service.
- 4
- The name given to the port that exposes metrics on port 9100 for the ClusterIP service.
- 5
- The target port used by node-exporter-service to listen for requests.
- 6
- The TCP port number of the virtual machine that is configured with the monitor label.
- 7
- The label used to match the virtual machine’s pods. In this example, any virtual machine’s pod with the monitor label and a value of metrics is matched.
Create the node-exporter service:
$ oc create -f node-exporter-service.yaml
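As an optional check that is not part of the original procedure, you can confirm that the service selects virtual machine pods once at least one pod carries the monitor: metrics label (configured later in this section) by listing the service endpoints. An empty ENDPOINTS column means that no pods match the selector yet.

$ oc get endpoints node-exporter-service -n dynamation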
12.4.2. Configuring a virtual machine with the node exporter service Copiar o linkLink copiado para a área de transferência!
Download the node-exporter agent onto the virtual machine and run it as a systemd service so that metrics are exposed whenever the virtual machine is running.
Prerequisites
- The pods for the user workload monitoring component are running in the openshift-user-workload-monitoring project.
- Grant the monitoring-edit role to users who need to monitor this user-defined project.
Procedure
- Log on to the virtual machine.
Download the node-exporter file onto the virtual machine. Use the directory path that applies to the version of the node-exporter file.

$ wget https://github.com/prometheus/node_exporter/releases/download/v1.3.1/node_exporter-1.3.1.linux-amd64.tar.gz

Extract the executable and place it in the /usr/bin directory.

$ sudo tar xvf node_exporter-1.3.1.linux-amd64.tar.gz \
    --directory /usr/bin --strip 1 "*/node_exporter"

Create a node_exporter.service file in the /etc/systemd/system directory. This systemd service file runs the node-exporter service when the virtual machine reboots.

[Unit]
Description=Prometheus Metrics Exporter
After=network.target
StartLimitIntervalSec=0

[Service]
Type=simple
Restart=always
RestartSec=1
User=root
ExecStart=/usr/bin/node_exporter

[Install]
WantedBy=multi-user.target

Enable and start the systemd service.

$ sudo systemctl enable node_exporter.service
$ sudo systemctl start node_exporter.service
Verification
Verify that the node-exporter agent is reporting metrics from the virtual machine.
$ curl http://localhost:9100/metrics

Example output

go_gc_duration_seconds{quantile="0"} 1.5244e-05
go_gc_duration_seconds{quantile="0.25"} 3.0449e-05
go_gc_duration_seconds{quantile="0.5"} 3.7913e-05
12.4.3. Creating a custom monitoring label for virtual machines Copiar o linkLink copiado para a área de transferência!
To enable queries to multiple virtual machines from a single service, add a custom label in the virtual machine’s YAML file.
Prerequisites
- Install the OpenShift Container Platform CLI (oc).
- Log in as a user with cluster-admin privileges.
- Access to the web console to stop and restart a virtual machine.
Procedure
Edit the template spec of your virtual machine configuration file. In this example, the label monitor has the value metrics.

spec:
  template:
    metadata:
      labels:
        monitor: metrics

- Stop and restart the virtual machine to create a new pod with the label name given to the monitor label; a minimal command-line sketch of this step follows.
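If you prefer the command line to the web console and have the virtctl client installed, the stop and restart step can be sketched as follows; the placeholders are yours to fill in and this is only one way to perform the documented step.

$ virtctl stop <vm_name> -n <namespace>
$ virtctl start <vm_name> -n <namespace>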
12.4.3.1. Querying the node-exporter service for metrics Copiar o linkLink copiado para a área de transferência!
Metrics are exposed for virtual machines through an HTTP service endpoint under the /metrics path.
Prerequisites
- You have access to the cluster as a user with cluster-admin privileges or the monitoring-edit role.
- You have enabled monitoring for the user-defined project by configuring the node-exporter service.
Procedure
Obtain the HTTP service endpoint by specifying the namespace for the service:
$ oc get service -n <namespace> <node-exporter-service>

To list all available metrics for the node-exporter service, query the metrics resource.

$ curl http://<172.30.226.162:9100>/metrics | grep -vE "^#|^$"

Example output
node_arp_entries{device="eth0"} 1 node_boot_time_seconds 1.643153218e+09 node_context_switches_total 4.4938158e+07 node_cooling_device_cur_state{name="0",type="Processor"} 0 node_cooling_device_max_state{name="0",type="Processor"} 0 node_cpu_guest_seconds_total{cpu="0",mode="nice"} 0 node_cpu_guest_seconds_total{cpu="0",mode="user"} 0 node_cpu_seconds_total{cpu="0",mode="idle"} 1.10586485e+06 node_cpu_seconds_total{cpu="0",mode="iowait"} 37.61 node_cpu_seconds_total{cpu="0",mode="irq"} 233.91 node_cpu_seconds_total{cpu="0",mode="nice"} 551.47 node_cpu_seconds_total{cpu="0",mode="softirq"} 87.3 node_cpu_seconds_total{cpu="0",mode="steal"} 86.12 node_cpu_seconds_total{cpu="0",mode="system"} 464.15 node_cpu_seconds_total{cpu="0",mode="user"} 1075.2 node_disk_discard_time_seconds_total{device="vda"} 0 node_disk_discard_time_seconds_total{device="vdb"} 0 node_disk_discarded_sectors_total{device="vda"} 0 node_disk_discarded_sectors_total{device="vdb"} 0 node_disk_discards_completed_total{device="vda"} 0 node_disk_discards_completed_total{device="vdb"} 0 node_disk_discards_merged_total{device="vda"} 0 node_disk_discards_merged_total{device="vdb"} 0 node_disk_info{device="vda",major="252",minor="0"} 1 node_disk_info{device="vdb",major="252",minor="16"} 1 node_disk_io_now{device="vda"} 0 node_disk_io_now{device="vdb"} 0 node_disk_io_time_seconds_total{device="vda"} 174 node_disk_io_time_seconds_total{device="vdb"} 0.054 node_disk_io_time_weighted_seconds_total{device="vda"} 259.79200000000003 node_disk_io_time_weighted_seconds_total{device="vdb"} 0.039 node_disk_read_bytes_total{device="vda"} 3.71867136e+08 node_disk_read_bytes_total{device="vdb"} 366592 node_disk_read_time_seconds_total{device="vda"} 19.128 node_disk_read_time_seconds_total{device="vdb"} 0.039 node_disk_reads_completed_total{device="vda"} 5619 node_disk_reads_completed_total{device="vdb"} 96 node_disk_reads_merged_total{device="vda"} 5 node_disk_reads_merged_total{device="vdb"} 0 node_disk_write_time_seconds_total{device="vda"} 240.66400000000002 node_disk_write_time_seconds_total{device="vdb"} 0 node_disk_writes_completed_total{device="vda"} 71584 node_disk_writes_completed_total{device="vdb"} 0 node_disk_writes_merged_total{device="vda"} 19761 node_disk_writes_merged_total{device="vdb"} 0 node_disk_written_bytes_total{device="vda"} 2.007924224e+09 node_disk_written_bytes_total{device="vdb"} 0
12.4.4. Creating a ServiceMonitor resource for the node exporter service Copiar o linkLink copiado para a área de transferência!
You can use a Prometheus client library and scrape metrics from the /metrics endpoint by creating a ServiceMonitor resource for the node-exporter service.
Prerequisites
- You have access to the cluster as a user with cluster-admin privileges or the monitoring-edit role.
- You have enabled monitoring for the user-defined project by configuring the node-exporter service.
Procedure
Create a YAML file for the ServiceMonitor resource configuration. In this example, the service monitor matches any service with the label metrics and queries the exmet port every 30 seconds.

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    k8s-app: node-exporter-metrics-monitor
  name: node-exporter-metrics-monitor
  namespace: dynamation
spec:
  endpoints:
  - interval: 30s
    port: exmet
    scheme: http
  selector:
    matchLabels:
      servicetype: metrics

Create the ServiceMonitor configuration for the node-exporter service.

$ oc create -f node-exporter-metrics-monitor.yaml
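As an optional check that is not part of the original procedure, you can confirm that the ServiceMonitor resource exists in the same namespace as the service it selects:

$ oc get servicemonitor -n dynamation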
12.4.4.1. Accessing the node exporter service outside the cluster Copiar o linkLink copiado para a área de transferência!
You can access the node-exporter service outside the cluster and view the exposed metrics.
Prerequisites
- You have access to the cluster as a user with cluster-admin privileges or the monitoring-edit role.
- You have enabled monitoring for the user-defined project by configuring the node-exporter service.
Procedure
Expose the node-exporter service.
$ oc expose service -n <namespace> <node_exporter_service_name>

Obtain the FQDN (Fully Qualified Domain Name) for the route.

$ oc get route -o=custom-columns=NAME:.metadata.name,DNS:.spec.host

Example output

NAME                    DNS
node-exporter-service   node-exporter-service-dynamation.apps.cluster.example.org

Use the curl command to display metrics for the node-exporter service.

$ curl -s http://node-exporter-service-dynamation.apps.cluster.example.org/metrics

Example output

go_gc_duration_seconds{quantile="0"} 1.5382e-05
go_gc_duration_seconds{quantile="0.25"} 3.1163e-05
go_gc_duration_seconds{quantile="0.5"} 3.8546e-05
go_gc_duration_seconds{quantile="0.75"} 4.9139e-05
go_gc_duration_seconds{quantile="1"} 0.000189423
12.5. Virtual machine health checks Copiar o linkLink copiado para a área de transferência!
You can configure virtual machine (VM) health checks by defining readiness and liveness probes in the VirtualMachine resource.
12.5.1. About readiness and liveness probes Copiar o linkLink copiado para a área de transferência!
Use readiness and liveness probes to detect and handle unhealthy virtual machines (VMs). You can include one or more probes in the specification of the VM to ensure that traffic does not reach a VM that is not ready for it and that a new VM is created when a VM becomes unresponsive.
A readiness probe determines whether a VM is ready to accept service requests. If the probe fails, the VM is removed from the list of available endpoints until the VM is ready.
A liveness probe determines whether a VM is responsive. If the probe fails, the VM is deleted and a new VM is created to restore responsiveness.
You can configure readiness and liveness probes by setting the spec.readinessProbe and the spec.livenessProbe fields of the VirtualMachine resource. The probes can run the following types of tests:
- HTTP GET
- The probe determines the health of the VM by using a web hook. The test is successful if the HTTP response code is between 200 and 399. You can use an HTTP GET test with applications that return HTTP status codes when they are completely initialized.
- TCP socket
- The probe attempts to open a socket to the VM. The VM is only considered healthy if the probe can establish a connection. You can use a TCP socket test with applications that do not start listening until initialization is complete.
- Guest agent ping
- The probe uses the guest-ping command to determine if the QEMU guest agent is running on the virtual machine.
12.5.1.1. Defining an HTTP readiness probe Copiar o linkLink copiado para a área de transferência!
Define an HTTP readiness probe by setting the spec.readinessProbe.httpGet field of the VM configuration.
Procedure
Include details of the readiness probe in the VM configuration file.
Sample readiness probe with an HTTP GET test
apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: annotations: name: fedora-vm namespace: example-namespace # ... spec: template: spec: readinessProbe: httpGet:1 port: 15002 path: /healthz3 httpHeaders: - name: Custom-Header value: Awesome initialDelaySeconds: 1204 periodSeconds: 205 timeoutSeconds: 106 failureThreshold: 37 successThreshold: 38 # ...- 1
- The HTTP GET request to perform to connect to the VM.
- 2
- The port of the VM that the probe queries. In the above example, the probe queries port 1500.
- 3
- The path to access on the HTTP server. In the above example, if the handler for the server’s /healthz path returns a success code, the VM is considered to be healthy. If the handler returns a failure code, the VM is removed from the list of available endpoints.
- 4
- The time, in seconds, after the VM starts before the readiness probe is initiated.
- 5
- The delay, in seconds, between performing probes. The default delay is 10 seconds. This value must be greater than
timeoutSeconds. - 6
- The number of seconds of inactivity after which the probe times out and the VM is assumed to have failed. The default value is 1. This value must be lower than
periodSeconds. - 7
- The number of times that the probe is allowed to fail. The default is 3. After the specified number of attempts, the pod is marked
Unready. - 8
- The number of times that the probe must report success, after a failure, to be considered successful. The default is 1.
Create the VM by running the following command:
$ oc create -f <file_name>.yaml
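Readiness is surfaced through the VM's virt-launcher pod, so one simple, hedged way to observe the probe taking effect is to watch the pod readiness column after the VM starts; the namespace below matches the sample manifest and a pod that never becomes Ready suggests the probe is failing.

$ oc get pods -n example-namespace -w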
12.5.1.2. Defining a TCP readiness probe Copiar o linkLink copiado para a área de transferência!
Define a TCP readiness probe by setting the spec.readinessProbe.tcpSocket field of the VM configuration.
Procedure
Include details of the TCP readiness probe in the VM configuration file.
Sample readiness probe with a TCP socket test
apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: annotations: name: fedora-vm namespace: example-namespace # ... spec: template: spec: readinessProbe: initialDelaySeconds: 1201 periodSeconds: 202 tcpSocket:3 port: 15004 timeoutSeconds: 105 # ...- 1
- The time, in seconds, after the VM starts before the readiness probe is initiated.
- 2
- The delay, in seconds, between performing probes. The default delay is 10 seconds. This value must be greater than
timeoutSeconds. - 3
- The TCP action to perform.
- 4
- The port of the VM that the probe queries.
- 5
- The number of seconds of inactivity after which the probe times out and the VM is assumed to have failed. The default value is 1. This value must be lower than
periodSeconds.
Create the VM by running the following command:
$ oc create -f <file_name>.yaml
12.5.1.3. Defining an HTTP liveness probe Copiar o linkLink copiado para a área de transferência!
Define an HTTP liveness probe by setting the spec.livenessProbe.httpGet field of the VM configuration.
Procedure
Include details of the HTTP liveness probe in the VM configuration file.
Sample liveness probe with an HTTP GET test
apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: annotations: name: fedora-vm namespace: example-namespace # ... spec: template: spec: livenessProbe: initialDelaySeconds: 1201 periodSeconds: 202 httpGet:3 port: 15004 path: /healthz5 httpHeaders: - name: Custom-Header value: Awesome timeoutSeconds: 106 # ...- 1
- The time, in seconds, after the VM starts before the liveness probe is initiated.
- 2
- The delay, in seconds, between performing probes. The default delay is 10 seconds. This value must be greater than
timeoutSeconds. - 3
- The HTTP GET request to perform to connect to the VM.
- 4
- The port of the VM that the probe queries. In the above example, the probe queries port 1500. The VM installs and runs a minimal HTTP server on port 1500 via cloud-init.
- 5
- The path to access on the HTTP server. In the above example, if the handler for the server’s
/healthzpath returns a success code, the VM is considered to be healthy. If the handler returns a failure code, the VM is deleted and a new VM is created. - 6
- The number of seconds of inactivity after which the probe times out and the VM is assumed to have failed. The default value is 1. This value must be lower than
periodSeconds.
Create the VM by running the following command:
$ oc create -f <file_name>.yaml
12.5.2. Defining a watchdog Copiar o linkLink copiado para a área de transferência!
You can define a watchdog to monitor the health of the guest operating system by performing the following steps:
- Configure a watchdog device for the virtual machine (VM).
- Install the watchdog agent on the guest.
The watchdog device monitors the agent and performs one of the following actions if the guest operating system is unresponsive:
-
: The VM powers down immediately. If
poweroffis set tospec.runningortrueis not set tospec.runStrategy, then the VM reboots.manual - : The VM reboots in place and the guest operating system cannot react.
resetNoteThe reboot time might cause liveness probes to time out. If cluster-level protections detect a failed liveness probe, the VM might be forcibly rescheduled, increasing the reboot time.
-
: The VM gracefully powers down by stopping all services.
shutdown
Watchdog is not available for Windows VMs.
12.5.2.1. Configuring a watchdog device for the virtual machine Copiar o linkLink copiado para a área de transferência!
You configure a watchdog device for the virtual machine (VM).
Prerequisites
- The VM must have kernel support for a watchdog device. Red Hat Enterprise Linux (RHEL) images support i6300esb.
Procedure
Create a
file with the following contents:YAMLapiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: kubevirt.io/vm: vm2-rhel84-watchdog name: <vm-name> spec: running: false template: metadata: labels: kubevirt.io/vm: vm2-rhel84-watchdog spec: domain: devices: watchdog: name: <watchdog> i6300esb: action: "poweroff"1 # ...- 1
- Specify
poweroff,reset, orshutdown.
The example above configures the i6300esb watchdog device on a RHEL8 VM with the poweroff action and exposes the device as /dev/watchdog. This device can now be used by the watchdog binary.
Apply the YAML file to your cluster by running the following command:
$ oc apply -f <file_name>.yaml
Verification
This procedure is provided for testing watchdog functionality only and must not be run on production machines.
Run the following command to verify that the VM is connected to the watchdog device:
$ lspci | grep watchdog -iRun one of the following commands to confirm the watchdog is active:
Trigger a kernel panic:
# echo c > /proc/sysrq-triggerStop the watchdog service:
# pkill -9 watchdog
12.5.2.2. Installing the watchdog agent on the guest Copiar o linkLink copiado para a área de transferência!
You install the watchdog agent on the guest and start the watchdog service.
Procedure
- Log in to the virtual machine as root user.
Install the watchdog package and its dependencies:

# yum install watchdog

Uncomment the following line in the /etc/watchdog.conf file and save the changes:

#watchdog-device = /dev/watchdog

Enable the watchdog service to start on boot:

# systemctl enable --now watchdog.service
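Optionally, as a small check outside the documented steps, you can confirm that the agent is running before you test the watchdog device:

# systemctl is-active watchdog.service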
12.5.3. Defining a guest agent ping probe Copiar o linkLink copiado para a área de transferência!
Define a guest agent ping probe by setting the spec.readinessProbe.guestAgentPing field of the VM configuration.
The guest agent ping probe is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Prerequisites
- The QEMU guest agent must be installed and enabled on the virtual machine.
Procedure
Include details of the guest agent ping probe in the VM configuration file. For example:
Sample guest agent ping probe
apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: annotations: name: fedora-vm namespace: example-namespace # ... spec: template: spec: readinessProbe: guestAgentPing: {}1 initialDelaySeconds: 1202 periodSeconds: 203 timeoutSeconds: 104 failureThreshold: 35 successThreshold: 36 # ...- 1
- The guest agent ping probe to connect to the VM.
- 2
- Optional: The time, in seconds, after the VM starts before the guest agent probe is initiated.
- 3
- Optional: The delay, in seconds, between performing probes. The default delay is 10 seconds. This value must be greater than
timeoutSeconds. - 4
- Optional: The number of seconds of inactivity after which the probe times out and the VM is assumed to have failed. The default value is 1. This value must be lower than
periodSeconds. - 5
- Optional: The number of times that the probe is allowed to fail. The default is 3. After the specified number of attempts, the pod is marked
Unready. - 6
- Optional: The number of times that the probe must report success, after a failure, to be considered successful. The default is 1.
Create the VM by running the following command:
$ oc create -f <file_name>.yaml
12.6. OpenShift Virtualization runbooks Copiar o linkLink copiado para a área de transferência!
Runbooks for the OpenShift Virtualization Operator are maintained in the openshift/runbooks Git repository, and you can view them on GitHub. To diagnose and resolve issues that trigger OpenShift Virtualization alerts, follow the procedures in the runbooks.
OpenShift Virtualization alerts are displayed in the Virtualization → Overview tab in the web console.
12.6.1. CDIDataImportCronOutdated Copiar o linkLink copiado para a área de transferência!
-
View the runbook for the alert.
CDIDataImportCronOutdated
12.6.2. CDIDataVolumeUnusualRestartCount Copiar o linkLink copiado para a área de transferência!
-
View the runbook for the alert.
CDIDataVolumeUnusualRestartCount
12.6.3. CDIDefaultStorageClassDegraded Copiar o linkLink copiado para a área de transferência!
-
View the runbook for the alert.
CDIDefaultStorageClassDegraded
12.6.4. CDIMultipleDefaultVirtStorageClasses Copiar o linkLink copiado para a área de transferência!
-
View the runbook for the alert.
CDIMultipleDefaultVirtStorageClasses
12.6.5. CDINoDefaultStorageClass Copiar o linkLink copiado para a área de transferência!
-
View the runbook for the alert.
CDINoDefaultStorageClass
12.6.6. CDINotReady Copiar o linkLink copiado para a área de transferência!
-
View the runbook for the alert.
CDINotReady
12.6.7. CDIOperatorDown Copiar o linkLink copiado para a área de transferência!
-
View the runbook for the alert.
CDIOperatorDown
12.6.8. CDIStorageProfilesIncomplete Copiar o linkLink copiado para a área de transferência!
-
View the runbook for the alert.
CDIStorageProfilesIncomplete
12.6.9. CnaoDown Copiar o linkLink copiado para a área de transferência!
-
View the runbook for the alert.
CnaoDown
12.6.10. CnaoNMstateMigration Copiar o linkLink copiado para a área de transferência!
-
View the runbook for the alert.
CnaoNMstateMigration
12.6.11. HCOInstallationIncomplete Copiar o linkLink copiado para a área de transferência!
-
View the runbook for the alert.
HCOInstallationIncomplete
12.6.12. HPPNotReady Copiar o linkLink copiado para a área de transferência!
-
View the runbook for the alert.
HPPNotReady
12.6.13. HPPOperatorDown Copiar o linkLink copiado para a área de transferência!
-
View the runbook for the alert.
HPPOperatorDown
12.6.14. HPPSharingPoolPathWithOS Copiar o linkLink copiado para a área de transferência!
-
View the runbook for the alert.
HPPSharingPoolPathWithOS
12.6.15. KubemacpoolDown Copiar o linkLink copiado para a área de transferência!
-
View the runbook for the alert.
KubemacpoolDown
12.6.16. KubeMacPoolDuplicateMacsFound Copiar o linkLink copiado para a área de transferência!
-
The alert is deprecated.
KubeMacPoolDuplicateMacsFound
12.6.17. KubeVirtComponentExceedsRequestedCPU Copiar o linkLink copiado para a área de transferência!
-
The alert is deprecated.
KubeVirtComponentExceedsRequestedCPU
12.6.18. KubeVirtComponentExceedsRequestedMemory Copiar o linkLink copiado para a área de transferência!
-
The alert is deprecated.
KubeVirtComponentExceedsRequestedMemory
12.6.19. KubeVirtCRModified Copiar o linkLink copiado para a área de transferência!
-
View the runbook for the alert.
KubeVirtCRModified
12.6.20. KubeVirtDeprecatedAPIRequested Copiar o linkLink copiado para a área de transferência!
-
View the runbook for the alert.
KubeVirtDeprecatedAPIRequested
12.6.21. KubeVirtNoAvailableNodesToRunVMs Copiar o linkLink copiado para a área de transferência!
-
View the runbook for the alert.
KubeVirtNoAvailableNodesToRunVMs
12.6.22. KubevirtVmHighMemoryUsage Copiar o linkLink copiado para a área de transferência!
-
The alert is deprecated.
KubevirtVmHighMemoryUsage
12.6.23. KubeVirtVMIExcessiveMigrations Copiar o linkLink copiado para a área de transferência!
-
View the runbook for the alert.
KubeVirtVMIExcessiveMigrations
12.6.24. LowKVMNodesCount Copiar o linkLink copiado para a área de transferência!
-
View the runbook for the alert.
LowKVMNodesCount
12.6.25. LowReadyVirtControllersCount Copiar o linkLink copiado para a área de transferência!
-
View the runbook for the alert.
LowReadyVirtControllersCount
12.6.26. LowReadyVirtOperatorsCount Copiar o linkLink copiado para a área de transferência!
-
View the runbook for the alert.
LowReadyVirtOperatorsCount
12.6.27. LowVirtAPICount Copiar o linkLink copiado para a área de transferência!
-
View the runbook for the alert.
LowVirtAPICount
12.6.28. LowVirtControllersCount Copiar o linkLink copiado para a área de transferência!
-
View the runbook for the alert.
LowVirtControllersCount
12.6.29. LowVirtOperatorCount Copiar o linkLink copiado para a área de transferência!
-
View the runbook for the alert.
LowVirtOperatorCount
12.6.30. NetworkAddonsConfigNotReady Copiar o linkLink copiado para a área de transferência!
-
View the runbook for the alert.
NetworkAddonsConfigNotReady
12.6.31. NoLeadingVirtOperator Copiar o linkLink copiado para a área de transferência!
-
View the runbook for the alert.
NoLeadingVirtOperator
12.6.32. NoReadyVirtController Copiar o linkLink copiado para a área de transferência!
-
View the runbook for the alert.
NoReadyVirtController
12.6.33. NoReadyVirtOperator Copiar o linkLink copiado para a área de transferência!
-
View the runbook for the alert.
NoReadyVirtOperator
12.6.34. OrphanedVirtualMachineInstances Copiar o linkLink copiado para a área de transferência!
-
View the runbook for the alert.
OrphanedVirtualMachineInstances
12.6.35. OutdatedVirtualMachineInstanceWorkloads Copiar o linkLink copiado para a área de transferência!
-
View the runbook for the alert.
OutdatedVirtualMachineInstanceWorkloads
12.6.36. SingleStackIPv6Unsupported Copiar o linkLink copiado para a área de transferência!
-
The alert is deprecated.
SingleStackIPv6Unsupported
12.6.37. SSPCommonTemplatesModificationReverted Copiar o linkLink copiado para a área de transferência!
-
View the runbook for the alert.
SSPCommonTemplatesModificationReverted
12.6.38. SSPDown Copiar o linkLink copiado para a área de transferência!
-
View the runbook for the alert.
SSPDown
12.6.39. SSPFailingToReconcile Copiar o linkLink copiado para a área de transferência!
-
View the runbook for the alert.
SSPFailingToReconcile
12.6.40. SSPHighRateRejectedVms Copiar o linkLink copiado para a área de transferência!
-
View the runbook for the alert.
SSPHighRateRejectedVms
12.6.41. SSPTemplateValidatorDown Copiar o linkLink copiado para a área de transferência!
-
View the runbook for the alert.
SSPTemplateValidatorDown
12.6.42. UnsupportedHCOModification Copiar o linkLink copiado para a área de transferência!
-
View the runbook for the alert.
UnsupportedHCOModification
12.6.43. VirtAPIDown Copiar o linkLink copiado para a área de transferência!
-
View the runbook for the alert.
VirtAPIDown
12.6.44. VirtApiRESTErrorsBurst Copiar o linkLink copiado para a área de transferência!
-
View the runbook for the alert.
VirtApiRESTErrorsBurst
12.6.45. VirtApiRESTErrorsHigh Copiar o linkLink copiado para a área de transferência!
-
The alert is deprecated.
VirtApiRESTErrorsHigh
12.6.46. VirtControllerDown Copiar o linkLink copiado para a área de transferência!
-
View the runbook for the alert.
VirtControllerDown
12.6.47. VirtControllerRESTErrorsBurst Copiar o linkLink copiado para a área de transferência!
-
View the runbook for the alert.
VirtControllerRESTErrorsBurst
12.6.48. VirtControllerRESTErrorsHigh Copiar o linkLink copiado para a área de transferência!
-
The alert is deprecated.
VirtControllerRESTErrorsHigh
12.6.49. VirtHandlerDaemonSetRolloutFailing Copiar o linkLink copiado para a área de transferência!
-
View the runbook for the alert.
VirtHandlerDaemonSetRolloutFailing
12.6.50. VirtHandlerRESTErrorsBurst Copiar o linkLink copiado para a área de transferência!
-
View the runbook for the alert.
VirtHandlerRESTErrorsBurst
12.6.51. VirtHandlerRESTErrorsHigh Copiar o linkLink copiado para a área de transferência!
-
The alert is deprecated.
VirtHandlerRESTErrorsHigh
12.6.52. VirtOperatorDown Copiar o linkLink copiado para a área de transferência!
-
View the runbook for the alert.
VirtOperatorDown
12.6.53. VirtOperatorRESTErrorsBurst Copiar o linkLink copiado para a área de transferência!
-
View the runbook for the alert.
VirtOperatorRESTErrorsBurst
12.6.54. VirtOperatorRESTErrorsHigh Copiar o linkLink copiado para a área de transferência!
-
The alert is deprecated.
VirtOperatorRESTErrorsHigh
12.6.55. VirtualMachineCRCErrors Copiar o linkLink copiado para a área de transferência!
The runbook for the VirtualMachineCRCErrors alert is deprecated because the alert was renamed to VMStorageClassWarning.
- View the runbook for the VMStorageClassWarning alert.
12.6.56. VMCannotBeEvicted Copiar o linkLink copiado para a área de transferência!
-
View the runbook for the alert.
VMCannotBeEvicted
12.6.57. VMStorageClassWarning Copiar o linkLink copiado para a área de transferência!
-
View the runbook for the alert.
VMStorageClassWarning
Chapter 13. Support Copiar o linkLink copiado para a área de transferência!
13.1. Support overview Copiar o linkLink copiado para a área de transferência!
You can collect data about your environment, monitor the health of your cluster and virtual machines (VMs), and troubleshoot OpenShift Virtualization resources with the following tools.
13.1.1. Web console Copiar o linkLink copiado para a área de transferência!
The OpenShift Container Platform web console displays resource usage, alerts, events, and trends for your cluster and for OpenShift Virtualization components and resources.
| Page | Description |
|---|---|
| Overview page | Cluster details, status, alerts, inventory, and resource usage |
| Virtualization → Overview tab | OpenShift Virtualization resources, usage, alerts, and status |
| Virtualization → Top consumers tab | Top consumers of CPU, memory, and storage |
| Virtualization → Migrations tab | Progress of live migrations |
| VirtualMachines → VirtualMachine → VirtualMachine details → Metrics tab | VM resource usage, storage, network, and migration |
| VirtualMachines → VirtualMachine → VirtualMachine details → Events tab | List of VM events |
| VirtualMachines → VirtualMachine → VirtualMachine details → Diagnostics tab | VM status conditions and volume snapshot status |
13.1.2. Collecting data for Red Hat Support Copiar o linkLink copiado para a área de transferência!
When you submit a support case to Red Hat Support, it is helpful to provide debugging information. You can gather debugging information by performing the following steps:
- Collecting data about your environment
- Configure Prometheus and Alertmanager and collect must-gather data for OpenShift Container Platform and OpenShift Virtualization.
- Collecting data about VMs
- Collect must-gather data and memory dumps from VMs.
- must-gather tool for OpenShift Virtualization
- Configure and use the must-gather tool.
13.1.3. Troubleshooting Copiar o linkLink copiado para a área de transferência!
Troubleshoot OpenShift Virtualization components and VMs and resolve issues that trigger alerts in the web console.
- Events
- View important life-cycle information for VMs, namespaces, and resources.
- Logs
- View and configure logs for OpenShift Virtualization components and VMs.
- Troubleshooting data volumes
- Troubleshoot data volumes by analyzing conditions and events.
13.2. Collecting data for Red Hat Support Copiar o linkLink copiado para a área de transferência!
When you submit a support case to Red Hat Support, it is helpful to provide debugging information for OpenShift Container Platform and OpenShift Virtualization by using the following tools:
- must-gather tool
- The must-gather tool collects diagnostic information, including resource definitions and service logs.
- Prometheus
- Prometheus is a time-series database and a rule evaluation engine for metrics. Prometheus sends alerts to Alertmanager for processing.
- Alertmanager
- The Alertmanager service handles alerts received from Prometheus. The Alertmanager is also responsible for sending the alerts to external notification systems. For information about the OpenShift Container Platform monitoring stack, see About OpenShift Container Platform monitoring.
13.2.1. Collecting data about your environment Copiar o linkLink copiado para a área de transferência!
Collecting data about your environment minimizes the time required to analyze and determine the root cause.
Prerequisites
- Set the retention time for Prometheus metrics data to a minimum of seven days.
- Configure the Alertmanager to capture relevant alerts and to send alert notifications to a dedicated mailbox so that they can be viewed and persisted outside the cluster.
- Record the exact number of affected nodes and virtual machines.
13.2.2. Collecting data about virtual machines Copiar o linkLink copiado para a área de transferência!
Collecting data about malfunctioning virtual machines (VMs) minimizes the time required to analyze and determine the root cause.
Prerequisites
- Linux VMs: Install the latest QEMU guest agent.
Windows VMs:
- Record the Windows patch update details.
- Install the latest VirtIO drivers.
- Install the latest QEMU guest agent.
- If Remote Desktop Protocol (RDP) is enabled, connect by using the desktop viewer to determine whether there is a problem with the connection software.
Procedure
- Collect must-gather data for the VMs by using the /usr/bin/gather script.
- Collect screenshots of VMs that have crashed before you restart them.
- Collect memory dumps from VMs before remediation attempts; a hedged command sketch follows this list.
- Record factors that the malfunctioning VMs have in common. For example, the VMs have the same host or network.
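For the memory dump step above, a minimal sketch using the virtctl client looks like the following. The claim name is a placeholder you choose, and the exact flags can vary between OpenShift Virtualization versions, so treat this as an illustration rather than the definitive syntax.

$ virtctl memory-dump get <vm_name> --claim-name=<pvc_name> --create-claim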
13.2.3. Using the must-gather tool for OpenShift Virtualization Copiar o linkLink copiado para a área de transferência!
You can collect data about OpenShift Virtualization resources by running the must-gather tool.
The default data collection includes information about the following resources:
- OpenShift Virtualization Operator namespaces, including child objects
- OpenShift Virtualization custom resource definitions
- Namespaces that contain virtual machines
- Basic virtual machine definitions
Instance types information is not currently collected by default; you can, however, run a command to optionally collect it.
Procedure
Run the following command to collect data about OpenShift Virtualization:
$ oc adm must-gather \ --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.14.17 \ -- /usr/bin/gather
13.2.3.1. must-gather tool options Copiar o linkLink copiado para a área de transferência!
You can specify a combination of scripts and environment variables for the following options:
- Collecting detailed virtual machine (VM) information from a namespace
- Collecting detailed information about specified VMs
- Collecting image, image-stream, and image-stream-tags information
- Limiting the maximum number of parallel processes used by the must-gather tool
13.2.3.1.1. Parameters Copiar o linkLink copiado para a área de transferência!
Environment variables
You can specify environment variables for a compatible script.
NS=<namespace_name>
- Collect virtual machine information, including virt-launcher pod details, from the namespace that you specify. The VirtualMachine and VirtualMachineInstance CR data is collected for all namespaces.

VM=<vm_name>
- Collect details about a particular virtual machine. To use this option, you must also specify a namespace by using the NS environment variable.

PROS=<number_of_processes>
- Modify the maximum number of parallel processes that the must-gather tool uses. The default value is 5.

Important: Using too many parallel processes can cause performance issues. Increasing the maximum number of parallel processes is not recommended.
Scripts
Each script is compatible only with certain environment variable combinations.
/usr/bin/gather
- Use the default must-gather script, which collects cluster data from all namespaces and includes only basic VM information. This script is compatible only with the PROS variable.

/usr/bin/gather --vms_details
- Collect VM log files, VM definitions, control-plane logs, and namespaces that belong to OpenShift Virtualization resources. Specifying namespaces includes their child objects. If you use this parameter without specifying a namespace or VM, the must-gather tool collects this data for all VMs in the cluster. This script is compatible with all environment variables, but you must specify a namespace if you use the VM variable.

/usr/bin/gather --images
- Collect image, image-stream, and image-stream-tags custom resource information. This script is compatible only with the PROS variable.

/usr/bin/gather --instancetypes
- Collect instance types information. This information is not currently collected by default; you can, however, optionally collect it.
13.2.3.1.2. Usage and examples Copiar o linkLink copiado para a área de transferência!
Environment variables are optional. You can run a script by itself or with one or more compatible environment variables.
| Script | Compatible environment variable |
|---|---|
| /usr/bin/gather | * PROS=<number_of_processes> |
| /usr/bin/gather --vms_details | * For a namespace: NS=<namespace_name> * For a VM: VM=<vm_name> * PROS=<number_of_processes> |
| /usr/bin/gather --images | * PROS=<number_of_processes> |
Syntax
$ oc adm must-gather \
--image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.14.17 \
-- <environment_variable_1> <environment_variable_2> <script_name>
Default data collection parallel processes
By default, five processes run in parallel.
$ oc adm must-gather \
--image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.14.17 \
-- PROS=5 /usr/bin/gather
- 1
- You can modify the number of parallel processes by changing the default.
Detailed VM information
The following command collects detailed VM information for the my-vm VM in the mynamespace namespace:
$ oc adm must-gather \
--image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.14.17 \
-- NS=mynamespace VM=my-vm /usr/bin/gather --vms_details
- 1
- The
NSenvironment variable is mandatory if you use theVMenvironment variable.
Image, image-stream, and image-stream-tags information
The following command collects image, image-stream, and image-stream-tags information from the cluster:
$ oc adm must-gather \
--image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.14.17 \
/usr/bin/gather --images
Instance types information
The following command collects instance types information from the cluster:
$ oc adm must-gather \
--image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.14.17 \
/usr/bin/gather --instancetypes
13.3. Troubleshooting Copiar o linkLink copiado para a área de transferência!
OpenShift Virtualization provides tools and logs for troubleshooting virtual machines and virtualization components.
You can troubleshoot OpenShift Virtualization components by using the tools provided in the web console or by using the oc CLI tool.
13.3.1. Events Copiar o linkLink copiado para a área de transferência!
OpenShift Container Platform events are records of important life-cycle information and are useful for monitoring and troubleshooting virtual machine, namespace, and resource issues.
VM events: Navigate to the Events tab of the VirtualMachine details page in the web console.
- Namespace events
You can view namespace events by running the following command:
$ oc get events -n <namespace>

See the list of events for details about specific events.
- Resource events
You can view resource events by running the following command:
$ oc describe <resource> <resource_name>
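For example, for the data volumes discussed later in this chapter, the resource placeholder resolves to a command such as the following; the data volume name and namespace are placeholders for your own objects.

$ oc describe dv <datavolume_name> -n <namespace>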
13.3.2. Logs Copiar o linkLink copiado para a área de transferência!
You can review the following logs for troubleshooting:
13.3.2.1. Viewing virtual machine logs with the web console Copiar o linkLink copiado para a área de transferência!
You can view virtual machine logs with the OpenShift Container Platform web console.
Procedure
- Navigate to Virtualization → VirtualMachines.
- Select a virtual machine to open the VirtualMachine details page.
- On the Details tab, click the pod name to open the Pod details page.
- Click the Logs tab to view the logs.
13.3.2.2. Viewing OpenShift Virtualization pod logs Copiar o linkLink copiado para a área de transferência!
You can view logs for OpenShift Virtualization pods by using the oc CLI tool.
You can configure the verbosity level of the logs by editing the HyperConverged custom resource.
13.3.2.2.1. Viewing OpenShift Virtualization pod logs with the CLI Copiar o linkLink copiado para a área de transferência!
You can view logs for the OpenShift Virtualization pods by using the oc CLI tool.
Procedure
View a list of pods in the OpenShift Virtualization namespace by running the following command:
$ oc get pods -n openshift-cnv

Example 13.1. Example output

NAME                               READY   STATUS    RESTARTS   AGE
disks-images-provider-7gqbc        1/1     Running   0          32m
disks-images-provider-vg4kx        1/1     Running   0          32m
virt-api-57fcc4497b-7qfmc          1/1     Running   0          31m
virt-api-57fcc4497b-tx9nc          1/1     Running   0          31m
virt-controller-76c784655f-7fp6m   1/1     Running   0          30m
virt-controller-76c784655f-f4pbd   1/1     Running   0          30m
virt-handler-2m86x                 1/1     Running   0          30m
virt-handler-9qs6z                 1/1     Running   0          30m
virt-operator-7ccfdbf65f-q5snk     1/1     Running   0          32m
virt-operator-7ccfdbf65f-vllz8     1/1     Running   0          32m

View the pod log by running the following command:

$ oc logs -n openshift-cnv <pod_name>

Note: If a pod fails to start, you can use the --previous option to view logs from the last attempt.

To monitor log output in real time, use the -f option.

Example 13.2. Example output
{"component":"virt-handler","level":"info","msg":"set verbosity to 2","pos":"virt-handler.go:453","timestamp":"2022-04-17T08:58:37.373695Z"} {"component":"virt-handler","level":"info","msg":"set verbosity to 2","pos":"virt-handler.go:453","timestamp":"2022-04-17T08:58:37.373726Z"} {"component":"virt-handler","level":"info","msg":"setting rate limiter to 5 QPS and 10 Burst","pos":"virt-handler.go:462","timestamp":"2022-04-17T08:58:37.373782Z"} {"component":"virt-handler","level":"info","msg":"CPU features of a minimum baseline CPU model: map[apic:true clflush:true cmov:true cx16:true cx8:true de:true fpu:true fxsr:true lahf_lm:true lm:true mca:true mce:true mmx:true msr:true mtrr:true nx:true pae:true pat:true pge:true pni:true pse:true pse36:true sep:true sse:true sse2:true sse4.1:true ssse3:true syscall:true tsc:true]","pos":"cpu_plugin.go:96","timestamp":"2022-04-17T08:58:37.390221Z"} {"component":"virt-handler","level":"warning","msg":"host model mode is expected to contain only one model","pos":"cpu_plugin.go:103","timestamp":"2022-04-17T08:58:37.390263Z"} {"component":"virt-handler","level":"info","msg":"node-labeller is running","pos":"node_labeller.go:94","timestamp":"2022-04-17T08:58:37.391011Z"}
13.3.2.2.2. Configuring OpenShift Virtualization pod log verbosity Copiar o linkLink copiado para a área de transferência!
You can configure the verbosity level of OpenShift Virtualization pod logs by editing the HyperConverged custom resource (CR).
Procedure
To set log verbosity for specific components, open the HyperConverged CR in your default text editor by running the following command:

$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv

Set the log level for one or more components by editing the spec.logVerbosityConfig stanza. For example:

apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
spec:
  logVerbosityConfig:
    kubevirt:
      virtAPI: 5 1
      virtController: 4
      virtHandler: 3
      virtLauncher: 2
      virtOperator: 6

- 1
- The log verbosity value must be an integer in the range 1–9, where a higher number indicates a more detailed log. In this example, the virtAPI component logs are exposed if their priority level is 5 or higher.
- Apply your changes by saving and exiting the editor.
13.3.2.2.3. Common error messages Copiar o linkLink copiado para a área de transferência!
The following error messages might appear in OpenShift Virtualization logs:
ErrImagePull or ImagePullBackOff - Indicates an incorrect deployment configuration or problems with the images that are referenced.
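A hedged way to investigate these messages is to inspect the events of the affected pod, which usually name the image that could not be pulled; the pod name is a placeholder for the failing pod.

$ oc describe pod <pod_name> -n openshift-cnv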
13.3.2.3. Viewing aggregated OpenShift Virtualization logs with the LokiStack Copiar o linkLink copiado para a área de transferência!
You can view aggregated logs for OpenShift Virtualization pods and containers by using the LokiStack in the web console.
Prerequisites
- You deployed the LokiStack.
Procedure
- Navigate to Observe → Logs in the web console.
- Select application, for virt-launcher pod logs, or infrastructure, for OpenShift Virtualization control plane pods and containers, from the log type list.
- Click Show Query to display the query field.
- Enter the LogQL query in the query field and click Run Query to display the filtered logs.
13.3.2.3.1. OpenShift Virtualization LogQL queries Copiar o linkLink copiado para a área de transferência!
You can view and filter aggregated logs for OpenShift Virtualization components by running Loki Query Language (LogQL) queries on the Observe → Logs page in the web console.
The default log type is infrastructure. To query virt-launcher pod logs, select the application log type.
Optional: You can include or exclude strings or regular expressions by using line filter expressions.
If the query matches a large number of logs, the query might time out.
| Component | LogQL query |
|---|---|
| All |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| Container |
|
|
| You must select application from the log type list before running this query.
|
You can filter log lines to include or exclude strings or regular expressions by using line filter expressions.
| Line filter expression | Description |
|---|---|
| |= | Log line contains string |
| != | Log line does not contain string |
| |~ | Log line contains regular expression |
| !~ | Log line does not contain regular expression |
Example line filter expression
{log_type=~".+"}|json
|kubernetes_labels_app_kubernetes_io_part_of="hyperconverged-cluster"
|= "error" != "timeout"
13.3.3. Troubleshooting data volumes Copiar o linkLink copiado para a área de transferência!
You can check the Conditions and Events sections of the DataVolume object to diagnose data volume issues.
13.3.3.1. About data volume conditions and events Copiar o linkLink copiado para a área de transferência!
You can diagnose data volume issues by examining the output of the Conditions and Events sections generated by the following command:

$ oc describe dv <DataVolume>
The Conditions section displays the following Types:

- Bound
- Running
- Ready

The Events section provides the following additional information:

- Type of event
- Reason for logging
- Source of the event
- Message containing additional diagnostic information

The output from oc describe does not always contain Events.

An event is generated when the Status, Reason, or Message changes. Both conditions and events react to changes in the state of the data volume.

For example, if you misspell the URL during an import operation, the import generates a 404 message. That message change generates an event with a reason. The output in the Conditions section is updated as well.
13.3.3.2. Analyzing data volume conditions and events Copiar o linkLink copiado para a área de transferência!
By inspecting the Conditions and Events sections generated by the describe command, you determine the state of the data volume.
There are many different combinations of conditions. Each must be evaluated in its unique context.
Examples of various combinations follow.
- Bound – A successfully bound PVC displays in this example.

Note that the Type is Bound, so the Status is True. If the PVC is not bound, the Status is False.

When the PVC is bound, an event is generated stating that the PVC is bound. In this case, the Reason is Bound and Status is True. The Message indicates which PVC owns the data volume.

Message, in the Events section, provides further details, including how long the PVC has been bound (Age) and by what resource (From), in this case datavolume-controller:

Example output

Status:
  Conditions:
    Last Heart Beat Time:  2020-07-15T03:58:24Z
    Last Transition Time:  2020-07-15T03:58:24Z
    Message:               PVC win10-rootdisk Bound
    Reason:                Bound
    Status:                True
    Type:                  Bound
...
Events:
  Type    Reason  Age  From                   Message
  ----    ------  ---- ----                   -------
  Normal  Bound   24s  datavolume-controller  PVC example-dv Bound

- Running – In this case, note that Type is Running and Status is False, indicating that an event has occurred that caused an attempted operation to fail, changing the Status from True to False.

However, note that Reason is Completed and the Message field indicates Import Complete.

In the Events section, the Reason and Message contain additional troubleshooting information about the failed operation. In this example, the Message displays an inability to connect due to a 404, listed in the Events section's first Warning.

From this information, you conclude that an import operation was running, creating contention for other operations that are attempting to access the data volume:

Example output

Status:
  Conditions:
    Last Heart Beat Time:  2020-07-15T04:31:39Z
    Last Transition Time:  2020-07-15T04:31:39Z
    Message:               Import Complete
    Reason:                Completed
    Status:                False
    Type:                  Running
...
Events:
  Type     Reason  Age                From                   Message
  ----     ------  ----               ----                   -------
  Warning  Error   12s (x2 over 14s)  datavolume-controller  Unable to connect to http data source: expected status code 200, got 404. Status: 404 Not Found

- Ready – If Type is Ready and Status is True, then the data volume is ready to be used, as in the following example. If the data volume is not ready to be used, the Status is False:

Example output

Status:
  Conditions:
    Last Heart Beat Time:  2020-07-15T04:31:39Z
    Last Transition Time:  2020-07-15T04:31:39Z
    Status:                True
    Type:                  Ready
Chapter 14. Backup and restore Copiar o linkLink copiado para a área de transferência!
14.1. Backup and restore by using VM snapshots Copiar o linkLink copiado para a área de transferência!
You can back up and restore virtual machines (VMs) by using snapshots. Snapshots are supported by the following storage providers:
- Red Hat OpenShift Data Foundation
- Any other cloud storage provider with the Container Storage Interface (CSI) driver that supports the Kubernetes Volume Snapshot API
Online snapshots have a default time deadline of five minutes (5m).
Online snapshots are supported for virtual machines that have hot plugged virtual disks. However, hot plugged disks that are not in the virtual machine specification are not included in the snapshot.
To create snapshots of an online (Running state) VM with the highest integrity, install the QEMU guest agent if it is not included with your operating system. The QEMU guest agent is included with the default Red Hat templates.
The QEMU guest agent takes a consistent snapshot by attempting to quiesce the VM file system as much as possible, depending on the system workload. This ensures that in-flight I/O is written to the disk before the snapshot is taken. If the guest agent is not present, quiescing is not possible and a best-effort snapshot is taken. The conditions under which the snapshot was taken are reflected in the snapshot indications that are displayed in the web console or CLI.
14.1.1. About snapshots Copiar o linkLink copiado para a área de transferência!
A snapshot represents the state and data of a virtual machine (VM) at a specific point in time. You can use a snapshot to restore an existing VM to a previous state (represented by the snapshot) for backup and disaster recovery or to rapidly roll back to a previous development version.
A VM snapshot is created from a VM that is powered off (Stopped state) or powered on (Running state).
When taking a snapshot of a running VM, the controller checks that the QEMU guest agent is installed and running. If so, it freezes the VM file system before taking the snapshot, and thaws the file system after the snapshot is taken.
The snapshot stores a copy of each Container Storage Interface (CSI) volume attached to the VM and a copy of the VM specification and metadata. Snapshots cannot be changed after creation.
You can perform the following snapshot actions:
- Create a new snapshot
- List all snapshots attached to a specific VM
- Restore a VM from a snapshot
- Delete an existing VM snapshot
VM snapshot controller and custom resources
The VM snapshot feature introduces three new API objects defined as custom resource definitions (CRDs) for managing snapshots:
- VirtualMachineSnapshot: Represents a user request to create a snapshot. It contains information about the current state of the VM.
- VirtualMachineSnapshotContent: Represents a provisioned resource on the cluster (a snapshot). It is created by the VM snapshot controller and contains references to all resources required to restore the VM.
- VirtualMachineRestore: Represents a user request to restore a VM from a snapshot.

The VM snapshot controller binds a VirtualMachineSnapshotContent object with the VirtualMachineSnapshot object for which it was created, with a one-to-one mapping.
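A minimal sketch for inspecting these objects after you create a snapshot is shown below; the vmsnapshot short name is used elsewhere in this chapter, while the other short names are assumptions and the full resource names also work.

$ oc get vmsnapshot,vmsnapshotcontent,vmrestore -n <namespace>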
14.1.2. Creating snapshots Copiar o linkLink copiado para a área de transferência!
You can create snapshots of virtual machines (VMs) by using the OpenShift Container Platform web console or the command line.
14.1.2.1. Creating a snapshot by using the web console Copiar o linkLink copiado para a área de transferência!
You can create a snapshot of a virtual machine (VM) by using the OpenShift Container Platform web console.
The VM snapshot includes disks that meet the following requirements:
- Either a data volume or a persistent volume claim
- Belong to a storage class that supports Container Storage Interface (CSI) volume snapshots
Procedure
- Navigate to Virtualization → VirtualMachines in the web console.
- Select a VM to open the VirtualMachine details page.
-
If the VM is running, click the options menu
and select Stop to power it down.
- Click the Snapshots tab and then click Take Snapshot.
- Enter the snapshot name.
- Expand Disks included in this Snapshot to see the storage volumes to be included in the snapshot.
- If your VM has disks that cannot be included in the snapshot and you wish to proceed, select I am aware of this warning and wish to proceed.
- Click Save.
14.1.2.2. Creating a snapshot by using the command line Copiar o linkLink copiado para a área de transferência!
You can create a virtual machine (VM) snapshot for an offline or online VM by creating a VirtualMachineSnapshot object.
Prerequisites
- Ensure that the persistent volume claims (PVCs) are in a storage class that supports Container Storage Interface (CSI) volume snapshots.
- Install the OpenShift CLI (oc).
- Optional: Power down the VM for which you want to create a snapshot.
Procedure
- Create a YAML file to define a VirtualMachineSnapshot object that specifies the name of the new VirtualMachineSnapshot and the name of the source VM, as in the following example:

  apiVersion: snapshot.kubevirt.io/v1alpha1
  kind: VirtualMachineSnapshot
  metadata:
    name: <snapshot_name>
  spec:
    source:
      apiGroup: kubevirt.io
      kind: VirtualMachine
      name: <vm_name>

- Create the VirtualMachineSnapshot object:

  $ oc create -f <snapshot_name>.yaml

  The snapshot controller creates a VirtualMachineSnapshotContent object, binds it to the VirtualMachineSnapshot, and updates the status and readyToUse fields of the VirtualMachineSnapshot object.

- Optional: If you are taking an online snapshot, you can use the wait command to monitor the status of the snapshot:

  $ oc wait <vm_name> <snapshot_name> --for condition=Ready

  Verify the status of the snapshot:
  - InProgress - The online snapshot operation is still in progress.
  - Succeeded - The online snapshot operation completed successfully.
  - Failed - The online snapshot operation failed.

  Note: Online snapshots have a default time deadline of five minutes (5m). If the snapshot does not complete successfully in five minutes, the status is set to failed. Afterwards, the file system is thawed and the VM is unfrozen, but the status remains failed until you delete the failed snapshot image.

  To change the default time deadline, add the FailureDeadline attribute to the VM snapshot spec with the time that you want to allow, in minutes (m) or in seconds (s), before the snapshot operation times out.

  To set no deadline, you can specify 0, though this is generally not recommended because it can result in an unresponsive VM.

  If you do not specify a unit of time such as m or s, the default is seconds (s).
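For example, a minimal sketch of a snapshot with a longer deadline. The lowercase failureDeadline field name follows the KubeVirt snapshot API and is an assumption here; confirm the exact spelling against the VirtualMachineSnapshot CRD on your cluster before relying on it:

  apiVersion: snapshot.kubevirt.io/v1alpha1
  kind: VirtualMachineSnapshot
  metadata:
    name: <snapshot_name>
  spec:
    failureDeadline: 10m  # assumed field name: extends the default 5m online snapshot deadline
    source:
      apiGroup: kubevirt.io
      kind: VirtualMachine
      name: <vm_name>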
Verification
Verify that the VirtualMachineSnapshot object is created and bound with VirtualMachineSnapshotContent and that the readyToUse flag is set to true:

  $ oc describe vmsnapshot <snapshot_name>

Example output:

  apiVersion: snapshot.kubevirt.io/v1alpha1
  kind: VirtualMachineSnapshot
  metadata:
    creationTimestamp: "2020-09-30T14:41:51Z"
    finalizers:
    - snapshot.kubevirt.io/vmsnapshot-protection
    generation: 5
    name: mysnap
    namespace: default
    resourceVersion: "3897"
    selfLink: /apis/snapshot.kubevirt.io/v1alpha1/namespaces/default/virtualmachinesnapshots/my-vmsnapshot
    uid: 28eedf08-5d6a-42c1-969c-2eda58e2a78d
  spec:
    source:
      apiGroup: kubevirt.io
      kind: VirtualMachine
      name: my-vm
  status:
    conditions:
    - lastProbeTime: null
      lastTransitionTime: "2020-09-30T14:42:03Z"
      reason: Operation complete
      status: "False"  # (1)
      type: Progressing
    - lastProbeTime: null
      lastTransitionTime: "2020-09-30T14:42:03Z"
      reason: Operation complete
      status: "True"  # (2)
      type: Ready
    creationTime: "2020-09-30T14:42:03Z"
    readyToUse: true  # (3)
    sourceUID: 355897f3-73a0-4ec4-83d3-3c2df9486f4f
    virtualMachineSnapshotContentName: vmsnapshot-content-28eedf08-5d6a-42c1-969c-2eda58e2a78d  # (4)

1. The status field of the Progressing condition specifies if the snapshot is still being created.
2. The status field of the Ready condition specifies if the snapshot creation process is complete.
3. Specifies if the snapshot is ready to be used.
4. Specifies that the snapshot is bound to a VirtualMachineSnapshotContent object created by the snapshot controller.

Check the spec:volumeBackups property of the VirtualMachineSnapshotContent resource to verify that the expected PVCs are included in the snapshot.
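For example, a quick command-line check of the backed-up volumes. This is a sketch: substitute the content object name reported in virtualMachineSnapshotContentName, and treat the jsonpath expression as illustrative:

  $ oc get virtualmachinesnapshotcontent <content_name> -o jsonpath='{.spec.volumeBackups}'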
14.1.3. Verifying online snapshots by using snapshot indications
Snapshot indications are contextual information about online virtual machine (VM) snapshot operations. Indications are not available for offline VM snapshots. Use indications to review details about how an online snapshot was created.
Prerequisites
- You must have attempted to create an online VM snapshot.
Procedure
Display the output from the snapshot indications by performing one of the following actions:
- Use the command line to view indicator output in the status stanza of the VirtualMachineSnapshot object YAML.
- In the web console, click VirtualMachineSnapshot → Status in the Snapshot details screen.

Verify the status of your online VM snapshot by viewing the values of the status.indications parameter:
- Online indicates that the VM was running during online snapshot creation.
- GuestAgent indicates that the QEMU guest agent was running during online snapshot creation.
- NoGuestAgent indicates that the QEMU guest agent was not running during online snapshot creation. The QEMU guest agent could not be used to freeze and thaw the file system, either because the QEMU guest agent was not installed or running or due to another error.
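For example, you can read the indications directly with a jsonpath query. The vmsnapshot resource name matches the commands used elsewhere in this chapter; the jsonpath expression itself is illustrative:

  $ oc get vmsnapshot <snapshot_name> -o jsonpath='{.status.indications}'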
14.1.4. Restoring virtual machines from snapshots
You can restore virtual machines (VMs) from snapshots by using the OpenShift Container Platform web console or the command line.
14.1.4.1. Restoring a VM from a snapshot by using the web console
You can restore a virtual machine (VM) to a previous configuration represented by a snapshot in the OpenShift Container Platform web console.
Procedure
- Navigate to Virtualization → VirtualMachines in the web console.
- Select a VM to open the VirtualMachine details page.
- If the VM is running, click the options menu and select Stop to power it down.
- Click the Snapshots tab to view a list of snapshots associated with the VM.
- Select a snapshot to open the Snapshot Details screen.
- Click the options menu and select Restore VirtualMachineSnapshot.
- Click Restore.
14.1.4.2. Restoring a VM from a snapshot by using the command line
You can restore an existing virtual machine (VM) to a previous configuration by using the command line. You can only restore from an offline VM snapshot.
Prerequisites
- Power down the VM you want to restore.
Procedure
- Create a YAML file to define a VirtualMachineRestore object that specifies the name of the VM you want to restore and the name of the snapshot to be used as the source, as in the following example:

  apiVersion: snapshot.kubevirt.io/v1alpha1
  kind: VirtualMachineRestore
  metadata:
    name: <vm_restore>
  spec:
    target:
      apiGroup: kubevirt.io
      kind: VirtualMachine
      name: <vm_name>
    virtualMachineSnapshotName: <snapshot_name>

- Create the VirtualMachineRestore object:

  $ oc create -f <vm_restore>.yaml

  The snapshot controller updates the status fields of the VirtualMachineRestore object and replaces the existing VM configuration with the snapshot content.
Verification
Verify that the VM is restored to the previous state represented by the snapshot and that the status.complete flag is set to true:

  $ oc get vmrestore <vm_restore>

Example output:

  apiVersion: snapshot.kubevirt.io/v1alpha1
  kind: VirtualMachineRestore
  metadata:
    creationTimestamp: "2020-09-30T14:46:27Z"
    generation: 5
    name: my-vmrestore
    namespace: default
    ownerReferences:
    - apiVersion: kubevirt.io/v1
      blockOwnerDeletion: true
      controller: true
      kind: VirtualMachine
      name: my-vm
      uid: 355897f3-73a0-4ec4-83d3-3c2df9486f4f
    resourceVersion: "5512"
    selfLink: /apis/snapshot.kubevirt.io/v1alpha1/namespaces/default/virtualmachinerestores/my-vmrestore
    uid: 71c679a8-136e-46b0-b9b5-f57175a6a041
  spec:
    target:
      apiGroup: kubevirt.io
      kind: VirtualMachine
      name: my-vm
    virtualMachineSnapshotName: my-vmsnapshot
  status:
    complete: true
    conditions:
    - lastProbeTime: null
      lastTransitionTime: "2020-09-30T14:46:28Z"
      reason: Operation complete
      status: "False"
      type: Progressing
    - lastProbeTime: null
      lastTransitionTime: "2020-09-30T14:46:28Z"
      reason: Operation complete
      status: "True"
      type: Ready
    deletedDataVolumes:
    - test-dv1
    restoreTime: "2020-09-30T14:46:28Z"
    restores:
    - dataVolumeName: restore-71c679a8-136e-46b0-b9b5-f57175a6a041-datavolumedisk1
      persistentVolumeClaim: restore-71c679a8-136e-46b0-b9b5-f57175a6a041-datavolumedisk1
      volumeName: datavolumedisk1
      volumeSnapshotName: vmsnapshot-28eedf08-5d6a-42c1-969c-2eda58e2a78d-volume-datavolumedisk1

Note: If the Progressing condition has status: "True", the VM is still being restored.
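If you prefer to block until the restore completes instead of polling, you can wait on the Ready condition shown in the example output. This is a sketch that assumes the Ready condition reported by the snapshot controller:

  $ oc wait vmrestore <vm_restore> --for condition=Ready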
14.1.5. Deleting snapshots
You can delete snapshots of virtual machines (VMs) by using the OpenShift Container Platform web console or the command line.
14.1.5.1. Deleting a snapshot by using the web console
You can delete an existing virtual machine (VM) snapshot by using the web console.
Procedure
- Navigate to Virtualization → VirtualMachines in the web console.
- Select a VM to open the VirtualMachine details page.
- Click the Snapshots tab to view a list of snapshots associated with the VM.
- Click the options menu beside a snapshot and select Delete VirtualMachineSnapshot.
- Click Delete.
14.1.5.2. Deleting a virtual machine snapshot in the CLI
You can delete an existing virtual machine (VM) snapshot by deleting the appropriate VirtualMachineSnapshot object.
Prerequisites
- Install the OpenShift CLI (oc).
Procedure
- Delete the VirtualMachineSnapshot object:

  $ oc delete vmsnapshot <snapshot_name>

  The snapshot controller deletes the VirtualMachineSnapshot along with the associated VirtualMachineSnapshotContent object.
Verification
Verify that the snapshot is deleted and no longer attached to this VM:
$ oc get vmsnapshot
14.2. Backing up and restoring virtual machines
Red Hat supports using OpenShift Virtualization 4.14 or later with OADP 1.3.x or later.
OADP versions earlier than 1.3.0 are not supported for backup and restore of OpenShift Virtualization.
Back up and restore virtual machines by using the OpenShift API for Data Protection.
You can install the OpenShift API for Data Protection (OADP) with OpenShift Virtualization by installing the OADP Operator and configuring a backup location. You can then install the Data Protection Application.
OpenShift API for Data Protection with OpenShift Virtualization supports the following backup and restore storage options:
- Container Storage Interface (CSI) backups
- Container Storage Interface (CSI) backups with DataMover
The following storage options are excluded:
- File system backup and restore
- Volume snapshot backup and restore
For more information, see Backing up applications with File System Backup: Kopia or Restic.
To install the OADP Operator in a restricted network environment, you must first disable the default OperatorHub sources and mirror the Operator catalog.
See Using Operator Lifecycle Manager on restricted networks for details.
14.2.1. Installing and configuring OADP with OpenShift Virtualization
As a cluster administrator, you install OADP by installing the OADP Operator.
The latest version of the OADP Operator installs Velero 1.14.
Prerequisites
- Access to the cluster as a user with the cluster-admin role.
Procedure
- Install the OADP Operator according to the instructions for your storage provider.
- Install the Data Protection Application (DPA) with the kubevirt and openshift OADP plugins.
- Back up virtual machines by creating a Backup custom resource (CR).

  Warning: Red Hat support is limited to only the following options:
- CSI backups
- CSI backups with DataMover.
- Restore the Backup CR by creating a Restore CR (see the example manifests after this list).
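The following manifests are a minimal illustrative sketch of that flow, not an excerpt from this guide: the CR names, the <vm_namespace> placeholder, and the optional snapshotMoveData field (which requests CSI backups with DataMover) are assumptions to adapt to your environment.

  apiVersion: velero.io/v1
  kind: Backup
  metadata:
    name: <backup_name>
    namespace: openshift-adp
  spec:
    includedNamespaces:
    - <vm_namespace>        # namespace that contains the VMs to back up
    snapshotMoveData: true  # optional: use CSI backups with DataMover
  ---
  apiVersion: velero.io/v1
  kind: Restore
  metadata:
    name: <restore_name>
    namespace: openshift-adp
  spec:
    backupName: <backup_name>  # the Backup CR created above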
14.2.2. Installing the Data Protection Application
You install the Data Protection Application (DPA) by creating an instance of the DataProtectionApplication API.
Prerequisites
- You must install the OADP Operator.
- You must configure object storage as a backup location.
- If you use snapshots to back up PVs, your cloud provider must support either a native snapshot API or Container Storage Interface (CSI) snapshots.
- If the backup and snapshot locations use the same credentials, you must create a Secret with the default name, cloud-credentials.

  Note: If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. If there is no default Secret, the installation will fail.
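For example, assuming your provider credentials are saved in a local credentials-velero file, you can create the default Secret as follows. The secret name cloud-credentials matches the default described above; some provider plugins expect a provider-specific name such as cloud-credentials-gcp instead:

  $ oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero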
Procedure
- Click Operators → Installed Operators and select the OADP Operator.
- Under Provided APIs, click Create instance in the DataProtectionApplication box.
Click YAML View and update the parameters of the DataProtectionApplication manifest:

  apiVersion: oadp.openshift.io/v1alpha1
  kind: DataProtectionApplication
  metadata:
    name: <dpa_sample>
    namespace: openshift-adp
  spec:
    configuration:
      velero:
        defaultPlugins:
        - kubevirt
        - gcp
        - csi
        - openshift
        resourceTimeout: 10m
      nodeAgent:
        enable: true
        uploaderType: kopia
        podConfig:
          nodeSelector: <node_selector>
    backupLocations:
    - velero:
        provider: gcp
        default: true
        credential:
          key: cloud
          name: <default_secret>
        objectStorage:
          bucket: <bucket_name>
          prefix: <prefix>

where:
- namespace: Specifies the default namespace for OADP, which is openshift-adp. The namespace is a variable and is configurable.
- kubevirt: Specifies that the kubevirt plugin is mandatory for OpenShift Virtualization.
- gcp: Specifies the plugin for the backup provider, for example, gcp, if it exists.
- csi: Specifies that the csi plugin is mandatory for backing up PVs with CSI snapshots. The csi plugin uses the Velero CSI beta snapshot APIs. You do not need to configure a snapshot location.
- openshift: Specifies that the openshift plugin is mandatory.
- resourceTimeout: Specifies how many minutes to wait for several Velero resources, such as Velero CRD availability, volumeSnapshot deletion, and backup repository availability, before a timeout occurs. The default is 10m.
- nodeAgent: Specifies the administrative agent that routes the administrative requests to servers.
- enable: Set this value to true if you want to enable nodeAgent and perform File System Backup.
- uploaderType: Specifies the uploader type. Enter kopia as your uploader to use the Built-in DataMover. The nodeAgent deploys a daemon set, which means that the nodeAgent pods run on each working node. You can configure File System Backup by adding spec.defaultVolumesToFsBackup: true to the Backup CR.
- nodeSelector: Specifies the nodes on which Kopia is available. By default, Kopia runs on all nodes.
- provider: Specifies the backup provider.
- name: Specifies the correct default name for the Secret, for example, cloud-credentials-gcp, if you use a default plugin for the backup provider. If you specify a custom name, the custom name is used for the backup location. If you do not specify a Secret name, the default name is used.
- bucket: Specifies a bucket as the backup storage location. If the bucket is not a dedicated bucket for Velero backups, you must specify a prefix.
- prefix: Specifies a prefix for Velero backups, for example, velero, if the bucket is used for multiple purposes.
- Click Create.
Verification
Verify the installation by viewing the OpenShift API for Data Protection (OADP) resources by running the following command:

  $ oc get all -n openshift-adp

Example output:

  NAME                                                     READY   STATUS    RESTARTS   AGE
  pod/oadp-operator-controller-manager-67d9494d47-6l8z8    2/2     Running   0          2m8s
  pod/node-agent-9cq4q                                     1/1     Running   0          94s
  pod/node-agent-m4lts                                     1/1     Running   0          94s
  pod/node-agent-pv4kr                                     1/1     Running   0          95s
  pod/velero-588db7f655-n842v                              1/1     Running   0          95s

  NAME                                                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
  service/oadp-operator-controller-manager-metrics-service  ClusterIP   172.30.70.140   <none>        8443/TCP   2m8s
  service/openshift-adp-velero-metrics-svc                  ClusterIP   172.30.10.0     <none>        8085/TCP   8h

  NAME                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
  daemonset.apps/node-agent   3         3         3       3            3           <none>          96s

  NAME                                               READY   UP-TO-DATE   AVAILABLE   AGE
  deployment.apps/oadp-operator-controller-manager   1/1     1            1           2m9s
  deployment.apps/velero                             1/1     1            1           96s

  NAME                                                           DESIRED   CURRENT   READY   AGE
  replicaset.apps/oadp-operator-controller-manager-67d9494d47    1         1         1       2m9s
  replicaset.apps/velero-588db7f655                              1         1         1       96s

Verify that the DataProtectionApplication (DPA) is reconciled by running the following command:

  $ oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'

Example output:

  {"conditions":[{"lastTransitionTime":"2023-10-27T01:23:57Z","message":"Reconcile complete","reason":"Complete","status":"True","type":"Reconciled"}]}

- Verify that the type is set to Reconciled.

Verify the backup storage location and confirm that the PHASE is Available by running the following command:

  $ oc get backupstoragelocations.velero.io -n openshift-adp

Example output:

  NAME           PHASE       LAST VALIDATED   AGE     DEFAULT
  dpa-sample-1   Available   1s               3d16h   true

- Verify that the PHASE is Available.
Legal Notice
Copyright © Red Hat
OpenShift documentation is licensed under the Apache License 2.0 (https://www.apache.org/licenses/LICENSE-2.0).
Modified versions must remove all Red Hat trademarks.
Portions adapted from https://github.com/kubernetes-incubator/service-catalog/ with modifications by Red Hat.
Red Hat, Red Hat Enterprise Linux, the Red Hat logo, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of the OpenJS Foundation.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation’s permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.