Virtualization
OpenShift Virtualization installation and usage.
Chapter 1. About
1.1. About OpenShift Virtualization
Learn about OpenShift Virtualization’s capabilities and support scope.
1.1.1. What you can do with OpenShift Virtualization
OpenShift Virtualization provides scalable, enterprise-grade virtualization functionality in Red Hat OpenShift. You can use it to manage virtual machines (VMs) exclusively or alongside container workloads.
OpenShift Virtualization adds new objects into your Red Hat OpenShift Service on AWS cluster by using Kubernetes custom resources to enable virtualization tasks. These tasks include:
- Creating and managing Linux and Windows VMs
- Running pod and VM workloads alongside each other in a cluster
- Connecting to VMs through a variety of consoles and CLI tools
- Importing and cloning existing VMs
- Managing network interface controllers and storage disks attached to VMs
- Live migrating VMs between nodes
You can manage your cluster and virtualization resources by using the Virtualization perspective of the Red Hat OpenShift Service on AWS web console, and by using the OpenShift CLI (`oc`).
You can use OpenShift Virtualization with OVN-Kubernetes.
You can check your OpenShift Virtualization cluster for compliance issues by installing the Compliance Operator and running a scan with the `ocp4-moderate` and `ocp4-moderate-node` profiles. The Compliance Operator uses OpenSCAP, a NIST-certified tool, to scan and enforce security policies.
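For illustration, a `ScanSettingBinding` that runs those two profiles with the Compliance Operator's default scan settings might look like the following sketch; the binding name is a placeholder:

```yaml
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSettingBinding
metadata:
  name: virt-moderate-compliance     # placeholder name
  namespace: openshift-compliance
profiles:
  - apiGroup: compliance.openshift.io/v1alpha1
    kind: Profile
    name: ocp4-moderate
  - apiGroup: compliance.openshift.io/v1alpha1
    kind: Profile
    name: ocp4-moderate-node
settingsRef:
  apiGroup: compliance.openshift.io/v1alpha1
  kind: ScanSetting
  name: default
```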
For information about partnering with Independent Software Vendors (ISVs) and Services partners for specialized storage, networking, backup, and additional functionality, see the Red Hat Ecosystem Catalog.
1.1.2. Comparing OpenShift Virtualization to VMware vSphere
If you are familiar with VMware vSphere, the following table lists OpenShift Virtualization components that you can use to accomplish similar tasks. However, because OpenShift Virtualization is conceptually different from vSphere, and much of its functionality comes from the underlying Red Hat OpenShift Service on AWS, OpenShift Virtualization does not have direct alternatives for all vSphere concepts or components.
vSphere concept | OpenShift Virtualization | Explanation
---|---|---
Datastore | Persistent volume (PV) + persistent volume claim (PVC) | Stores VM disks. A PV represents existing storage and is attached to a VM through a PVC.
Dynamic Resource Scheduling (DRS) | Pod eviction policy + descheduler | Provides active resource balancing. A combination of pod eviction policies and a descheduler allows VMs to be live migrated to more appropriate nodes to keep node resource utilization manageable.
NSX | Multus + OVN-Kubernetes | Provides an overlay network configuration. There is no direct equivalent for NSX in OpenShift Virtualization, but you can use the OVN-Kubernetes network provider or install certified third-party CNI plug-ins.
Storage Policy Based Management (SPBM) | Storage class | Provides policy-based storage selection. Storage classes represent various storage types and describe storage capabilities, such as quality of service, backup policy, reclaim policy, and whether volume expansion is allowed. A PVC can request a specific storage class to satisfy application requirements.
vCenter | OpenShift Metrics and Monitoring | Provides host and VM metrics. You can view metrics and monitor the overall health of the cluster and VMs by using the Red Hat OpenShift Service on AWS web console.
vMotion | Live migration | Moves a running VM to another node without interruption. For live migration to be available, the PVC attached to the VM must have the `ReadWriteMany` (RWX) access mode.
vSwitch | NMState Operator + Multus | Provides a physical network configuration. You can use the NMState Operator to apply state-driven network configuration and manage various network interface types, including Linux bridges and network bonds. With Multus, you can attach multiple network interfaces and connect VMs to external networks.
1.1.3. Supported cluster versions for OpenShift Virtualization
The latest stable release of OpenShift Virtualization 4.19 is 4.19.6.
OpenShift Virtualization 4.19 is supported for use on Red Hat OpenShift Service on AWS 4 clusters. To use the latest z-stream release of OpenShift Virtualization, you must first upgrade to the latest version of Red Hat OpenShift Service on AWS.
OpenShift Virtualization is currently available on x86-64 CPUs. Arm-based nodes are not yet supported.
1.1.4. About volume and access modes for virtual machine disks
If you use the storage API with known storage providers, the volume and access modes are selected automatically. However, if you use a storage class that does not have a storage profile, you must configure the volume and access mode.
For a list of known storage providers for OpenShift Virtualization, see the Red Hat Ecosystem Catalog.
For best results, use the `ReadWriteMany` (RWX) access mode and the `Block` volume mode. This is important for the following reasons:

- `ReadWriteMany` (RWX) access mode is required for live migration.
- The `Block` volume mode performs significantly better than the `Filesystem` volume mode. This is because the `Filesystem` volume mode uses more storage layers, including a file system layer and a disk image file. These layers are not necessary for VM disk storage.

You cannot live migrate virtual machines with the following configurations:

- Storage volume with `ReadWriteOnce` (RWO) access mode
- Passthrough features such as GPUs

Set the `evictionStrategy` field to `None` for these virtual machines. The `None` strategy powers down VMs during node reboots.
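As a minimal sketch, a `VirtualMachine` manifest that sets this field for such a VM might look like the following; the VM name and resource values are placeholders:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: rwo-example-vm          # placeholder: a VM that uses RWO storage or GPU passthrough
spec:
  runStrategy: Always
  template:
    spec:
      evictionStrategy: None    # power the VM down during node drains instead of live migrating it
      domain:
        devices: {}
        resources:
          requests:
            memory: 2Gi
```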
1.2. Security policies

Learn about OpenShift Virtualization security and authorization.

Key points

- OpenShift Virtualization adheres to the `restricted` Kubernetes pod security standards profile, which aims to enforce the current best practices for pod security.
- Virtual machine (VM) workloads run as unprivileged pods.
- Security context constraints (SCCs) are defined for the `kubevirt-controller` service account.
- TLS certificates for OpenShift Virtualization components are renewed and rotated automatically.
1.2.1. About workload security

By default, virtual machine (VM) workloads do not run with root privileges in OpenShift Virtualization, and there are no supported OpenShift Virtualization features that require root privileges.

For each VM, a `virt-launcher` pod runs an instance of `libvirt` in session mode to manage the VM process. In session mode, the `libvirt` daemon runs as a non-root user account and only permits connections from clients that are running under the same user identifier (UID). Therefore, VMs run as unprivileged pods, adhering to the security principle of least privilege.
1.2.2. TLS certificates
TLS certificates for OpenShift Virtualization components are renewed and rotated automatically. You are not required to refresh them manually.
Automatic renewal schedules
TLS certificates are automatically deleted and replaced according to the following schedule:
- KubeVirt certificates are renewed daily.
- Containerized Data Importer controller (CDI) certificates are renewed every 15 days.
- MAC pool certificates are renewed every year.
Automatic TLS certificate rotation does not disrupt any operations. For example, the following operations continue to function without any disruption:
- Migrations
- Image uploads
- VNC and console connections
1.2.3. Authorization
OpenShift Virtualization uses role-based access control (RBAC) to define permissions for human users and service accounts. The permissions defined for service accounts control the actions that OpenShift Virtualization components can perform.
You can also use RBAC roles to manage user access to virtualization features. For example, an administrator can create an RBAC role that provides the permissions required to launch a virtual machine. The administrator can then restrict access by binding the role to specific users.
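As an illustration, a namespaced role of this kind could pair read access to VMs with the `start` subresource; the role, namespace, and user names below are placeholders, not product defaults:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: vm-starter                # placeholder role name
  namespace: vm-project           # placeholder namespace
rules:
  - apiGroups: ["kubevirt.io"]
    resources: ["virtualmachines"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["subresources.kubevirt.io"]
    resources: ["virtualmachines/start"]
    verbs: ["update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: vm-starter-binding
  namespace: vm-project
subjects:
  - kind: User
    name: example-user            # placeholder user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: vm-starter
  apiGroup: rbac.authorization.k8s.io
```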
1.2.3.1. Default cluster roles for OpenShift Virtualization

By using cluster role aggregation, OpenShift Virtualization extends the default Red Hat OpenShift Service on AWS cluster roles to include permissions for accessing virtualization objects. Roles unique to OpenShift Virtualization are not aggregated with Red Hat OpenShift Service on AWS roles.

Default cluster role | OpenShift Virtualization cluster role | OpenShift Virtualization cluster role description
---|---|---
`view` | `kubevirt.io:view` | A user that can view all OpenShift Virtualization resources in the cluster but cannot create, delete, modify, or access them. For example, the user can see that a virtual machine (VM) is running but cannot shut it down or gain access to its console.
`edit` | `kubevirt.io:edit` | A user that can modify all OpenShift Virtualization resources in the cluster. For example, the user can create VMs, access VM consoles, and delete VMs.
`admin` | `kubevirt.io:admin` | A user that has full permissions to all OpenShift Virtualization resources, including the ability to delete collections of resources. The user can also view and modify the OpenShift Virtualization runtime configuration, which is located in the `HyperConverged` custom resource.
 | `kubevirt.io:migrate` | A user that can create, delete, and update VM live migration requests, which are represented by namespaced `VirtualMachineInstanceMigration` objects.
1.2.3.2. RBAC roles for storage features in OpenShift Virtualization

The following permissions are granted to the Containerized Data Importer (CDI), including the `cdi-operator` and `cdi-controller` service accounts.
1.2.3.2.1. Cluster-wide RBAC roles

CDI cluster role | Resources | Verbs
---|---|---

API group | Resources | Verbs
---|---|---

API group | Resources | Verbs
---|---|---
1.2.3.2.2. Namespaced RBAC roles

API group | Resources | Verbs
---|---|---

API group | Resources | Verbs
---|---|---
1.2.3.3. Additional SCCs and permissions for the kubevirt-controller service account

Security context constraints (SCCs) control permissions for pods. These permissions include actions that a pod, a collection of containers, can perform and what resources it can access. You can use SCCs to define a set of conditions that a pod must run with to be accepted into the system.

The `virt-controller` is a cluster controller that creates the `virt-launcher` pods for virtual machines in the cluster.

By default, `virt-launcher` pods run with the `default` service account in the namespace. If your compliance controls require a unique service account, assign one to the VM. The setting applies to the `VirtualMachineInstance` object and the `virt-launcher` pod.

The `kubevirt-controller` service account is granted additional SCCs and Linux capabilities so that it can create `virt-launcher` pods with the appropriate permissions. These extended permissions allow virtual machines to use OpenShift Virtualization features that are beyond the scope of typical pods.
The `kubevirt-controller` service account is granted the following SCCs:

- `scc.AllowHostDirVolumePlugin = true`: allows virtual machines to use the hostpath volume plugin.
- `scc.AllowPrivilegedContainer = false`: ensures the `virt-launcher` pod is not run as a privileged container.
- `scc.AllowedCapabilities = []corev1.Capability{"SYS_NICE", "NET_BIND_SERVICE"}`:
  - `SYS_NICE` allows setting the CPU affinity.
  - `NET_BIND_SERVICE` allows DHCP and Slirp operations.
Viewing the SCC and RBAC definitions for the kubevirt-controller

You can view the `SecurityContextConstraints` definition for the `kubevirt-controller` by using the `oc` tool:

$ oc get scc kubevirt-controller -o yaml

You can view the RBAC definition for the `kubevirt-controller` cluster role by using the `oc` tool:

$ oc get clusterrole kubevirt-controller -o yaml
1.3. OpenShift Virtualization Architecture

The Operator Lifecycle Manager (OLM) deploys operator pods for each component of OpenShift Virtualization:

- Compute: `virt-operator`
- Storage: `cdi-operator`
- Network: `cluster-network-addons-operator`
- Scaling: `ssp-operator`

OLM also deploys the `hyperconverged-cluster-operator` pod, which is responsible for the deployment, configuration, and life cycle of other components, and several helper pods: `hco-webhook` and `hyperconverged-cluster-cli-download`.
After all operator pods are successfully deployed, you should create the `HyperConverged` custom resource (CR). The configurations set in the `HyperConverged` CR serve as the single source of truth and the entry point for OpenShift Virtualization, and guide the behavior of the CRs.

The `HyperConverged` CR creates corresponding CRs for the operators of all other components within its reconciliation loop. Each operator then creates resources such as daemon sets, config maps, and additional components for the OpenShift Virtualization control plane. For example, when the HyperConverged Operator (HCO) creates the `KubeVirt` CR, the OpenShift Virtualization Operator reconciles it and creates additional resources such as `virt-controller`, `virt-handler`, and `virt-api`.

The OLM deploys the Hostpath Provisioner (HPP) Operator, but it is not functional until you create a `hostpath-provisioner` CR.
1.3.1. About the HyperConverged Operator (HCO)

The HCO, `hco-operator`, provides a single entry point for deploying and managing OpenShift Virtualization and several helper operators with opinionated defaults. It also creates custom resources (CRs) for those operators.
Component | Description
---|---
`hco-webhook` | Validates the `HyperConverged` custom resource.
`hyperconverged-cluster-cli-download` | Provides the `virtctl` client binary for download.
`KubeVirt` CR | Contains all operators, CRs, and objects needed by OpenShift Virtualization.
SSP CR | A Scheduling, Scale, and Performance (SSP) CR. This is automatically created by the HCO.
CDI CR | A Containerized Data Importer (CDI) CR. This is automatically created by the HCO.
Network addons CR | A CR that instructs and is managed by the `cluster-network-addons-operator`.
1.3.2. About the Containerized Data Importer (CDI) Operator

The CDI Operator, `cdi-operator`, manages CDI and its related resources. CDI imports a virtual machine (VM) image into a persistent volume claim (PVC) by using a data volume.
Component | Description
---|---
`cdi-apiserver` | Manages the authorization to upload VM disks into PVCs by issuing secure upload tokens.
`cdi-uploadproxy` | Directs external disk upload traffic to the appropriate upload server pod so that it can be written to the correct PVC. Requires a valid upload token.
`cdi-importer` | Helper pod that imports a virtual machine image into a PVC when creating a data volume.
1.3.3. About the Cluster Network Addons Operator

The Cluster Network Addons Operator, `cluster-network-addons-operator`, deploys networking components on a cluster and manages the related resources for extended network functionality.
Component | Description
---|---
`kubemacpool-cert-manager` | Manages TLS certificates of Kubemacpool’s webhooks.
`kubemacpool-mac-controller-manager` | Provides a MAC address pooling service for virtual machine (VM) network interface cards (NICs).
`bridge-marker` | Marks network bridges available on nodes as node resources.
 | Installs Container Network Interface (CNI) plugins on cluster nodes, enabling the attachment of VMs to Linux bridges through network attachment definitions.
1.3.4. About the Hostpath Provisioner (HPP) Operator

The HPP Operator, `hostpath-provisioner-operator`, deploys and manages the multi-node HPP and related resources.
Component | Description |
---|---|
| Provides a worker for each node where the HPP is designated to run. The pods mount the specified backing storage on the node. |
| Implements the Container Storage Interface (CSI) driver interface of the HPP. |
| Implements the legacy driver interface of the HPP. |
1.3.5. About the Scheduling, Scale, and Performance (SSP) Operator

The SSP Operator, `ssp-operator`, deploys the common templates, the related default boot sources, the pipeline tasks, and the template validator.
1.3.6. About the OpenShift Virtualization Operator

The OpenShift Virtualization Operator, `virt-operator`, deploys, upgrades, and manages OpenShift Virtualization without disrupting current virtual machine (VM) workloads. In addition, the OpenShift Virtualization Operator deploys the common instance types and common preferences.
Component | Description
---|---
`virt-api` | HTTP API server that serves as the entry point for all virtualization-related flows.
`virt-controller` | Observes the creation of a new VM instance object and creates a corresponding pod. When the pod is scheduled on a node, `virt-controller` updates the VM with the node name.
`virt-handler` | Monitors any changes to a VM and instructs `virt-launcher` to perform the required operations.
`virt-launcher` | Contains the VM that was created by the user as implemented by `libvirt` and `qemu`.
Chapter 2. Getting started
2.1. Getting started with OpenShift Virtualization
You can explore the features and functionalities of OpenShift Virtualization by installing and configuring a basic environment.
Cluster configuration procedures require `cluster-admin` privileges.
2.1.1. Planning and installing OpenShift Virtualization

Plan and install OpenShift Virtualization on a Red Hat OpenShift Service on AWS cluster:
Planning and installation resources
2.1.2. Creating and managing virtual machines

Create a virtual machine (VM):

- Create a VM from a Red Hat image. You can create a VM by using a Red Hat template.
- You can create a VM by importing a custom image from a container registry or a web page, by uploading an image from your local machine, or by cloning a persistent volume claim (PVC).
Connect a VM to a secondary network:
Open Virtual Network (OVN)-Kubernetes secondary network.
Note: VMs are connected to the pod network by default.
Connect to a VM:
- Connect to the serial console or VNC console of a VM.
- Connect to a VM by using SSH.
- Connect to the desktop viewer for Windows VMs.
Manage a VM:
2.1.3. Migrating to OpenShift Virtualization
To migrate virtual machines from an external provider such as VMware vSphere, Red Hat OpenStack Platform (RHOSP), Red Hat Virtualization, or another Red Hat OpenShift Service on AWS cluster, use the Migration Toolkit for Virtualization (MTV). You can also migrate Open Virtual Appliance (OVA) files created by VMware vSphere.
Migration Toolkit for Virtualization is not part of OpenShift Virtualization and requires separate installation. For this reason, all links in this procedure lead outside of OpenShift Virtualization documentation.
Prerequisites
- The Migration Toolkit for Virtualization Operator is installed.
2.1.4. Next steps
2.2. Using the CLI tools

You can manage OpenShift Virtualization resources by using the `virtctl` command-line tool.

You can access and modify virtual machine (VM) disk images by using the `libguestfs` command-line tool. You deploy `libguestfs` by using the `virtctl guestfs` command.
2.2.1. Installing virtctl

To install `virtctl` on Red Hat Enterprise Linux (RHEL) 9, Linux, Windows, and macOS operating systems, you download and install the `virtctl` binary file.

To install `virtctl` on RHEL 8, you enable the OpenShift Virtualization repository and then install the `kubevirt-virtctl` package.
2.2.1.1. Installing the virtctl binary on RHEL 9, Linux, Windows, or macOS

You can download the `virtctl` binary for your operating system from the Red Hat OpenShift Service on AWS web console and then install it.
Procedure
- Navigate to the Virtualization → Overview page in the web console.
- Click the Download virtctl link to download the `virtctl` binary for your operating system.
- Install `virtctl`:

  For RHEL 9 and other Linux operating systems:

  - Decompress the archive file:

    $ tar -xvf <virtctl-version-distribution.arch>.tar.gz

  - Run the following command to make the `virtctl` binary executable:

    $ chmod +x <path/virtctl-file-name>

  - Move the `virtctl` binary to a directory in your `PATH` environment variable. You can check your path by running the following command:

    $ echo $PATH

  - Set the `KUBECONFIG` environment variable:

    $ export KUBECONFIG=/home/<user>/clusters/current/auth/kubeconfig

  For Windows:

  - Decompress the archive file.
  - Navigate the extracted folder hierarchy and double-click the `virtctl` executable file to install the client.
  - Move the `virtctl` binary to a directory in your `PATH` environment variable. You can check your path by running the following command:

    C:\> path

  For macOS:

  - Decompress the archive file.
  - Move the `virtctl` binary to a directory in your `PATH` environment variable. You can check your path by running the following command:

    $ echo $PATH
2.2.1.2. Installing the virtctl RPM on RHEL 8

You can install the `virtctl` RPM package on Red Hat Enterprise Linux (RHEL) 8 by enabling the OpenShift Virtualization repository and installing the `kubevirt-virtctl` package.
Prerequisites
- Each host in your cluster must be registered with Red Hat Subscription Manager (RHSM) and have an active Red Hat OpenShift Service on AWS subscription.
Procedure
- Enable the OpenShift Virtualization repository by using the `subscription-manager` CLI tool to run the following command:

  # subscription-manager repos --enable cnv-4.19-for-rhel-8-x86_64-rpms

- Install the `kubevirt-virtctl` package by running the following command:

  # yum install kubevirt-virtctl
2.2.2. virtctl commands

The `virtctl` client is a command-line utility for managing OpenShift Virtualization resources.

The virtual machine (VM) commands also apply to virtual machine instances (VMIs) unless otherwise specified.
2.2.2.1. virtctl information commands

You use `virtctl` information commands to view information about the `virtctl` client.

Command | Description
---|---
`virtctl version` | View the `virtctl` client and server versions.
`virtctl help` | View a list of `virtctl` commands.
`virtctl <command> -h` or `virtctl <command> --help` | View a list of options for a specific command.
`virtctl options` | View a list of global command options for any `virtctl` command.
2.2.2.2. VM information commands

You can use `virtctl` to view information about virtual machines (VMs) and virtual machine instances (VMIs).

Command | Description
---|---
`virtctl fslist <vm_name>` | View the file systems available on a guest machine.
`virtctl guestosinfo <vm_name>` | View information about the operating systems on a guest machine.
`virtctl userlist <vm_name>` | View the logged-in users on a guest machine.
2.2.2.3. VM manifest creation commands

You can use `virtctl create` commands to create manifests for virtual machines, instance types, and preferences.

Command | Description
---|---
 | Create a `VirtualMachine` manifest.
 | Create a VM manifest, specifying a name for the VM.
 | Create a VM manifest with a cloud-init configuration to create the selected user and either add an SSH public key from the supplied string, or a password from a file.
 | Create a VM manifest with a user and password combination injected from the selected secret.
 | Create a VM manifest with an SSH public key injected from the selected secret.
 | Create a VM manifest, specifying a config map to use as the sysprep volume. The config map must contain a valid answer file named `autounattend.xml`.
 | Create a VM manifest that uses an existing cluster-wide instance type.
 | Create a VM manifest that uses an existing namespaced instance type.
 | Create a manifest for a cluster-wide instance type.
 | Create a manifest for a namespaced instance type.
 | Create a manifest for a cluster-wide VM preference, specifying a name for the preference.
 | Create a manifest for a namespaced VM preference.
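For example, assuming an instance type and a preference that exist in your cluster, you could generate a VM manifest and apply it in one step; the names below are placeholders:

```terminal
$ virtctl create vm --name example-vm \
    --instancetype u1.medium \
    --preference fedora \
  | oc apply -f -
```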
2.2.2.4. VM management commands

You use `virtctl` virtual machine (VM) management commands to manage and migrate VMs and virtual machine instances (VMIs).

Command | Description
---|---
`virtctl start <vm_name>` | Start a VM.
`virtctl start --paused <vm_name>` | Start a VM in a paused state. This option enables you to interrupt the boot process from the VNC console.
`virtctl stop <vm_name>` | Stop a VM.
 | Force stop a VM. This option might cause data inconsistency or data loss.
`virtctl pause vm <vm_name>` | Pause a VM. The machine state is kept in memory.
`virtctl unpause vm <vm_name>` | Unpause a VM.
`virtctl migrate <vm_name>` | Migrate a VM.
`virtctl migrate-cancel <vm_name>` | Cancel a VM migration.
`virtctl restart <vm_name>` | Restart a VM.
2.2.2.5. VM connection commands

You use `virtctl` connection commands to expose ports and connect to virtual machines (VMs) and virtual machine instances (VMIs).

Command | Description
---|---
`virtctl console <vm_name>` | Connect to the serial console of a VM.
 | Create a service that forwards a designated port of a VM and expose the service on the specified port of the node.
 | Copy a file from your machine to a VM. This command uses the private key of an SSH key pair. The VM must be configured with the public key.
 | Copy a file from a VM to your machine. This command uses the private key of an SSH key pair. The VM must be configured with the public key.
`virtctl ssh <user>@<vm_name>` | Open an SSH connection with a VM. This command uses the private key of an SSH key pair. The VM must be configured with the public key.
`virtctl vnc <vm_name>` | Connect to the VNC console of a VM. You must have `virt-viewer` installed.
`virtctl vnc --proxy-only=true <vm_name>` | Display the port number and connect manually to a VM by using any viewer through the VNC connection.
 | Specify a port number to run the proxy on the specified port, if that port is available. If a port number is not specified, the proxy runs on a random port.
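For instance, with a VM named example-vm in the current namespace and an SSH public key already configured on the guest, connecting over SSH or forwarding a local port might look like the following; the user name and port numbers are placeholders:

```terminal
$ virtctl ssh cloud-user@example-vm
$ virtctl port-forward vm/example-vm 22022:22
```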
2.2.2.6. VM export commands

Use `virtctl vmexport` commands to create, download, or delete a volume exported from a VM, VM snapshot, or persistent volume claim (PVC). Certain manifests also contain a header secret, which grants access to the endpoint to import a disk image in a format that OpenShift Virtualization can use.

Command | Description
---|---
 | Create a `VirtualMachineExport` custom resource (CR) to export a volume from a VM, VM snapshot, or PVC.
 | Delete a `VirtualMachineExport` CR.
 | Download the volume defined in a `VirtualMachineExport` CR.
 | Create a `VirtualMachineExport` CR and download the defined volume.
 | Retrieve the manifest for an existing export. The manifest does not include the header secret.
 | Create a VM export for a VM example, and retrieve the manifest. The manifest does not include the header secret.
 | Create a VM export for a VM snapshot example, and retrieve the manifest. The manifest does not include the header secret.
 | Retrieve the manifest for an existing export. The manifest includes the header secret.
 | Retrieve the manifest for an existing export in json format. The manifest does not include the header secret.
 | Retrieve the manifest for an existing export. The manifest includes the header secret and writes it to the file specified.
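As an illustration, exporting and downloading a volume from a VM named example-vm might look like the following; the export, volume, and output file names are placeholders:

```terminal
$ virtctl vmexport create example-export --vm=example-vm
$ virtctl vmexport download example-export --volume=example-disk --output=disk.img.gz
```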
2.2.2.7. VM memory dump commands

You can use the `virtctl memory-dump` command to output a VM memory dump on a PVC. You can specify an existing PVC or use the `--create-claim` flag to create a new PVC.

Prerequisites

- The PVC volume mode must be `FileSystem`.
- The PVC must be large enough to contain the memory dump. The formula for calculating the PVC size is `(VMMemorySize + 100Mi) * FileSystemOverhead`, where `100Mi` is the memory dump overhead.
- You must enable the hot plug feature gate in the `HyperConverged` custom resource by running the following command:

  $ oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type json -p '[{"op": "add", "path": "/spec/featureGates", "value": "HotplugVolumes"}]'
Downloading the memory dump

You must use the `virtctl vmexport download` command to download the memory dump:

$ virtctl vmexport download <vmexport_name> --vm|pvc=<object_name> \
  --volume=<volume_name> --output=<output_file>
Command | Description
---|---
 | Save the memory dump of a VM on a PVC. The memory dump status is displayed in the `status` section of the VM.
 | Rerun the `virtctl memory-dump` command to save another memory dump. This command overwrites the previous memory dump.
 | Remove a memory dump. You must remove a memory dump manually if you want to change the target PVC. This command removes the association between the VM and the PVC, so that the memory dump is not displayed in the `status` section of the VM.
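For example, saving a memory dump of a VM named example-vm to a newly created claim, and later removing it, might look like the following; the claim name is a placeholder:

```terminal
$ virtctl memory-dump get example-vm --claim-name=example-memory-dump --create-claim
$ virtctl memory-dump remove example-vm
```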
2.2.2.8. Hot plug and hot unplug commands

You use `virtctl` to add or remove resources from running virtual machines (VMs) and virtual machine instances (VMIs).

Command | Description
---|---
`virtctl addvolume <vm_name> --volume-name=<datavolume_or_PVC>` | Hot plug a data volume or persistent volume claim (PVC).
`virtctl removevolume <vm_name> --volume-name=<virtual_disk>` | Hot unplug a virtual disk.
 | Hot plug a Linux bridge network interface.
 | Hot unplug a Linux bridge network interface.
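For example, hot plugging an existing data volume into a running VM and unplugging it again might look like the following; the VM and data volume names are placeholders:

```terminal
$ virtctl addvolume example-vm --volume-name=example-dv
$ virtctl removevolume example-vm --volume-name=example-dv
```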
2.2.2.9. Image upload commands

You use the `virtctl image-upload` commands to upload a VM image to a data volume.

Command | Description
---|---
 | Upload a VM image to a data volume that already exists.
 | Upload a VM image to a new data volume of a specified requested size.
 | Upload a VM image to a new data volume and create an associated …
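For example, uploading a local qcow2 image to a new 10Gi data volume might look like the following; the data volume name and image path are placeholders:

```terminal
$ virtctl image-upload dv example-dv --size=10Gi --image-path=/var/tmp/rhel9.qcow2
```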
2.2.3. Deploying libguestfs by using virtctl

You can use the `virtctl guestfs` command to deploy an interactive container with `libguestfs-tools` and a persistent volume claim (PVC) attached to it.

Procedure

- To deploy a container with `libguestfs-tools`, mount the PVC, and attach a shell to it, run the following command:

  $ virtctl guestfs -n <namespace> <pvc_name>

  The PVC name is a required argument. If you do not include it, an error message appears.
2.2.3.1. Libguestfs and virtctl guestfs commands

`Libguestfs` tools help you access and modify virtual machine (VM) disk images. You can use `libguestfs` tools to view and edit files in a guest, clone and build virtual machines, and format and resize disks.

You can also use the `virtctl guestfs` command and its sub-commands to modify, inspect, and debug VM disks on a PVC. To see a complete list of possible sub-commands, enter `virt-` on the command line and press the Tab key. For example:
Command | Description
---|---
 | Edit a file interactively in your terminal.
 | Inject an ssh key into the guest and create a login.
 | See how much disk space is used by a VM.
 | See the full list of all RPMs installed on a guest by creating an output file containing the full list.
 | Display the output file list of all RPMs created by using the previous command.
 | Seal a virtual machine disk image to be used as a template.
By default, `virtctl guestfs` creates a session with everything needed to manage a VM disk. However, the command also supports several flag options if you want to customize the behavior:
Flag Option | Description
---|---
 | Provides help for `virtctl guestfs`.
 | To use a PVC from a specific namespace. If you do not use the … If you do not include a …
 | Lists the … You can configure the container to use a custom image by using the …
 | Indicates that … By default, … If a cluster does not have any … If not set, the …
 | Shows the pull policy for the … You can also overwrite the image’s pull policy by setting the …
The command also checks if a PVC is in use by another pod, in which case an error message appears. However, once the `libguestfs-tools` process starts, the setup cannot avoid a new pod using the same PVC. You must verify that there are no active `virtctl guestfs` pods before starting the VM that accesses the same PVC.

The `virtctl guestfs` command accepts only a single PVC attached to the interactive pod.
2.2.4. Using Ansible
To use the Ansible collection for OpenShift Virtualization, see Red Hat Ansible Automation Hub (Red Hat Hybrid Cloud Console).
Chapter 3. Installing
3.1. Preparing your cluster for OpenShift Virtualization
Before you install OpenShift Virtualization, review this section to ensure that your cluster meets the requirements.
3.1.1. OpenShift Virtualization on Red Hat OpenShift Service on AWS
You can run OpenShift Virtualization on a Red Hat OpenShift Service on AWS cluster.
Before you set up your cluster, review the following summary of supported features and limitations:
- Installing
You can install the cluster by using installer-provisioned infrastructure, ensuring that you specify bare-metal instance types for the worker nodes. For example, you can use the `c5n.metal` type value for a machine based on x86_64 architecture. For more information, see the Red Hat OpenShift Service on AWS documentation about installing on AWS.
- Accessing virtual machines (VMs)
- There is no change to how you access VMs by using the `virtctl` CLI tool or the Red Hat OpenShift Service on AWS web console.
- You can expose VMs by using a `NodePort` or `LoadBalancer` service.
  - The load balancer approach is preferable because Red Hat OpenShift Service on AWS automatically creates the load balancer in AWS and manages its lifecycle. A security group is also created for the load balancer, and you can use annotations to attach existing security groups. When you remove the service, Red Hat OpenShift Service on AWS removes the load balancer and its associated resources.
- Networking
- If your application requires a flat layer 2 network that does not need egress traffic, consider using OVN-Kubernetes secondary overlay networks with a `Layer2` topology.
- Storage
You can use any storage solution that is certified by the storage vendor to work with the underlying platform.
Important: AWS bare metal, Red Hat OpenShift Service on AWS, and Red Hat OpenShift Service on AWS classic architecture clusters might have different supported storage solutions. Ensure that you confirm support with your storage vendor.
Using Amazon Elastic File System (EFS) or Amazon Elastic Block Store (EBS) with OpenShift Virtualization might cause performance and functionality limitations as shown in the following table:

Table 3.1. EFS and EBS performance and functionality limitations

Feature | EBS gp2 | EBS gp3 | EBS io2 | EFS volume | Shared storage solutions
---|---|---|---|---|---
VM live migration | Not available | Not available | Available | Available | Available
Fast VM creation by using cloning | Available | Available | Available | Not available | Available
VM backup and restore by using snapshots | Available | Available | Available | Not available | Available

Consider using CSI storage, which supports ReadWriteMany (RWX), cloning, and snapshots, to enable live migration, fast VM creation, and VM snapshot capabilities.
3.1.2. Hardware and operating system requirements
Review the following hardware and operating system requirements for OpenShift Virtualization.
3.1.2.1. CPU requirements

- Supported by Red Hat Enterprise Linux (RHEL) 9. See the Red Hat Ecosystem Catalog for supported CPUs.

  Note: If your worker nodes have different CPUs, live migration failures might occur because different CPUs have different capabilities. You can mitigate this issue by ensuring that your worker nodes have CPUs with the appropriate capacity and by configuring node affinity rules for your virtual machines. See Configuring a required node affinity rule for details.

- Support for AMD and Intel 64-bit architectures (x86-64-v2).
- Support for Intel 64 or AMD64 CPU extensions.
- Intel VT or AMD-V hardware virtualization extensions enabled.
- NX (no execute) flag enabled.
3.1.2.2. Operating system requirements
- Red Hat Enterprise Linux CoreOS (RHCOS) installed on worker nodes.
3.1.2.3. Storage requirements

- Supported by Red Hat OpenShift Service on AWS.
- If the storage provisioner supports snapshots, you must associate a `VolumeSnapshotClass` object with the default storage class.
3.1.2.3.1. About volume and access modes for virtual machine disks

If you use the storage API with known storage providers, the volume and access modes are selected automatically. However, if you use a storage class that does not have a storage profile, you must configure the volume and access mode.

For a list of known storage providers for OpenShift Virtualization, see the Red Hat Ecosystem Catalog.

For best results, use the `ReadWriteMany` (RWX) access mode and the `Block` volume mode. This is important for the following reasons:

- `ReadWriteMany` (RWX) access mode is required for live migration.
- The `Block` volume mode performs significantly better than the `Filesystem` volume mode. This is because the `Filesystem` volume mode uses more storage layers, including a file system layer and a disk image file. These layers are not necessary for VM disk storage.

You cannot live migrate virtual machines with the following configurations:

- Storage volume with `ReadWriteOnce` (RWO) access mode
- Passthrough features such as GPUs

Set the `evictionStrategy` field to `None` for these virtual machines. The `None` strategy powers down VMs during node reboots.
3.1.3. Live migration requirements

- Shared storage with `ReadWriteMany` (RWX) access mode.
- Sufficient RAM and network bandwidth.

  Note: You must ensure that there is enough memory request capacity in the cluster to support node drains that result in live migrations. You can determine the approximate required spare memory by using the following calculation:

  Product of (Maximum number of nodes that can drain in parallel) and (Highest total VM memory request allocations across nodes)

  The default number of migrations that can run in parallel in the cluster is 5. You can tune the migration limits in the `HyperConverged` custom resource, as shown in the sketch after this list.
- If the virtual machine uses a host model CPU, the nodes must support the virtual machine’s host model CPU.
- A dedicated Multus network for live migration is highly recommended. A dedicated network minimizes the effects of network saturation on tenant workloads during migration.
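A sketch of the relevant `HyperConverged` fields, assuming the `liveMigrationConfig` block available in current releases (verify the field names and defaults against your installed version; the values below are illustrative):

```yaml
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  liveMigrationConfig:
    parallelMigrationsPerCluster: 5        # cluster-wide limit on concurrent migrations
    parallelOutboundMigrationsPerNode: 2   # per-node limit on outbound migrations
    completionTimeoutPerGiB: 800           # seconds allowed per GiB of memory before the migration is cancelled
    progressTimeout: 150                   # seconds without progress before the migration is cancelled
```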
3.1.4. Physical resource overhead requirements
OpenShift Virtualization is an add-on to Red Hat OpenShift Service on AWS and imposes additional overhead that you must account for when planning a cluster. Each cluster machine must accommodate the following overhead requirements in addition to the Red Hat OpenShift Service on AWS requirements. Oversubscribing the physical resources in a cluster can affect performance.
The numbers noted in this documentation are based on Red Hat’s test methodology and setup. These numbers can vary based on your own individual setup and environments.
Memory overhead
Calculate the memory overhead values for OpenShift Virtualization by using the equations below.
Cluster memory overhead
Memory overhead per infrastructure node ≈ 150 MiB

Memory overhead per worker node ≈ 360 MiB
Additionally, OpenShift Virtualization environment resources require a total of 2179 MiB of RAM that is spread across all infrastructure nodes.
Virtual machine memory overhead
Memory overhead per virtual machine ≈ (0.002 × requested memory)
                                      + 218 MiB [1]
                                      + 8 MiB × (number of vCPUs) [2]
                                      + 16 MiB × (number of graphics devices) [3]
                                      + (additional memory overhead) [4]

[1] Required for the processes that run in the `virt-launcher` pod.
[2] Number of virtual CPUs requested by the virtual machine.
[3] Number of virtual graphics cards requested by the virtual machine.
[4] Additional memory overhead:
  - If your environment includes a Single Root I/O Virtualization (SR-IOV) network device or a Graphics Processing Unit (GPU), allocate 1 GiB additional memory overhead for each device.
  - If Secure Encrypted Virtualization (SEV) is enabled, add 256 MiB.
  - If Trusted Platform Module (TPM) is enabled, add 53 MiB.
CPU overhead
Calculate the cluster processor overhead requirements for OpenShift Virtualization by using the equation below. The CPU overhead per virtual machine depends on your individual setup.
Cluster CPU overhead
CPU overhead for infrastructure nodes ≈ 4 cores

OpenShift Virtualization increases the overall utilization of cluster level services such as logging, routing, and monitoring. To account for this workload, ensure that nodes that host infrastructure components have capacity allocated for 4 additional cores (4000 millicores) distributed across those nodes.

CPU overhead for worker nodes ≈ 2 cores + CPU overhead per virtual machine
Each worker node that hosts virtual machines must have capacity for 2 additional cores (2000 millicores) for OpenShift Virtualization management workloads in addition to the CPUs required for virtual machine workloads.
Virtual machine CPU overhead
If dedicated CPUs are requested, there is a 1:1 impact on the cluster CPU overhead requirement. Otherwise, there are no specific rules about how many CPUs a virtual machine requires.
Storage overhead
Use the guidelines below to estimate storage overhead requirements for your OpenShift Virtualization environment.
Cluster storage overhead
Aggregated storage overhead per node ≈ 10 GiB
10 GiB is the estimated on-disk storage impact for each node in the cluster when you install OpenShift Virtualization.
Virtual machine storage overhead
Storage overhead per virtual machine depends on specific requests for resource allocation within the virtual machine. The request could be for ephemeral storage on the node or storage resources hosted elsewhere in the cluster. OpenShift Virtualization does not currently allocate any additional ephemeral storage for the running container itself.
Example
As a cluster administrator, if you plan to host 10 virtual machines in the cluster, each with 1 GiB of RAM and 2 vCPUs, the memory impact across the cluster is 11.68 GiB. The estimated on-disk storage impact for each node in the cluster is 10 GiB and the CPU impact for worker nodes that host virtual machine workloads is a minimum of 2 cores.
3.2. Installing OpenShift Virtualization
Install OpenShift Virtualization to add virtualization functionality to your Red Hat OpenShift Service on AWS cluster.
3.2.1. Installing the OpenShift Virtualization Operator
Install the OpenShift Virtualization Operator by using the Red Hat OpenShift Service on AWS web console or the command line.
3.2.1.1. Installing the OpenShift Virtualization Operator by using the web console
You can deploy the OpenShift Virtualization Operator by using the Red Hat OpenShift Service on AWS web console.
Prerequisites
- Install Red Hat OpenShift Service on AWS 4 on your cluster.
- Log in to the Red Hat OpenShift Service on AWS web console as a user with `cluster-admin` permissions.
- Create a machine pool based on a bare metal compute node instance type. For more information, see "Creating a machine pool" in the Additional resources of this section.
Procedure
- From the Administrator perspective, click Operators → OperatorHub.
- In the Filter by keyword field, type Virtualization.
- Select the OpenShift Virtualization Operator tile with the Red Hat source label.
- Read the information about the Operator and click Install.
On the Install Operator page:
- Select stable from the list of available Update Channel options. This ensures that you install the version of OpenShift Virtualization that is compatible with your Red Hat OpenShift Service on AWS version.
- For Installed Namespace, ensure that the Operator recommended namespace option is selected. This installs the Operator in the mandatory `openshift-cnv` namespace, which is automatically created if it does not exist.

  Warning: Attempting to install the OpenShift Virtualization Operator in a namespace other than `openshift-cnv` causes the installation to fail.

- For Approval Strategy, it is highly recommended that you select Automatic, which is the default value, so that OpenShift Virtualization automatically updates when a new version is available in the stable update channel.

  While it is possible to select the Manual approval strategy, this is inadvisable because of the high risk that it presents to the supportability and functionality of your cluster. Only select Manual if you fully understand these risks and cannot use Automatic.

  Warning: Because OpenShift Virtualization is only supported when used with the corresponding Red Hat OpenShift Service on AWS version, missing OpenShift Virtualization updates can cause your cluster to become unsupported.
- Click Install to make the Operator available to the `openshift-cnv` namespace.
- When the Operator installs successfully, click Create HyperConverged.
- Optional: Configure Infra and Workloads node placement options for OpenShift Virtualization components.
- Click Create to launch OpenShift Virtualization.
Verification
- Navigate to the Workloads → Pods page and monitor the OpenShift Virtualization pods until they are all Running. After all the pods display the Running state, you can use OpenShift Virtualization.
3.2.1.2. Installing the OpenShift Virtualization Operator by using the command line
Subscribe to the OpenShift Virtualization catalog and install the OpenShift Virtualization Operator by applying manifests to your cluster.
3.2.1.2.1. Subscribing to the OpenShift Virtualization catalog by using the CLI

Before you install OpenShift Virtualization, you must subscribe to the OpenShift Virtualization catalog. Subscribing gives the `openshift-cnv` namespace access to the OpenShift Virtualization Operators.

To subscribe, configure `Namespace`, `OperatorGroup`, and `Subscription` objects by applying a single manifest to your cluster.
Prerequisites
- Install Red Hat OpenShift Service on AWS 4 on your cluster.
- Install the OpenShift CLI (`oc`).
- Log in as a user with `cluster-admin` privileges.
Procedure
- Create the required `Namespace`, `OperatorGroup`, and `Subscription` objects for OpenShift Virtualization by running the following command:

  $ oc apply -f <file_name>.yaml
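The contents of that file are not reproduced here; a minimal sketch of the three objects might look like the following (verify the channel and catalog source names against your environment before applying it):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-cnv
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: kubevirt-hyperconverged-group
  namespace: openshift-cnv
spec:
  targetNamespaces:
    - openshift-cnv
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  source: redhat-operators            # catalog source; confirm for your cluster
  sourceNamespace: openshift-marketplace
  name: kubevirt-hyperconverged
  channel: stable
```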
You can configure certificate rotation parameters in the YAML file.
3.2.1.2.2. Deploying the OpenShift Virtualization Operator by using the CLI

You can deploy the OpenShift Virtualization Operator by using the `oc` CLI.
Prerequisites
- Install the OpenShift CLI (`oc`).
- Subscribe to the OpenShift Virtualization catalog in the `openshift-cnv` namespace.
- Log in as a user with `cluster-admin` privileges.
- Create a machine pool based on a bare metal compute node instance type.
Procedure
- Create a YAML file that contains the `HyperConverged` custom resource manifest.
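  A minimal sketch of this manifest, assuming the default configuration with an empty `spec`, looks like the following:

  ```yaml
  apiVersion: hco.kubevirt.io/v1beta1
  kind: HyperConverged
  metadata:
    name: kubevirt-hyperconverged
    namespace: openshift-cnv
  spec: {}
  ```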
- Deploy the OpenShift Virtualization Operator by running the following command:

  $ oc apply -f <file_name>.yaml
Verification
- Ensure that OpenShift Virtualization deployed successfully by watching the `PHASE` of the cluster service version (CSV) in the `openshift-cnv` namespace. Run the following command:

  $ watch oc get csv -n openshift-cnv

  The following output displays if deployment was successful:

  NAME                                       DISPLAY                    VERSION   REPLACES   PHASE
  kubevirt-hyperconverged-operator.v4.19.6   OpenShift Virtualization   4.19.6               Succeeded
3.2.2. Next steps
- The hostpath provisioner is a local storage provisioner designed for OpenShift Virtualization. If you want to configure local storage for virtual machines, you must enable the hostpath provisioner first.
3.3. Uninstalling OpenShift Virtualization
You uninstall OpenShift Virtualization by using the web console or the command-line interface (CLI) to delete the OpenShift Virtualization workloads, the Operator, and its resources.
3.3.1. Uninstalling OpenShift Virtualization by using the web console
You uninstall OpenShift Virtualization by using the web console to perform the following tasks:
You must first delete all virtual machines and virtual machine instances.
You cannot uninstall OpenShift Virtualization while its workloads remain on the cluster.
3.3.1.1. Deleting the HyperConverged custom resource

To uninstall OpenShift Virtualization, you first delete the `HyperConverged` custom resource (CR).
Prerequisites
- You have access to a Red Hat OpenShift Service on AWS cluster using an account with `cluster-admin` permissions.
Procedure
- Navigate to the Operators → Installed Operators page.
- Select the OpenShift Virtualization Operator.
- Click the OpenShift Virtualization Deployment tab.
- Click the Options menu beside `kubevirt-hyperconverged` and select Delete HyperConverged.
- Click Delete in the confirmation window.
3.3.1.2. Deleting Operators from a cluster using the web console
Cluster administrators can delete installed Operators from a selected namespace by using the web console.
Prerequisites
- You have access to a Red Hat OpenShift Service on AWS cluster web console using an account with `dedicated-admin` permissions.
Procedure
- Navigate to the Operators → Installed Operators page.
- Scroll or enter a keyword into the Filter by name field to find the Operator that you want to remove. Then, click on it.
- On the right side of the Operator Details page, select Uninstall Operator from the Actions list. An Uninstall Operator? dialog box is displayed.
- Select Uninstall to remove the Operator, Operator deployments, and pods. Following this action, the Operator stops running and no longer receives updates.

  Note: This action does not remove resources managed by the Operator, including custom resource definitions (CRDs) and custom resources (CRs). Dashboards and navigation items enabled by the web console and off-cluster resources that continue to run might need manual clean up. To remove these after uninstalling the Operator, you might need to manually delete the Operator CRDs.
3.3.1.3. Deleting a namespace using the web console
You can delete a namespace by using the Red Hat OpenShift Service on AWS web console.
Prerequisites
- You have access to the Red Hat OpenShift Service on AWS cluster using an account with `cluster-admin` permissions.
Procedure
- Navigate to Administration → Namespaces.
- Locate the namespace that you want to delete in the list of namespaces.
- On the far right side of the namespace listing, select Delete Namespace from the Options menu.
- When the Delete Namespace pane opens, enter the name of the namespace that you want to delete in the field.
- Click Delete.
3.3.1.4. Deleting OpenShift Virtualization custom resource definitions
You can delete the OpenShift Virtualization custom resource definitions (CRDs) by using the web console.
Prerequisites
- You have access to the Red Hat OpenShift Service on AWS cluster using an account with `cluster-admin` permissions.
Procedure
- Navigate to Administration → CustomResourceDefinitions.
- Select the Label filter and enter `operators.coreos.com/kubevirt-hyperconverged.openshift-cnv` in the Search field to display the OpenShift Virtualization CRDs.
- Click the Options menu beside each CRD and select Delete CustomResourceDefinition.
3.3.2. Uninstalling OpenShift Virtualization by using the CLI

You can uninstall OpenShift Virtualization by using the OpenShift CLI (`oc`).
Prerequisites
- You have access to the Red Hat OpenShift Service on AWS cluster using an account with `cluster-admin` permissions.
- You have installed the OpenShift CLI (`oc`).
- You have deleted all virtual machines and virtual machine instances. You cannot uninstall OpenShift Virtualization while its workloads remain on the cluster.
Procedure
- Delete the `HyperConverged` custom resource:

  $ oc delete HyperConverged kubevirt-hyperconverged -n openshift-cnv

- Delete the OpenShift Virtualization Operator subscription:

  $ oc delete subscription kubevirt-hyperconverged -n openshift-cnv

- Delete the OpenShift Virtualization `ClusterServiceVersion` resource:

  $ oc delete csv -n openshift-cnv -l operators.coreos.com/kubevirt-hyperconverged.openshift-cnv

- Delete the OpenShift Virtualization namespace:

  $ oc delete namespace openshift-cnv

- List the OpenShift Virtualization custom resource definitions (CRDs) by running the `oc delete crd` command with the `dry-run` option:

  $ oc delete crd --dry-run=client -l operators.coreos.com/kubevirt-hyperconverged.openshift-cnv

- Delete the CRDs by running the `oc delete crd` command without the `dry-run` option:

  $ oc delete crd -l operators.coreos.com/kubevirt-hyperconverged.openshift-cnv
Chapter 4. Post-installation configuration
4.1. Postinstallation configuration
The following procedures are typically performed after OpenShift Virtualization is installed. You can configure the components that are relevant for your environment:
- Node placement rules for OpenShift Virtualization Operators, workloads, and controllers
- Enabling the creation of load balancer services by using the Red Hat OpenShift Service on AWS web console
- Defining a default storage class for the Container Storage Interface (CSI)
- Configuring local storage by using the Hostpath Provisioner (HPP)
4.2. Specifying nodes for OpenShift Virtualization components Copiar enlaceEnlace copiado en el portapapeles!
The default scheduling for virtual machines (VMs) on bare metal nodes is appropriate. Optionally, you can specify the nodes where you want to deploy OpenShift Virtualization Operators, workloads, and controllers by configuring node placement rules.
You can configure node placement rules for some components after installing OpenShift Virtualization, but no virtual machines can be present when you configure node placement rules for workloads.
4.2.1. About node placement rules for OpenShift Virtualization components Copiar enlaceEnlace copiado en el portapapeles!
You can use node placement rules for the following tasks:
- Deploy virtual machines only on nodes intended for virtualization workloads.
- Deploy Operators only on infrastructure nodes.
- Maintain separation between workloads.
Depending on the object, you can use one or more of the following rule types:
nodeSelector
- Allows pods to be scheduled on nodes that are labeled with the key-value pair or pairs that you specify in this field. The node must have labels that exactly match all listed pairs.
affinity
- Enables you to use more expressive syntax to set rules that match nodes with pods. Affinity also allows for more nuance in how the rules are applied. For example, you can specify that a rule is a preference, not a requirement. If a rule is a preference, pods are still scheduled when the rule is not satisfied.
tolerations
- Allows pods to be scheduled on nodes that have matching taints. If a taint is applied to a node, that node only accepts pods that tolerate the taint.
4.2.2. Applying node placement rules Copiar enlaceEnlace copiado en el portapapeles!
You can apply node placement rules by editing a Subscription
, HyperConverged
, or HostPathProvisioner
object using the command line.
Prerequisites
-
The
oc
CLI tool is installed. - You are logged in with cluster administrator permissions.
Procedure
Edit the object in your default editor by running the following command:
$ oc edit <resource_type> <resource_name> -n openshift-cnv
- Save the file to apply the changes.
4.2.3. Node placement rule examples Copiar enlaceEnlace copiado en el portapapeles!
You can specify node placement rules for an OpenShift Virtualization component by editing a Subscription
, HyperConverged
, or HostPathProvisioner
object.
4.2.3.1. Subscription object node placement rule examples Copiar enlaceEnlace copiado en el portapapeles!
To specify the nodes where OLM deploys the OpenShift Virtualization Operators, edit the Subscription
object during OpenShift Virtualization installation.
Currently, you cannot configure node placement rules for the Subscription
object by using the web console.
The Subscription
object does not support the affinity
node placement rule.
Example Subscription
object with nodeSelector
rule
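A minimal sketch of such a manifest, using placeholder Subscription metadata and the example label from the callout below, might look like the following:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: hco-operatorhub
  namespace: openshift-cnv
spec:
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  name: kubevirt-hyperconverged
  channel: "stable"
  config:
    nodeSelector:   # 1
      example.io/example-infra-key: example-infra-value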
- 1
- OLM deploys the OpenShift Virtualization Operators on nodes labeled
example.io/example-infra-key = example-infra-value
.
Example Subscription
object with tolerations
rule
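A minimal sketch of such a manifest, using placeholder Subscription metadata and a sample toleration, might look like the following:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: hco-operatorhub
  namespace: openshift-cnv
spec:
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  name: kubevirt-hyperconverged
  channel: "stable"
  config:
    tolerations:   # 1
    - key: "key"
      operator: "Equal"
      value: "virtualization"
      effect: "NoSchedule"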
- 1
- OLM deploys the OpenShift Virtualization Operators on nodes that have the key = virtualization:NoSchedule taint. Only pods with a matching toleration are scheduled on these nodes.
4.2.3.2. HyperConverged object node placement rule example Copiar enlaceEnlace copiado en el portapapeles!
To specify the nodes where OpenShift Virtualization deploys its components, you can edit the nodePlacement
object in the HyperConverged custom resource (CR) file that you create during OpenShift Virtualization installation.
Example HyperConverged
object with nodeSelector
rule
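A minimal sketch of such a manifest, assuming the placeholder labels used elsewhere in this section, might look like the following:
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  infra:
    nodePlacement:
      nodeSelector:
        example.io/example-infra-key: example-infra-value
  workloads:
    nodePlacement:
      nodeSelector:
        example.io/example-workloads-key: example-workloads-value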
Example HyperConverged
object with affinity
rule
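A minimal sketch of such a manifest, using placeholder label keys (including a hypothetical CPU-count label for the preference rule), might look like the following; the comments map to the callouts below:
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  infra:
    nodePlacement:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: example.io/example-infra-key   # 1
                operator: In
                values:
                - example-value
  workloads:
    nodePlacement:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: example.io/example-workloads-key   # 2
                operator: In
                values:
                - example-workloads-value
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            preference:
              matchExpressions:
              - key: my-cloud.io/num-cpus   # 3
                operator: Gt
                values:
                - "8"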
- 1
- Infrastructure resources are placed on nodes labeled
example.io/example-infra-key = example-value
. - 2
- Workloads are placed on nodes labeled
example.io/example-workloads-key = example-workloads-value
. - 3
- Nodes that have more than eight CPUs are preferred for workloads, but if they are not available, pods are still scheduled.
Example HyperConverged
object with tolerations
rule
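A minimal sketch of such a manifest, using a sample toleration, might look like the following:
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  workloads:
    nodePlacement:
      tolerations:   # 1
      - key: "key"
        operator: "Equal"
        value: "virtualization"
        effect: "NoSchedule"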
- 1
- Nodes reserved for OpenShift Virtualization components have the key = virtualization:NoSchedule taint. Only pods with matching tolerations are scheduled on reserved nodes.
4.2.3.3. HostPathProvisioner object node placement rule example Copiar enlaceEnlace copiado en el portapapeles!
You can edit the HostPathProvisioner
object directly or by using the web console.
You must schedule the hostpath provisioner and the OpenShift Virtualization components on the same nodes. Otherwise, virtualization pods that use the hostpath provisioner cannot run. You cannot run virtual machines.
After you deploy a virtual machine (VM) with the hostpath provisioner (HPP) storage class, you can remove the hostpath provisioner pod from the same node by using the node selector. However, you must first revert that change, at least for that specific node, and wait for the pod to run before trying to delete the VM.
You can configure node placement rules by specifying nodeSelector
, affinity
, or tolerations
for the spec.workload
field of the HostPathProvisioner
object that you create when you install the hostpath provisioner.
Example HostPathProvisioner
object with nodeSelector
rule
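A minimal sketch of such a manifest, using a placeholder storage pool and the example workloads label, might look like the following:
apiVersion: hostpathprovisioner.kubevirt.io/v1beta1
kind: HostPathProvisioner
metadata:
  name: hostpath-provisioner
spec:
  imagePullPolicy: IfNotPresent
  storagePools:
  - name: my-storage-pool
    path: "/var/myvolumes"
  workload:
    nodeSelector:   # 1
      example.io/example-workloads-key: example-workloads-value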
- 1
- Workloads are placed on nodes labeled
example.io/example-workloads-key = example-workloads-value
.
4.3. Postinstallation network configuration Copiar enlaceEnlace copiado en el portapapeles!
By default, OpenShift Virtualization is installed with a single, internal pod network.
4.3.1. Installing networking Operators Copiar enlaceEnlace copiado en el portapapeles!
4.3.2. Configuring a Linux bridge network Copiar enlaceEnlace copiado en el portapapeles!
After you install the Kubernetes NMState Operator, you can configure a Linux bridge network for live migration or external access to virtual machines (VMs).
4.3.2.1. Creating a Linux bridge NNCP Copiar enlaceEnlace copiado en el portapapeles!
You can create a NodeNetworkConfigurationPolicy
(NNCP) manifest for a Linux bridge network.
Prerequisites
- You have installed the Kubernetes NMState Operator.
Procedure
Create the
NodeNetworkConfigurationPolicy
manifest. This example includes sample values that you must replace with your own information.
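A minimal sketch of such a manifest, using placeholder names for the policy, bridge, and NIC, might look like the following; the comments map to the callouts below:
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: br1-eth1-policy   # 1
spec:
  desiredState:
    interfaces:
      - name: br1   # 2
        description: Linux bridge with eth1 as a port   # 3
        type: linux-bridge   # 4
        state: up   # 5
        ipv4:
          enabled: false   # 6
        bridge:
          options:
            stp:
              enabled: false   # 7
          port:
            - name: eth1   # 8
- 1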
- Name of the policy.
- 2
- Name of the interface.
- 3
- Optional: Human-readable description of the interface.
- 4
- The type of interface. This example creates a bridge.
- 5
- The requested state for the interface after creation.
- 6
- Disables IPv4 in this example.
- 7
- Disables STP in this example.
- 8
- The node NIC to which the bridge is attached.
To create the NNCP manifest for a Linux bridge using OSA with IBM Z®, you must disable VLAN filtering by setting the rx-vlan-filter
to false
in the NodeNetworkConfigurationPolicy
manifest.
Alternatively, if you have SSH access to the node, you can disable VLAN filtering by running the following command:
$ sudo ethtool -K <osa-interface-name> rxvlan off
4.3.2.2. Creating a Linux bridge NAD by using the web console Copiar enlaceEnlace copiado en el portapapeles!
You can create a network attachment definition (NAD) to provide layer-2 networking to pods and virtual machines by using the Red Hat OpenShift Service on AWS web console.
A Linux bridge network attachment definition is the most efficient method for connecting a virtual machine to a VLAN.
Configuring IP address management (IPAM) in a network attachment definition for virtual machines is not supported.
Procedure
- In the web console, click Networking → NetworkAttachmentDefinitions.
Click Create Network Attachment Definition.
Note: The network attachment definition must be in the same namespace as the pod or virtual machine.
- Enter a unique Name and optional Description.
- Select CNV Linux bridge from the Network Type list.
- Enter the name of the bridge in the Bridge Name field.
Optional: If the resource has VLAN IDs configured, enter the ID numbers in the VLAN Tag Number field.
Note: OSA interfaces on IBM Z® do not support VLAN filtering, and VLAN-tagged traffic is dropped. Avoid using VLAN-tagged NADs with OSA interfaces.
- Optional: Select MAC Spoof Check to enable MAC spoof filtering. This feature provides security against a MAC spoofing attack by allowing only a single MAC address to exit the pod.
- Click Create.
4.3.3. Configuring a network for live migration Copiar enlaceEnlace copiado en el portapapeles!
After you have configured a Linux bridge network, you can configure a dedicated network for live migration. A dedicated network minimizes the effects of network saturation on tenant workloads during live migration.
4.3.3.1. Configuring a dedicated secondary network for live migration Copiar enlaceEnlace copiado en el portapapeles!
To configure a dedicated secondary network for live migration, you must first create a bridge network attachment definition (NAD) by using the CLI. Then, you add the name of the NetworkAttachmentDefinition
object to the HyperConverged
custom resource (CR).
Prerequisites
-
You installed the OpenShift CLI (
oc
). -
You logged in to the cluster as a user with the
cluster-admin
role. - Each node has at least two Network Interface Cards (NICs).
- The NICs for live migration are connected to the same VLAN.
Procedure
Create a
NetworkAttachmentDefinition
manifest according to the following example:
Example configuration file
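A minimal sketch of such a manifest, assuming the macvlan CNI plugin with the whereabouts IPAM plugin and placeholder names and addresses, might look like the following:
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: my-secondary-network
  namespace: openshift-cnv
spec:
  config: '{
    "cniVersion": "0.3.1",
    "name": "migration-bridge",
    "type": "macvlan",
    "master": "eth1",
    "mode": "bridge",
    "ipam": {
      "type": "whereabouts",
      "range": "10.200.5.0/24"
    }
  }'
- 1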
- Specify the name of the
NetworkAttachmentDefinition
object. - 2
- Specify the name of the NIC to be used for live migration.
- 3
- Specify the name of the CNI plugin that provides the network for the NAD.
- 4
- Specify an IP address range for the secondary network. This range must not overlap the IP addresses of the main network.
Open the
HyperConverged
CR in your default editor by running the following command:
$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
Add the name of the
NetworkAttachmentDefinition
object to thespec.liveMigrationConfig
stanza of theHyperConverged
CR:
Example HyperConverged manifest
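A minimal sketch of the relevant stanza, using the placeholder NAD name from the previous step, might look like the following:
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  liveMigrationConfig:
    completionTimeoutPerGiB: 800
    network: my-secondary-network   # 1
    parallelMigrationsPerCluster: 5
    parallelOutboundMigrationsPerNode: 2
    progressTimeout: 150
- 1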
- Specify the name of the Multus
NetworkAttachmentDefinition
object to be used for live migrations.
-
Save your changes and exit the editor. The
virt-handler
pods restart and connect to the secondary network.
Verification
When the node that the virtual machine runs on is placed into maintenance mode, the VM automatically migrates to another node in the cluster. You can verify that the migration occurred over the secondary network and not the default pod network by checking the target IP address in the virtual machine instance (VMI) metadata.
$ oc get vmi <vmi_name> -o jsonpath='{.status.migrationState.targetNodeAddress}'
4.3.3.2. Selecting a dedicated network by using the web console Copiar enlaceEnlace copiado en el portapapeles!
You can select a dedicated network for live migration by using the Red Hat OpenShift Service on AWS web console.
Prerequisites
- You configured a Multus network for live migration.
- You created a network attachment definition for the network.
Procedure
- Navigate to Virtualization > Overview in the Red Hat OpenShift Service on AWS web console.
- Click the Settings tab and then click Live migration.
- Select the network from the Live migration network list.
4.3.4. Enabling load balancer service creation by using the web console Copiar enlaceEnlace copiado en el portapapeles!
You can enable the creation of load balancer services for a virtual machine (VM) by using the Red Hat OpenShift Service on AWS web console.
Prerequisites
- You have configured a load balancer for the cluster.
-
You are logged in as a user with the
cluster-admin
role. - You created a network attachment definition for the network.
Procedure
- Navigate to Virtualization → Overview.
- On the Settings tab, click Cluster.
- Expand General settings and SSH configuration.
- Set SSH over LoadBalancer service to on.
4.4. Postinstallation storage configuration Copiar enlaceEnlace copiado en el portapapeles!
The following storage configuration tasks are mandatory:
- You must configure storage profiles if your storage provider is not recognized by CDI. A storage profile provides recommended storage settings based on the associated storage class.
Optional: You can configure local storage by using the hostpath provisioner (HPP).
See the storage configuration overview for more options, including configuring the Containerized Data Importer (CDI), data volumes, and automatic boot source updates.
4.4.1. Configuring local storage by using the HPP Copiar enlaceEnlace copiado en el portapapeles!
When you install the OpenShift Virtualization Operator, the Hostpath Provisioner (HPP) Operator is automatically installed. The HPP Operator creates the HPP provisioner.
The HPP is a local storage provisioner designed for OpenShift Virtualization. To use the HPP, you must create an HPP custom resource (CR).
HPP storage pools must not be in the same partition as the operating system. Otherwise, the storage pools might fill the operating system partition. If the operating system partition is full, performance can be affected or the node can become unstable or unusable.
4.4.1.1. Creating a storage class for the CSI driver with the storagePools stanza Copiar enlaceEnlace copiado en el portapapeles!
To use the hostpath provisioner (HPP) you must create an associated storage class for the Container Storage Interface (CSI) driver.
When you create a storage class, you set parameters that affect the dynamic provisioning of persistent volumes (PVs) that belong to that storage class. You cannot update a StorageClass
object’s parameters after you create it.
Virtual machines use data volumes that are based on local PVs. Local PVs are bound to specific nodes. While a disk image is prepared for consumption by the virtual machine, it is possible that the virtual machine cannot be scheduled to the node where the local storage PV was previously pinned.
To solve this problem, use the Kubernetes pod scheduler to bind the persistent volume claim (PVC) to a PV on the correct node. By setting the volumeBindingMode parameter of the StorageClass to WaitForFirstConsumer, the binding and provisioning of the PV are delayed until a pod that uses the PVC is created.
Procedure
Create a
storageclass_csi.yaml
file to define the storage class:
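A minimal sketch of such a storage class, assuming a placeholder storage pool name, might look like the following; the comments map to the callouts below:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hostpath-csi
provisioner: kubevirt.io.hostpath-provisioner
reclaimPolicy: Delete   # 1
volumeBindingMode: WaitForFirstConsumer   # 2
parameters:
  storagePool: my-storage-pool   # 3
- 1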
- The two possible
reclaimPolicy
values areDelete
andRetain
. If you do not specify a value, the default value isDelete
. - 2
- The
volumeBindingMode
parameter determines when dynamic provisioning and volume binding occur. SpecifyWaitForFirstConsumer
to delay the binding and provisioning of a persistent volume (PV) until after a pod that uses the persistent volume claim (PVC) is created. This ensures that the PV meets the pod’s scheduling requirements. - 3
- Specify the name of the storage pool defined in the HPP CR.
- Save the file and exit.
Create the
StorageClass
object by running the following command:
$ oc create -f storageclass_csi.yaml
4.5. Configuring certificate rotation Copiar enlaceEnlace copiado en el portapapeles!
Configure certificate rotation parameters to replace existing certificates.
4.5.1. Configuring certificate rotation Copiar enlaceEnlace copiado en el portapapeles!
You can configure certificate rotation during OpenShift Virtualization installation in the web console or after installation in the HyperConverged
custom resource (CR).
Prerequisites
-
You have installed the OpenShift CLI (
oc
).
Procedure
Open the
HyperConverged
CR by running the following command:
$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
Edit the
spec.certConfig
fields as shown in the following example. To avoid overloading the system, ensure that all values are greater than or equal to 10 minutes. Express all values as strings that comply with the golang ParseDuration format.
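A minimal sketch of the relevant stanza, using sample durations, might look like the following:
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  certConfig:
    ca:
      duration: 48h0m0s
      renewBefore: 24h0m0s
    server:
      duration: 24h0m0s
      renewBefore: 12h0m0s
- Apply the YAML file to your cluster.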
4.5.2. Troubleshooting certificate rotation parameters Copiar enlaceEnlace copiado en el portapapeles!
Deleting one or more certConfig
values causes them to revert to the default values, unless the default values conflict with one of the following conditions:
-
The value of
ca.renewBefore
must be less than or equal to the value ofca.duration
. -
The value of
server.duration
must be less than or equal to the value ofca.duration
. -
The value of
server.renewBefore
must be less than or equal to the value ofserver.duration
.
If the default values conflict with these conditions, you will receive an error.
If you remove the server.duration
value in the following example, the default value of 24h0m0s
is greater than the value of ca.duration
, conflicting with the specified conditions.
Example
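A hypothetical certConfig stanza that illustrates the conflict might look like the following; removing server.duration causes the 24h0m0s default to apply, which is greater than the 4h0m0s value of ca.duration:
certConfig:
  ca:
    duration: 4h0m0s
    renewBefore: 1h0m0s
  server:
    duration: 4h0m0s
    renewBefore: 1h0m0s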
This results in the following error message:
error: hyperconvergeds.hco.kubevirt.io "kubevirt-hyperconverged" could not be patched: admission webhook "validate-hco.kubevirt.io" denied the request: spec.certConfig: ca.duration is smaller than server.duration
The error message only mentions the first conflict. Review all certConfig values before you proceed.
Chapter 5. Updating Copiar enlaceEnlace copiado en el portapapeles!
5.1. Updating OpenShift Virtualization Copiar enlaceEnlace copiado en el portapapeles!
Learn how to keep OpenShift Virtualization updated and compatible with Red Hat OpenShift Service on AWS.
5.1.1. About updating OpenShift Virtualization Copiar enlaceEnlace copiado en el portapapeles!
When you install OpenShift Virtualization, you select an update channel and an approval strategy. The update channel determines the versions that OpenShift Virtualization will be updated to. The approval strategy setting determines whether updates occur automatically or require manual approval. Both settings can impact supportability.
5.1.1.1. Recommended settings Copiar enlaceEnlace copiado en el portapapeles!
To maintain a supportable environment, use the following settings:
- Update channel: stable
- Approval strategy: Automatic
With these settings, the update process automatically starts when a new version of the Operator is available in the stable channel. This ensures that your OpenShift Virtualization and Red Hat OpenShift Service on AWS versions remain compatible, and that your version of OpenShift Virtualization is suitable for production environments.
Each minor version of OpenShift Virtualization is supported only if you run the corresponding Red Hat OpenShift Service on AWS version. For example, you must run OpenShift Virtualization 4.19 on Red Hat OpenShift Service on AWS 4.19.
5.1.1.2. What to expect Copiar enlaceEnlace copiado en el portapapeles!
- The amount of time an update takes to complete depends on your network connection. Most automatic updates complete within fifteen minutes.
- Updating OpenShift Virtualization does not interrupt network connections.
- Data volumes and their associated persistent volume claims are preserved during an update.
If you have running virtual machines that use AWS Elastic Block Store (EBS) storage, they cannot be live migrated and might block a Red Hat OpenShift Service on AWS cluster update.
As a workaround, you can reconfigure the virtual machines so that they can be powered off automatically during a cluster update. Set the evictionStrategy
field to None
and the runStrategy
field to Always
.
5.1.1.3. How updates work Copiar enlaceEnlace copiado en el portapapeles!
- Operator Lifecycle Manager (OLM) manages the lifecycle of the OpenShift Virtualization Operator. The Marketplace Operator, which is deployed during Red Hat OpenShift Service on AWS installation, makes external Operators available to your cluster.
- OLM provides z-stream and minor version updates for OpenShift Virtualization. Minor version updates become available when you update Red Hat OpenShift Service on AWS to the next minor version. You cannot update OpenShift Virtualization to the next minor version without first updating Red Hat OpenShift Service on AWS.
5.1.1.4. RHEL 9 compatibility Copiar enlaceEnlace copiado en el portapapeles!
OpenShift Virtualization 4.19 is based on Red Hat Enterprise Linux (RHEL) 9. You can update to OpenShift Virtualization 4.19 from a version that was based on RHEL 8 by following the standard OpenShift Virtualization update procedure. No additional steps are required.
As in previous versions, you can perform the update without disrupting running workloads. OpenShift Virtualization 4.19 supports live migration from RHEL 8 nodes to RHEL 9 nodes.
5.1.1.4.1. RHEL 9 machine type Copiar enlaceEnlace copiado en el portapapeles!
All VM templates that are included with OpenShift Virtualization now use the RHEL 9 machine type by default: machineType: pc-q35-rhel9.<y>.0
, where <y>
is a single digit corresponding to the latest minor version of RHEL 9. For example, the value pc-q35-rhel9.2.0
is used for RHEL 9.2.
Updating OpenShift Virtualization does not change the machineType
value of any existing VMs. These VMs continue to function as they did before the update. You can optionally change a VM’s machine type so that it can benefit from RHEL 9 improvements.
Before you change a VM’s machineType
value, you must shut down the VM.
5.1.2. Monitoring update status Copiar enlaceEnlace copiado en el portapapeles!
To monitor the status of an OpenShift Virtualization Operator update, watch the cluster service version (CSV) PHASE
. You can also monitor the CSV conditions in the web console or by running the command provided here.
The PHASE
and conditions values are approximations that are based on available information.
Prerequisites
-
Log in to the cluster as a user with the
cluster-admin
role. -
Install the OpenShift CLI (
oc
).
Procedure
Run the following command:
$ oc get csv -n openshift-cnv
Review the output, checking the PHASE field. For example:
Example output
VERSION   REPLACES                                   PHASE
4.9.0     kubevirt-hyperconverged-operator.v4.8.2    Installing
4.9.0     kubevirt-hyperconverged-operator.v4.9.0    Replacing
Optional: Monitor the aggregated status of all OpenShift Virtualization component conditions by running the following command:
$ oc get hyperconverged kubevirt-hyperconverged -n openshift-cnv \
  -o=jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.status}{"\t"}{.message}{"\n"}{end}'
A successful upgrade results in the following output:
Example output
ReconcileComplete  True   Reconcile completed successfully
Available          True   Reconcile completed successfully
Progressing        False  Reconcile completed successfully
Degraded           False  Reconcile completed successfully
Upgradeable        True   Reconcile completed successfully
5.1.3. VM workload updates Copiar enlaceEnlace copiado en el portapapeles!
When you update OpenShift Virtualization, virtual machine workloads, including libvirt
, virt-launcher
, and qemu
, update automatically if they support live migration.
Each virtual machine has a virt-launcher
pod that runs the virtual machine instance (VMI). The virt-launcher
pod runs an instance of libvirt
, which is used to manage the virtual machine (VM) process.
You can configure how workloads are updated by editing the spec.workloadUpdateStrategy
stanza of the HyperConverged
custom resource (CR). There are two available workload update methods: LiveMigrate
and Evict
.
Because the Evict
method shuts down VMI pods, only the LiveMigrate
update strategy is enabled by default.
When LiveMigrate
is the only update strategy enabled:
- VMIs that support live migration are migrated during the update process. The VM guest moves into a new pod with the updated components enabled.
VMIs that do not support live migration are not disrupted or updated.
-
If a VMI has the
LiveMigrate
eviction strategy but does not support live migration, it is not updated.
If you enable both LiveMigrate
and Evict
:
-
VMIs that support live migration use the
LiveMigrate
update strategy. -
VMIs that do not support live migration use the
Evict
update strategy. If a VMI is controlled by aVirtualMachine
object that hasrunStrategy: Always
set, a new VMI is created in a new pod with updated components.
Migration attempts and timeouts
When updating workloads, live migration fails if a pod is in the Pending
state for the following periods:
- 5 minutes
-
If the pod is pending because it is
Unschedulable
. - 15 minutes
- If the pod is stuck in the pending state for any reason.
When a VMI fails to migrate, the virt-controller
tries to migrate it again. It repeats this process until all migratable VMIs are running on new virt-launcher
pods. If a VMI is improperly configured, however, these attempts can repeat indefinitely.
Each attempt corresponds to a migration object. Only the five most recent attempts are held in a buffer. This prevents migration objects from accumulating on the system while retaining information for debugging.
5.1.3.1. Configuring workload update methods Copiar enlaceEnlace copiado en el portapapeles!
You can configure workload update methods by editing the HyperConverged
custom resource (CR).
Prerequisites
To use live migration as an update method, you must first enable live migration in the cluster.
Note: If a VirtualMachineInstance CR contains evictionStrategy: LiveMigrate and the virtual machine instance (VMI) does not support live migration, the VMI will not update.
-
You have installed the OpenShift CLI (
oc
).
Procedure
To open the
HyperConverged
CR in your default editor, run the following command:
$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
Edit the
workloadUpdateStrategy
stanza of theHyperConverged
CR. For example:
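A minimal sketch of the relevant stanza might look like the following; the comments map to the callouts below:
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  workloadUpdateStrategy:
    workloadUpdateMethods:   # 1
    - LiveMigrate   # 2
    - Evict   # 3
    batchEvictionSize: 10   # 4
    batchEvictionInterval: "1m0s"   # 5
- 1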
- The methods that can be used to perform automated workload updates. The available values are
LiveMigrate
andEvict
. If you enable both options as shown in this example, updates useLiveMigrate
for VMIs that support live migration andEvict
for any VMIs that do not support live migration. To disable automatic workload updates, you can either remove theworkloadUpdateStrategy
stanza or setworkloadUpdateMethods: []
to leave the array empty. - 2
- The least disruptive update method. VMIs that support live migration are updated by migrating the virtual machine (VM) guest into a new pod with the updated components enabled. If
LiveMigrate
is the only workload update method listed, VMIs that do not support live migration are not disrupted or updated. - 3
- A disruptive method that shuts down VMI pods during upgrade.
Evict
is the only update method available if live migration is not enabled in the cluster. If a VMI is controlled by aVirtualMachine
object that hasrunStrategy: Always
configured, a new VMI is created in a new pod with updated components. - 4
- The number of VMIs that can be forced to be updated at a time by using the
Evict
method. This does not apply to theLiveMigrate
method. - 5
- The interval to wait before evicting the next batch of workloads. This does not apply to the
LiveMigrate
method.
Note: You can configure live migration limits and timeouts by editing the
spec.liveMigrationConfig
stanza of theHyperConverged
CR.
- To apply your changes, save and exit the editor.
5.1.3.2. Viewing outdated VM workloads Copiar enlaceEnlace copiado en el portapapeles!
You can view a list of outdated virtual machine (VM) workloads by using the CLI.
If there are outdated virtualization pods in your cluster, the OutdatedVirtualMachineInstanceWorkloads
alert fires.
Prerequisites
-
You have installed the OpenShift CLI (
oc
).
Procedure
To view a list of outdated virtual machine instances (VMIs), run the following command:
$ oc get vmi -l kubevirt.io/outdatedLauncherImage --all-namespaces
To ensure that VMIs update automatically, configure workload updates.
5.1.4. Advanced options Copiar enlaceEnlace copiado en el portapapeles!
The stable release channel and the Automatic approval strategy are recommended for most OpenShift Virtualization installations. Use other settings only if you understand the risks.
5.1.4.1. Changing update settings Copiar enlaceEnlace copiado en el portapapeles!
You can change the update channel and approval strategy for your OpenShift Virtualization Operator subscription by using the web console.
Prerequisites
- You have installed the OpenShift Virtualization Operator.
- You have administrator permissions.
Procedure
- Click Operators → Installed Operators.
- Select OpenShift Virtualization from the list.
- Click the Subscription tab.
- In the Subscription details section, click the setting that you want to change. For example, to change the approval strategy from Manual to Automatic, click Manual.
- In the window that opens, select the new update channel or approval strategy.
- Click Save.
5.1.4.2. Manual approval strategy Copiar enlaceEnlace copiado en el portapapeles!
If you use the Manual approval strategy, you must manually approve every pending update. If Red Hat OpenShift Service on AWS and OpenShift Virtualization updates are out of sync, your cluster becomes unsupported. To avoid risking the supportability and functionality of your cluster, use the Automatic approval strategy.
If you must use the Manual approval strategy, maintain a supportable cluster by approving pending Operator updates as soon as they become available.
5.1.4.3. Manually approving a pending Operator update Copiar enlaceEnlace copiado en el portapapeles!
If an installed Operator's subscription is set to the Manual approval strategy, you must manually approve each update released in its current update channel before installation can begin.
Prerequisites
- An Operator previously installed using Operator Lifecycle Manager (OLM).
Procedure
- In the Administrator perspective of the Red Hat OpenShift Service on AWS web console, navigate to Operators → Installed Operators.
- Operators that have a pending update display a status with Upgrade available. Click the name of the Operator you want to update.
- Click the Subscription tab. Any updates requiring approval are displayed next to Upgrade status. For example, it might display 1 requires approval.
- Click 1 requires approval, then click Preview Install Plan.
- Review the resources that are listed as available for update. When satisfied, click Approve.
- Navigate back to the Operators → Installed Operators page to monitor the progress of the update. When complete, the status changes to Succeeded and Up to date.
5.1.5. Early access releases Copiar enlaceEnlace copiado en el portapapeles!
You can gain access to builds in development by subscribing to the candidate update channel for your version of OpenShift Virtualization. These releases have not been fully tested by Red Hat and are not supported, but you can use them on non-production clusters to test capabilities and bug fixes being developed for that version.
The stable channel, which matches the underlying Red Hat OpenShift Service on AWS version and is fully tested, is suitable for production systems. You can switch between the stable and candidate channels in Operator Hub. However, updating from a candidate channel release to a stable channel release is not tested by Red Hat.
Some candidate releases are promoted to the stable channel. However, releases present only in candidate channels might not contain all features that will be made generally available (GA), and some features in candidate builds might be removed before GA. Additionally, candidate releases might not offer update paths to later GA releases.
The candidate channel is only suitable for testing purposes where destroying and recreating a cluster is acceptable.
Chapter 6. Creating a virtual machine Copiar enlaceEnlace copiado en el portapapeles!
6.1. Creating virtual machines from instance types Copiar enlaceEnlace copiado en el portapapeles!
You can simplify virtual machine (VM) creation by using instance types, whether you use the Red Hat OpenShift Service on AWS web console or the CLI to create VMs.
Creating a VM from an instance type in OpenShift Virtualization 4.15 and higher is supported on Red Hat OpenShift Service on AWS clusters. In OpenShift Virtualization 4.14, creating a VM from an instance type is a Technology Preview feature and is not supported on Red Hat OpenShift Service on AWS clusters.
6.1.1. About instance types Copiar enlaceEnlace copiado en el portapapeles!
An instance type is a reusable object where you can define resources and characteristics to apply to new VMs. You can define custom instance types or use the variety that is included when you install OpenShift Virtualization.
To create a new instance type, you must first create a manifest, either manually or by using the virtctl
CLI tool. You then create the instance type object by applying the manifest to your cluster.
OpenShift Virtualization provides two CRDs for configuring instance types:
-
A namespaced object:
VirtualMachineInstancetype
-
A cluster-wide object:
VirtualMachineClusterInstancetype
These objects use the same VirtualMachineInstancetypeSpec
.
6.1.1.1. Required attributes Copiar enlaceEnlace copiado en el portapapeles!
When you configure an instance type, you must define the cpu
and memory
attributes. Other attributes are optional.
When you create a VM from an instance type, you cannot override any parameters defined in the instance type.
Because instance types require defined CPU and memory attributes, OpenShift Virtualization always rejects additional requests for these resources when creating a VM from an instance type.
You can manually create an instance type manifest. For example:
Example YAML file with required fields
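A minimal sketch of such a manifest, using a placeholder name and small resource values, might look like the following:
apiVersion: instancetype.kubevirt.io/v1beta1
kind: VirtualMachineInstancetype
metadata:
  name: example-instancetype
spec:
  cpu:
    guest: 1   # required number of vCPUs
  memory:
    guest: 128Mi   # required amount of guest memory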
You can create an instance type manifest by using the virtctl
CLI utility. For example:
Example virtctl
command with required fields
$ virtctl create instancetype --cpu 2 --memory 256Mi
where:
--cpu <value>
- Specifies the number of vCPUs to allocate to the guest. Required.
--memory <value>
- Specifies an amount of memory to allocate to the guest. Required.
You can immediately create the object from the new manifest by running the following command:
$ virtctl create instancetype --cpu 2 --memory 256Mi | oc apply -f -
6.1.1.2. Optional attributes Copiar enlaceEnlace copiado en el portapapeles!
In addition to the required cpu
and memory
attributes, you can include the following optional attributes in the VirtualMachineInstancetypeSpec
:
annotations
- List annotations to apply to the VM.
gpus
- List vGPUs for passthrough.
hostDevices
- List host devices for passthrough.
ioThreadsPolicy
- Define an IO threads policy for managing dedicated disk access.
launchSecurity
- Configure Secure Encrypted Virtualization (SEV).
nodeSelector
- Specify node selectors to control the nodes where this VM is scheduled.
schedulerName
- Define a custom scheduler to use for this VM instead of the default scheduler.
6.1.2. Pre-defined instance types Copiar enlaceEnlace copiado en el portapapeles!
OpenShift Virtualization includes a set of pre-defined instance types called common-instancetypes
. Some are specialized for specific workloads and others are workload-agnostic.
These instance type resources are named according to their series, version, and size. The size value follows the .
delimiter and ranges from nano
to 8xlarge
.
Use case | Series | Characteristics | vCPU to memory ratio | Example resource
---|---|---|---|---
Network | N | | 1:2 |
Overcommitted | O | | 1:4 |
Compute Exclusive | CX | | 1:2 |
General Purpose | U | | 1:4 |
Memory Intensive | M | | 1:8 |
6.1.3. Specifying an instance type or preference Copiar enlaceEnlace copiado en el portapapeles!
You can specify an instance type, a preference, or both to define a set of workload sizing and runtime characteristics for reuse across multiple VMs.
6.1.3.1. Using flags to specify instance types and preferences Copiar enlaceEnlace copiado en el portapapeles!
Specify instance types and preferences by using flags.
Prerequisites
- You must have an instance type, preference, or both on the cluster.
Procedure
To specify an instance type when creating a VM, use the
--instancetype
flag. To specify a preference, use the--preference
flag. The following example includes both flags:
$ virtctl create vm --instancetype <my_instancetype> --preference <my_preference>
Optional: To specify a namespaced instance type or preference, include the
kind
in the value passed to the--instancetype
or--preference
flag. The namespaced instance type or preference must be in the same namespace in which you are creating the VM. The following example includes flags for a namespaced instance type and a namespaced preference:
$ virtctl create vm --instancetype virtualmachineinstancetype/<my_instancetype> --preference virtualmachinepreference/<my_preference>
6.1.3.2. Inferring an instance type or preference Copiar enlaceEnlace copiado en el portapapeles!
Inferring instance types, preferences, or both is enabled by default, and the inferFromVolumeFailure
policy of the inferFromVolume
attribute is set to Ignore
. When inferring from the boot volume, errors are ignored, and the VM is created with the instance type and preference left unset.
However, when flags are applied, the inferFromVolumeFailure
policy defaults to Reject
. When inferring from the boot volume, errors result in the rejection of the creation of that VM.
You can use the --infer-instancetype
and --infer-preference
flags to infer which instance type, preference, or both to use to define the workload sizing and runtime characteristics of a VM.
Prerequisites
-
You have installed the
virtctl
tool.
Procedure
To explicitly infer instance types from the volume used to boot the VM, use the
--infer-instancetype
flag. To explicitly infer preferences, use the--infer-preference
flag. The following command includes both flags:
$ virtctl create vm --volume-import type:pvc,src:my-ns/my-pvc --infer-instancetype --infer-preference
To infer an instance type or preference from a volume other than the volume used to boot the VM, use the
--infer-instancetype-from
and--infer-preference-from
flags to specify any of the virtual machine’s volumes. In the example below, the virtual machine boots fromvolume-a
but infers the instancetype and preference fromvolume-b
.
$ virtctl create vm \
  --volume-import=type:pvc,src:my-ns/my-pvc-a,name:volume-a \
  --volume-import=type:pvc,src:my-ns/my-pvc-b,name:volume-b \
  --infer-instancetype-from volume-b \
  --infer-preference-from volume-b
6.1.3.3. Setting the inferFromVolume labels Copiar enlaceEnlace copiado en el portapapeles!
Use the following labels on your PVC, data source, or data volume to instruct the inference mechanism which instance type, preference, or both to use when trying to boot from a volume.
-
A cluster-wide instance type:
instancetype.kubevirt.io/default-instancetype
label. -
A namespaced instance type:
instancetype.kubevirt.io/default-instancetype-kind
label. Defaults to theVirtualMachineClusterInstancetype
label if left empty. -
A cluster-wide preference:
instancetype.kubevirt.io/default-preference
label. -
A namespaced preference:
instancetype.kubevirt.io/default-preference-kind
label. Defaults toVirtualMachineClusterPreference
label, if left empty.
Prerequisites
- You must have an instance type, preference, or both on the cluster.
-
You have installed the OpenShift CLI (
oc
).
Procedure
To apply a label to a data source, use
oc label
. The following command applies a label that points to a cluster-wide instance type:
$ oc label DataSource foo instancetype.kubevirt.io/default-instancetype=<my_instancetype>
6.1.4. Creating a VM from an instance type by using the web console Copiar enlaceEnlace copiado en el portapapeles!
You can create a virtual machine (VM) from an instance type by using the Red Hat OpenShift Service on AWS web console. You can also use the web console to create a VM by copying an existing snapshot or by cloning a VM.
You can create a VM from a list of available bootable volumes. You can add Linux- or Windows-based volumes to the list.
Procedure
In the web console, navigate to Virtualization → Catalog.
The InstanceTypes tab opens by default.
Note: When configuring a downward-metrics device on an IBM Z® system that uses a VM preference, set the
spec.preference.name
value torhel.9.s390x
or another available preference with the format*.s390x
.Select either of the following options:
Select a suitable bootable volume from the list. If the list is truncated, click the Show all button to display the entire list.
Note: The bootable volume table lists only those volumes in the
openshift-virtualization-os-images
namespace that have theinstancetype.kubevirt.io/default-preference
label.- Optional: Click the star icon to designate a bootable volume as a favorite. Starred bootable volumes appear first in the volume list.
Click Add volume to upload a new volume or to use an existing persistent volume claim (PVC), a volume snapshot, or a
containerDisk
volume. Click Save.Logos of operating systems that are not available in the cluster are shown at the bottom of the list. You can add a volume for the required operating system by clicking the Add volume link.
In addition, there is a link to the Create a Windows bootable volume quick start. The same link appears in a popover if you hover the pointer over the question mark icon next to the Select volume to boot from line.
Immediately after you install the environment or when the environment is disconnected, the list of volumes to boot from is empty. In that case, three operating system logos are displayed: Windows, RHEL, and Linux. You can add a new volume that meets your requirements by clicking the Add volume button.
- Click an instance type tile and select the resource size appropriate for your workload.
Optional: Choose the virtual machine details, including the VM’s name, that apply to the volume you are booting from:
For a Linux-based volume, follow these steps to configure SSH:
- If you have not already added a public SSH key to your project, click the edit icon beside Authorized SSH key in the VirtualMachine details section.
Select one of the following options:
- Use existing: Select a secret from the secrets list.
Add new: Follow these steps:
- Browse to the public SSH key file or paste the file in the key field.
- Enter the secret name.
- Optional: Select Automatically apply this key to any new VirtualMachine you create in this project.
- Click Save.
For a Windows volume, follow either of these sets of steps to configure sysprep options:
If you have not already added sysprep options for the Windows volume, follow these steps:
- Click the edit icon beside Sysprep in the VirtualMachine details section.
- Add the Autoattend.xml answer file.
- Add the Unattend.xml answer file.
- Click Save.
If you want to use existing sysprep options for the Windows volume, follow these steps:
- Click Attach existing sysprep.
- Enter the name of the existing sysprep Unattend.xml answer file.
- Click Save.
Optional: If you are creating a Windows VM, you can mount a Windows driver disk:
- Click the Customize VirtualMachine button.
- On the VirtualMachine details page, click Storage.
- Select the Mount Windows drivers disk checkbox.
- Optional: Click View YAML & CLI to view the YAML file. Click CLI to view the CLI commands. You can also download or copy either the YAML file contents or the CLI commands.
- Click Create VirtualMachine.
After the VM is created, you can monitor the status on the VirtualMachine details page.
6.1.5. Changing the instance type of a VM Copiar enlaceEnlace copiado en el portapapeles!
You can change the instance type associated with a running virtual machine (VM) by using the web console. The change takes effect immediately.
Prerequisites
- You created the VM by using an instance type.
Procedure
- In the Red Hat OpenShift Service on AWS web console, click Virtualization → VirtualMachines.
- Select a VM to open the VirtualMachine details page.
- Click the Configuration tab.
- On the Details tab, click the instance type text to open the Edit Instancetype dialog. For example, click 1 CPU | 2 GiB Memory.
Edit the instance type by using the Series and Size lists.
- Select an item from the Series list to show the relevant sizes for that series. For example, select General Purpose.
- Select the VM’s new instance type from the Size list. For example, select medium: 1 CPUs, 4Gi Memory, which is available in the General Purpose series.
- Click Save.
Verification
- Click the YAML tab.
- Click Reload.
- Review the VM YAML to confirm that the instance type changed.
6.2. Creating virtual machines from templates Copiar enlaceEnlace copiado en el portapapeles!
You can create virtual machines (VMs) from Red Hat templates by using the Red Hat OpenShift Service on AWS web console.
6.2.1. About VM templates Copiar enlaceEnlace copiado en el portapapeles!
You can use VM templates to help you easily create VMs.
- Expedite creation with boot sources
You can expedite VM creation by using templates that have an available boot source. Templates with a boot source are labeled Available boot source if they do not have a custom label.
Templates without a boot source are labeled Boot source required. See Managing automatic boot source updates for details.
- Customize before starting the VM
You can customize the disk source and VM parameters before you start the VM.
Note: If you copy a VM template with all its labels and annotations, your version of the template is marked as deprecated when a new version of the Scheduling, Scale, and Performance (SSP) Operator is deployed. You can remove this designation. See Removing a deprecated designation from a customized VM template by using the web console.
- Single-node OpenShift
-
Due to differences in storage behavior, some templates are incompatible with single-node OpenShift. To ensure compatibility, do not set the
evictionStrategy
field for templates or VMs that use data volumes or storage profiles.
6.2.2. Creating a VM from a template Copiar enlaceEnlace copiado en el portapapeles!
You can create a virtual machine (VM) from a template with an available boot source by using the Red Hat OpenShift Service on AWS web console. You can customize template or VM parameters, such as data sources, Cloud-init, or SSH keys, before you start the VM.
You can choose between two views in the web console to create the VM:
- A virtualization-focused view, which provides a concise list of virtualization-related options at the top of the view
- A general view, which provides access to the various web console options, including Virtualization
Procedure
From the Red Hat OpenShift Service on AWS web console, choose your view:
- For a virtualization-focused view, select Administrator → Virtualization → Catalog.
- For a general view, navigate to Virtualization → Catalog.
- Click the Template catalog tab.
- Click the Boot source available checkbox to filter templates with boot sources. The catalog displays the default templates.
Click All templates to view the available templates for your filters.
-
To focus on particular templates, enter the keyword in the
Filter by keyword
field. - Choose a template project from the All projects dropdown menu, or view all projects.
-
To focus on particular templates, enter the keyword in the
Click a template tile to view its details.
- Optional: If you are using a Windows template, you can mount a Windows driver disk by selecting the Mount Windows drivers disk checkbox.
- If you do not need to customize the template or VM parameters, click Quick create VirtualMachine to create a VM from the template.
If you need to customize the template or VM parameters, do the following:
- Click Customize VirtualMachine. The Customize and create VirtualMachine page displays the Overview, YAML, Scheduling, Environment, Network interfaces, Disks, Scripts, and Metadata tabs.
-
Click the Scripts tab to edit the parameters that must be set before the VM boots, such as
Cloud-init
,SSH key
, orSysprep
(Windows VM only). - Optional: Click the Start this virtualmachine after creation (Always) checkbox.
Click Create VirtualMachine.
The VirtualMachine details page displays the provisioning status.
6.2.2.1. Removing a deprecated designation from a customized VM template by using the web console Copiar enlaceEnlace copiado en el portapapeles!
You can customize an existing virtual machine (VM) template by modifying the VM or template parameters, such as data sources, cloud-init, or SSH keys, before you start the VM. If you customize a template by copying it and including all of its labels and annotations, the customized template is marked as deprecated when a new version of the Scheduling, Scale, and Performance (SSP) Operator is deployed.
You can remove the deprecated designation from the customized template.
Procedure
- Navigate to Virtualization → Templates in the web console.
- From the list of VM templates, click the template marked as deprecated.
- Click Edit next to the pencil icon beside Labels.
Remove the following two labels:
-
template.kubevirt.io/type: "base"
-
template.kubevirt.io/version: "version"
-
- Click Save.
- Click the pencil icon beside the number of existing Annotations.
Remove the following annotation:
-
template.kubevirt.io/deprecated
-
- Click Save.
6.2.2.2. Creating a custom VM template in the web console Copiar enlaceEnlace copiado en el portapapeles!
You create a virtual machine template by editing a YAML file example in the Red Hat OpenShift Service on AWS web console.
Procedure
- In the web console, click Virtualization → Templates in the side menu.
-
Optional: Use the Project drop-down menu to change the project associated with the new template. All templates are saved to the
openshift
project by default. - Click Create Template.
- Specify the template parameters by editing the YAML file.
Click Create.
The template is displayed on the Templates page.
- Optional: Click Download to download and save the YAML file.
Chapter 7. Advanced VM creation Copiar enlaceEnlace copiado en el portapapeles!
7.1. Creating VMs in the web console Copiar enlaceEnlace copiado en el portapapeles!
7.1.1. Creating virtual machines from Red Hat images overview Copiar enlaceEnlace copiado en el portapapeles!
Red Hat images are golden images. They are published as container disks in a secure registry. The Containerized Data Importer (CDI) polls and imports the container disks into your cluster and stores them in the openshift-virtualization-os-images
project as snapshots or persistent volume claims (PVCs). You can optionally use a custom namespace for golden images.
Red Hat images are automatically updated. You can disable and re-enable automatic updates for these images. See Managing Red Hat boot source updates.
Cluster administrators can enable automatic subscription for Red Hat Enterprise Linux (RHEL) virtual machines in the OpenShift Virtualization web console.
You can create virtual machines (VMs) from operating system images provided by Red Hat by using one of the following methods:
Do not create VMs in the default openshift-*
namespaces. Instead, create a new namespace or use an existing namespace without the openshift
prefix.
7.1.1.1. About golden images Copiar enlaceEnlace copiado en el portapapeles!
A golden image is a preconfigured snapshot of a virtual machine (VM) that you can use as a resource to deploy new VMs. For example, you can use golden images to provision the same system environment consistently and deploy systems more quickly and efficiently.
7.1.1.1.1. How do golden images work? Copiar enlaceEnlace copiado en el portapapeles!
Golden images are created by installing and configuring an operating system and software applications on a reference machine or virtual machine. This includes setting up the system, installing required drivers, applying patches and updates, and configuring specific options and preferences.
After the golden image is created, it is saved as a template or image file that can be replicated and deployed across multiple clusters. The golden image can be updated by its maintainer periodically to incorporate necessary software updates and patches, ensuring that the image remains up to date and secure, and newly created VMs are based on this updated image.
7.1.1.1.2. Red Hat implementation of golden images Copiar enlaceEnlace copiado en el portapapeles!
Red Hat publishes golden images as container disks in the registry for versions of Red Hat Enterprise Linux (RHEL). Container disks are virtual machine images that are stored as a container image in a container image registry. Any published image will automatically be made available in connected clusters after the installation of OpenShift Virtualization. After the images are available in a cluster, they are ready to use to create VMs.
7.1.1.2. About VM boot sources Copiar enlaceEnlace copiado en el portapapeles!
Virtual machines (VMs) consist of a VM definition and one or more disks that are backed by data volumes. VM templates enable you to create VMs using predefined specifications.
Every template requires a boot source, which is a fully configured disk image including configured drivers. Each template contains a VM definition with a pointer to the boot source. Each boot source has a predefined name and namespace. For some operating systems, a boot source is automatically provided. If it is not provided, then an administrator must prepare a custom boot source.
Provided boot sources are updated automatically to the latest version of the operating system. For auto-updated boot sources, persistent volume claims (PVCs) and volume snapshots are created with the cluster’s default storage class. If you select a different default storage class after configuration, you must delete the existing boot sources in the cluster namespace that are configured with the previous default storage class.
7.1.1.3. Configuring a custom namespace for golden images
The default namespace for golden images is openshift-virtualization-os-images
, but you can configure a custom namespace to restrict user access to the default boot sources.
7.1.1.3.1. Configuring a custom namespace for golden images by using the web console
You can configure a custom namespace for golden images in your cluster by using the Red Hat OpenShift Service on AWS web console.
Procedure
- In the web console, select Virtualization → Overview.
- Select the Settings tab.
- On the Cluster tab, select General settings → Bootable volumes project.
- Select a namespace to use for golden images.
- If you already created a namespace, select it from the Project list.
- If you did not create a namespace, scroll to the bottom of the list and click Create project.
- Enter a name for your new namespace in the Name field of the Create project dialog.
- Click Create.
7.1.1.3.2. Configuring a custom namespace for golden images by using the CLI
You can configure a custom namespace for golden images in your cluster by setting the spec.commonBootImageNamespace
field in the HyperConverged
custom resource (CR).
Prerequisites
- You installed the OpenShift CLI (oc).
- You created a namespace to use for golden images.
Procedure
Open the HyperConverged CR in your default editor by running the following command:

$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv

Configure the custom namespace by updating the value of the spec.commonBootImageNamespace field to the namespace to use for golden images. A sketch of the relevant fields is shown after this procedure.

Save your changes and exit the editor.
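Example configuration file (a minimal sketch of the relevant HyperConverged fields; the original example was not preserved in this export, and the namespace value is a placeholder):

apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  commonBootImageNamespace: <custom_namespace>  # The namespace to use for golden images.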
7.1.2. Creating VMs by importing images from web pages
You can create virtual machines (VMs) by importing operating system images from web pages.
You must install the QEMU guest agent on VMs created from operating system images that are not provided by Red Hat.
7.1.2.1. Creating a VM from an image on a web page by using the web console
You can create a virtual machine (VM) by importing an image from a web page by using the Red Hat OpenShift Service on AWS web console.
Prerequisites
- You must have access to the web page that contains the image.
Procedure
- Navigate to Virtualization → Catalog in the web console.
- Click a template tile without an available boot source.
- Click Customize VirtualMachine.
- On the Customize template parameters page, expand Storage and select URL (creates PVC) from the Disk source list.
- Enter the image URL. Example: https://access.redhat.com/downloads/content/69/ver=/rhel---7/7.9/x86_64/product-software
- Set the disk size.
- Click Next.
- Click Create VirtualMachine.
7.1.2.2. Creating a VM from an image on a web page by using the CLI
You can create a virtual machine (VM) from an image on a web page by using the command line.
When the VM is created, the data volume with the image is imported into persistent storage.
Prerequisites
- You must have access credentials for the web page that contains the image.
- You have installed the virtctl CLI.
- You have installed the OpenShift CLI (oc).
Procedure
Create a VirtualMachine manifest for your VM and save it as a YAML file. For example, to create a minimal Red Hat Enterprise Linux (RHEL) VM from an image on a web page, run the following command:

$ virtctl create vm --name vm-rhel-9 --instancetype u1.small --preference rhel.9 --volume-import type:http,url:https://example.com/rhel9.qcow2,size:10Gi

Review the VirtualMachine manifest for your VM. A hedged sketch of such a manifest is shown after this procedure.

Create the VM by running the following command:

$ oc create -f <vm_manifest_file>.yaml

The oc create command creates the data volume and the VM. The CDI controller creates an underlying PVC with the correct annotation and the import process begins. When the import is complete, the data volume status changes to Succeeded. You can start the VM.
Data volume provisioning happens in the background, so there is no need to monitor the process.
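The manifest printed by the virtctl create vm command was not preserved in this export. The following is a minimal sketch of what such a manifest looks like; the volume name and the runStrategy value are assumptions:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: vm-rhel-9
spec:
  instancetype:
    name: u1.small
  preference:
    name: rhel.9
  dataVolumeTemplates:
  - metadata:
      name: vm-rhel-9-volume        # Assumed volume name; virtctl generates one.
    spec:
      source:
        http:
          url: https://example.com/rhel9.qcow2   # The image on the web page.
      storage:
        resources:
          requests:
            storage: 10Gi
  runStrategy: Always               # Assumed default run strategy.
  template:
    spec:
      domain:
        devices: {}
      volumes:
      - dataVolume:
          name: vm-rhel-9-volume
        name: vm-rhel-9-volume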
Verification
The importer pod downloads the image from the specified URL and stores it on the provisioned persistent volume. View the status of the importer pod:

$ oc get pods

Monitor the status of the data volume:

$ oc get dv <data_volume_name>

If the provisioning is successful, the data volume phase is Succeeded:

Example output
NAME                    PHASE       PROGRESS   RESTARTS   AGE
imported-volume-6dcpf   Succeeded   100.0%                18s

Verify that provisioning is complete and that the VM has started by accessing its serial console:

$ virtctl console <vm_name>

If the VM is running and the serial console is accessible, the output looks as follows:

Example output
Successfully connected to vm-rhel-9 console. The escape sequence is ^]
7.1.3. Creating VMs by uploading images
You can create virtual machines (VMs) by uploading operating system images from your local machine.
You can create a Windows VM by uploading a Windows image to a PVC. Then you clone the PVC when you create the VM.
You must install the QEMU guest agent on VMs created from operating system images that are not provided by Red Hat.
You must also install VirtIO drivers on Windows VMs.
7.1.3.1. Creating a VM from an uploaded image by using the web console
You can create a virtual machine (VM) from an uploaded operating system image by using the Red Hat OpenShift Service on AWS web console.
Prerequisites
- You must have an IMG, ISO, or QCOW2 image file.
Procedure
- Navigate to Virtualization → Catalog in the web console.
- Click a template tile without an available boot source.
- Click Customize VirtualMachine.
- On the Customize template parameters page, expand Storage and select Upload (Upload a new file to a PVC) from the Disk source list.
- Browse to the image on your local machine and set the disk size.
- Click Customize VirtualMachine.
- Click Create VirtualMachine.
7.1.3.1.1. Generalizing a VM image
You can generalize a Red Hat Enterprise Linux (RHEL) image to remove all system-specific configuration data before you use the image to create a golden image, a preconfigured snapshot of a virtual machine (VM). You can use a golden image to deploy new VMs.
You can generalize a RHEL VM by using the virtctl
, guestfs
, and virt-sysprep
tools.
Prerequisites
- You have a RHEL virtual machine (VM) to use as a base VM.
- You have installed the OpenShift CLI (oc).
- You have installed the virtctl tool.
Procedure
Stop the RHEL VM if it is running, by entering the following command:

$ virtctl stop <my_vm_name>

- Optional: Clone the virtual machine to avoid losing the data from your original VM. You can then generalize the cloned VM.

Retrieve the dataVolume that stores the root filesystem for the VM by running the following command:

$ oc get vm <my_vm_name> -o jsonpath="{.spec.template.spec.volumes}{'\n'}"

Example output
[{"dataVolume":{"name":"<my_vm_volume>"},"name":"rootdisk"},{"cloudInitNoCloud":{...}]

Retrieve the persistent volume claim (PVC) that matches the listed dataVolume by running the following command:

$ oc get pvc

Example output
NAME             STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
<my_vm_volume>   Bound    …

Note: If your cluster configuration does not enable you to clone a VM, to avoid losing the data from your original VM, you can clone the VM PVC to a data volume instead. You can then use the cloned PVC to create a golden image.
If you are creating a golden image by cloning a PVC, continue with the next steps, using the cloned PVC.

Deploy a new interactive container with libguestfs-tools and attach the PVC to it by running the following command:

$ virtctl guestfs <my-vm-volume> --uid 107

This command opens a shell for you to run the next command.

Remove all configurations specific to your system by running the following command:

$ virt-sysprep -a disk.img

- In the Red Hat OpenShift Service on AWS console, click Virtualization → Catalog.
- Click Add volume.
In the Add volume window:
- From the Source type list, select Use existing Volume.
- From the Volume project list, select your project.
- From the Volume name list, select the correct PVC.
- In the Volume name field, enter a name for the new golden image.
- From the Preference list, select the RHEL version you are using.
- From the Default Instance Type list, select the instance type with the correct CPU and memory requirements for the version of RHEL you selected previously.
- Click Save.
The new volume appears in the Select volume to boot from list. This is your new golden image. You can use this volume to create new VMs.
7.1.3.2. Creating a Windows VM
You can create a Windows virtual machine (VM) by uploading a Windows image to a persistent volume claim (PVC) and then cloning the PVC when you create a VM by using the Red Hat OpenShift Service on AWS web console.
Prerequisites
- You created a Windows installation DVD or USB with the Windows Media Creation Tool. See Create Windows 10 installation media in the Microsoft documentation.
- You created an autounattend.xml answer file. See Answer files (unattend.xml) in the Microsoft documentation.
Procedure
Upload the Windows image as a new PVC:
- Navigate to Storage → PersistentVolumeClaims in the web console.
- Click Create PersistentVolumeClaim → With Data upload form.
- Browse to the Windows image and select it.
Enter the PVC name, select the storage class and size and then click Upload.
The Windows image is uploaded to a PVC.
Configure a new VM by cloning the uploaded PVC:
- Navigate to Virtualization → Catalog.
- Select a Windows template tile and click Customize VirtualMachine.
- Select Clone (clone PVC) from the Disk source list.
- Select the PVC project, the Windows image PVC, and the disk size.
Apply the answer file to the VM:
- Click Customize VirtualMachine parameters.
- On the Sysprep section of the Scripts tab, click Edit.
- Browse to the autounattend.xml answer file and click Save.
Set the run strategy of the VM:
- Clear Start this VirtualMachine after creation so that the VM does not start immediately.
- Click Create VirtualMachine.
- On the YAML tab, replace running: false with runStrategy: RerunOnFailure and click Save.
- Click the Options menu and select Start.
The VM boots from the sysprep disk containing the autounattend.xml answer file.
7.1.3.2.1. Generalizing a Windows VM image
You can generalize a Windows operating system image to remove all system-specific configuration data before you use the image to create a new virtual machine (VM).
Before generalizing the VM, you must ensure the sysprep
tool cannot detect an answer file after the unattended Windows installation.
Prerequisites
- A running Windows VM with the QEMU guest agent installed.
Procedure
- In the Red Hat OpenShift Service on AWS console, click Virtualization → VirtualMachines.
- Select a Windows VM to open the VirtualMachine details page.
- Click Configuration → Disks.
- Click the Options menu beside the sysprep disk and select Detach.
- Click Detach.
- Rename C:\Windows\Panther\unattend.xml to avoid detection by the sysprep tool.
- Start the sysprep program by running the following command:

%WINDIR%\System32\Sysprep\sysprep.exe /generalize /shutdown /oobe /mode:vm

- After the sysprep tool completes, the Windows VM shuts down. The disk image of the VM is now available to use as an installation image for Windows VMs.
You can now specialize the VM.
7.1.3.2.2. Specializing a Windows VM image
Specializing a Windows virtual machine (VM) configures the computer-specific information from a generalized Windows image onto the VM.
Prerequisites
- You must have a generalized Windows disk image.
- You must create an unattend.xml answer file. See the Microsoft documentation for details.
Procedure
- In the Red Hat OpenShift Service on AWS console, click Virtualization → Catalog.
- Select a Windows template and click Customize VirtualMachine.
- Select PVC (clone PVC) from the Disk source list.
- Select the PVC project and PVC name of the generalized Windows image.
- Click Customize VirtualMachine parameters.
- Click the Scripts tab.
- In the Sysprep section, click Edit, browse to the unattend.xml answer file, and click Save.
- Click Create VirtualMachine.
During the initial boot, Windows uses the unattend.xml
answer file to specialize the VM. The VM is now ready to use.
7.1.3.3. Creating a VM from an uploaded image by using the CLI
You can upload an operating system image by using the virtctl
command-line tool. You can use an existing data volume or create a new data volume for the image.
Prerequisites
- You must have an ISO, IMG, or QCOW2 operating system image file.
- For best performance, compress the image file by using the virt-sparsify tool or the xz or gzip utilities.
- The client machine must be configured to trust the Red Hat OpenShift Service on AWS router’s certificate.
- You have installed the virtctl CLI.
- You have installed the OpenShift CLI (oc).
Procedure
Upload the image by running the virtctl image-upload command:

$ virtctl image-upload dv <datavolume_name> \
  --size=<datavolume_size> \
  --image-path=</path/to/image>

Note:
- If you do not want to create a new data volume, omit the --size parameter and include the --no-create flag.
- When uploading a disk image to a PVC, the PVC size must be larger than the size of the uncompressed virtual disk.
- To allow insecure server connections when using HTTPS, use the --insecure parameter. When you use the --insecure flag, the authenticity of the upload endpoint is not verified.

Optional. To verify that a data volume was created, view all data volumes by running the following command:

$ oc get dvs
7.1.4. Cloning VMs
You can clone virtual machines (VMs) or create new VMs from snapshots.
Cloning a VM with a vTPM device attached to it or creating a new VM from its snapshot is not supported.
7.1.4.1. Cloning a VM by using the web console
You can clone an existing VM by using the web console.
Procedure
- Navigate to Virtualization → VirtualMachines in the web console.
- Select a VM to open the VirtualMachine details page.
Click Actions.
Alternatively, access the same menu in the tree view by right-clicking the VM.
- Select Clone.
- On the Clone VirtualMachine page, enter the name of the new VM.
- (Optional) Select the Start cloned VM checkbox to start the cloned VM.
- Click Clone.
7.1.4.2. Creating a VM from an existing snapshot by using the web console
You can create a new VM by copying an existing snapshot.
Procedure
- Navigate to Virtualization → VirtualMachines in the web console.
- Select a VM to open the VirtualMachine details page.
- Click the Snapshots tab.
- Click the Options menu for the snapshot you want to copy.
- Select Create VirtualMachine.
- Enter the name of the virtual machine.
- (Optional) Select the Start this VirtualMachine after creation checkbox to start the new virtual machine.
- Click Create.
7.2. Creating VMs using the CLI
7.2.1. Creating virtual machines from the CLI
You can create virtual machines (VMs) from the command line by editing or creating a VirtualMachine
manifest. You can simplify VM configuration by using an instance type in your VM manifest.
You can also create VMs from instance types by using the web console.
7.2.1.1. Creating a VM from a VirtualMachine manifest
You can create a virtual machine (VM) from a VirtualMachine
manifest. To simplify the creation of these manifests, you can use the virtctl
command-line tool.
Prerequisites
- You have installed the virtctl CLI.
- You have installed the OpenShift CLI (oc).
Procedure
Create a VirtualMachine manifest for your VM and save it as a YAML file. For example, to create a minimal Red Hat Enterprise Linux (RHEL) VM, run the following command:

$ virtctl create vm --name rhel-9-minimal --volume-import type:ds,src:openshift-virtualization-os-images/rhel9

Review the VirtualMachine manifest for your VM. A hedged sketch of such a manifest is shown after this procedure.

Note: This example manifest does not configure VM authentication.

The callouts in the example manifest indicate the following:
1. The VM name.
2. The boot source for the guest operating system.
3. The namespace for the boot source. Golden images are stored in the openshift-virtualization-os-images namespace.
4. The instance type is inferred from the selected DataSource object.
5. The preference is inferred from the selected DataSource object.

Create a virtual machine by using the manifest file:

$ oc create -f <vm_manifest_file>.yaml

Optional: Start the virtual machine:

$ virtctl start <vm_name>
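Example manifest for a RHEL VM (the original manifest was not preserved in this export; the following is a minimal sketch with the callouts above shown as comments, and the generated volume name and inference fields are assumptions based on the virtctl output format):

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: rhel-9-minimal                                   # 1 The VM name.
spec:
  dataVolumeTemplates:
  - metadata:
      name: rhel-9-minimal-volume
    spec:
      sourceRef:
        kind: DataSource
        name: rhel9                                      # 2 The boot source for the guest operating system.
        namespace: openshift-virtualization-os-images    # 3 The namespace for the boot source.
      storage: {}
  instancetype:
    inferFromVolume: rhel-9-minimal-volume               # 4 The instance type is inferred from the DataSource object.
  preference:
    inferFromVolume: rhel-9-minimal-volume               # 5 The preference is inferred from the DataSource object.
  runStrategy: Always
  template:
    spec:
      domain:
        devices: {}
      volumes:
      - dataVolume:
          name: rhel-9-minimal-volume
        name: rhel-9-minimal-volume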
Next steps
7.2.2. Creating VMs by using container disks
You can create virtual machines (VMs) by using container disks built from operating system images.
You can enable auto updates for your container disks. See Managing automatic boot source updates for details.
If the container disks are large, the I/O traffic might increase and cause worker nodes to be unavailable. You can prune DeploymentConfig objects to resolve this issue.
You create a VM from a container disk by performing the following steps:
- Build an operating system image into a container disk and upload it to your container registry.
- If your container registry does not have TLS, configure your environment to disable TLS for your registry.
- Create a VM with the container disk as the disk source by using the web console or the command line.
You must install the QEMU guest agent on VMs created from operating system images that are not provided by Red Hat.
7.2.2.1. Building and uploading a container disk
You can build a virtual machine (VM) image into a container disk and upload it to a registry.
The size of a container disk is limited by the maximum layer size of the registry where the container disk is hosted.
For Red Hat Quay, you can change the maximum layer size by editing the YAML configuration file that is created when Red Hat Quay is first deployed.
Prerequisites
- You must have podman installed.
- You must have a QCOW2 or RAW image file.
Procedure
Create a Dockerfile to build the VM image into a container image. The VM image must be owned by QEMU, which has a UID of 107, and placed in the /disk/ directory inside the container. Permissions for the /disk/ directory must then be set to 0440.

The following example uses the Red Hat Universal Base Image (UBI) to handle these configuration changes in the first stage, and uses the minimal scratch image in the second stage to store the result. A hedged sketch of such a Dockerfile is shown after this procedure, where <vm_image> is the image in either QCOW2 or RAW format. If you use a remote image, replace <vm_image>.qcow2 with the complete URL.

Build and tag the container:

$ podman build -t <registry>/<container_disk_name>:latest .

Push the container image to the registry:

$ podman push <registry>/<container_disk_name>:latest
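The original Dockerfile example was not preserved in this export. The following is a minimal sketch under the stated assumptions (a UBI base image in the first stage and scratch in the second stage); adjust the base image tag to your environment:

FROM registry.access.redhat.com/ubi8/ubi:latest AS builder
ADD --chown=107:107 <vm_image>.qcow2 /disk/     # The image in QCOW2 or RAW format; use the complete URL for a remote image.
RUN chmod 0440 /disk/*

FROM scratch
COPY --from=builder /disk/* /disk/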
7.2.2.2. Disabling TLS for a container registry
You can disable TLS (transport layer security) for one or more container registries by editing the insecureRegistries
field of the HyperConverged
custom resource.
Prerequisites
- You have installed the OpenShift CLI (oc).
Procedure
Open the HyperConverged CR in your default editor by running the following command:

$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv

Add a list of insecure registries to the spec.storageImport.insecureRegistries field, replacing the example entries with valid registry hostnames. A hedged sketch of the relevant fields is shown after this procedure.
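Example HyperConverged custom resource (the original example was not preserved in this export; the following is a minimal sketch of the relevant fields, and the registry hostnames are placeholders):

apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  storageImport:
    insecureRegistries:                  # Replace the entries in this list with valid registry hostnames.
      - "private-registry-example-1:5000"
      - "private-registry-example-2:5000"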
7.2.2.3. Creating a VM from a container disk by using the web console
You can create a virtual machine (VM) by importing a container disk from a container registry by using the Red Hat OpenShift Service on AWS web console.
Procedure
- Navigate to Virtualization → Catalog in the web console.
- Click a template tile without an available boot source.
- Click Customize VirtualMachine.
- On the Customize template parameters page, expand Storage and select Registry (creates PVC) from the Disk source list.
- Enter the container image URL. Example: https://mirror.arizona.edu/fedora/linux/releases/38/Cloud/x86_64/images/Fedora-Cloud-Base-38-1.6.x86_64.qcow2
- Set the disk size.
- Click Next.
- Click Create VirtualMachine.
7.2.2.4. Creating a VM from a container disk by using the CLI
You can create a virtual machine (VM) from a container disk by using the command line.
Prerequisites
- You must have access credentials for the container registry that contains the container disk.
- You have installed the virtctl CLI.
- You have installed the OpenShift CLI (oc).
Procedure
Create a VirtualMachine manifest for your VM and save it as a YAML file. For example, to create a minimal Red Hat Enterprise Linux (RHEL) VM from a container disk, run the following command:

$ virtctl create vm --name vm-rhel-9 --instancetype u1.small --preference rhel.9 --volume-containerdisk src:registry.redhat.io/rhel9/rhel-guest-image:9.5

Review the VirtualMachine manifest for your VM. A hedged sketch of such a manifest is shown after this procedure.

Create the VM by running the following command:

$ oc create -f <vm_manifest_file>.yaml
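The manifest printed by the virtctl create vm command was not preserved in this export. The following is a minimal sketch of what such a manifest looks like; the volume name and the runStrategy value are assumptions:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: vm-rhel-9
spec:
  instancetype:
    name: u1.small
  preference:
    name: rhel.9
  runStrategy: Always                   # Assumed default run strategy.
  template:
    spec:
      domain:
        devices: {}
      volumes:
      - containerDisk:
          image: registry.redhat.io/rhel9/rhel-guest-image:9.5
        name: vm-rhel-9-containerdisk   # Assumed volume name; virtctl generates one.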
Verification
Monitor the status of the VM:

$ oc get vm <vm_name>

If the provisioning is successful, the VM status is Running:

Example output
NAME        AGE   STATUS    READY
vm-rhel-9   18s   Running   True

Verify that provisioning is complete and that the VM has started by accessing its serial console:

$ virtctl console <vm_name>

If the VM is running and the serial console is accessible, the output looks as follows:

Example output
Successfully connected to vm-rhel-9 console. The escape sequence is ^]
7.2.3. Creating VMs by cloning PVCs
You can create virtual machines (VMs) by cloning existing persistent volume claims (PVCs) with custom images.
You must install the QEMU guest agent on VMs created from operating system images that are not provided by Red Hat.
You clone a PVC by creating a data volume that references a source PVC.
7.2.3.1. About cloning
When cloning a data volume, the Containerized Data Importer (CDI) chooses one of the following Container Storage Interface (CSI) clone methods:
- CSI volume cloning
- Smart cloning
Both CSI volume cloning and smart cloning methods are efficient, but they have certain requirements for use. If the requirements are not met, the CDI uses host-assisted cloning. Host-assisted cloning is the slowest and least efficient method of cloning, but it has fewer requirements than either of the other two cloning methods.
7.2.3.1.1. CSI volume cloning
Container Storage Interface (CSI) cloning uses CSI driver features to more efficiently clone a source data volume.
CSI volume cloning has the following requirements:
- The CSI driver that backs the storage class of the persistent volume claim (PVC) must support volume cloning.
- For provisioners not recognized by the CDI, the corresponding storage profile must have the cloneStrategy set to CSI Volume Cloning.
- The source and target PVCs must have the same storage class and volume mode.
- If you create the data volume, you must have permission to create the datavolumes/source resource in the source namespace.
- The source volume must not be in use.
7.2.3.1.2. Smart cloning
When a Container Storage Interface (CSI) plugin with snapshot capabilities is available, the Containerized Data Importer (CDI) creates a persistent volume claim (PVC) from a snapshot, which then allows efficient cloning of additional PVCs.
Smart cloning has the following requirements:
- A snapshot class associated with the storage class must exist.
- The source and target PVCs must have the same storage class and volume mode.
- If you create the data volume, you must have permission to create the datavolumes/source resource in the source namespace.
- The source volume must not be in use.
7.2.3.1.3. Host-assisted cloning
When the requirements for neither Container Storage Interface (CSI) volume cloning nor smart cloning have been met, host-assisted cloning is used as a fallback method. Host-assisted cloning is less efficient than either of the two other cloning methods.
Host-assisted cloning uses a source pod and a target pod to copy data from the source volume to the target volume. The target persistent volume claim (PVC) is annotated with the fallback reason that explains why host-assisted cloning has been used, and an event is created.
Example PVC target annotation
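The original annotation example was not preserved in this export. The following is a minimal sketch, assuming the cdi.kubevirt.io/cloneFallbackReason annotation key is the one applied by the CDI:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-target
  namespace: test-ns
  annotations:
    cdi.kubevirt.io/cloneFallbackReason: The volume modes of source and target are incompatible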
Example event
NAMESPACE   LAST SEEN   TYPE      REASON                    OBJECT                               MESSAGE
test-ns     0s          Warning   IncompatibleVolumeModes   persistentvolumeclaim/test-target    The volume modes of source and target are incompatible
7.2.3.2. Creating a VM from a PVC by using the web console
You can create a virtual machine (VM) by cloning a persistent volume claim (PVC) by using the Red Hat OpenShift Service on AWS web console.
Prerequisites
- You must have access to the namespace that contains the source PVC.
Procedure
- Navigate to Virtualization → Catalog in the web console.
- Click a template tile without an available boot source.
- Click Customize VirtualMachine.
- On the Customize template parameters page, expand Storage and select PVC (clone PVC) from the Disk source list.
- Select the PVC project and the PVC name.
- Set the disk size.
- Click Next.
- Click Create VirtualMachine.
7.2.3.3. Creating a VM from a PVC by using the CLI
You can create a virtual machine (VM) by cloning the persistent volume claim (PVC) of an existing VM by using the command line.
You can clone a PVC by using one of the following options:
Cloning a PVC to a new data volume.
This method creates a data volume whose lifecycle is independent of the original VM. Deleting the original VM does not affect the new data volume or its associated PVC.
Cloning a PVC by creating a VirtualMachine manifest with a dataVolumeTemplates stanza.
This method creates a data volume whose lifecycle is dependent on the original VM. Deleting the original VM deletes the cloned data volume and its associated PVC.
7.2.3.3.1. Optimizing clone performance at scale in OpenShift Data Foundation
When you use OpenShift Data Foundation, the storage profile configures the default cloning strategy as csi-clone. However, this method has limitations: after a certain number of clones are created from a persistent volume claim (PVC), a background flattening process begins, which can significantly reduce clone creation performance at scale.
To improve performance when creating hundreds of clones from a single source PVC, use the VolumeSnapshot
cloning method instead of the default csi-clone
strategy.
Procedure
Create a VolumeSnapshot custom resource (CR) of the source image. A hedged sketch of such a CR is shown at the end of this procedure.

- Add the spec.source.snapshot stanza to reference the VolumeSnapshot as the source for the DataVolume clone:
spec:
  source:
    snapshot:
      namespace: golden-ns
      name: golden-volumesnapshot
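The original VolumeSnapshot content was not preserved in this export. The following is a minimal sketch that matches the names referenced in the spec.source.snapshot stanza above; the snapshot class and source PVC names are placeholders:

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: golden-volumesnapshot
  namespace: golden-ns
spec:
  volumeSnapshotClassName: <volume_snapshot_class>        # Placeholder for your snapshot class.
  source:
    persistentVolumeClaimName: <source_golden_image_pvc>  # Placeholder for the source image PVC.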
7.2.3.3.2. Cloning a PVC to a data volume
You can clone the persistent volume claim (PVC) of an existing virtual machine (VM) disk to a data volume by using the command line.
You create a data volume that references the original source PVC. The lifecycle of the new data volume is independent of the original VM. Deleting the original VM does not affect the new data volume or its associated PVC.
Cloning between different volume modes is supported for host-assisted cloning, such as cloning from a block persistent volume (PV) to a file system PV, as long as the source and target PVs belong to the kubevirt
content type.
Smart-cloning is faster and more efficient than host-assisted cloning because it uses snapshots to clone PVCs. Smart-cloning is supported by storage providers that support snapshots, such as Red Hat OpenShift Data Foundation.
Cloning between different volume modes is not supported for smart-cloning.
Prerequisites
- You have installed the OpenShift CLI (oc).
- The VM with the source PVC must be powered down.
- If you clone a PVC to a different namespace, you must have permissions to create resources in the target namespace.
Additional prerequisites for smart-cloning:
- Your storage provider must support snapshots.
- The source and target PVCs must have the same storage provider and volume mode.
The value of the driver key of the VolumeSnapshotClass object must match the value of the provisioner key of the StorageClass object as shown in the following example:

Example VolumeSnapshotClass object
kind: VolumeSnapshotClass
apiVersion: snapshot.storage.k8s.io/v1
driver: openshift-storage.rbd.csi.ceph.com
# ...

Example StorageClass object
kind: StorageClass
apiVersion: storage.k8s.io/v1
# ...
provisioner: openshift-storage.rbd.csi.ceph.com
Procedure
Create a DataVolume manifest. A hedged sketch of such a manifest is shown after this procedure.

Create the data volume by running the following command:

$ oc create -f <datavolume>.yaml

Note: Data volumes prevent a VM from starting before the PVC is prepared. You can create a VM that references the new data volume while the PVC is being cloned.
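The original DataVolume example was not preserved in this export. The following is a minimal sketch of a PVC-clone data volume; the names and the storage size are placeholders:

apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: <datavolume>                   # Name of the new data volume.
spec:
  source:
    pvc:
      namespace: <source_namespace>    # Namespace of the source PVC.
      name: <source_pvc>               # Name of the source PVC.
  storage:
    resources:
      requests:
        storage: <size>                # At least the size of the source disk, for example 10Gi.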
7.2.3.3.3. Creating a VM from a cloned PVC by using a data volume template
You can create a virtual machine (VM) that clones the persistent volume claim (PVC) of an existing VM by using a data volume template. This method creates a data volume whose lifecycle is dependent on the original VM.
Prerequisites
- The VM with the source PVC must be powered down.
- You have installed the virtctl CLI.
- You have installed the OpenShift CLI (oc).
Procedure
Create a VirtualMachine manifest for your VM and save it as a YAML file, for example:

$ virtctl create vm --name rhel-9-clone --volume-import type:pvc,src:my-project/imported-volume-q5pr9

Review the VirtualMachine manifest for your VM. A hedged sketch of such a manifest is shown after this procedure.

Create the virtual machine with the PVC-cloned data volume:

$ oc create -f <vm_manifest_file>.yaml
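The manifest printed by the virtctl create vm command was not preserved in this export. The following is a minimal sketch; the generated volume name, the memory value, and the runStrategy value are assumptions:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: rhel-9-clone
spec:
  dataVolumeTemplates:
  - metadata:
      name: rhel-9-clone-volume        # Assumed volume name; virtctl generates one.
    spec:
      source:
        pvc:
          namespace: my-project
          name: imported-volume-q5pr9
      storage: {}
  runStrategy: Always                  # Assumed default run strategy.
  template:
    spec:
      domain:
        devices: {}
        memory:
          guest: 2Gi                   # Example value; no instance type is set on the command line.
      volumes:
      - dataVolume:
          name: rhel-9-clone-volume
        name: rhel-9-clone-volume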
Chapter 8. Managing VMs
8.1. Listing virtual machines
You can list available virtual machines (VMs) by using the web console or the OpenShift CLI (oc
).
8.1.1. Listing virtual machines by using the CLI
You can either list all of the virtual machines (VMs) in your cluster or limit the list to VMs in a specified namespace by using the OpenShift CLI (oc
).
Prerequisites
- You have installed the OpenShift CLI (oc).
Procedure
List all of the VMs in your cluster by running the following command:
$ oc get vms -A

List all of the VMs in a specific namespace by running the following command:

$ oc get vms -n <namespace>
8.1.2. Listing virtual machines by using the web console
You can list all of the virtual machines (VMs) in your cluster by using the web console.
Procedure
- Click Virtualization → VirtualMachines from the side menu to access the tree view with all of the projects and VMs in your cluster.
- Optional: Enable the Show only projects with VirtualMachines option above the tree view to limit the displayed projects.
- Optional: Click the Advanced search button next to the search bar to further filter VMs by one of the following: their name, the project they belong to, their labels, or the allocated vCPU and memory resources.
8.1.3. Organizing virtual machines by using the web console
In addition to creating virtual machines (VMs) in different projects, you can use the tree view to further organize them in folders.
Procedure
- Click Virtualization → VirtualMachines from the side menu to access the tree view with all projects and VMs in your cluster.
Perform one of the following actions depending on your use case:
To move the VM to a new folder in the same project:
- Right-click the name of the VM in the tree view.
- Select Move to folder from the menu.
- Type the name of the folder to create in the "Search folder" bar.
- Click Create folder in the drop-down list.
- Click Save.
To move the VM to an existing folder in the same project:
- Click the name of the VM in the tree view and drag it to a folder in the same project. If the operation is permitted, the folder is highlighted in green when you drag the VM over it.
To move the VM from a folder to the project:
- Click the name of the VM in the tree view and drag it on the project name. If the operation is permitted, the project name is highlighted in green when you drag the VM over it.
8.2. Installing the QEMU guest agent and VirtIO drivers
The QEMU guest agent is a daemon that runs on the virtual machine (VM) and passes information to the host about the VM, users, file systems, and secondary networks.
You must install the QEMU guest agent on VMs created from operating system images that are not provided by Red Hat.
8.2.1. Installing the QEMU guest agent
8.2.1.1. Installing the QEMU guest agent on a Linux VM
The qemu-guest-agent is available by default in Red Hat Enterprise Linux (RHEL) virtual machines (VMs).
To create snapshots of a VM in the Running
state with the highest integrity, install the QEMU guest agent.
The QEMU guest agent takes a consistent snapshot by attempting to quiesce the VM file system. This ensures that in-flight I/O is written to the disk before the snapshot is taken. If the guest agent is not present, quiescing is not possible and a best-effort snapshot is taken.
The conditions under which a snapshot is taken are reflected in the snapshot indications that are displayed in the web console or CLI. If these conditions do not meet your requirements, try creating the snapshot again, or use an offline snapshot.
Prerequisites
- You have installed the OpenShift CLI (oc).
Procedure
- Log in to the VM by using a console or SSH.
Install the QEMU guest agent by running the following command:
$ yum install -y qemu-guest-agent

Ensure the service is persistent and start it:

$ systemctl enable --now qemu-guest-agent

Verification
Run the following command to verify that AgentConnected is listed in the VM spec:

$ oc get vm <vm_name>
8.2.1.2. Installing the QEMU guest agent on a Windows VM
For Windows virtual machines (VMs), the QEMU guest agent is included in the VirtIO drivers. You can install the drivers during a Windows installation or on an existing Windows VM.
To create snapshots of a VM in the Running
state with the highest integrity, install the QEMU guest agent.
The QEMU guest agent takes a consistent snapshot by attempting to quiesce the VM file system. This ensures that in-flight I/O is written to the disk before the snapshot is taken. If the guest agent is not present, quiescing is not possible and a best-effort snapshot is taken.
Note that in a Windows guest operating system, quiescing also requires the Volume Shadow Copy Service (VSS). Therefore, before you create a snapshot, ensure that VSS is enabled on the VM as well.
The conditions under which a snapshot is taken are reflected in the snapshot indications that are displayed in the web console or CLI. If these conditions do not meet your requirements, try creating the snapshot again or use an offline snapshot.
Procedure
- In the Windows guest operating system, use the File Explorer to navigate to the guest-agent directory in the virtio-win CD drive.
- Run the qemu-ga-x86_64.msi installer.
Verification
Obtain a list of network services by running the following command:

$ net start

- Verify that the output contains the QEMU Guest Agent.
8.2.2. Installing VirtIO drivers on Windows VMs
VirtIO drivers are paravirtualized device drivers required for Microsoft Windows virtual machines (VMs) to run in OpenShift Virtualization. The drivers are shipped with the rest of the images and do not require a separate download.
The container-native-virtualization/virtio-win
container disk must be attached to the VM as a SATA CD drive to enable driver installation. You can install VirtIO drivers during Windows installation or add them to an existing Windows installation.
After the drivers are installed, the container-native-virtualization/virtio-win
container disk can be removed from the VM.
Driver name | Hardware ID | Description
---|---|---
viostor | VEN_1AF4&DEV_1001 | The block driver. Sometimes labeled as an SCSI Controller in the Other devices group.
viorng | VEN_1AF4&DEV_1005 | The entropy source driver. Sometimes labeled as a PCI Device in the Other devices group.
NetKVM | VEN_1AF4&DEV_1000 | The network driver. Sometimes labeled as an Ethernet Controller in the Other devices group. Available only if a VirtIO NIC is configured.
8.2.2.1. Attaching VirtIO container disk to Windows VMs during installation
You must attach the VirtIO container disk to the Windows VM to install the necessary Windows drivers. This can be done during creation of the VM.
Procedure
- When creating a Windows VM from a template, click Customize VirtualMachine.
- Select Mount Windows drivers disk.
- Click Customize VirtualMachine parameters.
- Click Create VirtualMachine.
After the VM is created, the virtio-win
SATA CD disk will be attached to the VM.
8.2.2.2. Attaching VirtIO container disk to an existing Windows VM
You must attach the VirtIO container disk to the Windows VM to install the necessary Windows drivers. This can be done to an existing VM.
Procedure
- Navigate to the existing Windows VM, and click Actions → Stop.
- Go to VM Details → Configuration → Storage.
- Select the Mount Windows drivers disk checkbox.
- Click Save.
- Start the VM, and connect to a graphical console.
8.2.2.3. Installing VirtIO drivers during Windows installation
You can install the VirtIO drivers while installing Windows on a virtual machine (VM).
This procedure uses a generic approach to the Windows installation and the installation method might differ between versions of Windows. See the documentation for the version of Windows that you are installing.
Prerequisites
- A storage device containing the virtio drivers must be attached to the VM.
Procedure
- In the Windows operating system, use the File Explorer to navigate to the virtio-win CD drive.
- Double-click the drive to run the appropriate installer for your VM. For a 64-bit vCPU, select the virtio-win-gt-x64 installer. 32-bit vCPUs are no longer supported.
- After the installation is complete, select Finish.
- Reboot the VM.
Verification
- Open the system disk on the PC. This is typically C:.
- Navigate to Program Files → Virtio-Win.
If the Virtio-Win directory is present and contains a sub-directory for each driver, the installation was successful.
8.2.2.4. Installing VirtIO drivers from a SATA CD drive on an existing Windows VM
You can install the VirtIO drivers from a SATA CD drive on an existing Windows virtual machine (VM).
This procedure uses a generic approach to adding drivers to Windows. See the installation documentation for your version of Windows for specific installation steps.
Prerequisites
- A storage device containing the virtio drivers must be attached to the VM as a SATA CD drive.
Procedure
- Start the VM and connect to a graphical console.
- Log in to a Windows user session.
Open Device Manager and expand Other devices to list any Unknown device.
- Open the Device Properties to identify the unknown device.
- Right-click the device and select Properties.
- Click the Details tab and select Hardware Ids in the Property list.
- Compare the Value for the Hardware Ids with the supported VirtIO drivers.
- Right-click the device and select Update Driver Software.
- Click Browse my computer for driver software and browse to the attached SATA CD drive, where the VirtIO drivers are located. The drivers are arranged hierarchically according to their driver type, operating system, and CPU architecture.
- Click Next to install the driver.
- Repeat this process for all the necessary VirtIO drivers.
- After the driver installs, click Close to close the window.
- Reboot the VM to complete the driver installation.
8.2.2.5. Installing VirtIO drivers from a container disk added as a SATA CD drive
You can install VirtIO drivers from a container disk that you add to a Windows virtual machine (VM) as a SATA CD drive.
Downloading the container-native-virtualization/virtio-win
container disk from the Red Hat Ecosystem Catalog is not mandatory, because the container disk is downloaded from the Red Hat registry if it is not already present in the cluster. However, downloading reduces the installation time.
Prerequisites
- You must have access to the Red Hat registry or to the downloaded container-native-virtualization/virtio-win container disk in a restricted environment.
- You have installed the virtctl CLI.
- You have installed the OpenShift CLI (oc).
Procedure
Add the container-native-virtualization/virtio-win container disk as a CD drive by editing the VirtualMachine manifest. A hedged sketch of the relevant fields is shown after this procedure.

OpenShift Virtualization boots the VM disks in the order defined in the VirtualMachine manifest. You can either define other VM disks that boot before the container-native-virtualization/virtio-win container disk or use the optional bootOrder parameter to ensure the VM boots from the correct disk. If you configure the boot order for a disk, you must configure the boot order for the other disks.
Apply the changes:
If the VM is not running, run the following command:
$ virtctl start <vm> -n <namespace>

If the VM is running, reboot the VM or run the following command:

$ oc apply -f <vm.yaml>
- After the VM has started, install the VirtIO drivers from the SATA CD drive.
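The original manifest excerpt was not preserved in this export. The following is a minimal sketch of the fields to add, assuming a SATA CD-ROM disk named virtiocontainerdisk; adjust names and boot order to your VM:

spec:
  template:
    spec:
      domain:
        devices:
          disks:
          - name: virtiocontainerdisk
            bootOrder: 2               # Optional; see the note about boot order in the procedure above.
            cdrom:
              bus: sata
      volumes:
      - name: virtiocontainerdisk
        containerDisk:
          image: container-native-virtualization/virtio-win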
8.2.3. Updating VirtIO drivers
8.2.3.1. Updating VirtIO drivers on a Windows VM
Update the virtio
drivers on a Windows virtual machine (VM) by using the Windows Update service.
Prerequisites
- The cluster must be connected to the internet. Disconnected clusters cannot reach the Windows Update service.
Procedure
- In the Windows Guest operating system, click the Windows key and select Settings.
- Navigate to Windows Update → Advanced Options → Optional Updates.
- Install all updates from Red Hat, Inc..
- Reboot the VM.
Verification
- On the Windows VM, navigate to the Device Manager.
- Select a device.
- Select the Driver tab.
- Click Driver Details and confirm that the virtio driver details display the correct version.
8.3. Connecting to virtual machine consoles
You can connect to the following consoles to access running virtual machines (VMs):
8.3.1. Connecting to the VNC console
You can connect to the VNC console of a virtual machine by using the Red Hat OpenShift Service on AWS web console or the virtctl
command-line tool.
8.3.1.1. Connecting to the VNC console by using the web console
You can connect to the VNC console of a virtual machine (VM) by using the Red Hat OpenShift Service on AWS web console.
If you connect to a Windows VM with a vGPU assigned as a mediated device, you can switch between the default display and the vGPU display.
Procedure
- On the Virtualization → VirtualMachines page, click a VM to open the VirtualMachine details page.
- Click the Console tab. The VNC console session starts automatically.
Optional: To switch to the vGPU display of a Windows VM, select Ctl + Alt + 2 from the Send key list.
- Select Ctl + Alt + 1 from the Send key list to restore the default display.
- To end the console session, click outside the console pane and then click Disconnect.
8.3.1.2. Connecting to the VNC console by using virtctl
You can use the virtctl
command-line tool to connect to the VNC console of a running virtual machine.
If you run the virtctl vnc
command on a remote machine over an SSH connection, you must forward the X session to your local machine by running the ssh
command with the -X
or -Y
flags.
Prerequisites
- You must install the virt-viewer package.
Procedure
Run the following command to start the console session:
$ virtctl vnc <vm_name>

If the connection fails, run the following command to collect troubleshooting information:

$ virtctl vnc <vm_name> -v 4
8.3.1.3. Generating a temporary token for the VNC console
To access the VNC of a virtual machine (VM), generate a temporary authentication bearer token for the Kubernetes API.
Kubernetes also supports authentication using client certificates, instead of a bearer token, by modifying the curl command.
Prerequisites
- A running VM with OpenShift Virtualization 4.14 or later and ssp-operator 4.14 or later.
- You have installed the OpenShift CLI (oc).
Procedure
Set the deployVmConsoleProxy field value in the HyperConverged (HCO) custom resource (CR) to true:

$ oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type json -p '[{"op": "replace", "path": "/spec/deployVmConsoleProxy", "value": true}]'

Generate a token by entering the following command:

$ curl --header "Authorization: Bearer ${TOKEN}" \
  "https://api.<cluster_fqdn>/apis/token.kubevirt.io/v1alpha1/namespaces/<namespace>/virtualmachines/<vm_name>/vnc?duration=<duration>"

The <duration> parameter can be set in hours and minutes, with a minimum duration of 10 minutes. For example: 5h30m. If this parameter is not set, the token is valid for 10 minutes by default.

Sample output:
{ "token": "eyJhb..." }

Optional: Use the token provided in the output to create a variable:

$ export VNC_TOKEN="<token>"
You can now use the token to access the VNC console of a VM.
Verification
Log in to the cluster by entering the following command:
$ oc login --token ${VNC_TOKEN}

Test access to the VNC console of the VM by using the virtctl command:

$ virtctl vnc <vm_name> -n <namespace>
It is currently not possible to revoke a specific token.
To revoke a token, you must delete the service account that was used to create it. However, this also revokes all other tokens that were created by using the service account. Use the following command with caution:
$ virtctl delete serviceaccount --namespace "<namespace>" "<vm_name>-vnc-access"
8.3.1.3.1. Granting token generation permission for the VNC console by using the cluster role
As a cluster administrator, you can install a cluster role and bind it to a user or service account to allow access to the endpoint that generates tokens for the VNC console.
Procedure
Choose to bind the cluster role to either a user or service account.
Run the following command to bind the cluster role to a user:
$ kubectl create rolebinding "${ROLE_BINDING_NAME}" --clusterrole="token.kubevirt.io:generate" --user="${USER_NAME}"

Run the following command to bind the cluster role to a service account:

$ kubectl create rolebinding "${ROLE_BINDING_NAME}" --clusterrole="token.kubevirt.io:generate" --serviceaccount="${SERVICE_ACCOUNT_NAME}"
8.3.2. Connecting to the serial console
You can connect to the serial console of a virtual machine by using the Red Hat OpenShift Service on AWS web console or the virtctl
command-line tool.
Running concurrent VNC connections to a single virtual machine is not currently supported.
8.3.2.1. Connecting to the serial console by using the web console
You can connect to the serial console of a virtual machine (VM) by using the Red Hat OpenShift Service on AWS web console.
If you connect to a Windows VM with a vGPU assigned as a mediated device, you can switch between the default display and the vGPU display.
Procedure
- On the Virtualization → VirtualMachines page, click a VM to open the VirtualMachine details page.
- Click the Console tab. The VNC console session starts automatically.
- Click Disconnect to end the VNC console session. Otherwise, the VNC console session continues to run in the background.
- Select Serial console from the console list.
Optional: To switch to the vGPU display of a Windows VM, select Ctrl + Alt + 2 from the Send key list.
- Select Ctrl + Alt + 1 from the Send key list to restore the default display.
- To end the console session, click outside the console pane and then click Disconnect.
8.3.2.2. Connecting to the serial console by using virtctl Copiar enlaceEnlace copiado en el portapapeles!
You can use the virtctl
command-line tool to connect to the serial console of a running virtual machine.
If you run the virtctl vnc command on a remote machine over an SSH connection, you must forward the X session to your local machine by running the ssh command with the -X or -Y flags.
Prerequisites
-
You must install the
virt-viewer
package.
Procedure
Run the following command to start the console session:
$ virtctl console <vm_name>
Press Ctrl+] to end the console session.
If the connection fails, run the following command to collect troubleshooting information:
$ virtctl vnc <vm_name> -v 4
8.3.3. Connecting to the desktop viewer Copiar enlaceEnlace copiado en el portapapeles!
You can connect to a Windows virtual machine (VM) by using the desktop viewer and the Remote Desktop Protocol (RDP).
8.3.3.1. Connecting to the desktop viewer by using the web console Copiar enlaceEnlace copiado en el portapapeles!
You can connect to the desktop viewer of a Windows virtual machine (VM) by using the Red Hat OpenShift Service on AWS web console.
If you connect to a Windows VM with a vGPU assigned as a mediated device, you can switch between the default display and the vGPU display.
Prerequisites
- You installed the QEMU guest agent on the Windows VM.
- You have an RDP client installed.
Procedure
- On the Virtualization → VirtualMachines page, click a VM to open the VirtualMachine details page.
- Click the Console tab. The VNC console session starts automatically.
- Click Disconnect to end the VNC console session. Otherwise, the VNC console session continues to run in the background.
- Select Desktop viewer from the console list.
- Click Create RDP Service to open the RDP Service dialog.
- Select Expose RDP Service and click Save to create a node port service.
- Click Launch Remote Desktop to download an .rdp file and launch the desktop viewer.
- Optional: To switch to the vGPU display of a Windows VM, select Ctrl + Alt + 2 from the Send key list.
- Select Ctrl + Alt + 1 from the Send key list to restore the default display.
- To end the console session, click outside the console pane and then click Disconnect.
8.4. Configuring SSH access to virtual machines Copiar enlaceEnlace copiado en el portapapeles!
You can configure SSH access to virtual machines (VMs) by using the following methods:
- You create an SSH key pair, add the public key to a VM, and connect to the VM by running the virtctl ssh command with the private key. You can add public SSH keys to Red Hat Enterprise Linux (RHEL) 9 VMs at runtime, or at first boot to VMs with guest operating systems that can be configured by using a cloud-init data source.
- You add the virtctl port-forward command to your .ssh/config file and connect to the VM by using OpenSSH.
- You create a service, associate the service with the VM, and connect to the IP address and port exposed by the service.
- You configure a secondary network, attach a virtual machine (VM) to the secondary network interface, and connect to the DHCP-allocated IP address.
8.4.1. Access configuration considerations Copiar enlaceEnlace copiado en el portapapeles!
Each method for configuring access to a virtual machine (VM) has advantages and limitations, depending on the traffic load and client requirements.
Services provide excellent performance and are recommended for applications that are accessed from outside the cluster.
If the internal cluster network cannot handle the traffic load, you can configure a secondary network.
virtctl ssh and virtctl port-forwarding commands
- Simple to configure.
- Recommended for troubleshooting VMs.
- virtctl port-forwarding recommended for automated configuration of VMs with Ansible.
- Dynamic public SSH keys can be used to provision VMs with Ansible.
- Not recommended for high-traffic applications like Rsync or Remote Desktop Protocol because of the burden on the API server.
- The API server must be able to handle the traffic load.
- The clients must be able to access the API server.
- The clients must have access credentials for the cluster.
- Cluster IP service
- The internal cluster network must be able to handle the traffic load.
- The clients must be able to access an internal cluster IP address.
- Node port service
- The internal cluster network must be able to handle the traffic load.
- The clients must be able to access at least one node.
- Load balancer service
- A load balancer must be configured.
- Each node must be able to handle the traffic load of one or more load balancer services.
- Secondary network
- Excellent performance because traffic does not go through the internal cluster network.
- Allows a flexible approach to network topology.
- Guest operating system must be configured with appropriate security because the VM is exposed directly to the secondary network. If a VM is compromised, an intruder could gain access to the secondary network.
8.4.2. Using virtctl ssh Copiar enlaceEnlace copiado en el portapapeles!
You can add a public SSH key to a virtual machine (VM) and connect to the VM by running the virtctl ssh
command.
This method is simple to configure. However, it is not recommended for high traffic loads because it places a burden on the API server.
8.4.2.1. About static and dynamic SSH key management Copiar enlaceEnlace copiado en el portapapeles!
You can add public SSH keys to virtual machines (VMs) statically at first boot or dynamically at runtime.
Only Red Hat Enterprise Linux (RHEL) 9 supports dynamic key injection.
Static SSH key management
You can add a statically managed SSH key to a VM with a guest operating system that supports configuration by using a cloud-init data source. The key is added to the virtual machine (VM) at first boot.
You can add the key by using one of the following methods:
- Add a key to a single VM when you create it by using the web console or the command line.
- Add a key to a project by using the web console. Afterwards, the key is automatically added to the VMs that you create in this project.
Use cases
- As a VM owner, you can provision all your newly created VMs with a single key.
Dynamic SSH key management
You can enable dynamic SSH key management for a VM with Red Hat Enterprise Linux (RHEL) 9 installed. Afterwards, you can update the key during runtime. The key is added by the QEMU guest agent, which is installed with Red Hat boot sources.
When dynamic key management is disabled, the default key management setting of a VM is determined by the image used for the VM.
Use cases
- Granting or revoking access to VMs: As a cluster administrator, you can grant or revoke remote VM access by adding or removing the keys of individual users from a Secret object that is applied to all VMs in a namespace.
- User access: You can add your access credentials to all VMs that you create and manage.
Ansible provisioning:
- As an operations team member, you can create a single secret that contains all the keys used for Ansible provisioning.
- As a VM owner, you can create a VM and attach the keys used for Ansible provisioning.
Key rotation:
- As a cluster administrator, you can rotate the Ansible provisioner keys used by VMs in a namespace.
- As a workload owner, you can rotate the key for the VMs that you manage.
8.4.2.2. Static key management Copiar enlaceEnlace copiado en el portapapeles!
You can add a statically managed public SSH key when you create a virtual machine (VM) by using the Red Hat OpenShift Service on AWS web console or the command line. The key is added as a cloud-init data source when the VM boots for the first time.
You can also add a public SSH key to a project when you create a VM by using the web console. The key is saved as a secret and is added automatically to all VMs that you create.
If you add a secret to a project and then delete the VM, the secret is retained because it is a namespace resource. You must delete the secret manually.
8.4.2.2.1. Adding a key when creating a VM from a template Copiar enlaceEnlace copiado en el portapapeles!
You can add a statically managed public SSH key when you create a virtual machine (VM) by using the Red Hat OpenShift Service on AWS web console. The key is added to the VM as a cloud-init data source at first boot. This method does not affect cloud-init user data.
Optional: You can add a key to a project. Afterwards, this key is added automatically to VMs that you create in the project.
Prerequisites
-
You generated an SSH key pair by running the
ssh-keygen
command.
Procedure
- Navigate to Virtualization → Catalog in the web console.
Click a template tile.
The guest operating system must support configuration from a cloud-init data source.
- Click Customize VirtualMachine.
- Click Next.
- Click the Scripts tab.
If you have not already added a public SSH key to your project, click the edit icon beside Authorized SSH key and select one of the following options:
- Use existing: Select a secret from the secrets list.
Add new:
- Browse to the SSH key file or paste the file in the key field.
- Enter the secret name.
- Optional: Select Automatically apply this key to any new VirtualMachine you create in this project.
- Click Save.
Click Create VirtualMachine.
The VirtualMachine details page displays the progress of the VM creation.
Verification
Click the Scripts tab on the Configuration tab.
The secret name is displayed in the Authorized SSH key section.
8.4.2.2.2. Creating a VM from an instance type by using the web console Copiar enlaceEnlace copiado en el portapapeles!
You can create a virtual machine (VM) from an instance type by using the Red Hat OpenShift Service on AWS web console. You can also use the web console to create a VM by copying an existing snapshot or by cloning a VM.
You can create a VM from a list of available bootable volumes. You can add Linux- or Windows-based volumes to the list.
You can add a statically managed SSH key when you create a virtual machine (VM) from an instance type by using the Red Hat OpenShift Service on AWS web console. The key is added to the VM as a cloud-init data source at first boot. This method does not affect cloud-init user data.
Procedure
In the web console, navigate to Virtualization → Catalog.
The InstanceTypes tab opens by default.
Note: When configuring a downward-metrics device on an IBM Z® system that uses a VM preference, set the spec.preference.name value to rhel.9.s390x or another available preference with the format *.s390x.
Select either of the following options:
Select a suitable bootable volume from the list. If the list is truncated, click the Show all button to display the entire list.
Note: The bootable volume table lists only those volumes in the openshift-virtualization-os-images namespace that have the instancetype.kubevirt.io/default-preference label.
- Optional: Click the star icon to designate a bootable volume as a favorite. Starred bootable volumes appear first in the volume list.
Click Add volume to upload a new volume or to use an existing persistent volume claim (PVC), a volume snapshot, or a containerDisk volume. Click Save.
Logos of operating systems that are not available in the cluster are shown at the bottom of the list. You can add a volume for the required operating system by clicking the Add volume link.
In addition, there is a link to the Create a Windows bootable volume quick start. The same link appears in a popover if you hover the pointer over the question mark icon next to the Select volume to boot from line.
Immediately after you install the environment or when the environment is disconnected, the list of volumes to boot from is empty. In that case, three operating system logos are displayed: Windows, RHEL, and Linux. You can add a new volume that meets your requirements by clicking the Add volume button.
- Click an instance type tile and select the resource size appropriate for your workload.
Optional: Choose the virtual machine details, including the VM’s name, that apply to the volume you are booting from:
For a Linux-based volume, follow these steps to configure SSH:
- If you have not already added a public SSH key to your project, click the edit icon beside Authorized SSH key in the VirtualMachine details section.
Select one of the following options:
- Use existing: Select a secret from the secrets list.
Add new: Follow these steps:
- Browse to the public SSH key file or paste the file in the key field.
- Enter the secret name.
- Optional: Select Automatically apply this key to any new VirtualMachine you create in this project.
- Click Save.
For a Windows volume, follow either of these sets of steps to configure sysprep options:
If you have not already added sysprep options for the Windows volume, follow these steps:
- Click the edit icon beside Sysprep in the VirtualMachine details section.
- Add the Autounattend.xml answer file.
- Add the Unattend.xml answer file.
- Click Save.
If you want to use existing sysprep options for the Windows volume, follow these steps:
- Click Attach existing sysprep.
- Enter the name of the existing sysprep Unattend.xml answer file.
- Click Save.
Optional: If you are creating a Windows VM, you can mount a Windows driver disk:
- Click the Customize VirtualMachine button.
- On the VirtualMachine details page, click Storage.
- Select the Mount Windows drivers disk checkbox.
- Optional: Click View YAML & CLI to view the YAML file. Click CLI to view the CLI commands. You can also download or copy either the YAML file contents or the CLI commands.
- Click Create VirtualMachine.
After the VM is created, you can monitor the status on the VirtualMachine details page.
8.4.2.2.3. Adding a key when creating a VM by using the CLI Copiar enlaceEnlace copiado en el portapapeles!
You can add a statically managed public SSH key when you create a virtual machine (VM) by using the command line. The key is added to the VM at first boot.
The key is added to the VM as a cloud-init data source. This method separates the access credentials from the application data in the cloud-init user data. This method does not affect cloud-init user data.
Prerequisites
-
You generated an SSH key pair by running the
ssh-keygen
command. -
You have installed the OpenShift CLI (
oc
).
Procedure
Create a manifest file for a VirtualMachine object and a Secret object:
Example manifest
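A minimal sketch of such a manifest, assuming a VM named example-vm in the example-namespace namespace, a Secret named authorized-keys that holds the public key, and cloud-init (noCloud) propagation of the key; depending on your OpenShift Virtualization version, configDrive propagation may be used instead:
apiVersion: v1
kind: Secret
metadata:
  name: authorized-keys
  namespace: example-namespace
stringData:
  key: ssh-ed25519 AAAA... user@example.com   # public key to inject (placeholder)
---
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm
  namespace: example-namespace
spec:
  runStrategy: Halted
  template:
    spec:
      accessCredentials:
      - sshPublicKey:
          source:
            secret:
              secretName: authorized-keys
          propagationMethod:
            noCloud: {}            # key is injected through the cloud-init data source at first boot
      domain:
        devices:
          disks:
          - name: cloudinitdisk
            disk:
              bus: virtio
      volumes:
      - name: cloudinitdisk
        cloudInitNoCloud:
          userData: |
            #cloud-config
            user: cloud-user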
Create the VirtualMachine and Secret objects by running the following command:
$ oc create -f <manifest_file>.yaml
Start the VM by running the following command:
$ virtctl start vm example-vm -n example-namespace
Verification
Get the VM configuration:
$ oc describe vm example-vm -n example-namespace
Example output
8.4.2.3. Dynamic key management Copiar enlaceEnlace copiado en el portapapeles!
You can enable dynamic key injection for a virtual machine (VM) by using the Red Hat OpenShift Service on AWS web console or the command line. Then, you can update the key at runtime.
Only Red Hat Enterprise Linux (RHEL) 9 supports dynamic key injection.
If you disable dynamic key injection, the VM inherits the key management method of the image from which it was created.
8.4.2.3.1. Enabling dynamic key injection when creating a VM from a template Copiar enlaceEnlace copiado en el portapapeles!
You can enable dynamic public SSH key injection when you create a virtual machine (VM) from a template by using the Red Hat OpenShift Service on AWS web console. Then, you can update the key at runtime.
Only Red Hat Enterprise Linux (RHEL) 9 supports dynamic key injection.
The key is added to the VM by the QEMU guest agent, which is installed with RHEL 9.
Prerequisites
-
You generated an SSH key pair by running the
ssh-keygen
command.
Procedure
- Navigate to Virtualization → Catalog in the web console.
- Click the Red Hat Enterprise Linux 9 VM tile.
- Click Customize VirtualMachine.
- Click Next.
- Click the Scripts tab.
If you have not already added a public SSH key to your project, click the edit icon beside Authorized SSH key and select one of the following options:
- Use existing: Select a secret from the secrets list.
Add new:
- Browse to the SSH key file or paste the file in the key field.
- Enter the secret name.
- Optional: Select Automatically apply this key to any new VirtualMachine you create in this project.
- Set Dynamic SSH key injection to on.
- Click Save.
Click Create VirtualMachine.
The VirtualMachine details page displays the progress of the VM creation.
Verification
Click the Scripts tab on the Configuration tab.
The secret name is displayed in the Authorized SSH key section.
8.4.2.3.2. Creating a VM from an instance type by using the web console Copiar enlaceEnlace copiado en el portapapeles!
You can create a virtual machine (VM) from an instance type by using the Red Hat OpenShift Service on AWS web console. You can also use the web console to create a VM by copying an existing snapshot or by cloning a VM.
You can create a VM from a list of available bootable volumes. You can add Linux- or Windows-based volumes to the list.
You can enable dynamic SSH key injection when you create a virtual machine (VM) from an instance type by using the Red Hat OpenShift Service on AWS web console. Then, you can add or revoke the key at runtime.
Only Red Hat Enterprise Linux (RHEL) 9 supports dynamic key injection.
The key is added to the VM by the QEMU guest agent, which is installed with RHEL 9.
Procedure
In the web console, navigate to Virtualization → Catalog.
The InstanceTypes tab opens by default.
Note: When configuring a downward-metrics device on an IBM Z® system that uses a VM preference, set the spec.preference.name value to rhel.9.s390x or another available preference with the format *.s390x.
Select either of the following options:
Select a suitable bootable volume from the list. If the list is truncated, click the Show all button to display the entire list.
Note: The bootable volume table lists only those volumes in the openshift-virtualization-os-images namespace that have the instancetype.kubevirt.io/default-preference label.
- Optional: Click the star icon to designate a bootable volume as a favorite. Starred bootable volumes appear first in the volume list.
Click Add volume to upload a new volume or to use an existing persistent volume claim (PVC), a volume snapshot, or a containerDisk volume. Click Save.
Logos of operating systems that are not available in the cluster are shown at the bottom of the list. You can add a volume for the required operating system by clicking the Add volume link.
In addition, there is a link to the Create a Windows bootable volume quick start. The same link appears in a popover if you hover the pointer over the question mark icon next to the Select volume to boot from line.
Immediately after you install the environment or when the environment is disconnected, the list of volumes to boot from is empty. In that case, three operating system logos are displayed: Windows, RHEL, and Linux. You can add a new volume that meets your requirements by clicking the Add volume button.
- Click an instance type tile and select the resource size appropriate for your workload.
- Click the Red Hat Enterprise Linux 9 VM tile.
Optional: Choose the virtual machine details, including the VM’s name, that apply to the volume you are booting from:
For a Linux-based volume, follow these steps to configure SSH:
- If you have not already added a public SSH key to your project, click the edit icon beside Authorized SSH key in the VirtualMachine details section.
Select one of the following options:
- Use existing: Select a secret from the secrets list.
Add new: Follow these steps:
- Browse to the public SSH key file or paste the file in the key field.
- Enter the secret name.
- Optional: Select Automatically apply this key to any new VirtualMachine you create in this project.
- Click Save.
For a Windows volume, follow either of these sets of steps to configure sysprep options:
If you have not already added sysprep options for the Windows volume, follow these steps:
- Click the edit icon beside Sysprep in the VirtualMachine details section.
- Add the Autounattend.xml answer file.
- Add the Unattend.xml answer file.
- Click Save.
If you want to use existing sysprep options for the Windows volume, follow these steps:
- Click Attach existing sysprep.
- Enter the name of the existing sysprep Unattend.xml answer file.
- Click Save.
- Set Dynamic SSH key injection in the VirtualMachine details section to on.
Optional: If you are creating a Windows VM, you can mount a Windows driver disk:
- Click the Customize VirtualMachine button.
- On the VirtualMachine details page, click Storage.
- Select the Mount Windows drivers disk checkbox.
- Optional: Click View YAML & CLI to view the YAML file. Click CLI to view the CLI commands. You can also download or copy either the YAML file contents or the CLI commands.
- Click Create VirtualMachine.
After the VM is created, you can monitor the status on the VirtualMachine details page.
8.4.2.3.3. Enabling dynamic SSH key injection by using the web console Copiar enlaceEnlace copiado en el portapapeles!
You can enable dynamic key injection for a virtual machine (VM) by using the Red Hat OpenShift Service on AWS web console. Then, you can update the public SSH key at runtime.
The key is added to the VM by the QEMU guest agent, which is installed with Red Hat Enterprise Linux (RHEL) 9.
Prerequisites
- The guest operating system is RHEL 9.
Procedure
- Navigate to Virtualization → VirtualMachines in the web console.
- Select a VM to open the VirtualMachine details page.
- On the Configuration tab, click Scripts.
If you have not already added a public SSH key to your project, click the edit icon beside Authorized SSH key and select one of the following options:
- Use existing: Select a secret from the secrets list.
Add new:
- Browse to the SSH key file or paste the file in the key field.
- Enter the secret name.
- Optional: Select Automatically apply this key to any new VirtualMachine you create in this project.
- Set Dynamic SSH key injection to on.
- Click Save.
8.4.2.3.4. Enabling dynamic key injection by using the CLI Copiar enlaceEnlace copiado en el portapapeles!
You can enable dynamic key injection for a virtual machine (VM) by using the command line. Then, you can update the public SSH key at runtime.
Only Red Hat Enterprise Linux (RHEL) 9 supports dynamic key injection.
The key is added to the VM by the QEMU guest agent, which is installed automatically with RHEL 9.
Prerequisites
-
You generated an SSH key pair by running the
ssh-keygen
command. -
You have installed the OpenShift CLI (
oc
).
Procedure
Create a manifest file for a VirtualMachine object and a Secret object:
Example manifest
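A minimal sketch of such a manifest, assuming a RHEL 9 VM named example-vm, a Secret named authorized-keys, and a cloud-user guest account; the key is propagated by the QEMU guest agent so that it can be updated at runtime:
apiVersion: v1
kind: Secret
metadata:
  name: authorized-keys
  namespace: example-namespace
stringData:
  key: ssh-ed25519 AAAA... user@example.com   # public key to inject (placeholder)
---
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm
  namespace: example-namespace
spec:
  runStrategy: Halted
  template:
    spec:
      accessCredentials:
      - sshPublicKey:
          source:
            secret:
              secretName: authorized-keys
          propagationMethod:
            qemuGuestAgent:        # the guest agent applies key changes at runtime
              users:
              - cloud-user         # assumed guest user name
      domain:
        devices: {}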
Create the VirtualMachine and Secret objects by running the following command:
$ oc create -f <manifest_file>.yaml
Start the VM by running the following command:
$ virtctl start vm example-vm -n example-namespace
Verification
Get the VM configuration:
$ oc describe vm example-vm -n example-namespace
Example output
8.4.2.4. Using the virtctl ssh command Copiar enlaceEnlace copiado en el portapapeles!
You can access a running virtual machine (VM) by using the virtctl ssh
command.
Prerequisites
-
You installed the
virtctl
command-line tool. - You added a public SSH key to the VM.
- You have an SSH client installed.
-
The environment where you installed the
virtctl
tool has the cluster permissions required to access the VM. For example, you ranoc login
or you set theKUBECONFIG
environment variable.
Procedure
Run the virtctl ssh command:
$ virtctl -n <namespace> ssh <username>@example-vm -i <ssh_key>
Specify the namespace, user name, and the SSH private key. The default SSH key location is /home/user/.ssh. If you save the key in a different location, you must specify the path.
Example
$ virtctl -n my-namespace ssh cloud-user@example-vm -i my-key
You can copy the virtctl ssh
command in the web console by selecting Copy SSH command from the options
menu beside a VM on the VirtualMachines page.
Alternatively, right-click the VM in the tree view and select Copy SSH command from the pop-up menu to copy the virtctl ssh
command.
8.4.3. Using the virtctl port-forward command Copiar enlaceEnlace copiado en el portapapeles!
You can use your local OpenSSH client and the virtctl port-forward
command to connect to a running virtual machine (VM). You can use this method with Ansible to automate the configuration of VMs.
This method is recommended for low-traffic applications because port-forwarding traffic is sent over the control plane. This method is not recommended for high-traffic applications such as Rsync or Remote Desktop Protocol because it places a heavy burden on the API server.
Prerequisites
-
You have installed the
virtctl
client. - The virtual machine you want to access is running.
-
The environment where you installed the
virtctl
tool has the cluster permissions required to access the VM. For example, you ranoc login
or you set theKUBECONFIG
environment variable.
Procedure
Add the following text to the ~/.ssh/config file on your client machine:
Host vm/*
  ProxyCommand virtctl port-forward --stdio=true %h %p
Connect to the VM by running the following command:
$ ssh <user>@vm/<vm_name>.<namespace>
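For example, with this configuration in place and a hypothetical VM named example-vm in the example-namespace namespace that has a cloud-user account, the connection command would look like the following:
$ ssh cloud-user@vm/example-vm.example-namespace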
8.4.4. Using a service for SSH access Copiar enlaceEnlace copiado en el portapapeles!
You can create a service for a virtual machine (VM) and connect to the IP address and port exposed by the service.
Services provide excellent performance and are recommended for applications that are accessed from outside the cluster or within the cluster. Ingress traffic is protected by firewalls.
If the cluster network cannot handle the traffic load, consider using a secondary network for VM access.
8.4.4.1. About services Copiar enlaceEnlace copiado en el portapapeles!
A Kubernetes service exposes network access for clients to an application running on a set of pods. Services offer abstraction, load balancing, and, in the case of the NodePort
and LoadBalancer
types, exposure to the outside world.
- ClusterIP
-
Exposes the service on an internal IP address and as a DNS name to other applications within the cluster. A single service can map to multiple virtual machines. When a client tries to connect to the service, the client’s request is load balanced among available backends.
ClusterIP
is the default service type. - NodePort
-
Exposes the service on the same port of each selected node in the cluster.
NodePort
makes a port accessible from outside the cluster, as long as the node itself is externally accessible to the client. - LoadBalancer
- Creates an external load balancer in the current cloud (if supported) and assigns a fixed, external IP address to the service.
For Red Hat OpenShift Service on AWS, you must use externalTrafficPolicy: Cluster
when configuring a load-balancing service, to minimize the network downtime during live migration.
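A minimal sketch of a load balancer service definition that follows this guidance; the VM label (special: key), service name, and port values are illustrative:
apiVersion: v1
kind: Service
metadata:
  name: example-lb-service
  namespace: example-namespace
spec:
  type: LoadBalancer
  externalTrafficPolicy: Cluster   # minimizes network downtime during live migration
  selector:
    special: key                   # must match a label on the VM pod template
  ports:
  - protocol: TCP
    port: 22
    targetPort: 22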
8.4.4.2. Creating a service Copiar enlaceEnlace copiado en el portapapeles!
You can create a service to expose a virtual machine (VM) by using the Red Hat OpenShift Service on AWS web console, virtctl
command-line tool, or a YAML file.
8.4.4.2.1. Enabling load balancer service creation by using the web console Copiar enlaceEnlace copiado en el portapapeles!
You can enable the creation of load balancer services for a virtual machine (VM) by using the Red Hat OpenShift Service on AWS web console.
Prerequisites
- You have configured a load balancer for the cluster.
-
You are logged in as a user with the
cluster-admin
role. - You created a network attachment definition for the network.
Procedure
- Navigate to Virtualization → Overview.
- On the Settings tab, click Cluster.
- Expand General settings and SSH configuration.
- Set SSH over LoadBalancer service to on.
8.4.4.2.2. Creating a service by using the web console Copiar enlaceEnlace copiado en el portapapeles!
You can create a node port or load balancer service for a virtual machine (VM) by using the Red Hat OpenShift Service on AWS web console.
Prerequisites
- You configured the cluster network to support either a load balancer or a node port.
- To create a load balancer service, you enabled the creation of load balancer services.
Procedure
- Navigate to VirtualMachines and select a virtual machine to view the VirtualMachine details page.
- On the Details tab, select SSH over LoadBalancer from the SSH service type list.
-
Optional: Click the copy icon to copy the
SSH
command to your clipboard.
Verification
- Check the Services pane on the Details tab to view the new service.
8.4.4.2.3. Creating a service by using virtctl Copiar enlaceEnlace copiado en el portapapeles!
You can create a service for a virtual machine (VM) by using the virtctl
command-line tool.
Prerequisites
-
You installed the
virtctl
command-line tool. - You configured the cluster network to support the service.
-
The environment where you installed
virtctl
has the cluster permissions required to access the VM. For example, you ranoc login
or you set theKUBECONFIG
environment variable.
Procedure
Create a service by running the following command:
$ virtctl expose vm <vm_name> --name <service_name> --type <service_type> --port <port>
Specify the ClusterIP, NodePort, or LoadBalancer service type.
Example
$ virtctl expose vm example-vm --name example-service --type NodePort --port 22
Verification
Verify the service by running the following command:
$ oc get service
Next steps
After you create a service with virtctl
, you must add special: key
to the spec.template.metadata.labels
stanza of the VirtualMachine
manifest. See Creating a service by using the command line.
8.4.4.2.4. Creating a service by using the CLI Copiar enlaceEnlace copiado en el portapapeles!
You can create a service and associate it with a virtual machine (VM) by using the command line.
Prerequisites
- You configured the cluster network to support the service.
-
You have installed the OpenShift CLI (
oc
).
Procedure
Edit the VirtualMachine manifest to add the label for service creation:
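A sketch of the relevant part of the VirtualMachine manifest with the label in place; special: key is the example label used throughout this section:
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm
  namespace: example-namespace
spec:
  template:
    metadata:
      labels:
        special: key   # the Service selector must match this label
    spec:
      domain:
        devices: {}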
- Add special: key to the spec.template.metadata.labels stanza.
Note: Labels on a virtual machine are passed through to the pod. The special: key label must match the label in the spec.selector attribute of the Service manifest.
- Save the VirtualMachine manifest file to apply your changes.
Create a Service manifest to expose the VM:
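A sketch of such a Service manifest, assuming the special: key label from the previous step, a NodePort service type, and SSH on guest port 22; the service port value is illustrative:
apiVersion: v1
kind: Service
metadata:
  name: example-service
  namespace: example-namespace
spec:
  type: NodePort
  selector:
    special: key      # matches spec.template.metadata.labels on the VM
  ports:
  - protocol: TCP
    port: 27017       # service port (illustrative)
    targetPort: 22    # SSH port inside the guest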
- Save the Service manifest file.
Create the service by running the following command:
$ oc create -f example-service.yaml
- Restart the VM to apply the changes.
Verification
Query the Service object to verify that it is available:
$ oc get service -n example-namespace
8.4.4.3. Connecting to a VM exposed by a service by using SSH Copiar enlaceEnlace copiado en el portapapeles!
You can connect to a virtual machine (VM) that is exposed by a service by using SSH.
Prerequisites
- You created a service to expose the VM.
- You have an SSH client installed.
- You are logged in to the cluster.
Procedure
Run the following command to access the VM:
$ ssh <user_name>@<ip_address> -p <port>
Specify the cluster IP for a cluster IP service, the node IP for a node port service, or the external IP address for a load balancer service.
8.4.5. Using a secondary network for SSH access Copiar enlaceEnlace copiado en el portapapeles!
You can configure a secondary network, attach a virtual machine (VM) to the secondary network interface, and connect to the DHCP-allocated IP address by using SSH.
Secondary networks provide excellent performance because the traffic is not handled by the cluster network stack. However, the VMs are exposed directly to the secondary network and are not protected by firewalls. If a VM is compromised, an intruder could gain access to the secondary network. You must configure appropriate security within the operating system of the VM if you use this method.
See the Multus and SR-IOV documentation in the OpenShift Virtualization Tuning & Scaling Guide for additional information about networking options.
Prerequisites
- You configured a secondary network.
- You created a network attachment definition.
8.4.5.1. Configuring a VM network interface by using the web console Copiar enlaceEnlace copiado en el portapapeles!
You can configure a network interface for a virtual machine (VM) by using the Red Hat OpenShift Service on AWS web console.
Prerequisites
- You created a network attachment definition for the network.
Procedure
- Navigate to Virtualization → VirtualMachines.
- Click a VM to view the VirtualMachine details page.
- On the Configuration tab, click the Network interfaces tab.
- Click Add network interface.
- Enter the interface name and select the network attachment definition from the Network list.
- Click Save.
- Restart or live migrate the VM to apply the changes.
8.4.5.2. Connecting to a VM attached to a secondary network by using SSH Copiar enlaceEnlace copiado en el portapapeles!
You can connect to a virtual machine (VM) attached to a secondary network by using SSH.
Prerequisites
- You attached a VM to a secondary network with a DHCP server.
- You have an SSH client installed.
-
You have installed the OpenShift CLI (
oc
).
Procedure
Obtain the IP address of the VM by running the following command:
$ oc describe vm <vm_name> -n <namespace>
Example output
Connect to the VM by running the following command:
$ ssh <user_name>@<ip_address> -i <ssh_key>
Example
$ ssh cloud-user@10.244.0.37 -i ~/.ssh/id_rsa_cloud-user
8.5. Editing virtual machines Copiar enlaceEnlace copiado en el portapapeles!
You can update a virtual machine (VM) configuration by using the Red Hat OpenShift Service on AWS web console. You can update the YAML file or the VirtualMachine details page.
You can also edit a VM by using the command line.
8.5.1. Changing the instance type of a VM Copiar enlaceEnlace copiado en el portapapeles!
You can change the instance type associated with a running virtual machine (VM) by using the web console. The change takes effect immediately.
Prerequisites
- You created the VM by using an instance type.
Procedure
- In the Red Hat OpenShift Service on AWS web console, click Virtualization → VirtualMachines.
- Select a VM to open the VirtualMachine details page.
- Click the Configuration tab.
- On the Details tab, click the instance type text to open the Edit Instancetype dialog. For example, click 1 CPU | 2 GiB Memory.
Edit the instance type by using the Series and Size lists.
- Select an item from the Series list to show the relevant sizes for that series. For example, select General Purpose.
- Select the VM’s new instance type from the Size list. For example, select medium: 1 CPUs, 4Gi Memory, which is available in the General Purpose series.
- Click Save.
Verification
- Click the YAML tab.
- Click Reload.
- Review the VM YAML to confirm that the instance type changed.
8.5.2. Hot plugging memory on a virtual machine Copiar enlaceEnlace copiado en el portapapeles!
You can increase or decrease the amount of memory allocated to a virtual machine (VM) without having to restart the VM by using the Red Hat OpenShift Service on AWS web console.
Procedure
- Navigate to Virtualization → VirtualMachines.
- Select the required VM to open the VirtualMachine details page.
- On the Configuration tab, click Edit CPU|Memory.
Enter the desired amount of memory and click Save.
Note: You can hot plug up to three times the default initial amount of memory of the VM. Exceeding this limit requires a restart.
The system applies these changes immediately. If the VM is migratable, a live migration is triggered. If not, or if the changes cannot be live-updated, a
RestartRequired
condition is added to the VM.
Memory hot plugging for virtual machines requires guest operating system support for the virtio-mem
driver. This support depends on the driver being included and enabled within the guest operating system, not on specific upstream kernel versions.
Supported guest operating systems:
- RHEL 9.4 and later
- RHEL 8.10 and later (hot-unplug is disabled by default)
-
Other Linux guests require kernel version 5.16 or later and the
virtio-mem
kernel module -
Windows guests require
virtio-mem
driver version 100.95.104.26200 or later
8.5.3. Hot plugging CPUs on a virtual machine Copiar enlaceEnlace copiado en el portapapeles!
You can increase or decrease the number of CPU sockets allocated to a virtual machine (VM) without having to restart the VM by using the Red Hat OpenShift Service on AWS web console.
Procedure
- Navigate to Virtualization → VirtualMachines.
- Select the required VM to open the VirtualMachine details page.
- On the Configuration tab, click Edit CPU|Memory.
- Select the vCPU radio button.
Enter the desired number of vCPU sockets and click Save.
Note: You can hot plug up to three times the default initial number of vCPU sockets of the VM. Exceeding this limit requires a restart.
If the VM is migratable, a live migration is triggered. If not, or if the changes cannot be live-updated, a
RestartRequired
condition is added to the VM.
If a VM has the spec.template.spec.domain.devices.networkInterfaceMultiQueue
field enabled and CPUs are hot plugged, the following behavior occurs:
- Existing network interfaces that you attach before the CPU hot plug retain their original queue count, even after you add more virtual CPUs (vCPUs). The underlying virtualization technology causes this expected behavior.
- To update the queue count of existing interfaces to match the new vCPU configuration, you can restart the VM. A restart is only necessary if the update improves performance.
- New VirtIO network interfaces that you hot plugged after the CPU hotplug automatically receive a queue count that matches the updated vCPU configuration.
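A minimal sketch of the VM spec fields involved in CPU hot plug and the multiqueue behavior described above; the maxSockets value is an assumption and is normally derived from the initial socket count:
spec:
  template:
    spec:
      domain:
        cpu:
          sockets: 2       # current number of vCPU sockets; can be increased at runtime
          maxSockets: 6    # assumed upper bound for hot plug
        devices:
          networkInterfaceMultiQueue: true   # existing interfaces keep their queue count until a restart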
8.5.4. Editing a virtual machine by using the CLI Copiar enlaceEnlace copiado en el portapapeles!
You can edit a virtual machine (VM) by using the command line.
Prerequisites
-
You installed the
oc
CLI.
Procedure
Obtain the virtual machine configuration by running the following command:
$ oc edit vm <vm_name>
If you edit a running virtual machine, you need to do one of the following:
- Restart the virtual machine.
Run the following command for the new configuration to take effect:
$ oc apply vm <vm_name> -n <namespace>
8.5.5. Adding a disk to a virtual machine Copiar enlaceEnlace copiado en el portapapeles!
You can add a virtual disk to a virtual machine (VM) by using the Red Hat OpenShift Service on AWS web console.
Procedure
- Navigate to Virtualization → VirtualMachines in the web console.
- Select a VM to open the VirtualMachine details page.
- On the Disks tab, click Add disk.
Specify the Source, Name, Size, Type, Interface, and Storage Class.
- Optional: You can enable preallocation if you use a blank disk source and require maximum write performance when creating data volumes. To do so, select the Enable preallocation checkbox.
-
Optional: You can clear Apply optimized StorageProfile settings to change the Volume Mode and Access Mode for the virtual disk. If you do not specify these parameters, the system uses the default values from the
kubevirt-storage-class-defaults
config map.
- Click Add.
If the VM is running, you must restart the VM to apply the change.
8.5.5.1. Storage fields Copiar enlaceEnlace copiado en el portapapeles!
Field | Description |
---|---|
Blank (creates PVC) | Create an empty disk. |
Import via URL (creates PVC) | Import content via URL (HTTP or HTTPS endpoint). |
Use an existing PVC | Use a PVC that is already available in the cluster. |
Clone existing PVC (creates PVC) | Select an existing PVC available in the cluster and clone it. |
Import via Registry (creates PVC) | Import content via container registry. |
Container (ephemeral) | Upload content from a container located in a registry accessible from the cluster. The container disk should be used only for read-only filesystems such as CD-ROMs or temporary virtual machines. |
Name | Name of the disk. The name can contain lowercase letters (a-z), numbers (0-9), hyphens (-), and periods (.), up to a maximum of 253 characters. |
Size | Size of the disk in GiB. |
Type | Type of disk. Example: Disk or CD-ROM |
Interface | Type of disk device. Supported interfaces are virtIO, SATA, and SCSI. |
Storage Class | The storage class that is used to create the disk. |
Advanced storage settings
The following advanced storage settings are optional and available for Blank, Import via URL, and Clone existing PVC disks.
If you do not specify these parameters, the system uses the default storage profile values.
Parameter | Option | Parameter description |
---|---|---|
Volume Mode | Filesystem | Stores the virtual disk on a file system-based volume. |
 | Block | Stores the virtual disk directly on the block volume. Only use Block if the underlying storage supports it. |
Access Mode | ReadWriteOnce (RWO) | Volume can be mounted as read-write by a single node. |
 | ReadWriteMany (RWX) | Volume can be mounted as read-write by many nodes at one time. Note: This mode is required for live migration. |
8.5.6. Mounting a Windows driver disk on a virtual machine Copiar enlaceEnlace copiado en el portapapeles!
You can mount a Windows driver disk on a virtual machine (VM) by using the Red Hat OpenShift Service on AWS web console.
Procedure
- Navigate to Virtualization → VirtualMachines.
- Select the required VM to open the VirtualMachine details page.
- On the Configuration tab, click Storage.
Select the Mount Windows drivers disk checkbox.
The Windows driver disk is displayed in the list of mounted disks.
8.5.7. Adding a secret, config map, or service account to a virtual machine Copiar enlaceEnlace copiado en el portapapeles!
You add a secret, config map, or service account to a virtual machine by using the Red Hat OpenShift Service on AWS web console.
These resources are added to the virtual machine as disks. You then mount the secret, config map, or service account as you would mount any other disk.
If the virtual machine is running, changes do not take effect until you restart the virtual machine. The newly added resources are marked as pending changes at the top of the page.
Prerequisites
- The secret, config map, or service account that you want to add must exist in the same namespace as the target virtual machine.
Procedure
- Click Virtualization → VirtualMachines from the side menu.
- Select a virtual machine to open the VirtualMachine details page.
- Click Configuration → Environment.
- Click Add Config Map, Secret or Service Account.
- Click Select a resource and select a resource from the list. A six character serial number is automatically generated for the selected resource.
- Optional: Click Reload to revert the environment to its last saved state.
- Click Save.
Verification
- On the VirtualMachine details page, click Configuration → Disks and verify that the resource is displayed in the list of disks.
- Restart the virtual machine by clicking Actions → Restart.
You can now mount the secret, config map, or service account as you would mount any other disk.
8.5.8. Updating multiple virtual machines Copiar enlaceEnlace copiado en el portapapeles!
You can use the command line interface (CLI) to update multiple virtual machines (VMs) at the same time.
Prerequisites
-
You installed the
oc
CLI. -
You have access to the Red Hat OpenShift Service on AWS cluster, and you have
cluster-admin
permissions.
Procedure
Create a privileged service account by running the following commands:
$ oc adm new-project kubevirt-api-lifecycle-automation
$ oc create sa kubevirt-api-lifecycle-automation -n kubevirt-api-lifecycle-automation
$ oc create clusterrolebinding kubevirt-api-lifecycle-automation --clusterrole=cluster-admin --serviceaccount=kubevirt-api-lifecycle-automation:kubevirt-api-lifecycle-automation
Determine the pull URL for the kubevirt-api-lifecycle image by running the following command:
$ oc get csv -n openshift-cnv -l=operators.coreos.com/kubevirt-hyperconverged.openshift-cnv -ojson | jq '.items[0].spec.relatedImages[] | select(.name|test(".*kubevirt-api-lifecycle-automation.*")) | .image'
Deploy Kubevirt-Api-Lifecycle-Automation by creating a job object as shown in the following example:
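A sketch of such a Job object; the image value is a placeholder for the pull URL from the previous step, and the environment variable names and values correspond to the numbered explanations below (exact names may differ in your version):
apiVersion: batch/v1
kind: Job
metadata:
  name: kubevirt-api-lifecycle-automation
  namespace: kubevirt-api-lifecycle-automation
spec:
  template:
    spec:
      serviceAccountName: kubevirt-api-lifecycle-automation
      restartPolicy: Never
      containers:
      - name: kubevirt-api-lifecycle-automation
        image: "<kubevirt_api_lifecycle_image_pull_url>"   # 1: replace with the pull URL
        env:
        - name: MACHINE_TYPE_GLOB
          value: "*rhel8.*"       # 2: pattern for deprecated machine types (illustrative)
        - name: RESTART_REQUIRED
          value: "false"          # 3: set to "true" to restart VMs after the update
        - name: NAMESPACE
          value: ""               # 4: leave empty to process all namespaces
        - name: LABEL_SELECTOR
          value: ""               # 5: leave empty to process all VMs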
1. Replace the image value with your pull URL for the image.
2. Replace the MACHINE_TYPE_GLOB value with your own pattern. This pattern is used to detect deprecated machine types that need to be upgraded.
3. If the RESTART_REQUIRED environment variable is set to true, VMs are restarted after the machine type is updated. If you do not want VMs to be restarted, set the value to false.
4. The namespace environment value indicates the namespace to look for VMs in. Leave the parameter empty for the job to go over all namespaces in the cluster.
5. You can use the LABEL_SELECTOR environment variable to select VMs that receive the job action. If you want the job to go over all VMs in the cluster, do not assign a value to the parameter.
8.5.8.1. Performing bulk actions on virtual machines Copiar enlaceEnlace copiado en el portapapeles!
You can perform bulk actions on multiple virtual machines (VMs) simultaneously by using the VirtualMachines list view in the web console. This allows you to efficiently manage a group of VMs with minimal manual effort.
Available bulk actions
- Label VMs - Add, edit, or remove labels that are applied across selected VMs.
- Delete VMs - Select multiple VMs to delete. The confirmation dialog displays the number of VMs selected for deletion.
- Move VMs to folder - Move selected VMs to a folder. All VMs must belong to the same namespace.
8.5.9. Configuring multiple IOThreads for fast storage access Copiar enlaceEnlace copiado en el portapapeles!
You can improve storage performance by configuring multiple IOThreads for a virtual machine (VM) that uses fast storage, such as a solid-state drive (SSD) or non-volatile memory express (NVMe) device. This configuration option is only available by editing the YAML of the VM.
Multiple IOThreads are supported only when blockMultiQueue is enabled and the disk bus is set to virtio. Both settings are required for the configuration to work correctly.
Procedure
- Click Virtualization → VirtualMachines from the side menu.
- Select a virtual machine to open the VirtualMachine details page.
- Click the YAML tab to open the VM manifest.
In the YAML editor, locate the spec.template.spec.domain section and add or modify the relevant fields. A sketch of such a configuration appears after this procedure.
- Click Save.
The spec.template.spec.domain
setting cannot be changed while the VM is running. You must stop the VM before applying the changes, and then restart the VM for the new settings to take effect.
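A minimal sketch of the spec.template.spec.domain settings under the stated constraints (blockMultiQueue enabled, virtio disk bus); the ioThreadsPolicy value and the per-disk dedicatedIOThread flag shown here are one possible way to assign additional IOThreads, not the only supported combination:
spec:
  template:
    spec:
      domain:
        ioThreadsPolicy: auto          # allow the VM to use multiple IOThreads
        devices:
          blockMultiQueue: true        # required for multiple IOThreads
          disks:
          - name: datadisk
            dedicatedIOThread: true    # give this fast disk its own IOThread
            disk:
              bus: virtio              # required disk bus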
Additional resources for config maps, secrets, and service accounts
8.6. Editing boot order Copiar enlaceEnlace copiado en el portapapeles!
You can update the values for a boot order list by using the web console or the CLI.
With Boot Order in the Virtual Machine Overview page, you can:
- Select a disk or network interface controller (NIC) and add it to the boot order list.
- Edit the order of the disks or NICs in the boot order list.
- Remove a disk or NIC from the boot order list, and return it back to the inventory of bootable sources.
8.6.1. Adding items to a boot order list in the web console Copiar enlaceEnlace copiado en el portapapeles!
Add items to a boot order list by using the web console.
Procedure
- Click Virtualization → VirtualMachines from the side menu.
- Select a virtual machine to open the VirtualMachine details page.
- Click the Details tab.
- Click the pencil icon that is located on the right side of Boot Order. If a YAML configuration does not exist, or if this is the first time that you are creating a boot order list, the following message displays: No resource selected. VM will attempt to boot from disks by order of appearance in YAML file.
- Click Add Source and select a bootable disk or network interface controller (NIC) for the virtual machine.
- Add any additional disks or NICs to the boot order list.
- Click Save.
If the virtual machine is running, changes to Boot Order will not take effect until you restart the virtual machine.
You can view pending changes by clicking View Pending Changes on the right side of the Boot Order field. The Pending Changes banner at the top of the page displays a list of all changes that will be applied when the virtual machine restarts.
8.6.2. Editing a boot order list in the web console Copiar enlaceEnlace copiado en el portapapeles!
Edit the boot order list in the web console.
Procedure
- Click Virtualization → VirtualMachines from the side menu.
- Select a virtual machine to open the VirtualMachine details page.
- Click the Details tab.
- Click the pencil icon that is located on the right side of Boot Order.
Choose the appropriate method to move the item in the boot order list:
- If you do not use a screen reader, hover over the arrow icon next to the item that you want to move, drag the item up or down, and drop it in a location of your choice.
- If you use a screen reader, press the Up Arrow key or Down Arrow key to move the item in the boot order list. Then, press the Tab key to drop the item in a location of your choice.
- Click Save.
If the virtual machine is running, changes to the boot order list will not take effect until you restart the virtual machine.
You can view pending changes by clicking View Pending Changes on the right side of the Boot Order field. The Pending Changes banner at the top of the page displays a list of all changes that will be applied when the virtual machine restarts.
8.6.3. Editing a boot order list in the YAML configuration file Copiar enlaceEnlace copiado en el portapapeles!
Edit the boot order list in a YAML configuration file by using the CLI.
Prerequisites
-
You have installed the OpenShift CLI (
oc
).
Procedure
Open the YAML configuration file for the virtual machine by running the following command:
$ oc edit vm <vm_name> -n <namespace>
Edit the YAML file and modify the values for the boot order associated with a disk or network interface controller (NIC). For example:
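A sketch of the relevant fields, assuming a disk named rootdisk and an interface named default; lower bootOrder values are tried first:
spec:
  template:
    spec:
      domain:
        devices:
          disks:
          - name: rootdisk
            bootOrder: 1        # boot from this disk first
            disk:
              bus: virtio
          interfaces:
          - name: default
            bootOrder: 2        # then try the network interface
            masquerade: {}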
- Save the YAML file.
8.6.4. Removing items from a boot order list in the web console Copiar enlaceEnlace copiado en el portapapeles!
Remove items from a boot order list by using the web console.
Procedure
- Click Virtualization → VirtualMachines from the side menu.
- Select a virtual machine to open the VirtualMachine details page.
- Click the Details tab.
- Click the pencil icon that is located on the right side of Boot Order.
-
Click the Remove icon
next to the item. The item is removed from the boot order list and saved in the list of available boot sources. If you remove all items from the boot order list, the following message displays: No resource selected. VM will attempt to boot from disks by order of appearance in YAML file.
If the virtual machine is running, changes to Boot Order will not take effect until you restart the virtual machine.
You can view pending changes by clicking View Pending Changes on the right side of the Boot Order field. The Pending Changes banner at the top of the page displays a list of all changes that will be applied when the virtual machine restarts.
8.7. Deleting virtual machines Copiar enlaceEnlace copiado en el portapapeles!
You can delete a virtual machine by using the web console or the oc
command line interface.
8.7.1. Deleting a virtual machine using the web console Copiar enlaceEnlace copiado en el portapapeles!
Deleting a virtual machine (VM) permanently removes it from the cluster.
If the VM is delete protected, the Delete action is disabled in the VM’s Actions menu.
Prerequisites
- To delete the VM, you must first disable the VM’s delete protection setting, if enabled.
Procedure
From the Red Hat OpenShift Service on AWS web console, choose your view:
- For a virtualization-focused view, select Administrator → Virtualization → VirtualMachines.
- For a general view, navigate to Virtualization → VirtualMachines.
Click the Options menu
beside a VM and select Delete.
Alternatively, click the VM’s name to open the VirtualMachine details page and click Actions → Delete.
You can also right-click the VM in the tree view and select Delete from the pop-up menu.
- Optional: Select With grace period or clear Delete disks.
- Click Delete to permanently delete the VM.
8.7.2. Deleting a virtual machine by using the CLI Copiar enlaceEnlace copiado en el portapapeles!
You can delete a virtual machine (VM) by using the oc command-line interface (CLI). The oc client enables you to perform actions on multiple VMs.
Prerequisites
- To delete the VM, you must first disable the VM’s delete protection setting, if enabled.
-
You have installed the OpenShift CLI (oc).
Procedure
Delete the VM by running the following command:
$ oc delete vm <vm_name>
Note: This command only deletes a VM in the current project. Specify the -n <project_name> option if the VM you want to delete is in a different project or namespace.
8.8. Exporting virtual machines
You can export a virtual machine (VM) and its associated disks in order to import a VM into another cluster or to analyze the volume for forensic purposes.
You create a VirtualMachineExport custom resource (CR) by using the command-line interface.
Alternatively, you can use the virtctl vmexport command to create a VirtualMachineExport CR and to download exported volumes.
You can migrate virtual machines between OpenShift Virtualization clusters by using the Migration Toolkit for Virtualization.
8.8.1. Creating a VirtualMachineExport custom resource
You can create a VirtualMachineExport custom resource (CR) to export the following objects:
- Virtual machine (VM): Exports the persistent volume claims (PVCs) of a specified VM.
-
VM snapshot: Exports PVCs contained in a VirtualMachineSnapshot CR.
- PVC: Exports a PVC. If the PVC is used by another pod, such as the virt-launcher pod, the export remains in a Pending state until the PVC is no longer in use.
The VirtualMachineExport CR creates internal and external links for the exported volumes. Internal links are valid within the cluster. External links can be accessed by using an Ingress or Route.
The export server supports the following file formats:
-
raw: Raw disk image file.
- gzip: Compressed disk image file.
- dir: PVC directory and files.
- tar.gz: Compressed PVC file.
Prerequisites
- The VM must be shut down for a VM export.
-
You have installed the OpenShift CLI (oc).
Procedure
Create a VirtualMachineExport manifest to export a volume from a VirtualMachine, VirtualMachineSnapshot, or PersistentVolumeClaim CR according to the following example and save it as example-export.yaml:
VirtualMachineExport example
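A minimal sketch of such a manifest, assuming a VM source named example-vm and an optional token secret named example-token; for a snapshot or PVC source, the source stanza references a VirtualMachineSnapshot or PersistentVolumeClaim instead:

apiVersion: export.kubevirt.io/v1beta1
kind: VirtualMachineExport
metadata:
  name: example-export
spec:
  tokenSecretRef: example-token        # optional; secret that stores the export token
  source:
    apiGroup: "kubevirt.io"
    kind: VirtualMachine
    name: example-vm
  ttlDuration: 1h                      # optional; how long the export server remains available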
Create the VirtualMachineExport CR:
$ oc create -f example-export.yaml
Get the VirtualMachineExport CR:
$ oc get vmexport example-export -o yaml
The internal and external links for the exported volumes are displayed in the status stanza:
Output example
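An abbreviated, illustrative sketch of the status stanza, assuming placeholder URLs and a single volume named example-disk:

status:
  phase: Ready
  links:
    external:
      cert: |-
        -----BEGIN CERTIFICATE-----
        ...
      volumes:
      - name: example-disk
        formats:
        - format: raw
          url: https://vmexport-proxy.example.com/.../example-disk/disk.img
        - format: gzip
          url: https://vmexport-proxy.example.com/.../example-disk/disk.img.gz
    internal:
      volumes:
      - name: example-disk
        formats:
        - format: raw
          url: https://virt-export-example-export.example.svc/.../disk.img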
8.8.2. Accessing exported virtual machine manifests
After you export a virtual machine (VM) or snapshot, you can get the VirtualMachine manifest and related information from the export server.
Prerequisites
-
You have installed the OpenShift CLI (oc).
- You exported a virtual machine or VM snapshot by creating a VirtualMachineExport custom resource (CR).
Note: VirtualMachineExport objects that have the spec.source.kind: PersistentVolumeClaim parameter do not generate virtual machine manifests.
Procedure
To access the manifests, you must first copy the certificates from the source cluster to the target cluster.
- Log in to the source cluster.
Save the certificates to the cacert.crt file by running the following command:
$ oc get vmexport <export_name> -o jsonpath={.status.links.external.cert} > cacert.crt
Replace <export_name> with the metadata.name value from the VirtualMachineExport object.
-
Copy the cacert.crt file to the target cluster.
Decode the token in the source cluster and save it to the token_decode file by running the following command:
$ oc get secret export-token-<export_name> -o jsonpath={.data.token} | base64 --decode > token_decode
Replace <export_name> with the metadata.name value from the VirtualMachineExport object.
-
Copy the token_decode file to the target cluster.
Get the VirtualMachineExport custom resource by running the following command:
$ oc get vmexport <export_name> -o yaml
Review the status.links stanza, which is divided into external and internal sections. Note the manifests.url fields within each section:
Example output
1: Contains the VirtualMachine manifest, DataVolume manifest, if present, and a ConfigMap manifest that contains the public certificate for the external URL’s ingress or route.
2: Contains a secret containing a header that is compatible with Containerized Data Importer (CDI). The header contains a text version of the export token.
3: Contains the VirtualMachine manifest, DataVolume manifest, if present, and a ConfigMap manifest that contains the certificate for the internal URL’s export server.
- Log in to the target cluster.
Get the Secret manifest by running the following command:
$ curl --cacert cacert.crt <secret_manifest_url> -H \
  "x-kubevirt-export-token:token_decode" -H \
  "Accept:application/yaml"
For example:
$ curl --cacert cacert.crt https://vmexport-proxy.test.net/api/export.kubevirt.io/v1beta1/namespaces/example/virtualmachineexports/example-export/external/manifests/secret -H "x-kubevirt-export-token:token_decode" -H "Accept:application/yaml"
Get the manifests of type: all, such as the ConfigMap and VirtualMachine manifests, by running the following command:
$ curl --cacert cacert.crt <all_manifest_url> -H \
  "x-kubevirt-export-token:token_decode" -H \
  "Accept:application/yaml"
For example:
$ curl --cacert cacert.crt https://vmexport-proxy.test.net/api/export.kubevirt.io/v1beta1/namespaces/example/virtualmachineexports/example-export/external/manifests/all -H "x-kubevirt-export-token:token_decode" -H "Accept:application/yaml"
Next steps
-
You can now create the ConfigMap and VirtualMachine objects on the target cluster by using the exported manifests.
8.9. Managing virtual machine instances
If you have standalone virtual machine instances (VMIs) that were created independently outside of the OpenShift Virtualization environment, you can manage them by using the web console or by using oc or virtctl commands from the command-line interface (CLI).
The virtctl command provides more virtualization options than the oc command. For example, you can use virtctl to pause a VM or expose a port.
8.9.1. About virtual machine instances
A virtual machine instance (VMI) is a representation of a running virtual machine (VM). When a VMI is owned by a VM or by another object, you manage it through its owner in the web console or by using the oc command-line interface (CLI).
A standalone VMI is created and started independently with a script, through automation, or by using other methods in the CLI. In your environment, you might have standalone VMIs that were developed and started outside of the OpenShift Virtualization environment. You can continue to manage those standalone VMIs by using the CLI. You can also use the web console for specific tasks associated with standalone VMIs:
- List standalone VMIs and their details.
- Edit labels and annotations for a standalone VMI.
- Delete a standalone VMI.
When you delete a VM, the associated VMI is automatically deleted. You delete a standalone VMI directly because it is not owned by VMs or other objects.
Before you uninstall OpenShift Virtualization, list and view the standalone VMIs by using the CLI or the web console. Then, delete any outstanding VMIs.
When you edit a VM, some settings might be applied to the VMIs dynamically and without the need for a restart. Any change made to a VM object that cannot be applied to the VMIs dynamically triggers the RestartRequired VM condition. Changes are effective on the next reboot, and the condition is removed.
8.9.2. Listing all virtual machine instances using the CLI
You can list all virtual machine instances (VMIs) in your cluster, including standalone VMIs and those owned by virtual machines, by using the oc command-line interface (CLI).
Prerequisites
-
You have installed the OpenShift CLI (oc).
Procedure
List all VMIs by running the following command:
$ oc get vmis -A
8.9.3. Listing standalone virtual machine instances using the web console
Using the web console, you can list and view standalone virtual machine instances (VMIs) in your cluster that are not owned by virtual machines (VMs).
VMIs that are owned by VMs or other objects are not displayed in the web console. The web console displays only standalone VMIs. If you want to list all VMIs in your cluster, you must use the CLI.
Procedure
Click Virtualization → VirtualMachines from the side menu.
You can identify a standalone VMI by a dark colored badge next to its name.
8.9.4. Searching for standalone virtual machine instances by using the web console
You can search for virtual machine instances (VMIs) by using the search bar on the VirtualMachines page. Use the advanced search to apply additional filters.
Procedure
- In the Red Hat OpenShift Service on AWS console, click Virtualization → VirtualMachines from the side menu.
- In the search bar at the top of the page, type a VM name, label, or IP address.
In the suggestions list, choose one of the following options:
- Click a VM name to open its details page.
- Click All search results found for … to view results on a dedicated page.
- Click a related suggestion to prefill search filters.
- Optional: To open advanced search options, click the sliders icon next to the search bar. Expand the Details section and specify one or more of the available filters: Name, Project, Description, Labels, Date created, vCPU, and Memory.
- Optional: Expand the Network section and enter an IP address to filter by.
- Click Search.
- Optional: If Advanced Cluster Management (ACM) is installed, use the Cluster dropdown to search across multiple clusters.
-
Optional: Click the Save search icon to store your search in the kubevirt-user-settings ConfigMap.
8.9.5. Editing a standalone virtual machine instance using the web console
You can edit the annotations and labels of a standalone virtual machine instance (VMI) using the web console. Other fields are not editable.
Procedure
- In the Red Hat OpenShift Service on AWS console, click Virtualization → VirtualMachines from the side menu.
- Select a standalone VMI to open the VirtualMachineInstance details page.
- On the Details tab, click the pencil icon beside Annotations or Labels.
- Make the relevant changes and click Save.
8.9.6. Deleting a standalone virtual machine instance using the CLI
You can delete a standalone virtual machine instance (VMI) by using the oc command-line interface (CLI).
Prerequisites
- Identify the name of the VMI that you want to delete.
-
You have installed the OpenShift CLI (oc).
Procedure
Delete the VMI by running the following command:
$ oc delete vmi <vmi_name>
8.9.7. Deleting a standalone virtual machine instance using the web console
Delete a standalone virtual machine instance (VMI) from the web console.
Procedure
- In the Red Hat OpenShift Service on AWS web console, click Virtualization → VirtualMachines from the side menu.
- Click Actions → Delete VirtualMachineInstance.
- In the confirmation pop-up window, click Delete to permanently delete the standalone VMI.
8.10. Controlling virtual machine states
You can use virtctl to manage virtual machine states and perform other actions from the CLI. For example, you can use virtctl to force stop a VM or expose a port.
You can stop, start, restart, pause, and unpause virtual machines from the web console.
8.10.1. Enabling confirmations of virtual machine actions
The Stop, Restart, and Pause actions can display confirmation dialogs if confirmation is enabled. By default, confirmation is disabled.
Procedure
- In the Virtualization section of the Red Hat OpenShift Service on AWS web console, navigate to Overview → Settings → Cluster → General settings.
- Toggle the VirtualMachine actions confirmation setting to On.
8.10.2. Starting a virtual machine
You can start a virtual machine (VM) from the web console.
Procedure
- Click Virtualization → VirtualMachines from the side menu.
- In the tree view, select the project that contains the VM that you want to start.
Navigate to the appropriate menu for your use case:
To stay on this page, where you can perform actions on multiple VMs:
-
Click the Options menu
located at the far right end of the row and click Start VirtualMachine.
To start the VM from the tree view:
- Click the > icon next to the project name to open the list of VMs.
- Right-click the name of the VM and select Start.
To view comprehensive information about the selected VM before you start it:
- Access the VirtualMachine details page by clicking the name of the VM.
- Click Actions → Start.
When you start a VM that is provisioned from a URL source for the first time, the VM has a status of Importing while OpenShift Virtualization imports the container from the URL endpoint. Depending on the size of the image, this process might take several minutes.
8.10.3. Stopping a virtual machine
You can stop a virtual machine (VM) from the web console.
Procedure
- Click Virtualization → VirtualMachines from the side menu.
- In the tree view, select the project that contains the VM that you want to stop.
Navigate to the appropriate menu for your use case:
To stay on this page, where you can perform actions on multiple VMs:
-
Click the Options menu
located at the far right end of the row and click Stop VirtualMachine.
- If action confirmation is enabled, click Stop in the confirmation dialog.
To stop the VM from the tree view:
- Click the > icon next to the project name to open the list of VMs.
- Right-click the name of the VM and select Stop.
- If action confirmation is enabled, click Stop in the confirmation dialog.
To view comprehensive information about the selected VM before you stop it:
- Access the VirtualMachine details page by clicking the name of the VM.
- Click Actions → Stop.
- If action confirmation is enabled, click Stop in the confirmation dialog.
8.10.4. Restarting a virtual machine
You can restart a running virtual machine (VM) from the web console.
To avoid errors, do not restart a VM while it has a status of Importing.
Procedure
- Click Virtualization → VirtualMachines from the side menu.
- In the tree view, select the project that contains the VM that you want to restart.
Navigate to the appropriate menu for your use case:
To stay on this page, where you can perform actions on multiple VMs:
-
Click the Options menu
located at the far right end of the row and click Restart.
- If action confirmation is enabled, click Restart in the confirmation dialog.
To restart the VM from the tree view:
- Click the > icon next to the project name to open the list of VMs.
- Right-click the name of the VM and select Restart.
- If action confirmation is enabled, click Restart in the confirmation dialog.
To view comprehensive information about the selected VM before you restart it:
- Access the VirtualMachine details page by clicking the name of the virtual machine.
- Click Actions → Restart.
- If action confirmation is enabled, click Restart in the confirmation dialog.
8.10.5. Pausing a virtual machine
You can pause a virtual machine (VM) from the web console.
Procedure
- Click Virtualization → VirtualMachines from the side menu.
- In the tree view, select the project that contains the VM that you want to pause.
Navigate to the appropriate menu for your use case:
To stay on this page, where you can perform actions on multiple VMs:
-
Click the Options menu
located at the far right end of the row and click Pause VirtualMachine.
- If action confirmation is enabled, click Pause in the confirmation dialog.
To pause the VM from the tree view:
- Click the > icon next to the project name to open the list of VMs.
- Right-click the name of the VM and select Pause.
- If action confirmation is enabled, click Pause in the confirmation dialog.
To view comprehensive information about the selected VM before you pause it:
- Access the VirtualMachine details page by clicking the name of the VM.
- Click Actions → Pause.
- If action confirmation is enabled, click Pause in the confirmation dialog.
8.10.6. Unpausing a virtual machine
You can unpause a paused virtual machine (VM) from the web console.
Prerequisites
- At least one of your VMs must have a status of Paused.
Procedure
- Click Virtualization → VirtualMachines from the side menu.
- In the tree view, select the project that contains the VM that you want to unpause.
Navigate to the appropriate menu for your use case:
To stay on this page, where you can perform actions on multiple VMs:
-
Click the Options menu
located at the far right end of the row and click Unpause VirtualMachine.
To unpause the VM from the tree view:
- Click the > icon next to the project name to open the list of VMs.
- Right-click the name of the VM and select Unpause.
To view comprehensive information about the selected VM before you unpause it:
- Access the VirtualMachine details page by clicking the name of the virtual machine.
- Click Actions → Unpause.
8.10.7. Controlling the state of multiple virtual machines
You can start, stop, restart, pause, and unpause multiple virtual machines (VMs) from the web console.
Procedure
- Navigate to Virtualization → VirtualMachines in the web console.
- Optional: Enable the Show only projects with VirtualMachines option above the tree view to limit the displayed projects.
- Select a relevant project from the tree view.
Navigate to the appropriate menu for your use case:
To change the state of all VMs in the selected project:
- Right-click the name of the project in the tree view and select the intended action from the menu.
- If action confirmation is enabled, confirm the action in the confirmation dialog.
To change the state of specific VMs:
- Select a checkbox next to the VMs you want to work with. To select all VMs, click the checkbox in the VirtualMachines table header.
- Click Actions and select the intended action from the menu.
- If action confirmation is enabled, confirm the action in the confirmation dialog.
8.11. Using virtual Trusted Platform Module devices
Add a virtual Trusted Platform Module (vTPM) device to a new or existing virtual machine by editing the VirtualMachine (VM) or VirtualMachineInstance (VMI) manifest.
With OpenShift Virtualization 4.18 and newer, you can export virtual machines (VMs) with attached vTPM devices, create snapshots of these VMs, and restore VMs from these snapshots. However, cloning a VM with a vTPM device attached to it or creating a new VM from its snapshot is not supported.
8.11.1. About vTPM devices
A virtual Trusted Platform Module (vTPM) device functions like a physical Trusted Platform Module (TPM) hardware chip. You can use a vTPM device with any operating system, but Windows 11 requires the presence of a TPM chip to install or boot. A vTPM device allows VMs created from a Windows 11 image to function without a physical TPM chip.
OpenShift Virtualization supports persisting vTPM device state by using Persistent Volume Claims (PVCs) for VMs. If you do not specify the storage class for this PVC, OpenShift Virtualization uses the default storage class for virtualization workloads. If the default storage class for virtualization workloads is not set, OpenShift Virtualization uses the default storage class for the cluster.
The storage class that is marked as default for virtualization workloads has the annotation storageclass.kubevirt.io/is-default-virt-class set to "true". You can find this storage class by running the following command:
$ oc get sc -o jsonpath='{range .items[?(.metadata.annotations.storageclass\.kubevirt\.io/is-default-virt-class=="true")]}{.metadata.name}{"\n"}{end}'
Similarly, the default storage class for the cluster has the annotation storageclass.kubernetes.io/is-default-class set to "true". To find this storage class, run the following command:
$ oc get sc -o jsonpath='{range .items[?(.metadata.annotations.storageclass\.kubernetes\.io/is-default-class=="true")]}{.metadata.name}{"\n"}{end}'
To ensure consistent behavior, configure only one storage class as the default for virtualization workloads and for the cluster respectively.
It is recommended that you specify the storage class explicitly by setting the vmStateStorageClass attribute in the HyperConverged custom resource (CR):
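A minimal sketch of the relevant stanza; the storage class name is an assumption:

apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  vmStateStorageClass: <rwx_file_storage_class>   # storage class used to persist VM state such as vTPM data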
If you do not enable vTPM, then the VM does not recognize a TPM device, even if the node has one.
8.11.2. Adding a vTPM device to a virtual machine
Adding a virtual Trusted Platform Module (vTPM) device to a virtual machine (VM) allows you to run a VM created from a Windows 11 image without a physical TPM device. A vTPM device also stores secrets for that VM.
Prerequisites
-
You have installed the OpenShift CLI (oc).
Procedure
Run the following command to update the VM configuration:
$ oc edit vm <vm_name> -n <namespace>
Edit the VM specification to add the vTPM device. For example:
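A minimal sketch of the relevant stanza, assuming a persistent vTPM is wanted:

spec:
  template:
    spec:
      domain:
        devices:
          tpm:
            persistent: true   # keeps vTPM state across restarts; omit for an ephemeral vTPM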
- To apply your changes, save and exit the editor.
- Optional: If you edited a running virtual machine, you must restart it for the changes to take effect.
8.12. Managing virtual machines with OpenShift Pipelines
Red Hat OpenShift Pipelines is a Kubernetes-native CI/CD framework that allows developers to design and run each step of the CI/CD pipeline in its own container.
By using OpenShift Pipelines tasks and the example pipeline, you can do the following:
- Create and manage virtual machines (VMs), persistent volume claims (PVCs), data volumes, and data sources.
- Run commands in VMs.
-
Manipulate disk images with libguestfs tools.
The tasks are located in the task catalog (ArtifactHub).
The example Windows pipeline is located in the pipeline catalog (ArtifactHub).
8.12.1. Prerequisites
-
You have access to a Red Hat OpenShift Service on AWS cluster with cluster-admin permissions.
- You have installed the OpenShift CLI (oc).
- You have installed OpenShift Pipelines.
8.12.2. Supported virtual machine tasks
The following table shows the supported tasks.
Task | Description
---|---
create-vm-from-manifest | Create a virtual machine from a provided manifest or with virtctl.
create-vm-from-template | Create a virtual machine from a template.
copy-template | Copy a virtual machine template.
modify-vm-template | Modify a virtual machine template.
modify-data-object | Create or delete data volumes or data sources.
cleanup-vm | Run a script or a command in a virtual machine and stop or delete the virtual machine afterward.
disk-virt-customize | Use the virt-customize tool to run a customization script on a target PVC.
disk-virt-sysprep | Use the virt-sysprep tool to run a sysprep script on a target PVC.
wait-for-vmi-status | Wait for a specific status of a virtual machine instance and fail or succeed based on the status.
Virtual machine creation in pipelines now uses ClusterInstanceType and ClusterPreference instead of template-based tasks, which have been deprecated. The create-vm-from-template, copy-template, and modify-vm-template commands remain available but are not used in default pipeline tasks.
8.12.3. Windows EFI installer pipeline
You can run the Windows EFI installer pipeline by using the web console or CLI.
The Windows EFI installer pipeline installs Windows 10, Windows 11, or Windows Server 2022 into a new data volume from a Windows installation image (ISO file). A custom answer file is used to run the installation process.
The Windows EFI installer pipeline uses a config map file with sysprep predefined by Red Hat OpenShift Service on AWS and suitable for Microsoft ISO files. For ISO files pertaining to different Windows editions, it may be necessary to create a new config map file with a system-specific sysprep definition.
8.12.3.1. Running the example pipelines using the web console
You can run the example pipelines from the Pipelines menu in the web console.
Procedure
- Click Pipelines → Pipelines in the side menu.
- Select a pipeline to open the Pipeline details page.
- From the Actions list, select Start. The Start Pipeline dialog is displayed.
- Keep the default values for the parameters and then click Start to run the pipeline. The Details tab tracks the progress of each task and displays the pipeline status.
8.12.3.2. Running the example pipelines using the CLI
Use a PipelineRun resource to run the example pipelines. A PipelineRun object is the running instance of a pipeline. It instantiates a pipeline for execution with specific inputs, outputs, and execution parameters on a cluster. It also creates a TaskRun object for each task in the pipeline.
Prerequisites
-
You have installed the OpenShift CLI (oc).
Procedure
To run the Microsoft Windows 11 installer pipeline, create the following PipelineRun manifest:
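A sketch of the general shape of such a PipelineRun follows. The pipelineRef name and the ISO URL parameter name are assumptions based on the catalog example, so confirm them against the pipeline installed in your cluster; acceptEula is described in the callouts below. Save the file as windows11-customize-run.yaml to match the apply step that follows.

apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  generateName: windows11-installer-run-
spec:
  pipelineRef:
    name: windows-efi-installer          # assumed pipeline name
  params:
  - name: winImageDownloadURL            # 1 - assumed parameter name for the ISO URL
    value: <windows_11_iso_url>
  - name: acceptEula                     # 2
    value: "false"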
1: Specify the URL for the Windows 11 64-bit ISO file. The product’s language must be English (United States).
2: Example PipelineRun objects have a special parameter, acceptEula. By setting this parameter, you are agreeing to the applicable Microsoft user license agreements for each deployment or installation of the Microsoft products. If you set it to false, the pipeline exits at the first task.
Apply the PipelineRun manifest:
$ oc apply -f windows11-customize-run.yaml
8.12.4. Removing deprecated or unused resources
You can clean up deprecated or unused resources associated with the Red Hat OpenShift Pipelines Operator.
Procedure
Remove any remaining OpenShift Pipelines resources from the cluster by running the following command:
If the Red Hat OpenShift Pipelines Operator custom resource definitions (CRDs) have already been removed, the command may return an error. You can safely ignore this, as all other matching resources will still be deleted.
8.13. Advanced virtual machine management
8.13.1. Working with resource quotas for virtual machines
Create and manage resource quotas for virtual machines.
8.13.1.1. Setting resource quota limits for virtual machines
By default, OpenShift Virtualization automatically manages CPU and memory limits for virtual machines (VMs) if a namespace enforces resource quotas that require limits to be set. The memory limit is automatically set to twice the requested memory and the CPU limit is set to one per vCPU.
You can customize the memory limit ratio for a specific namespace by adding the alpha.kubevirt.io/auto-memory-limits-ratio label to the namespace. For example, the following command sets the memory limit ratio to 1.2:
$ oc label ns/my-virtualization-project alpha.kubevirt.io/auto-memory-limits-ratio=1.2
Avoid managing resource quota limits manually. To prevent misconfigurations or scheduling issues, rely on the automatic resource limit management provided by OpenShift Virtualization unless you have a specific need to override the defaults.
Resource quotas that only use requests automatically work with VMs. If your resource quota uses limits, you must manually set resource limits on VMs. Resource limits must be at least 100 MiB larger than resource requests.
Procedure
Set limits for a VM by editing the VirtualMachine manifest. For example:
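A minimal sketch; the VM name and sizes are assumptions, and only the resources stanza is shown:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: with-limits
spec:
  template:
    spec:
      domain:
        resources:
          requests:
            memory: 128Mi
          limits:
            memory: 256Mi   # 1 - at least 100Mi larger than requests.memory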
1: This configuration is supported because the limits.memory value is at least 100Mi larger than the requests.memory value.
-
Save the VirtualMachine manifest.
8.13.2. Specifying nodes for virtual machines
You can place virtual machines (VMs) on specific nodes by using node placement rules.
8.13.2.1. About node placement for virtual machines
To ensure that virtual machines (VMs) run on appropriate nodes, you can configure node placement rules. You might want to do this if:
- You have several VMs. To ensure fault tolerance, you want them to run on different nodes.
- You have two chatty VMs. To avoid redundant inter-node routing, you want the VMs to run on the same node.
- Your VMs require specific hardware features that are not present on all available nodes.
- You have a pod that adds capabilities to a node, and you want to place a VM on that node so that it can use those capabilities.
Virtual machine placement relies on any existing node placement rules for workloads. If workloads are excluded from specific nodes on the component level, virtual machines cannot be placed on those nodes.
You can use the following rule types in the spec
field of a VirtualMachine
manifest:
nodeSelector
- Allows virtual machines to be scheduled on nodes that are labeled with the key-value pair or pairs that you specify in this field. The node must have labels that exactly match all listed pairs.
affinity
-
Enables you to use more expressive syntax to set rules that match nodes with virtual machines. For example, you can specify that a rule is a preference, rather than a hard requirement, so that virtual machines are still scheduled if the rule is not satisfied. Pod affinity, pod anti-affinity, and node affinity are supported for virtual machine placement. Pod affinity works for virtual machines because the
VirtualMachine
workload type is based on thePod
object. tolerations
Allows virtual machines to be scheduled on nodes that have matching taints. If a taint is applied to a node, that node only accepts virtual machines that tolerate the taint.
NoteAffinity rules only apply during scheduling. Red Hat OpenShift Service on AWS does not reschedule running workloads if the constraints are no longer met.
8.13.2.2. Node placement examples
The following example YAML file snippets use nodePlacement
, affinity
, and tolerations
fields to customize node placement for virtual machines.
8.13.2.2.1. Example: VM node placement with nodeSelector
In this example, the virtual machine requires a node that has metadata containing both example-key-1 = example-value-1
and example-key-2 = example-value-2
labels.
If there are no nodes that fit this description, the virtual machine is not scheduled.
Example VM manifest
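A minimal sketch showing only the nodeSelector stanza, using the labels from the text:

spec:
  template:
    spec:
      nodeSelector:
        example-key-1: example-value-1
        example-key-2: example-value-2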
8.13.2.2.2. Example: VM node placement with pod affinity and pod anti-affinity
In this example, the VM must be scheduled on a node that has a running pod with the label example-key-1 = example-value-1
. If there is no such pod running on any node, the VM is not scheduled.
If possible, the VM is not scheduled on a node that has any pod with the label example-key-2 = example-value-2
. However, if all candidate nodes have a pod with this label, the scheduler ignores this constraint.
Example VM manifest
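A sketch showing only the affinity stanza implied by the text, with an assumed topologyKey of kubernetes.io/hostname; the numbered comments correspond to the callouts below:

spec:
  template:
    spec:
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:        # 1
          - labelSelector:
              matchExpressions:
              - key: example-key-1
                operator: In
                values:
                - example-value-1
            topologyKey: kubernetes.io/hostname
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:       # 2
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: example-key-2
                  operator: In
                  values:
                  - example-value-2
              topologyKey: kubernetes.io/hostname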
- 1
- If you use the
requiredDuringSchedulingIgnoredDuringExecution
rule type, the VM is not scheduled if the constraint is not met. - 2
- If you use the
preferredDuringSchedulingIgnoredDuringExecution
rule type, the VM is still scheduled if the constraint is not met, as long as all required constraints are met.
8.13.2.2.3. Example: VM node placement with node affinity
In this example, the VM must be scheduled on a node that has the label example.io/example-key = example-value-1
or the label example.io/example-key = example-value-2
. The constraint is met if only one of the labels is present on the node. If neither label is present, the VM is not scheduled.
If possible, the scheduler avoids nodes that have the label example-node-label-key = example-node-label-value
. However, if all candidate nodes have this label, the scheduler ignores this constraint.
Example VM manifest
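A sketch showing only the nodeAffinity stanza implied by the text; the preferred rule uses a NotIn operator to express avoidance, and the numbered comments correspond to the callouts below:

spec:
  template:
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:        # 1
            nodeSelectorTerms:
            - matchExpressions:
              - key: example.io/example-key
                operator: In
                values:
                - example-value-1
                - example-value-2
          preferredDuringSchedulingIgnoredDuringExecution:       # 2
          - weight: 1
            preference:
              matchExpressions:
              - key: example-node-label-key
                operator: NotIn
                values:
                - example-node-label-value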
- 1
- If you use the
requiredDuringSchedulingIgnoredDuringExecution
rule type, the VM is not scheduled if the constraint is not met. - 2
- If you use the
preferredDuringSchedulingIgnoredDuringExecution
rule type, the VM is still scheduled if the constraint is not met, as long as all required constraints are met.
8.13.2.2.4. Example: VM node placement with tolerations
In this example, nodes that are reserved for virtual machines are already labeled with the key=virtualization:NoSchedule
taint. Because this virtual machine has matching tolerations
, it can schedule onto the tainted nodes.
A virtual machine that tolerates a taint is not required to schedule onto a node with that taint.
Example VM manifest
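A minimal sketch of the tolerations stanza matching the key=virtualization:NoSchedule taint from the text:

spec:
  template:
    spec:
      tolerations:
      - key: "key"
        operator: "Equal"
        value: "virtualization"
        effect: "NoSchedule"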
8.13.3. Configuring the default CPU model
Use the defaultCPUModel
setting in the HyperConverged
custom resource (CR) to define a cluster-wide default CPU model.
The virtual machine (VM) CPU model depends on the availability of CPU models within the VM and the cluster.
If the VM does not have a defined CPU model:
- The defaultCPUModel is automatically set using the CPU model defined at the cluster-wide level.
If both the VM and the cluster have a defined CPU model:
- The VM’s CPU model takes precedence.
If neither the VM nor the cluster have a defined CPU model:
- The host-model is automatically set using the CPU model defined at the host level.
8.13.3.1. Configuring the default CPU model
Configure the defaultCPUModel
by updating the HyperConverged
custom resource (CR). You can change the defaultCPUModel
while OpenShift Virtualization is running.
The defaultCPUModel
is case sensitive.
Prerequisites
- Install the OpenShift CLI (oc).
Procedure
Open the HyperConverged CR by running the following command:
$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
Add the defaultCPUModel field to the CR and set the value to the name of a CPU model that exists in the cluster, as in the sketch that follows.
- Apply the YAML file to your cluster.
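A minimal sketch of the edited CR; the CPU model name is an assumption:

apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  defaultCPUModel: "EPYC"   # use a CPU model that exists in your cluster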
8.13.4. Using UEFI mode for virtual machines
You can boot a virtual machine (VM) in Unified Extensible Firmware Interface (UEFI) mode.
8.13.4.1. About UEFI mode for virtual machines
Unified Extensible Firmware Interface (UEFI), like legacy BIOS, initializes hardware components and operating system image files when a computer starts. UEFI supports more modern features and customization options than BIOS, enabling faster boot times.
It stores all the information about initialization and startup in a file with a .efi
extension, which is stored on a special partition called EFI System Partition (ESP). The ESP also contains the boot loader programs for the operating system that is installed on the computer.
8.13.4.2. Booting virtual machines in UEFI mode
You can configure a virtual machine to boot in UEFI mode by editing the VirtualMachine
manifest.
Prerequisites
-
Install the OpenShift CLI (oc).
Procedure
Edit or create a VirtualMachine manifest file. Use the spec.firmware.bootloader stanza to configure UEFI mode:
Booting in UEFI mode with secure boot active
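A sketch showing only the relevant stanzas; the numbered comments correspond to the callouts below:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: vm-secureboot
spec:
  template:
    spec:
      domain:
        features:
          smm:
            enabled: true          # 1 - SMM is required for Secure Boot
        firmware:
          bootloader:
            efi:
              secureBoot: true     # 2 - UEFI with Secure Boot active
        devices: {}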
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- OpenShift Virtualization requires System Management Mode (
SMM
) to be enabled for Secure Boot in UEFI mode to occur. - 2
- OpenShift Virtualization supports a VM with or without Secure Boot when using UEFI mode. If Secure Boot is enabled, then UEFI mode is required. However, UEFI mode can be enabled without using Secure Boot.
Apply the manifest to your cluster by running the following command:
oc create -f <file_name>.yaml
$ oc create -f <file_name>.yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
8.13.4.3. Enabling persistent EFI
You can enable EFI persistence in a VM by configuring an RWX storage class at the cluster level and adjusting the settings in the EFI section of the VM.
Prerequisites
- You must have cluster administrator privileges.
- You must have a storage class that supports RWX access mode and FS volume mode.
-
You have installed the OpenShift CLI (oc).
Procedure
Enable the
VMPersistentState
feature gate by running the following command:oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv \ --type json -p '[{"op":"replace","path":"/spec/featureGates/VMPersistentState", "value": true}]'
$ oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv \ --type json -p '[{"op":"replace","path":"/spec/featureGates/VMPersistentState", "value": true}]'
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
8.13.4.4. Configuring VMs with persistent EFI
You can configure a VM to have EFI persistence enabled by editing its manifest file.
Prerequisites
-
VMPersistentState feature gate enabled.
Procedure
Edit the VM manifest file and save to apply settings.
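A minimal sketch of the relevant stanza:

spec:
  template:
    spec:
      domain:
        firmware:
          bootloader:
            efi:
              persistent: true   # persists EFI (NVRAM) state across restarts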
8.13.5. Configuring PXE booting for virtual machines
PXE booting, or network booting, is available in OpenShift Virtualization. Network booting allows a computer to boot and load an operating system or other program without requiring a locally attached storage device. For example, you can use it to choose your desired OS image from a PXE server when deploying a new host.
8.13.5.1. PXE booting with a specified MAC address
As an administrator, you can boot a client over the network by first creating a NetworkAttachmentDefinition
object for your PXE network. Then, reference the network attachment definition in your virtual machine instance configuration file before you start the virtual machine instance. You can also specify a MAC address in the virtual machine instance configuration file, if required by the PXE server.
Prerequisites
- A Linux bridge must be connected.
- The PXE server must be connected to the same VLAN as the bridge.
-
You have installed the OpenShift CLI (oc).
Procedure
Configure a PXE network on the cluster:
Create the network attachment definition file for PXE network pxe-net-conf:
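A sketch of the general shape of such a file, saved as pxe-net-conf.yaml; the bridge name and VLAN are assumptions. The numbered callouts below refer to metadata.name (1), the config name (2), the CNI type (3), the bridge (4), macspoofchk (5), vlan (6), and preserveDefaultVlan (7).

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: pxe-net-conf
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "pxe-net-conf",
      "type": "bridge",
      "bridge": "bridge-interface",
      "macspoofchk": false,
      "vlan": 100,
      "preserveDefaultVlan": false
    }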
- The name for the
NetworkAttachmentDefinition
object. - 2
- The name for the configuration. It is recommended to match the configuration name to the
name
value of the network attachment definition. - 3
- The actual name of the Container Network Interface (CNI) plugin that provides the network for this network attachment definition. This example uses a Linux bridge CNI plugin. You can also use an OVN-Kubernetes localnet or an SR-IOV CNI plugin.
- 4
- The name of the Linux bridge configured on the node.
- 5
- Optional: A flag to enable the MAC spoof check. When set to
true
, you cannot change the MAC address of the pod or guest interface. This attribute allows only a single MAC address to exit the pod, which provides security against a MAC spoofing attack. - 6
- Optional: The VLAN tag. No additional VLAN configuration is required on the node network configuration policy.
- 7
- Optional: Indicates whether the VM connects to the bridge through the default VLAN. The default value is
true
.
Create the network attachment definition by using the file you created in the previous step:
oc create -f pxe-net-conf.yaml
$ oc create -f pxe-net-conf.yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Edit the virtual machine instance configuration file to include the details of the interface and network.
Specify the network and MAC address, if required by the PXE server. If the MAC address is not specified, a value is assigned automatically.
Ensure that
bootOrder
is set to1
so that the interface boots first. In this example, the interface is connected to a network called<pxe-net>
:Copy to Clipboard Copied! Toggle word wrap Toggle overflow NoteBoot order is global for interfaces and disks.
Assign a boot device number to the disk to ensure proper booting after operating system provisioning.
Set the disk
bootOrder
value to2
:Copy to Clipboard Copied! Toggle word wrap Toggle overflow Specify that the network is connected to the previously created network attachment definition. In this scenario,
<pxe-net>
is connected to the network attachment definition called<pxe-net-conf>
:Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Create the virtual machine instance:
oc create -f vmi-pxe-boot.yaml
$ oc create -f vmi-pxe-boot.yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
virtualmachineinstance.kubevirt.io "vmi-pxe-boot" created
virtualmachineinstance.kubevirt.io "vmi-pxe-boot" created
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Wait for the virtual machine instance to run:
oc get vmi vmi-pxe-boot -o yaml | grep -i phase
$ oc get vmi vmi-pxe-boot -o yaml | grep -i phase phase: Running
Copy to Clipboard Copied! Toggle word wrap Toggle overflow View the virtual machine instance using VNC:
virtctl vnc vmi-pxe-boot
$ virtctl vnc vmi-pxe-boot
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Watch the boot screen to verify that the PXE boot is successful.
Log in to the virtual machine instance:
virtctl console vmi-pxe-boot
$ virtctl console vmi-pxe-boot
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Verification
Verify the interfaces and MAC address on the virtual machine and that the interface connected to the bridge has the specified MAC address. In this case, we used
eth1
for the PXE boot, without an IP address. The other interface,eth0
, got an IP address from Red Hat OpenShift Service on AWS.ip addr
$ ip addr
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
... 3. eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000 link/ether de:00:00:00:00:de brd ff:ff:ff:ff:ff:ff
... 3. eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000 link/ether de:00:00:00:00:de brd ff:ff:ff:ff:ff:ff
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
8.13.5.2. OpenShift Virtualization networking glossary
The following terms are used throughout OpenShift Virtualization documentation:
- Container Network Interface (CNI)
- A Cloud Native Computing Foundation project, focused on container network connectivity. OpenShift Virtualization uses CNI plugins to build upon the basic Kubernetes networking functionality.
- Multus
- A "meta" CNI plugin that allows multiple CNIs to exist so that a pod or virtual machine can use the interfaces it needs.
- Custom resource definition (CRD)
- A Kubernetes API resource that allows you to define custom resources, or an object defined by using the CRD API resource.
- Network attachment definition (NAD)
- A CRD introduced by the Multus project that allows you to attach pods, virtual machines, and virtual machine instances to one or more networks.
- UserDefinedNetwork (UDN)
- A namespace-scoped CRD introduced by the user-defined network API that can be used to create a tenant network that isolates the tenant namespace from other namespaces.
- ClusterUserDefinedNetwork (CUDN)
- A cluster-scoped CRD introduced by the user-defined network API that cluster administrators can use to create a shared network across multiple namespaces.
- Node network configuration policy (NNCP)
-
A CRD introduced by the nmstate project, describing the requested network configuration on nodes. You update the node network configuration, including adding and removing interfaces, by applying a NodeNetworkConfigurationPolicy manifest to the cluster.
8.13.6. Scheduling virtual machines
You can schedule a virtual machine (VM) on a node by ensuring that the VM’s CPU model and policy attribute are matched for compatibility with the CPU models and policy attributes supported by the node.
8.13.6.1. Policy attributes
You can schedule a virtual machine (VM) by specifying a policy attribute and a CPU feature that is matched for compatibility when the VM is scheduled on a node. A policy attribute specified for a VM determines how that VM is scheduled on a node.
Policy attribute | Description |
---|---|
force | The VM is forced to be scheduled on a node. This is true even if the host CPU does not support the VM’s CPU. |
require | Default policy that applies to a VM if the VM is not configured with a specific CPU model and feature specification. If a node is not configured to support CPU node discovery with this default policy attribute or any one of the other policy attributes, VMs are not scheduled on that node. Either the host CPU must support the VM’s CPU or the hypervisor must be able to emulate the supported CPU model. |
optional | The VM is added to a node if that VM is supported by the host’s physical machine CPU. |
disable | The VM cannot be scheduled with CPU node discovery. |
forbid | The VM is not scheduled even if the feature is supported by the host CPU and CPU node discovery is enabled. |
8.13.6.2. Setting a policy attribute and CPU feature
You can set a policy attribute and CPU feature for each virtual machine (VM) to ensure that it is scheduled on a node according to policy and feature. The CPU feature that you set is verified to ensure that it is supported by the host CPU or emulated by the hypervisor.
8.13.6.3. Scheduling virtual machines with the supported CPU model
You can configure a CPU model for a virtual machine (VM) to schedule it on a node where its CPU model is supported.
Procedure
Edit the
domain
spec of your virtual machine configuration file. The following example shows a specific CPU model defined for a VM:Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
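A minimal sketch; the model name is an assumption, and the numbered comment corresponds to the callout below:

spec:
  template:
    spec:
      domain:
        cpu:
          model: Conroe   # 1 - CPU model for the VM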
- CPU model for the VM.
8.13.6.4. Scheduling virtual machines with the host model
When the CPU model for a virtual machine (VM) is set to host-model
, the VM inherits the CPU model of the node where it is scheduled.
Procedure
Edit the
domain
spec of your VM configuration file. The following example showshost-model
being specified for the virtual machine:Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
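A minimal sketch of the relevant stanza:

spec:
  template:
    spec:
      domain:
        cpu:
          model: host-model   # 1 - the VM inherits the CPU model of the scheduling node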
- The VM that inherits the CPU model of the node where it is scheduled.
8.13.6.5. Scheduling virtual machines with a custom scheduler
You can use a custom scheduler to schedule a virtual machine (VM) on a node.
Prerequisites
- A secondary scheduler is configured for your cluster.
-
You have installed the OpenShift CLI (oc).
Procedure
Add the custom scheduler to the VM configuration by editing the
VirtualMachine
manifest. For example:Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
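A minimal sketch; the scheduler name is an assumption, and the numbered comment corresponds to the callout below:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: vm-fedora
spec:
  runStrategy: Always
  template:
    spec:
      schedulerName: my-scheduler   # 1 - custom scheduler used for the virt-launcher pod
      domain:
        devices: {}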
- The name of the custom scheduler. If the
schedulerName
value does not match an existing scheduler, thevirt-launcher
pod stays in aPending
state until the specified scheduler is found.
Verification
Verify that the VM is using the custom scheduler specified in the
VirtualMachine
manifest by checking thevirt-launcher
pod events:View the list of pods in your cluster by entering the following command:
oc get pods
$ oc get pods
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
NAME READY STATUS RESTARTS AGE virt-launcher-vm-fedora-dpc87 2/2 Running 0 24m
NAME READY STATUS RESTARTS AGE virt-launcher-vm-fedora-dpc87 2/2 Running 0 24m
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Run the following command to display the pod events:
oc describe pod virt-launcher-vm-fedora-dpc87
$ oc describe pod virt-launcher-vm-fedora-dpc87
Copy to Clipboard Copied! Toggle word wrap Toggle overflow The value of the
From
field in the output verifies that the scheduler name matches the custom scheduler specified in theVirtualMachine
manifest:Example output
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
8.13.7. About high availability for virtual machines
You can enable high availability for virtual machines (VMs) by configuring remediating nodes.
You can configure remediating nodes by installing the Self Node Remediation Operator or the Fence Agents Remediation Operator from the OperatorHub and enabling machine health checks or node remediation checks.
For more information on remediation, fencing, and maintaining nodes, see the Workload Availability for Red Hat OpenShift documentation.
8.13.8. Virtual machine control plane tuning
OpenShift Virtualization offers the following tuning options at the control-plane level:
-
The
highBurst
profile, which uses fixedQPS
andburst
rates, to create hundreds of virtual machines (VMs) in one batch - Migration setting adjustment based on workload type
8.13.8.1. Configuring a highBurst profile
Use the highBurst
profile to create and maintain a large number of virtual machines (VMs) in one cluster.
Prerequisites
-
You have installed the OpenShift CLI (oc).
Procedure
Apply the following patch to enable the
highBurst
tuning policy profile:oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv \ --type=json -p='[{"op": "add", "path": "/spec/tuningPolicy", \ "value": "highBurst"}]'
$ oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv \ --type=json -p='[{"op": "add", "path": "/spec/tuningPolicy", \ "value": "highBurst"}]'
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Verification
Run the following command to verify the
highBurst
tuning policy profile is enabled:oc get kubevirt.kubevirt.io/kubevirt-kubevirt-hyperconverged \ -n openshift-cnv -o go-template --template='{{range $config, \ $value := .spec.configuration}} {{if eq $config "apiConfiguration" \ "webhookConfiguration" "controllerConfiguration" "handlerConfiguration"}} \ {{"\n"}} {{$config}} = {{$value}} {{end}} {{end}} {{"\n"}}
$ oc get kubevirt.kubevirt.io/kubevirt-kubevirt-hyperconverged \ -n openshift-cnv -o go-template --template='{{range $config, \ $value := .spec.configuration}} {{if eq $config "apiConfiguration" \ "webhookConfiguration" "controllerConfiguration" "handlerConfiguration"}} \ {{"\n"}} {{$config}} = {{$value}} {{end}} {{end}} {{"\n"}}
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
8.14. VM disks
8.14.1. Hot-plugging VM disks
You can add or remove virtual disks without stopping your virtual machine (VM) or virtual machine instance (VMI).
Only data volumes and persistent volume claims (PVCs) can be hot plugged and hot-unplugged. You cannot hot plug or hot-unplug container disks.
A hot plugged disk remains attached to the VM even after reboot. You must detach the disk to remove it from the VM.
You can make a hot plugged disk persistent so that it is permanently mounted on the VM.
Each VM has a virtio-scsi
controller so that hot plugged disks can use the scsi
bus. The virtio-scsi
controller overcomes the limitations of virtio
while retaining its performance advantages. It is highly scalable and supports hot plugging over 4 million disks.
Regular virtio
is not available for hot plugged disks because it is not scalable. Each virtio
disk uses one of the limited PCI Express (PCIe) slots in the VM. PCIe slots are also used by other devices and must be reserved in advance. Therefore, slots might not be available on demand.
8.14.1.1. Hot plugging and hot unplugging a disk by using the web console
You can hot plug a disk by attaching it to a virtual machine (VM) while the VM is running by using the Red Hat OpenShift Service on AWS web console.
The hot plugged disk remains attached to the VM until you unplug it.
You can make a hot plugged disk persistent so that it is permanently mounted on the VM.
Prerequisites
- You must have a data volume or persistent volume claim (PVC) available for hot plugging.
Procedure
- Navigate to Virtualization → VirtualMachines in the web console.
- Select a running VM to view its details.
- On the VirtualMachine details page, click Configuration → Disks.
Add a hot plugged disk:
- Click Add disk.
- In the Add disk (hot plugged) window, select the disk from the Source list and click Save.
Optional: Unplug a hot plugged disk:
-
Click the Options menu
beside the disk and select Detach.
- Click Detach.
-
Click the Options menu
Optional: Make a hot plugged disk persistent:
-
Click the Options menu
beside the disk and select Make persistent.
- Reboot the VM to apply the change.
-
Click the Options menu
8.14.1.2. Hot plugging and hot unplugging a disk by using the CLI
You can hot plug and hot unplug a disk while a virtual machine (VM) is running by using the command line.
You can make a hot plugged disk persistent so that it is permanently mounted on the VM.
Prerequisites
- You must have at least one data volume or persistent volume claim (PVC) available for hot plugging.
Procedure
Hot plug a disk by running the following command:
$ virtctl addvolume <virtual-machine|virtual-machine-instance> \
  --volume-name=<datavolume|PVC> \
  [--persist] [--serial=<label-name>]
- Use the optional --persist flag to add the hot plugged disk to the virtual machine specification as a permanently mounted virtual disk. Stop, restart, or reboot the virtual machine to permanently mount the virtual disk. After specifying the --persist flag, you can no longer hot plug or hot unplug the virtual disk. The --persist flag applies to virtual machines, not virtual machine instances.
- The optional --serial flag allows you to add an alphanumeric string label of your choice. This helps you to identify the hot plugged disk in a guest virtual machine. If you do not specify this option, the label defaults to the name of the hot plugged data volume or PVC.
Hot unplug a disk by running the following command:
$ virtctl removevolume <virtual-machine|virtual-machine-instance> \
  --volume-name=<datavolume|PVC>
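For example, assuming a running VM named rhel9-vm and a hot-pluggable data volume named scratch-dv (both names are hypothetical), the two operations might look like the following commands:

$ virtctl addvolume rhel9-vm --volume-name=scratch-dv --serial=scratch

$ virtctl removevolume rhel9-vm --volume-name=scratch-dv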
8.14.2. Expanding virtual machine disks
You can increase the size of a virtual machine (VM) disk by expanding the persistent volume claim (PVC) of the disk.
If your storage provider does not support volume expansion, you can expand the available virtual storage of a VM by adding blank data volumes.
You cannot reduce the size of a VM disk.
8.14.2.1. Increasing a VM disk size by expanding the PVC of the disk
You can increase the size of a virtual machine (VM) disk by expanding the persistent volume claim (PVC) of the disk. To specify the increased PVC volume, you can use the web console with the VM running. Alternatively, you can edit the PVC manifest in the CLI.
If the PVC uses the file system volume mode, the disk image file expands to the available size while reserving some space for file system overhead.
8.14.2.1.1. Expanding a VM disk PVC in the web console
You can increase the size of a VM disk PVC in the web console without leaving the VirtualMachines page and with the VM running.
Procedure
- In the Administrator or Virtualization perspective, open the VirtualMachines page.
- Select the running VM to open its Details page.
- Select the Configuration tab and click Storage.
Click the options menu next to the disk that you want to expand, and select the Edit option.

The Edit disk dialog opens.
- In the PersistentVolumeClaim size field, enter the desired size.
- Click Save.
You can enter any value greater than the current one. However, if the new value exceeds the available size, an error is displayed.
8.14.2.1.2. Expanding a VM disk PVC by editing its manifest
Prerequisites
- You have installed the OpenShift CLI (oc).
Procedure
Edit the PersistentVolumeClaim manifest of the VM disk that you want to expand:

$ oc edit pvc <pvc_name>
Update the disk size by increasing the value of the spec.resources.requests.storage field in the PVC manifest.
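The original example manifest is not reproduced here. A minimal sketch of the relevant PVC fields, assuming a disk PVC named my-vm-disk being expanded to 30Gi (both values are hypothetical), might look like this:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-vm-disk        # hypothetical PVC name
spec:
  resources:
    requests:
      storage: 30Gi       # the new, larger disk size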
8.14.2.2. Expanding available virtual storage by adding blank data volumes
You can expand the available storage of a virtual machine (VM) by adding blank data volumes.
Prerequisites
- You must have at least one persistent volume.
- You have installed the OpenShift CLI (oc).
Procedure
Create a DataVolume manifest as shown in the following example:

Example DataVolume manifest
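The original example manifest is not reproduced here. A minimal sketch of a blank data volume, assuming a hypothetical name and a 2Gi size, might look like this:

apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: blank-image-datavolume
spec:
  source:
    blank: {}             # create an empty disk image
  storage:
    resources:
      requests:
        storage: 2Gi      # size of the new blank disk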
Create the data volume by running the following command:
$ oc create -f <blank-image-datavolume>.yaml

8.14.3. Migrating VM disks to a different storage class
You can migrate one or more virtual disks to a different storage class without stopping your virtual machine (VM) or virtual machine instance (VMI).
8.14.3.1. Migrating VM disks to a different storage class by using the web console
You can migrate one or more disks attached to a virtual machine (VM) to a different storage class by using the Red Hat OpenShift Service on AWS web console. When performing this action on a running VM, the operation of the VM is not interrupted and the data on the migrated disks remains accessible.
With the OpenShift Virtualization Operator, you can start storage class migration for only one VM at a time, and the VM must be running. If you need to migrate more VMs at once, or to migrate a mix of running and stopped VMs, consider using the Migration Toolkit for Containers (MTC).
Migration Toolkit for Containers is not part of OpenShift Virtualization and requires separate installation.
Storage class migration is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Prerequisites
- You must have a data volume or a persistent volume claim (PVC) available for storage class migration.
- The cluster must have a node available for live migration. As part of the storage class migration, the VM is live migrated to a different node.
- The VM must be running.
Procedure
- Navigate to Virtualization → VirtualMachines in the web console.
Click the Options menu beside the virtual machine and select Migration → Storage.
You can also access this option from the VirtualMachine details page by selecting Actions → Migration → Storage.
Alternatively, right-click the VM in the tree view and select Migration from the pop-up menu.
- On the Migration details page, choose whether to migrate the entire VM storage or selected volumes only. If you click Selected volumes, select any disks that you intend to migrate. Click Next to proceed.
- From the list of available options on the Destination StorageClass page, select the storage class to migrate to. Click Next to proceed.
- On the Review page, review the list of affected disks and the target storage class. To start the migration, click Migrate VirtualMachine storage.
- Stay on the Migrate VirtualMachine storage page to watch the progress and wait for the confirmation that the migration completed successfully.
Verification
- From the VirtualMachine details page, navigate to Configuration → Storage.
- Verify that all disks have the expected storage class listed in the Storage class column.
Chapter 9. Networking

9.1. Networking overview
OpenShift Virtualization provides advanced networking functionality by using custom resources and plugins. Virtual machines (VMs) are integrated with Red Hat OpenShift Service on AWS networking and its ecosystem.
OpenShift Virtualization support for single-stack IPv6 clusters is limited to the OVN-Kubernetes localnet and Linux bridge Container Network Interface (CNI) plugins.
Deploying OpenShift Virtualization on a single-stack IPv6 cluster is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
The following figure illustrates the typical network setup of OpenShift Virtualization. Other configurations are also possible.
Figure 9.1. OpenShift Virtualization networking overview
Pods and VMs run on the same network infrastructure which allows you to easily connect your containerized and virtualized workloads.
You can connect VMs to the default pod network and to any number of secondary networks.
The default pod network provides connectivity between all its members, service abstraction, IP management, micro segmentation, and other functionality.
Multus is a "meta" CNI plugin that enables a pod or virtual machine to connect to additional network interfaces by using other compatible CNI plugins.
The default pod network is overlay-based, tunneled through the underlying machine network.
The machine network can be defined over a selected set of network interface controllers (NICs).
Secondary VM networks are typically bridged directly to a physical network, with or without VLAN encapsulation. It is also possible to create virtual overlay networks for secondary networks.
Connecting VMs directly to the underlay network is not supported on Red Hat OpenShift Service on AWS, Microsoft Azure Red Hat OpenShift, or Oracle® Cloud Infrastructure (OCI).
Connecting VMs to user-defined networks with the layer2 topology is recommended on public clouds.

Secondary VM networks can be defined on a dedicated set of NICs, as shown in Figure 9.1, or they can use the machine network.
9.1.1. OpenShift Virtualization networking glossary
The following terms are used throughout OpenShift Virtualization documentation:
- Container Network Interface (CNI)
- A Cloud Native Computing Foundation project, focused on container network connectivity. OpenShift Virtualization uses CNI plugins to build upon the basic Kubernetes networking functionality.
- Multus
- A "meta" CNI plugin that allows multiple CNIs to exist so that a pod or virtual machine can use the interfaces it needs.
- Custom resource definition (CRD)
- A Kubernetes API resource that allows you to define custom resources, or an object defined by using the CRD API resource.
- Network attachment definition (NAD)
- A CRD introduced by the Multus project that allows you to attach pods, virtual machines, and virtual machine instances to one or more networks.
- UserDefinedNetwork (UDN)
- A namespace-scoped CRD introduced by the user-defined network API that can be used to create a tenant network that isolates the tenant namespace from other namespaces.
- ClusterUserDefinedNetwork (CUDN)
- A cluster-scoped CRD introduced by the user-defined network API that cluster administrators can use to create a shared network across multiple namespaces.
- Node network configuration policy (NNCP)
- A CRD introduced by the nmstate project, describing the requested network configuration on nodes. You update the node network configuration, including adding and removing interfaces, by applying a NodeNetworkConfigurationPolicy manifest to the cluster.
9.1.2. Using the default pod network
- Connecting a virtual machine to the default pod network
- Each VM is connected by default to the default internal pod network. You can add or remove network interfaces by editing the VM specification.
- Exposing a virtual machine as a service
- You can expose a VM within the cluster or outside the cluster by creating a Service object.
9.1.3. Configuring a primary user-defined network
- Connecting a virtual machine to a primary user-defined network
You can connect a virtual machine (VM) to a user-defined network (UDN) on the primary interface of the VM. The primary UDN replaces the default pod network to connect pods and VMs in selected namespaces.
Cluster administrators can configure a primary UserDefinedNetwork CRD to create a tenant network that isolates the tenant namespace from other namespaces without requiring network policies. Additionally, cluster administrators can use the ClusterUserDefinedNetwork CRD to create a shared OVN layer2 network across multiple namespaces.

User-defined networks with the layer2 overlay topology are useful for VM workloads, and a good alternative to secondary networks in environments where physical network access is limited, such as the public cloud. The layer2 topology enables seamless migration of VMs without the need for Network Address Translation (NAT), and also provides persistent IP addresses that are preserved between reboots and during live migration.
9.1.4. Configuring VM secondary network interfaces
You can connect a virtual machine to a secondary network by using an OVN-Kubernetes Container Network Interface (CNI) plugin. It is not required to specify the primary pod network in the VM specification when connecting to a secondary network interface.
- Connecting a virtual machine to an OVN-Kubernetes secondary network
You can connect a VM to an Open Virtual Network (OVN)-Kubernetes secondary network. OpenShift Virtualization supports the layer2 topology for OVN-Kubernetes.

A layer2 topology connects workloads by a cluster-wide logical switch. The OVN-Kubernetes CNI plugin uses the Geneve (Generic Network Virtualization Encapsulation) protocol to create an overlay network between nodes. You can use this overlay network to connect VMs on different nodes, without having to configure any additional physical networking infrastructure.

To configure an OVN-Kubernetes secondary network and attach a VM to that network, perform the following steps:
- Configure an OVN-Kubernetes secondary network by creating a network attachment definition (NAD).
- Connect the VM to the OVN-Kubernetes secondary network by adding the network details to the VM specification.
- Hot plugging secondary network interfaces
- You can add or remove secondary network interfaces without stopping your VM. OpenShift Virtualization supports hot plugging and hot unplugging for secondary interfaces that use bridge binding and the OVN-Kubernetes layer2 topology.
- Configuring and viewing IP addresses
- You can configure an IP address of a secondary network interface when you create a VM. The IP address is provisioned with cloud-init. You can view the IP address of a VM by using the Red Hat OpenShift Service on AWS web console or the command line. The network information is collected by the QEMU guest agent.
9.1.5. Integrating with OpenShift Service Mesh
- Connecting a virtual machine to a service mesh
- OpenShift Virtualization is integrated with OpenShift Service Mesh. You can monitor, visualize, and control traffic between pods and virtual machines.
9.1.6. Managing MAC address pools
- Managing MAC address pools for network interfaces
- The KubeMacPool component allocates MAC addresses for VM network interfaces from a shared MAC address pool. This ensures that each network interface is assigned a unique MAC address. A virtual machine instance created from that VM retains the assigned MAC address across reboots.
9.1.7. Configuring SSH access
- Configuring SSH access to virtual machines
You can configure SSH access to VMs by using the following methods:
You create an SSH key pair, add the public key to a VM, and connect to the VM by running the virtctl ssh command with the private key.

You can add public SSH keys to Red Hat Enterprise Linux (RHEL) 9 VMs at runtime, or at first boot to VMs with guest operating systems that can be configured by using a cloud-init data source.
You add the virtctl port-forward command to your .ssh/config file and connect to the VM by using OpenSSH.

You create a service, associate the service with the VM, and connect to the IP address and port exposed by the service.

You configure a secondary network, attach a VM to the secondary network interface, and connect to its allocated IP address.
9.2. Connecting a virtual machine to the default pod network
You can connect a virtual machine to the default internal pod network by configuring its network interface to use the masquerade binding mode.
Traffic passing through network interfaces to the default pod network is interrupted during live migration.
9.2.1. Configuring masquerade mode from the CLI
You can use masquerade mode to hide a virtual machine’s outgoing traffic behind the pod IP address. Masquerade mode uses Network Address Translation (NAT) to connect virtual machines to the pod network backend through a Linux bridge.
Enable masquerade mode and allow traffic to enter the virtual machine by editing your virtual machine configuration file.
Prerequisites
- You have installed the OpenShift CLI (oc).
- The virtual machine must be configured to use DHCP to acquire IPv4 addresses.
Procedure
Edit the interfaces spec of your virtual machine configuration file:

- 1
- Connect using masquerade mode.
- 2
- Optional: List the ports that you want to expose from the virtual machine, each specified by the port field. The port value must be a number between 0 and 65536. When the ports array is not used, all ports in the valid range are open to incoming traffic. In this example, incoming traffic is allowed on port 80.
Note: Ports 49152 and 49153 are reserved for use by the libvirt platform and all other incoming traffic to these ports is dropped.
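The VM manifest that callouts 1 and 2 annotate is not reproduced above. A minimal sketch of the relevant stanzas, assuming a VM named example-vm, might look like this:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm
spec:
  template:
    spec:
      domain:
        devices:
          interfaces:
            - name: default
              masquerade: {}    # callout 1: connect using masquerade mode
              ports:
                - port: 80      # callout 2: allow incoming traffic on port 80
      networks:
        - name: default
          pod: {}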
Create the virtual machine:
$ oc create -f <vm-name>.yaml

9.2.2. Configuring masquerade mode with dual-stack (IPv4 and IPv6)
You can configure a new virtual machine (VM) to use both IPv6 and IPv4 on the default pod network by using cloud-init.
The Network.pod.vmIPv6NetworkCIDR field in the virtual machine instance configuration determines the static IPv6 address of the VM and the gateway IP address. These are used by the virt-launcher pod to route IPv6 traffic to the virtual machine and are not used externally. The Network.pod.vmIPv6NetworkCIDR field specifies an IPv6 address block in Classless Inter-Domain Routing (CIDR) notation. The default value is fd10:0:2::2/120. You can edit this value based on your network requirements.
When the virtual machine is running, incoming and outgoing traffic for the virtual machine is routed to both the IPv4 address and the unique IPv6 address of the virt-launcher pod. The virt-launcher pod then routes the IPv4 traffic to the DHCP address of the virtual machine, and the IPv6 traffic to the statically set IPv6 address of the virtual machine.
Prerequisites
- The Red Hat OpenShift Service on AWS cluster must use the OVN-Kubernetes Container Network Interface (CNI) network plugin configured for dual-stack.
- You have installed the OpenShift CLI (oc).
Procedure
In a new virtual machine configuration, include an interface with masquerade and configure the IPv6 address and default gateway by using cloud-init.

- 1
- Connect using masquerade mode.
- 2
- Allows incoming traffic on port 80 to the virtual machine.
- 3
- The static IPv6 address as determined by the Network.pod.vmIPv6NetworkCIDR field in the virtual machine instance configuration. The default value is fd10:0:2::2/120.
- 4
- The gateway IP address as determined by the Network.pod.vmIPv6NetworkCIDR field in the virtual machine instance configuration. The default value is fd10:0:2::1.
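The example manifest itself is not included above. A minimal sketch that matches the callouts, assuming a VM named example-vm-ipv6, might look like this:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm-ipv6
spec:
  template:
    spec:
      domain:
        devices:
          interfaces:
            - name: default
              masquerade: {}            # callout 1: masquerade mode
              ports:
                - port: 80              # callout 2: allow incoming traffic on port 80
      networks:
        - name: default
          pod: {}
      volumes:
        - name: cloudinitdisk
          cloudInitNoCloud:
            networkData: |
              version: 2
              ethernets:
                eth0:
                  dhcp4: true
                  addresses: [ fd10:0:2::2/120 ]   # callout 3: static IPv6 address
                  gateway6: fd10:0:2::1            # callout 4: gateway IP address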
Create the virtual machine in the namespace:
$ oc create -f example-vm-ipv6.yaml
Verification
- To verify that IPv6 has been configured, start the virtual machine and view the interface status of the virtual machine instance to ensure it has an IPv6 address:
$ oc get vmi <vmi-name> -o jsonpath="{.status.interfaces[*].ipAddresses}"

9.2.3. About jumbo frames support
When using the OVN-Kubernetes CNI plugin, you can send unfragmented jumbo frame packets between two virtual machines (VMs) that are connected on the default pod network. Jumbo frames have a maximum transmission unit (MTU) value greater than 1500 bytes.
The VM automatically gets the MTU value of the cluster network, set by the cluster administrator, in one of the following ways:
- libvirt: If the guest OS has the latest version of the VirtIO driver that can interpret incoming data via a Peripheral Component Interconnect (PCI) config register in the emulated device.
- DHCP: If the guest DHCP client can read the MTU value from the DHCP server response.
For Windows VMs that do not have a VirtIO driver, you must set the MTU manually by using netsh or a similar tool. This is because the Windows DHCP client does not read the MTU value.
9.3. Connecting a virtual machine to a primary user-defined network
You can connect a virtual machine (VM) to a user-defined network (UDN) on the VM’s primary interface by using the Red Hat OpenShift Service on AWS web console or the CLI. The primary user-defined network replaces the default pod network in your specified namespace. Unlike the pod network, you can define the primary UDN per project, where each project can use its specific subnet and topology.
OpenShift Virtualization supports the namespace-scoped UserDefinedNetwork and the cluster-scoped ClusterUserDefinedNetwork custom resource definitions (CRDs).
Cluster administrators can configure a primary UserDefinedNetwork CRD to create a tenant network that isolates the tenant namespace from other namespaces without requiring network policies. Additionally, cluster administrators can use the ClusterUserDefinedNetwork CRD to create a shared OVN network across multiple namespaces.
You must add the k8s.ovn.org/primary-user-defined-network label when you create a namespace that is to be used with user-defined networks.
With the layer 2 topology, OVN-Kubernetes creates an overlay network between nodes. You can use this overlay network to connect VMs on different nodes without having to configure any additional physical networking infrastructure.
The layer 2 topology enables seamless migration of VMs without the need for Network Address Translation (NAT) because persistent IP addresses are preserved across cluster nodes during live migration.
You must consider the following limitations before implementing a primary UDN:
- You cannot use the virtctl ssh command to configure SSH access to a VM.
- You cannot use the oc port-forward command to forward ports to a VM.
- You cannot use headless services to access a VM.
- You cannot define readiness and liveness probes to configure VM health checks.
9.3.1. Creating a primary user-defined network by using the web console
You can use the Red Hat OpenShift Service on AWS web console to create a primary namespace-scoped UserDefinedNetwork or a cluster-scoped ClusterUserDefinedNetwork CRD. The UDN serves as the default primary network for pods and VMs that you create in namespaces associated with the network.
9.3.1.1. Creating a namespace for user-defined networks by using the web console
You can create a namespace to be used with primary user-defined networks (UDNs) by using the Red Hat OpenShift Service on AWS web console.
Prerequisites
- Log in to the Red Hat OpenShift Service on AWS web console as a user with cluster-admin permissions.
Procedure
- From the Administrator perspective, click Administration → Namespaces.
- Click Create Namespace.
- In the Name field, specify a name for the namespace. The name must consist of lower case alphanumeric characters or '-', and must start and end with an alphanumeric character.
- In the Labels field, add the k8s.ovn.org/primary-user-defined-network label.
Optional: If the namespace is to be used with an existing cluster-scoped UDN, add the appropriate labels as defined in the
spec.namespaceSelector
field in theClusterUserDefinedNetwork
custom resource. - Optional: Specify a default network policy.
- Click Create to create the namespace.
9.3.1.2. Creating a primary namespace-scoped user-defined network by using the web console
You can create an isolated primary network in your project namespace by creating a UserDefinedNetwork custom resource in the Red Hat OpenShift Service on AWS web console.
Prerequisites
- You have access to the Red Hat OpenShift Service on AWS web console as a user with cluster-admin permissions.
- You have created a namespace and applied the k8s.ovn.org/primary-user-defined-network label. For more information, see "Creating a namespace for user-defined networks by using the web console".
Procedure
- From the Administrator perspective, click Networking → UserDefinedNetworks.
- Click Create UserDefinedNetwork.
- From the Project name list, select the namespace that you previously created.
- Specify a value in the Subnet field.
- Click Create. The user-defined network serves as the default primary network for pods and virtual machines that you create in this namespace.
9.3.1.3. Creating a primary cluster-scoped user-defined network by using the web console
You can connect multiple namespaces to the same primary user-defined network (UDN) by creating a ClusterUserDefinedNetwork custom resource in the Red Hat OpenShift Service on AWS web console.
Prerequisites
- You have access to the Red Hat OpenShift Service on AWS web console as a user with cluster-admin permissions.
Procedure
- From the Administrator perspective, click Networking → UserDefinedNetworks.
- From the Create list, select ClusterUserDefinedNetwork.
- In the Name field, specify a name for the cluster-scoped UDN.
- Specify a value in the Subnet field.
- In the Project(s) Match Labels field, add the appropriate labels to select namespaces that the cluster UDN applies to.
- Click Create. The cluster-scoped UDN serves as the default primary network for pods and virtual machines located in namespaces that contain the labels that you specified in step 5.
9.3.2. Creating a primary user-defined network by using the CLI
You can create a primary UserDefinedNetwork or ClusterUserDefinedNetwork CRD by using the CLI.
9.3.2.1. Creating a namespace for user-defined networks by using the CLI
You can create a namespace to be used with primary user-defined networks (UDNs) by using the CLI.
Prerequisites
- You have access to the cluster as a user with cluster-admin permissions.
- You have installed the OpenShift CLI (oc).
Procedure
Create a Namespace object as a YAML file similar to the following example:

- 1
- This label is required for the namespace to be associated with a UDN. If the namespace is to be used with an existing cluster UDN, you must also add the appropriate labels that are defined in the spec.namespaceSelector field of the ClusterUserDefinedNetwork custom resource.
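The example object is not reproduced above. A minimal sketch, assuming a hypothetical namespace name, might look like this:

apiVersion: v1
kind: Namespace
metadata:
  name: udn-namespace
  labels:
    k8s.ovn.org/primary-user-defined-network: ""   # callout 1: required UDN label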
Apply the Namespace manifest by running the following command:

$ oc apply -f <filename>.yaml

9.3.2.2. Creating a primary namespace-scoped user-defined network by using the CLI
You can create an isolated primary network in your project namespace by using the CLI. You must use the OVN-Kubernetes layer 2 topology and enable persistent IP address allocation in the user-defined network (UDN) configuration to ensure VM live migration support.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have created a namespace and applied the k8s.ovn.org/primary-user-defined-network label.
Procedure
Create a UserDefinedNetwork object to specify the custom network configuration:

Example UserDefinedNetwork manifest

- 1
- Specifies the name of the UserDefinedNetwork custom resource.
- 2
- Specifies the namespace in which the VM is located. The namespace must have the k8s.ovn.org/primary-user-defined-network label. The namespace must not be default, an openshift-* namespace, or match any global namespaces that are defined by the Cluster Network Operator (CNO).
- 3
- Specifies the topological configuration of the network. The required value is Layer2. A Layer2 topology creates a logical switch that is shared by all nodes.
- 4
- Specifies whether the UDN is primary or secondary. The Primary role means that the UDN acts as the primary network for the VM and all default traffic passes through this network.
- 5
- Specifies that virtual workloads have consistent IP addresses across reboots and migration. The spec.layer2.subnets field is required when ipam.lifecycle: Persistent is specified.
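The example manifest is not reproduced above. A minimal sketch that matches the callouts, assuming hypothetical names and a hypothetical subnet, might look like this:

apiVersion: k8s.ovn.org/v1
kind: UserDefinedNetwork
metadata:
  name: udn-l2-network            # callout 1: name of the custom resource
  namespace: udn-namespace        # callout 2: namespace that carries the UDN label
spec:
  topology: Layer2                # callout 3: required topology
  layer2:
    role: Primary                 # callout 4: primary network for the VM
    subnets:
      - "10.100.0.0/16"
    ipam:
      lifecycle: Persistent       # callout 5: persistent IP addresses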
Apply the UserDefinedNetwork manifest by running the following command:

$ oc apply --validate=true -f <filename>.yaml

9.3.2.3. Creating a primary cluster-scoped user-defined network by using the CLI
You can connect multiple namespaces to the same primary user-defined network (UDN) to achieve native tenant isolation by using the CLI.
Prerequisites
- You have access to the cluster as a user with cluster-admin privileges.
- You have installed the OpenShift CLI (oc).
Procedure
Create a ClusterUserDefinedNetwork object to specify the custom network configuration:

Example ClusterUserDefinedNetwork manifest

- 1
- Specifies the name of the ClusterUserDefinedNetwork custom resource.
- 2
- Specifies the set of namespaces that the cluster UDN applies to. The namespace selector must not point to default, an openshift-* namespace, or any global namespaces that are defined by the Cluster Network Operator (CNO).
- 3
- Specifies the type of selector. In this example, the matchExpressions selector selects objects that have the label kubernetes.io/metadata.name with the value red-namespace or blue-namespace.
- 4
- Specifies the type of operator. Possible values are In, NotIn, and Exists.
- 5
- Specifies the topological configuration of the network. The required value is Layer2. A Layer2 topology creates a logical switch that is shared by all nodes.
- 6
- Specifies whether the UDN is primary or secondary. The Primary role means that the UDN acts as the primary network for the VM and all default traffic passes through this network.
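The example manifest is not reproduced above. A minimal sketch that matches the callouts, assuming hypothetical names and a hypothetical subnet, might look like this:

apiVersion: k8s.ovn.org/v1
kind: ClusterUserDefinedNetwork
metadata:
  name: cudn-l2-network                     # callout 1: name of the cluster UDN
spec:
  namespaceSelector:                        # callout 2: namespaces the cluster UDN applies to
    matchExpressions:                       # callout 3: selector type
      - key: kubernetes.io/metadata.name
        operator: In                        # callout 4: operator
        values:
          - red-namespace
          - blue-namespace
  network:
    topology: Layer2                        # callout 5: required topology
    layer2:
      role: Primary                         # callout 6: primary network
      subnets:
        - "10.150.0.0/16"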
Apply the ClusterUserDefinedNetwork manifest by running the following command:

$ oc apply --validate=true -f <filename>.yaml

9.3.3. Attaching a virtual machine to the primary user-defined network by using the CLI
You can connect a virtual machine (VM) to the primary user-defined network (UDN) by requesting the pod network attachment, and configuring the interface binding.
Prerequisites
- You have installed the OpenShift CLI (oc).
Procedure
Edit the VirtualMachine manifest to add the UDN interface details, as in the following example:

Example VirtualMachine manifest

- 1
- The namespace in which the VM is located. This value must match the namespace in which the UDN is defined.
- 2
- The name of the user-defined network interface.
- 3
- The name of the binding plugin that is used to connect the interface to the VM. The required value is l2bridge.
- 4
- The name of the network. This must match the value of the spec.template.spec.domain.devices.interfaces.name field.
Apply the VirtualMachine manifest by running the following command:

$ oc apply -f <filename>.yaml

9.4. Exposing a virtual machine by using a service
You can expose a virtual machine within the cluster or outside the cluster by creating a Service object.
9.4.1. About services
A Kubernetes service exposes network access for clients to an application running on a set of pods. Services offer abstraction, load balancing, and, in the case of the NodePort and LoadBalancer types, exposure to the outside world.
- ClusterIP
- Exposes the service on an internal IP address and as a DNS name to other applications within the cluster. A single service can map to multiple virtual machines. When a client tries to connect to the service, the client’s request is load balanced among available backends. ClusterIP is the default service type.
- NodePort
- Exposes the service on the same port of each selected node in the cluster. NodePort makes a port accessible from outside the cluster, as long as the node itself is externally accessible to the client.
- LoadBalancer
- Creates an external load balancer in the current cloud (if supported) and assigns a fixed, external IP address to the service.
For Red Hat OpenShift Service on AWS, you must use externalTrafficPolicy: Cluster when configuring a load-balancing service, to minimize the network downtime during live migration.
9.4.2. Dual-stack support
If IPv4 and IPv6 dual-stack networking is enabled for your cluster, you can create a service that uses IPv4, IPv6, or both, by defining the spec.ipFamilyPolicy and the spec.ipFamilies fields in the Service object.
The spec.ipFamilyPolicy field can be set to one of the following values:
- SingleStack
- The control plane assigns a cluster IP address for the service based on the first configured service cluster IP range.
- PreferDualStack
- The control plane assigns both IPv4 and IPv6 cluster IP addresses for the service on clusters that have dual-stack configured.
- RequireDualStack
- This option fails for clusters that do not have dual-stack networking enabled. For clusters that have dual-stack configured, the behavior is the same as when the value is set to PreferDualStack. The control plane allocates cluster IP addresses from both IPv4 and IPv6 address ranges.
You can define which IP family to use for single-stack or define the order of IP families for dual-stack by setting the spec.ipFamilies field to one of the following array values:
- [IPv4]
- [IPv6]
- [IPv4, IPv6]
- [IPv6, IPv4]
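For example, a minimal Service sketch (hypothetical names and ports) that prefers dual-stack and lists IPv4 first might look like this:

apiVersion: v1
kind: Service
metadata:
  name: example-dual-stack-service
spec:
  ipFamilyPolicy: PreferDualStack
  ipFamilies:
    - IPv4
    - IPv6
  selector:
    app: example-vm
  ports:
    - port: 22
      protocol: TCP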
9.4.3. Creating a service by using the CLI
You can create a service and associate it with a virtual machine (VM) by using the command line.
Prerequisites
- You configured the cluster network to support the service.
- You have installed the OpenShift CLI (oc).
Procedure
Edit the VirtualMachine manifest to add the label for service creation:

- 1
- Add special: key to the spec.template.metadata.labels stanza.
Note: Labels on a virtual machine are passed through to the pod. The special: key label must match the label in the spec.selector attribute of the Service manifest.

- Save the VirtualMachine manifest file to apply your changes.

Create a Service manifest to expose the VM:

- Save the Service manifest file.

Create the service by running the following command:
$ oc create -f example-service.yaml

- Restart the VM to apply the changes.
Verification
Query the Service object to verify that it is available:

$ oc get service -n example-namespace

9.5. Connecting a virtual machine to an OVN-Kubernetes layer 2 secondary network
You can connect a VM to an Open Virtual Network (OVN)-Kubernetes secondary network. OpenShift Virtualization supports the layer2 topology for OVN-Kubernetes.

A layer2 topology connects workloads by a cluster-wide logical switch. The OVN-Kubernetes Container Network Interface (CNI) plugin uses the Geneve (Generic Network Virtualization Encapsulation) protocol to create an overlay network between nodes. You can use this overlay network to connect VMs on different nodes, without having to configure any additional physical networking infrastructure.

To configure an OVN-Kubernetes layer2 secondary network and attach a VM to that network, perform the following steps:
9.5.1. Creating an OVN-Kubernetes layer 2 NAD
You can create an OVN-Kubernetes network attachment definition (NAD) for the layer 2 network topology by using the Red Hat OpenShift Service on AWS web console or the CLI.
Configuring IP address management (IPAM) by specifying the spec.config.ipam.subnet attribute in a network attachment definition for virtual machines is not supported.
9.5.1.1. Creating a NAD for layer 2 topology by using the CLI
You can create a network attachment definition (NAD) which describes how to attach a pod to the layer 2 overlay network.
Prerequisites
- You have access to the cluster as a user with cluster-admin privileges.
- You have installed the OpenShift CLI (oc).
Procedure
Create a NetworkAttachmentDefinition object:

- 1
- The Container Network Interface (CNI) specification version. The required value is 0.3.1.
- 2
- The name of the network. This attribute is not namespaced. For example, you can have a network named l2-network referenced from two different NetworkAttachmentDefinition objects that exist in two different namespaces. This feature is useful to connect VMs in different namespaces.
- 3
- The name of the CNI plugin. The required value is ovn-k8s-cni-overlay.
- 4
- The topological configuration for the network. The required value is layer2.
- 5
- Optional: The maximum transmission unit (MTU) value. If you do not set a value, the Cluster Network Operator (CNO) sets a default MTU value by calculating the difference among the underlay MTU of the primary network interface, the overlay MTU of the pod network, such as the Geneve (Generic Network Virtualization Encapsulation), and byte capacity of any enabled features, such as IPsec.
- 6
- The value of the namespace and name fields in the metadata stanza of the NetworkAttachmentDefinition object.
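The example object is not reproduced above. A minimal sketch that matches callouts 1 through 6 in order (cniVersion, name, type, topology, mtu, and netAttachDefName), assuming a hypothetical namespace, might look like this:

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: l2-network
  namespace: my-namespace
spec:
  config: |-
    {
      "cniVersion": "0.3.1",
      "name": "my-namespace-l2-network",
      "type": "ovn-k8s-cni-overlay",
      "topology": "layer2",
      "mtu": 1300,
      "netAttachDefName": "my-namespace/l2-network"
    }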
Note: The previous example configures a cluster-wide overlay without a subnet defined. This means that the logical switch implementing the network only provides layer 2 communication. You must configure an IP address when you create the virtual machine by either setting a static IP address or by deploying a DHCP server on the network for a dynamic IP address.
Apply the manifest by running the following command:
$ oc apply -f <filename>.yaml

9.5.1.2. Creating a NAD for layer 2 topology by using the web console
You can create a network attachment definition (NAD) that describes how to attach a pod to the layer 2 overlay network.
Prerequisites
- You have access to the cluster as a user with cluster-admin privileges.
Procedure
- Go to Networking → NetworkAttachmentDefinitions in the web console.
- Click Create Network Attachment Definition. The network attachment definition must be in the same namespace as the pod or virtual machine using it.
- Enter a unique Name and optional Description.
- Select OVN Kubernetes L2 overlay network from the Network Type list.
- Click Create.
9.5.2. Attaching a virtual machine to the OVN-Kubernetes layer 2 secondary network
You can attach a virtual machine (VM) to the OVN-Kubernetes layer 2 secondary network interface by using the Red Hat OpenShift Service on AWS web console or the CLI.
9.5.2.1. Attaching a virtual machine to an OVN-Kubernetes secondary network using the CLI
You can connect a virtual machine (VM) to the OVN-Kubernetes secondary network by including the network details in the VM configuration.
Prerequisites
- You have access to the cluster as a user with cluster-admin privileges.
- You have installed the OpenShift CLI (oc).
Procedure
Edit the VirtualMachine manifest to add the OVN-Kubernetes secondary network interface details, as in the following example:

- 1
- The name of the OVN-Kubernetes secondary interface.
- 2
- The name of the network. This must match the value of the spec.template.spec.domain.devices.interfaces.name field.
- 3
- The name of the NetworkAttachmentDefinition object.
- 4
- Specifies the nodes on which the VM can be scheduled. The recommended node selector value is node-role.kubernetes.io/worker: ''.
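The example manifest is not reproduced above. A minimal sketch of the relevant stanzas that match the callouts, assuming a NAD named l2-network, might look like this:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: vm-secondary-net
spec:
  template:
    spec:
      domain:
        devices:
          interfaces:
            - name: secondary                    # callout 1: secondary interface name
              bridge: {}
      networks:
        - name: secondary                        # callout 2: must match the interface name
          multus:
            networkName: l2-network              # callout 3: NetworkAttachmentDefinition name
      nodeSelector:
        node-role.kubernetes.io/worker: ''       # callout 4: recommended node selector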
Apply the VirtualMachine manifest:

$ oc apply -f <filename>.yaml

- Optional: If you edited a running virtual machine, you must restart it for the changes to take effect.

9.6. Hot plugging secondary network interfaces
You can add or remove secondary network interfaces without stopping your virtual machine (VM). OpenShift Virtualization supports hot plugging and hot unplugging for secondary interfaces that use bridge binding and the VirtIO device driver.
9.6.1. VirtIO limitations
Each VirtIO interface uses one of the limited Peripheral Component Interconnect (PCI) slots in the VM. There are a total of 32 slots available. The PCI slots are also used by other devices and must be reserved in advance; therefore, slots might not be available on demand. OpenShift Virtualization reserves up to four slots for hot plugging interfaces. This includes any existing plugged network interfaces. For example, if your VM has two existing plugged interfaces, you can hot plug two more network interfaces.
The actual number of slots available for hot plugging also depends on the machine type. For example, the default PCI topology for the q35 machine type supports hot plugging one additional PCIe device. For more information on PCI topology and hot plug support, see the libvirt documentation.
If you restart the VM after hot plugging an interface, that interface becomes part of the standard network interfaces.
9.6.2. Hot plugging a secondary network interface by using the CLI
Hot plug a secondary network interface to a virtual machine (VM) while the VM is running.
Prerequisites
- A network attachment definition is configured in the same namespace as your VM.
- The VM to which you want to hot plug the network interface is running.
- You have installed the virtctl tool.
- You have permission to create and list VirtualMachineInstanceMigration objects.
- You have installed the OpenShift CLI (oc).
Procedure
Use your preferred text editor to edit the VirtualMachine manifest, as shown in the following example:

Example VM configuration

To attach the network interface to the running VM, live migrate the VM by running the following command:
$ virtctl migrate <vm_name>
Verification
Verify that the VM live migration is successful by using the following command:
$ oc get VirtualMachineInstanceMigration -w

Example output
Verify that the new interface is added to the VM by checking the VMI status:
$ oc get vmi vm-fedora -ojsonpath="{ @.status.interfaces }"

Example output
- 1
- The hot plugged interface appears in the VMI status.
9.6.3. Hot unplugging a secondary network interface by using the CLI
You can remove a secondary network interface from a running virtual machine (VM).
Hot unplugging is not supported for Single Root I/O Virtualization (SR-IOV) interfaces.
Prerequisites
- Your VM must be running.
- The VM must be created on a cluster running OpenShift Virtualization 4.14 or later.
- The VM must have a bridge network interface attached.
- You have permission to create and list VirtualMachineInstanceMigration objects.
- You have installed the OpenShift CLI (oc).
Procedure
Edit the VM specification to hot unplug a secondary network interface. Setting the interface state to absent detaches the network interface from the guest, but the interface still exists in the pod.

$ oc edit vm <vm_name>

Example VM configuration

- 1
- Set the interface state to absent to detach it from the running VM. Removing the interface details from the VM specification does not hot unplug the secondary network interface.
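The example configuration is not reproduced above. A minimal sketch of the relevant interface stanza, assuming a secondary interface named secondary that uses bridge binding, might look like this:

spec:
  template:
    spec:
      domain:
        devices:
          interfaces:
            - name: secondary
              bridge: {}
              state: absent    # callout 1: detaches the interface from the running VM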
Remove the interface from the pod by migrating the VM:
$ virtctl migrate <vm_name>

9.7. Connecting a virtual machine to a service mesh
OpenShift Virtualization is now integrated with OpenShift Service Mesh. You can monitor, visualize, and control traffic between pods that run virtual machine workloads on the default pod network with IPv4.
9.7.1. Adding a virtual machine to a service mesh
To add a virtual machine (VM) workload to a service mesh, enable automatic sidecar injection in the VM configuration file by setting the sidecar.istio.io/inject annotation to true. Then expose your VM as a service to view your application in the mesh.
To avoid port conflicts, do not use ports used by the Istio sidecar proxy. These include ports 15000, 15001, 15006, 15008, 15020, 15021, and 15090.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You installed the Service Mesh Operators.
- You created the Service Mesh control plane.
- You added the VM project to the Service Mesh member roll.
Procedure
Edit the VM configuration file to add the sidecar.istio.io/inject: "true" annotation:

Example configuration file
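The example configuration file is not reproduced here. A minimal sketch showing where the annotation goes, assuming a VM named vm-istio, might look like this:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: vm-istio
spec:
  template:
    metadata:
      labels:
        app: vm-istio
      annotations:
        sidecar.istio.io/inject: "true"   # enables automatic sidecar injection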
Apply the VM configuration:
$ oc apply -f <vm_name>.yaml

- 1
- The name of the virtual machine YAML file.
Create a Service object to expose your VM to the service mesh.

- 1
- The service selector that determines the set of pods targeted by a service. This attribute corresponds to the spec.metadata.labels field in the VM configuration file. In the above example, the Service object named vm-istio targets TCP port 8080 on any pod with the label app=vm-istio.
Create the service:
$ oc create -f <service_name>.yaml

- 1
- The name of the service YAML file.
9.8. Configuring a dedicated network for live migration
You can configure a dedicated secondary network for live migration. A dedicated network minimizes the effects of network saturation on tenant workloads during live migration.
9.8.1. Configuring a dedicated secondary network for live migration
To configure a dedicated secondary network for live migration, you must first create a bridge network attachment definition (NAD) by using the CLI. Then, you add the name of the NetworkAttachmentDefinition object to the HyperConverged custom resource (CR).
Prerequisites
- You installed the OpenShift CLI (oc).
- You logged in to the cluster as a user with the cluster-admin role.
- Each node has at least two Network Interface Cards (NICs).
- The NICs for live migration are connected to the same VLAN.
Procedure
Create a NetworkAttachmentDefinition manifest according to the following example:

Example configuration file

- 1
- Specify the name of the NetworkAttachmentDefinition object.
- 2
- Specify the name of the NIC to be used for live migration.
- 3
- Specify the name of the CNI plugin that provides the network for the NAD.
- 4
- Specify an IP address range for the secondary network. This range must not overlap the IP addresses of the main network.
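The example configuration file is not reproduced above. A minimal sketch that matches the callouts, assuming a macvlan-based NAD on NIC eth1 with the whereabouts IPAM plugin (all of these values are assumptions, not mandated by this procedure), might look like this, where callout 2 corresponds to master, callout 3 to type, and callout 4 to the ipam range:

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: my-secondary-network      # callout 1: NAD name
  namespace: openshift-cnv
spec:
  config: |-
    {
      "cniVersion": "0.3.1",
      "name": "migration-bridge",
      "type": "macvlan",
      "master": "eth1",
      "mode": "bridge",
      "ipam": {
        "type": "whereabouts",
        "range": "10.200.5.0/24"
      }
    }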
Open the HyperConverged CR in your default editor by running the following command:

$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
NetworkAttachmentDefinition
object to thespec.liveMigrationConfig
stanza of theHyperConverged
CR:Example
HyperConverged
manifestCopy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- Specify the name of the Multus NetworkAttachmentDefinition object to be used for live migrations.
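The example manifest is not reproduced above. A minimal sketch of the relevant stanza, assuming the NAD created earlier is named my-secondary-network, might look like this:

apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  liveMigrationConfig:
    network: my-secondary-network   # callout 1: Multus NAD used for live migrations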
- Save your changes and exit the editor. The virt-handler pods restart and connect to the secondary network.
Verification
When the node that the virtual machine runs on is placed into maintenance mode, the VM automatically migrates to another node in the cluster. You can verify that the migration occurred over the secondary network and not the default pod network by checking the target IP address in the virtual machine instance (VMI) metadata.
$ oc get vmi <vmi_name> -o jsonpath='{.status.migrationState.targetNodeAddress}'

9.8.2. Selecting a dedicated network by using the web console
You can select a dedicated network for live migration by using the Red Hat OpenShift Service on AWS web console.
Prerequisites
- You configured a Multus network for live migration.
- You created a network attachment definition for the network.
Procedure
- Navigate to Virtualization → Overview in the Red Hat OpenShift Service on AWS web console.
- Click the Settings tab and then click Live migration.
- Select the network from the Live migration network list.
9.9. Configuring and viewing IP addresses
You can configure an IP address when you create a virtual machine (VM). The IP address is provisioned with cloud-init.
You can view the IP address of a VM by using the Red Hat OpenShift Service on AWS web console or the command line. The network information is collected by the QEMU guest agent.
9.9.1. Configuring IP addresses for virtual machines
You can configure a static IP address when you create a virtual machine (VM) by using the web console or the command line.
You can configure a dynamic IP address when you create a VM by using the command line.
The IP address is provisioned with cloud-init.
9.9.1.1. Configuring an IP address when creating a virtual machine by using the CLI
You can configure a static or dynamic IP address when you create a virtual machine (VM). The IP address is provisioned with cloud-init.
If the VM is connected to the pod network, the pod network interface is the default route unless you update it.
Prerequisites
- The virtual machine is connected to a secondary network.
- You have a DHCP server available on the secondary network to configure a dynamic IP for the virtual machine.
Procedure
Edit the spec.template.spec.volumes.cloudInitNoCloud.networkData stanza of the virtual machine configuration:

To configure a dynamic IP address, specify the interface name and enable DHCP:

- 1
- Specify the interface name.
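The example stanza is not reproduced above. A minimal sketch for a dynamic address, assuming the secondary interface is eth1 (a hypothetical name), might look like this:

volumes:
  - name: cloudinitdisk
    cloudInitNoCloud:
      networkData: |
        version: 2
        ethernets:
          eth1:             # callout 1: interface name
            dhcp4: true     # request an IPv4 address over DHCP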
To configure a static IP, specify the interface name and the IP address:
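The example stanza is not reproduced above. A minimal sketch for a static address, assuming interface eth1 and the address 10.10.10.14/24 (both hypothetical), might look like this:

volumes:
  - name: cloudinitdisk
    cloudInitNoCloud:
      networkData: |
        version: 2
        ethernets:
          eth1:
            addresses:
              - 10.10.10.14/24    # static IP address for the secondary interface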
9.9.2. Viewing IP addresses of virtual machines
You can view the IP address of a VM by using the Red Hat OpenShift Service on AWS web console or the command line.
The network information is collected by the QEMU guest agent.
9.9.2.1. Viewing the IP address of a virtual machine by using the web console
You can view the IP address of a virtual machine (VM) by using the Red Hat OpenShift Service on AWS web console.
You must install the QEMU guest agent on a VM to view the IP address of a secondary network interface. A pod network interface does not require the QEMU guest agent.
Procedure
- In the Red Hat OpenShift Service on AWS console, click Virtualization → VirtualMachines from the side menu.
- Select a VM to open the VirtualMachine details page.
- Click the Details tab to view the IP address.
9.9.2.2. Viewing the IP address of a virtual machine by using the CLI
You can view the IP address of a virtual machine (VM) by using the command line.
You must install the QEMU guest agent on a VM to view the IP address of a secondary network interface. A pod network interface does not require the QEMU guest agent.
Prerequisites
- You have installed the OpenShift CLI (oc).
Procedure
Obtain the virtual machine instance configuration by running the following command:
$ oc describe vmi <vmi_name>

Example output
9.10. Managing MAC address pools for network interfaces
The KubeMacPool component allocates MAC addresses for virtual machine (VM) network interfaces from a shared MAC address pool. This ensures that each network interface is assigned a unique MAC address.
A virtual machine instance created from that VM retains the assigned MAC address across reboots.
KubeMacPool does not handle virtual machine instances created independently from a virtual machine.
9.10.1. Managing KubeMacPool by using the CLI
You can disable and re-enable KubeMacPool by using the command line.
KubeMacPool is enabled by default.
Prerequisites
- You have installed the OpenShift CLI (oc).
Procedure
To disable KubeMacPool in two namespaces, run the following command:
$ oc label namespace <namespace1> <namespace2> mutatevirtualmachines.kubemacpool.io=ignore

To re-enable KubeMacPool in two namespaces, run the following command:
$ oc label namespace <namespace1> <namespace2> mutatevirtualmachines.kubemacpool.io-

Chapter 10. Storage

10.1. Storage configuration overview
You can configure a default storage class, storage profiles, Containerized Data Importer (CDI), data volumes, and automatic boot source updates.
10.1.1. Storage
The following storage configuration tasks are mandatory:
- Configure storage profiles
- You must configure storage profiles if your storage provider is not recognized by CDI. A storage profile provides recommended storage settings based on the associated storage class.
The following storage configuration tasks are optional:
- Reserve additional PVC space for file system overhead
- By default, 5.5% of a file system PVC is reserved for overhead, reducing the space available for VM disks by that amount. You can configure a different overhead value.
- Configure local storage by using the hostpath provisioner
- You can configure local storage for virtual machines by using the hostpath provisioner (HPP). When you install the OpenShift Virtualization Operator, the HPP Operator is automatically installed.
- Configure user permissions to clone data volumes between namespaces
- You can configure RBAC roles to enable users to clone data volumes between namespaces.
10.1.2. Containerized Data Importer
You can perform the following Containerized Data Importer (CDI) configuration tasks:
- Override the resource request limits of a namespace
- You can configure CDI to import, upload, and clone VM disks into namespaces that are subject to CPU and memory resource restrictions.
- Configure CDI scratch space
- CDI requires scratch space (temporary storage) to complete some operations, such as importing and uploading VM images. During this process, CDI provisions a scratch space PVC equal to the size of the PVC backing the destination data volume (DV).
10.1.3. Data volumes
You can perform the following data volume configuration tasks:
- Enable preallocation for data volumes
- CDI can preallocate disk space to improve write performance when creating data volumes. You can enable preallocation for specific data volumes.
- Manage data volume annotations
- Data volume annotations allow you to manage pod behavior. You can add one or more annotations to a data volume, which then propagates to the created importer pods.
10.1.4. Boot source updates
You can perform the following boot source update configuration task:
- Manage automatic boot source updates
- Boot sources can make virtual machine (VM) creation more accessible and efficient for users. If automatic boot source updates are enabled, CDI imports, polls, and updates the images so that they are ready to be cloned for new VMs. By default, CDI automatically updates Red Hat boot sources. You can enable automatic updates for custom boot sources.
10.2. Configuring storage profiles
A storage profile provides recommended storage settings based on the associated storage class. A storage profile is allocated for each storage class.
The Containerized Data Importer (CDI) recognizes a storage provider if it has been configured to identify and interact with the storage provider’s capabilities.
For recognized storage types, the CDI provides values that optimize the creation of PVCs. You can also configure automatic settings for the storage class by customizing the storage profile. If the CDI does not recognize your storage provider, you must configure storage profiles.
10.2.1. Customizing the storage profile
You can specify default parameters by editing the StorageProfile
object for the provisioner’s storage class. These default parameters only apply to the persistent volume claim (PVC) if they are not configured in the DataVolume
object.
You cannot modify storage class parameters. To make changes, delete and re-create the storage class. You must then reapply any customizations that were previously made to the storage profile.
An empty status
section in a storage profile indicates that a storage provisioner is not recognized by the Containerized Data Importer (CDI). Customizing a storage profile is necessary if you have a storage provisioner that is not recognized by CDI. In this case, the administrator sets appropriate values in the storage profile to ensure successful allocations.
If you are creating a snapshot of a VM, a warning appears if the storage class of the disk has more than one VolumeSnapshotClass
associated with it. In this case, you must specify one volume snapshot class; otherwise, any disk that has more than one volume snapshot class is excluded from the snapshots list.
If you create a data volume and omit YAML attributes and these attributes are not defined in the storage profile, then the requested storage will not be allocated and the underlying persistent volume claim (PVC) will not be created.
Prerequisites
- You have installed the OpenShift CLI (oc).
- Ensure that your planned configuration is supported by the storage class and its provider. Specifying an incompatible configuration in a storage profile causes volume provisioning to fail.
Procedure
Edit the storage profile. In this example, the provisioner is not recognized by CDI.
$ oc edit storageprofile <storage_class>
Specify the accessModes and volumeMode values you want to configure for the storage profile. For example:
Example storage profile
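A minimal sketch, assuming a storage class named <unknown_provisioner_class> whose provisioner is not recognized by CDI:
apiVersion: cdi.kubevirt.io/v1beta1
kind: StorageProfile
metadata:
  name: <unknown_provisioner_class>
spec:
  claimPropertySets:
  - accessModes:
    - ReadWriteOnce
    volumeMode: Filesystem
status:
  provisioner: <unknown_provisioner>
  storageClass: <unknown_provisioner_class>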
10.2.1.1. Specifying a volume snapshot class by using the web console
If you are creating a snapshot of a VM, a warning appears if the storage class of the disk has more than one volume snapshot class associated with it. In this case, you must specify one volume snapshot class; otherwise, any disk that has more than one volume snapshot class is excluded from the snapshots list.
You can specify the default volume snapshot class in the Red Hat OpenShift Service on AWS web console.
Procedure
- From the Virtualization focused view, select Storage.
- Click VolumeSnapshotClasses.
- Select a volume snapshot class from the list.
- Click the Annotations pencil icon.
- Enter the following Key: snapshot.storage.kubernetes.io/is-default-class.
- Enter the following Value: true.
- Click Save.
10.2.1.2. Specifying a volume snapshot class by using the CLI
If you are creating a snapshot of a VM, a warning appears if the storage class of the disk has more than one volume snapshot class associated with it. In this case, you must specify one volume snapshot class; otherwise, any disk that has more than one volume snapshot class is excluded from the snapshots list.
You can select which volume snapshot class to use by either:
- Setting the spec.snapshotClass for the storage profile.
- Setting a default volume snapshot class.
Prerequisites
- You have installed the OpenShift CLI (oc).
Procedure
Set the VolumeSnapshotClass you want to use. For example:
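A minimal sketch, with the storage class and volume snapshot class names as placeholders:
apiVersion: cdi.kubevirt.io/v1beta1
kind: StorageProfile
metadata:
  name: <storage_class>
spec:
  snapshotClass: <volume_snapshot_class>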
Alternatively, set the default volume snapshot class by running the following command:
$ oc patch VolumeSnapshotClass ocs-storagecluster-cephfsplugin-snapclass --type=merge -p '{"metadata":{"annotations":{"snapshot.storage.kubernetes.io/is-default-class":"true"}}}'
10.2.1.3. Viewing automatically created storage profiles
The system creates storage profiles for each storage class automatically.
Prerequisites
- You have installed the OpenShift CLI (oc).
Procedure
To view the list of storage profiles, run the following command:
$ oc get storageprofile
To fetch the details of a particular storage profile, run the following command:
$ oc describe storageprofile <name>
Example storage profile details
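An abbreviated, illustrative excerpt of the output; the fields referenced by the notes below appear in the Status section:
Name:         <storage_class>
...
Status:
  Claim Property Sets:
    Access Modes:
      ReadWriteMany
    Volume Mode:   Block
  Clone Strategy:  csi-clone
  Data Import Cron Source Format:  snapshot
  Provisioner:     <provisioner>
  Storage Class:   <storage_class>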
1. Claim Property Sets is an ordered list of AccessMode/VolumeMode pairs, which describe the PVC modes that are used to provision VM disks.
2. The Clone Strategy line indicates the clone strategy to be used.
3. Data Import Cron Source Format indicates whether golden images on this storage are stored as PVCs or volume snapshots.
10.2.1.4. Setting a default cloning strategy by using a storage profile
You can use storage profiles to set a default cloning method for a storage class by creating a cloning strategy. Setting cloning strategies can be helpful, for example, if your storage vendor supports only certain cloning methods. It also allows you to select a method that limits resource usage or maximizes performance.
Cloning strategies are specified by setting the cloneStrategy
attribute in a storage profile to one of the following values:
- snapshot is used by default when snapshots are configured. The Containerized Data Importer (CDI) will use the snapshot method if it recognizes the storage provider and the provider supports Container Storage Interface (CSI) snapshots. This cloning strategy uses a temporary volume snapshot to clone the volume.
- copy uses a source pod and a target pod to copy data from the source volume to the target volume. Host-assisted cloning is the least efficient method of cloning.
- csi-clone uses the CSI clone API to efficiently clone an existing volume without using an interim volume snapshot. Unlike snapshot or copy, which are used by default if no storage profile is defined, CSI volume cloning is only used when you specify it in the StorageProfile object for the provisioner's storage class.
You can set clone strategies using the CLI without modifying the default claimPropertySets
in your YAML spec
section.
Example storage profile
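A minimal sketch that sets only the clone strategy, with placeholder names:
apiVersion: cdi.kubevirt.io/v1beta1
kind: StorageProfile
metadata:
  name: <provisioner_class>
spec:
  cloneStrategy: csi-clone
status:
  provisioner: <provisioner>
  storageClass: <provisioner_class>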
10.3. Managing automatic boot source updates
You can manage automatic updates for Red Hat boot sources and custom boot sources.
Boot sources can make virtual machine (VM) creation more accessible and efficient for users. If automatic boot source updates are enabled, the Containerized Data Importer (CDI) imports, polls, and updates the images so that they are ready to be cloned for new VMs. By default, CDI automatically updates Red Hat boot sources.
10.3.1. Managing Red Hat boot source updates
You can opt out of automatic updates for all system-defined boot sources by setting the enableCommonBootImageImport field value to false. If you set the value to false, all DataImportCron objects are deleted. This does not, however, remove previously imported boot source objects that store operating system images, though administrators can delete them manually.
When the enableCommonBootImageImport field value is set to false, DataSource objects are reset so that they no longer point to the original boot source. An administrator can manually provide a boot source by creating a new persistent volume claim (PVC) or volume snapshot for the DataSource object, and then populating it with an operating system image.
10.3.1.1. Managing automatic updates for all system-defined boot sources
Disabling automatic boot source imports and updates can lower resource usage. In disconnected environments, disabling automatic boot source updates prevents CDIDataImportCronOutdated
alerts from filling up logs.
To disable automatic updates for all system-defined boot sources, set the enableCommonBootImageImport
field value to false
. Setting this value to true
turns automatic updates back on.
Custom boot sources are not affected by this setting.
Prerequisites
- You have installed the OpenShift CLI (oc).
Procedure
Enable or disable automatic boot source updates by editing the HyperConverged custom resource (CR).
To disable automatic boot source updates, set the spec.enableCommonBootImageImport field value in the HyperConverged CR to false. For example:
$ oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type json -p '[{"op": "replace", "path": "/spec/enableCommonBootImageImport", "value": false}]'
To re-enable automatic boot source updates, set the spec.enableCommonBootImageImport field value in the HyperConverged CR to true. For example:
$ oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type json -p '[{"op": "replace", "path": "/spec/enableCommonBootImageImport", "value": true}]'
10.3.2. Managing custom boot source updates
Custom boot sources that are not provided by OpenShift Virtualization are not controlled by the feature gate. You must manage them individually by editing the HyperConverged
custom resource (CR).
You must configure a storage profile. Otherwise, the cluster cannot receive automated updates for custom boot sources. See Configure storage profiles for details.
10.3.2.1. Configuring the default and virt-default storage classes
A storage class determines how persistent storage is provisioned for workloads. In OpenShift Virtualization, the virt-default storage class takes precedence over the cluster default storage class and is used specifically for virtualization workloads. Only one storage class should be set as virt-default or cluster default at a time. If multiple storage classes are marked as default, the virt-default storage class overrides the cluster default. To ensure consistent behavior, configure only one storage class as the default for virtualization workloads.
Boot sources are created using the default storage class. When the default storage class changes, old boot sources are automatically updated using the new default storage class. If your cluster does not have a default storage class, you must define one.
If boot source images were stored as volume snapshots and both the cluster default and virt-default storage classes have been unset, the volume snapshots are cleaned up and new data volumes are created. However, the newly created data volumes do not start importing until a default storage class is set.
Prerequisites
- You have installed the OpenShift CLI (oc).
Procedure
Patch the current virt-default or a cluster default storage class to false:
Identify all storage classes currently marked as virt-default by running the following command:
$ oc get sc -o json | jq '.items[].metadata|select(.annotations."storageclass.kubevirt.io/is-default-virt-class"=="true")|.name'
For each storage class returned, remove the virt-default annotation by running the following command:
$ oc patch storageclass <storage_class_name> -p '{"metadata": {"annotations": {"storageclass.kubevirt.io/is-default-virt-class": "false"}}}'
Identify all storage classes currently marked as cluster default by running the following command:
$ oc get sc -o json | jq '.items[].metadata|select(.annotations."storageclass.kubernetes.io/is-default-class"=="true")|.name'
For each storage class returned, remove the cluster default annotation by running the following command:
$ oc patch storageclass <storage_class_name> -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'
Set a new default storage class:
Assign the virt-default role to a storage class by running the following command:
$ oc patch storageclass <storage_class_name> -p '{"metadata": {"annotations": {"storageclass.kubevirt.io/is-default-virt-class": "true"}}}'
Alternatively, assign the cluster default role to a storage class by running the following command:
$ oc patch storageclass <storage_class_name> -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
10.3.2.2. Configuring a storage class for boot source images
You can configure a specific storage class in the HyperConverged
resource.
To ensure stable behavior and avoid unnecessary re-importing, you can specify the storageClassName
in the dataImportCronTemplates
section of the HyperConverged
resource.
Prerequisites
- You have installed the OpenShift CLI (oc).
Procedure
Open the HyperConverged CR in your default editor by running the following command:
$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
Add the dataImportCronTemplate to the spec section of the HyperConverged resource and set the storageClassName:
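A minimal sketch, with the boot source name, data source name, registry URL, storage class, and size as placeholders:
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
spec:
  dataImportCronTemplates:
  - metadata:
      name: rhel9-image-cron
    spec:
      schedule: "0 */12 * * *"
      template:
        spec:
          source:
            registry:
              url: docker://<registry_image_url>
          storage:
            storageClassName: <storage_class>
            resources:
              requests:
                storage: 10Gi
      managedDataSource: <data_source>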
For the custom image to be detected as an available boot source, the value of the `spec.dataVolumeTemplates.spec.sourceRef.name` parameter in the VM template must match this value.
- Wait for the HyperConverged Operator (HCO) and Scheduling, Scale, and Performance (SSP) resources to complete reconciliation.
Delete any outdated DataVolume and VolumeSnapshot objects from the openshift-virtualization-os-images namespace by running the following command:
$ oc delete DataVolume,VolumeSnapshot -n openshift-virtualization-os-images --selector=cdi.kubevirt.io/dataImportCron
Wait for all DataSource objects to reach a "Ready - True" status. Data sources can reference either a PersistentVolumeClaim (PVC) or a VolumeSnapshot. To check the expected source format, run the following command:
$ oc get storageprofile <storage_class_name> -o json | jq .status.dataImportCronSourceFormat
10.3.2.3. Enabling automatic updates for custom boot sources
OpenShift Virtualization automatically updates system-defined boot sources by default, but does not automatically update custom boot sources. You must manually enable automatic updates by editing the HyperConverged custom resource (CR).
Prerequisites
- The cluster has a default storage class.
- You have installed the OpenShift CLI (oc).
Procedure
Open the HyperConverged CR in your default editor by running the following command:
$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
Edit the HyperConverged CR, adding the appropriate template and boot source in the dataImportCronTemplates section. For example:
Example custom resource
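A minimal sketch, assuming a CentOS Stream 9 container disk as the custom boot source; the names, registry URL, schedule, and storage size are illustrative:
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
spec:
  dataImportCronTemplates:
  - metadata:
      name: centos-stream9-image-cron
      annotations:
        cdi.kubevirt.io/storage.bind.immediate.requested: "true"   # 1
    spec:
      schedule: "0 */12 * * *"   # 2
      template:
        spec:
          source:
            registry:   # 3
              url: docker://quay.io/containerdisks/centos-stream:9
          storage:
            resources:
              requests:
                storage: 30Gi
      managedDataSource: centos-stream9   # 4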
1. This annotation is required for storage classes with volumeBindingMode set to WaitForFirstConsumer.
2. Schedule for the job specified in cron format.
3. Use to create a data volume from a registry source. Use the default pod pullMethod and not node pullMethod, which is based on the node docker cache. The node docker cache is useful when a registry image is available via Container.Image, but the CDI importer is not authorized to access it.
4. For the custom image to be detected as an available boot source, the name of the image's managedDataSource must match the name of the template's DataSource, which is found under spec.dataVolumeTemplates.spec.sourceRef.name.
- Save the file.
10.3.2.4. Enabling volume snapshot boot sources
Enable volume snapshot boot sources by setting the parameter in the StorageProfile
associated with the storage class that stores operating system base images. Although DataImportCron
was originally designed to maintain only PVC sources, VolumeSnapshot
sources scale better than PVC sources for certain storage types.
Use volume snapshots on a storage profile that is proven to scale better when cloning from a single snapshot.
Prerequisites
- You must have access to a volume snapshot with the operating system image.
- The storage must support snapshotting.
- You have installed the OpenShift CLI (oc).
Procedure
Open the storage profile object that corresponds to the storage class used to provision boot sources by running the following command:
$ oc edit storageprofile <storage_class>
- Review the dataImportCronSourceFormat specification of the StorageProfile to confirm whether the VM uses a PVC or a volume snapshot by default.
Edit the storage profile, if needed, by updating the dataImportCronSourceFormat specification to snapshot.
Example storage profile
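A minimal sketch, with the storage class name as a placeholder:
apiVersion: cdi.kubevirt.io/v1beta1
kind: StorageProfile
metadata:
  name: <storage_class>
spec:
  dataImportCronSourceFormat: snapshot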
Verification
Open the storage profile object that corresponds to the storage class used to provision boot sources.
$ oc get storageprofile <storage_class> -o yaml
- Confirm that the dataImportCronSourceFormat specification of the StorageProfile is set to snapshot, and that any DataSource objects that the DataImportCron points to now reference volume snapshots.
You can now use these boot sources to create virtual machines.
10.3.3. Disabling automatic updates for a single boot source
You can disable automatic updates for an individual boot source, whether it is custom or system-defined, by editing the HyperConverged
custom resource (CR).
Prerequisites
- You have installed the OpenShift CLI (oc).
Procedure
Open the HyperConverged CR in your default editor by running the following command:
$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
Disable automatic updates for an individual boot source by editing the spec.dataImportCronTemplates field.
- Custom boot source
  - Remove the boot source from the spec.dataImportCronTemplates field. Automatic updates are disabled for custom boot sources by default.
- System-defined boot source
  - Add the boot source to spec.dataImportCronTemplates.
    Note: Automatic updates are enabled by default for system-defined boot sources, but these boot sources are not listed in the CR unless you add them.
  - Set the value of the dataimportcrontemplate.kubevirt.io/enable annotation to 'false'.
For example:
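A minimal sketch, assuming the system-defined rhel9 boot source is the one being disabled; the registry URL and storage size are placeholders:
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
spec:
  dataImportCronTemplates:
  - metadata:
      name: rhel9-image-cron
      annotations:
        dataimportcrontemplate.kubevirt.io/enable: 'false'
    spec:
      schedule: "0 */12 * * *"
      template:
        spec:
          source:
            registry:
              url: docker://<registry_image_url>
          storage:
            resources:
              requests:
                storage: 10Gi
      managedDataSource: rhel9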
- Save the file.
10.3.4. Verifying the status of a boot source
You can determine if a boot source is system-defined or custom by viewing the HyperConverged
custom resource (CR).
Prerequisites
- You have installed the OpenShift CLI (oc).
Procedure
View the contents of the HyperConverged CR by running the following command:
$ oc get hyperconverged kubevirt-hyperconverged -n openshift-cnv -o yaml
Example output
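An abbreviated, illustrative excerpt of the status section, showing one system-defined and one custom boot source:
status:
  dataImportCronTemplates:
  - metadata:
      name: centos-stream9-image-cron
    spec:
      ...
    status:
      commonTemplate: true
  - metadata:
      name: user-defined-dic
    spec:
      ...
    status: {}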
Verify the status of the boot source by reviewing the status.dataImportCronTemplates.status field.
- If the field contains commonTemplate: true, it is a system-defined boot source.
- If the status.dataImportCronTemplates.status field has the value {}, it is a custom boot source.
10.4. Reserving PVC space for file system overhead
When you add a virtual machine disk to a persistent volume claim (PVC) that uses the Filesystem
volume mode, you must ensure that there is enough space on the PVC for the VM disk and for file system overhead, such as metadata.
By default, OpenShift Virtualization reserves 5.5% of the PVC space for overhead, reducing the space available for virtual machine disks by that amount.
You can configure a different overhead value by editing the HCO
object. You can change the value globally and you can specify values for specific storage classes.
10.4.1. Overriding the default file system overhead value
Change the amount of persistent volume claim (PVC) space that the OpenShift Virtualization reserves for file system overhead by editing the spec.filesystemOverhead
attribute of the HCO
object.
Prerequisites
- Install the OpenShift CLI (oc).
Procedure
Open the HCO object for editing by running the following command:
$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
Edit the spec.filesystemOverhead fields, populating them with your chosen values:
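A minimal sketch, using the values from the explanations that follow:
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
spec:
  filesystemOverhead:
    global: "0.07"   # 1
    storageClass:
      mystorageclass: "0.04"   # 2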
1. The default file system overhead percentage used for any storage classes that do not already have a set value. For example, global: "0.07" reserves 7% of the PVC for file system overhead.
2. The file system overhead percentage for the specified storage class. For example, mystorageclass: "0.04" changes the default overhead value for PVCs in the mystorageclass storage class to 4%.
- Save and exit the editor to update the HCO object.
Verification
View the CDIConfig status and verify your changes by running one of the following commands:
To generally verify changes to CDIConfig:
$ oc get cdiconfig -o yaml
To view your specific changes to CDIConfig:
$ oc get cdiconfig -o jsonpath='{.items..status.filesystemOverhead}'
10.5. Configuring local storage by using the hostpath provisioner
You can configure local storage for virtual machines by using the hostpath provisioner (HPP).
When you install the OpenShift Virtualization Operator, the Hostpath Provisioner Operator is automatically installed. HPP is a local storage provisioner designed for OpenShift Virtualization that is created by the Hostpath Provisioner Operator. To use HPP, you create an HPP custom resource (CR) with a basic storage pool.
10.5.1. Creating a hostpath provisioner with a basic storage pool
You configure a hostpath provisioner (HPP) with a basic storage pool by creating an HPP custom resource (CR) with a storagePools
stanza. The storage pool specifies the name and path used by the CSI driver.
Do not create storage pools in the same partition as the operating system. Otherwise, the operating system partition might become filled to capacity, which will impact performance or cause the node to become unstable or unusable.
Prerequisites
- The directories specified in spec.storagePools.path must have read/write access.
- You have installed the OpenShift CLI (oc).
Procedure
Create an hpp_cr.yaml file with a storagePools stanza as in the following example, then save the file and exit.
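A minimal sketch, assuming a storage pool named any_name backed by the node path /var/myvolumes:
apiVersion: hostpathprovisioner.kubevirt.io/v1beta1
kind: HostPathProvisioner
metadata:
  name: hostpath-provisioner
spec:
  imagePullPolicy: IfNotPresent
  storagePools:
  - name: any_name
    path: "/var/myvolumes"
  workload:
    nodeSelector:
      kubernetes.io/os: linux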
Create the HPP by running the following command:
$ oc create -f hpp_cr.yaml
10.5.1.1. About creating storage classes
When you create a storage class, you set parameters that affect the dynamic provisioning of persistent volumes (PVs) that belong to that storage class. You cannot update a StorageClass
object’s parameters after you create it.
In order to use the hostpath provisioner (HPP) you must create an associated storage class for the CSI driver with the storagePools
stanza.
Virtual machines use data volumes that are based on local PVs. Local PVs are bound to specific nodes. While the disk image is prepared for consumption by the virtual machine, it is possible that the virtual machine cannot be scheduled to the node where the local storage PV was previously pinned.
To solve this problem, use the Kubernetes pod scheduler to bind the persistent volume claim (PVC) to a PV on the correct node. By using the StorageClass
value with volumeBindingMode
parameter set to WaitForFirstConsumer
, the binding and provisioning of the PV is delayed until a pod is created using the PVC.
10.5.1.2. Creating a storage class for the CSI driver with the storagePools stanza
To use the hostpath provisioner (HPP) you must create an associated storage class for the Container Storage Interface (CSI) driver.
When you create a storage class, you set parameters that affect the dynamic provisioning of persistent volumes (PVs) that belong to that storage class. You cannot update a StorageClass
object’s parameters after you create it.
Virtual machines use data volumes that are based on local PVs. Local PVs are bound to specific nodes. While a disk image is prepared for consumption by the virtual machine, it is possible that the virtual machine cannot be scheduled to the node where the local storage PV was previously pinned.
To solve this problem, use the Kubernetes pod scheduler to bind the persistent volume claim (PVC) to a PV on the correct node. By using the StorageClass
value with volumeBindingMode
parameter set to WaitForFirstConsumer
, the binding and provisioning of the PV is delayed until a pod is created using the PVC.
Procedure
Create a storageclass_csi.yaml file to define the storage class:
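A minimal sketch, assuming the storage class is named hostpath-csi and references a storage pool named my-storage-pool from the HPP CR:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hostpath-csi
provisioner: kubevirt.io.hostpath-provisioner
reclaimPolicy: Delete   # 1
volumeBindingMode: WaitForFirstConsumer   # 2
parameters:
  storagePool: my-storage-pool   # 3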
1. The two possible reclaimPolicy values are Delete and Retain. If you do not specify a value, the default value is Delete.
2. The volumeBindingMode parameter determines when dynamic provisioning and volume binding occur. Specify WaitForFirstConsumer to delay the binding and provisioning of a persistent volume (PV) until after a pod that uses the persistent volume claim (PVC) is created. This ensures that the PV meets the pod's scheduling requirements.
3. Specify the name of the storage pool defined in the HPP CR.
- Save the file and exit.
Create the StorageClass object by running the following command:
$ oc create -f storageclass_csi.yaml
10.5.2. About storage pools created with PVC templates
If you have a single, large persistent volume (PV), you can create a storage pool by defining a PVC template in the hostpath provisioner (HPP) custom resource (CR).
A storage pool created with a PVC template can contain multiple HPP volumes. Splitting a PV into smaller volumes provides greater flexibility for data allocation.
The PVC template is based on the spec stanza of the PersistentVolumeClaim object:
Example PersistentVolumeClaim object
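A minimal sketch, with placeholder names and sizes:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: iso-pvc
spec:
  volumeMode: Block   # 1
  storageClassName: my-storage-class
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi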
1. This value is only required for block volume mode PVs.
You define a storage pool using a pvcTemplate
specification in the HPP CR. The Operator creates a PVC from the pvcTemplate
specification for each node containing the HPP CSI driver. The PVC created from the PVC template consumes the single large PV, allowing the HPP to create smaller dynamic volumes.
You can combine basic storage pools with storage pools created from PVC templates.
10.5.2.1. Creating a storage pool with a PVC template
You can create a storage pool for multiple hostpath provisioner (HPP) volumes by specifying a PVC template in the HPP custom resource (CR).
Do not create storage pools in the same partition as the operating system. Otherwise, the operating system partition might become filled to capacity, which will impact performance or cause the node to become unstable or unusable.
Prerequisites
- The directories specified in spec.storagePools.path must have read/write access.
- You have installed the OpenShift CLI (oc).
Procedure
Create an hpp_pvc_template_pool.yaml file for the HPP CR that specifies a persistent volume claim (PVC) template in the storagePools stanza according to the following example:
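A minimal sketch, with placeholder names, paths, and sizes; the numbered comments correspond to the explanations that follow:
apiVersion: hostpathprovisioner.kubevirt.io/v1beta1
kind: HostPathProvisioner
metadata:
  name: hostpath-provisioner
spec:
  imagePullPolicy: IfNotPresent
  storagePools:   # 1
  - name: my-storage-pool
    path: "/var/myvolumes"   # 2
    pvcTemplate:
      volumeMode: Block   # 3
      storageClassName: my-storage-class   # 4
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 5Gi   # 5
  workload:
    nodeSelector:
      kubernetes.io/os: linux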
1. The storagePools stanza is an array that can contain both basic and PVC template storage pools.
2. Specify the storage pool directories under this node path.
3. Optional: The volumeMode parameter can be either Block or Filesystem as long as it matches the provisioned volume format. If no value is specified, the default is Filesystem. If the volumeMode is Block, the mounting pod creates an XFS file system on the block volume before mounting it.
4. If the storageClassName parameter is omitted, the default storage class is used to create PVCs. If you omit storageClassName, ensure that the HPP storage class is not the default storage class.
5. You can specify statically or dynamically provisioned storage. In either case, ensure the requested storage size is appropriate for the volume you want to virtually divide or the PVC cannot be bound to the large PV. If the storage class you are using uses dynamically provisioned storage, pick an allocation size that matches the size of a typical request.
- Save the file and exit.
Create the HPP with a storage pool by running the following command:
$ oc create -f hpp_pvc_template_pool.yaml
10.6. Enabling user permissions to clone data volumes across namespaces
The isolating nature of namespaces means that users cannot by default clone resources between namespaces.
To enable a user to clone a virtual machine to another namespace, a user with the cluster-admin
role must create a new cluster role. Bind this cluster role to a user to enable them to clone virtual machines to the destination namespace.
10.6.1. Creating RBAC resources for cloning data volumes
Create a new cluster role that enables permissions for all actions for the datavolumes
resource.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You must have cluster admin privileges.
If you are a non-admin user that is an administrator for both the source and target namespaces, you can create a Role
instead of a ClusterRole
where appropriate.
Procedure
Create a ClusterRole manifest:
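A minimal sketch, assuming the cluster role is named datavolume-cloner and targets the data volume source subresource:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: datavolume-cloner   # 1
rules:
- apiGroups: ["cdi.kubevirt.io"]
  resources: ["datavolumes/source"]
  verbs: ["*"]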
1. Unique name for the cluster role.
Create the cluster role in the cluster:
$ oc create -f <datavolume-cloner.yaml>
where <datavolume-cloner.yaml> is the file name of the ClusterRole manifest created in the previous step.
Create a RoleBinding manifest that applies to both the source and destination namespaces and references the cluster role created in the previous step.
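A minimal sketch, assuming the user acts through the default service account of the destination namespace; the names and namespaces are placeholders:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: <role_binding_name>
  namespace: <source_namespace>
subjects:
- kind: ServiceAccount
  name: default
  namespace: <destination_namespace>
roleRef:
  kind: ClusterRole
  name: datavolume-cloner
  apiGroup: rbac.authorization.k8s.io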
Create the role binding in the cluster:
$ oc create -f <datavolume-cloner.yaml>
where <datavolume-cloner.yaml> is the file name of the RoleBinding manifest created in the previous step.
10.7. Configuring CDI to override CPU and memory quotas
You can configure the Containerized Data Importer (CDI) to import, upload, and clone virtual machine disks into namespaces that are subject to CPU and memory resource restrictions.
10.7.1. About CPU and memory quotas in a namespace
A resource quota, defined by the ResourceQuota
object, imposes restrictions on a namespace that limit the total amount of compute resources that can be consumed by resources within that namespace.
The HyperConverged
custom resource (CR) defines the user configuration for the Containerized Data Importer (CDI). The CPU and memory request and limit values are set to a default value of 0
. This ensures that pods created by CDI that do not specify compute resource requirements are given the default values and are allowed to run in a namespace that is restricted with a quota.
10.7.2. Overriding CPU and memory defaults
Modify the default settings for CPU and memory requests and limits for your use case by adding the spec.resourceRequirements.storageWorkloads
stanza to the HyperConverged
custom resource (CR).
Prerequisites
- Install the OpenShift CLI (oc).
Procedure
Edit the HyperConverged CR by running the following command:
$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
Add the spec.resourceRequirements.storageWorkloads stanza to the CR, setting the values based on your use case. For example:
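A minimal sketch with illustrative request and limit values:
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
spec:
  resourceRequirements:
    storageWorkloads:
      limits:
        cpu: "500m"
        memory: "2Gi"
      requests:
        cpu: "250m"
        memory: "1Gi"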
- Save and exit the editor to update the HyperConverged CR.
10.8. Preparing CDI scratch space
10.8.1. About scratch space
The Containerized Data Importer (CDI) requires scratch space (temporary storage) to complete some operations, such as importing and uploading virtual machine images. During this process, CDI provisions a scratch space PVC equal to the size of the PVC backing the destination data volume (DV). The scratch space PVC is deleted after the operation completes or aborts.
You can define the storage class that is used to bind the scratch space PVC in the spec.scratchSpaceStorageClass
field of the HyperConverged
custom resource.
If the defined storage class does not match a storage class in the cluster, then the default storage class defined for the cluster is used. If there is no default storage class defined in the cluster, the storage class used to provision the original DV or PVC is used.
CDI requires requesting scratch space with a file
volume mode, regardless of the PVC backing the origin data volume. If the origin PVC is backed by block
volume mode, you must define a storage class capable of provisioning file
volume mode PVCs.
Manual provisioning
If there are no storage classes, CDI uses any PVCs in the project that match the size requirements for the image. If there are no PVCs that match these requirements, the CDI import pod remains in a Pending state until an appropriate PVC is made available or until a timeout function kills the pod.
10.8.2. CDI operations that require scratch space
Type | Reason |
---|---|
Registry imports | CDI must download the image to a scratch space and extract the layers to find the image file. The image file is then passed to QEMU-IMG for conversion to a raw disk. |
Upload image | QEMU-IMG does not accept input from STDIN. Instead, the image to upload is saved in scratch space before it can be passed to QEMU-IMG for conversion. |
HTTP imports of archived images | QEMU-IMG does not know how to handle the archive formats CDI supports. Instead, the image is unarchived and saved into scratch space before it is passed to QEMU-IMG. |
HTTP imports of authenticated images | QEMU-IMG inadequately handles authentication. Instead, the image is saved to scratch space and authenticated before it is passed to QEMU-IMG. |
HTTP imports of custom certificates | QEMU-IMG inadequately handles custom certificates of HTTPS endpoints. Instead, CDI downloads the image to scratch space before passing the file to QEMU-IMG. |
10.8.3. Defining a storage class
You can define the storage class that the Containerized Data Importer (CDI) uses when allocating scratch space by adding the spec.scratchSpaceStorageClass
field to the HyperConverged
custom resource (CR).
Prerequisites
- Install the OpenShift CLI (oc).
Procedure
Edit the HyperConverged CR by running the following command:
$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
Add the spec.scratchSpaceStorageClass field to the CR, setting the value to the name of a storage class that exists in the cluster:
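A minimal sketch, with the storage class name as a placeholder:
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
spec:
  scratchSpaceStorageClass: "<storage_class>"   # 1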
1. If you do not specify a storage class, CDI uses the storage class of the persistent volume claim that is being populated.
- Save and exit your default editor to update the HyperConverged CR.
10.8.4. CDI supported operations matrix
This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space.
Content types | HTTP | HTTPS | HTTP basic auth | Registry | Upload |
---|---|---|---|---|---|
KubeVirt (QCOW2) |
✓ QCOW2 |
✓ QCOW2** |
✓ QCOW2 |
✓ QCOW2* |
✓ QCOW2* |
KubeVirt (RAW) |
✓ RAW |
✓ RAW |
✓ RAW |
✓ RAW* |
✓ RAW* |
✓ Supported operation
□ Unsupported operation
* Requires scratch space
** Requires scratch space if a custom certificate authority is required
10.9. Using preallocation for data volumes
The Containerized Data Importer can preallocate disk space to improve write performance when creating data volumes.
You can enable preallocation for specific data volumes.
10.9.1. About preallocation
The Containerized Data Importer (CDI) can use the QEMU preallocate mode for data volumes to improve write performance. You can use preallocation mode for importing and uploading operations and when creating blank data volumes.
If preallocation is enabled, CDI uses the better preallocation method depending on the underlying file system and device type:
fallocate
- If the file system supports it, CDI uses the operating system's fallocate call to preallocate space by using the posix_fallocate function, which allocates blocks and marks them as uninitialized.
full
- If fallocate mode cannot be used, full mode allocates space for the image by writing data to the underlying storage. Depending on the storage location, all the empty allocated space might be zeroed.
10.9.2. Enabling preallocation for a data volume
You can enable preallocation for specific data volumes by including the spec.preallocation field in the data volume manifest. You can enable preallocation mode in either the web console or by using the OpenShift CLI (oc).
Preallocation mode is supported for all CDI source types.
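A minimal sketch of a data volume manifest with preallocation enabled; the name, registry URL, and size are placeholders:
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: preallocated-datavolume
spec:
  source:
    registry:
      url: docker://<registry_image_url>
  preallocation: true
  storage:
    resources:
      requests:
        storage: 10Gi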
10.10. Managing data volume annotations
Data volume (DV) annotations allow you to manage pod behavior. You can add one or more annotations to a data volume, which then propagates to the created importer pods.
10.10.1. Example: Data volume annotations
This example shows how you can configure data volume (DV) annotations to control which network the importer pod uses. The v1.multus-cni.io/default-network: bridge-network
annotation causes the pod to use the multus network named bridge-network
as its default network. If you want the importer pod to use both the default network from the cluster and the secondary multus network, use the k8s.v1.cni.cncf.io/networks: <network_name>
annotation.
Multus network annotation example
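A minimal sketch of a data volume that carries the annotation; the data volume name and source URL are placeholders:
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: datavolume-example
  annotations:
    v1.multus-cni.io/default-network: bridge-network   # 1
spec:
  source:
    http:
      url: <http_source_url>
  storage:
    resources:
      requests:
        storage: 1Gi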
1. Multus network annotation
Chapter 11. Live migration
11.1. About live migration
Live migration is the process of moving a running virtual machine (VM) to another node in the cluster without interrupting the virtual workload. Live migration enables smooth transitions during cluster upgrades or any time a node needs to be drained for maintenance or configuration changes.
By default, live migration traffic is encrypted using Transport Layer Security (TLS).
11.1.1. Live migration requirements
Live migration has the following requirements:
- The cluster must have shared storage with ReadWriteMany (RWX) access mode.
- The cluster must have sufficient RAM and network bandwidth.
  Note: You must ensure that there is enough memory request capacity in the cluster to support node drains that result in live migrations. You can determine the approximate required spare memory by using the following calculation:
  Product of (Maximum number of nodes that can drain in parallel) and (Highest total VM memory request allocations across nodes)
  The default number of migrations that can run in parallel in the cluster is 5.
- If a VM uses a host model CPU, the nodes must support the CPU.
- Configuring a dedicated Multus network for live migration is highly recommended. A dedicated network minimizes the effects of network saturation on tenant workloads during migration.
11.1.2. About live migration permissions
In OpenShift Virtualization 4.19 and later, live migration operations are restricted to users who are explicitly granted the kubevirt.io:migrate
cluster role. Users with this role can create, delete, and update virtual machine (VM) live migration requests, which are represented by VirtualMachineInstanceMigration
(VMIM) custom resources.
Cluster administrators can bind the kubevirt.io:migrate
role to trusted users or groups at either the namespace or cluster level.
Before OpenShift Virtualization 4.19, namespace administrators had live migration permissions by default. This behavior changed in version 4.19 to prevent unintended or malicious disruptions to infrastructure-critical migration operations.
As a cluster administrator, you can preserve the old behavior by creating a temporary cluster role before updating. After assigning the new role to users, delete the temporary role to enforce the more restrictive permissions. If you have already updated, you can still revert to the old behavior by aggregating the kubevirt.io:migrate
role into the admin
cluster role.
11.1.3. Preserving pre-4.19 live migration permissions during update
Before you update to OpenShift Virtualization 4.19, you can create a temporary cluster role to preserve the previous live migration permissions until you are ready for the more restrictive default permissions to take effect.
Prerequisites
- The OpenShift CLI (oc) is installed.
- You have cluster administrator permissions.
Procedure
Before updating to OpenShift Virtualization 4.19, create a temporary ClusterRole object. For example:
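A sketch of the temporary cluster role, using the kubevirt.io:upgrademigrate name that is deleted later in this procedure; the exact rules are an assumption and should mirror the permissions granted by the kubevirt.io:migrate role:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kubevirt.io:upgrademigrate
  labels:
    rbac.authorization.k8s.io/aggregate-to-admin: "true"   # 1
rules:
- apiGroups:
  - kubevirt.io
  resources:
  - virtualmachineinstancemigrations
  verbs:
  - get
  - list
  - watch
  - create
  - delete
  - update
  - patch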
1. This cluster role is aggregated into the admin role before you update OpenShift Virtualization. The update process does not modify it, ensuring the previous behavior is maintained.
Add the cluster role manifest to the cluster by running the following command:
$ oc apply -f <cluster_role_file_name>.yaml
- Update OpenShift Virtualization to version 4.19.
Bind the kubevirt.io:migrate cluster role to trusted users or groups by running one of the following commands, replacing <namespace>, <first_user>, <second_user>, and <group_name> with your own values.
To bind the role at the namespace level, run the following command:
$ oc create -n <namespace> rolebinding kvmigrate --clusterrole=kubevirt.io:migrate --user=<first_user> --user=<second_user> --group=<group_name>
To bind the role at the cluster level, run the following command:
$ oc create clusterrolebinding kvmigrate --clusterrole=kubevirt.io:migrate --user=<first_user> --user=<second_user> --group=<group_name>
When you have bound the kubevirt.io:migrate role to all necessary users, delete the temporary ClusterRole object by running the following command:
$ oc delete clusterrole kubevirt.io:upgrademigrate
After you delete the temporary cluster role, only users with the kubevirt.io:migrate role can create, delete, and update live migration requests.
11.1.4. Granting live migration permissions
Grant trusted users or groups the ability to create, delete, and update live migration instances.
Prerequisites
- The OpenShift CLI (oc) is installed.
- You have cluster administrator permissions.
Procedure
(Optional) To change the default behavior so that namespace administrators always have permission to create, delete, and update live migrations, aggregate the kubevirt.io:migrate role into the admin cluster role by running the following command:
$ oc label --overwrite clusterrole kubevirt.io:migrate rbac.authorization.k8s.io/aggregate-to-admin=true
Bind the kubevirt.io:migrate cluster role to trusted users or groups by running one of the following commands, replacing <namespace>, <first_user>, <second_user>, and <group_name> with your own values.
To bind the role at the namespace level, run the following command:
$ oc create -n <namespace> rolebinding kvmigrate --clusterrole=kubevirt.io:migrate --user=<first_user> --user=<second_user> --group=<group_name>
To bind the role at the cluster level, run the following command:
$ oc create clusterrolebinding kvmigrate --clusterrole=kubevirt.io:migrate --user=<first_user> --user=<second_user> --group=<group_name>
11.1.5. VM migration tuning
You can adjust your cluster-wide live migration settings based on the type of workload and migration scenario. This enables you to control how many VMs migrate at the same time, the network bandwidth you want to use for each migration, and how long OpenShift Virtualization attempts to complete the migration before canceling the process. Configure these settings in the HyperConverged
custom resource (CR).
If you are migrating multiple VMs per node at the same time, set a bandwidthPerMigration
limit to prevent a large or busy VM from using a large portion of the node’s network bandwidth. By default, the bandwidthPerMigration
value is 0
, which means unlimited.
A large VM running a heavy workload (for example, database processing), with higher memory dirty rates, requires a higher bandwidth to complete the migration.
Post copy mode, when enabled, triggers if the initial pre-copy phase does not complete within the defined timeout. During post copy, the VM CPUs pause on the source host while transferring the minimum required memory pages. Then the VM CPUs activate on the destination host, and the remaining memory pages transfer into the destination node at runtime. This can impact performance during the transfer.
Post copy mode should not be used for critical data, or with unstable networks.
11.1.6. Common live migration tasks
You can perform the following live migration tasks:
- Configure live migration settings
- Configure live migration for heavy workloads
- Initiate and cancel live migration
- Monitor the progress of all live migrations in the Migration tab of the Red Hat OpenShift Service on AWS web console.
- View VM migration metrics in the Metrics tab of the web console.
11.1.7. Additional resources
11.2. Configuring live migration
You can configure live migration settings to ensure that the migration processes do not overwhelm the cluster.
You can configure live migration policies to apply different migration configurations to groups of virtual machines (VMs).
11.2.1. Configuring live migration limits and timeouts
Configure live migration limits and timeouts for the cluster by updating the HyperConverged
custom resource (CR), which is located in the openshift-cnv
namespace.
Prerequisites
- You have installed the OpenShift CLI (oc).
Procedure
Edit the HyperConverged CR and add the necessary live migration parameters:
$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
Example configuration file
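A minimal sketch, using illustrative values; the numbered comments correspond to the explanations that follow:
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  liveMigrationConfig:
    bandwidthPerMigration: 64Mi   # 1
    completionTimeoutPerGiB: 800   # 2
    parallelMigrationsPerCluster: 5   # 3
    parallelOutboundMigrationsPerNode: 2   # 4
    progressTimeout: 150   # 5
    allowPostCopy: false   # 6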
1. Bandwidth limit of each migration, where the value is the quantity of bytes per second. For example, a value of 2048Mi means 2048 MiB/s. Default: 0, which is unlimited.
2. The migration is canceled if it has not completed in this time, in seconds per GiB of memory. For example, a VM with 6GiB memory times out if it has not completed migration in 4800 seconds. If the Migration Method is BlockMigration, the size of the migrating disks is included in the calculation.
3. Number of migrations running in parallel in the cluster. Default: 5.
4. Maximum number of outbound migrations per node. Default: 2.
5. The migration is canceled if memory copy fails to make progress in this time, in seconds. Default: 150.
6. If a VM is running a heavy workload and the memory dirty rate is too high, this can prevent the migration from one node to another from converging. To prevent this, you can enable post copy mode. By default, allowPostCopy is set to false.
You can restore the default value for any spec.liveMigrationConfig
field by deleting that key/value pair and saving the file. For example, delete progressTimeout: <value>
to restore the default progressTimeout: 150
.
11.2.2. Configure live migration for heavy workloads Copiar enlaceEnlace copiado en el portapapeles!
When migrating a VM running a heavy workload (for example, database processing) with higher memory dirty rates, you need a higher bandwidth to complete the migration.
If the dirty rate is too high, the migration from one node to another does not converge. To prevent this, enable post copy mode.
Post copy mode triggers if the initial pre-copy phase does not complete within the defined timeout. During post copy, the VM CPUs pause on the source host while transferring the minimum required memory pages. Then the VM CPUs activate on the destination host, and the remaining memory pages transfer into the destination node at runtime.
Configure live migration for heavy workloads by updating the HyperConverged
custom resource (CR), which is located in the openshift-cnv
namespace.
Prerequisites
-
You have installed the OpenShift CLI (
oc
).
Procedure
Edit the
HyperConverged
CR and add the necessary parameters for migrating heavy workloads:
$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
Example configuration file
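A minimal sketch of a heavy-workload configuration along the lines described by the callouts below, assuming the same spec.liveMigrationConfig fields as in the previous section; the values are illustrative:
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  liveMigrationConfig:
    bandwidthPerMigration: 0Mi           # 1
    completionTimeoutPerGiB: 150         # 2
    parallelMigrationsPerCluster: 3      # 3
    parallelOutboundMigrationsPerNode: 1 # 4
    progressTimeout: 300                 # 5
    allowPostCopy: true                  # 6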
1. Bandwidth limit of each migration, where the value is the quantity of bytes per second. The default is 0, which is unlimited.
2. The migration is canceled if it is not completed in this time, and triggers post copy mode, when post copy is enabled. This value is measured in seconds per GiB of memory. You can lower completionTimeoutPerGiB to trigger post copy mode earlier in the migration process, or raise completionTimeoutPerGiB to trigger post copy mode later in the migration process.
3. Number of migrations running in parallel in the cluster. The default is 5. Keeping the parallelMigrationsPerCluster setting low is better when migrating heavy workloads.
4. Maximum number of outbound migrations per node. Configure a single VM per node for heavy workloads.
5. The migration is canceled if memory copy fails to make progress in this time. This value is measured in seconds. Increase this parameter for large memory sizes running heavy workloads.
6. Use post copy mode when memory dirty rates are high to ensure the migration converges. Set allowPostCopy to true to enable post copy mode.
- Optional: If your main network is too busy for the migration, configure a secondary, dedicated migration network.
Post copy mode can impact performance during the transfer, and should not be used for critical data, or with unstable networks.
11.2.4. Live migration policies Copiar enlaceEnlace copiado en el portapapeles!
You can create live migration policies to apply different migration configurations to groups of VMs that are defined by VM or project labels.
You can create live migration policies by using the Red Hat OpenShift Service on AWS web console or the command line.
11.2.4.1. Creating a live migration policy by using the CLI Copiar enlaceEnlace copiado en el portapapeles!
You can create a live migration policy by using the command line. KubeVirt applies the live migration policy to selected virtual machines (VMs) by using any combination of labels:
- VM labels such as size, os, or gpu
- Project labels such as priority, bandwidth, or hpc-workload
For the policy to apply to a specific group of VMs, all labels on the group of VMs must match the labels of the policy.
If multiple live migration policies apply to a VM, the policy with the greatest number of matching labels takes precedence.
If multiple policies meet this criterion, the policies are sorted in alphabetical order of the matching label keys, and the first one in that order takes precedence.
Prerequisites
-
You have installed the OpenShift CLI (
oc
).
Procedure
Edit the VM object to which you want to apply a live migration policy, and add the corresponding VM labels.
Open the YAML configuration of the resource:
$ oc edit vm <vm_name>
Adjust the required label values in the .spec.template.metadata.labels section of the configuration. For example, to mark the VM as a production VM for the purposes of migration policies, add the kubevirt.io/environment: production line, as shown in the sketch after this step.
- Save and exit the configuration.
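A minimal sketch of the relevant part of the VirtualMachine spec after the edit; only the labels stanza is shown and other fields are omitted:
spec:
  template:
    metadata:
      labels:
        kubevirt.io/environment: production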
Configure a
MigrationPolicy
object with the corresponding labels. The following example configures a policy that applies to all VMs that are labeled as production; a sketch of such a manifest follows this procedure.
Create the migration policy by running the following command:
$ oc create -f <migration_policy>.yaml
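For reference, a MigrationPolicy manifest matching the production label might look like the following. This is a sketch: the migrations.kubevirt.io/v1alpha1 API group, the selectors field names, the policy name, and the tuning values are assumptions taken from upstream KubeVirt, not values stated in this document:
apiVersion: migrations.kubevirt.io/v1alpha1
kind: MigrationPolicy
metadata:
  name: production-migration-policy
spec:
  selectors:
    virtualMachineInstanceSelector:
      kubevirt.io/environment: production
  allowPostCopy: false
  bandwidthPerMigration: 64Mi
  completionTimeoutPerGiB: 800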
11.3. Initiating and canceling live migration Copiar enlaceEnlace copiado en el portapapeles!
You can initiate the live migration of a virtual machine (VM) to another node by using the Red Hat OpenShift Service on AWS web console or the command line.
You can cancel a live migration by using the web console or the command line. The VM remains on its original node.
You can also initiate and cancel live migration by using the virtctl migrate <vm_name>
and virtctl migrate-cancel <vm_name>
commands.
11.3.1. Initiating live migration Copiar enlaceEnlace copiado en el portapapeles!
11.3.1.1. Initiating live migration by using the web console Copiar enlaceEnlace copiado en el portapapeles!
You can live migrate a running virtual machine (VM) to a different node in the cluster by using the Red Hat OpenShift Service on AWS web console.
The Migrate action is visible to all users but only cluster administrators can initiate a live migration.
Prerequisites
-
You have the
kubevirt.io:migrate
RBAC role or you are a cluster administrator. - The VM is migratable.
- If the VM is configured with a host model CPU, the cluster has an available node that supports the CPU model.
Procedure
- Navigate to Virtualization → VirtualMachines in the web console.
-
Select Migrate from the Options menu
beside a VM.
- Click Migrate.
11.3.1.2. Initiating live migration by using the CLI Copiar enlaceEnlace copiado en el portapapeles!
You can initiate the live migration of a running virtual machine (VM) by using the command line to create a VirtualMachineInstanceMigration
object for the VM.
Prerequisites
-
You have installed the OpenShift CLI (
oc
). -
You have the
kubevirt.io:migrate
RBAC role or you are a cluster administrator.
Procedure
Create a
VirtualMachineInstanceMigration
manifest for the VM that you want to migrate (a minimal sketch appears below).
Create the object by running the following command:
$ oc create -f <migration_name>.yaml
The
VirtualMachineInstanceMigration
object triggers a live migration of the VM. This object exists in the cluster for as long as the virtual machine instance is running, unless manually deleted.
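A minimal sketch of the manifest, assuming the kubevirt.io/v1 API and the spec.vmiName field; <migration_name> and <vm_name> are placeholders:
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceMigration
metadata:
  name: <migration_name>
spec:
  vmiName: <vm_name>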
Verification
Obtain the VM status by running the following command:
$ oc describe vmi <vm_name> -n <namespace>
11.3.2. Canceling live migration Copiar enlaceEnlace copiado en el portapapeles!
11.3.2.1. Canceling live migration by using the web console Copiar enlaceEnlace copiado en el portapapeles!
You can cancel the live migration of a virtual machine (VM) by using the Red Hat OpenShift Service on AWS web console.
Prerequisites
-
You have the
kubevirt.io:migrate
RBAC role or you are a cluster administrator.
Procedure
- Navigate to Virtualization → VirtualMachines in the web console.
-
Select Cancel Migration on the Options menu
beside a VM.
11.3.2.2. Canceling live migration by using the CLI Copiar enlaceEnlace copiado en el portapapeles!
Cancel the live migration of a virtual machine by deleting the VirtualMachineInstanceMigration
object associated with the migration.
Prerequisites
-
You have installed the OpenShift CLI (
oc
). -
You have the
kubevirt.io:migrate
RBAC role or you are a cluster administrator.
Procedure
Delete the
VirtualMachineInstanceMigration
object that triggered the live migration,migration-job
in this example:oc delete vmim migration-job
$ oc delete vmim migration-job
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Chapter 12. Nodes Copiar enlaceEnlace copiado en el portapapeles!
12.1. Node maintenance Copiar enlaceEnlace copiado en el portapapeles!
Nodes can be placed into maintenance mode by using the oc adm
utility or NodeMaintenance
custom resources (CRs).
The node-maintenance-operator
(NMO) is no longer shipped with OpenShift Virtualization. It is deployed as a standalone Operator from the OperatorHub in the Red Hat OpenShift Service on AWS web console or by using the OpenShift CLI (oc
).
For more information on remediation, fencing, and maintaining nodes, see the Workload Availability for Red Hat OpenShift documentation.
Virtual machines (VMs) must have a persistent volume claim (PVC) with a shared ReadWriteMany
(RWX) access mode to be live migrated.
The Node Maintenance Operator watches for new or deleted NodeMaintenance
CRs. When a new NodeMaintenance
CR is detected, no new workloads are scheduled and the node is cordoned off from the rest of the cluster. All pods that can be evicted are evicted from the node. When a NodeMaintenance
CR is deleted, the node that is referenced in the CR is made available for new workloads.
Using a NodeMaintenance
CR for node maintenance tasks achieves the same results as the oc adm cordon
and oc adm drain
commands using standard Red Hat OpenShift Service on AWS custom resource processing.
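For reference, a NodeMaintenance CR along these lines places a node into maintenance. This is a sketch: the nodemaintenance.medik8s.io/v1beta1 API group is the one used by the standalone Node Maintenance Operator, and the node name and reason values are placeholders:
apiVersion: nodemaintenance.medik8s.io/v1beta1
kind: NodeMaintenance
metadata:
  name: node-maintenance-example
spec:
  nodeName: <node_name>
  reason: "Hardware replacement"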
12.1.1. Eviction strategies Copiar enlaceEnlace copiado en el portapapeles!
Placing a node into maintenance marks the node as unschedulable and drains all the VMs and pods from it.
You can configure eviction strategies for virtual machines (VMs) or for the cluster.
- VM eviction strategy
The VM
LiveMigrate
eviction strategy ensures that a virtual machine instance (VMI) is not interrupted if the node is placed into maintenance or drained. VMIs with this eviction strategy will be live migrated to another node.
You can configure eviction strategies for virtual machines (VMs) by using the Red Hat OpenShift Service on AWS web console or the command line.
Important
The default eviction strategy is LiveMigrate. A non-migratable VM with a LiveMigrate eviction strategy might prevent nodes from draining or block an infrastructure upgrade because the VM is not evicted from the node. This situation causes a migration to remain in a Pending or Scheduling state unless you shut down the VM manually.
You must set the eviction strategy of non-migratable VMs to LiveMigrateIfPossible, which does not block an upgrade, or to None, for VMs that should not be migrated.
- Cluster eviction strategy
- You can configure an eviction strategy for the cluster to prioritize workload continuity or infrastructure upgrade.
Eviction strategy | Description | Interrupts workflow | Blocks upgrades |
---|---|---|---|
LiveMigrate 1 | Prioritizes workload continuity over upgrades. | No | Yes 2 |
LiveMigrateIfPossible | Prioritizes upgrades over workload continuity to ensure that the environment is updated. | Yes | No |
None 3 | Shuts down VMs with no eviction strategy. | Yes | No |
1. Default eviction strategy for multi-node clusters.
2. If a VM blocks an upgrade, you must shut down the VM manually.
3. Default eviction strategy for single-node OpenShift.
12.1.1.1. Configuring a VM eviction strategy using the CLI Copiar enlaceEnlace copiado en el portapapeles!
You can configure an eviction strategy for a virtual machine (VM) by using the command line.
The default eviction strategy is LiveMigrate
. A non-migratable VM with a LiveMigrate
eviction strategy might prevent nodes from draining or block an infrastructure upgrade because the VM is not evicted from the node. This situation causes a migration to remain in a Pending
or Scheduling
state unless you shut down the VM manually.
You must set the eviction strategy of non-migratable VMs to LiveMigrateIfPossible
, which does not block an upgrade, or to None
, for VMs that should not be migrated.
Prerequisites
-
You have installed the OpenShift CLI (
oc
).
Procedure
Edit the
VirtualMachine
resource by running the following command:
$ oc edit vm <vm_name> -n <namespace>
Example eviction strategy
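A minimal sketch of this setting, assuming the eviction strategy is defined in the spec.template.spec.evictionStrategy field of the VirtualMachine resource; other fields are omitted:
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: <vm_name>
spec:
  template:
    spec:
      evictionStrategy: LiveMigrateIfPossible # 1
# ...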
1. Specify the eviction strategy. The default value is LiveMigrate.
Restart the VM to apply the changes:
virtctl restart <vm_name> -n <namespace>
$ virtctl restart <vm_name> -n <namespace>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
12.1.1.2. Configuring a cluster eviction strategy by using the CLI Copiar enlaceEnlace copiado en el portapapeles!
You can configure an eviction strategy for a cluster by using the command line.
Prerequisites
-
You have installed the OpenShift CLI (
oc
).
Procedure
Edit the
hyperconverged
resource by running the following command:
$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
Set the cluster eviction strategy as shown in the following example:
Example cluster eviction strategy
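A minimal sketch, assuming the cluster-wide eviction strategy is set through the spec.evictionStrategy field of the HyperConverged CR:
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  evictionStrategy: LiveMigrate
# ...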
12.1.2. Run strategies Copiar enlaceEnlace copiado en el portapapeles!
The spec.runStrategy
key determines how a VM behaves under certain conditions.
12.1.2.1. Run strategies Copiar enlaceEnlace copiado en el portapapeles!
The spec.runStrategy
key has four possible values:
Always
- A virtual machine instance (VMI) is always present when a virtual machine (VM) is created. A new VMI is created if the original stops for any reason.
RerunOnFailure
- The VMI is re-created on another node if the previous instance fails. The instance is not re-created if the VM stops successfully, such as when it is shut down.
Manual
-
You control the VMI state manually with the
start
,stop
, andrestart
virtctl client commands. The VM is not automatically restarted. Halted
- No VMI is present when a VM is created.
Different combinations of the virtctl start
, stop
and restart
commands affect the run strategy.
The following table describes a VM’s transition between states. The first column shows the VM’s initial run strategy. The remaining columns show a virtctl command and the new run strategy after that command is run.
Initial run strategy | Start | Stop | Restart |
---|---|---|---|
Always | - | Halted | Always |
RerunOnFailure | RerunOnFailure | RerunOnFailure | RerunOnFailure |
Manual | Manual | Manual | Manual |
Halted | Always | - | - |
If a node in a cluster installed by using installer-provisioned infrastructure fails the machine health check and is unavailable, VMs with runStrategy: Always
or runStrategy: RerunOnFailure
are rescheduled on a new node.
12.1.2.2. Configuring a VM run strategy by using the CLI Copiar enlaceEnlace copiado en el portapapeles!
You can configure a run strategy for a virtual machine (VM) by using the command line.
Prerequisites
-
You have installed the OpenShift CLI (
oc
).
Procedure
Edit the
VirtualMachine
resource by running the following command:
$ oc edit vm <vm_name> -n <namespace>
Example run strategy
apiVersion: kubevirt.io/v1
kind: VirtualMachine
spec:
  runStrategy: Always
# ...
12.1.3. Maintaining bare metal nodes Copiar enlaceEnlace copiado en el portapapeles!
When you deploy Red Hat OpenShift Service on AWS on bare metal infrastructure, there are additional considerations that must be taken into account compared to deploying on cloud infrastructure. Unlike in cloud environments where the cluster nodes are considered ephemeral, re-provisioning a bare metal node requires significantly more time and effort for maintenance tasks.
When a bare metal node fails, for example, if a fatal kernel error happens or a NIC hardware failure occurs, workloads on the failed node need to be restarted elsewhere on the cluster while the problem node is repaired or replaced. Node maintenance mode allows cluster administrators to gracefully power down nodes, moving workloads to other parts of the cluster and ensuring that workloads are not interrupted. Detailed progress and node status are provided during maintenance.
12.2. Managing node labeling for obsolete CPU models Copiar enlaceEnlace copiado en el portapapeles!
You can schedule a virtual machine (VM) on a node as long as the VM CPU model and policy are supported by the node.
12.2.1. About node labeling for obsolete CPU models Copiar enlaceEnlace copiado en el portapapeles!
The OpenShift Virtualization Operator uses a predefined list of obsolete CPU models to ensure that a node supports only valid CPU models for scheduled VMs.
By default, the following CPU models are eliminated from the list of labels generated for the node:
Example 12.1. Obsolete CPU models
This predefined list is not visible in the HyperConverged
CR. You cannot remove CPU models from this list, but you can add to the list by editing the spec.obsoleteCPUs.cpuModels
field of the HyperConverged
CR.
12.2.2. Configuring obsolete CPU models Copiar enlaceEnlace copiado en el portapapeles!
You can configure a list of obsolete CPU models by editing the HyperConverged
custom resource (CR).
Procedure
Edit the
HyperConverged
custom resource, specifying the obsolete CPU models in theobsoleteCPUs
array. For example:Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- Replace the example values in the
cpuModels
array with obsolete CPU models. Any value that you specify is added to a predefined list of obsolete CPU models. The predefined list is not visible in the CR.
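A minimal sketch of this edit, using the spec.obsoleteCPUs.cpuModels field named in the previous section; the model names shown are illustrative placeholders, not the predefined list:
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  obsoleteCPUs:
    cpuModels:                 # 1
      - "<obsolete_cpu_1>"
      - "<obsolete_cpu_2>"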
12.3. Preventing node reconciliation Copiar enlaceEnlace copiado en el portapapeles!
Use skip-node
annotation to prevent the node-labeller
from reconciling a node.
12.3.1. Using skip-node annotation Copiar enlaceEnlace copiado en el portapapeles!
If you want the node-labeller
to skip a node, annotate that node by using the oc
CLI.
Prerequisites
-
You have installed the OpenShift CLI (
oc
).
Procedure
Annotate the node that you want to skip by running the following command:
oc annotate node <node_name> node-labeller.kubevirt.io/skip-node=true
$ oc annotate node <node_name> node-labeller.kubevirt.io/skip-node=true
1 Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- Replace
<node_name>
with the name of the relevant node to skip.
Reconciliation resumes on the next cycle after the node annotation is removed or set to false.
Chapter 13. Monitoring Copiar enlaceEnlace copiado en el portapapeles!
13.1. Monitoring overview Copiar enlaceEnlace copiado en el portapapeles!
You can monitor the health of your cluster and virtual machines (VMs) with the following tools:
- Monitoring OpenShift Virtualization VM health status
- View the overall health of your OpenShift Virtualization environment in the web console by navigating to the Home → Overview page in the Red Hat OpenShift Service on AWS web console. The Status card displays the overall health of OpenShift Virtualization based on the alerts and conditions.
- Prometheus queries for virtual resources
- Query vCPU, network, storage, and guest memory swapping usage and live migration progress.
- VM custom metrics
-
Configure the
node-exporter
service to expose internal VM metrics and processes. - VM health checks
- Configure readiness, liveness, and guest agent ping probes and a watchdog for VMs.
- Runbooks
13.2. Prometheus queries for virtual resources Copiar enlaceEnlace copiado en el portapapeles!
Use the Red Hat OpenShift Service on AWS monitoring dashboard to query virtualization metrics. OpenShift Virtualization provides metrics that you can use to monitor the consumption of cluster infrastructure resources, including network, storage, and guest memory swapping. You can also use metrics to query live migration status.
13.2.1. Prerequisites Copiar enlaceEnlace copiado en el portapapeles!
- For guest memory swapping queries to return data, memory swapping must be enabled on the virtual guests.
13.2.2. Querying metrics for all projects with the Red Hat OpenShift Service on AWS web console Copiar enlaceEnlace copiado en el portapapeles!
You can use the Red Hat OpenShift Service on AWS metrics query browser to run Prometheus Query Language (PromQL) queries to examine metrics visualized on a plot. This functionality provides information about the state of a cluster and any user-defined workloads that you are monitoring.
As a cluster administrator or as a user with view permissions for all projects, you can access metrics for all default Red Hat OpenShift Service on AWS and user-defined projects in the Metrics UI.
The Metrics UI includes predefined queries, for example, CPU, memory, bandwidth, or network packet for all projects. You can also run custom Prometheus Query Language (PromQL) queries.
Prerequisites
-
You have access to the cluster as a user with the
cluster-admin
cluster role or with view permissions for all projects. -
You have installed the OpenShift CLI (
oc
).
Procedure
- In the Red Hat OpenShift Service on AWS web console, click Observe → Metrics.
To add one or more queries, perform any of the following actions:
Expand Option Description Select an existing query.
From the Select query drop-down list, select an existing query.
Create a custom query.
Add your Prometheus Query Language (PromQL) query to the Expression field.
As you type a PromQL expression, autocomplete suggestions appear in a drop-down list. These suggestions include functions, metrics, labels, and time tokens. Use the keyboard arrows to select one of these suggested items and then press Enter to add the item to your expression. Move your mouse pointer over a suggested item to view a brief description of that item.
Add multiple queries.
Click Add query.
Duplicate an existing query.
Click the options menu
next to the query, then choose Duplicate query.
Disable a query from being run.
Click the options menu
next to the query and choose Disable query.
To run queries that you created, click Run queries. The metrics from the queries are visualized on the plot. If a query is invalid, the UI shows an error message.
Note- When drawing time series graphs, queries that operate on large amounts of data might time out or overload the browser. To avoid this, click Hide graph and calibrate your query by using only the metrics table. Then, after finding a feasible query, enable the plot to draw the graphs.
- By default, the query table shows an expanded view that lists every metric and its current value. Click the ˅ down arrowhead to minimize the expanded view for a query.
- Optional: Save the page URL to use this set of queries again in the future.
Explore the visualized metrics. Initially, all metrics from all enabled queries are shown on the plot. Select which metrics are shown by performing any of the following actions:
Expand Option Description Hide all metrics from a query.
Click the options menu
for the query and click Hide all series.
Hide a specific metric.
Go to the query table and click the colored square near the metric name.
Zoom into the plot and change the time range.
Perform one of the following actions:
- Visually select the time range by clicking and dragging on the plot horizontally.
- Use the menu to select the time range.
Reset the time range.
Click Reset zoom.
Display outputs for all queries at a specific point in time.
Hover over the plot at the point you are interested in. The query outputs appear in a pop-up box.
Hide the plot.
Click Hide graph.
13.2.3. Querying metrics for user-defined projects with the Red Hat OpenShift Service on AWS web console Copiar enlaceEnlace copiado en el portapapeles!
You can use the Red Hat OpenShift Service on AWS metrics query browser to run Prometheus Query Language (PromQL) queries to examine metrics visualized on a plot. This functionality provides information about any user-defined workloads that you are monitoring.
As a developer, you must specify a project name when querying metrics. You must have the required privileges to view metrics for the selected project.
The Metrics UI includes predefined queries, for example, CPU, memory, bandwidth, or network packet. These queries are restricted to the selected project. You can also run custom Prometheus Query Language (PromQL) queries for the project.
Prerequisites
- You have access to the cluster as a developer or as a user with view permissions for the project that you are viewing metrics for.
- You have enabled monitoring for user-defined projects.
- You have deployed a service in a user-defined project.
-
You have created a
ServiceMonitor
custom resource definition (CRD) for the service to define how the service is monitored.
Procedure
- In the Red Hat OpenShift Service on AWS web console, click Observe → Metrics.
To add one or more queries, perform any of the following actions:
Expand Option Description Select an existing query.
From the Select query drop-down list, select an existing query.
Create a custom query.
Add your Prometheus Query Language (PromQL) query to the Expression field.
As you type a PromQL expression, autocomplete suggestions appear in a drop-down list. These suggestions include functions, metrics, labels, and time tokens. Use the keyboard arrows to select one of these suggested items and then press Enter to add the item to your expression. Move your mouse pointer over a suggested item to view a brief description of that item.
Add multiple queries.
Click Add query.
Duplicate an existing query.
Click the options menu
next to the query, then choose Duplicate query.
Disable a query from being run.
Click the options menu
next to the query and choose Disable query.
To run queries that you created, click Run queries. The metrics from the queries are visualized on the plot. If a query is invalid, the UI shows an error message.
Note- When drawing time series graphs, queries that operate on large amounts of data might time out or overload the browser. To avoid this, click Hide graph and calibrate your query by using only the metrics table. Then, after finding a feasible query, enable the plot to draw the graphs.
- By default, the query table shows an expanded view that lists every metric and its current value. Click the ˅ down arrowhead to minimize the expanded view for a query.
- Optional: Save the page URL to use this set of queries again in the future.
Explore the visualized metrics. Initially, all metrics from all enabled queries are shown on the plot. Select which metrics are shown by performing any of the following actions:
Expand Option Description Hide all metrics from a query.
Click the options menu
for the query and click Hide all series.
Hide a specific metric.
Go to the query table and click the colored square near the metric name.
Zoom into the plot and change the time range.
Perform one of the following actions:
- Visually select the time range by clicking and dragging on the plot horizontally.
- Use the menu to select the time range.
Reset the time range.
Click Reset zoom.
Display outputs for all queries at a specific point in time.
Hover over the plot at the point you are interested in. The query outputs appear in a pop-up box.
Hide the plot.
Click Hide graph.
13.2.4. Virtualization metrics Copiar enlaceEnlace copiado en el portapapeles!
The following metric descriptions include example Prometheus Query Language (PromQL) queries. These metrics are not an API and might change between versions. For a complete list of virtualization metrics, see KubeVirt components metrics.
The following examples use topk
queries that specify a time period. If virtual machines are deleted during that time period, they can still appear in the query output.
13.2.4.1. vCPU metrics Copiar enlaceEnlace copiado en el portapapeles!
The following query can identify virtual machines that are waiting for Input/Output (I/O):
kubevirt_vmi_vcpu_wait_seconds_total
- Returns the wait time (in seconds) on I/O for vCPUs of a virtual machine. Type: Counter.
A value above '0' means that the vCPU wants to run, but the host scheduler cannot run it yet. This inability to run indicates that there is an issue with I/O.
To query the vCPU metric, the schedstats=enable
kernel argument must first be applied to the MachineConfig
object. This kernel argument enables scheduler statistics used for debugging and performance tuning and adds a minor additional load to the scheduler.
Example vCPU wait time query
topk(3, sum by (name, namespace) (rate(kubevirt_vmi_vcpu_wait_seconds_total[6m]))) > 0
- 1
- This query returns the top 3 VMs waiting for I/O at every given moment over a six-minute time period.
13.2.4.2. Network metrics Copiar enlaceEnlace copiado en el portapapeles!
The following queries can identify virtual machines that are saturating the network:
kubevirt_vmi_network_receive_bytes_total
- Returns the total amount of traffic received (in bytes) on the virtual machine’s network. Type: Counter.
kubevirt_vmi_network_transmit_bytes_total
- Returns the total amount of traffic transmitted (in bytes) on the virtual machine’s network. Type: Counter.
Example network traffic query
topk(3, sum by (name, namespace) (rate(kubevirt_vmi_network_receive_bytes_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_network_transmit_bytes_total[6m]))) > 0
- 1
- This query returns the top 3 VMs transmitting the most network traffic at every given moment over a six-minute time period.
13.2.4.3. Storage metrics Copiar enlaceEnlace copiado en el portapapeles!
13.2.4.3.1. Storage-related traffic Copiar enlaceEnlace copiado en el portapapeles!
The following queries can identify VMs that are writing large amounts of data:
kubevirt_vmi_storage_read_traffic_bytes_total
- Returns the total amount (in bytes) of the virtual machine’s storage-related traffic. Type: Counter.
kubevirt_vmi_storage_write_traffic_bytes_total
- Returns the total amount of storage writes (in bytes) of the virtual machine’s storage-related traffic. Type: Counter.
Example storage-related traffic query
topk(3, sum by (name, namespace) (rate(kubevirt_vmi_storage_read_traffic_bytes_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_storage_write_traffic_bytes_total[6m]))) > 0
- 1
- This query returns the top 3 VMs performing the most storage traffic at every given moment over a six-minute time period.
13.2.4.3.2. Storage snapshot data Copiar enlaceEnlace copiado en el portapapeles!
kubevirt_vmsnapshot_disks_restored_from_source
- Returns the total number of virtual machine disks restored from the source virtual machine. Type: Gauge.
kubevirt_vmsnapshot_disks_restored_from_source_bytes
- Returns the amount of space in bytes restored from the source virtual machine. Type: Gauge.
Examples of storage snapshot data queries
kubevirt_vmsnapshot_disks_restored_from_source{vm_name="simple-vm", vm_namespace="default"}
- 1
- This query returns the total number of virtual machine disks restored from the source virtual machine.
kubevirt_vmsnapshot_disks_restored_from_source_bytes{vm_name="simple-vm", vm_namespace="default"}
- 1
- This query returns the amount of space in bytes restored from the source virtual machine.
13.2.4.3.3. I/O performance Copiar enlaceEnlace copiado en el portapapeles!
The following queries can determine the I/O performance of storage devices:
kubevirt_vmi_storage_iops_read_total
- Returns the amount of read I/O operations the virtual machine is performing per second. Type: Counter.
kubevirt_vmi_storage_iops_write_total
- Returns the amount of write I/O operations the virtual machine is performing per second. Type: Counter.
Example I/O performance query
topk(3, sum by (name, namespace) (rate(kubevirt_vmi_storage_iops_read_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_storage_iops_write_total[6m]))) > 0
- 1
- This query returns the top 3 VMs performing the most I/O operations per second at every given moment over a six-minute time period.
13.2.4.4. Guest memory swapping metrics Copiar enlaceEnlace copiado en el portapapeles!
The following queries can identify which swap-enabled guests are performing the most memory swapping:
kubevirt_vmi_memory_swap_in_traffic_bytes
- Returns the total amount (in bytes) of memory the virtual guest is swapping in. Type: Gauge.
kubevirt_vmi_memory_swap_out_traffic_bytes
- Returns the total amount (in bytes) of memory the virtual guest is swapping out. Type: Gauge.
Example memory swapping query
topk(3, sum by (name, namespace) (rate(kubevirt_vmi_memory_swap_in_traffic_bytes[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_memory_swap_out_traffic_bytes[6m]))) > 0
- 1
- This query returns the top 3 VMs where the guest is performing the most memory swapping at every given moment over a six-minute time period.
Memory swapping indicates that the virtual machine is under memory pressure. Increasing the memory allocation of the virtual machine can mitigate this issue.
13.2.4.5. Live migration metrics Copiar enlaceEnlace copiado en el portapapeles!
The following metrics can be queried to show live migration status:
kubevirt_vmi_migration_data_processed_bytes
- The amount of guest operating system data that has migrated to the new virtual machine (VM). Type: Gauge.
kubevirt_vmi_migration_data_remaining_bytes
- The amount of guest operating system data that remains to be migrated. Type: Gauge.
kubevirt_vmi_migration_memory_transfer_rate_bytes
- The rate at which memory is becoming dirty in the guest operating system. Dirty memory is data that has been changed but not yet written to disk. Type: Gauge.
kubevirt_vmi_migrations_in_pending_phase
- The number of pending migrations. Type: Gauge.
kubevirt_vmi_migrations_in_scheduling_phase
- The number of scheduling migrations. Type: Gauge.
kubevirt_vmi_migrations_in_running_phase
- The number of running migrations. Type: Gauge.
kubevirt_vmi_migration_succeeded
- The number of successfully completed migrations. Type: Gauge.
kubevirt_vmi_migration_failed
- The number of failed migrations. Type: Gauge.
13.3. Exposing custom metrics for virtual machines Copiar enlaceEnlace copiado en el portapapeles!
Red Hat OpenShift Service on AWS includes a preconfigured, preinstalled, and self-updating monitoring stack that provides monitoring for core platform components. This monitoring stack is based on the Prometheus monitoring system. Prometheus is a time-series database and a rule evaluation engine for metrics.
In addition to using the Red Hat OpenShift Service on AWS monitoring stack, you can enable monitoring for user-defined projects by using the CLI and query custom metrics that are exposed for virtual machines through the node-exporter
service.
13.3.1. Configuring the node exporter service Copiar enlaceEnlace copiado en el portapapeles!
The node-exporter agent is deployed on every virtual machine in the cluster from which you want to collect metrics. Configure the node-exporter agent as a service to expose internal metrics and processes that are associated with virtual machines.
Prerequisites
-
Install the OpenShift CLI (
oc
). -
Log in to the cluster as a user with
cluster-admin
privileges. -
Create the
cluster-monitoring-config
ConfigMap
object in theopenshift-monitoring
project. -
Configure the
user-workload-monitoring-config
ConfigMap
object in theopenshift-user-workload-monitoring
project by settingenableUserWorkload
totrue
.
Procedure
Create the
Service
YAML file. In the following example, the file is called node-exporter-service.yaml. A sketch of this file appears after the callout descriptions below.
- 1
- The node-exporter service that exposes the metrics from the virtual machines.
- 2
- The namespace where the service is created.
- 3
- The label for the service. The
ServiceMonitor
uses this label to match this service. - 4
- The name given to the port that exposes metrics on port 9100 for the
ClusterIP
service. - 5
- The target port used by
node-exporter-service
to listen for requests. - 6
- The TCP port number of the virtual machine that is configured with the
monitor
label. - 7
- The label used to match the virtual machine’s pods. In this example, any virtual machine’s pod with the label
monitor
and a value ofmetrics
will be matched.
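A minimal sketch of node-exporter-service.yaml that matches the callouts above. The dynamation namespace and the servicetype: metrics label are assumptions inferred from other examples in this section; adjust them for your environment:
kind: Service
apiVersion: v1
metadata:
  name: node-exporter-service   # 1
  namespace: dynamation         # 2
  labels:
    servicetype: metrics        # 3
spec:
  ports:
    - name: exmet               # 4
      protocol: TCP
      port: 9100                # 5
      targetPort: 9100          # 6
  type: ClusterIP
  selector:
    monitor: metrics            # 7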
Create the node-exporter service:
oc create -f node-exporter-service.yaml
$ oc create -f node-exporter-service.yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
13.3.2. Configuring a virtual machine with the node exporter service Copiar enlaceEnlace copiado en el portapapeles!
Download the node-exporter
file on to the virtual machine. Then, create a systemd
service that runs the node-exporter service when the virtual machine boots.
Prerequisites
-
The pods for the component are running in the
openshift-user-workload-monitoring
project. -
Grant the
monitoring-edit
role to users who need to monitor this user-defined project.
Procedure
- Log on to the virtual machine.
Download the
node-exporter
file on to the virtual machine by using the directory path that applies to the version ofnode-exporter
file.wget https://github.com/prometheus/node_exporter/releases/download/<version>/node_exporter-<version>.linux-<architecture>.tar.gz
$ wget https://github.com/prometheus/node_exporter/releases/download/<version>/node_exporter-<version>.linux-<architecture>.tar.gz
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Extract the executable and place it in the
/usr/bin
directory.sudo tar xvf node_exporter-<version>.linux-<architecture>.tar.gz \ --directory /usr/bin --strip 1 "*/node_exporter"
$ sudo tar xvf node_exporter-<version>.linux-<architecture>.tar.gz \ --directory /usr/bin --strip 1 "*/node_exporter"
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Create a
node_exporter.service
file in this directory path:/etc/systemd/system
. Thissystemd
service file runs the node-exporter service when the virtual machine reboots.Copy to Clipboard Copied! Toggle word wrap Toggle overflow Enable and start the
systemd
service.sudo systemctl enable node_exporter.service sudo systemctl start node_exporter.service
$ sudo systemctl enable node_exporter.service $ sudo systemctl start node_exporter.service
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Verification
Verify that the node-exporter agent is reporting metrics from the virtual machine.
curl http://localhost:9100/metrics
$ curl http://localhost:9100/metrics
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
go_gc_duration_seconds{quantile="0"} 1.5244e-05
go_gc_duration_seconds{quantile="0.25"} 3.0449e-05
go_gc_duration_seconds{quantile="0.5"} 3.7913e-05
13.3.3. Creating a custom monitoring label for virtual machines Copiar enlaceEnlace copiado en el portapapeles!
To enable queries to multiple virtual machines from a single service, add a custom label in the virtual machine’s YAML file.
Prerequisites
-
Install the OpenShift CLI (
oc
). -
Log in as a user with
cluster-admin
privileges. - You have access to the web console to stop and restart a virtual machine.
Procedure
Edit the
template
spec of your virtual machine configuration file. In this example, the label monitor has the value metrics.
spec:
  template:
    metadata:
      labels:
        monitor: metrics
-
Stop and restart the virtual machine to create a new pod with the label name given to the
monitor
label.
13.3.3.1. Querying the node-exporter service for metrics Copiar enlaceEnlace copiado en el portapapeles!
Metrics are exposed for virtual machines through an HTTP service endpoint under the /metrics
canonical name. When you query for metrics, Prometheus directly scrapes the metrics from the metrics endpoint exposed by the virtual machines and presents these metrics for viewing.
Prerequisites
-
You have access to the cluster as a user with
cluster-admin
privileges or themonitoring-edit
role. - You have enabled monitoring for the user-defined project by configuring the node-exporter service.
-
You have installed the OpenShift CLI (
oc
).
Procedure
Obtain the HTTP service endpoint by specifying the namespace for the service:
oc get service -n <namespace> <node-exporter-service>
$ oc get service -n <namespace> <node-exporter-service>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow To list all available metrics for the node-exporter service, query the
metrics
resource.curl http://<172.30.226.162:9100>/metrics | grep -vE "^#|^$"
$ curl http://<172.30.226.162:9100>/metrics | grep -vE "^#|^$"
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
13.3.4. Creating a ServiceMonitor resource for the node exporter service Copiar enlaceEnlace copiado en el portapapeles!
You can use a Prometheus client library and scrape metrics from the /metrics
endpoint to access and view the metrics exposed by the node-exporter service. Use a ServiceMonitor
custom resource definition (CRD) to monitor the node exporter service.
Prerequisites
-
You have access to the cluster as a user with
cluster-admin
privileges or themonitoring-edit
role. - You have enabled monitoring for the user-defined project by configuring the node-exporter service.
-
You have installed the OpenShift CLI (
oc
).
Procedure
Create a YAML file for the
ServiceMonitor
resource configuration. In this example, the service monitor matches any service with the label metrics and queries the exmet port every 30 seconds. A sketch of such a manifest appears after this procedure.
Create the ServiceMonitor configuration for the node-exporter service:
$ oc create -f node-exporter-metrics-monitor.yaml
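For reference, a ServiceMonitor manifest along these lines satisfies the description above. This is a sketch: the monitoring.coreos.com/v1 API group is the standard Prometheus Operator API, and the dynamation namespace and servicetype: metrics selector mirror the assumptions used in the Service sketch earlier in this section:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: node-exporter-metrics-monitor
  namespace: dynamation
spec:
  endpoints:
    - interval: 30s
      port: exmet
      scheme: http
  selector:
    matchLabels:
      servicetype: metrics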
13.3.4.1. Accessing the node exporter service outside the cluster Copiar enlaceEnlace copiado en el portapapeles!
You can access the node-exporter service outside the cluster and view the exposed metrics.
Prerequisites
-
You have access to the cluster as a user with
cluster-admin
privileges or themonitoring-edit
role. - You have enabled monitoring for the user-defined project by configuring the node-exporter service.
-
You have installed the OpenShift CLI (
oc
).
Procedure
Expose the node-exporter service.
oc expose service -n <namespace> <node_exporter_service_name>
$ oc expose service -n <namespace> <node_exporter_service_name>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Obtain the FQDN (Fully Qualified Domain Name) for the route.
oc get route -o=custom-columns=NAME:.metadata.name,DNS:.spec.host
$ oc get route -o=custom-columns=NAME:.metadata.name,DNS:.spec.host
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
NAME                    DNS
node-exporter-service   node-exporter-service-dynamation.apps.cluster.example.org
curl
command to display metrics for the node-exporter service.curl -s http://node-exporter-service-dynamation.apps.cluster.example.org/metrics
$ curl -s http://node-exporter-service-dynamation.apps.cluster.example.org/metrics
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
go_gc_duration_seconds{quantile="0"} 1.5382e-05
go_gc_duration_seconds{quantile="0.25"} 3.1163e-05
go_gc_duration_seconds{quantile="0.5"} 3.8546e-05
go_gc_duration_seconds{quantile="0.75"} 4.9139e-05
go_gc_duration_seconds{quantile="1"} 0.000189423
13.4. Virtual machine health checks Copiar enlaceEnlace copiado en el portapapeles!
You can configure virtual machine (VM) health checks by defining readiness and liveness probes in the VirtualMachine
resource.
13.4.1. About readiness and liveness probes Copiar enlaceEnlace copiado en el portapapeles!
Use readiness and liveness probes to detect and handle unhealthy virtual machines (VMs). You can include one or more probes in the specification of the VM to ensure that traffic does not reach a VM that is not ready for it and that a new VM is created when a VM becomes unresponsive.
A readiness probe determines whether a VM is ready to accept service requests. If the probe fails, the VM is removed from the list of available endpoints until the VM is ready.
A liveness probe determines whether a VM is responsive. If the probe fails, the VM is deleted and a new VM is created to restore responsiveness.
You can configure readiness and liveness probes by setting the spec.readinessProbe
and the spec.livenessProbe
fields of the VirtualMachine
object. These fields support the following tests:
- HTTP GET
- The probe determines the health of the VM by using a web hook. The test is successful if the HTTP response code is between 200 and 399. You can use an HTTP GET test with applications that return HTTP status codes when they are completely initialized.
- TCP socket
- The probe attempts to open a socket to the VM. The VM is only considered healthy if the probe can establish a connection. You can use a TCP socket test with applications that do not start listening until initialization is complete.
- Guest agent ping
-
The probe uses the
guest-ping
command to determine if the QEMU guest agent is running on the virtual machine.
13.4.1.1. Defining an HTTP readiness probe Copiar enlaceEnlace copiado en el portapapeles!
Define an HTTP readiness probe by setting the spec.readinessProbe.httpGet
field of the virtual machine (VM) configuration.
Prerequisites
-
You have installed the OpenShift CLI (
oc
).
Procedure
Include details of the readiness probe in the VM configuration file.
Sample readiness probe with an HTTP GET test
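A minimal sketch of a VirtualMachine spec containing such a probe; the numbered comments correspond to the callouts below, and the port, path, and timing values are illustrative:
apiVersion: kubevirt.io/v1
kind: VirtualMachine
# ...
spec:
  template:
    spec:
      readinessProbe:
        httpGet:                 # 1
          port: 1500             # 2
          path: /healthz         # 3
        initialDelaySeconds: 120 # 4
        periodSeconds: 20        # 5
        timeoutSeconds: 10       # 6
        failureThreshold: 3      # 7
        successThreshold: 3      # 8
# ...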
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- The HTTP GET request to perform to connect to the VM.
- 2
- The port of the VM that the probe queries. In the above example, the probe queries port 1500.
- 3
- The path to access on the HTTP server. In the above example, if the handler for the server’s /healthz path returns a success code, the VM is considered to be healthy. If the handler returns a failure code, the VM is removed from the list of available endpoints.
- 4
- The time, in seconds, after the VM starts before the readiness probe is initiated.
- 5
- The delay, in seconds, between performing probes. The default delay is 10 seconds. This value must be greater than
timeoutSeconds
. - 6
- The number of seconds of inactivity after which the probe times out and the VM is assumed to have failed. The default value is 1. This value must be lower than
periodSeconds
. - 7
- The number of times that the probe is allowed to fail. The default is 3. After the specified number of attempts, the pod is marked
Unready
. - 8
- The number of times that the probe must report success, after a failure, to be considered successful. The default is 1.
Create the VM by running the following command:
oc create -f <file_name>.yaml
$ oc create -f <file_name>.yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
13.4.1.2. Defining a TCP readiness probe Copiar enlaceEnlace copiado en el portapapeles!
Define a TCP readiness probe by setting the spec.readinessProbe.tcpSocket
field of the virtual machine (VM) configuration.
Prerequisites
-
You have installed the OpenShift CLI (
oc
).
Procedure
Include details of the TCP readiness probe in the VM configuration file.
Sample readiness probe with a TCP socket test
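A minimal sketch of the probe stanza; the numbered comments correspond to the callouts below, and the port and timing values are illustrative:
apiVersion: kubevirt.io/v1
kind: VirtualMachine
# ...
spec:
  template:
    spec:
      readinessProbe:
        initialDelaySeconds: 120 # 1
        periodSeconds: 20        # 2
        tcpSocket:               # 3
          port: 1500             # 4
        timeoutSeconds: 10       # 5
# ...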
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- The time, in seconds, after the VM starts before the readiness probe is initiated.
- 2
- The delay, in seconds, between performing probes. The default delay is 10 seconds. This value must be greater than
timeoutSeconds
. - 3
- The TCP action to perform.
- 4
- The port of the VM that the probe queries.
- 5
- The number of seconds of inactivity after which the probe times out and the VM is assumed to have failed. The default value is 1. This value must be lower than
periodSeconds
.
Create the VM by running the following command:
oc create -f <file_name>.yaml
$ oc create -f <file_name>.yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
13.4.1.3. Defining an HTTP liveness probe Copiar enlaceEnlace copiado en el portapapeles!
Define an HTTP liveness probe by setting the spec.livenessProbe.httpGet
field of the virtual machine (VM) configuration. You can define both HTTP and TCP tests for liveness probes in the same way as readiness probes. This procedure configures a sample liveness probe with an HTTP GET test.
Prerequisites
-
You have installed the OpenShift CLI (
oc
).
Procedure
Include details of the HTTP liveness probe in the VM configuration file.
Sample liveness probe with an HTTP GET test
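A minimal sketch of the probe stanza; the numbered comments correspond to the callouts below, and the port, path, and timing values are illustrative:
apiVersion: kubevirt.io/v1
kind: VirtualMachine
# ...
spec:
  template:
    spec:
      livenessProbe:
        initialDelaySeconds: 120 # 1
        periodSeconds: 20        # 2
        httpGet:                 # 3
          port: 1500             # 4
          path: /healthz         # 5
        timeoutSeconds: 10       # 6
# ...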
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- The time, in seconds, after the VM starts before the liveness probe is initiated.
- 2
- The delay, in seconds, between performing probes. The default delay is 10 seconds. This value must be greater than
timeoutSeconds
. - 3
- The HTTP GET request to perform to connect to the VM.
- 4
- The port of the VM that the probe queries. In the above example, the probe queries port 1500. The VM installs and runs a minimal HTTP server on port 1500 via cloud-init.
- 5
- The path to access on the HTTP server. In the above example, if the handler for the server’s
/healthz
path returns a success code, the VM is considered to be healthy. If the handler returns a failure code, the VM is deleted and a new VM is created. - 6
- The number of seconds of inactivity after which the probe times out and the VM is assumed to have failed. The default value is 1. This value must be lower than
periodSeconds
.
Create the VM by running the following command:
oc create -f <file_name>.yaml
$ oc create -f <file_name>.yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
13.4.2. Defining a watchdog Copiar enlaceEnlace copiado en el portapapeles!
You can define a watchdog to monitor the health of the guest operating system by performing the following steps:
- Configure a watchdog device for the virtual machine (VM).
- Install the watchdog agent on the guest.
The watchdog device monitors the agent and performs one of the following actions if the guest operating system is unresponsive:
-
poweroff
: The VM powers down immediately. Ifspec.runStrategy
is not set tomanual
, the VM reboots. reset
: The VM reboots in place and the guest operating system cannot react.NoteThe reboot time might cause liveness probes to time out. If cluster-level protections detect a failed liveness probe, the VM might be forcibly rescheduled, increasing the reboot time.
-
shutdown
: The VM gracefully powers down by stopping all services.
Watchdog is not available for Windows VMs.
13.4.2.1. Configuring a watchdog device for the virtual machine Copiar enlaceEnlace copiado en el portapapeles!
You configure a watchdog device for the virtual machine (VM).
Prerequisites
-
For
x86
systems, the VM must use a kernel that works with thei6300esb
watchdog device. If you uses390x
architecture, the kernel must be enabled fordiag288
. Red Hat Enterprise Linux (RHEL) images supporti6300esb
anddiag288
. -
You have installed the OpenShift CLI (
oc
).
Procedure
Create a
YAML
file with the following contents. The example configures the watchdog device on a VM with the poweroff action and exposes the device as /dev/watchdog; this device can then be used by the watchdog binary. A sketch of such a manifest follows.
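A minimal sketch, assuming the watchdog is defined in the spec.template.spec.domain.devices.watchdog stanza with the i6300esb model; the VM and device names are placeholders:
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: <vm_name>
spec:
  template:
    spec:
      domain:
        devices:
          watchdog:
            name: <watchdog_name>
            i6300esb:
              action: poweroff
# ...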
Apply the YAML file to your cluster by running the following command:
$ oc apply -f <file_name>.yaml
$ oc apply -f <file_name>.yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Verification
This procedure is provided for testing watchdog functionality only and must not be run on production machines.
Run the following command to verify that the VM is connected to the watchdog device:
lspci | grep watchdog -i
$ lspci | grep watchdog -i
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Run one of the following commands to confirm the watchdog is active:
Trigger a kernel panic:
echo c > /proc/sysrq-trigger
# echo c > /proc/sysrq-trigger
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Stop the watchdog service:
pkill -9 watchdog
# pkill -9 watchdog
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
13.4.2.2. Installing the watchdog agent on the guest Copiar enlaceEnlace copiado en el portapapeles!
You install the watchdog agent on the guest and start the watchdog
service.
Procedure
- Log in to the virtual machine as root user.
This step is only required when installing on IBM Z® (
s390x
). Enablewatchdog
by running the following command:modprobe diag288_wdt
# modprobe diag288_wdt
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Verify that the
/dev/watchdog
file path is present in the VM by running the following command:ls /dev/watchdog
# ls /dev/watchdog
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Install the
watchdog
package and its dependencies:yum install watchdog
# yum install watchdog
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Uncomment the following line in the
/etc/watchdog.conf
file and save the changes:#watchdog-device = /dev/watchdog
#watchdog-device = /dev/watchdog
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Enable the
watchdog
service to start on boot:systemctl enable --now watchdog.service
# systemctl enable --now watchdog.service
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
13.5. OpenShift Virtualization runbooks Copiar enlaceEnlace copiado en el portapapeles!
To diagnose and resolve issues that trigger OpenShift Virtualization alerts, follow the procedures in the runbooks for the OpenShift Virtualization Operator. Triggered OpenShift Virtualization alerts can be viewed in the main Observe → Alerts tab in the web console, and also in the Virtualization → Overview tab.
Runbooks for the OpenShift Virtualization Operator are maintained in the openshift/runbooks Git repository, and you can view them on GitHub.
13.5.1. CDIDataImportCronOutdated
- View the runbook for the CDIDataImportCronOutdated alert.
13.5.2. CDIDataVolumeUnusualRestartCount
- View the runbook for the CDIDataVolumeUnusualRestartCount alert.
13.5.3. CDIDefaultStorageClassDegraded
- View the runbook for the CDIDefaultStorageClassDegraded alert.
13.5.4. CDIMultipleDefaultVirtStorageClasses
- View the runbook for the CDIMultipleDefaultVirtStorageClasses alert.
13.5.5. CDINoDefaultStorageClass
- View the runbook for the CDINoDefaultStorageClass alert.
13.5.6. CDINotReady
- View the runbook for the CDINotReady alert.
13.5.7. CDIOperatorDown
- View the runbook for the CDIOperatorDown alert.
13.5.8. CDIStorageProfilesIncomplete
- View the runbook for the CDIStorageProfilesIncomplete alert.
13.5.9. CnaoDown
- View the runbook for the CnaoDown alert.
13.5.10. CnaoNMstateMigration
- View the runbook for the CnaoNMstateMigration alert.
13.5.11. HAControlPlaneDown
- View the runbook for the HAControlPlaneDown alert.
13.5.12. HCOInstallationIncomplete
- View the runbook for the HCOInstallationIncomplete alert.
13.5.13. HCOMisconfiguredDescheduler
- View the runbook for the HCOMisconfiguredDescheduler alert.
13.5.14. HPPNotReady
- View the runbook for the HPPNotReady alert.
13.5.15. HPPOperatorDown
- View the runbook for the HPPOperatorDown alert.
13.5.16. HPPSharingPoolPathWithOS
- View the runbook for the HPPSharingPoolPathWithOS alert.
13.5.17. HighCPUWorkload
- View the runbook for the HighCPUWorkload alert.
13.5.18. KubemacpoolDown
- View the runbook for the KubemacpoolDown alert.
13.5.19. KubeMacPoolDuplicateMacsFound
- View the runbook for the KubeMacPoolDuplicateMacsFound alert.
13.5.20. KubeVirtComponentExceedsRequestedCPU
- The KubeVirtComponentExceedsRequestedCPU alert is deprecated.
13.5.21. KubeVirtComponentExceedsRequestedMemory
- The KubeVirtComponentExceedsRequestedMemory alert is deprecated.
13.5.22. KubeVirtCRModified
- View the runbook for the KubeVirtCRModified alert.
13.5.23. KubeVirtDeprecatedAPIRequested
- View the runbook for the KubeVirtDeprecatedAPIRequested alert.
13.5.24. KubeVirtNoAvailableNodesToRunVMs
- View the runbook for the KubeVirtNoAvailableNodesToRunVMs alert.
13.5.25. KubevirtVmHighMemoryUsage
- View the runbook for the KubevirtVmHighMemoryUsage alert.
13.5.26. KubeVirtVMIExcessiveMigrations
- View the runbook for the KubeVirtVMIExcessiveMigrations alert.
13.5.27. LowKVMNodesCount
- View the runbook for the LowKVMNodesCount alert.
13.5.28. LowReadyVirtControllersCount
- View the runbook for the LowReadyVirtControllersCount alert.
13.5.29. LowReadyVirtOperatorsCount
- View the runbook for the LowReadyVirtOperatorsCount alert.
13.5.30. LowVirtAPICount
- View the runbook for the LowVirtAPICount alert.
13.5.31. LowVirtControllersCount
- View the runbook for the LowVirtControllersCount alert.
13.5.32. LowVirtOperatorCount
- View the runbook for the LowVirtOperatorCount alert.
13.5.33. NetworkAddonsConfigNotReady
- View the runbook for the NetworkAddonsConfigNotReady alert.
13.5.34. NoLeadingVirtOperator
- View the runbook for the NoLeadingVirtOperator alert.
13.5.35. NoReadyVirtController
- View the runbook for the NoReadyVirtController alert.
13.5.36. NoReadyVirtOperator
- View the runbook for the NoReadyVirtOperator alert.
13.5.37. NodeNetworkInterfaceDown
- View the runbook for the NodeNetworkInterfaceDown alert.
13.5.38. OperatorConditionsUnhealthy
- The OperatorConditionsUnhealthy alert is deprecated.
13.5.39. OrphanedVirtualMachineInstances
- View the runbook for the OrphanedVirtualMachineInstances alert.
13.5.40. OutdatedVirtualMachineInstanceWorkloads
- View the runbook for the OutdatedVirtualMachineInstanceWorkloads alert.
13.5.41. SingleStackIPv6Unsupported
- View the runbook for the SingleStackIPv6Unsupported alert.
13.5.42. SSPCommonTemplatesModificationReverted
- View the runbook for the SSPCommonTemplatesModificationReverted alert.
13.5.43. SSPDown
- View the runbook for the SSPDown alert.
13.5.44. SSPFailingToReconcile
- View the runbook for the SSPFailingToReconcile alert.
13.5.45. SSPHighRateRejectedVms
- View the runbook for the SSPHighRateRejectedVms alert.
13.5.46. SSPOperatorDown
- View the runbook for the SSPOperatorDown alert.
13.5.47. SSPTemplateValidatorDown
- View the runbook for the SSPTemplateValidatorDown alert.
13.5.48. UnsupportedHCOModification
- View the runbook for the UnsupportedHCOModification alert.
13.5.49. VirtAPIDown
- View the runbook for the VirtAPIDown alert.
13.5.50. VirtApiRESTErrorsBurst
- View the runbook for the VirtApiRESTErrorsBurst alert.
13.5.51. VirtApiRESTErrorsHigh
- View the runbook for the VirtApiRESTErrorsHigh alert.
13.5.52. VirtControllerDown
- View the runbook for the VirtControllerDown alert.
13.5.53. VirtControllerRESTErrorsBurst
- View the runbook for the VirtControllerRESTErrorsBurst alert.
13.5.54. VirtControllerRESTErrorsHigh
- View the runbook for the VirtControllerRESTErrorsHigh alert.
13.5.55. VirtHandlerDaemonSetRolloutFailing
- View the runbook for the VirtHandlerDaemonSetRolloutFailing alert.
13.5.56. VirtHandlerRESTErrorsBurst
- View the runbook for the VirtHandlerRESTErrorsBurst alert.
13.5.57. VirtHandlerRESTErrorsHigh
- View the runbook for the VirtHandlerRESTErrorsHigh alert.
13.5.58. VirtOperatorDown
- View the runbook for the VirtOperatorDown alert.
13.5.59. VirtOperatorRESTErrorsBurst
- View the runbook for the VirtOperatorRESTErrorsBurst alert.
13.5.60. VirtOperatorRESTErrorsHigh
- View the runbook for the VirtOperatorRESTErrorsHigh alert.
13.5.61. VirtualMachineCRCErrors
The VirtualMachineCRCErrors alert is deprecated. The alert is now called VMStorageClassWarning.
13.5.62. VMCannotBeEvicted
- View the runbook for the VMCannotBeEvicted alert.
13.5.63. VMStorageClassWarning
- View the runbook for the VMStorageClassWarning alert.
Chapter 14. Support
14.1. Support overview
You can request assistance from Red Hat Support, report bugs, collect data about your environment, and monitor the health of your cluster and virtual machines (VMs) with the following tools.
14.1.1. Opening support tickets
If you have encountered an issue that requires immediate assistance from Red Hat Support, you can submit a support case.
To report a bug, you can create a Jira issue directly.
14.1.1.1. Submitting a support case
To request support from Red Hat Support, follow the instructions for submitting a support case.
It is helpful to collect debugging data to include with your support request.
14.1.1.1.1. Collecting data for Red Hat Support
You can gather debugging information by performing the following steps:
- Collecting data about your environment: Configure Prometheus and Alertmanager and collect must-gather data for Red Hat OpenShift Service on AWS and OpenShift Virtualization.
14.1.1.2. Creating a Jira issue
To report a bug, you can create a Jira issue directly by filling out the form on the Create Issue page.
14.1.2. Web console monitoring
You can monitor the health of your cluster and VMs by using the Red Hat OpenShift Service on AWS web console. The web console displays resource usage, alerts, events, and trends for your cluster and for OpenShift Virtualization components and resources.
Page | Description |
---|---|
Overview page | Cluster details, status, alerts, inventory, and resource usage |
Virtualization → Overview tab | OpenShift Virtualization resources, usage, alerts, and status |
Virtualization → Top consumers tab | Top consumers of CPU, memory, and storage |
Virtualization → Migrations tab | Progress of live migrations |
Virtualization → VirtualMachines tab | CPU, memory, and storage usage summary |
Virtualization → VirtualMachines → VirtualMachine details → Metrics tab | VM resource usage, storage, network, and migration |
Virtualization → VirtualMachines → VirtualMachine details → Events tab | List of VM events |
Virtualization → VirtualMachines → VirtualMachine details → Diagnostics tab | VM status conditions and volume snapshot status |
14.2. Collecting data for Red Hat Support
When you submit a support case to Red Hat Support, it is helpful to provide debugging information for Red Hat OpenShift Service on AWS and OpenShift Virtualization by using the following tools:
- Prometheus
- Prometheus is a time-series database and a rule evaluation engine for metrics. Prometheus sends alerts to Alertmanager for processing.
- Alertmanager
- The Alertmanager service handles alerts received from Prometheus. The Alertmanager is also responsible for sending the alerts to external notification systems.
14.2.1. Collecting data about your environment
Collecting data about your environment minimizes the time required to analyze and determine the root cause.
Prerequisites
- Record the exact number of affected nodes and virtual machines.
14.2.2. Collecting data about virtual machines
Collecting data about malfunctioning virtual machines (VMs) minimizes the time required to analyze and determine the root cause.
Prerequisites
- Linux VMs: Install the latest QEMU guest agent.
Windows VMs:
- Record the Windows patch update details.
- Install the latest VirtIO drivers.
- Install the latest QEMU guest agent.
- If Remote Desktop Protocol (RDP) is enabled, connect by using the desktop viewer to determine whether there is a problem with the connection software.
Procedure
- Collect screenshots of VMs that have crashed before you restart them.
- Collect memory dumps from VMs before remediation attempts.
- Record factors that the malfunctioning VMs have in common. For example, the VMs have the same host or network.
14.3. Troubleshooting
OpenShift Virtualization provides tools and logs for troubleshooting virtual machines (VMs) and virtualization components.
You can troubleshoot OpenShift Virtualization components by using the tools provided in the web console or by using the oc
CLI tool.
14.3.1. Events
Red Hat OpenShift Service on AWS events are records of important life-cycle information and are useful for monitoring and troubleshooting virtual machine, namespace, and resource issues.
- VM events: Navigate to the Events tab of the VirtualMachine details page in the web console.
- Namespace events: You can view namespace events by running the following command:
$ oc get events -n <namespace>
See the list of events for details about specific events.
- Resource events: You can view resource events by running the following command:
$ oc describe <resource> <resource_name>
14.3.2. Pod logs
You can view logs for OpenShift Virtualization pods by using the web console or the CLI. You can also view aggregated logs by using the LokiStack in the web console.
14.3.2.1. Configuring OpenShift Virtualization pod log verbosity
You can configure the verbosity level of OpenShift Virtualization pod logs by editing the HyperConverged
custom resource (CR).
Prerequisites
-
You have installed the OpenShift CLI (
oc
).
Procedure
To set log verbosity for specific components, open the HyperConverged CR in your default text editor by running the following command:
$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
Set the log level for one or more components by editing the spec.logVerbosityConfig stanza. For example:
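A minimal sketch of such a stanza, assuming the component keys under spec.logVerbosityConfig.kubevirt; the component names and values shown are illustrative:
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  logVerbosityConfig:
    kubevirt:
      virtAPI: 5          # 1
      virtController: 4
      virtHandler: 3
      virtLauncher: 4
      virtOperator: 3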
- 1: The log verbosity value must be an integer in the range 1–9, where a higher number indicates a more detailed log. In this example, the virtAPI component logs are exposed if their priority level is 5 or higher.
- Apply your changes by saving and exiting the editor.
14.3.2.2. Viewing virt-launcher pod logs with the web console
You can view the virt-launcher
pod logs for a virtual machine by using the Red Hat OpenShift Service on AWS web console.
Procedure
- Navigate to Virtualization → VirtualMachines.
- Select a virtual machine to open the VirtualMachine details page.
- On the General tile, click the pod name to open the Pod details page.
- Click the Logs tab to view the logs.
14.3.2.3. Viewing OpenShift Virtualization pod logs with the CLI
You can view logs for the OpenShift Virtualization pods by using the oc
CLI tool.
Prerequisites
-
You have installed the OpenShift CLI (
oc
).
Procedure
View a list of pods in the OpenShift Virtualization namespace by running the following command:
$ oc get pods -n openshift-cnv
Example 14.1. Example output
View the pod log by running the following command:
$ oc logs -n openshift-cnv <pod_name>
Note: If a pod fails to start, you can use the --previous option to view logs from the last attempt. To monitor log output in real time, use the -f option.
Example 14.2. Example output
14.3.3. Guest system logs
Viewing the boot logs of VM guests can help diagnose issues. You can configure access to guests' logs and view them by using either the Red Hat OpenShift Service on AWS web console or the oc
CLI.
This feature is disabled by default. If a VM does not explicitly have this setting enabled or disabled, it inherits the cluster-wide default setting.
If sensitive information such as credentials or other personally identifiable information (PII) is written to the serial console, it is logged with all other visible text. Red Hat recommends using SSH to send sensitive data instead of the serial console.
14.3.3.1. Enabling default access to VM guest system logs with the web console
You can enable default access to VM guest system logs by using the web console.
Procedure
- From the side menu, click Virtualization → Overview.
- Click the Settings tab.
- Click Cluster → Guest management.
- Set Enable guest system log access to on.
14.3.3.2. Enabling default access to VM guest system logs with the CLI
You can enable default access to VM guest system logs by editing the HyperConverged
custom resource (CR).
Prerequisites
-
You have installed the OpenShift CLI (
oc
).
Procedure
Open the HyperConverged CR in your default editor by running the following command:
$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
Update the disableSerialConsoleLog value. For example:
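A minimal sketch of the relevant stanza, assuming the disableSerialConsoleLog field sits under spec.virtualMachineOptions in the HyperConverged CR:
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  virtualMachineOptions:
    disableSerialConsoleLog: true    # 1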
- 1: Set the value of disableSerialConsoleLog to false if you want serial console access to be enabled on VMs by default.
14.3.3.3. Setting guest system log access for a single VM with the web console
You can configure access to VM guest system logs for a single VM by using the web console. This setting takes precedence over the cluster-wide default configuration.
Procedure
- Click Virtualization → VirtualMachines from the side menu.
- Select a virtual machine to open the VirtualMachine details page.
- Click the Configuration tab.
- Set Guest system log access to on or off.
14.3.3.4. Setting guest system log access for a single VM with the CLI
You can configure access to VM guest system logs for a single VM by editing the VirtualMachine
CR. This setting takes precedence over the cluster-wide default configuration.
Prerequisites
-
You have installed the OpenShift CLI (
oc
).
Procedure
Edit the virtual machine manifest by running the following command:
$ oc edit vm <vm_name>
Update the value of the logSerialConsole field. For example:
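A minimal sketch of the relevant part of the manifest, assuming the logSerialConsole field sits under spec.template.spec.domain.devices in the VirtualMachine CR:
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: <vm_name>
spec:
  template:
    spec:
      domain:
        devices:
          logSerialConsole: true     # 1
        # remaining devices and resources omitted for brevity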
- 1: To enable access to the guest's serial console log, set the logSerialConsole value to true.
Apply the new configuration to the VM by running the following command:
$ oc apply vm <vm_name>
Optional: If you edited a running VM, restart the VM to apply the new configuration. For example:
$ virtctl restart <vm_name> -n <namespace>
14.3.3.5. Viewing guest system logs with the web console
You can view the serial console logs of a virtual machine (VM) guest by using the web console.
Prerequisites
- Guest system log access is enabled.
Procedure
- Click Virtualization → VirtualMachines from the side menu.
- Select a virtual machine to open the VirtualMachine details page.
- Click the Diagnostics tab.
- Click Guest system logs to load the serial console.
14.3.3.6. Viewing guest system logs with the CLI
You can view the serial console logs of a VM guest by running the oc logs
command.
Prerequisites
- Guest system log access is enabled.
-
You have installed the OpenShift CLI (
oc
).
Procedure
View the logs by running the following command, substituting your own values for <namespace> and <vm_name>:
$ oc logs -n <namespace> -l kubevirt.io/domain=<vm_name> --tail=-1 -c guest-console-log
14.3.4. Log aggregation
You can facilitate troubleshooting by aggregating and filtering logs.
14.3.4.1. Viewing aggregated OpenShift Virtualization logs with the LokiStack
You can view aggregated logs for OpenShift Virtualization pods and containers by using the LokiStack in the web console.
Prerequisites
- You deployed the LokiStack.
Procedure
- Navigate to Observe → Logs in the web console.
- Select application, for virt-launcher pod logs, or infrastructure, for OpenShift Virtualization control plane pods and containers, from the log type list.
- Click Show Query to display the query field.
- Enter the LogQL query in the query field and click Run Query to display the filtered logs.
14.3.4.2. OpenShift Virtualization LogQL queries
You can view and filter aggregated logs for OpenShift Virtualization components by running Loki Query Language (LogQL) queries on the Observe → Logs page in the web console.
The default log type is infrastructure. The virt-launcher
log type is application.
Optional: You can include or exclude strings or regular expressions by using line filter expressions.
If the query matches a large number of logs, the query might time out.
Component | LogQL query |
---|---|
All | {log_type=~".+"}|json |kubernetes_labels_app_kubernetes_io_part_of="hyperconverged-cluster" |
 | {log_type=~".+"}|json |kubernetes_labels_app_kubernetes_io_part_of="hyperconverged-cluster" |kubernetes_labels_app_kubernetes_io_component="storage" |
 | {log_type=~".+"}|json |kubernetes_labels_app_kubernetes_io_part_of="hyperconverged-cluster" |kubernetes_labels_app_kubernetes_io_component="deployment" |
 | {log_type=~".+"}|json |kubernetes_labels_app_kubernetes_io_part_of="hyperconverged-cluster" |kubernetes_labels_app_kubernetes_io_component="network" |
 | {log_type=~".+"}|json |kubernetes_labels_app_kubernetes_io_part_of="hyperconverged-cluster" |kubernetes_labels_app_kubernetes_io_component="compute" |
 | {log_type=~".+"}|json |kubernetes_labels_app_kubernetes_io_part_of="hyperconverged-cluster" |kubernetes_labels_app_kubernetes_io_component="schedule" |
Container | {log_type=~".+",kubernetes_container_name=~"<container>|<container>"} |json|kubernetes_labels_app_kubernetes_io_part_of="hyperconverged-cluster" |
 | You must select application from the log type list before running this query. {log_type=~".+", kubernetes_container_name="compute"}|json |!= "custom-ga-command" |
You can filter log lines to include or exclude strings or regular expressions by using line filter expressions.
Line filter expression | Description |
---|---|
|= | Log line contains string |
!= | Log line does not contain string |
|~ | Log line contains regular expression |
!~ | Log line does not contain regular expression |
Example line filter expression
{log_type=~".+"}|json |kubernetes_labels_app_kubernetes_io_part_of="hyperconverged-cluster" |= "error" != "timeout"
14.3.5. Common error messages
The following error messages might appear in OpenShift Virtualization logs:
- ErrImagePull or ImagePullBackOff: Indicates an incorrect deployment configuration or problems with the images that are referenced.
14.3.6. Troubleshooting data volumes
You can check the Conditions
and Events
sections of the DataVolume
object to analyze and resolve issues.
14.3.6.1. About data volume conditions and events
You can diagnose data volume issues by examining the output of the Conditions
and Events
sections generated by the command:
$ oc describe dv <DataVolume>
The Conditions
section displays the following Types
:
- Bound
- Running
- Ready
The Events
section provides the following additional information:
- Type of event
- Reason for logging
- Source of the event
- Message containing additional diagnostic information.
The output from oc describe does not always contain Events.
An event is generated when the Status
, Reason
, or Message
changes. Both conditions and events react to changes in the state of the data volume.
For example, if you misspell the URL during an import operation, the import generates a 404 message. That message change generates an event with a reason. The output in the Conditions
section is updated as well.
14.3.6.2. Analyzing data volume conditions and events
By inspecting the Conditions
and Events
sections generated by the describe
command, you determine the state of the data volume in relation to persistent volume claims (PVCs), and whether or not an operation is actively running or completed. You might also receive messages that offer specific details about the status of the data volume, and how it came to be in its current state.
There are many different combinations of conditions. Each must be evaluated in its unique context.
Examples of various combinations follow.
Bound - A successfully bound PVC displays in this example.
Note that the Type is Bound, so the Status is True. If the PVC is not bound, the Status is False.
When the PVC is bound, an event is generated stating that the PVC is bound. In this case, the Reason is Bound and Status is True. The Message indicates which PVC owns the data volume.
Message, in the Events section, provides further details including how long the PVC has been bound (Age) and by what resource (From), in this case datavolume-controller:
Example output
Running - In this case, note that Type is Running and Status is False, indicating that an event has occurred that caused an attempted operation to fail, changing the Status from True to False.
However, note that Reason is Completed and the Message field indicates Import Complete.
In the Events section, the Reason and Message contain additional troubleshooting information about the failed operation. In this example, the Message displays an inability to connect due to a 404, listed in the Events section's first Warning.
From this information, you conclude that an import operation was running, creating contention for other operations that are attempting to access the data volume:
Example output
Ready - If Type is Ready and Status is True, then the data volume is ready to be used, as in the following example. If the data volume is not ready to be used, the Status is False:
Example output
Chapter 15. Backup and restore
15.1. Backup and restore by using VM snapshots
You can back up and restore virtual machines (VMs) by using snapshots. Snapshots are supported by the following storage providers:
- Any cloud storage provider with the Container Storage Interface (CSI) driver that supports the Kubernetes Volume Snapshot API
To create snapshots of a VM in the Running
state with the highest integrity, install the QEMU guest agent if it is not included with your operating system. The QEMU guest agent is included with the default Red Hat templates.
Online snapshots are supported for virtual machines that have hot plugged virtual disks. However, hot plugged disks that are not in the virtual machine specification are not included in the snapshot.
The QEMU guest agent takes a consistent snapshot by attempting to quiesce the VM file system. This ensures that in-flight I/O is written to the disk before the snapshot is taken. If the guest agent is not present, quiescing is not possible and a best-effort snapshot is taken.
The conditions under which a snapshot is taken are reflected in the snapshot indications that are displayed in the web console or CLI. If these conditions do not meet your requirements, try creating the snapshot again or use an offline snapshot.
15.1.1. About snapshots
A snapshot represents the state and data of a virtual machine (VM) at a specific point in time. You can use a snapshot to restore an existing VM to a previous state (represented by the snapshot) for backup and disaster recovery or to rapidly roll back to a previous development version.
A VM snapshot is created from a VM that is powered off (Stopped state) or powered on (Running state).
When taking a snapshot of a running VM, the controller checks that the QEMU guest agent is installed and running. If so, it freezes the VM file system before taking the snapshot, and thaws the file system after the snapshot is taken.
The snapshot stores a copy of each Container Storage Interface (CSI) volume attached to the VM and a copy of the VM specification and metadata. Snapshots cannot be changed after creation.
You can perform the following snapshot actions:
- Create a new snapshot
Create a clone of a virtual machine from a snapshot
Important: Cloning a VM with a vTPM device attached to it or creating a new VM from its snapshot is not supported.
- List all snapshots attached to a specific VM
- Restore a VM from a snapshot
- Delete an existing VM snapshot
VM snapshot controller and custom resources
The VM snapshot feature introduces three new API objects defined as custom resource definitions (CRDs) for managing snapshots:
- VirtualMachineSnapshot: Represents a user request to create a snapshot. It contains information about the current state of the VM.
- VirtualMachineSnapshotContent: Represents a provisioned resource on the cluster (a snapshot). It is created by the VM snapshot controller and contains references to all resources required to restore the VM.
- VirtualMachineRestore: Represents a user request to restore a VM from a snapshot.
The VM snapshot controller binds a VirtualMachineSnapshotContent
object with the VirtualMachineSnapshot
object for which it was created, with a one-to-one mapping.
15.1.2. About application-consistent snapshots and backups
You can configure application-consistent snapshots and backups for Linux or Windows virtual machines (VMs) through a cycle of freezing and thawing. For any application, you can configure a script on a Linux VM, or register an application with the Volume Shadow Copy Service (VSS) on a Windows VM, so that it is notified when a snapshot or backup is due to begin.
On a Linux VM, freeze and thaw processes trigger automatically when a snapshot is taken or a backup is started by using, for example, a plugin from Velero or another backup vendor. The freeze process, performed by QEMU Guest Agent (QEMU GA) freeze hooks, ensures that before the snapshot or backup of a VM occurs, all of the VM’s filesystems are frozen and each appropriately configured application is informed that a snapshot or backup is about to start. This notification affords each application the opportunity to quiesce its state. Depending on the application, quiescing might involve temporarily refusing new requests, finishing in-progress operations, and flushing data to disk. The operating system is then directed to quiesce the filesystems by flushing outstanding writes to disk and freezing new write activity. All new connection requests are refused. When all applications have become inactive, the QEMU GA freezes the filesystems, and a snapshot is taken or a backup initiated. After the taking of the snapshot or start of the backup, the thawing process begins. Filesystems writing is reactivated and applications receive notification to resume normal operations.
The same cycle of freezing and thawing is available on a Windows VM. Applications register with the Volume Shadow Copy Service (VSS) to receive notifications that they should flush out their data because a backup or snapshot is imminent. Thawing of the applications after the backup or snapshot is complete returns them to an active state. For more details, see the Windows Server documentation about the Volume Shadow Copy Service.
15.1.3. Creating snapshots
You can create snapshots of virtual machines (VMs) by using the Red Hat OpenShift Service on AWS web console or the command line.
15.1.3.1. Creating a snapshot by using the web console
You can create a snapshot of a virtual machine (VM) by using the Red Hat OpenShift Service on AWS web console.
Prerequisites
- The snapshot feature gate is enabled in the YAML configuration of the kubevirt CR.
- The VM snapshot includes disks that meet the following requirements:
- The disks are data volumes or persistent volume claims.
- The disks belong to a storage class that supports Container Storage Interface (CSI) volume snapshots.
- The disks are bound to a persistent volume (PV) and populated with a datasource.
Procedure
- Navigate to Virtualization → VirtualMachines in the web console.
- Select a VM to open the VirtualMachine details page.
Click the Snapshots tab and then click Take Snapshot.
Alternatively, right-click the VM and select Create snapshot from the pop-up menu.
- Enter the snapshot name.
- Expand Disks included in this Snapshot to see the storage volumes to be included in the snapshot.
- If your VM has disks that cannot be included in the snapshot and you wish to proceed, select I am aware of this warning and wish to proceed.
- Click Save.
15.1.3.2. Creating a snapshot by using the CLI
You can create a virtual machine (VM) snapshot for an offline or online VM by creating a VirtualMachineSnapshot
object.
Prerequisites
Ensure the Snapshot feature gate is enabled for the kubevirt CR by using the following command:
$ oc get kubevirt kubevirt-hyperconverged -n openshift-cnv -o yaml
Truncated output
spec:
  developerConfiguration:
    featureGates:
      - Snapshot
Ensure that the VM snapshot includes disks that meet the following requirements:
- The disks are data volumes or persistent volume claims.
- The disks belong to a storage class that supports Container Storage Interface (CSI) volume snapshots.
- The disks are bound to a persistent volume (PV) and populated with a datasource.
- Install the OpenShift CLI (oc).
- Optional: Power down the VM for which you want to create a snapshot.
Procedure
Create a YAML file to define a VirtualMachineSnapshot object that specifies the name of the new VirtualMachineSnapshot and the name of the source VM, as in the following example:
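A minimal sketch of such an object, assuming the snapshot.kubevirt.io/v1beta1 API version; the names are placeholders:
apiVersion: snapshot.kubevirt.io/v1beta1
kind: VirtualMachineSnapshot
metadata:
  name: <snapshot_name>
spec:
  source:
    apiGroup: kubevirt.io
    kind: VirtualMachine
    name: <vm_name>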
Create the VirtualMachineSnapshot object:
$ oc create -f <snapshot_name>.yaml
The snapshot controller creates a VirtualMachineSnapshotContent object, binds it to the VirtualMachineSnapshot, and updates the status and readyToUse fields of the VirtualMachineSnapshot object.
Verification
Optional: During the snapshot creation process, you can use the wait command to monitor the status of the snapshot and wait until it is ready for use. Enter the following command:
$ oc wait <vm_name> <snapshot_name> --for condition=Ready
Verify the status of the snapshot:
- InProgress - The snapshot operation is still in progress.
- Succeeded - The snapshot operation completed successfully.
- Failed - The snapshot operation failed.
Note: Online snapshots have a default time deadline of five minutes (5m). If the snapshot does not complete successfully in five minutes, the status is set to failed. Afterwards, the file system will be thawed and the VM unfrozen, but the status remains failed until you delete the failed snapshot image.
To change the default time deadline, add the FailureDeadline attribute to the VM snapshot spec with the time, in minutes (m) or in seconds (s), that you want to allow before the snapshot operation times out.
To set no deadline, you can specify 0, though this is generally not recommended, as it can result in an unresponsive VM.
If you do not specify a unit of time such as m or s, the default is seconds (s).
Verify that the VirtualMachineSnapshot object is created and bound with VirtualMachineSnapshotContent and that the readyToUse flag is set to true:
$ oc describe vmsnapshot <snapshot_name>
Example output
- 1: The status field of the Progressing condition specifies if the snapshot is still being created.
- 2: The status field of the Ready condition specifies if the snapshot creation process is complete.
- 3: Specifies if the snapshot is ready to be used.
- 4: Specifies that the snapshot is bound to a VirtualMachineSnapshotContent object created by the snapshot controller.
- 5: Specifies additional information about the snapshot, such as whether it is an online snapshot, or whether it was created with QEMU guest agent running.
- 6: Lists the storage volumes that are part of the snapshot, as well as their parameters.
Check the includedVolumes section in the snapshot description to verify that the expected PVCs are included in the snapshot.
15.1.4. Verifying online snapshots by using snapshot indications
Snapshot indications are contextual information about online virtual machine (VM) snapshot operations. Indications are not available for offline virtual machine (VM) snapshot operations. Indications are helpful in describing details about the online snapshot creation.
Prerequisites
- You must have attempted to create an online VM snapshot.
Procedure
Display the output from the snapshot indications by performing one of the following actions:
- Use the command line to view indicator output in the status stanza of the VirtualMachineSnapshot object YAML.
- In the web console, click VirtualMachineSnapshot → Status in the Snapshot details screen.
Verify the status of your online VM snapshot by viewing the values of the status.indications parameter:
- Online indicates that the VM was running during online snapshot creation.
- GuestAgent indicates that the QEMU guest agent was running during online snapshot creation.
- NoGuestAgent indicates that the QEMU guest agent was not running during online snapshot creation. The QEMU guest agent could not be used to freeze and thaw the file system, either because the QEMU guest agent was not installed or running or due to another error.
15.1.5. Restoring virtual machines from snapshots
You can restore virtual machines (VMs) from snapshots by using the Red Hat OpenShift Service on AWS web console or the command line.
15.1.5.1. Restoring a VM from a snapshot by using the web console
You can restore a virtual machine (VM) to a previous configuration represented by a snapshot in the Red Hat OpenShift Service on AWS web console.
Procedure
- Navigate to Virtualization → VirtualMachines in the web console.
- Select a VM to open the VirtualMachine details page.
- If the VM is running, click the Options menu and select Stop to power it down.
- Click the Snapshots tab to view a list of snapshots associated with the VM.
- Select a snapshot to open the Snapshot Details screen.
- Click the Options menu and select Restore VirtualMachine from snapshot.
- Click Restore.
Optional: You can also create a new VM based on the snapshot. To do so:
- In the Options menu of the snapshot, select Create VirtualMachine from Snapshot.
- Provide a name for the new VM.
- Click Create.
15.1.5.2. Restoring a VM from a snapshot by using the CLI
You can restore an existing virtual machine (VM) to a previous configuration by using the command line. You can only restore from an offline VM snapshot.
Prerequisites
-
Install the OpenShift CLI (
oc
). - Power down the VM you want to restore.
Optional: Adjust what happens if the target VM is not fully stopped (ready). To do so, set the targetReadinessPolicy parameter in the vmrestore YAML configuration to one of the following values:
- FailImmediate - The restore process fails immediately if the VM is not ready.
- StopTarget - If the VM is not ready, it gets stopped, and the restore process starts.
- WaitGracePeriod 5 - The restore process waits for a set amount of time, in minutes, for the VM to be ready. This is the default setting, with the default value set to 5 minutes.
- WaitEventually - The restore process waits indefinitely for the VM to be ready.
Procedure
Create a YAML file to define a VirtualMachineRestore object that specifies the name of the VM you want to restore and the name of the snapshot to be used as the source, as in the following example:
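A minimal sketch of such an object, assuming the snapshot.kubevirt.io/v1beta1 API version; the names are placeholders and the optional targetReadinessPolicy field is shown commented out:
apiVersion: snapshot.kubevirt.io/v1beta1
kind: VirtualMachineRestore
metadata:
  name: <vm_restore>
spec:
  target:
    apiGroup: kubevirt.io
    kind: VirtualMachine
    name: <vm_name>
  virtualMachineSnapshotName: <snapshot_name>
  # targetReadinessPolicy: StopTarget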
Create the VirtualMachineRestore object:
$ oc create -f <vm_restore>.yaml
The snapshot controller updates the status fields of the VirtualMachineRestore object and replaces the existing VM configuration with the snapshot content.
Verification
Verify that the VM is restored to the previous state represented by the snapshot and that the complete flag is set to true:
$ oc get vmrestore <vm_restore>
Example output
15.1.6. Deleting snapshots
You can delete snapshots of virtual machines (VMs) by using the Red Hat OpenShift Service on AWS web console or the command line.
15.1.6.1. Deleting a snapshot by using the web console
You can delete an existing virtual machine (VM) snapshot by using the web console.
Procedure
- Navigate to Virtualization → VirtualMachines in the web console.
- Select a VM to open the VirtualMachine details page.
- Click the Snapshots tab to view a list of snapshots associated with the VM.
- Click the Options menu beside a snapshot and select Delete snapshot.
- Click Delete.
15.1.6.2. Deleting a virtual machine snapshot in the CLI
You can delete an existing virtual machine (VM) snapshot by deleting the appropriate VirtualMachineSnapshot
object.
Prerequisites
-
Install the OpenShift CLI (
oc
).
Procedure
Delete the VirtualMachineSnapshot object:
$ oc delete vmsnapshot <snapshot_name>
The snapshot controller deletes the VirtualMachineSnapshot along with the associated VirtualMachineSnapshotContent object.
Verification
Verify that the snapshot is deleted and no longer attached to this VM:
$ oc get vmsnapshot
15.2. Backing up and restoring virtual machines
Red Hat supports using OpenShift Virtualization 4.14 or later with OADP 1.3.x or later.
OADP versions earlier than 1.3.0 are not supported for backup and restore of OpenShift Virtualization.
Back up and restore virtual machines by using the OpenShift API for Data Protection.
You can install the OpenShift API for Data Protection (OADP) with OpenShift Virtualization by installing the OADP Operator and configuring a backup location. You can then install the Data Protection Application.
OpenShift API for Data Protection with OpenShift Virtualization supports the following backup and restore storage options:
- Container Storage Interface (CSI) backups
- Container Storage Interface (CSI) backups with DataMover
The following storage options are excluded:
- File system backup and restore
- Volume snapshot backup and restore
To install the OADP Operator in a restricted network environment, you must first disable the default OperatorHub sources and mirror the Operator catalog.
15.2.1. Installing and configuring OADP with OpenShift Virtualization
As a cluster administrator, you install OADP by installing the OADP Operator.
The latest version of the OADP Operator installs Velero 1.16.
Prerequisites
-
Access to the cluster as a user with the
cluster-admin
role.
Procedure
- Install the OADP Operator according to the instructions for your storage provider.
- Install the Data Protection Application (DPA) with the kubevirt and openshift OADP plugins.
- Back up virtual machines by creating a Backup custom resource (CR), as sketched below.
Warning: Red Hat support is limited to only the following options:
- CSI backups
- CSI backups with DataMover.
- You restore the Backup CR by creating a Restore CR.
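A minimal sketch of a Backup CR and a matching Restore CR, assuming the velero.io/v1 API in the openshift-adp namespace; names and namespaces are placeholders:
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: <vm_backup>
  namespace: openshift-adp
spec:
  includedNamespaces:
    - <vm_namespace>       # namespace that contains the VMs to back up
  snapshotMoveData: true   # optional: use CSI backups with DataMover
---
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: <vm_restore>
  namespace: openshift-adp
spec:
  backupName: <vm_backup>  # name of the Backup CR to restore from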
15.2.2. Installing the Data Protection Application
You install the Data Protection Application (DPA) by creating an instance of the DataProtectionApplication
API.
Prerequisites
- You must install the OADP Operator.
- You must configure object storage as a backup location.
- If you use snapshots to back up PVs, your cloud provider must support either a native snapshot API or Container Storage Interface (CSI) snapshots.
If the backup and snapshot locations use the same credentials, you must create a Secret with the default name, cloud-credentials.
Note: If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. If there is no default Secret, the installation will fail.
Procedure
- Click Operators → Installed Operators and select the OADP Operator.
- Under Provided APIs, click Create instance in the DataProtectionApplication box.
Click YAML View and update the parameters of the DataProtectionApplication manifest:
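A minimal sketch of such a manifest, assuming the oadp.openshift.io/v1alpha1 API and a GCP backup provider for illustration; the numbered comments correspond to the callouts that follow:
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: <dpa_sample>
  namespace: openshift-adp            # 1
spec:
  configuration:
    velero:
      defaultPlugins:
        - kubevirt                    # 2
        - gcp                         # 3
        - csi                         # 4
        - openshift                   # 5
      resourceTimeout: 10m            # 6
    nodeAgent:                        # 7
      enable: true                    # 8
      uploaderType: kopia             # 9
      podConfig:
        nodeSelector: <node_selector> # 10
  backupLocations:
    - velero:
        provider: gcp                 # 11
        default: true
        credential:
          key: cloud
          name: <default_secret>      # 12
        objectStorage:
          bucket: <bucket_name>       # 13
          prefix: <prefix>            # 14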
- 1: The default namespace for OADP is openshift-adp. The namespace is a variable and is configurable.
- 2: The kubevirt plugin is mandatory for OpenShift Virtualization.
- 3: Specify the plugin for the backup provider, for example, gcp, if it exists.
- 4: The csi plugin is mandatory for backing up PVs with CSI snapshots. The csi plugin uses the Velero CSI beta snapshot APIs. You do not need to configure a snapshot location.
- 5: The openshift plugin is mandatory.
- 6: Specify how many minutes to wait for several Velero resources before timeout occurs, such as Velero CRD availability, volumeSnapshot deletion, and backup repository availability. The default is 10m.
- 7: The administrative agent that routes the administrative requests to servers.
- 8: Set this value to true if you want to enable nodeAgent and perform File System Backup.
- 9: Enter kopia as your uploader to use the Built-in DataMover. The nodeAgent deploys a daemon set, which means that the nodeAgent pods run on each working node. You can configure File System Backup by adding spec.defaultVolumesToFsBackup: true to the Backup CR.
- 10: Specify the nodes on which Kopia are available. By default, Kopia runs on all nodes.
- 11: Specify the backup provider.
- 12: Specify the correct default name for the Secret, for example, cloud-credentials-gcp, if you use a default plugin for the backup provider. If specifying a custom name, then the custom name is used for the backup location. If you do not specify a Secret name, the default name is used.
- 13: Specify a bucket as the backup storage location. If the bucket is not a dedicated bucket for Velero backups, you must specify a prefix.
- 14: Specify a prefix for Velero backups, for example, velero, if the bucket is used for multiple purposes.
- Click Create.
Verification
Verify the installation by viewing the OpenShift API for Data Protection (OADP) resources by running the following command:
$ oc get all -n openshift-adp
Example output
Verify that the DataProtectionApplication (DPA) is reconciled by running the following command:
$ oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'
Example output
{"conditions":[{"lastTransitionTime":"2023-10-27T01:23:57Z","message":"Reconcile complete","reason":"Complete","status":"True","type":"Reconciled"}]}
- Verify the type is set to Reconciled.
Verify the backup storage location and confirm that the PHASE is Available by running the following command:
$ oc get backupstoragelocations.velero.io -n openshift-adp
Example output
NAME           PHASE       LAST VALIDATED   AGE     DEFAULT
dpa-sample-1   Available   1s               3d16h   true
Legal Notice
Copyright © 2025 Red Hat
OpenShift documentation is licensed under the Apache License 2.0 (https://www.apache.org/licenses/LICENSE-2.0).
Modified versions must remove all Red Hat trademarks.
Portions adapted from https://github.com/kubernetes-incubator/service-catalog/ with modifications by Red Hat.
Red Hat, Red Hat Enterprise Linux, the Red Hat logo, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation’s permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.