Chapter 6. Virtual machines
6.1. Creating VMs from Red Hat images
6.1.1. Creating virtual machines from Red Hat images overview
Red Hat images are golden images. They are published as container disks in a secure registry. The Containerized Data Importer (CDI) polls and imports the container disks into your cluster and stores them in the `openshift-virtualization-os-images` project as snapshots or persistent volume claims (PVCs).
Red Hat images are automatically updated. You can disable and re-enable automatic updates for these images. See Managing Red Hat boot source updates.
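For example, automatic updates are governed by the `HyperConverged` custom resource; a minimal sketch of disabling them with a merge patch, assuming the `enableCommonBootImageImport` field in your version of the HCO API (set it back to `true` to re-enable):

```terminal
$ oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv \
  --type merge -p '{"spec":{"enableCommonBootImageImport":false}}'
```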
Cluster administrators can enable automatic subscription for Red Hat Enterprise Linux (RHEL) virtual machines in the OpenShift Virtualization web console.
You can create virtual machines (VMs) from operating system images provided by Red Hat by using one of the following methods:
- Creating VMs from templates.
- Creating VMs from instance types.
- Creating VMs from the command line.

Do not create VMs in the default `openshift-*` namespaces. Instead, create a new namespace or use an existing namespace without the `openshift` prefix.
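For example, the following command creates a dedicated namespace for VM workloads; the namespace name is illustrative:

```terminal
$ oc create namespace vm-workloads
```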
6.1.1.1. About golden images
A golden image is a preconfigured snapshot of a virtual machine (VM) that you can use as a resource to deploy new VMs. For example, you can use golden images to provision the same system environment consistently and deploy systems more quickly and efficiently.
6.1.1.1.1. How do golden images work?
Golden images are created by installing and configuring an operating system and software applications on a reference machine or virtual machine. This includes setting up the system, installing required drivers, applying patches and updates, and configuring specific options and preferences.
After the golden image is created, it is saved as a template or image file that can be replicated and deployed across multiple clusters. The golden image can be updated by its maintainer periodically to incorporate necessary software updates and patches, ensuring that the image remains up to date and secure, and newly created VMs are based on this updated image.
6.1.1.1.2. Red Hat implementation of golden images
Red Hat publishes golden images as container disks in the registry for versions of Red Hat Enterprise Linux (RHEL). Container disks are virtual machine images that are stored as a container image in a container image registry. Any published image is automatically made available in connected clusters after OpenShift Virtualization is installed. After the images are available in a cluster, they are ready to use to create VMs.
6.1.1.2. About VM boot sources
Virtual machines (VMs) consist of a VM definition and one or more disks that are backed by data volumes. VM templates enable you to create VMs using predefined specifications.
Every template requires a boot source, which is a fully configured disk image including configured drivers. Each template contains a VM definition with a pointer to the boot source. Each boot source has a predefined name and namespace. For some operating systems, a boot source is automatically provided. If it is not provided, then an administrator must prepare a custom boot source.
Provided boot sources are updated automatically to the latest version of the operating system. For auto-updated boot sources, persistent volume claims (PVCs) and volume snapshots are created with the cluster’s default storage class. If you select a different default storage class after configuration, you must delete the existing boot sources in the cluster namespace that are configured with the previous default storage class.
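To review the boot sources that currently exist, and the storage class each one uses, you can query the images namespace; the `volumesnapshot` resource is only present if the CSI snapshot CRDs are installed:

```terminal
$ oc get pvc -n openshift-virtualization-os-images
$ oc get volumesnapshot -n openshift-virtualization-os-images
```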
6.1.2. Creating virtual machines from templates
You can create virtual machines (VMs) from Red Hat templates by using the Red Hat OpenShift Service on AWS web console.
6.1.2.1. About VM templates
- Boot sources
  You can expedite VM creation by using templates that have an available boot source. Templates with a boot source are labeled Available boot source if they do not have a custom label. Templates without a boot source are labeled Boot source required. See Creating virtual machines from custom images.
- Customization
  You can customize the disk source and VM parameters before you start the VM. See storage volume types and storage fields for details about disk source settings.
  If you copy a VM template with all its labels and annotations, your version of the template is marked as deprecated when a new version of the Scheduling, Scale, and Performance (SSP) Operator is deployed. You can remove this designation. See Customizing a VM template by using the web console.
- Single-node OpenShift
  Due to differences in storage behavior, some templates are incompatible with single-node OpenShift. To ensure compatibility, do not set the `evictionStrategy` field for templates or VMs that use data volumes or storage profiles.
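For reference, the field in question lives at the template level of a `VirtualMachine` manifest; the sketch below only shows where it would sit, and on single-node OpenShift it should simply be left unset:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
spec:
  template:
    spec:
      # evictionStrategy: LiveMigrate  # omit on single-node OpenShift
      domain:
        devices: {}
```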
6.1.2.2. Creating a VM from a template
You can create a virtual machine (VM) from a template with an available boot source by using the Red Hat OpenShift Service on AWS web console.
Optional: You can customize template or VM parameters, such as data sources, cloud-init, or SSH keys, before you start the VM.
Procedure
- Navigate to Virtualization → Catalog in the web console. Click Boot source available to filter templates with boot sources.
  The catalog displays the default templates. Click All Items to view all available templates for your filters.
- Click a template tile to view its details.
- Optional: If you are using a Windows template, you can mount a Windows driver disk by selecting the Mount Windows drivers disk checkbox.
- If you do not need to customize the template or VM parameters, click Quick create VirtualMachine to create a VM from the template.
- If you need to customize the template or VM parameters, do the following:
  - Click Customize VirtualMachine.
  - Expand Storage or Optional parameters to edit data source settings.
  - Click Customize VirtualMachine parameters.
    The Customize and create VirtualMachine pane displays the Overview, YAML, Scheduling, Environment, Network interfaces, Disks, Scripts, and Metadata tabs.
  - Edit the parameters that must be set before the VM boots, such as cloud-init or a static SSH key.
  - Click Create VirtualMachine.
    The VirtualMachine details page displays the provisioning status.
6.1.2.2.1. Storage volume types
Type | Description |
---|---|
`ephemeral` | A local copy-on-write (COW) image that uses a network volume as a read-only backing store. The backing volume must be a PersistentVolumeClaim. The ephemeral image is created when the virtual machine starts and stores all writes locally. The ephemeral image is discarded when the virtual machine is stopped, restarted, or deleted. The backing volume (PVC) is not mutated in any way. |
`persistentVolumeClaim` | Attaches an available PV to a virtual machine. Attaching a PV allows for the virtual machine data to persist between sessions. Importing an existing virtual machine disk into a PVC by using CDI and attaching the PVC to a virtual machine instance is the recommended method for importing existing virtual machines into Red Hat OpenShift Service on AWS. There are some requirements for the disk to be used within a PVC. |
`dataVolume` | Data volumes build on the `persistentVolumeClaim` disk type by managing the process of preparing the virtual machine disk through an import, clone, or upload operation. VMs that use this volume type do not start until the volume is ready. Specify `type: dataVolume` or `type: ""`. If you specify `type` as `""`, it defaults to `dataVolume`. |
`cloudInitNoCloud` | Attaches a disk that contains the referenced cloud-init NoCloud data source, providing user data and metadata to the virtual machine. A cloud-init installation is required inside the virtual machine disk. |
`containerDisk` | References an image, such as a virtual machine disk, that is stored in the container image registry. The image is pulled from the registry and attached to the virtual machine as a disk when the virtual machine is launched. A `containerDisk` volume is not limited to a single virtual machine and is useful for creating large numbers of virtual machine clones that do not require persistent storage. Only RAW and QCOW2 formats are supported disk types for the container image registry. QCOW2 is recommended for reduced image size. Note: A `containerDisk` volume is ephemeral. It is discarded when the virtual machine is stopped, restarted, or deleted. A `containerDisk` volume is useful for read-only file systems such as CD-ROMs or for disposable virtual machines. |
`emptyDisk` | Creates an additional sparse QCOW2 disk that is tied to the life-cycle of the virtual machine interface. The data survives guest-initiated reboots in the virtual machine but is discarded when the virtual machine stops or is restarted from the web console. The empty disk is used to store application dependencies and data that otherwise exceeds the limited temporary file system of an ephemeral disk. The disk capacity size must also be provided. |
6.1.2.2.2. Storage fields
Field | Description |
---|---|
Blank (creates PVC) | Create an empty disk. |
Import via URL (creates PVC) | Import content via URL (HTTP or HTTPS endpoint). |
Use an existing PVC | Use a PVC that is already available in the cluster. |
Clone existing PVC (creates PVC) | Select an existing PVC available in the cluster and clone it. |
Import via Registry (creates PVC) | Import content via container registry. |
Name | Name of the disk. The name can contain lowercase letters (a-z), numbers (0-9), hyphens (-), and periods (.), up to a maximum of 253 characters. The first and last characters must be alphanumeric. The name must not contain uppercase letters, spaces, or special characters. |
Size | Size of the disk in GiB. |
Type | Type of disk. Example: Disk or CD-ROM |
Interface | Type of disk device. Supported interfaces are virtIO, SATA, and SCSI. |
Storage Class | The storage class that is used to create the disk. |
Advanced storage settings
The following advanced storage settings are optional and available for Blank, Import via URL, and Clone existing PVC disks.
If you do not specify these parameters, the system uses the default storage profile values.
Parameter | Option | Parameter description |
---|---|---|
Volume Mode | Filesystem | Stores the virtual disk on a file system-based volume. |
Volume Mode | Block | Stores the virtual disk directly on the block volume. Only use Block if the underlying storage supports it. |
Access Mode | ReadWriteOnce (RWO) | Volume can be mounted as read-write by a single node. |
Access Mode | ReadWriteMany (RWX) | Volume can be mounted as read-write by many nodes at one time. Note: This mode is required for live migration. |
6.1.2.2.3. Customizing a VM template by using the web console
You can customize an existing virtual machine (VM) template by modifying the VM or template parameters, such as data sources, cloud-init, or SSH keys, before you start the VM. If you customize a template by copying it and including all of its labels and annotations, the customized template is marked as deprecated when a new version of the Scheduling, Scale, and Performance (SSP) Operator is deployed.
You can remove the deprecated designation from the customized template.
Procedure
- Navigate to Virtualization → Templates in the web console.
- From the list of VM templates, click the template marked as deprecated.
- Click the pencil icon beside Labels.
- Remove the following two labels:
  - `template.kubevirt.io/type: "base"`
  - `template.kubevirt.io/version: "version"`
- Click Save.
- Click the pencil icon beside the number of existing Annotations.
- Remove the following annotation:
  - `template.kubevirt.io/deprecated`
- Click Save.
6.1.3. Creating virtual machines from instance types
You can simplify virtual machine (VM) creation by using instance types, whether you use the Red Hat OpenShift Service on AWS web console or the CLI to create VMs.
Creating a VM from an instance type in OpenShift Virtualization 4.15 and higher is supported on Red Hat OpenShift Service on AWS clusters. In OpenShift Virtualization 4.14, creating a VM from an instance type is a Technology Preview feature and is not supported on Red Hat OpenShift Service on AWS clusters.
6.1.3.1. About instance types
An instance type is a reusable object where you can define resources and characteristics to apply to new VMs. You can define custom instance types or use the variety of instance types that are included when you install OpenShift Virtualization.

To create a new instance type, you must first create a manifest, either manually or by using the `virtctl` CLI tool. You then create the instance type object by applying the manifest to your cluster.
OpenShift Virtualization provides two CRDs for configuring instance types:
- A namespaced object: `VirtualMachineInstancetype`
- A cluster-wide object: `VirtualMachineClusterInstancetype`

These objects use the same `VirtualMachineInstancetypeSpec`.
6.1.3.1.1. Required attributes
When you configure an instance type, you must define the `cpu` and `memory` attributes. Other attributes are optional.
When you create a VM from an instance type, you cannot override any parameters defined in the instance type.
Because instance types require defined CPU and memory attributes, OpenShift Virtualization always rejects additional requests for these resources when creating a VM from an instance type.
You can manually create an instance type manifest. For example:
Example YAML file with required fields
```yaml
apiVersion: instancetype.kubevirt.io/v1beta1
kind: VirtualMachineInstancetype
metadata:
  name: example-instancetype
spec:
  cpu:
    guest: 1 # 1
  memory:
    guest: 128Mi # 2
```

- 1: Required. Specifies the number of vCPUs to allocate to the guest.
- 2: Required. Specifies an amount of memory to allocate to the guest.
You can create an instance type manifest by using the `virtctl` CLI utility. For example:

Example `virtctl` command with required fields
$ virtctl create instancetype --cpu 2 --memory 256Mi
where:
--cpu <value>
- Specifies the number of vCPUs to allocate to the guest. Required.
--memory <value>
- Specifies an amount of memory to allocate to the guest. Required.
You can immediately create the object from the new manifest by running the following command:
$ virtctl create instancetype --cpu 2 --memory 256Mi | oc apply -f -
6.1.3.1.2. Optional attributes
In addition to the required `cpu` and `memory` attributes, you can include the following optional attributes in the `VirtualMachineInstancetypeSpec`:
annotations
- List annotations to apply to the VM.
gpus
- List vGPUs for passthrough.
hostDevices
- List host devices for passthrough.
ioThreadsPolicy
- Define an IO threads policy for managing dedicated disk access.
launchSecurity
- Configure Secure Encrypted Virtualization (SEV).
nodeSelector
- Specify node selectors to control the nodes where this VM is scheduled.
schedulerName
- Define a custom scheduler to use for this VM instead of the default scheduler.
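The following sketch shows a few of these optional attributes on a namespaced instance type; the annotation key and node selector value are illustrative placeholders:

```yaml
apiVersion: instancetype.kubevirt.io/v1beta1
kind: VirtualMachineInstancetype
metadata:
  name: example-instancetype-optional
spec:
  cpu:
    guest: 2
  memory:
    guest: 2Gi
  annotations:
    example.com/team: demo             # applied to VMs created from this instance type
  nodeSelector:
    node-role.kubernetes.io/worker: "" # schedule only on worker nodes
  ioThreadsPolicy: auto                # IO threads policy for dedicated disk access
```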
6.1.3.2. Pre-defined instance types
OpenShift Virtualization includes a set of pre-defined instance types called `common-instancetypes`. Some are specialized for specific workloads and others are workload-agnostic.

These instance type resources are named according to their series, version, and size. The size value follows the `.` delimiter and ranges from `nano` to `8xlarge`.
Use case | Series | Characteristics | vCPU to memory ratio | Example resource |
---|---|---|---|---|
Universal | U | Burstable CPU performance | 1:4 | `u1.medium` (1 vCPU, 4Gi memory) |
Overcommitted | O | Overcommitted memory, burstable CPU performance | 1:4 | `o1.small` (1 vCPU, 2Gi memory) |
Compute-exclusive | CX | Hugepages, dedicated CPU, isolated emulator threads, vNUMA | 1:2 | `cx1.2xlarge` (8 vCPUs, 16Gi memory) |
NVIDIA GPU | GN | Predefined GPUs for VMs that use GPUs provided by the NVIDIA GPU Operator, burstable CPU performance | 1:4 | `gn1.8xlarge` (32 vCPUs, 128Gi memory) |
Memory-intensive | M | Hugepages, burstable CPU performance | 1:8 | `m1.large` (2 vCPUs, 16Gi memory) |
Network-intensive | N | Hugepages, dedicated CPU, isolated emulator threads, requires nodes capable of running DPDK workloads | 1:2 | `n1.medium` (4 vCPUs, 4Gi memory) |
6.1.3.3. Creating manifests by using the virtctl tool
You can use the `virtctl` CLI utility to simplify creating manifests for VMs, VM instance types, and VM preferences. For more information, see VM manifest creation commands.

If you have a `VirtualMachine` manifest, you can create a VM from the command line.
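As an illustrative sketch, recent `virtctl` versions also provide a `create vm` subcommand that emits a `VirtualMachine` manifest referencing an instance type and preference; the VM name is a placeholder, and flag availability depends on your `virtctl` version:

```terminal
$ virtctl create vm --name example-vm --instancetype u1.medium --preference rhel.9 | oc apply -f -
```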
6.1.3.4. Creating a VM from an instance type by using the web console
You can create a virtual machine (VM) from an instance type by using the Red Hat OpenShift Service on AWS web console. You can also use the web console to create a VM by copying an existing snapshot or to clone a VM.
You can create a VM from a list of available bootable volumes. You can add Linux- or Windows-based volumes to the list.
Procedure
- In the web console, navigate to Virtualization → Catalog. The InstanceTypes tab opens by default.
- Select either of the following options:
  - Select a suitable bootable volume from the list. If the list is truncated, click the Show all button to display the entire list.
    Note: The bootable volume table lists only those volumes in the `openshift-virtualization-os-images` namespace that have the `instancetype.kubevirt.io/default-preference` label.
    Optional: Click the star icon to designate a bootable volume as a favorite. Starred bootable volumes appear first in the volume list.
  - Click Add volume to upload a new volume or to use an existing persistent volume claim (PVC), a volume snapshot, or a `containerDisk` volume. Click Save.
    Logos of operating systems that are not available in the cluster are shown at the bottom of the list. You can add a volume for the required operating system by clicking the Add volume link.
    In addition, there is a link to the Create a Windows boot source quick start. The same link appears in a popover if you hover the pointer over the question mark icon next to the Select volume to boot from line.
    Immediately after you install the environment or when the environment is disconnected, the list of volumes to boot from is empty. In that case, three operating system logos are displayed: Windows, RHEL, and Linux. You can add a new volume that meets your requirements by clicking the Add volume button.
- Click an instance type tile and select the resource size appropriate for your workload.
Optional: Choose the virtual machine details, including the VM’s name, that apply to the volume you are booting from:
For a Linux-based volume, follow these steps to configure SSH:
- If you have not already added a public SSH key to your project, click the edit icon beside Authorized SSH key in the VirtualMachine details section.
Select one of the following options:
- Use existing: Select a secret from the secrets list.
Add new: Follow these steps:
- Browse to the public SSH key file or paste the file in the key field.
- Enter the secret name.
- Optional: Select Automatically apply this key to any new VirtualMachine you create in this project.
- Click Save.
For a Windows volume, follow either of these sets of steps to configure sysprep options:
If you have not already added sysprep options for the Windows volume, follow these steps:
- Click the edit icon beside Sysprep in the VirtualMachine details section.
- Add the Autoattend.xml answer file.
- Add the Unattend.xml answer file.
- Click Save.
If you want to use existing sysprep options for the Windows volume, follow these steps:
- Click Attach existing sysprep.
- Enter the name of the existing sysprep Unattend.xml answer file.
- Click Save.
Optional: If you are creating a Windows VM, you can mount a Windows driver disk:
- Click the Customize VirtualMachine button.
- On the VirtualMachine details page, click Storage.
- Select the Mount Windows drivers disk checkbox.
- Optional: Click View YAML & CLI to view the YAML file. Click CLI to view the CLI commands. You can also download or copy either the YAML file contents or the CLI commands.
- Click Create VirtualMachine.
After the VM is created, you can monitor the status on the VirtualMachine details page.
6.1.4. Creating virtual machines from the command line
You can create virtual machines (VMs) from the command line by editing or creating a `VirtualMachine` manifest. You can simplify VM configuration by using an instance type in your VM manifest.
You can also create VMs from instance types by using the web console.
6.1.4.1. Creating manifests by using the virtctl tool
You can use the virtctl
CLI utility to simplify creating manifests for VMs, VM instance types, and VM preferences. For more information, see VM manifest creation commands.
6.1.4.2. Creating a VM from a VirtualMachine manifest
You can create a virtual machine (VM) from a `VirtualMachine` manifest.
Procedure
- Edit the `VirtualMachine` manifest for your VM. The following example configures a Red Hat Enterprise Linux (RHEL) VM:
  Note: This example manifest does not configure VM authentication.

Example manifest for a RHEL VM

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: rhel-9-minimal
spec:
  dataVolumeTemplates:
    - metadata:
        name: rhel-9-minimal-volume
      spec:
        sourceRef:
          kind: DataSource
          name: rhel9 # 1
          namespace: openshift-virtualization-os-images # 2
        storage: {}
  instancetype:
    name: u1.medium # 3
  preference:
    name: rhel.9 # 4
  running: true
  template:
    spec:
      domain:
        devices: {}
      volumes:
        - dataVolume:
            name: rhel-9-minimal-volume
          name: rootdisk
```
- 1: The `rhel9` golden image is used to install RHEL 9 as the guest operating system.
- 2: Golden images are stored in the `openshift-virtualization-os-images` namespace.
- 3: The `u1.medium` instance type requests 1 vCPU and 4Gi memory for the VM. These resource values cannot be overridden within the VM.
- 4: The `rhel.9` preference specifies additional attributes that support the RHEL 9 guest operating system.
Create a virtual machine by using the manifest file:
$ oc create -f <vm_manifest_file>.yaml
Optional: Start the virtual machine:
$ virtctl start <vm_name> -n <namespace>
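To confirm that the VM is running, you can check for its VirtualMachineInstance with a standard `oc` query:

```terminal
$ oc get vmi rhel-9-minimal -n <namespace>
```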
6.2. Creating VMs from custom images
6.2.1. Creating virtual machines from custom images overview
You can create virtual machines (VMs) from custom operating system images by using one of the following methods:
- Importing the image as a container disk from a registry.
  Optional: You can enable auto updates for your container disks. See Managing automatic boot source updates for details.
- Importing the image from a web page.
- Uploading the image from a local machine.
- Cloning a persistent volume claim (PVC) that contains the image.
The Containerized Data Importer (CDI) imports the image into a PVC by using a data volume. You add the PVC to the VM by using the Red Hat OpenShift Service on AWS web console or command line.
You must install the QEMU guest agent on VMs created from operating system images that are not provided by Red Hat.
You must also install VirtIO drivers on Windows VMs.
The QEMU guest agent is included with Red Hat images.
6.2.2. Creating VMs by using container disks
You can create virtual machines (VMs) by using container disks built from operating system images.
You can enable auto updates for your container disks. See Managing automatic boot source updates for details.
If the container disks are large, the I/O traffic might increase and cause worker nodes to be unavailable. You can prune `DeploymentConfig` objects to resolve this issue.
You create a VM from a container disk by performing the following steps:
- Build an operating system image into a container disk and upload it to your container registry.
- If your container registry does not have TLS, configure your environment to disable TLS for your registry.
- Create a VM with the container disk as the disk source by using the web console or the command line.
You must install the QEMU guest agent on VMs created from operating system images that are not provided by Red Hat.
6.2.2.1. Building and uploading a container disk
You can build a virtual machine (VM) image into a container disk and upload it to a registry.
The size of a container disk is limited by the maximum layer size of the registry where the container disk is hosted.
For Red Hat Quay, you can change the maximum layer size by editing the YAML configuration file that is created when Red Hat Quay is first deployed.
Prerequisites
- You must have `podman` installed.
- You must have a QCOW2 or RAW image file.
Procedure
- Create a Dockerfile to build the VM image into a container image. The VM image must be owned by QEMU, which has a UID of `107`, and placed in the `/disk/` directory inside the container. Permissions for the `/disk/` directory must then be set to `0440`.
  The following example uses the Red Hat Universal Base Image (UBI) to handle these configuration changes in the first stage, and uses the minimal `scratch` image in the second stage to store the result:

```terminal
$ cat > Dockerfile << EOF
FROM registry.access.redhat.com/ubi8/ubi:latest AS builder
ADD --chown=107:107 <vm_image>.qcow2 /disk/ 1
RUN chmod 0440 /disk/*

FROM scratch
COPY --from=builder /disk/* /disk/
EOF
```
- 1: Where `<vm_image>` is the image in either QCOW2 or RAW format. If you use a remote image, replace `<vm_image>.qcow2` with the complete URL.
Build and tag the container:
$ podman build -t <registry>/<container_disk_name>:latest .
Push the container image to the registry:
$ podman push <registry>/<container_disk_name>:latest
6.2.2.2. Disabling TLS for a container registry
You can disable TLS (transport layer security) for one or more container registries by editing the `insecureRegistries` field of the `HyperConverged` custom resource.
Procedure
- Open the `HyperConverged` CR in your default editor by running the following command:

$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
- Add a list of insecure registries to the `spec.storageImport.insecureRegistries` field.

Example `HyperConverged` custom resource

```yaml
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  storageImport:
    insecureRegistries: # 1
      - "private-registry-example-1:5000"
      - "private-registry-example-2:5000"
```
- 1: Replace the examples in this list with valid registry hostnames.
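If you prefer a non-interactive change, the same field can be set with a merge patch; note that this replaces the entire `insecureRegistries` list, and the hostnames are the same placeholders as in the example above:

```terminal
$ oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type merge \
  -p '{"spec":{"storageImport":{"insecureRegistries":["private-registry-example-1:5000","private-registry-example-2:5000"]}}}'
```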
6.2.2.3. Creating a VM from a container disk by using the web console
You can create a virtual machine (VM) by importing a container disk from a container registry by using the Red Hat OpenShift Service on AWS web console.
Procedure
- Navigate to Virtualization → Catalog in the web console.
- Click a template tile without an available boot source.
- Click Customize VirtualMachine.
- On the Customize template parameters page, expand Storage and select Registry (creates PVC) from the Disk source list.
- Enter the container image URL. Example: https://mirror.arizona.edu/fedora/linux/releases/38/Cloud/x86_64/images/Fedora-Cloud-Base-38-1.6.x86_64.qcow2
- Set the disk size.
- Click Next.
- Click Create VirtualMachine.
6.2.2.4. Creating a VM from a container disk by using the command line
You can create a virtual machine (VM) from a container disk by using the command line.
When the virtual machine (VM) is created, the data volume with the container disk is imported into persistent storage.
Prerequisites
- You must have access credentials for the container registry that contains the container disk.
Procedure
- If the container registry requires authentication, create a `Secret` manifest, specifying the credentials, and save it as a `data-source-secret.yaml` file:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: data-source-secret
  labels:
    app: containerized-data-importer
type: Opaque
data:
  accessKeyId: "" # 1
  secretKey: "" # 2
```

- 1: Specify the key ID or user name, encoded in Base64 format.
- 2: Specify the secret key or password, encoded in Base64 format.
- Apply the `Secret` manifest by running the following command:

$ oc apply -f data-source-secret.yaml
- If the VM must communicate with servers that use self-signed certificates or certificates that are not signed by the system CA bundle, create a config map in the same namespace as the VM:

$ oc create configmap tls-certs --from-file=</path/to/file/ca.pem>

where `tls-certs` is the config map name and `</path/to/file/ca.pem>` is the path to the CA certificate.
- Edit the `VirtualMachine` manifest and save it as a `vm-fedora-datavolume.yaml` file:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  creationTimestamp: null
  labels:
    kubevirt.io/vm: vm-fedora-datavolume
  name: vm-fedora-datavolume # 1
spec:
  dataVolumeTemplates:
    - metadata:
        creationTimestamp: null
        name: fedora-dv # 2
      spec:
        storage:
          resources:
            requests:
              storage: 10Gi # 3
          storageClassName: <storage_class> # 4
        source:
          registry:
            url: "docker://kubevirt/fedora-cloud-container-disk-demo:latest" # 5
            secretRef: data-source-secret # 6
            certConfigMap: tls-certs # 7
      status: {}
  running: true
  template:
    metadata:
      creationTimestamp: null
      labels:
        kubevirt.io/vm: vm-fedora-datavolume
    spec:
      domain:
        devices:
          disks:
            - disk:
                bus: virtio
              name: datavolumedisk1
        machine:
          type: ""
        resources:
          requests:
            memory: 1.5Gi
      terminationGracePeriodSeconds: 180
      volumes:
        - dataVolume:
            name: fedora-dv
          name: datavolumedisk1
status: {}
```
- 1: Specify the name of the VM.
- 2: Specify the name of the data volume.
- 3: Specify the size of the storage requested for the data volume.
- 4: Optional: If you do not specify a storage class, the default storage class is used.
- 5: Specify the URL of the container registry.
- 6: Optional: Specify the secret name if you created a secret for the container registry access credentials.
- 7: Optional: Specify a CA certificate config map.
Create the VM by running the following command:
$ oc create -f vm-fedora-datavolume.yaml
The `oc create` command creates the data volume and the VM. The CDI controller creates an underlying PVC with the correct annotation and the import process begins. When the import is complete, the data volume status changes to `Succeeded`. You can start the VM.
Data volume provisioning happens in the background, so there is no need to monitor the process.
Verification
The importer pod downloads the container disk from the specified URL and stores it on the provisioned persistent volume. View the status of the importer pod by running the following command:
$ oc get pods
- Monitor the data volume until its status is `Succeeded` by running the following command:

$ oc describe dv fedora-dv 1

- 1: Specify the data volume name that you defined in the `VirtualMachine` manifest.
Verify that provisioning is complete and that the VM has started by accessing its serial console:
$ virtctl console vm-fedora-datavolume
6.2.3. Creating VMs by importing images from web pages
You can create virtual machines (VMs) by importing operating system images from web pages.
You must install the QEMU guest agent on VMs created from operating system images that are not provided by Red Hat.
6.2.3.1. Creating a VM from an image on a web page by using the web console
You can create a virtual machine (VM) by importing an image from a web page by using the Red Hat OpenShift Service on AWS web console.
Prerequisites
- You must have access to the web page that contains the image.
Procedure
- Navigate to Virtualization → Catalog in the web console.
- Click a template tile without an available boot source.
- Click Customize VirtualMachine.
- On the Customize template parameters page, expand Storage and select URL (creates PVC) from the Disk source list.
- Enter the image URL. Example: https://mirror.arizona.edu/fedora/linux/releases/38/Cloud/x86_64/images/Fedora-Cloud-Base-38-1.6.x86_64.qcow2
- Set the disk size.
- Click Next.
- Click Create VirtualMachine.
6.2.3.2. Creating a VM from an image on a web page by using the command line
You can create a virtual machine (VM) from an image on a web page by using the command line.
When the virtual machine (VM) is created, the data volume with the image is imported into persistent storage.
Prerequisites
- You must have access credentials for the web page that contains the image.
Procedure
- If the web page requires authentication, create a `Secret` manifest, specifying the credentials, and save it as a `data-source-secret.yaml` file:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: data-source-secret
  labels:
    app: containerized-data-importer
type: Opaque
data:
  accessKeyId: "" # 1
  secretKey: "" # 2
```

- 1: Specify the key ID or user name, encoded in Base64 format.
- 2: Specify the secret key or password, encoded in Base64 format.
- Apply the `Secret` manifest by running the following command:

$ oc apply -f data-source-secret.yaml
- If the VM must communicate with servers that use self-signed certificates or certificates that are not signed by the system CA bundle, create a config map in the same namespace as the VM:

$ oc create configmap tls-certs --from-file=</path/to/file/ca.pem>

where `tls-certs` is the config map name and `</path/to/file/ca.pem>` is the path to the CA certificate.
- Edit the `VirtualMachine` manifest and save it as a `vm-fedora-datavolume.yaml` file:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  creationTimestamp: null
  labels:
    kubevirt.io/vm: vm-fedora-datavolume
  name: vm-fedora-datavolume # 1
spec:
  dataVolumeTemplates:
    - metadata:
        creationTimestamp: null
        name: fedora-dv # 2
      spec:
        storage:
          resources:
            requests:
              storage: 10Gi # 3
          storageClassName: <storage_class> # 4
        source:
          http:
            url: "https://mirror.arizona.edu/fedora/linux/releases/35/Cloud/x86_64/images/Fedora-Cloud-Base-35-1.2.x86_64.qcow2" # 5
            secretRef: data-source-secret # 6
            certConfigMap: tls-certs # 7
      status: {}
  running: true
  template:
    metadata:
      creationTimestamp: null
      labels:
        kubevirt.io/vm: vm-fedora-datavolume
    spec:
      domain:
        devices:
          disks:
            - disk:
                bus: virtio
              name: datavolumedisk1
        machine:
          type: ""
        resources:
          requests:
            memory: 1.5Gi
      terminationGracePeriodSeconds: 180
      volumes:
        - dataVolume:
            name: fedora-dv
          name: datavolumedisk1
status: {}
```
- 1: Specify the name of the VM.
- 2: Specify the name of the data volume.
- 3: Specify the size of the storage requested for the data volume.
- 4: Optional: If you do not specify a storage class, the default storage class is used.
- 5: Specify the URL of the web page that serves the image.
- 6: Optional: Specify the secret name if you created a secret for the web page access credentials.
- 7: Optional: Specify a CA certificate config map.
Create the VM by running the following command:
$ oc create -f vm-fedora-datavolume.yaml
The `oc create` command creates the data volume and the VM. The CDI controller creates an underlying PVC with the correct annotation and the import process begins. When the import is complete, the data volume status changes to `Succeeded`. You can start the VM.
Data volume provisioning happens in the background, so there is no need to monitor the process.
Verification
The importer pod downloads the image from the specified URL and stores it on the provisioned persistent volume. View the status of the importer pod by running the following command:
$ oc get pods
- Monitor the data volume until its status is `Succeeded` by running the following command:

$ oc describe dv fedora-dv 1

- 1: Specify the data volume name that you defined in the `VirtualMachine` manifest.
Verify that provisioning is complete and that the VM has started by accessing its serial console:
$ virtctl console vm-fedora-datavolume
6.2.4. Creating VMs by uploading images
You can create virtual machines (VMs) by uploading operating system images from your local machine.
You can create a Windows VM by uploading a Windows image to a PVC. Then you clone the PVC when you create the VM.
You must install the QEMU guest agent on VMs created from operating system images that are not provided by Red Hat.
You must also install VirtIO drivers on Windows VMs.
6.2.4.1. Creating a VM from an uploaded image by using the web console
You can create a virtual machine (VM) from an uploaded operating system image by using the Red Hat OpenShift Service on AWS web console.
Prerequisites
- You must have an IMG, ISO, or QCOW2 image file.
Procedure
- Navigate to Virtualization → Catalog in the web console.
- Click a template tile without an available boot source.
- Click Customize VirtualMachine.
- On the Customize template parameters page, expand Storage and select Upload (Upload a new file to a PVC) from the Disk source list.
- Browse to the image on your local machine and set the disk size.
- Click Customize VirtualMachine.
- Click Create VirtualMachine.
6.2.4.1.1. Generalizing a VM image
You can generalize a Red Hat Enterprise Linux (RHEL) image to remove all system-specific configuration data before you use the image to create a golden image, a preconfigured snapshot of a virtual machine (VM). You can use a golden image to deploy new VMs.
You can generalize a RHEL VM by using the `virtctl`, `guestfs`, and `virt-sysprep` tools.
Prerequisites
- You have a RHEL virtual machine (VM) to use as a base VM.
- You have installed the OpenShift CLI (`oc`).
- You have installed the `virtctl` tool.
Procedure
- If the RHEL VM is running, stop it by entering the following command:

$ virtctl stop <my_vm_name>
- Optional: Clone the virtual machine to avoid losing the data from your original VM. You can then generalize the cloned VM.
- Retrieve the `dataVolume` that stores the root filesystem for the VM by running the following command:

$ oc get vm <my_vm_name> -o jsonpath="{.spec.template.spec.volumes}{'\n'}"
Example output
[{"dataVolume":{"name":"<my_vm_volume>"},"name":"rootdisk"},{"cloudInitNoCloud":{...}]
- Retrieve the persistent volume claim (PVC) that matches the listed `dataVolume` by running the following command:

$ oc get pvc
Example output
NAME             STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
<my_vm_volume>   Bound    …
Note: If your cluster configuration does not enable you to clone a VM, to avoid losing the data from your original VM, you can clone the VM PVC to a data volume instead. You can then use the cloned PVC to create a golden image.
If you are creating a golden image by cloning a PVC, continue with the next steps, using the cloned PVC.
- Deploy a new interactive container with `libguestfs-tools` and attach the PVC to it by running the following command:

$ virtctl guestfs <my_vm_volume> --uid 107
This command opens a shell for you to run the next command.
Remove all configurations specific to your system by running the following command:
$ virt-sysprep -a disk.img
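By default, `virt-sysprep` runs its standard set of operations. If you need to keep some data, you can list the available operations and disable individual ones; for example, the following keeps SSH keys in user home directories:

```terminal
$ virt-sysprep --list-operations
$ virt-sysprep --operations defaults,-ssh-userdir -a disk.img
```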
- In the Red Hat OpenShift Service on AWS console, click Virtualization → Catalog.
- Click Add volume.
In the Add volume window:
- From the Source type list, select Use existing Volume.
- From the Volume project list, select your project.
- From the Volume name list, select the correct PVC.
- In the Volume name field, enter a name for the new golden image.
- From the Preference list, select the RHEL version you are using.
- From the Default Instance Type list, select the instance type with the correct CPU and memory requirements for the version of RHEL you selected previously.
- Click Save.
The new volume appears in the Select volume to boot from list. This is your new golden image. You can use this volume to create new VMs.
6.2.4.2. Creating a Windows VM
You can create a Windows virtual machine (VM) by uploading a Windows image to a persistent volume claim (PVC) and then cloning the PVC when you create a VM by using the Red Hat OpenShift Service on AWS web console.
Prerequisites
- You created a Windows installation DVD or USB with the Windows Media Creation Tool. See Create Windows 10 installation media in the Microsoft documentation.
- You created an `autounattend.xml` answer file. See Answer files (unattend.xml) in the Microsoft documentation.
Procedure
Upload the Windows image as a new PVC:
- Navigate to Storage → PersistentVolumeClaims in the web console.
- Click Create PersistentVolumeClaim → With Data upload form.
- Browse to the Windows image and select it.
- Enter the PVC name, select the storage class and size, and then click Upload.
  The Windows image is uploaded to a PVC.
Configure a new VM by cloning the uploaded PVC:
- Navigate to Virtualization → Catalog.
- Select a Windows template tile and click Customize VirtualMachine.
- Select Clone (clone PVC) from the Disk source list.
- Select the PVC project, the Windows image PVC, and the disk size.
Apply the answer file to the VM:
- Click Customize VirtualMachine parameters.
- On the Sysprep section of the Scripts tab, click Edit.
- Browse to the `autounattend.xml` answer file and click Save.
Set the run strategy of the VM:
- Clear Start this VirtualMachine after creation so that the VM does not start immediately.
- Click Create VirtualMachine.
- On the YAML tab, replace `running: false` with `runStrategy: RerunOnFailure` and click Save.
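A minimal sketch of the resulting manifest fragment; `running` and `runStrategy` are mutually exclusive, so the `running` field must be removed when you add the run strategy:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
spec:
  runStrategy: RerunOnFailure # replaces the running: false field
```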
Click the options menu and select Start.
The VM boots from the `sysprep` disk containing the `autounattend.xml` answer file.
6.2.4.2.1. Generalizing a Windows VM image
You can generalize a Windows operating system image to remove all system-specific configuration data before you use the image to create a new virtual machine (VM).
Before generalizing the VM, you must ensure the `sysprep` tool cannot detect an answer file after the unattended Windows installation.
Prerequisites
- A running Windows VM with the QEMU guest agent installed.
Procedure
- In the Red Hat OpenShift Service on AWS console, click Virtualization → VirtualMachines.
- Select a Windows VM to open the VirtualMachine details page.
- Click Configuration → Disks.
- Click the Options menu beside the `sysprep` disk and select Detach.
- Click Detach.
- Rename `C:\Windows\Panther\unattend.xml` to avoid detection by the `sysprep` tool.
- Start the `sysprep` program by running the following command:

%WINDIR%\System32\Sysprep\sysprep.exe /generalize /shutdown /oobe /mode:vm
- After the `sysprep` tool completes, the Windows VM shuts down. The disk image of the VM is now available to use as an installation image for Windows VMs.
You can now specialize the VM.
6.2.4.2.2. Specializing a Windows VM image
Specializing a Windows virtual machine (VM) configures the computer-specific information from a generalized Windows image onto the VM.
Prerequisites
- You must have a generalized Windows disk image.
- You must create an `unattend.xml` answer file. See the Microsoft documentation for details.
Procedure
- In the Red Hat OpenShift Service on AWS console, click Virtualization → Catalog.
- Select a Windows template and click Customize VirtualMachine.
- Select PVC (clone PVC) from the Disk source list.
- Select the PVC project and PVC name of the generalized Windows image.
- Click Customize VirtualMachine parameters.
- Click the Scripts tab.
- In the Sysprep section, click Edit, browse to the `unattend.xml` answer file, and click Save.
- Click Create VirtualMachine.
During the initial boot, Windows uses the `unattend.xml` answer file to specialize the VM. The VM is now ready to use.
6.2.4.3. Creating a VM from an uploaded image by using the command line
You can upload an operating system image by using the `virtctl` command-line tool. You can use an existing data volume or create a new data volume for the image.
Prerequisites
- You must have an ISO, IMG, or QCOW2 operating system image file.
- For best performance, compress the image file by using the virt-sparsify tool or the `xz` or `gzip` utilities.
- You must have `virtctl` installed.
- The client machine must be configured to trust the Red Hat OpenShift Service on AWS router's certificate.
Procedure
- Upload the image by running the `virtctl image-upload` command:

```terminal
$ virtctl image-upload dv <datavolume_name> \ 1
  --size=<datavolume_size> \ 2
  --image-path=</path/to/image> 3
```

- 1: The name of the new data volume.
- 2: The size of the data volume. Example: `--size=500Mi`, `--size=1G`
- 3: The file path of the image to upload.
Note:
- If you do not want to create a new data volume, omit the `--size` parameter and include the `--no-create` flag.
- When uploading a disk image to a PVC, the PVC size must be larger than the size of the uncompressed virtual disk.
- To allow insecure server connections when using HTTPS, use the `--insecure` parameter. When you use the `--insecure` flag, the authenticity of the upload endpoint is not verified.
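For example, a complete invocation might look like the following; the data volume name, size, and image path are illustrative:

```terminal
$ virtctl image-upload dv fedora-dv --size=10Gi --image-path=/home/user/fedora.qcow2
```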
- Optional: To verify that a data volume was created, view all data volumes by running the following command:
$ oc get dvs
6.2.5. Cloning VMs
You can clone virtual machines (VMs) or create new VMs from snapshots.
6.2.5.1. Cloning a VM by using the web console
You can clone an existing VM by using the web console.
Procedure
- Navigate to Virtualization → VirtualMachines in the web console.
- Select a VM to open the VirtualMachine details page.
- Click Actions.
- Select Clone.
- On the Clone VirtualMachine page, enter the name of the new VM.
- Optional: Select the Start cloned VM checkbox to start the cloned VM.
- Click Clone.
6.2.5.2. Creating a VM from an existing snapshot by using the web console
You can create a new VM by copying an existing snapshot.
Procedure
- Navigate to Virtualization → VirtualMachines in the web console.
- Select a VM to open the VirtualMachine details page.
- Click the Snapshots tab.
- Click the actions menu for the snapshot you want to copy.
- Select Create VirtualMachine.
- Enter the name of the virtual machine.
- Optional: Select the Start this VirtualMachine after creation checkbox to start the new virtual machine.
- Click Create.
6.2.6. Creating VMs by cloning PVCs
You can create virtual machines (VMs) by cloning existing persistent volume claims (PVCs) with custom images.
You must install the QEMU guest agent on VMs created from operating system images that are not provided by Red Hat.
You clone a PVC by creating a data volume that references a source PVC.
6.2.6.1. About cloning
When cloning a data volume, the Containerized Data Importer (CDI) chooses one of the following Container Storage Interface (CSI) clone methods:
- CSI volume cloning
- Smart cloning
Both CSI volume cloning and smart cloning methods are efficient, but they have certain requirements for use. If the requirements are not met, the CDI uses host-assisted cloning. Host-assisted cloning is the slowest and least efficient method of cloning, but it has fewer requirements than either of the other two cloning methods.
6.2.6.1.1. CSI volume cloning
Container Storage Interface (CSI) cloning uses CSI driver features to more efficiently clone a source data volume.
CSI volume cloning has the following requirements:
- The CSI driver that backs the storage class of the persistent volume claim (PVC) must support volume cloning.
- For provisioners not recognized by the CDI, the corresponding storage profile must have the `cloneStrategy` set to CSI Volume Cloning.
- The source and target PVCs must have the same storage class and volume mode.
- If you create the data volume, you must have permission to create the `datavolumes/source` resource in the source namespace.
- The source volume must not be in use.
6.2.6.1.2. Smart cloning
When a Container Storage Interface (CSI) plugin with snapshot capabilities is available, the Containerized Data Importer (CDI) creates a persistent volume claim (PVC) from a snapshot, which then allows efficient cloning of additional PVCs.
Smart cloning has the following requirements:
- A snapshot class associated with the storage class must exist.
- The source and target PVCs must have the same storage class and volume mode.
- If you create the data volume, you must have permission to create the `datavolumes/source` resource in the source namespace.
- The source volume must not be in use.
6.2.6.1.3. Host-assisted cloning
When the requirements for neither Container Storage Interface (CSI) volume cloning nor smart cloning have been met, host-assisted cloning is used as a fallback method. Host-assisted cloning is less efficient than either of the two other cloning methods.
Host-assisted cloning uses a source pod and a target pod to copy data from the source volume to the target volume. The target persistent volume claim (PVC) is annotated with the fallback reason that explains why host-assisted cloning has been used, and an event is created.
Example PVC target annotation
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    cdi.kubevirt.io/cloneFallbackReason: The volume modes of source and target are incompatible
    cdi.kubevirt.io/clonePhase: Succeeded
    cdi.kubevirt.io/cloneType: copy
```
Example event
NAMESPACE   LAST SEEN   TYPE      REASON                    OBJECT                              MESSAGE
test-ns     0s          Warning   IncompatibleVolumeModes   persistentvolumeclaim/test-target   The volume modes of source and target are incompatible
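To check which clone method the CDI used for a given target PVC, you can read the annotation directly; the PVC and namespace names match the example above:

```terminal
$ oc get pvc test-target -n test-ns \
  -o jsonpath='{.metadata.annotations.cdi\.kubevirt\.io/cloneType}'
```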
6.2.6.2. Creating a VM from a PVC by using the web console
You can create a virtual machine (VM) by cloning a persistent volume claim (PVC) by using the Red Hat OpenShift Service on AWS web console.
Prerequisites
- You must have access to the namespace that contains the source PVC.
Procedure
- Navigate to Virtualization → Catalog in the web console.
- Click a template tile without an available boot source.
- Click Customize VirtualMachine.
- On the Customize template parameters page, expand Storage and select PVC (clone PVC) from the Disk source list.
- Select the PVC project and the PVC name.
- Set the disk size.
- Click Next.
- Click Create VirtualMachine.
6.2.6.3. Creating a VM from a PVC by using the command line
You can create a virtual machine (VM) by cloning the persistent volume claim (PVC) of an existing VM by using the command line.
You can clone a PVC by using one of the following options:
- Cloning a PVC to a new data volume.
  This method creates a data volume whose lifecycle is independent of the original VM. Deleting the original VM does not affect the new data volume or its associated PVC.
- Cloning a PVC by creating a `VirtualMachine` manifest with a `dataVolumeTemplates` stanza.
  This method creates a data volume whose lifecycle is dependent on the original VM. Deleting the original VM deletes the cloned data volume and its associated PVC.
6.2.6.3.1. Cloning a PVC to a data volume
You can clone the persistent volume claim (PVC) of an existing virtual machine (VM) disk to a data volume by using the command line.
You create a data volume that references the original source PVC. The lifecycle of the new data volume is independent of the original VM. Deleting the original VM does not affect the new data volume or its associated PVC.
Cloning between different volume modes is supported for host-assisted cloning, such as cloning from a block persistent volume (PV) to a file system PV, as long as the source and target PVs belong to the `kubevirt` content type.
Prerequisites
- The VM with the source PVC must be powered down.
- If you clone a PVC to a different namespace, you must have permissions to create resources in the target namespace.
Additional prerequisites for smart-cloning:
- Your storage provider must support snapshots.
- The source and target PVCs must have the same storage provider and volume mode.
- The value of the `driver` key of the `VolumeSnapshotClass` object must match the value of the `provisioner` key of the `StorageClass` object, as shown in the following example:

Example `VolumeSnapshotClass` object

```yaml
kind: VolumeSnapshotClass
apiVersion: snapshot.storage.k8s.io/v1
driver: openshift-storage.rbd.csi.ceph.com
# ...
```

Example `StorageClass` object

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
# ...
provisioner: openshift-storage.rbd.csi.ceph.com
```
Procedure
- Create a `DataVolume` manifest as shown in the following example:

```yaml
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: <datavolume> # 1
spec:
  source:
    pvc:
      namespace: "<source_namespace>" # 2
      name: "<my_vm_disk>" # 3
  storage: {}
```

- 1: Specify the name of the new data volume.
- 2: Specify the namespace where the source PVC exists.
- 3: Specify the name of the source PVC.
Create the data volume by running the following command:
$ oc create -f <datavolume>.yaml
Note: Data volumes prevent a VM from starting before the PVC is prepared. You can create a VM that references the new data volume while the PVC is being cloned.
6.2.6.3.2. Creating a VM from a cloned PVC by using a data volume template
You can create a virtual machine (VM) that clones the persistent volume claim (PVC) of an existing VM by using a data volume template.
This method creates a data volume whose lifecycle is dependent on the original VM. Deleting the original VM deletes the cloned data volume and its associated PVC.
Prerequisites
- The VM with the source PVC must be powered down.
Procedure
- Create a `VirtualMachine` manifest as shown in the following example:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  labels:
    kubevirt.io/vm: vm-dv-clone
  name: vm-dv-clone # 1
spec:
  running: false
  template:
    metadata:
      labels:
        kubevirt.io/vm: vm-dv-clone
    spec:
      domain:
        devices:
          disks:
            - disk:
                bus: virtio
              name: root-disk
        resources:
          requests:
            memory: 64M
      volumes:
        - dataVolume:
            name: favorite-clone
          name: root-disk
  dataVolumeTemplates:
    - metadata:
        name: favorite-clone
      spec:
        storage:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 2Gi
        source:
          pvc:
            namespace: <source_namespace> # 2
            name: "<source_pvc>" # 3
```

- 1: Specify the name of the VM.
- 2: Specify the namespace where the source PVC exists.
- 3: Specify the name of the source PVC.
Create the virtual machine with the PVC-cloned data volume:
$ oc create -f <vm-clone-datavolumetemplate>.yaml
6.2.7. Installing the QEMU guest agent and VirtIO drivers
The QEMU guest agent is a daemon that runs on the virtual machine (VM) and passes information to the host about the VM, users, file systems, and secondary networks.
You must install the QEMU guest agent on VMs created from operating system images that are not provided by Red Hat.
6.2.7.1. Installing the QEMU guest agent
6.2.7.1.1. Installing the QEMU guest agent on a Linux VM
The `qemu-guest-agent` package is widely available and installed by default in Red Hat Enterprise Linux (RHEL) virtual machines (VMs). Install the agent and start the service.
To create snapshots of an online (Running state) VM with the highest integrity, install the QEMU guest agent.
The QEMU guest agent takes a consistent snapshot by attempting to quiesce the VM file system as much as possible, depending on the system workload. This ensures that in-flight I/O is written to the disk before the snapshot is taken. If the guest agent is not present, quiescing is not possible and a best-effort snapshot is taken. The conditions under which the snapshot was taken are reflected in the snapshot indications that are displayed in the web console or CLI.
Procedure
- Log in to the VM by using a console or SSH.
Install the QEMU guest agent by running the following command:
$ yum install -y qemu-guest-agent
Ensure the service is persistent and start it:
$ systemctl enable --now qemu-guest-agent
Verification
- Run the following command to verify that `AgentConnected` is listed in the VM spec:

$ oc get vm <vm_name>
6.2.7.1.2. Installing the QEMU guest agent on a Windows VM
For Windows virtual machines (VMs), the QEMU guest agent is included in the VirtIO drivers. You can install the drivers during a Windows installation or on an existing Windows VM.
To create snapshots of an online (Running state) VM with the highest integrity, install the QEMU guest agent.
The QEMU guest agent takes a consistent snapshot by attempting to quiesce the VM file system as much as possible, depending on the system workload. This ensures that in-flight I/O is written to the disk before the snapshot is taken. If the guest agent is not present, quiescing is not possible and a best-effort snapshot is taken. The conditions under which the snapshot was taken are reflected in the snapshot indications that are displayed in the web console or CLI.
Procedure
- In the Windows guest operating system, use the File Explorer to navigate to the `guest-agent` directory in the `virtio-win` CD drive.
- Run the `qemu-ga-x86_64.msi` installer.
Verification
- Obtain a list of network services by running the following command:

$ net start

- Verify that the output contains the QEMU Guest Agent.
6.2.7.2. Installing VirtIO drivers on Windows VMs
VirtIO drivers are paravirtualized device drivers required for Microsoft Windows virtual machines (VMs) to run in OpenShift Virtualization. The drivers are shipped with the rest of the images and do not require a separate download.
The `container-native-virtualization/virtio-win` container disk must be attached to the VM as a SATA CD drive to enable driver installation. You can install VirtIO drivers during Windows installation or add them to an existing Windows installation.

After the drivers are installed, the `container-native-virtualization/virtio-win` container disk can be removed from the VM.
Driver name | Hardware ID | Description |
---|---|---|
`viostor` | VEN_1AF4&DEV_1001 | The block driver. Sometimes labeled as a SCSI Controller in the Other devices group. |
`viorng` | VEN_1AF4&DEV_1005 | The entropy source driver. Sometimes labeled as a PCI Device in the Other devices group. |
`NetKVM` | VEN_1AF4&DEV_1000 | The network driver. Sometimes labeled as an Ethernet Controller in the Other devices group. Available only if a VirtIO NIC is configured. |
6.2.7.2.1. Attaching VirtIO container disk to Windows VMs during installation
You must attach the VirtIO container disk to the Windows VM to install the necessary Windows drivers. This can be done during creation of the VM.
Procedure
- When creating a Windows VM from a template, click Customize VirtualMachine.
- Select Mount Windows drivers disk.
- Click Customize VirtualMachine parameters.
- Click Create VirtualMachine.
After the VM is created, the `virtio-win` SATA CD disk is attached to the VM.
6.2.7.2.2. Attaching VirtIO container disk to an existing Windows VM
You must attach the VirtIO container disk to the Windows VM to install the necessary Windows drivers. This can be done to an existing VM.
Procedure
- Navigate to the existing Windows VM, and click Actions → Stop.
- Go to VM Details → Configuration → Disks and click Add disk.
- Add `windows-driver-disk` from container source, set the Type to CD-ROM, and then set the Interface to SATA.
- Click Save.
- Start the VM, and connect to a graphical console.
6.2.7.2.3. Installing VirtIO drivers during Windows installation
You can install the VirtIO drivers while installing Windows on a virtual machine (VM).
This procedure uses a generic approach to the Windows installation and the installation method might differ between versions of Windows. See the documentation for the version of Windows that you are installing.
Prerequisites
- A storage device containing the `virtio` drivers must be attached to the VM.
Procedure
- In the Windows operating system, use the File Explorer to navigate to the `virtio-win` CD drive.
- Double-click the drive to run the appropriate installer for your VM.
  For a 64-bit vCPU, select the `virtio-win-gt-x64` installer. 32-bit vCPUs are no longer supported.
- Optional: During the Custom Setup step of the installer, select the device drivers you want to install. The recommended driver set is selected by default.
- After the installation is complete, select Finish.
- Reboot the VM.
Verification
- Open the system disk on the PC. This is typically C:.
- Navigate to Program Files → Virtio-Win.
If the Virtio-Win directory is present and contains a sub-directory for each driver, the installation was successful.
6.2.7.2.4. Installing VirtIO drivers from a SATA CD drive on an existing Windows VM
You can install the VirtIO drivers from a SATA CD drive on an existing Windows virtual machine (VM).
This procedure uses a generic approach to adding drivers to Windows. See the installation documentation for your version of Windows for specific installation steps.
Prerequisites
- A storage device containing the virtio drivers must be attached to the VM as a SATA CD drive.
Procedure
- Start the VM and connect to a graphical console.
- Log in to a Windows user session.
- Open Device Manager and expand Other devices to list any Unknown device.
- Open the Device Properties to identify the unknown device.
- Right-click the device and select Properties.
- Click the Details tab and select Hardware Ids in the Property list.
- Compare the Value for the Hardware Ids with the supported VirtIO drivers.
- Right-click the device and select Update Driver Software.
- Click Browse my computer for driver software and browse to the attached SATA CD drive, where the VirtIO drivers are located. The drivers are arranged hierarchically according to their driver type, operating system, and CPU architecture.
- Click Next to install the driver.
- Repeat this process for all the necessary VirtIO drivers.
- After the driver installs, click Close to close the window.
- Reboot the VM to complete the driver installation.
6.2.7.2.5. Installing VirtIO drivers from a container disk added as a SATA CD drive
You can install VirtIO drivers from a container disk that you add to a Windows virtual machine (VM) as a SATA CD drive.
Downloading the container-native-virtualization/virtio-win container disk from the Red Hat Ecosystem Catalog is not mandatory, because the container disk is downloaded from the Red Hat registry if it is not already present in the cluster. However, downloading reduces the installation time.
Prerequisites
- You must have access to the Red Hat registry or to the downloaded container-native-virtualization/virtio-win container disk in a restricted environment.
Procedure
Add the container-native-virtualization/virtio-win container disk as a CD drive by editing the VirtualMachine manifest:
# ...
spec:
  domain:
    devices:
      disks:
      - name: virtiocontainerdisk
        bootOrder: 2 1
        cdrom:
          bus: sata
  volumes:
  - containerDisk:
      image: container-native-virtualization/virtio-win
    name: virtiocontainerdisk
1: OpenShift Virtualization boots the VM disks in the order defined in the VirtualMachine manifest. You can either define other VM disks that boot before the container-native-virtualization/virtio-win container disk or use the optional bootOrder parameter to ensure the VM boots from the correct disk. If you configure the boot order for a disk, you must configure the boot order for the other disks.
Apply the changes:
If the VM is not running, run the following command:
$ virtctl start <vm> -n <namespace>
If the VM is running, reboot the VM or run the following command:
$ oc apply -f <vm.yaml>
- After the VM has started, install the VirtIO drivers from the SATA CD drive.
6.2.7.3. Updating VirtIO drivers
6.2.7.3.1. Updating VirtIO drivers on a Windows VM
Update the virtio
drivers on a Windows virtual machine (VM) by using the Windows Update service.
Prerequisites
- The cluster must be connected to the internet. Disconnected clusters cannot reach the Windows Update service.
Procedure
- In the Windows Guest operating system, click the Windows key and select Settings.
- Navigate to Windows Update → Advanced Options → Optional Updates.
- Install all updates from Red Hat, Inc.
- Reboot the VM.
Verification
- On the Windows VM, navigate to the Device Manager.
- Select a device.
- Select the Driver tab.
- Click Driver Details and confirm that the virtio driver details display the correct version.
6.3. Connecting to virtual machine consoles
You can connect to the following consoles to access running virtual machines (VMs): the VNC console, the serial console, and the desktop viewer.
6.3.1. Connecting to the VNC console
You can connect to the VNC console of a virtual machine by using the Red Hat OpenShift Service on AWS web console or the virtctl
command line tool.
6.3.1.1. Connecting to the VNC console by using the web console
You can connect to the VNC console of a virtual machine (VM) by using the Red Hat OpenShift Service on AWS web console.
If you connect to a Windows VM with a vGPU assigned as a mediated device, you can switch between the default display and the vGPU display.
Procedure
- On the Virtualization → VirtualMachines page, click a VM to open the VirtualMachine details page.
- Click the Console tab. The VNC console session starts automatically.
- Optional: To switch to the vGPU display of a Windows VM, select Ctrl + Alt + 2 from the Send key list.
- Select Ctrl + Alt + 1 from the Send key list to restore the default display.
- To end the console session, click outside the console pane and then click Disconnect.
6.3.1.2. Connecting to the VNC console by using virtctl
You can use the virtctl
command line tool to connect to the VNC console of a running virtual machine.
If you run the virtctl vnc
command on a remote machine over an SSH connection, you must forward the X session to your local machine by running the ssh
command with the -X
or -Y
flags.
Prerequisites
- You must install the virt-viewer package.
Procedure
Run the following command to start the console session:
$ virtctl vnc <vm_name>
If the connection fails, run the following command to collect troubleshooting information:
$ virtctl vnc <vm_name> -v 4
6.3.1.3. Generating a temporary token for the VNC console
To access the VNC console of a virtual machine (VM), generate a temporary authentication bearer token for the Kubernetes API.
Kubernetes also supports authentication using client certificates, instead of a bearer token, by modifying the curl command.
Prerequisites
- A running VM with OpenShift Virtualization 4.14 or later and ssp-operator 4.14 or later
Procedure
Enable the feature gate in the HyperConverged (HCO) custom resource (CR):
$ oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type json -p '[{"op": "replace", "path": "/spec/featureGates/deployVmConsoleProxy", "value": true}]'
Generate a token by entering the following command:
$ curl --header "Authorization: Bearer ${TOKEN}" \ "https://api.<cluster_fqdn>/apis/token.kubevirt.io/v1alpha1/namespaces/<namespace>/virtualmachines/<vm_name>/vnc?duration=<duration>"
The <duration> parameter can be set in hours and minutes, with a minimum duration of 10 minutes. For example: 5h30m. If this parameter is not set, the token is valid for 10 minutes by default.
Sample output:
{ "token": "eyJhb..." }
Optional: Use the token provided in the output to create a variable:
$ export VNC_TOKEN="<token>"
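For example, the following sequence requests a token that is valid for one hour. The cluster FQDN, namespace, and VM name are placeholders, and oc whoami -t is one way to obtain the bearer token for your current session:
$ export TOKEN=$(oc whoami -t)
$ curl --header "Authorization: Bearer ${TOKEN}" \
  "https://api.cluster.example.com/apis/token.kubevirt.io/v1alpha1/namespaces/example-namespace/virtualmachines/example-vm/vnc?duration=1h"
$ export VNC_TOKEN="eyJhb..."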
You can now use the token to access the VNC console of a VM.
Verification
Log in to the cluster by entering the following command:
$ oc login --token ${VNC_TOKEN}
Test access to the VNC console of the VM by using the virtctl command:
$ virtctl vnc <vm_name> -n <namespace>
It is currently not possible to revoke a specific token.
To revoke a token, you must delete the service account that was used to create it. However, this also revokes all other tokens that were created by using the service account. Use the following command with caution:
$ oc delete serviceaccount --namespace "<namespace>" "<vm_name>-vnc-access"
6.3.1.3.1. Granting token generation permission for the VNC console by using the cluster role
As a cluster administrator, you can install a cluster role and bind it to a user or service account to allow access to the endpoint that generates tokens for the VNC console.
Procedure
Choose to bind the cluster role to either a user or service account.
Run the following command to bind the cluster role to a user:
$ kubectl create rolebinding "${ROLE_BINDING_NAME}" --clusterrole="token.kubevirt.io:generate" --user="${USER_NAME}"
Run the following command to bind the cluster role to a service account:
$ kubectl create rolebinding "${ROLE_BINDING_NAME}" --clusterrole="token.kubevirt.io:generate" --serviceaccount="${SERVICE_ACCOUNT_NAME}"
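For example, the following hypothetical binding allows the user alice to generate VNC tokens for VMs in the vm-project namespace:
$ kubectl create rolebinding alice-vnc-token --clusterrole="token.kubevirt.io:generate" --user=alice -n vm-project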
6.3.2. Connecting to the serial console
You can connect to the serial console of a virtual machine by using the Red Hat OpenShift Service on AWS web console or the virtctl
command line tool.
Running concurrent VNC connections to a single virtual machine is not currently supported.
6.3.2.1. Connecting to the serial console by using the web console
You can connect to the serial console of a virtual machine (VM) by using the Red Hat OpenShift Service on AWS web console.
Procedure
- On the Virtualization → VirtualMachines page, click a VM to open the VirtualMachine details page.
- Click the Console tab. The VNC console session starts automatically.
- Click Disconnect to end the VNC console session. Otherwise, the VNC console session continues to run in the background.
- Select Serial console from the console list.
- To end the console session, click outside the console pane and then click Disconnect.
6.3.2.2. Connecting to the serial console by using virtctl
You can use the virtctl
command line tool to connect to the serial console of a running virtual machine.
Procedure
Run the following command to start the console session:
$ virtctl console <vm_name>
- Press Ctrl+] to end the console session.
6.3.3. Connecting to the desktop viewer
You can connect to a Windows virtual machine (VM) by using the desktop viewer and the Remote Desktop Protocol (RDP).
6.3.3.1. Connecting to the desktop viewer by using the web console
You can connect to the desktop viewer of a Windows virtual machine (VM) by using the Red Hat OpenShift Service on AWS web console.
Prerequisites
- You installed the QEMU guest agent on the Windows VM.
- You have an RDP client installed.
Procedure
- On the Virtualization → VirtualMachines page, click a VM to open the VirtualMachine details page.
- Click the Console tab. The VNC console session starts automatically.
- Click Disconnect to end the VNC console session. Otherwise, the VNC console session continues to run in the background.
- Select Desktop viewer from the console list.
- Click Create RDP Service to open the RDP Service dialog.
- Select Expose RDP Service and click Save to create a node port service.
- Click Launch Remote Desktop to download an .rdp file and launch the desktop viewer.
6.4. Configuring SSH access to virtual machines
You can configure SSH access to virtual machines (VMs) by using the following methods:
- You create an SSH key pair, add the public key to a VM, and connect to the VM by running the virtctl ssh command with the private key. You can add public SSH keys to Red Hat Enterprise Linux (RHEL) 9 VMs at runtime or at first boot to VMs with guest operating systems that can be configured by using a cloud-init data source.
- You add the virtctl port-forward command to your .ssh/config file and connect to the VM by using OpenSSH.
- You create a service, associate the service with the VM, and connect to the IP address and port exposed by the service.
- You configure a secondary network, attach a virtual machine (VM) to the secondary network interface, and connect to the DHCP-allocated IP address.
6.4.1. Access configuration considerations
Each method for configuring access to a virtual machine (VM) has advantages and limitations, depending on the traffic load and client requirements.
Services provide excellent performance and are recommended for applications that are accessed from outside the cluster.
If the internal cluster network cannot handle the traffic load, you can configure a secondary network.
virtctl ssh and virtctl port-forward commands
- Simple to configure.
- Recommended for troubleshooting VMs.
- virtctl port-forward is recommended for automated configuration of VMs with Ansible.
- Dynamic public SSH keys can be used to provision VMs with Ansible.
- Not recommended for high-traffic applications like Rsync or Remote Desktop Protocol because of the burden on the API server.
- The API server must be able to handle the traffic load.
- The clients must be able to access the API server.
- The clients must have access credentials for the cluster.
- Cluster IP service
- The internal cluster network must be able to handle the traffic load.
- The clients must be able to access an internal cluster IP address.
- Node port service
- The internal cluster network must be able to handle the traffic load.
- The clients must be able to access at least one node.
- Load balancer service
- A load balancer must be configured.
- Each node must be able to handle the traffic load of one or more load balancer services.
- Secondary network
- Excellent performance because traffic does not go through the internal cluster network.
- Allows a flexible approach to network topology.
- Guest operating system must be configured with appropriate security because the VM is exposed directly to the secondary network. If a VM is compromised, an intruder could gain access to the secondary network.
6.4.2. Using virtctl ssh
You can add a public SSH key to a virtual machine (VM) and connect to the VM by running the virtctl ssh
command.
This method is simple to configure. However, it is not recommended for high traffic loads because it places a burden on the API server.
6.4.2.1. About static and dynamic SSH key management
You can add public SSH keys to virtual machines (VMs) statically at first boot or dynamically at runtime.
Only Red Hat Enterprise Linux (RHEL) 9 supports dynamic key injection.
Static SSH key management
You can add a statically managed SSH key to a VM with a guest operating system that supports configuration by using a cloud-init data source. The key is added to the virtual machine (VM) at first boot.
You can add the key by using one of the following methods:
- Add a key to a single VM when you create it by using the web console or the command line.
- Add a key to a project by using the web console. Afterwards, the key is automatically added to the VMs that you create in this project.
Use cases
- As a VM owner, you can provision all your newly created VMs with a single key.
Dynamic SSH key management
You can enable dynamic SSH key management for a VM with Red Hat Enterprise Linux (RHEL) 9 installed. Afterwards, you can update the key during runtime. The key is added by the QEMU guest agent, which is installed with Red Hat boot sources.
You can disable dynamic key management for security reasons. Then, the VM inherits the key management setting of the image from which it was created.
Use cases
- Granting or revoking access to VMs: As a cluster administrator, you can grant or revoke remote VM access by adding or removing the keys of individual users from a Secret object that is applied to all VMs in a namespace.
- User access: You can add your access credentials to all VMs that you create and manage.
Ansible provisioning:
- As an operations team member, you can create a single secret that contains all the keys used for Ansible provisioning.
- As a VM owner, you can create a VM and attach the keys used for Ansible provisioning.
Key rotation:
- As a cluster administrator, you can rotate the Ansible provisioner keys used by VMs in a namespace.
- As a workload owner, you can rotate the key for the VMs that you manage.
6.4.2.2. Static key management
You can add a statically managed public SSH key when you create a virtual machine (VM) by using the Red Hat OpenShift Service on AWS web console or the command line. The key is added as a cloud-init data source when the VM boots for the first time.
You can also add a public SSH key to a project when you create a VM by using the web console. The key is saved as a secret and is added automatically to all VMs that you create.
If you add a secret to a project and then delete the VM, the secret is retained because it is a namespace resource. You must delete the secret manually.
6.4.2.2.1. Adding a key when creating a VM from a template
You can add a statically managed public SSH key when you create a virtual machine (VM) by using the Red Hat OpenShift Service on AWS web console. The key is added to the VM as a cloud-init data source at first boot. This method does not affect cloud-init user data.
Optional: You can add a key to a project. Afterwards, this key is added automatically to VMs that you create in the project.
Prerequisites
- You generated an SSH key pair by running the ssh-keygen command.
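For example, the following command generates a key pair; the key type and file path are arbitrary choices:
$ ssh-keygen -t ed25519 -f ~/.ssh/vm-key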
Procedure
- Navigate to Virtualization → Catalog in the web console.
- Click a template tile.
The guest operating system must support configuration from a cloud-init data source.
- Click Customize VirtualMachine.
- Click Next.
- Click the Scripts tab.
If you have not already added a public SSH key to your project, click the edit icon beside Authorized SSH key and select one of the following options:
- Use existing: Select a secret from the secrets list.
Add new:
- Browse to the SSH key file or paste the file in the key field.
- Enter the secret name.
- Optional: Select Automatically apply this key to any new VirtualMachine you create in this project.
- Click Save.
Click Create VirtualMachine.
The VirtualMachine details page displays the progress of the VM creation.
Verification
Click the Scripts tab on the Configuration tab.
The secret name is displayed in the Authorized SSH key section.
6.4.2.2.2. Adding a key when creating a VM from an instance type by using the web console
You can create a virtual machine (VM) from an instance type by using the Red Hat OpenShift Service on AWS web console. You can also use the web console to create a VM by copying an existing snapshot or to clone a VM.
You can create a VM from a list of available bootable volumes. You can add Linux- or Windows-based volumes to the list.
You can add a statically managed SSH key when you create a virtual machine (VM) from an instance type by using the Red Hat OpenShift Service on AWS web console. The key is added to the VM as a cloud-init data source at first boot. This method does not affect cloud-init user data.
Procedure
In the web console, navigate to Virtualization → Catalog.
The InstanceTypes tab opens by default.
Select either of the following options:
Select a suitable bootable volume from the list. If the list is truncated, click the Show all button to display the entire list.
Note: The bootable volume table lists only those volumes in the openshift-virtualization-os-images namespace that have the instancetype.kubevirt.io/default-preference label.
- Optional: Click the star icon to designate a bootable volume as a favorite. Starred bootable volumes appear first in the volume list.
Click Add volume to upload a new volume or to use an existing persistent volume claim (PVC), a volume snapshot, or a containerDisk volume. Click Save.
Logos of operating systems that are not available in the cluster are shown at the bottom of the list. You can add a volume for the required operating system by clicking the Add volume link.
In addition, there is a link to the Create a Windows boot source quick start. The same link appears in a popover if you hover the pointer over the question mark icon next to the Select volume to boot from line.
Immediately after you install the environment or when the environment is disconnected, the list of volumes to boot from is empty. In that case, three operating system logos are displayed: Windows, RHEL, and Linux. You can add a new volume that meets your requirements by clicking the Add volume button.
- Click an instance type tile and select the resource size appropriate for your workload.
Optional: Choose the virtual machine details, including the VM’s name, that apply to the volume you are booting from:
For a Linux-based volume, follow these steps to configure SSH:
- If you have not already added a public SSH key to your project, click the edit icon beside Authorized SSH key in the VirtualMachine details section.
Select one of the following options:
- Use existing: Select a secret from the secrets list.
Add new: Follow these steps:
- Browse to the public SSH key file or paste the file in the key field.
- Enter the secret name.
- Optional: Select Automatically apply this key to any new VirtualMachine you create in this project.
- Click Save.
For a Windows volume, follow either of these sets of steps to configure sysprep options:
If you have not already added sysprep options for the Windows volume, follow these steps:
- Click the edit icon beside Sysprep in the VirtualMachine details section.
- Add the Autounattend.xml answer file.
- Add the Unattend.xml answer file.
- Click Save.
If you want to use existing sysprep options for the Windows volume, follow these steps:
- Click Attach existing sysprep.
- Enter the name of the existing sysprep Unattend.xml answer file.
- Click Save.
Optional: If you are creating a Windows VM, you can mount a Windows driver disk:
- Click the Customize VirtualMachine button.
- On the VirtualMachine details page, click Storage.
- Select the Mount Windows drivers disk checkbox.
- Optional: Click View YAML & CLI to view the YAML file. Click CLI to view the CLI commands. You can also download or copy either the YAML file contents or the CLI commands.
- Click Create VirtualMachine.
After the VM is created, you can monitor the status on the VirtualMachine details page.
6.4.2.2.3. Adding a key when creating a VM by using the command line
You can add a statically managed public SSH key when you create a virtual machine (VM) by using the command line. The key is added to the VM at first boot.
The key is added to the VM as a cloud-init data source. This method separates the access credentials from the application data in the cloud-init user data. This method does not affect cloud-init user data.
Prerequisites
- You generated an SSH key pair by running the ssh-keygen command.
Procedure
Create a manifest file for a VirtualMachine object and a Secret object:
Example manifest
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm
  namespace: example-namespace
spec:
  dataVolumeTemplates:
  - metadata:
      name: example-vm-volume
    spec:
      sourceRef:
        kind: DataSource
        name: rhel9
        namespace: openshift-virtualization-os-images
      storage:
        resources: {}
  instancetype:
    name: u1.medium
  preference:
    name: rhel.9
  running: true
  template:
    spec:
      domain:
        devices: {}
      volumes:
      - dataVolume:
          name: example-vm-volume
        name: rootdisk
      - cloudInitNoCloud: 1
          userData: |-
            #cloud-config
            user: cloud-user
        name: cloudinitdisk
      accessCredentials:
      - sshPublicKey:
          propagationMethod:
            noCloud: {}
          source:
            secret:
              secretName: authorized-keys 2
---
apiVersion: v1
kind: Secret
metadata:
  name: authorized-keys
data:
  key: c3NoLXJzYSB... 3
1: The cloud-init configuration for the VM.
2: The name of the Secret object that contains the public SSH key.
3: The base64-encoded public SSH key.
Create the VirtualMachine and Secret objects by running the following command:
$ oc create -f <manifest_file>.yaml
Start the VM by running the following command:
$ virtctl start example-vm -n example-namespace
Verification
Get the VM configuration:
$ oc describe vm example-vm -n example-namespace
Example output
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm
  namespace: example-namespace
spec:
  template:
    spec:
      accessCredentials:
      - sshPublicKey:
          propagationMethod:
            noCloud: {}
          source:
            secret:
              secretName: authorized-keys
# ...
6.4.2.3. Dynamic key management
You can enable dynamic key injection for a virtual machine (VM) by using the Red Hat OpenShift Service on AWS web console or the command line. Then, you can update the key at runtime.
Only Red Hat Enterprise Linux (RHEL) 9 supports dynamic key injection.
If you disable dynamic key injection, the VM inherits the key management method of the image from which it was created.
6.4.2.3.1. Enabling dynamic key injection when creating a VM from a template
You can enable dynamic public SSH key injection when you create a virtual machine (VM) from a template by using the Red Hat OpenShift Service on AWS web console. Then, you can update the key at runtime.
Only Red Hat Enterprise Linux (RHEL) 9 supports dynamic key injection.
The key is added to the VM by the QEMU guest agent, which is installed with RHEL 9.
Prerequisites
- You generated an SSH key pair by running the ssh-keygen command.
Procedure
- Navigate to Virtualization → Catalog in the web console.
- Click the Red Hat Enterprise Linux 9 VM tile.
- Click Customize VirtualMachine.
- Click Next.
- Click the Scripts tab.
If you have not already added a public SSH key to your project, click the edit icon beside Authorized SSH key and select one of the following options:
- Use existing: Select a secret from the secrets list.
Add new:
- Browse to the SSH key file or paste the file in the key field.
- Enter the secret name.
- Optional: Select Automatically apply this key to any new VirtualMachine you create in this project.
- Set Dynamic SSH key injection to on.
- Click Save.
Click Create VirtualMachine.
The VirtualMachine details page displays the progress of the VM creation.
Verification
Click the Scripts tab on the Configuration tab.
The secret name is displayed in the Authorized SSH key section.
6.4.2.3.2. Enabling dynamic key injection when creating a VM from an instance type by using the web console
You can create a virtual machine (VM) from an instance type by using the Red Hat OpenShift Service on AWS web console. You can also use the web console to create a VM by copying an existing snapshot or to clone a VM.
You can create a VM from a list of available bootable volumes. You can add Linux- or Windows-based volumes to the list.
You can enable dynamic SSH key injection when you create a virtual machine (VM) from an instance type by using the Red Hat OpenShift Service on AWS web console. Then, you can add or revoke the key at runtime.
Only Red Hat Enterprise Linux (RHEL) 9 supports dynamic key injection.
The key is added to the VM by the QEMU guest agent, which is installed with RHEL 9.
Procedure
In the web console, navigate to Virtualization → Catalog.
The InstanceTypes tab opens by default.
Select either of the following options:
Select a suitable bootable volume from the list. If the list is truncated, click the Show all button to display the entire list.
Note: The bootable volume table lists only those volumes in the openshift-virtualization-os-images namespace that have the instancetype.kubevirt.io/default-preference label.
- Optional: Click the star icon to designate a bootable volume as a favorite. Starred bootable volumes appear first in the volume list.
Click Add volume to upload a new volume or to use an existing persistent volume claim (PVC), a volume snapshot, or a containerDisk volume. Click Save.
Logos of operating systems that are not available in the cluster are shown at the bottom of the list. You can add a volume for the required operating system by clicking the Add volume link.
In addition, there is a link to the Create a Windows boot source quick start. The same link appears in a popover if you hover the pointer over the question mark icon next to the Select volume to boot from line.
Immediately after you install the environment or when the environment is disconnected, the list of volumes to boot from is empty. In that case, three operating system logos are displayed: Windows, RHEL, and Linux. You can add a new volume that meets your requirements by clicking the Add volume button.
- Click an instance type tile and select the resource size appropriate for your workload.
- Click the Red Hat Enterprise Linux 9 VM tile.
Optional: Choose the virtual machine details, including the VM’s name, that apply to the volume you are booting from:
For a Linux-based volume, follow these steps to configure SSH:
- If you have not already added a public SSH key to your project, click the edit icon beside Authorized SSH key in the VirtualMachine details section.
Select one of the following options:
- Use existing: Select a secret from the secrets list.
Add new: Follow these steps:
- Browse to the public SSH key file or paste the file in the key field.
- Enter the secret name.
- Optional: Select Automatically apply this key to any new VirtualMachine you create in this project.
- Click Save.
For a Windows volume, follow either of these sets of steps to configure sysprep options:
If you have not already added sysprep options for the Windows volume, follow these steps:
- Click the edit icon beside Sysprep in the VirtualMachine details section.
- Add the Autounattend.xml answer file.
- Add the Unattend.xml answer file.
- Click Save.
If you want to use existing sysprep options for the Windows volume, follow these steps:
- Click Attach existing sysprep.
- Enter the name of the existing sysprep Unattend.xml answer file.
- Click Save.
- Set Dynamic SSH key injection in the VirtualMachine details section to on.
Optional: If you are creating a Windows VM, you can mount a Windows driver disk:
- Click the Customize VirtualMachine button.
- On the VirtualMachine details page, click Storage.
- Select the Mount Windows drivers disk checkbox.
- Optional: Click View YAML & CLI to view the YAML file. Click CLI to view the CLI commands. You can also download or copy either the YAML file contents or the CLI commands.
- Click Create VirtualMachine.
After the VM is created, you can monitor the status on the VirtualMachine details page.
6.4.2.3.3. Enabling dynamic SSH key injection by using the web console
You can enable dynamic key injection for a virtual machine (VM) by using the Red Hat OpenShift Service on AWS web console. Then, you can update the public SSH key at runtime.
The key is added to the VM by the QEMU guest agent, which is installed with Red Hat Enterprise Linux (RHEL) 9.
Prerequisites
- The guest operating system is RHEL 9.
Procedure
- Navigate to Virtualization → VirtualMachines in the web console.
- Select a VM to open the VirtualMachine details page.
- On the Configuration tab, click Scripts.
If you have not already added a public SSH key to your project, click the edit icon beside Authorized SSH key and select one of the following options:
- Use existing: Select a secret from the secrets list.
Add new:
- Browse to the SSH key file or paste the file in the key field.
- Enter the secret name.
- Optional: Select Automatically apply this key to any new VirtualMachine you create in this project.
- Set Dynamic SSH key injection to on.
- Click Save.
6.4.2.3.4. Enabling dynamic key injection by using the command line
You can enable dynamic key injection for a virtual machine (VM) by using the command line. Then, you can update the public SSH key at runtime.
Only Red Hat Enterprise Linux (RHEL) 9 supports dynamic key injection.
The key is added to the VM by the QEMU guest agent, which is installed automatically with RHEL 9.
Prerequisites
- You generated an SSH key pair by running the ssh-keygen command.
Procedure
Create a manifest file for a VirtualMachine object and a Secret object:
Example manifest
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm
  namespace: example-namespace
spec:
  dataVolumeTemplates:
  - metadata:
      name: example-vm-volume
    spec:
      sourceRef:
        kind: DataSource
        name: rhel9
        namespace: openshift-virtualization-os-images
      storage:
        resources: {}
  instancetype:
    name: u1.medium
  preference:
    name: rhel.9
  running: true
  template:
    spec:
      domain:
        devices: {}
      volumes:
      - dataVolume:
          name: example-vm-volume
        name: rootdisk
      - cloudInitNoCloud: 1
          userData: |-
            #cloud-config
            runcmd:
            - [ setsebool, -P, virt_qemu_ga_manage_ssh, on ]
        name: cloudinitdisk
      accessCredentials:
      - sshPublicKey:
          propagationMethod:
            qemuGuestAgent:
              users: ["cloud-user"]
          source:
            secret:
              secretName: authorized-keys 2
---
apiVersion: v1
kind: Secret
metadata:
  name: authorized-keys
data:
  key: c3NoLXJzYSB... 3
1: The cloud-init configuration enables the QEMU guest agent to manage SSH keys by setting the virt_qemu_ga_manage_ssh SELinux boolean.
2: The name of the Secret object that contains the public SSH key.
3: The base64-encoded public SSH key.
Create the VirtualMachine and Secret objects by running the following command:
$ oc create -f <manifest_file>.yaml
Start the VM by running the following command:
$ virtctl start example-vm -n example-namespace
Verification
Get the VM configuration:
$ oc describe vm example-vm -n example-namespace
Example output
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm
  namespace: example-namespace
spec:
  template:
    spec:
      accessCredentials:
      - sshPublicKey:
          propagationMethod:
            qemuGuestAgent:
              users: ["cloud-user"]
          source:
            secret:
              secretName: authorized-keys
# ...
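Because the QEMU guest agent propagates the key at runtime, you can rotate the key by updating the Secret object instead of the VM definition. A minimal sketch, assuming a new public key file named new-key.pub:
$ oc create secret generic authorized-keys --from-file=key=new-key.pub --dry-run=client -o yaml | oc apply -f -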
6.4.2.4. Using the virtctl ssh command
You can access a running virtual machine (VM) by using the virtctl ssh command.
Prerequisites
- You installed the virtctl command line tool.
- You added a public SSH key to the VM.
- You have an SSH client installed.
- The environment where you installed the virtctl tool has the cluster permissions required to access the VM. For example, you ran oc login or you set the KUBECONFIG environment variable.
Procedure
Run the virtctl ssh command:
$ virtctl -n <namespace> ssh <username>@example-vm -i <ssh_key> 1
1: Specify the namespace, user name, and the SSH private key. The default SSH key location is /home/user/.ssh. If you save the key in a different location, you must specify the path.
Example
$ virtctl -n my-namespace ssh cloud-user@example-vm -i my-key
You can copy the virtctl ssh
command in the web console by selecting Copy SSH command from the options
menu beside a VM on the VirtualMachines page.
6.4.3. Using the virtctl port-forward command
You can use your local OpenSSH client and the virtctl port-forward
command to connect to a running virtual machine (VM). You can use this method with Ansible to automate the configuration of VMs.
This method is recommended for low-traffic applications because port-forwarding traffic is sent over the control plane. This method is not recommended for high-traffic applications such as Rsync or Remote Desktop Protocol because it places a heavy burden on the API server.
Prerequisites
- You have installed the virtctl client.
- The virtual machine you want to access is running.
- The environment where you installed the virtctl tool has the cluster permissions required to access the VM. For example, you ran oc login or you set the KUBECONFIG environment variable.
Procedure
Add the following text to the ~/.ssh/config file on your client machine:
Host vm/*
  ProxyCommand virtctl port-forward --stdio=true %h %p
Connect to the VM by running the following command:
$ ssh <user>@vm/<vm_name>.<namespace>
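For example, to reach a VM named example-vm in the example-namespace namespace (hypothetical names):
$ ssh cloud-user@vm/example-vm.example-namespace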
6.4.4. Using a service for SSH access
You can create a service for a virtual machine (VM) and connect to the IP address and port exposed by the service.
Services provide excellent performance and are recommended for applications that are accessed from outside the cluster or within the cluster. Ingress traffic is protected by firewalls.
If the cluster network cannot handle the traffic load, consider using a secondary network for VM access.
6.4.4.1. About services
A Kubernetes service exposes network access for clients to an application running on a set of pods. Services offer abstraction, load balancing, and, in the case of the NodePort
and LoadBalancer
types, exposure to the outside world.
- ClusterIP: Exposes the service on an internal IP address and as a DNS name to other applications within the cluster. A single service can map to multiple virtual machines. When a client tries to connect to the service, the client’s request is load balanced among available backends. ClusterIP is the default service type.
- NodePort: Exposes the service on the same port of each selected node in the cluster. NodePort makes a port accessible from outside the cluster, as long as the node itself is externally accessible to the client.
- LoadBalancer: Creates an external load balancer in the current cloud (if supported) and assigns a fixed, external IP address to the service.
6.4.4.2. Creating a service
You can create a service to expose a virtual machine (VM) by using the Red Hat OpenShift Service on AWS web console, virtctl
command line tool, or a YAML file.
6.4.4.2.1. Enabling load balancer service creation by using the web console
You can enable the creation of load balancer services for a virtual machine (VM) by using the Red Hat OpenShift Service on AWS web console.
Prerequisites
- You have configured a load balancer for the cluster.
- You are logged in as a user with the cluster-admin role.
Procedure
- Navigate to Virtualization → Overview.
- On the Settings tab, click Cluster.
- Expand General settings and SSH configuration.
- Set SSH over LoadBalancer service to on.
6.4.4.2.2. Creating a service by using the web console
You can create a node port or load balancer service for a virtual machine (VM) by using the Red Hat OpenShift Service on AWS web console.
Prerequisites
- You configured the cluster network to support either a load balancer or a node port.
- To create a load balancer service, you enabled the creation of load balancer services.
Procedure
- Navigate to VirtualMachines and select a virtual machine to view the VirtualMachine details page.
- On the Details tab, select SSH over LoadBalancer from the SSH service type list.
- Optional: Click the copy icon to copy the SSH command to your clipboard.
Verification
- Check the Services pane on the Details tab to view the new service.
6.4.4.2.3. Creating a service by using virtctl
You can create a service for a virtual machine (VM) by using the virtctl
command line tool.
Prerequisites
- You installed the virtctl command line tool.
- You configured the cluster network to support the service.
- The environment where you installed virtctl has the cluster permissions required to access the VM. For example, you ran oc login or you set the KUBECONFIG environment variable.
Procedure
Create a service by running the following command:
$ virtctl expose vm <vm_name> --name <service_name> --type <service_type> --port <port> 1
1: Specify the ClusterIP, NodePort, or LoadBalancer service type.
Example
$ virtctl expose vm example-vm --name example-service --type NodePort --port 22
Verification
Verify the service by running the following command:
$ oc get service
Next steps
After you create a service with virtctl, you must add special: key to the spec.template.metadata.labels stanza of the VirtualMachine manifest. See Creating a service by using the command line.
6.4.4.2.4. Creating a service by using the command line
You can create a service and associate it with a virtual machine (VM) by using the command line.
Prerequisites
- You configured the cluster network to support the service.
Procedure
Edit the VirtualMachine manifest to add the label for service creation:
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm
  namespace: example-namespace
spec:
  running: false
  template:
    metadata:
      labels:
        special: key 1
# ...
1: Add special: key to the spec.template.metadata.labels stanza.
Note: Labels on a virtual machine are passed through to the pod. The special: key label must match the label in the spec.selector attribute of the Service manifest.
- Save the VirtualMachine manifest file to apply your changes.
Create a Service manifest to expose the VM:
apiVersion: v1
kind: Service
metadata:
  name: example-service
  namespace: example-namespace
spec:
  # ...
  selector:
    special: key 1
  type: NodePort 2
  ports: 3
  - protocol: TCP
    port: 80
    targetPort: 9376
    nodePort: 30000
1: Specify the label that matches the spec.template.metadata.labels stanza of the VirtualMachine manifest.
2: Specify ClusterIP, NodePort, or LoadBalancer.
3: Specifies the collection of network ports and protocols that the service exposes.
- Save the Service manifest file.
Create the service by running the following command:
$ oc create -f example-service.yaml
- Restart the VM to apply the changes.
Verification
Query the Service object to verify that it is available:
$ oc get service -n example-namespace
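The output resembles the following; the cluster IP, node port, and age are illustrative:
NAME              TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
example-service   NodePort   172.30.152.41   <none>        80:30000/TCP   75s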
6.4.4.3. Connecting to a VM exposed by a service by using SSH
You can connect to a virtual machine (VM) that is exposed by a service by using SSH.
Prerequisites
- You created a service to expose the VM.
- You have an SSH client installed.
- You are logged in to the cluster.
Procedure
Run the following command to access the VM:
$ ssh <user_name>@<ip_address> -p <port> 1
1: Specify the cluster IP for a cluster IP service, the node IP for a node port service, or the external IP address for a load balancer service.
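For example, to connect through a node port service with a hypothetical node IP address and node port:
$ ssh cloud-user@192.0.2.10 -p 32222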
6.4.5. Using a secondary network for SSH access
You can configure a secondary network, attach a virtual machine (VM) to the secondary network interface, and connect to the DHCP-allocated IP address by using SSH.
Secondary networks provide excellent performance because the traffic is not handled by the cluster network stack. However, the VMs are exposed directly to the secondary network and are not protected by firewalls. If a VM is compromised, an intruder could gain access to the secondary network. You must configure appropriate security within the operating system of the VM if you use this method.
See the Multus and SR-IOV documentation in the OpenShift Virtualization Tuning & Scaling Guide for additional information about networking options.
Prerequisites
- You configured a secondary network.
- You created a network attachment definition.
6.4.5.1. Configuring a VM network interface by using the web console
You can configure a network interface for a virtual machine (VM) by using the Red Hat OpenShift Service on AWS web console.
Prerequisites
- You created a network attachment definition for the network.
Procedure
- Navigate to Virtualization → VirtualMachines.
- Click a VM to view the VirtualMachine details page.
- On the Configuration tab, click the Network interfaces tab.
- Click Add network interface.
- Enter the interface name and select the network attachment definition from the Network list.
- Click Save.
- Restart the VM to apply the changes.
6.4.5.2. Connecting to a VM attached to a secondary network by using SSH
You can connect to a virtual machine (VM) attached to a secondary network by using SSH.
Prerequisites
- You attached a VM to a secondary network with a DHCP server.
- You have an SSH client installed.
Procedure
Obtain the IP address of the VM by running the following command:
$ oc describe vm <vm_name> -n <namespace>
Example output
# ...
Interfaces:
  Interface Name:  eth0
  Ip Address:      10.244.0.37/24
  Ip Addresses:
    10.244.0.37/24
    fe80::858:aff:fef4:25/64
  Mac:             0a:58:0a:f4:00:25
  Name:            default
# ...
Connect to the VM by running the following command:
$ ssh <user_name>@<ip_address> -i <ssh_key>
Example
$ ssh cloud-user@10.244.0.37 -i ~/.ssh/id_rsa_cloud-user
6.5. Editing virtual machines
You can update a virtual machine (VM) configuration by using the Red Hat OpenShift Service on AWS web console. You can edit the YAML configuration or update settings on the VirtualMachine details page.
You can also edit a VM by using the command line.
6.5.1. Hot plugging memory on a virtual machine
You can increase or decrease the amount of memory allocated to a virtual machine (VM) without having to restart the VM by using the Red Hat OpenShift Service on AWS web console.
Procedure
- Navigate to Virtualization → VirtualMachines.
- Select the required VM to open the VirtualMachine details page.
- On the Configuration tab, click Edit CPU|Memory.
- Enter the desired amount of memory and click Save.
The system applies these changes immediately. If the VM is migratable, a live migration is triggered. If not, or if the changes cannot be live-updated, a RestartRequired condition is added to the VM.
Linux guests require a kernel version of 5.16 or later and Windows guests require the latest viomem drivers.
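For reference, the memory size corresponds to the spec.template.spec.domain.memory.guest field of the VirtualMachine manifest (per the KubeVirt API); a minimal sketch with an illustrative value:
spec:
  template:
    spec:
      domain:
        memory:
          guest: 4Gi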
6.5.2. Hot plugging CPUs on a virtual machine
You can increase or decrease the number of CPU sockets allocated to a virtual machine (VM) without having to restart the VM by using the Red Hat OpenShift Service on AWS web console.
Procedure
- Navigate to Virtualization → VirtualMachines.
- Select the required VM to open the VirtualMachine details page.
- On the Configuration tab, click Edit CPU|Memory.
- Select the vCPU radio button.
Enter the desired number of vCPU sockets and click Save.
If the VM is migratable, a live migration is triggered. If not, or if the changes cannot be live-updated, a RestartRequired condition is added to the VM.
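For reference, the socket count corresponds to the spec.template.spec.domain.cpu.sockets field of the VirtualMachine manifest (per the KubeVirt API); a minimal sketch with an illustrative value:
spec:
  template:
    spec:
      domain:
        cpu:
          sockets: 2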
6.5.3. Editing a virtual machine by using the command line
You can edit a virtual machine (VM) by using the command line.
Prerequisites
- You installed the oc CLI.
Procedure
Obtain the virtual machine configuration by running the following command:
$ oc edit vm <vm_name>
- Edit the YAML configuration.
If you edit a running virtual machine, you need to do one of the following:
- Restart the virtual machine.
Run the following command for the new configuration to take effect:
$ oc apply -f <vm.yaml> -n <namespace>
6.5.4. Adding a disk to a virtual machine
You can add a virtual disk to a virtual machine (VM) by using the Red Hat OpenShift Service on AWS web console.
Procedure
- Navigate to Virtualization → VirtualMachines in the web console.
- Select a VM to open the VirtualMachine details page.
- On the Disks tab, click Add disk.
Specify the Source, Name, Size, Type, Interface, and Storage Class.
- Optional: You can enable preallocation if you use a blank disk source and require maximum write performance when creating data volumes. To do so, select the Enable preallocation checkbox.
- Optional: You can clear Apply optimized StorageProfile settings to change the Volume Mode and Access Mode for the virtual disk. If you do not specify these parameters, the system uses the default values from the kubevirt-storage-class-defaults config map.
- Click Add.
If the VM is running, you must restart the VM to apply the change.
6.5.4.1. Storage fields
Field | Description
---|---
Blank (creates PVC) | Create an empty disk.
Import via URL (creates PVC) | Import content via URL (HTTP or HTTPS endpoint).
Use an existing PVC | Use a PVC that is already available in the cluster.
Clone existing PVC (creates PVC) | Select an existing PVC available in the cluster and clone it.
Import via Registry (creates PVC) | Import content via container registry.
Name | Name of the disk. The name can contain lowercase letters (a-z), numbers (0-9), hyphens (-), and periods (.), up to a maximum of 253 characters. The first and last characters must be alphanumeric. The name must not contain uppercase letters, spaces, or special characters.
Size | Size of the disk in GiB.
Type | Type of disk. Example: Disk or CD-ROM
Interface | Type of disk device. Supported interfaces are virtIO, SATA, and SCSI.
Storage Class | The storage class that is used to create the disk.
Advanced storage settings
The following advanced storage settings are optional and available for Blank, Import via URL, and Clone existing PVC disks.
If you do not specify these parameters, the system uses the default storage profile values.
Parameter | Option | Parameter description
---|---|---
Volume Mode | Filesystem | Stores the virtual disk on a file system-based volume.
Volume Mode | Block | Stores the virtual disk directly on the block volume. Only use Block if the underlying storage supports it.
Access Mode | ReadWriteOnce (RWO) | Volume can be mounted as read-write by a single node.
Access Mode | ReadWriteMany (RWX) | Volume can be mounted as read-write by many nodes at one time. Note: This mode is required for live migration.
6.5.5. Mounting a Windows driver disk on a virtual machine
You can mount a Windows driver disk on a virtual machine (VM) by using the Red Hat OpenShift Service on AWS web console.
Procedure
- Navigate to Virtualization → VirtualMachines.
- Select the required VM to open the VirtualMachine details page.
- On the Configuration tab, click Storage.
Select the Mount Windows drivers disk checkbox.
The Windows driver disk is displayed in the list of mounted disks.
6.5.6. Adding a secret, config map, or service account to a virtual machine
You add a secret, config map, or service account to a virtual machine by using the Red Hat OpenShift Service on AWS web console.
These resources are added to the virtual machine as disks. You then mount the secret, config map, or service account as you would mount any other disk.
If the virtual machine is running, changes do not take effect until you restart the virtual machine. The newly added resources are marked as pending changes at the top of the page.
Prerequisites
- The secret, config map, or service account that you want to add must exist in the same namespace as the target virtual machine.
Procedure
- Click Virtualization → VirtualMachines from the side menu.
- Select a virtual machine to open the VirtualMachine details page.
- Click Configuration → Environment.
- Click Add Config Map, Secret or Service Account.
- Click Select a resource and select a resource from the list. A six character serial number is automatically generated for the selected resource.
- Optional: Click Reload to revert the environment to its last saved state.
- Click Save.
Verification
- On the VirtualMachine details page, click Configuration → Disks and verify that the resource is displayed in the list of disks.
- Restart the virtual machine by clicking Actions → Restart.
You can now mount the secret, config map, or service account as you would mount any other disk.
6.6. Editing boot order
You can update the values for a boot order list by using the web console or the CLI.
With Boot Order in the Virtual Machine Overview page, you can:
- Select a disk or network interface controller (NIC) and add it to the boot order list.
- Edit the order of the disks or NICs in the boot order list.
- Remove a disk or NIC from the boot order list, and return it to the inventory of bootable sources.
6.6.1. Adding items to a boot order list in the web console
Add items to a boot order list by using the web console.
Procedure
- Click Virtualization → VirtualMachines from the side menu.
- Select a virtual machine to open the VirtualMachine details page.
- Click the Details tab.
- Click the pencil icon that is located on the right side of Boot Order. If a YAML configuration does not exist, or if this is the first time that you are creating a boot order list, the following message displays: No resource selected. VM will attempt to boot from disks by order of appearance in YAML file.
- Click Add Source and select a bootable disk or network interface controller (NIC) for the virtual machine.
- Add any additional disks or NICs to the boot order list.
- Click Save.
If the virtual machine is running, changes to Boot Order will not take effect until you restart the virtual machine.
You can view pending changes by clicking View Pending Changes on the right side of the Boot Order field. The Pending Changes banner at the top of the page displays a list of all changes that will be applied when the virtual machine restarts.
6.6.2. Editing a boot order list in the web console
Edit the boot order list in the web console.
Procedure
- Click Virtualization → VirtualMachines from the side menu.
- Select a virtual machine to open the VirtualMachine details page.
- Click the Details tab.
- Click the pencil icon that is located on the right side of Boot Order.
Choose the appropriate method to move the item in the boot order list:
- If you do not use a screen reader, hover over the arrow icon next to the item that you want to move, drag the item up or down, and drop it in a location of your choice.
- If you use a screen reader, press the Up Arrow key or Down Arrow key to move the item in the boot order list. Then, press the Tab key to drop the item in a location of your choice.
- Click Save.
If the virtual machine is running, changes to the boot order list will not take effect until you restart the virtual machine.
You can view pending changes by clicking View Pending Changes on the right side of the Boot Order field. The Pending Changes banner at the top of the page displays a list of all changes that will be applied when the virtual machine restarts.
6.6.3. Editing a boot order list in the YAML configuration file
Edit the boot order list in a YAML configuration file by using the CLI.
Procedure
Open the YAML configuration file for the virtual machine by running the following command:
$ oc edit vm <vm_name> -n <namespace>
Edit the YAML file and modify the values for the boot order associated with a disk or network interface controller (NIC). For example:
disks:
- bootOrder: 1 1
  disk:
    bus: virtio
  name: containerdisk
- disk:
    bus: virtio
  name: cloudinitdisk
- cdrom:
    bus: virtio
  name: cd-drive-1
interfaces:
- bootOrder: 2 2
  macAddress: '02:96:c4:00:00'
  masquerade: {}
  name: default
1: The boot order value specified for the disk.
2: The boot order value specified for the network interface controller.
- Save the YAML file.
6.6.4. Removing items from a boot order list in the web console
Remove items from a boot order list by using the web console.
Procedure
- Click Virtualization → VirtualMachines from the side menu.
- Select a virtual machine to open the VirtualMachine details page.
- Click the Details tab.
- Click the pencil icon that is located on the right side of Boot Order.
- Click the Remove icon next to the item. The item is removed from the boot order list and saved in the list of available boot sources. If you remove all items from the boot order list, the following message displays: No resource selected. VM will attempt to boot from disks by order of appearance in YAML file.
If the virtual machine is running, changes to Boot Order will not take effect until you restart the virtual machine.
You can view pending changes by clicking View Pending Changes on the right side of the Boot Order field. The Pending Changes banner at the top of the page displays a list of all changes that will be applied when the virtual machine restarts.
6.7. Deleting virtual machines
You can delete a virtual machine from the web console or by using the oc
command line interface.
6.7.1. Deleting a virtual machine using the web console
Deleting a virtual machine permanently removes it from the cluster.
Procedure
- In the Red Hat OpenShift Service on AWS console, click Virtualization → VirtualMachines from the side menu.
- Click the Options menu beside a virtual machine and select Delete. Alternatively, click the virtual machine name to open the VirtualMachine details page and click Actions → Delete.
- Optional: Select With grace period or clear Delete disks.
- Click Delete to permanently delete the virtual machine.
6.7.2. Deleting a virtual machine by using the CLI
You can delete a virtual machine by using the oc
command line interface (CLI). The oc
client enables you to perform actions on multiple virtual machines.
Prerequisites
- Identify the name of the virtual machine that you want to delete.
Procedure
Delete the virtual machine by running the following command:
$ oc delete vm <vm_name>
Note: This command only deletes a VM in the current project. Specify the -n <project_name> option if the VM you want to delete is in a different project or namespace.
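For example, to delete a VM in a specific project (hypothetical names):
$ oc delete vm example-vm -n example-namespace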
6.8. Exporting virtual machines
You can export a virtual machine (VM) and its associated disks in order to import a VM into another cluster or to analyze the volume for forensic purposes.
You create a VirtualMachineExport
custom resource (CR) by using the command line interface.
Alternatively, you can use the virtctl vmexport
command to create a VirtualMachineExport
CR and to download exported volumes.
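For example, the following sketch creates the export and downloads a volume in one step; the export name, VM name, and output file are placeholders:
$ virtctl vmexport download example-export --vm=example-vm --output=disk.img.gz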
You can migrate virtual machines between OpenShift Virtualization clusters by using the Migration Toolkit for Virtualization.
6.8.1. Creating a VirtualMachineExport custom resource
You can create a VirtualMachineExport
custom resource (CR) to export the following objects:
- Virtual machine (VM): Exports the persistent volume claims (PVCs) of a specified VM.
- VM snapshot: Exports PVCs contained in a VirtualMachineSnapshot CR.
- PVC: Exports a PVC. If the PVC is used by another pod, such as the virt-launcher pod, the export remains in a Pending state until the PVC is no longer in use.
The VirtualMachineExport CR creates internal and external links for the exported volumes. Internal links are valid within the cluster. External links can be accessed by using an Ingress or Route.
The export server supports the following file formats:
- raw: Raw disk image file.
- gzip: Compressed disk image file.
- dir: PVC directory and files.
- tar.gz: Compressed PVC file.
Prerequisites
- The VM must be shut down for a VM export.
Procedure
Create a VirtualMachineExport manifest to export a volume from a VirtualMachine, VirtualMachineSnapshot, or PersistentVolumeClaim CR according to the following example and save it as example-export.yaml:
VirtualMachineExport example
apiVersion: export.kubevirt.io/v1beta1
kind: VirtualMachineExport
metadata:
  name: example-export
spec:
  source:
    apiGroup: "kubevirt.io" 1
    kind: VirtualMachine 2
    name: example-vm
  ttlDuration: 1h 3
1: Specify the appropriate API group: "kubevirt.io" for VirtualMachine, "snapshot.kubevirt.io" for VirtualMachineSnapshot, or "" for PersistentVolumeClaim.
2: Specify VirtualMachine, VirtualMachineSnapshot, or PersistentVolumeClaim.
3: Optional. The default duration is 2 hours.
Create the VirtualMachineExport CR:
$ oc create -f example-export.yaml
Get the VirtualMachineExport CR:
$ oc get vmexport example-export -o yaml
The internal and external links for the exported volumes are displayed in the status stanza:
Output example
apiVersion: export.kubevirt.io/v1beta1
kind: VirtualMachineExport
metadata:
  name: example-export
  namespace: example
spec:
  source:
    apiGroup: ""
    kind: PersistentVolumeClaim
    name: example-pvc
  tokenSecretRef: example-token
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2022-06-21T14:10:09Z"
    reason: podReady
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2022-06-21T14:09:02Z"
    reason: pvcBound
    status: "True"
    type: PVCReady
  links:
    external: 1
      cert: |-
        -----BEGIN CERTIFICATE-----
        ...
        -----END CERTIFICATE-----
      volumes:
      - formats:
        - format: raw
          url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1beta1/namespaces/example/virtualmachineexports/example-export/volumes/example-disk/disk.img
        - format: gzip
          url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1beta1/namespaces/example/virtualmachineexports/example-export/volumes/example-disk/disk.img.gz
        name: example-disk
    internal: 2
      cert: |-
        -----BEGIN CERTIFICATE-----
        ...
        -----END CERTIFICATE-----
      volumes:
      - formats:
        - format: raw
          url: https://virt-export-example-export.example.svc/volumes/example-disk/disk.img
        - format: gzip
          url: https://virt-export-example-export.example.svc/volumes/example-disk/disk.img.gz
        name: example-disk
  phase: Ready
  serviceName: virt-export-example-export
1: External links are accessible from outside the cluster by using an Ingress or Route.
2: Internal links are valid only within the cluster.
6.8.2. Accessing exported virtual machine manifests
After you export a virtual machine (VM) or snapshot, you can get the VirtualMachine manifest and related information from the export server.
Prerequisites
You exported a virtual machine or VM snapshot by creating a VirtualMachineExport custom resource (CR).
Note: VirtualMachineExport objects that have the spec.source.kind: PersistentVolumeClaim parameter do not generate virtual machine manifests.
Procedure
To access the manifests, you must first copy the certificates from the source cluster to the target cluster.
- Log in to the source cluster.
Save the certificates to the cacert.crt file by running the following command:
$ oc get vmexport <export_name> -o jsonpath={.status.links.external.cert} > cacert.crt 1
1: Replace <export_name> with the metadata.name value from the VirtualMachineExport object.
- Copy the cacert.crt file to the target cluster.
Decode the token in the source cluster and save it to the token_decode file by running the following command:
$ oc get secret export-token-<export_name> -o jsonpath={.data.token} | base64 --decode > token_decode 1
1: Replace <export_name> with the metadata.name value from the VirtualMachineExport object.
- Copy the token_decode file to the target cluster.
Get the VirtualMachineExport custom resource by running the following command:
$ oc get vmexport <export_name> -o yaml
Review the status.links stanza, which is divided into external and internal sections. Note the manifests.url fields within each section:
Example output
apiVersion: export.kubevirt.io/v1beta1
kind: VirtualMachineExport
metadata:
  name: example-export
spec:
  source:
    apiGroup: "kubevirt.io"
    kind: VirtualMachine
    name: example-vm
  tokenSecretRef: example-token
status:
  #...
  links:
    external:
      #...
      manifests:
      - type: all
        url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1beta1/namespaces/example/virtualmachineexports/example-export/external/manifests/all 1
      - type: auth-header-secret
        url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1beta1/namespaces/example/virtualmachineexports/example-export/external/manifests/secret 2
    internal:
      #...
      manifests:
      - type: all
        url: https://virt-export-export-pvc.default.svc/internal/manifests/all 3
      - type: auth-header-secret
        url: https://virt-export-export-pvc.default.svc/internal/manifests/secret
  phase: Ready
  serviceName: virt-export-example-export
1: Contains the VirtualMachine manifest, DataVolume manifest, if present, and a ConfigMap manifest that contains the public certificate for the external URL's ingress or route.
2: Contains a secret containing a header that is compatible with Containerized Data Importer (CDI). The header contains a text version of the export token.
3: Contains the VirtualMachine manifest, DataVolume manifest, if present, and a ConfigMap manifest that contains the certificate for the internal URL's export server.
- Log in to the target cluster.
Get the Secret manifest by running the following command:
$ curl --cacert cacert.crt <secret_manifest_url> -H \ 1
  "x-kubevirt-export-token:token_decode" -H \ 2
  "Accept:application/yaml"
For example:
$ curl --cacert cacert.crt https://vmexport-proxy.test.net/api/export.kubevirt.io/v1beta1/namespaces/example/virtualmachineexports/example-export/external/manifests/secret -H "x-kubevirt-export-token:token_decode" -H "Accept:application/yaml"
Get the manifests of type: all, such as the ConfigMap and VirtualMachine manifests, by running the following command:
$ curl --cacert cacert.crt <all_manifest_url> -H \ 1
  "x-kubevirt-export-token:token_decode" -H \ 2
  "Accept:application/yaml"
For example:
$ curl --cacert cacert.crt https://vmexport-proxy.test.net/api/export.kubevirt.io/v1beta1/namespaces/example/virtualmachineexports/example-export/external/manifests/all -H "x-kubevirt-export-token:token_decode" -H "Accept:application/yaml"
Next steps
- You can now create the ConfigMap and VirtualMachine objects on the target cluster by using the exported manifests.
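As a minimal sketch, assuming the cacert.crt and token_decode files from the previous procedure, you could fetch the manifests and apply them in one pipeline. Reading the token with $(cat token_decode) is an illustrative choice, not a documented requirement:
$ curl --cacert cacert.crt <all_manifest_url> \
  -H "x-kubevirt-export-token:$(cat token_decode)" \
  -H "Accept:application/yaml" | oc apply -f -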
6.9. Managing virtual machine instances
If you have standalone virtual machine instances (VMIs) that were created independently outside of the OpenShift Virtualization environment, you can manage them by using the web console or by using oc or virtctl commands from the command-line interface (CLI).
The virtctl command provides more virtualization options than the oc command. For example, you can use virtctl to pause a VM or expose a port.
6.9.1. About virtual machine instances
A virtual machine instance (VMI) is a representation of a running virtual machine (VM). When a VMI is owned by a VM or by another object, you manage it through its owner in the web console or by using the oc command-line interface (CLI).
A standalone VMI is created and started independently with a script, through automation, or by using other methods in the CLI. In your environment, you might have standalone VMIs that were developed and started outside of the OpenShift Virtualization environment. You can continue to manage those standalone VMIs by using the CLI. You can also use the web console for specific tasks associated with standalone VMIs:
- List standalone VMIs and their details.
- Edit labels and annotations for a standalone VMI.
- Delete a standalone VMI.
When you delete a VM, the associated VMI is automatically deleted. You delete a standalone VMI directly because it is not owned by VMs or other objects.
Before you uninstall OpenShift Virtualization, list and view the standalone VMIs by using the CLI or the web console. Then, delete any outstanding VMIs.
When you edit a VM, some settings might be applied to the VMIs dynamically, without the need for a restart. Any change made to a VM object that cannot be applied to the VMIs dynamically triggers the RestartRequired VM condition. Changes are effective on the next reboot, and the condition is removed.
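To check whether a VM currently reports this condition, you could query its status with a jsonpath filter; the VM name is a placeholder:
$ oc get vm example-vm -o jsonpath='{.status.conditions[?(@.type=="RestartRequired")]}'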
6.9.2. Listing all virtual machine instances using the CLI
You can list all virtual machine instances (VMIs) in your cluster, including standalone VMIs and those owned by virtual machines, by using the oc command-line interface (CLI).
Procedure
List all VMIs by running the following command:
$ oc get vmis -A
6.9.3. Listing standalone virtual machine instances using the web console
Using the web console, you can list and view standalone virtual machine instances (VMIs) in your cluster that are not owned by virtual machines (VMs).
VMIs that are owned by VMs or other objects are not displayed in the web console. The web console displays only standalone VMIs. If you want to list all VMIs in your cluster, you must use the CLI.
Procedure
- Click Virtualization → VirtualMachines from the side menu. You can identify a standalone VMI by a dark colored badge next to its name.
6.9.4. Editing a standalone virtual machine instance using the web console
You can edit the annotations and labels of a standalone virtual machine instance (VMI) using the web console. Other fields are not editable.
Procedure
- In the Red Hat OpenShift Service on AWS console, click Virtualization → VirtualMachines from the side menu.
- Select a standalone VMI to open the VirtualMachineInstance details page.
- On the Details tab, click the pencil icon beside Annotations or Labels.
- Make the relevant changes and click Save.
6.9.5. Deleting a standalone virtual machine instance using the CLI
You can delete a standalone virtual machine instance (VMI) by using the oc command-line interface (CLI).
Prerequisites
- Identify the name of the VMI that you want to delete.
Procedure
Delete the VMI by running the following command:
$ oc delete vmi <vmi_name>
6.9.6. Deleting a standalone virtual machine instance using the web console
Delete a standalone virtual machine instance (VMI) from the web console.
Procedure
- In the Red Hat OpenShift Service on AWS web console, click Virtualization → VirtualMachines from the side menu.
- Click Actions → Delete VirtualMachineInstance.
- In the confirmation pop-up window, click Delete to permanently delete the standalone VMI.
6.10. Controlling virtual machine states
You can stop, start, restart, and unpause virtual machines from the web console.
You can use virtctl to manage virtual machine states and perform other actions from the CLI. For example, you can use virtctl to force stop a VM or expose a port.
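As a quick reference, the following virtctl invocations cover the common state operations. The VM name, service name, and ports are placeholders; confirm the flags against your virtctl version:
$ virtctl start example-vm
$ virtctl stop example-vm
$ virtctl stop example-vm --force --grace-period=0
$ virtctl restart example-vm
$ virtctl pause vm example-vm
$ virtctl unpause vm example-vm
$ virtctl expose vm example-vm --name=example-service --port=27017 --target-port=22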
6.10.1. Starting a virtual machine
You can start a virtual machine from the web console.
Procedure
- Click Virtualization → VirtualMachines from the side menu.
- Find the row that contains the virtual machine that you want to start.
- Navigate to the appropriate menu for your use case:
  - To stay on this page, where you can perform actions on multiple virtual machines, click the Options menu located at the far right end of the row and click Start VirtualMachine.
  - To view comprehensive information about the selected virtual machine before you start it, access the VirtualMachine details page by clicking the name of the virtual machine, and then click Actions → Start.
When you start a virtual machine that is provisioned from a URL source for the first time, the virtual machine has a status of Importing while OpenShift Virtualization imports the container from the URL endpoint. Depending on the size of the image, this process might take several minutes.
6.10.2. Stopping a virtual machine
You can stop a virtual machine from the web console.
Procedure
- Click Virtualization → VirtualMachines from the side menu.
- Find the row that contains the virtual machine that you want to stop.
- Navigate to the appropriate menu for your use case:
  - To stay on this page, where you can perform actions on multiple virtual machines, click the Options menu located at the far right end of the row and click Stop VirtualMachine.
  - To view comprehensive information about the selected virtual machine before you stop it, access the VirtualMachine details page by clicking the name of the virtual machine, and then click Actions → Stop.
6.10.3. Restarting a virtual machine
You can restart a running virtual machine from the web console.
To avoid errors, do not restart a virtual machine while it has a status of Importing.
Procedure
- Click Virtualization → VirtualMachines from the side menu.
- Find the row that contains the virtual machine that you want to restart.
- Navigate to the appropriate menu for your use case:
  - To stay on this page, where you can perform actions on multiple virtual machines, click the Options menu located at the far right end of the row and click Restart.
  - To view comprehensive information about the selected virtual machine before you restart it, access the VirtualMachine details page by clicking the name of the virtual machine, and then click Actions → Restart.
6.10.4. Pausing a virtual machine
You can pause a virtual machine from the web console.
Procedure
- Click Virtualization → VirtualMachines from the side menu.
- Find the row that contains the virtual machine that you want to pause.
- Navigate to the appropriate menu for your use case:
  - To stay on this page, where you can perform actions on multiple virtual machines, click the Options menu located at the far right end of the row and click Pause VirtualMachine.
  - To view comprehensive information about the selected virtual machine before you pause it, access the VirtualMachine details page by clicking the name of the virtual machine, and then click Actions → Pause.
6.10.5. Unpausing a virtual machine
You can unpause a paused virtual machine from the web console.
Prerequisites
- At least one of your virtual machines must have a status of Paused.
Procedure
- Click Virtualization → VirtualMachines from the side menu.
- Find the row that contains the virtual machine that you want to unpause.
- Navigate to the appropriate menu for your use case:
  - To stay on this page, where you can perform actions on multiple virtual machines, click the Options menu located at the far right end of the row and click Unpause VirtualMachine.
  - To view comprehensive information about the selected virtual machine before you unpause it, access the VirtualMachine details page by clicking the name of the virtual machine, and then click Actions → Unpause.
6.11. Using virtual Trusted Platform Module devices
Add a virtual Trusted Platform Module (vTPM) device to a new or existing virtual machine by editing the VirtualMachine (VM) or VirtualMachineInstance (VMI) manifest.
6.11.1. About vTPM devices
A virtual Trusted Platform Module (vTPM) device functions like a physical Trusted Platform Module (TPM) hardware chip.
You can use a vTPM device with any operating system, but Windows 11 requires the presence of a TPM chip to install or boot. A vTPM device allows VMs created from a Windows 11 image to function without a physical TPM chip.
If you do not enable vTPM, then the VM does not recognize a TPM device, even if the node has one.
A vTPM device also protects virtual machines by storing secrets without physical hardware. OpenShift Virtualization supports persisting vTPM device state by using Persistent Volume Claims (PVCs) for VMs. You must specify the storage class to be used by the PVC by setting the vmStateStorageClass attribute in the HyperConverged custom resource (CR):
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
spec:
  vmStateStorageClass: <storage_class_name>
# ...
The storage class must be of type Filesystem and support the ReadWriteMany (RWX) access mode.
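For example, assuming a Filesystem-mode storage class named example-rwx-sc that supports RWX (the class name is a placeholder), you could set the attribute with a merge patch instead of editing the CR interactively:
$ oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv \
  --type merge -p '{"spec":{"vmStateStorageClass":"example-rwx-sc"}}'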
6.11.2. Adding a vTPM device to a virtual machine
Adding a virtual Trusted Platform Module (vTPM) device to a virtual machine (VM) allows you to run a VM created from a Windows 11 image without a physical TPM device. A vTPM device also stores secrets for that VM.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have configured a Persistent Volume Claim (PVC) to use a storage class of type Filesystem that supports the ReadWriteMany (RWX) access mode. This is necessary for the vTPM device data to persist across VM reboots.
Procedure
Run the following command to update the VM configuration:
$ oc edit vm <vm_name> -n <namespace>
Edit the VM specification to add the vTPM device. For example:
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm
spec:
  template:
    spec:
      domain:
        devices:
          tpm: 1
            persistent: true 2
# ...
1: Adds the vTPM device to the VM.
2: Stores the vTPM device state in a PVC so that it persists across VM restarts.
- To apply your changes, save and exit the editor.
- Optional: If you edited a running virtual machine, you must restart it for the changes to take effect.
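If a restart is required, one way to perform it from the CLI, using the same placeholders as above, is:
$ virtctl restart <vm_name> -n <namespace>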
6.12. Managing virtual machines with OpenShift Pipelines
Red Hat OpenShift Pipelines is a Kubernetes-native CI/CD framework that allows developers to design and run each step of the CI/CD pipeline in its own container.
The Scheduling, Scale, and Performance (SSP) Operator integrates OpenShift Virtualization with OpenShift Pipelines. The SSP Operator includes tasks and example pipelines that allow you to:
- Create and manage virtual machines (VMs), persistent volume claims (PVCs), and data volumes
- Run commands in VMs
- Manipulate disk images with libguestfs tools
6.12.1. Prerequisites
- You have access to a Red Hat OpenShift Service on AWS cluster with cluster-admin permissions.
- You have installed the OpenShift CLI (oc).
- You have installed OpenShift Pipelines.
6.12.2. Virtual machine tasks supported by the SSP Operator
The following table shows the tasks that are included as part of the SSP Operator.
Task | Description |
---|---|
create-vm-from-manifest | Create a virtual machine from a provided manifest or with virtctl. |
create-vm-from-template | Create a virtual machine from a template. |
copy-template | Copy a virtual machine template. |
modify-vm-template | Modify a virtual machine template. |
modify-data-object | Create or delete data volumes or data sources. |
cleanup-vm | Run a script or a command in a virtual machine and stop or delete the virtual machine afterward. |
disk-virt-customize | Use the virt-customize tool to run a customization script on a target PVC. |
disk-virt-sysprep | Use the virt-sysprep tool to seal and generalize a virtual machine image. |
wait-for-vmi-status | Wait for a specific status of a virtual machine instance and fail or succeed based on the status. |
Virtual machine creation in pipelines now uses ClusterInstanceType and ClusterPreference instead of template-based tasks, which have been deprecated. The create-vm-from-template, copy-template, and modify-vm-template commands remain available but are not used in default pipeline tasks.
6.12.3. Windows EFI installer pipeline
You can run the Windows EFI installer pipeline by using the web console or CLI.
The Windows EFI installer pipeline installs Windows 10, Windows 11, or Windows Server 2022 into a new data volume from a Windows installation image (ISO file). A custom answer file is used to run the installation process.
The Windows EFI installer pipeline uses a config map file with sysprep predefined by Red Hat OpenShift Service on AWS and suitable for Microsoft ISO files. For ISO files pertaining to different Windows editions, it might be necessary to create a new config map file with a system-specific sysprep definition.
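A minimal sketch of creating such a config map, assuming you have a customized answer file named autounattend.xml in the current directory; the config map name is illustrative:
$ oc create configmap custom-sysprep --from-file=autounattend.xml=./autounattend.xml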
6.12.3.1. Running the example pipelines using the web console
You can run the example pipelines from the Pipelines menu in the web console.
Procedure
- Click Pipelines → Pipelines in the side menu.
- Select a pipeline to open the Pipeline details page.
- From the Actions list, select Start. The Start Pipeline dialog is displayed.
- Keep the default values for the parameters and then click Start to run the pipeline. The Details tab tracks the progress of each task and displays the pipeline status.
6.12.3.2. Running the example pipelines using the CLI
Use a PipelineRun resource to run the example pipelines. A PipelineRun object is the running instance of a pipeline. It instantiates a pipeline for execution with specific inputs, outputs, and execution parameters on a cluster. It also creates a TaskRun object for each task in the pipeline.
Procedure
To run the Windows 10 installer pipeline, create the following PipelineRun manifest:
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: windows10-installer-run-
  labels:
    pipelinerun: windows10-installer-run
spec:
  params:
  - name: winImageDownloadURL
    value: <link_to_windows_10_iso> 1
  pipelineRef:
    name: windows10-installer
  taskRunSpecs:
  - pipelineTaskName: copy-template
    serviceAccountName: copy-template-task
  - pipelineTaskName: modify-vm-template
    serviceAccountName: modify-vm-template-task
  - pipelineTaskName: create-vm-from-template
    serviceAccountName: create-vm-from-template-task
  - pipelineTaskName: wait-for-vmi-status
    serviceAccountName: wait-for-vmi-status-task
  - pipelineTaskName: create-base-dv
    serviceAccountName: modify-data-object-task
  - pipelineTaskName: cleanup-vm
    serviceAccountName: cleanup-vm-task
status: {}
1: Specify the URL for the Windows 10 64-bit ISO file. The product language must be English (United States).
Apply the PipelineRun manifest:
$ oc apply -f windows10-installer-run.yaml
To run the Windows 10 customize pipeline, create the following PipelineRun manifest:
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: windows10-customize-run-
  labels:
    pipelinerun: windows10-customize-run
spec:
  params:
  - name: allowReplaceGoldenTemplate
    value: true
  - name: allowReplaceCustomizationTemplate
    value: true
  pipelineRef:
    name: windows10-customize
  taskRunSpecs:
  - pipelineTaskName: copy-template-customize
    serviceAccountName: copy-template-task
  - pipelineTaskName: modify-vm-template-customize
    serviceAccountName: modify-vm-template-task
  - pipelineTaskName: create-vm-from-template
    serviceAccountName: create-vm-from-template-task
  - pipelineTaskName: wait-for-vmi-status
    serviceAccountName: wait-for-vmi-status-task
  - pipelineTaskName: create-base-dv
    serviceAccountName: modify-data-object-task
  - pipelineTaskName: cleanup-vm
    serviceAccountName: cleanup-vm-task
  - pipelineTaskName: copy-template-golden
    serviceAccountName: copy-template-task
  - pipelineTaskName: modify-vm-template-golden
    serviceAccountName: modify-vm-template-task
status: {}
Apply the PipelineRun manifest:
$ oc apply -f windows10-customize-run.yaml
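To watch a run from the CLI, you could list the PipelineRun by the label set in the manifests above; the -w flag streams status updates:
$ oc get pipelinerun -l pipelinerun=windows10-installer-run -w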
6.12.4. Additional resources
6.13. Advanced virtual machine management
6.13.1. Working with resource quotas for virtual machines
Create and manage resource quotas for virtual machines.
6.13.1.1. Setting resource quota limits for virtual machines
Resource quotas that only use requests automatically work with virtual machines (VMs). If your resource quota uses limits, you must manually set resource limits on VMs. Resource limits must be at least 100 MiB larger than resource requests.
Procedure
Set limits for a VM by editing the VirtualMachine manifest. For example:
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: with-limits
spec:
  running: false
  template:
    spec:
      domain:
        # ...
        resources:
          requests:
            memory: 128Mi
          limits:
            memory: 256Mi 1
1: This configuration is supported because the limits.memory value is at least 100Mi larger than the requests.memory value.
- Save the VirtualMachine manifest.
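For context, a namespace quota that enforces limits, and therefore requires explicit limits on every VM in the namespace, might look like the following sketch; the quota name and values are illustrative:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: vm-memory-quota
spec:
  hard:
    requests.memory: 32Gi
    limits.memory: 64Gi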
6.13.1.2. Additional resources
6.13.2. Specifying nodes for virtual machines
You can place virtual machines (VMs) on specific nodes by using node placement rules.
6.13.2.1. About node placement for virtual machines
To ensure that virtual machines (VMs) run on appropriate nodes, you can configure node placement rules. You might want to do this if:
- You have several VMs. To ensure fault tolerance, you want them to run on different nodes.
- You have two chatty VMs. To avoid redundant inter-node routing, you want the VMs to run on the same node.
- Your VMs require specific hardware features that are not present on all available nodes.
- You have a pod that adds capabilities to a node, and you want to place a VM on that node so that it can use those capabilities.
Virtual machine placement relies on any existing node placement rules for workloads. If workloads are excluded from specific nodes on the component level, virtual machines cannot be placed on those nodes.
You can use the following rule types in the spec field of a VirtualMachine manifest:
- nodeSelector: Allows virtual machines to be scheduled on nodes that are labeled with the key-value pair or pairs that you specify in this field. The node must have labels that exactly match all listed pairs.
- affinity: Enables you to use more expressive syntax to set rules that match nodes with virtual machines. For example, you can specify that a rule is a preference, rather than a hard requirement, so that virtual machines are still scheduled if the rule is not satisfied. Pod affinity, pod anti-affinity, and node affinity are supported for virtual machine placement. Pod affinity works for virtual machines because the VirtualMachine workload type is based on the Pod object.
- tolerations: Allows virtual machines to be scheduled on nodes that have matching taints. If a taint is applied to a node, that node only accepts virtual machines that tolerate the taint.
Note: Affinity rules only apply during scheduling. Red Hat OpenShift Service on AWS does not reschedule running workloads if the constraints are no longer met.
6.13.2.2. Node placement examples
The following example YAML file snippets use nodeSelector, affinity, and tolerations fields to customize node placement for virtual machines.
6.13.2.2.1. Example: VM node placement with nodeSelector
In this example, the virtual machine requires a node that has metadata containing both example-key-1 = example-value-1 and example-key-2 = example-value-2 labels.
If there are no nodes that fit this description, the virtual machine is not scheduled.
Example VM manifest
metadata:
  name: example-vm-node-selector
apiVersion: kubevirt.io/v1
kind: VirtualMachine
spec:
  template:
    spec:
      nodeSelector:
        example-key-1: example-value-1
        example-key-2: example-value-2
# ...
6.13.2.2.2. Example: VM node placement with pod affinity and pod anti-affinity
In this example, the VM must be scheduled on a node that has a running pod with the label example-key-1 = example-value-1. If there is no such pod running on any node, the VM is not scheduled.
If possible, the VM is not scheduled on a node that has any pod with the label example-key-2 = example-value-2. However, if all candidate nodes have a pod with this label, the scheduler ignores this constraint.
Example VM manifest
metadata:
  name: example-vm-pod-affinity
apiVersion: kubevirt.io/v1
kind: VirtualMachine
spec:
  template:
    spec:
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution: 1
          - labelSelector:
              matchExpressions:
              - key: example-key-1
                operator: In
                values:
                - example-value-1
            topologyKey: kubernetes.io/hostname
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution: 2
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: example-key-2
                  operator: In
                  values:
                  - example-value-2
              topologyKey: kubernetes.io/hostname
# ...
1: If you use the requiredDuringSchedulingIgnoredDuringExecution rule type, the VM is not scheduled if the constraint is not met.
2: If you use the preferredDuringSchedulingIgnoredDuringExecution rule type, the VM is still scheduled if the constraint is not met, as long as all required constraints are met.
6.13.2.2.3. Example: VM node placement with node affinity
In this example, the VM must be scheduled on a node that has the label example.io/example-key = example-value-1 or the label example.io/example-key = example-value-2. The constraint is met if only one of the labels is present on the node. If neither label is present, the VM is not scheduled.
If possible, the scheduler avoids nodes that have the label example-node-label-key = example-node-label-value. However, if all candidate nodes have this label, the scheduler ignores this constraint.
Example VM manifest
metadata:
  name: example-vm-node-affinity
apiVersion: kubevirt.io/v1
kind: VirtualMachine
spec:
  template:
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution: 1
            nodeSelectorTerms:
            - matchExpressions:
              - key: example.io/example-key
                operator: In
                values:
                - example-value-1
                - example-value-2
          preferredDuringSchedulingIgnoredDuringExecution: 2
          - weight: 1
            preference:
              matchExpressions:
              - key: example-node-label-key
                operator: In
                values:
                - example-node-label-value
# ...
1: If you use the requiredDuringSchedulingIgnoredDuringExecution rule type, the VM is not scheduled if the constraint is not met.
2: If you use the preferredDuringSchedulingIgnoredDuringExecution rule type, the VM is still scheduled if the constraint is not met, as long as all required constraints are met.
6.13.2.2.4. Example: VM node placement with tolerations
In this example, nodes that are reserved for virtual machines are already labeled with the key=virtualization:NoSchedule taint. Because this virtual machine has matching tolerations, it can schedule onto the tainted nodes.
A virtual machine that tolerates a taint is not required to schedule onto a node with that taint.
Example VM manifest
metadata:
  name: example-vm-tolerations
apiVersion: kubevirt.io/v1
kind: VirtualMachine
spec:
  tolerations:
  - key: "key"
    operator: "Equal"
    value: "virtualization"
    effect: "NoSchedule"
# ...
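To reserve nodes this way in the first place, an administrator would apply the matching taint to each node; the node name is a placeholder:
$ oc adm taint nodes <node_name> key=virtualization:NoSchedule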
6.13.2.3. Additional resources
6.13.3. Configuring certificate rotation
Configure certificate rotation parameters to replace existing certificates.
6.13.3.1. Configuring certificate rotation
You can configure certificate rotation during OpenShift Virtualization installation in the web console, or after installation, in the HyperConverged custom resource (CR).
Procedure
Open the HyperConverged CR by running the following command:
$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
Edit the spec.certConfig fields as shown in the following example. To avoid overloading the system, ensure that all values are greater than or equal to 10 minutes. Express all values as strings that comply with the golang ParseDuration format.
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  certConfig:
    ca:
      duration: 48h0m0s
      renewBefore: 24h0m0s 1
    server:
      duration: 24h0m0s 2
      renewBefore: 12h0m0s 3
1: The value of ca.renewBefore must be less than or equal to the value of ca.duration.
2: The value of server.duration must be less than or equal to the value of ca.duration.
3: The value of server.renewBefore must be less than or equal to the value of server.duration.
- Apply the YAML file to your cluster.
6.13.3.2. Troubleshooting certificate rotation parameters
Deleting one or more certConfig values causes them to revert to the default values, unless the default values conflict with one of the following conditions:
- The value of ca.renewBefore must be less than or equal to the value of ca.duration.
- The value of server.duration must be less than or equal to the value of ca.duration.
- The value of server.renewBefore must be less than or equal to the value of server.duration.
If the default values conflict with these conditions, you will receive an error.
If you remove the server.duration value in the following example, the default value of 24h0m0s is greater than the value of ca.duration, conflicting with the specified conditions.
Example
certConfig:
  ca:
    duration: 4h0m0s
    renewBefore: 1h0m0s
  server:
    duration: 4h0m0s
    renewBefore: 4h0m0s
This results in the following error message:
error: hyperconvergeds.hco.kubevirt.io "kubevirt-hyperconverged" could not be patched: admission webhook "validate-hco.kubevirt.io" denied the request: spec.certConfig: ca.duration is smaller than server.duration
The error message only mentions the first conflict. Review all certConfig values before you proceed.
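One way to review the current values before editing is to print only the certConfig stanza, using the same resource name and namespace as in the examples above:
$ oc get hyperconverged kubevirt-hyperconverged -n openshift-cnv \
  -o jsonpath='{.spec.certConfig}'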
6.13.4. Configuring the default CPU model
Use the defaultCPUModel setting in the HyperConverged custom resource (CR) to define a cluster-wide default CPU model.
The virtual machine (VM) CPU model depends on the availability of CPU models within the VM and the cluster.
If the VM does not have a defined CPU model:
- The defaultCPUModel is automatically set using the CPU model defined at the cluster-wide level.
If both the VM and the cluster have a defined CPU model:
- The VM’s CPU model takes precedence.
If neither the VM nor the cluster have a defined CPU model:
- The host-model is automatically set using the CPU model defined at the host level.
6.13.4.1. Configuring the default CPU model
Configure the defaultCPUModel by updating the HyperConverged custom resource (CR). You can change the defaultCPUModel while OpenShift Virtualization is running.
Note: The defaultCPUModel is case sensitive.
Prerequisites
- Install the OpenShift CLI (oc).
Procedure
Open the HyperConverged CR by running the following command:
$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
Add the defaultCPUModel field to the CR and set the value to the name of a CPU model that exists in the cluster:
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  defaultCPUModel: "EPYC"
- Apply the YAML file to your cluster.
6.13.5. Using UEFI mode for virtual machines
You can boot a virtual machine (VM) in Unified Extensible Firmware Interface (UEFI) mode.
6.13.5.1. About UEFI mode for virtual machines
Unified Extensible Firmware Interface (UEFI), like legacy BIOS, initializes hardware components and operating system image files when a computer starts. UEFI supports more modern features and customization options than BIOS, enabling faster boot times.
UEFI stores all the information about initialization and startup in a file with a .efi extension, which is kept on a special partition called the EFI System Partition (ESP). The ESP also contains the boot loader programs for the operating system that is installed on the computer.
6.13.5.2. Booting virtual machines in UEFI mode
You can configure a virtual machine to boot in UEFI mode by editing the VirtualMachine manifest.
Prerequisites
- Install the OpenShift CLI (oc).
Procedure
Edit or create a VirtualMachine manifest file. Use the spec.firmware.bootloader stanza to configure UEFI mode:
Booting in UEFI mode with secure boot active
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  labels:
    special: vm-secureboot
  name: vm-secureboot
spec:
  template:
    metadata:
      labels:
        special: vm-secureboot
    spec:
      domain:
        devices:
          disks:
          - disk:
              bus: virtio
            name: containerdisk
        features:
          acpi: {}
          smm:
            enabled: true 1
        firmware:
          bootloader:
            efi:
              secureBoot: true 2
# ...
1: OpenShift Virtualization requires System Management Mode (SMM) to be enabled for Secure Boot in UEFI mode to occur.
2: OpenShift Virtualization supports a VM with or without Secure Boot when using UEFI mode. If Secure Boot is enabled, then UEFI mode is required. However, UEFI mode can be enabled without using Secure Boot.
Apply the manifest to your cluster by running the following command:
$ oc create -f <file_name>.yaml
6.13.5.3. Enabling persistent EFI
You can enable EFI persistence in a VM by configuring an RWX storage class at the cluster level and adjusting the settings in the EFI section of the VM.
Prerequisites
- You must have cluster administrator privileges.
- You must have a storage class that supports RWX access mode and FS volume mode.
Procedure
Enable the VMPersistentState feature gate by running the following command:
$ oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv \
  --type json -p '[{"op":"replace","path":"/spec/featureGates/VMPersistentState", "value": true}]'
6.13.5.4. Configuring VMs with persistent EFI
You can configure a VM to have EFI persistence enabled by editing its manifest file.
Prerequisites
- The VMPersistentState feature gate is enabled.
Procedure
Edit the VM manifest file and save it to apply the settings:
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: vm
spec:
  template:
    spec:
      domain:
        firmware:
          bootloader:
            efi:
              persistent: true
# ...
6.13.6. Configuring PXE booting for virtual machines
PXE booting, or network booting, is available in OpenShift Virtualization. Network booting allows a computer to boot and load an operating system or other program without requiring a locally attached storage device. For example, you can use it to choose your desired OS image from a PXE server when deploying a new host.
6.13.6.1. PXE booting with a specified MAC address
As an administrator, you can boot a client over the network by first creating a NetworkAttachmentDefinition object for your PXE network. Then, reference the network attachment definition in your virtual machine instance configuration file before you start the virtual machine instance. You can also specify a MAC address in the virtual machine instance configuration file, if required by the PXE server.
Prerequisites
- The PXE server must be connected to the same VLAN as the bridge.
Procedure
Configure a PXE network on the cluster:
Create the network attachment definition file for the PXE network, pxe-net-conf:
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: pxe-net-conf 1
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "pxe-net-conf", 2
      "type": "bridge", 3
      "bridge": "bridge-interface", 4
      "macspoofchk": false, 5
      "vlan": 100, 6
      "disableContainerInterface": true,
      "preserveDefaultVlan": false 7
    }
1: The name for the NetworkAttachmentDefinition object.
2: The name for the configuration. It is recommended to match the configuration name to the name value of the network attachment definition.
3: The actual name of the Container Network Interface (CNI) plugin that provides the network for this network attachment definition. This example uses a Linux bridge CNI plugin. You can also use an OVN-Kubernetes localnet or an SR-IOV CNI plugin.
4: The name of the Linux bridge configured on the node.
5: Optional: A flag to enable the MAC spoof check. When set to true, you cannot change the MAC address of the pod or guest interface. This attribute allows only a single MAC address to exit the pod, which provides security against a MAC spoofing attack.
6: Optional: The VLAN tag. No additional VLAN configuration is required on the node network configuration policy.
7: Optional: Indicates whether the VM connects to the bridge through the default VLAN. The default value is true.
Create the network attachment definition by using the file you created in the previous step:
$ oc create -f pxe-net-conf.yaml
Edit the virtual machine instance configuration file to include the details of the interface and network.
Specify the network and MAC address, if required by the PXE server. If the MAC address is not specified, a value is assigned automatically.
Ensure that bootOrder is set to 1 so that the interface boots first. In this example, the interface is connected to a network called <pxe-net>:
interfaces:
- masquerade: {}
  name: default
- bridge: {}
  name: pxe-net
  macAddress: de:00:00:00:00:de
  bootOrder: 1
Note: Boot order is global for interfaces and disks. Assign a boot device number to the disk to ensure proper booting after operating system provisioning.
Set the disk bootOrder value to 2:
devices:
  disks:
  - disk:
      bus: virtio
    name: containerdisk
    bootOrder: 2
Specify that the network is connected to the previously created network attachment definition. In this scenario, <pxe-net> is connected to the network attachment definition called <pxe-net-conf>:
networks:
- name: default
  pod: {}
- name: pxe-net
  multus:
    networkName: pxe-net-conf
Create the virtual machine instance:
$ oc create -f vmi-pxe-boot.yaml
Example output
virtualmachineinstance.kubevirt.io "vmi-pxe-boot" created
Wait for the virtual machine instance to run:
$ oc get vmi vmi-pxe-boot -o yaml | grep -i phase
phase: Running
View the virtual machine instance using VNC:
$ virtctl vnc vmi-pxe-boot
- Watch the boot screen to verify that the PXE boot is successful.
Log in to the virtual machine instance:
$ virtctl console vmi-pxe-boot
Verification
Verify the interfaces and MAC address on the virtual machine, and that the interface connected to the bridge has the specified MAC address. In this case, eth1 was used for the PXE boot, without an IP address. The other interface, eth0, got an IP address from Red Hat OpenShift Service on AWS.
$ ip addr
Example output
...
3. eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
   link/ether de:00:00:00:00:de brd ff:ff:ff:ff:ff:ff
6.13.6.2. OpenShift Virtualization networking glossary
The following terms are used throughout OpenShift Virtualization documentation:
- Container Network Interface (CNI)
- A Cloud Native Computing Foundation project, focused on container network connectivity. OpenShift Virtualization uses CNI plugins to build upon the basic Kubernetes networking functionality.
- Multus
- A "meta" CNI plugin that allows multiple CNIs to exist so that a pod or virtual machine can use the interfaces it needs.
- Custom resource definition (CRD)
- A Kubernetes API resource that allows you to define custom resources, or an object defined by using the CRD API resource.
- Network attachment definition (NAD)
- A CRD introduced by the Multus project that allows you to attach pods, virtual machines, and virtual machine instances to one or more networks.
- Node network configuration policy (NNCP)
- A CRD introduced by the nmstate project, describing the requested network configuration on nodes. You update the node network configuration, including adding and removing interfaces, by applying a NodeNetworkConfigurationPolicy manifest to the cluster.
6.13.7. Scheduling virtual machines
You can schedule a virtual machine (VM) on a node by ensuring that the VM’s CPU model and policy attribute are matched for compatibility with the CPU models and policy attributes supported by the node.
6.13.7.1. Policy attributes
You can schedule a virtual machine (VM) by specifying a policy attribute and a CPU feature that is matched for compatibility when the VM is scheduled on a node. A policy attribute specified for a VM determines how that VM is scheduled on a node.
Policy attribute | Description |
---|---|
force | The VM is forced to be scheduled on a node. This is true even if the host CPU does not support the VM’s CPU. |
require | Default policy that applies to a VM if the VM is not configured with a specific CPU model and feature specification. If a node is not configured to support CPU node discovery with this default policy attribute or any one of the other policy attributes, VMs are not scheduled on that node. Either the host CPU must support the VM’s CPU or the hypervisor must be able to emulate the supported CPU model. |
optional | The VM is added to a node if that VM is supported by the host’s physical machine CPU. |
disable | The VM cannot be scheduled with CPU node discovery. |
forbid | The VM is not scheduled even if the feature is supported by the host CPU and CPU node discovery is enabled. |
6.13.7.2. Setting a policy attribute and CPU feature
You can set a policy attribute and CPU feature for each virtual machine (VM) to ensure that it is scheduled on a node according to policy and feature. The CPU feature that you set is verified to ensure that it is supported by the host CPU or emulated by the hypervisor.
Procedure
Edit the domain spec of your VM configuration file. The following example sets the CPU feature and the require policy for a virtual machine (VM):
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: myvm
spec:
  template:
    spec:
      domain:
        cpu:
          features:
          - name: apic 1
            policy: require 2
1: The name of the CPU feature for the VM.
2: The policy attribute for the CPU feature.
6.13.7.3. Scheduling virtual machines with the supported CPU model
You can configure a CPU model for a virtual machine (VM) to schedule it on a node where its CPU model is supported.
Procedure
Edit the domain spec of your virtual machine configuration file. The following example shows a specific CPU model defined for a VM:
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: myvm
spec:
  template:
    spec:
      domain:
        cpu:
          model: Conroe 1
1: CPU model for the VM.
6.13.7.4. Scheduling virtual machines with the host model
When the CPU model for a virtual machine (VM) is set to host-model, the VM inherits the CPU model of the node where it is scheduled.
Procedure
Edit the domain spec of your VM configuration file. The following example shows host-model being specified for the virtual machine:
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: myvm
spec:
  template:
    spec:
      domain:
        cpu:
          model: host-model 1
1: The VM inherits the CPU model of the node where it is scheduled.
6.13.7.5. Scheduling virtual machines with a custom scheduler
You can use a custom scheduler to schedule a virtual machine (VM) on a node.
Prerequisites
- A secondary scheduler is configured for your cluster.
Procedure
Add the custom scheduler to the VM configuration by editing the VirtualMachine manifest. For example:
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: vm-fedora
spec:
  running: true
  template:
    spec:
      schedulerName: my-scheduler 1
      domain:
        devices:
          disks:
          - name: containerdisk
            disk:
              bus: virtio
# ...
1: The name of the custom scheduler. If the schedulerName value does not match an existing scheduler, the virt-launcher pod stays in a Pending state until the specified scheduler is found.
Verification
Verify that the VM is using the custom scheduler specified in the VirtualMachine manifest by checking the virt-launcher pod events:
View the list of pods in your cluster by entering the following command:
$ oc get pods
Example output
NAME READY STATUS RESTARTS AGE virt-launcher-vm-fedora-dpc87 2/2 Running 0 24m
Run the following command to display the pod events:
$ oc describe pod virt-launcher-vm-fedora-dpc87
The value of the From field in the output verifies that the scheduler name matches the custom scheduler specified in the VirtualMachine manifest:
Example output
[...]
Events:
  Type    Reason     Age   From          Message
  ----    ------     ----  ----          -------
  Normal  Scheduled  21m   my-scheduler  Successfully assigned default/virt-launcher-vm-fedora-dpc87 to node01
[...]
6.13.8. About high availability for virtual machines
You can enable high availability for virtual machines (VMs) by configuring remediating nodes.
You can configure remediating nodes by installing the Self Node Remediation Operator or the Fence Agents Remediation Operator from the OperatorHub and enabling machine health checks or node remediation checks.
For more information on remediation, fencing, and maintaining nodes, see the Workload Availability for Red Hat OpenShift documentation.
6.13.9. Virtual machine control plane tuning
OpenShift Virtualization offers the following tuning options at the control-plane level:
- The highBurst profile, which uses fixed QPS and burst rates, to create hundreds of virtual machines (VMs) in one batch
- Migration setting adjustment based on workload type
6.13.9.1. Configuring a highBurst profile
Use the highBurst profile to create and maintain a large number of virtual machines (VMs) in one cluster.
Procedure
Apply the following patch to enable the highBurst tuning policy profile:
$ oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv \
  --type=json -p='[{"op": "add", "path": "/spec/tuningPolicy", \
  "value": "highBurst"}]'
Verification
Run the following command to verify that the highBurst tuning policy profile is enabled:
$ oc get kubevirt.kubevirt.io/kubevirt-kubevirt-hyperconverged \
  -n openshift-cnv -o go-template --template='{{range $config, \
  $value := .spec.configuration}} {{if eq $config "apiConfiguration" \
  "webhookConfiguration" "controllerConfiguration" "handlerConfiguration"}} \
  {{"\n"}} {{$config}} = {{$value}} {{end}} {{end}} {{"\n"}}'
6.14. VM disks
6.14.1. Hot-plugging VM disks
You can add or remove virtual disks without stopping your virtual machine (VM) or virtual machine instance (VMI).
Only data volumes and persistent volume claims (PVCs) can be hot plugged and hot unplugged. You cannot hot plug or hot unplug container disks.
A hot plugged disk remains attached to the VM even after reboot. You must detach the disk to remove it from the VM.
You can make a hot plugged disk persistent so that it is permanently mounted on the VM.
Each VM has a virtio-scsi controller so that hot plugged disks can use the scsi bus. The virtio-scsi controller overcomes the limitations of virtio while retaining its performance advantages. It is highly scalable and supports hot plugging over 4 million disks.
Regular virtio is not available for hot plugged disks because it is not scalable. Each virtio disk uses one of the limited PCI Express (PCIe) slots in the VM. PCIe slots are also used by other devices and must be reserved in advance. Therefore, slots might not be available on demand.
6.14.1.1. Hot plugging and hot unplugging a disk by using the web console
You can hot plug a disk by attaching it to a virtual machine (VM) while the VM is running by using the Red Hat OpenShift Service on AWS web console.
The hot plugged disk remains attached to the VM until you unplug it.
You can make a hot plugged disk persistent so that it is permanently mounted on the VM.
Prerequisites
- You must have a data volume or persistent volume claim (PVC) available for hot plugging.
Procedure
- Navigate to Virtualization → VirtualMachines in the web console.
- Select a running VM to view its details.
- On the VirtualMachine details page, click Configuration → Disks.
- Add a hot plugged disk:
  - Click Add disk.
  - In the Add disk (hot plugged) window, select the disk from the Source list and click Save.
- Optional: Unplug a hot plugged disk:
  - Click the options menu beside the disk and select Detach.
  - Click Detach.
- Optional: Make a hot plugged disk persistent:
  - Click the options menu beside the disk and select Make persistent.
  - Reboot the VM to apply the change.
6.14.1.2. Hot plugging and hot unplugging a disk by using the command line
You can hot plug and hot unplug a disk while a virtual machine (VM) is running by using the command line.
You can make a hot plugged disk persistent so that it is permanently mounted on the VM.
Prerequisites
- You must have at least one data volume or persistent volume claim (PVC) available for hot plugging.
Procedure
Hot plug a disk by running the following command:
$ virtctl addvolume <virtual-machine|virtual-machine-instance> \ --volume-name=<datavolume|PVC> \ [--persist] [--serial=<label-name>]
- Use the optional --persist flag to add the hot plugged disk to the virtual machine specification as a permanently mounted virtual disk. Stop, restart, or reboot the virtual machine to permanently mount the virtual disk. After specifying the --persist flag, you can no longer hot plug or hot unplug the virtual disk. The --persist flag applies to virtual machines, not virtual machine instances.
- The optional --serial flag allows you to add an alphanumeric string label of your choice. This helps you to identify the hot plugged disk in a guest virtual machine. If you do not specify this option, the label defaults to the name of the hot plugged data volume or PVC.
Hot unplug a disk by running the following command:
$ virtctl removevolume <virtual-machine|virtual-machine-instance> \ --volume-name=<datavolume|PVC>
6.14.2. Expanding virtual machine disks
You can increase the size of a virtual machine (VM) disk by expanding the persistent volume claim (PVC) of the disk.
If your storage provider does not support volume expansion, you can expand the available virtual storage of a VM by adding blank data volumes.
You cannot reduce the size of a VM disk.
6.14.2.1. Expanding a VM disk PVC
You can increase the size of a virtual machine (VM) disk by expanding the persistent volume claim (PVC) of the disk.
If the PVC uses the file system volume mode, the disk image file expands to the available size while reserving some space for file system overhead.
Procedure
Edit the PersistentVolumeClaim manifest of the VM disk that you want to expand:
$ oc edit pvc <pvc_name>
Update the disk size:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vm-disk-expand
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 3Gi 1
# ...
1: Specify the new disk size.
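After the storage provider completes the expansion, you could verify the new capacity; the PVC name is a placeholder:
$ oc get pvc <pvc_name> -o jsonpath='{.status.capacity.storage}'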
6.14.2.2. Expanding available virtual storage by adding blank data volumes
You can expand the available storage of a virtual machine (VM) by adding blank data volumes.
Prerequisites
- You must have at least one persistent volume.
Procedure
Create a DataVolume manifest as shown in the following example:
Example DataVolume manifest
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: blank-image-datavolume
spec:
  source:
    blank: {}
  storage:
    resources:
      requests:
        storage: <2Gi> 1
    storageClassName: "<storage_class>" 2
1: Specify the amount of storage requested for the blank data volume.
2: Optional: The storage class for the data volume. If you omit this field, the default storage class is used.
Create the data volume by running the following command:
$ oc create -f <blank-image-datavolume>.yaml
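After the data volume is ready, you can attach it to a running VM as a hot plugged disk, as described earlier in this chapter; the VM name is a placeholder:
$ virtctl addvolume <vm_name> --volume-name=blank-image-datavolume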
Additional resources for data volumes